Thursday, 2019-05-23

00:08 *** jamesmcarthur has joined #openstack-publiccloud
00:18 *** jamesmcarthur has quit IRC
00:19 *** jamesmcarthur has joined #openstack-publiccloud
00:25 *** jamesmcarthur has quit IRC
00:52 *** jamesmcarthur has joined #openstack-publiccloud
00:58 *** jamesmcarthur has quit IRC
01:28 *** jamesmcarthur has joined #openstack-publiccloud
01:33 *** jamesmcarthur has quit IRC
01:36 *** ricolin has joined #openstack-publiccloud
01:45 *** jamesmcarthur has joined #openstack-publiccloud
01:49 *** jamesmcarthur has quit IRC
02:12 *** jamesmcarthur has joined #openstack-publiccloud
02:17 *** jamesmcarthur has quit IRC
02:26 *** irclogbot_2 has quit IRC
02:29 *** irclogbot_0 has joined #openstack-publiccloud
02:35 *** jamesmcarthur has joined #openstack-publiccloud
03:05 *** jamesmcarthur has quit IRC
03:26 *** jamesmcarthur has joined #openstack-publiccloud
03:48 *** jamesmcarthur has quit IRC
03:56 *** jamesmcarthur has joined #openstack-publiccloud
04:13 *** jamesmcarthur has quit IRC
04:40 *** jamesmcarthur has joined #openstack-publiccloud
04:51 *** jamesmcarthur has quit IRC
04:57 *** jamesmcarthur has joined #openstack-publiccloud
05:04 *** jamesmcarthur has quit IRC
05:12 *** jamesmcarthur has joined #openstack-publiccloud
05:57 *** jamesmcarthur has quit IRC
05:58 *** jamesmcarthur has joined #openstack-publiccloud
06:54 *** jamesmcarthur has quit IRC
07:48 *** gtema has joined #openstack-publiccloud
08:04 *** trident has quit IRC
08:05 *** trident has joined #openstack-publiccloud
08:51 *** jamesmcarthur has joined #openstack-publiccloud
08:55 *** jamesmcarthur has quit IRC
09:04 *** jamesmcarthur has joined #openstack-publiccloud
09:12 *** damien_r has joined #openstack-publiccloud
09:13 *** damien_r has quit IRC
09:13 *** damien_r has joined #openstack-publiccloud
09:15 *** gtema has quit IRC
09:15 *** ricolin has quit IRC
09:16 *** gtema has joined #openstack-publiccloud
09:44 *** jamesmcarthur has quit IRC
09:45 *** jamesmcarthur has joined #openstack-publiccloud
09:58 *** jamesmcarthur has quit IRC
10:10 *** jamesmcarthur has joined #openstack-publiccloud
10:16 *** jamesmcarthur has quit IRC
10:21 *** jamesmcarthur has joined #openstack-publiccloud
10:26 *** jamesmcarthur has quit IRC
10:39 *** jamesmcarthur has joined #openstack-publiccloud
10:43 *** jamesmcarthur has quit IRC
10:48 *** jamesmcarthur has joined #openstack-publiccloud
10:52 *** jamesmcarthur has quit IRC
10:57 *** gtema has quit IRC
10:58 *** gtema has joined #openstack-publiccloud
11:16 *** jamesmcarthur has joined #openstack-publiccloud
11:53 *** hberaud has joined #openstack-publiccloud
12:17 *** gtema has quit IRC
12:18 *** jamesmcarthur has quit IRC
12:18 *** jamesmcarthur has joined #openstack-publiccloud
12:30 *** jamesmcarthur has quit IRC
12:47 *** gtema has joined #openstack-publiccloud
12:52 *** jamesmcarthur has joined #openstack-publiccloud
13:02 *** ncastele has joined #openstack-publiccloud
13:05 *** jamesmcarthur has quit IRC
13:27 *** gtema has quit IRC
13:45 *** ncastele has quit IRC
13:45 *** ncastele has joined #openstack-publiccloud
13:49 *** ricolin has joined #openstack-publiccloud
13:59 <ncastele> Hi there :)
13:59 <tobberydberg> Hi ncastele :-)
13:59 *** witek has joined #openstack-publiccloud
14:00 <tobberydberg> #startmeeting publiccloud_wg
14:00 <openstack> Meeting started Thu May 23 14:00:16 2019 UTC and is due to finish in 60 minutes.  The chair is tobberydberg. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 *** openstack changes topic to " (Meeting topic: publiccloud_wg)"
14:00 <openstack> The meeting name has been set to 'publiccloud_wg'
14:00 <tobberydberg> o/
14:00 <witek> hi
14:01 <ncastele> \o/
14:02 <ncastele> Lots of stuff and many different topics in the brainstorming :o
14:02 <tobberydberg> Hope you are all doing well! Let's wait a few minutes before we get started, to see if more people are on their way in :-)
14:02 <tobberydberg> Yes ncastele ... a lot of them indeed :-)
14:03 <tobberydberg> Just to have it said, I like your comments there; they're pretty much aligned with my personal thinking here as well
14:04 <tobberydberg> So, are there more people in here at this point interested in the billing topic?
14:04 <tobberydberg> mnaser ?
14:04 <mnaser> o/ :>
14:04 <tobberydberg> +1
14:06 <tobberydberg> I don't see anyone else from the "comments list" online, so let's start
14:06 <tobberydberg> #topic 1. Joint development effort billing
14:06 *** openstack changes topic to "1. Joint development effort billing (Meeting topic: publiccloud_wg)"
14:07 <tobberydberg> So, the plan for getting this initiative going was to collect some raw ideas before this meeting
14:08 <tobberydberg> There are a lot of ideas and thoughts around this topic in the etherpad
14:08 <tobberydberg> #link https://etherpad.openstack.org/p/publiccloud-sig-billing-implementation-proposal
14:09 <tobberydberg> So, today the plan is to discuss these ideas and see if we can find a smart way to move forward, limiting the scope to something we all think is a good first step, etc.
14:10 <ncastele> I see multiple topics: collect data, store data, customize, aggregate, display, price data, bill data (invoice)
14:10 <tobberydberg> Reading all the comments, I believe it will be really important to limit the scope first of all
14:10 <ncastele> +1
14:11 <tobberydberg> yes, agree, and I guess we will touch on a lot of them in some sense within whatever scope we pick
14:12 <ncastele> collecting and storing look pretty obvious and required, imo
14:12 <tobberydberg> In my mind, what I would love to see as an end result would be something that can respond to an API call and return the cost for a resource/project/etc
14:12 <tobberydberg> that definitely includes those parts ncastele
14:13 <tobberydberg> Also, I would love to be able to use tools that are already implemented and that we can collaborate with
14:13 <ncastele> I could agree, but one of the issues is that if we start to think about cost, then we talk about prices, currency, taxes, catalog, etc.
14:14 <ncastele> (which is a bit of a pain in the ***)
14:14 <witek> tobberydberg: does CloudKitty fulfil your requirements?
14:14 <tobberydberg> I added "Limitation of scope" to the etherpad, where we can put our suggested limitations
14:16 <tobberydberg> ncastele yes, agree... but that would be a nice end goal for us and for end users. Raw cost... nothing to do with taxes, nothing with discounts etc etc
14:16 *** pilgrimstack has joined #openstack-publiccloud
14:17 <tobberydberg> just the raw cost based on usage and a cost per unit of some kind
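[Editor's note] The "raw cost" idea discussed here (usage quantities times a per-unit price, with no taxes, discounts, or currency handling) can be sketched in a few lines. This is a minimal illustration, not anything proposed in the meeting; all metric names and prices below are invented.

```python
# Minimal sketch of "raw cost": aggregated usage multiplied by a per-unit
# price. No taxes, discounts, or currency handling, matching the limited
# scope discussed above. Metric names and prices are hypothetical.

# Hypothetical per-unit prices (per flavor-hour, per GB-hour of volume).
UNIT_PRICES = {
    "instance.m1.small.hours": 0.05,
    "volume.gb_hours": 0.0002,
}

def raw_cost(usage: dict) -> float:
    """Return the raw cost for aggregated usage {metric: quantity}."""
    return sum(qty * UNIT_PRICES[metric] for metric, qty in usage.items())

# One month of a small instance plus a 50 GB volume.
usage = {"instance.m1.small.hours": 720, "volume.gb_hours": 720 * 50}
print(round(raw_cost(usage), 2))  # → 43.2
```

Everything contentious (catalog, currency, taxes) stays outside this function, which is the scope limitation ncastele and tobberydberg converge on below.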
14:17 <tobberydberg> It might be that that is too much for the scope
14:17 <tobberydberg> witek No, it does not
14:19 <ncastele> it means that the first iteration should handle the definition of product/catalog/prices, or a way to have some kind of middleware that can call an external service to get the price/cost of a resource
14:19 <tobberydberg> It's been some time since I looked into it now though... but for example it supports billing plans per flavor, which is not what we bill for today
14:19 <witek> I thought they can define flexible billing models, but I haven
14:20 <ncastele> to be honest, at OVH we already have a platform where we define the catalog/prices/special stuff, and I don't want to synchronize this tool with some other table/database in OpenStack
14:20 <witek> haven't looked at it myself
14:20 <tobberydberg> Yeah, I guess if that were to happen, some configuration of which resources to bill for, in which unit, at which price, would be needed
14:21 <ncastele> yes
14:21 <tobberydberg> ncastele Well, I would like OpenStack to have good support for users that would like to be able to do that
14:22 <ncastele> As a first iteration, I definitely think it's easier to focus on collecting/storing data, with the goal of displaying quantities
14:22 <tobberydberg> For me personally, I would be super happy with a good reliable backend that can give me the aggregated usage in real time
14:22 <ncastele> Then, from quantities/usage, we can iterate to something that can handle prices/cost
14:22 <tobberydberg> I can agree with that as well ncastele
14:23 <tobberydberg> Thoughts, ideas from anyone else here?
14:24 <witek> I would recommend Monasca for collecting/storing, but I'm biased :)
14:24 <tobberydberg> Would the ceilometer agent work as the collector?
14:25 <tobberydberg> Can Monasca collect all events, usage etc in real time at this point?
14:25 <ncastele> I don't have enough knowledge of ceilometer/monasca to confirm anything about them
14:26 <witek> not yet, we can collect system usage metrics, libvirt, ovs metrics
14:26 <ncastele> But I have some unusual use cases that we can challenge ceilometer/monasca with, to see if they fit
14:26 <witek> any prometheus metrics
14:26 <witek> and ceilometer metrics
14:26 <tobberydberg> mnaser do you have insight into that?
14:26 <mnaser> I wanted to share something on that
14:27 <tobberydberg> please do
14:27 <mnaser> (but google is rough)
14:27 <tobberydberg> You will just have to write it all yourself then :-)
14:29 <mnaser> (trying to look for it)
14:29 <mnaser> it's a prometheus exporter
14:29 <mnaser> that actually captures the project id and user id
14:29 <mnaser> https://github.com/zhangjianweibj/prometheus-libvirt-exporter
14:29 <mnaser> there
14:29 <mnaser> it's actually pretty neat
14:30 <mnaser> zhangjianweibj seems to work at inspur too, who contribute to openstack often
14:31 <ncastele> *written in Go* > Ok we can use it <3
14:31 <tobberydberg> So this would be a kind of replacement for what ceilometer does today? Collecting the data? Or also the storing of data?
14:32 *** gtema has joined #openstack-publiccloud
14:32 <witek> mnaser: the project you've posted seems to be a one-man effort, zhangjianweibj was evaluating different projects, I've seen him contributing to Monasca as well
14:33 <mnaser> it is an exporter that prometheus can scrape
14:33 <mnaser> witek: it could be a start, that was the idea.. it's an approach
14:33 <mnaser> just wanted to throw it out here
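[Editor's note] The point of the exporter mnaser links is that each libvirt sample carries project and user IDs as Prometheus labels, which makes per-tenant aggregation trivial. A minimal sketch of consuming such labelled samples follows; the metric name, label names, and values are invented for illustration and may differ from what that exporter actually emits.

```python
import re

# Hypothetical scrape output in the Prometheus text exposition format,
# resembling what a libvirt exporter that labels samples with project/user
# IDs might produce. Names and values here are invented.
SCRAPE_TEXT = """\
libvirt_domain_cpu_time_seconds{domain="instance-0001",project_id="p1",user_id="u1"} 1234.5
libvirt_domain_cpu_time_seconds{domain="instance-0002",project_id="p1",user_id="u2"} 42.0
libvirt_domain_cpu_time_seconds{domain="instance-0003",project_id="p2",user_id="u3"} 7.5
"""

LINE_RE = re.compile(r'^(\w+)\{([^}]*)\}\s+([\d.]+)$')

def cpu_seconds_by_project(text: str) -> dict:
    """Sum sample values per project_id label."""
    totals: dict = {}
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue  # skip comments/blank lines
        labels = dict(re.findall(r'(\w+)="([^"]*)"', m.group(2)))
        project = labels.get("project_id", "unknown")
        totals[project] = totals.get(project, 0.0) + float(m.group(3))
    return totals

print(cpu_seconds_by_project(SCRAPE_TEXT))  # → {'p1': 1276.5, 'p2': 7.5}
```

In practice Prometheus itself would do this grouping (e.g. `sum by (project_id) (...)` in PromQL); the sketch just shows why tenant labels on the samples matter for billing.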
14:35 <ncastele> it's mainly oriented towards nova instances, we need to focus also on other components (volumes, snapshots, storage, etc.)
14:35 <ncastele> Regarding the first iteration, it could be a good idea to scope the resources/services we want to collect/store from
14:35 <tobberydberg> Yeah, for implementations of any kind to become something official in the openstack community, they should be written in python, but the interesting part is the approach, as mnaser says
14:36 <mnaser> I felt it was more interesting as an overall approach
14:36 <mnaser> personally, in my experience, we need something that is sharded
14:36 <mnaser> it's a catch-22
14:36 <tobberydberg> saw that Linaro had one as well, as did Canonical
14:37 <mnaser> machines polling their own instances is nice because it scales out.. but it all ends up in one system trying to handle saving it all, which melts things down
14:37 <mnaser> machines storing state on their own is not reliable
14:38 <ncastele> it has to be aggregated somewhere so we can easily manage/compute/display it through an API
14:38 <tobberydberg> So, not being able to use anything already implemented for the collection will increase the scope a bit :-)
14:38 <mnaser> so I think the idea here is two steps: 1) collection, 2) (auto-?)aggregation
14:39 <mnaser> I do think we all share a common pain point
14:39 <tobberydberg> 1) collection and storing :-)
14:39 <mnaser> which is: I need to know how many seconds/minutes/hours this resource has consumed
14:39 <mnaser> that is *mostly* what we care about..
14:39 <ncastele> +1
14:40 <tobberydberg> yes. I was first thinking of using ceilometer but implementing a smarter way of storing the data, but that might be hard to accomplish?
14:40 <tobberydberg> agree with that mnaser
14:40 <mnaser> historically openstack telemetry never gave us that
14:40 <ncastele> (and I also care about Swift usage, but that's a completely different topic)
14:40 <mnaser> it was just _data_, without what we want, which is raw numbers
14:41 <witek> I think on the collection side the best approach is for services to instrument their code themselves and expose it in a standardized way
14:41 <mnaser> well, ceilometer stores into gnocchi now, which supports several stores
14:41 <tobberydberg> yeah, WHAT to measure and collect I guess will be another topic, but definitely swift as well =)
14:41 <witek> they can then be scraped by prometheus, or pushed to Monasca or Gnocchi
14:41 <mnaser> HONESTLY I'm a bit in favour of letting prometheus do the scraping
14:42 <mnaser> it's a big project, it's got people behind it, no need for us to try and make things happen in ceilometer when the world has found a 'better' alternative
14:42 <mnaser> it's also ~cLoUd nAtIvE~
14:42 <witek> https://prometheus.io/docs/practices/instrumentation/
14:42 <tobberydberg> I like that approach a lot
14:42 <witek> that's what prometheus advises about collecting good metrics
14:43 <tobberydberg> will it be able to scrape data for all resources that exist in OpenStack space?
14:43 <mnaser> now for me the issue remains: <collector> <= <prometheus> => <??? storage ???> <=> <?? exposed api ??>
14:43 <mnaser> the storage part is an implementation detail imho, people will implement whatever they want depending on their operational experience
14:44 <mnaser> but the tooling to get $data => $hours_consumed .. that's the hard part, which would drive the scope down
14:45 <witek> tobberydberg: that's the point, the exporter won't be able to collect all the data, the services are best suited to provide the valuable information, they should instrument their code
14:45 <ncastele> from what I understood, prometheus will collect metrics from the infrastructure, and we plan to base the usage/hours on those metrics, right?
14:46 <ncastele> Not sure I'm following the prometheus part well
14:46 <tobberydberg> My experience with prometheus is pretty limited, unfortunately
14:47 <tobberydberg> mnaser Are you of another opinion when it comes to the ability to use prometheus for collecting all the data we need?
14:47 <tobberydberg> (you seem to have some experience with this)
14:47 <mnaser> I like prometheus, it can also push data to remote stores natively
14:48 <ncastele> what information will prometheus bring us regarding usage? cpus used, ram used?
14:48 <witek> an exporter implements so-called `black box` monitoring, a view from outside the application
14:48 <tobberydberg> I mean, as you said witek, involving all teams to do this for us would of course be super, but I don't see that happening
14:48 <mnaser> https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage
14:48 <mnaser> imho I'd be inclined to have exporters live outside the projects
14:49 <mnaser> the velocity and release cycle of openstack would kill us if we wanted to add a new thing to monitor :)
14:49 <witek> but it is requested from different parties on a regular basis
14:49 <witek> recently from Tim Bell on openstack-discuss
14:50 <mnaser> requested != going to happen imho :p
14:50 <mnaser> and again, release cycles would hurt us a lot
14:50 <mnaser> esp with backport policies :)
14:51 <tobberydberg> yeah, it will probably not happen
14:51 <mnaser> need to get $x metered but running stein? push a patch to train and it won't be backportable to stein, because it would involve adding a new feature
14:51 <mnaser> which breaks our stable policy
14:52 <witek> well, as a starting point we could collect the list of services and metrics we're interested in for billing
14:52 <mnaser> yeah. imho an openstack collector would be neat
14:52 <mnaser> auto-detects running services and gets all the info
14:53 <mnaser> in https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage -- is there a remote storage endpoint that's of preference for folks here.. maybe trying to see if we can all agree on something (so we can work on tooling that talks to it)
14:53 <tobberydberg> So, time is sooooon up for today's meeting unfortunately, so it would be good to summarize a bit and figure out next steps
14:53 <ncastele> just want to challenge this with an example
14:53 <ncastele> if we base the usage on metrics
14:53 <ncastele> (just want to be sure I understand)
14:54 <ncastele> what happens when a server is suspended, for example? Does it stop sending metrics?
14:54 <mnaser> no, prometheus should still report it, with a state
14:55 <mnaser> in the tooling, we can decide to provide # of hours grouped by state
14:55 <mnaser> x hours active, y hours shutoff, z hours shelved
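[Editor's note] mnaser's "hours grouped by state" can be sketched directly: given periodic (timestamp, state) samples for one instance, as a metrics pipeline would collect, attribute the interval between consecutive samples to the earlier sample's state. This is an illustration only; the states and timestamps below are invented, and real tooling would have to handle gaps and counter resets.

```python
from collections import defaultdict
from datetime import datetime

# Periodic state samples for one (hypothetical) instance over 24 hours.
samples = [
    (datetime(2019, 5, 23, 0, 0), "active"),
    (datetime(2019, 5, 23, 6, 0), "active"),
    (datetime(2019, 5, 23, 12, 0), "shutoff"),
    (datetime(2019, 5, 23, 18, 0), "shutoff"),
    (datetime(2019, 5, 24, 0, 0), "active"),
]

def hours_by_state(samples):
    """Attribute each inter-sample interval to the earlier sample's state."""
    totals = defaultdict(float)
    for (t0, state), (t1, _) in zip(samples, samples[1:]):
        totals[state] += (t1 - t0).total_seconds() / 3600.0
    return dict(totals)

print(hours_by_state(samples))  # → {'active': 12.0, 'shutoff': 12.0}
```

The output is exactly the "x hours active, y hours shutoff" shape described above, which a billing layer could then price per state.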
14:56 <ncastele> Okay. I'm not aware enough of prometheus and what metrics/information it can bring us to follow this discussion properly, I will need to have a look
14:56 <tobberydberg> So my suggestion here: we all think about this approach and see whether it would be a possible working solution, as well as thinking about what remote storage endpoint would be preferable, to see if we can come to an agreement on that
14:57 <tobberydberg> If we do that, should we try to schedule a new meeting before the group's next scheduled meeting?
14:57 <tobberydberg> Or is it fine to have the next meeting in 2 weeks?
14:57 *** damien_r has quit IRC
14:58 * mnaser feels more often is good
14:58 *** damien_r has joined #openstack-publiccloud
14:58 <ncastele> as we are bootstrapping the topic, we should meet more often in the coming weeks if we want to achieve something
14:58 <ncastele> (especially with the summer break that is coming :( )
14:58 <tobberydberg> Ok, I can agree with that.
14:59 <tobberydberg> next wednesday at the same time?
14:59 <witek> wednesday or thursday?
15:00 <witek> oh, thursday is a holiday for me
15:00 <tobberydberg> Preferably wednesday for me; it's a public holiday in sweden next thursday, and I know I'm not at home... :-)
15:01 <tobberydberg> yeah, I guess that goes for most of europe
15:01 <ncastele> would love to have it Tuesday, as wednesday/thursday are holidays, but I can find a way to be there if it's on wednesday
15:01 <tobberydberg> mnaser ... tuesday? wednesday?
15:02 <mnaser> I will be at CERN next week
15:02 <mnaser> but after that it should be okay
15:02 <tobberydberg> Please put some thought into this area until next time
15:03 <tobberydberg> Let's say Tuesday 1400 UTC next week; after that we use the timeslot we have for this meeting in two weeks
15:03 <witek> thanks tobberydberg
15:04 <tobberydberg> Thanks a lot for today folks! See you all next week, and have fun at CERN mnaser :-)
15:04 <mnaser> thanks for hosting tobberydberg
15:04 <mnaser> thaaanks
15:04 <ncastele> thanks everyone :)
15:05 <tobberydberg> #endmeeting
15:05 *** openstack changes topic to "New meeting time!! Thursday odd weeks at 1400 UTC in this channel!!"
15:05 <openstack> Meeting ended Thu May 23 15:05:16 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:05 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/publiccloud_wg/2019/publiccloud_wg.2019-05-23-14.00.html
15:05 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/publiccloud_wg/2019/publiccloud_wg.2019-05-23-14.00.txt
15:05 <openstack> Log:            http://eavesdrop.openstack.org/meetings/publiccloud_wg/2019/publiccloud_wg.2019-05-23-14.00.log.html
15:06 <tobberydberg> I will send out a reminder for that meeting on monday!
15:21 *** jamesmcarthur has joined #openstack-publiccloud
15:32 *** jamesmcarthur_ has joined #openstack-publiccloud
15:36 *** jamesmcarthur has quit IRC
15:37 *** ncastele has quit IRC
16:13 *** gtema has quit IRC
16:24 *** witek has quit IRC
16:36 *** damien_r has quit IRC
16:55 *** ricolin has quit IRC
16:58 *** hberaud is now known as hberaud|gone
17:03 *** pilgrimstack has quit IRC
18:54 *** pilgrimstack has joined #openstack-publiccloud
19:02 *** pilgrimstack has quit IRC
20:16 *** jamesmcarthur_ has quit IRC
23:51 *** trident has quit IRC
23:53 *** trident has joined #openstack-publiccloud

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!