ncastele | Hi there :) | 13:59 |
tobberydberg | Hi ncastele :-) | 13:59 |
*** witek has joined #openstack-publiccloud | 13:59 | |
tobberydberg | #startmeeting publiccloud_wg | 14:00 |
openstack | Meeting started Thu May 23 14:00:16 2019 UTC and is due to finish in 60 minutes. The chair is tobberydberg. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:00 |
*** openstack changes topic to " (Meeting topic: publiccloud_wg)" | 14:00 | |
openstack | The meeting name has been set to 'publiccloud_wg' | 14:00 |
tobberydberg | o/ | 14:00 |
witek | hi | 14:00 |
ncastele | \o/ | 14:01 |
ncastele | Lot of stuff and many different topics in the brainstorming :o | 14:02 |
tobberydberg | Hope you are all doing well! Let's wait a few minutes before we get started to see if more people are on their way in :-) | 14:02 |
tobberydberg | Yes ncastele .... a lot of them indeed :-) | 14:02 |
tobberydberg | Just to have it said, like your comments there, pretty much aligned with my personal thinking here as well | 14:03 |
tobberydberg | So, are there more people in here at this point interested in the billing topic? | 14:04 |
tobberydberg | mnaser ? | 14:04 |
mnaser | o/ :> | 14:04 |
tobberydberg | +1 | 14:04 |
tobberydberg | Don't see anyone else from the "comments list" who is online, so let's start | 14:06 |
tobberydberg | #topic 1. Joint development effort billing | 14:06 |
*** openstack changes topic to "1. Joint development effort billing (Meeting topic: publiccloud_wg)" | 14:06 | |
tobberydberg | So, the plan to get this initiative going was to start collecting some raw ideas up until this meeting | 14:07 |
tobberydberg | There are a lot of ideas and thoughts around this topic in the etherpad | 14:08 |
tobberydberg | #link https://etherpad.openstack.org/p/publiccloud-sig-billing-implementation-proposal | 14:08 |
tobberydberg | So, today the plan is to discuss these ideas and see if we can find a smart way to move forward, limit the scope to something that we all think is a good first step, etc | 14:09 |
ncastele | I see multiple topics: collect data, store data, customize, aggregate, display, price data, bill data (invoice) | 14:10 |
tobberydberg | Reading all the comments, I believe it will be really important to limit the scope first of all | 14:10 |
ncastele | +1 | 14:10 |
tobberydberg | yes, agree, and I guess we will touch a lot of them in some sense in our limitation | 14:11 |
ncastele | collecting and storing looks pretty obvious and required, imo | 14:12 |
tobberydberg | In my mind, what I would love to see as an end result would be something that can respond to an API call and return the cost for a resource/project/etc | 14:12 |
tobberydberg | that definitely includes those parts ncastele | 14:12 |
tobberydberg | Also, I would love to be able to use tools already implemented and that we can collaborate with | 14:13 |
ncastele | I could agree, but one of the issues is that if we start to think about cost, then we talk about prices, currency, taxes, catalog, etc. | 14:13 |
ncastele | (which is a bit of a pain in the ***) | 14:14 |
witek | tobberydberg: is CloudKitty fulfilling your requirements? | 14:14 |
tobberydberg | I added "Limitation of scope" at the etherpad where we can put our suggested limitation | 14:14 |
tobberydberg | ncastele yes, agree...but that would be a nice end goal for us and for end users. RAW cost ... nothing to do with taxes, nothing with discounts etc etc | 14:16 |
*** pilgrimstack has joined #openstack-publiccloud | 14:16 | |
tobberydberg | just the raw cost based on usage and a cost per unit of some kind | 14:17 |
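The raw-cost idea described here — metered usage times a configured per-unit price, with no taxes or discounts — can be sketched in a few lines; the resource names and prices below are illustrative placeholders, not anything agreed in the meeting:

```python
# Sketch of "raw cost": usage per resource multiplied by a configured
# unit price. No taxes, currencies or discounts. Resource names and
# prices are made-up placeholders.

UNIT_PRICES = {"instance.small": 0.05, "volume.gb": 0.0001}  # price per hour

def raw_cost(usage):
    """usage: mapping of resource type -> hours consumed."""
    return sum(UNIT_PRICES[r] * hours for r, hours in usage.items())

# 720 instance-hours plus 720 GB-hours of volume
cost = raw_cost({"instance.small": 720, "volume.gb": 720})
```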
tobberydberg | It might be that this is too much for the scope | 14:17 |
tobberydberg | witek No, it does not | 14:17 |
ncastele | it means that the first iteration should handle the definition of product/catalog/prices, or a way to have some kind of middleware that can call an external service to get the price/cost of a resource | 14:19 |
tobberydberg | It's been some time since I looked into it now though ... but for example it supports billing plan for flavors, that is not what we bill for today | 14:19 |
witek | I thought they can define flexible billing models, but I haven't | 14:19 |
ncastele | to be honest, in OVH we already have a platform where we define the catalog/prices/special stuff, and I don't want to synchronize this tool with some other table/database in OpenStack | 14:20 |
witek | haven't looked at it myself | 14:20 |
tobberydberg | Yea, I guess if that were to happen, some configuration of which resources to bill for, in which unit, at which price will be needed | 14:20 |
ncastele | yes | 14:21 |
tobberydberg | ncastele Well, I would like to have good support in openstack for users that would like to be able to do that | 14:21 |
ncastele | As a first iteration, I definitely think it's easier to focus on collecting/storing data, with the goal of displaying quantities | 14:22 |
tobberydberg | For me personally, I would be super happy with a good reliable backend that in real time can give me the aggregated usage | 14:22 |
ncastele | Then, from quantities/usage, we can iterate to something that can handle prices/cost | 14:22 |
tobberydberg | can agree on that as well ncastele | 14:22 |
tobberydberg | Thought, ideas from anyone else here? | 14:23 |
witek | I would recommend Monasca for collecting/storing, but I'm biased :) | 14:24 |
tobberydberg | Would ceilometer agent work as the collector? | 14:24 |
tobberydberg | can Monasca collect all events, usage etc in real time at this point? | 14:25 |
ncastele | I don't have enough knowledge on ceilometer/monasca to confirm stuff about it | 14:25 |
witek | not yet, we can collect system usage metrics, libvirt, ovs metrics | 14:26 |
ncastele | But I have strange use cases that we can challenge ceilometer/monasca with to see if they fit | 14:26 |
witek | any prometheus metrics | 14:26 |
witek | and ceilometer metrics | 14:26 |
tobberydberg | mnaser do you have insight in that? | 14:26 |
mnaser | I wanted to share something on that | 14:26 |
tobberydberg | please do | 14:27 |
mnaser | (but google is rough) | 14:27 |
tobberydberg | You will just have to write it all yourself then :-) | 14:27 |
mnaser | (trying to look for it) | 14:29 |
mnaser | its a prometheus exporter | 14:29 |
mnaser | that actually captured the project id and user id | 14:29 |
mnaser | https://github.com/zhangjianweibj/prometheus-libvirt-exporter | 14:29 |
mnaser | there | 14:29 |
mnaser | its actually pretty neat | 14:29 |
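What makes an exporter like this interesting for billing is that each sample carries the project and user IDs as labels. A minimal sketch of the Prometheus text exposition format such an exporter emits — metric and label names here are illustrative assumptions, not taken from the linked project:

```python
# Sketch: format per-instance samples in the Prometheus text exposition
# format, with OpenStack project/user IDs attached as labels.
# Metric and label names are illustrative assumptions.

def render_metric(name, labels, value):
    """Format one sample as `name{label="value",...} value`."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

line = render_metric(
    "libvirt_domain_cpu_time_seconds",
    {"project_id": "p-123", "user_id": "u-456", "domain": "instance-0001"},
    3600.0,
)
print(line)
# → libvirt_domain_cpu_time_seconds{domain="instance-0001",project_id="p-123",user_id="u-456"} 3600.0
```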
mnaser | zhangjianweibj seems to work at inspur too who contribute to openstack often | 14:30 |
ncastele | *written in Go* > Ok we can use it <3 | 14:31 |
tobberydberg | So this would be a kind of replacement for what ceilometer does today? Collecting the data? Or also the storing of data? | 14:31 |
*** gtema has joined #openstack-publiccloud | 14:32 | |
witek | mnaser: the project you've posted seems to be a one-man effort; zhangjianweibj was evaluating different projects, I've seen him contributing to Monasca as well | 14:32 |
mnaser | it is an exporter that prometheus can scrape | 14:33 |
mnaser | witek: it could be a start, that was the idea.. it's an approach | 14:33 |
mnaser | just wanted to throw it out here | 14:33 |
ncastele | it's mainly oriented for nova instances, we need to focus also on other components (volumes, snapshots, storage, etc.) | 14:35 |
ncastele | Regarding the first iteration, it could be a good idea to scope the resources/services we want to collect/store from | 14:35 |
tobberydberg | Yea, for implementations of any kind to become something official in the openstack community, they should be written in python, but the interesting part is the approach, as mnaser says | 14:35 |
mnaser | I felt it was more interesting as an overall approach | 14:36 |
mnaser | personally in my experience, we need something that is sharded | 14:36 |
mnaser | it's a catch 22 | 14:36 |
tobberydberg | saw that Linaro had one as well, as well as Canonical | 14:36 |
mnaser | machines polling their own instances is nice because it scales out .. but it all ends up in one system trying to handle saving everything, which melts things down | 14:37 |
mnaser | machines storing state on their own is not reliable | 14:37 |
ncastele | it has to be aggregated somewhere so we can easily manage/compute/display through API | 14:38 |
tobberydberg | So, not being able to use anything for the collection that is already implemented will increase the scope a bit :-) | 14:38 |
mnaser | so I think the idea here is two step: 1) collection, 2) (auto-?)aggregation | 14:38 |
mnaser | I do think we all share a common pain point | 14:39 |
tobberydberg | 1)collection and storing :-) | 14:39 |
mnaser | which is: I need to know how many seconds/minutes/hours this resource has consumed | 14:39 |
mnaser | that is *mostly* what we care about.. | 14:39 |
ncastele | +1 | 14:39 |
tobberydberg | yes. I was first thinking of using ceilometer, but implementing a smarter way of storing the data, but that might be hard to accomplish? | 14:40 |
tobberydberg | agree with that mnaser | 14:40 |
mnaser | historically openstack telemetry never gave us that | 14:40 |
ncastele | (and I also care about Swift usage but it's completely a different topic) | 14:40 |
mnaser | it was just _data_ without what we want, which is raw numbers | 14:40 |
witek | I think on collection side the best approach is for services to instrument their code themselves and expose in the standardized way | 14:41 |
mnaser | well ceilometer stores into gnocchi now which includes several stores | 14:41 |
tobberydberg | yea, WHAT to measure and collect I guess will be another topic, but definitely swift as well =) | 14:41 |
witek | they can be then scraped by prometheus, pushed to Monasca or Gnocchi | 14:41 |
mnaser | HONESTLY I'm a bit in favour of letting prometheus do the scraping | 14:41 |
mnaser | it's a big project, it's got people behind it, no need for us to try and just make things happen in ceilometer when the world has found a 'better' alternative | 14:42 |
mnaser | its also ~cLoUd nAtIvE~ | 14:42 |
witek | https://prometheus.io/docs/practices/instrumentation/ | 14:42 |
tobberydberg | Like that approach a lot | 14:42 |
witek | that's what prometheus advises about collecting best metrics | 14:42 |
tobberydberg | will it be able to scrape data for all resources that exist in OpenStack space? | 14:43 |
mnaser | now for me the issue remains: <collector> <= <prometheus> => <??? storage ???> <=> <?? exposed api ??> | 14:43 |
mnaser | the storage part is an implementation detail imho, people will decide to implement whatever they want depending on their operational experience | 14:43 |
mnaser | but the tooling to get $data => $hours_consumed .. that's the hard part which would drive the scope down | 14:44 |
witek | tobberydberg: that's the point, the exporter won't be able to collect all the data, the services are best suited to provide the valuable information, they should instrument their code | 14:45 |
ncastele | from what I understood, prometheus will collect metrics from the infrastructure, and we plan to base the usage/hours on those metrics, right? | 14:45 |
ncastele | Not sure I'm following the prometheus part well | 14:46 |
tobberydberg | My experience with prometheus is pretty limited unfortunately | 14:46 |
tobberydberg | mnaser Are you of another opinion when it comes to the ability of being able to use prometheus for collecting all the data we need? | 14:47 |
tobberydberg | (you seem to have some experience with this) | 14:47 |
mnaser | I like prometheus, it can also push data to remote stores natively | 14:47 |
ncastele | what information will prometheus bring to us regarding the usage? cpus used, ram used? | 14:48 |
witek | the exporter implements the so-called `black box` monitoring, a view from outside of the application | 14:48 |
tobberydberg | I mean, as you said witek, involving all teams to do this for us would of course be super, but I don't see that happening | 14:48 |
mnaser | https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage | 14:48 |
mnaser | imho I'd be inclined on actually having exporters live outside projects | 14:48 |
mnaser | the velocity and release cycle of openstack would kill us if we wanted to add a new thing to monitor :) | 14:49 |
witek | but it is requested by different parties on a regular basis | 14:49 |
witek | recently from Tim Bell on openstack-discuss | 14:49 |
mnaser | requested != going to happen imho :p | 14:50 |
mnaser | and again, release cycles would hurt us a lot | 14:50 |
mnaser | esp with back port policies :) | 14:50 |
tobberydberg | yea, it will probably not happen | 14:51 |
mnaser | need to get $x metered but running stein? push a patch to train and it won't be backportable to stein because it will involve adding a new feature | 14:51 |
mnaser | which breaks our stable policy | 14:51 |
witek | well, as a starting point we could collect the list of services and metrics we're interested in for billing | 14:52 |
mnaser | yeah. imho an openstack collector would be neat | 14:52 |
mnaser | auto-detects running services and gets all the info | 14:52 |
mnaser | in https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage -- is there a remote storage endpoint that's of preference for folks here.. maybe trying to see if we can all agree on something (so we can work on tooling that talks to it) | 14:53 |
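For reference, wiring Prometheus to one of those remote storage endpoints is a small `prometheus.yml` fragment; the URL below is a placeholder, not a recommendation of any particular backend:

```yaml
# prometheus.yml fragment: forward scraped samples to a remote store.
# The endpoint URL is a placeholder.
remote_write:
  - url: "http://metrics-store.example.com/api/v1/write"
```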
tobberydberg | So, time is sooooon up for today's meeting unfortunately, so it would be good to summarize a bit and figure out next steps | 14:53 |
ncastele | just want to challenge, with an example | 14:53 |
ncastele | if we base the usage on metrics | 14:53 |
ncastele | (just want to be sure to understand) | 14:53 |
ncastele | what happens when a server is suspended, for example? Does it stop sending metrics? | 14:54 |
mnaser | no, prometheus should still report it, with a state | 14:54 |
mnaser | in the tooling, we can decide to provide # of hours grouped by state | 14:55 |
mnaser | x hours active, y hours shutoff, z hours shelved | 14:55 |
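The grouping mnaser describes can be sketched as a small aggregation over timestamped state samples, under the assumption that each sampled state holds until the next sample arrives:

```python
# Sketch: turn (timestamp, state) samples into hours spent per state.
# Assumes each state holds from its timestamp until the next sample
# (or `end` for the last one); timestamps are epoch seconds.
from collections import defaultdict

def hours_by_state(samples, end):
    totals = defaultdict(float)
    ordered = sorted(samples)
    for (ts, state), (next_ts, _) in zip(ordered, ordered[1:] + [(end, None)]):
        totals[state] += (next_ts - ts) / 3600.0
    return dict(totals)

usage = hours_by_state(
    [(0, "active"), (7200, "shutoff"), (10800, "active")], end=18000
)
# → {"active": 4.0, "shutoff": 1.0}  (2h + 2h active, 1h shutoff)
```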
ncastele | Okay. I'm not aware enough of prometheus and what metrics/information it can bring to us to follow this discussion properly, will need to have a look | 14:56 |
tobberydberg | So my suggestion here: we all think about this approach and see if it would be a possible working solution, as well as thinking about which remote storage endpoint would be preferable, to see if we can come to an agreement on that | 14:56 |
tobberydberg | If we do that, should we try to schedule a new meeting before the next scheduled meeting for the group? | 14:57 |
tobberydberg | Or, is it good to have a new meeting in 2 weeks? | 14:57 |
*** damien_r has quit IRC | 14:57 | |
* mnaser feels more often is good | 14:58 | |
*** damien_r has joined #openstack-publiccloud | 14:58 | |
ncastele | as we are bootstrapping the topic, we should meet more often in the coming weeks if we want to achieve something | 14:58 |
ncastele | (especially with the summer break that is coming :( ) | 14:58 |
tobberydberg | Ok, can agree on that. | 14:58 |
tobberydberg | next wednesday at the same time? | 14:59 |
witek | wednesday or thursday? | 14:59 |
witek | oh, thursday is holiday for me | 15:00 |
tobberydberg | Preferably wednesday for me, public holiday in sweden next thursday, and I know I'm not at home... :-) | 15:00 |
tobberydberg | yea, guess that goes for most of europe | 15:01 |
ncastele | would love to have it Tuesday as wednesday/thursday is holidays, but I can find a way to be there if it's on wednesday | 15:01 |
tobberydberg | mnaser ... tuesday wednesday ? | 15:01 |
mnaser | I will be at CERN next week | 15:02 |
mnaser | but after that it should be okay | 15:02 |
tobberydberg | Please put some thought around this area until next time | 15:02 |
tobberydberg | Lets say Tuesday 1400 UTC next week, after that we use the timeslot we have for this meeting in two weeks | 15:03 |
witek | thanks tobberydberg | 15:03 |
tobberydberg | Thanks a lot for today folks! See you all next week, and have fun at CERN mnaser :-) | 15:04 |
mnaser | thanks for hosting tobberydberg | 15:04 |
mnaser | thaaanks | 15:04 |
ncastele | thanks everyone :) | 15:04 |
tobberydberg | #endmeeting | 15:05 |
*** openstack changes topic to "New meeting time!! Thursday odd weeks at 1400 UTC in this channel!!" | 15:05 | |
openstack | Meeting ended Thu May 23 15:05:16 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:05 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/publiccloud_wg/2019/publiccloud_wg.2019-05-23-14.00.html | 15:05 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/publiccloud_wg/2019/publiccloud_wg.2019-05-23-14.00.txt | 15:05 |
openstack | Log: http://eavesdrop.openstack.org/meetings/publiccloud_wg/2019/publiccloud_wg.2019-05-23-14.00.log.html | 15:05 |
tobberydberg | I will send out a reminder for that meeting on monday! | 15:06 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!