Thursday, 2019-06-06

02:41 *** jamesmcarthur has joined #openstack-publiccloud
03:08 *** jamesmcarthur has quit IRC
06:58 *** gtema has joined #openstack-publiccloud
07:03 *** hberaud|gone is now known as hberaud
07:34 *** witek has joined #openstack-publiccloud
10:00 *** hberaud is now known as hberaud|lunch
10:42 *** gtema_ has joined #openstack-publiccloud
10:42 *** gtema has quit IRC
11:08 *** hberaud|lunch is now known as hberaud
12:05 *** gtema_ has quit IRC
12:21 *** trident has quit IRC
12:26 *** trident has joined #openstack-publiccloud
12:27 *** gtema_ has joined #openstack-publiccloud
13:32 *** gtema_ has quit IRC
13:49 *** ncastele has joined #openstack-publiccloud
13:57 *** ricolin has joined #openstack-publiccloud
14:00 <tobberydberg> o/
14:02 <ncastele> hi there
14:05 <tobberydberg> Hi ncastele
14:05 <tobberydberg> I wonder if all the rest are on an early summer vacation =)
14:05 *** jamesmcarthur has joined #openstack-publiccloud
14:05 <ncastele> Summer vacation, in the beginning of June? :o
14:06 <ncastele> Too early, I'm not ready yet to see people on vacation :D
14:06 <tobberydberg> National holiday in Sweden today, but not many Swedes in these meetings anyway ;-)
14:06 <tobberydberg> Hehehe... we have had like 28 degrees and sun the last couple of days ... hot as hell for Sweden
14:07 <tobberydberg> In a lot of ways I'm ready for vacation ;-)
14:07 <ncastele> Two days of storm and rain in France. Can we switch?
14:07 <tobberydberg> Hahaha, shit ... well, not sure, I think I'll keep our weather for a few more days to be honest ;-)
14:08 *** wondra has joined #openstack-publiccloud
14:08 <tobberydberg> So, is anyone else here for today's meeting? mnaser?
14:09 <mnaser> i will jump in and out, tc meeting right now (once a month)
14:09 <tobberydberg> wondra? zhipeng?
14:09 <wondra> Hi!
14:09 <tobberydberg> ok ok, totally fine of course ... just would like to have a few people here before kicking it off
14:10 <tobberydberg> welcome wondra
14:10 <mnaser> i'll def be in and out
14:10 <tobberydberg> So, let's start then
14:10 <tobberydberg> #startmeeting publiccloud_wg
14:10 <openstack> Meeting started Thu Jun  6 14:10:45 2019 UTC and is due to finish in 60 minutes.  The chair is tobberydberg. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:10 *** openstack changes topic to " (Meeting topic: publiccloud_wg)"
14:10 <openstack> The meeting name has been set to 'publiccloud_wg'
14:11 <tobberydberg> Simple agenda this time
14:11 <ncastele> Yep
14:11 <tobberydberg> #topic 1. Joint development effort billing
14:11 *** openstack changes topic to "1. Joint development effort billing (Meeting topic: publiccloud_wg)"
14:12 <tobberydberg> notes from last meeting are here: #link http://eavesdrop.openstack.org/meetings/publiccloud_wg/2019/publiccloud_wg.2019-05-28-14.05.log.html
14:12 <ncastele> homework has been done in the etherpad, and I'm quite happy we both wrote the same stuff
14:12 <ncastele> https://etherpad.openstack.org/p/publiccloud-sig-billing-implementation-proposal
14:13 <tobberydberg> Yes, some parts are there for sure ... there might be more ... it would be good if more people filled in their needs as well, so we can get a full list of the needs
14:13 <ncastele> +1
14:13 <tobberydberg> But yes, pretty much the same
14:14 <tobberydberg> I guess we both feel that we need something that fires events as well as continuous reports ... potentially a scraping method
14:16 *** gtema_ has joined #openstack-publiccloud
14:16 <ncastele> Scraping is a way to go, but it can impact control plane performance
14:17 <tobberydberg> What I was thinking is that we basically need a tool or a set of tools that can handle all forms of metrics: not only the metrics for the resources we bill for, but also things that are for general information/statistics purposes
14:17 <tobberydberg> well ... depends on how that scraping is done, I would assume
14:17 <ncastele> what do you mean by things for general information?
14:18 <tobberydberg> could be security groups and rules, for instance
14:18 <tobberydberg> SOME might bill for that (we do not), but it would still be of interest for me to see the usage of that
14:19 <tobberydberg> Hard to track such things today, since neutron doesn't store anything deleted in the database
14:19 <ncastele> Yes
14:19 <tobberydberg> barbican stuff is another thing
14:19 <wondra> What do you mean by scraping?
14:19 <wondra> API requests?
14:19 <ncastele> is there no retention at all in the neutron db?
14:20 <ncastele> API requests, or nice SQL queries on the database
14:20 <wondra> Nice? :-)
14:20 <tobberydberg> not that I know of, no (on the retention)
14:20 <wondra> What about the usage of ceilometer events, as I suggested? That is a log of the history in itself.
14:21 <tobberydberg> wondra I guess you can do scraping in different forms ... one can be db queries, you can also scrape each compute node for its resources ...
14:21 <ncastele> I'm not enough into ceilometer to be sure it can answer all our needs
14:22 <tobberydberg> From what I hear from people, the ceilometer agents are pretty stable in that sense, but still, people do not fully rely on it anyway
14:22 <wondra> Our billing is based on that. We did it back in Kilo and will have to rework it due to an API change for Ocata.
14:23 <tobberydberg> usually having something else as well and then comparing the results
14:23 <wondra> You get events about every entity along with details, like the owner, size, etc.
14:23 <tobberydberg> yes, which is good
14:23 <ncastele> Is it easy to enhance the content of an event with custom information depending on the service?
14:24 <tobberydberg> the storage of that data and the possibility to query it are hard to work with, though
14:24 <wondra> Dunno. But you can choose what you store from the notification which is being sent by the particular OpenStack project in the events pipeline.yaml
14:24 <wondra> https://docs.openstack.org/ceilometer/ocata/events.html
14:25 <wondra> Querying it would be done with the Panko API, which we haven't learned yet. We're still on the old Ceilometer one. We basically query it every day at midnight for every tenant and compute the usage.
14:26 <tobberydberg> It's been a little too long since I looked into all the side projects around ceilometer, but listening to people it seems not to work very well
14:26 *** hberaud is now known as hberaud|school-r
14:26 <wondra> How does CloudKitty work anyway? Does it query Panko or Gnocchi?
14:27 <tobberydberg> gnocchi being outside of openstack makes it worse
14:27 <tobberydberg> think it uses gnocchi
14:27 <witek> they don't rely on a specific DB from what I know
14:27 <ncastele> Cloudkitty offers drivers for prometheus/ceilometer/gnocchi
14:27 <witek> they can use Gnocchi, Monasca, Prometheus
14:28 <tobberydberg> ok ok
14:28 <witek> :)
14:29 <tobberydberg> are you folks basing your billing today on all native openstack services?
14:30 <tobberydberg> we do not, nowadays not at all using openstack telemetry services for that (unfortunately)
14:30 <ncastele> for the collecting part, yes, we are relying on ceilometer (in a heartbeat way, not in an event way)
14:30 <ncastele> for instances
14:31 <ncastele> For volumes, we have some sql queries that push usage data into ceilometer
14:31 <tobberydberg> ok, so ceilometer doesn't work for that?
14:31 <ncastele> but it's a scraping way (scraping hypervisors, scraping our ceph clusters, etc.)
14:32 <ncastele> ceilometer does the work because we scrape every 5 minutes, but we bill by the hour, meaning we accept that we can lose some points
14:32 <mnaser> oh i didn't know cloudkitty can use prometheus TIL
14:33 <tobberydberg> ok. I would say that we need something that is down at the seconds level of usage for all types of resources
14:33 <mnaser> i think if we forget scraping and bring in evented monitoring, we don't have to track things per second anymore, because you can easily introspect "what time did it start and end"
14:33 <ncastele> but when we come to per-second billing, it's not possible for us to use the actual system, because for this precision we need to base it on events/on precise dates and times
14:34 <ncastele> +1 mnaser
14:34 <tobberydberg> mnaser totally agree
14:34 <mnaser> then it's $bill_in_whatever_increment_you_want
14:34 <tobberydberg> yes
14:35 <witek> seconds meaning 1s, 10s or 30s?
14:35 <tobberydberg> do we need the scraping for other purposes?
14:35 <tobberydberg> 1s
14:35 <ncastele> seconds meaning 1s :D
14:35 <tobberydberg> or even less than seconds, event based will give you that
14:36 <wondra> I believe that we do it for Floating IPs. Cinder is fine with cinder-volume-audit in cron, but Neutron does not have that.
14:36 <witek> the actual question is whether such fine resolution is really useful for billing
14:36 <ncastele> it depends: if we trust our event based collecting 100%, then we do not need scraping. But scraping can consolidate the event based process
14:36 <tobberydberg> I would say CPU load etc. will need scraping if we would like to cover those bits as well
14:36 *** hberaud|school-r is now known as hberaud
14:37 <wondra> To clarify - most openstack components issue the .exists event, which allows you to find entities that you missed the .start notifications for. Floating IPs do not have it.
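[Editor's note: the reconciliation wondra describes — using periodic .exists audit events to catch entities whose .start notification was lost — boils down to a set difference. A minimal sketch, with made-up resource ids and plain Python sets standing in for the event store:]

```python
# Resources reported by periodic .exists audit notifications
# (e.g. the daily instance.exists events) -- ids are illustrative.
exists_ids = {"inst-a", "inst-b", "inst-c"}

# Resources for which we actually recorded a .start (create) event.
started_ids = {"inst-a", "inst-c"}

# Anything audited as existing but never seen starting was missed;
# it should be backfilled into the billing records.
missed = exists_ids - started_ids
print(sorted(missed))
```

Floating IPs, as noted above, emit no .exists event, so this safety net does not apply to them.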
14:38 <tobberydberg> so same question again: if events are enough, is ceilometer reliable to use in its current shape and form?
14:39 <wondra> Billing by CPU load - the white unicorn of public clouds?
14:39 <tobberydberg> (in just collecting the "metrics")
14:39 <wondra> Dunno. I'm 4 releases behind.
14:39 <ncastele> Dunno either
14:39 <tobberydberg> mnaser what is your feeling there?
14:39 <wondra> Eh, actually 6. Damn.
14:40 <tobberydberg> wondra I was thinking just to have the same source of data for stuff that is interesting to visualise for users ... not necessarily bill for it =)
14:41 <tobberydberg> wondra just go directly to train this fall ;-)
14:41 <wondra> Having visualisations in our customer portal would be nice. I've got example code from a bachelor's thesis for Gnocchi.
14:42 <tobberydberg> +1
14:42 <wondra> With our main product being VPS, we don't need it in Horizon.
14:43 <mnaser> i think it's probably easier to start building the structure for the billable stuff
14:43 <tobberydberg> The biggest issue that I see here moving forward will be the storage and real-time query bits
14:43 <mnaser> imho
14:43 <mnaser> well, we think of things as a resource, that's what ceilometer did at the time: resource X (could be a cinder uuid) which starts at created_at $x and ends at deleted_at $y (or none)
14:44 <mnaser> and then we have a tool that says "query how many seconds has resource X been used in the period A => B" and it can do all the math
14:44 <mnaser> i have a lot of that code
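[Editor's note: the math mnaser sketches — clamping a resource's [created_at, deleted_at) lifetime to a billing period [A, B) — is compact enough to illustrate. The function below is a rough reconstruction of the idea, not mnaser's actual code; all names and timestamps are illustrative:]

```python
from datetime import datetime

def billable_seconds(created_at, deleted_at, period_start, period_end):
    """Seconds a resource existed within [period_start, period_end).
    deleted_at=None means the resource still exists."""
    start = max(created_at, period_start)
    end = min(deleted_at or period_end, period_end)
    # A resource deleted before the period (or created after it) bills 0.
    return max(0, int((end - start).total_seconds()))

# A volume created mid-period and still alive at period end is billed
# only for the part of its lifetime inside the period:
secs = billable_seconds(
    created_at=datetime(2019, 6, 6, 12, 0),
    deleted_at=None,
    period_start=datetime(2019, 6, 6, 0, 0),
    period_end=datetime(2019, 6, 7, 0, 0),
)
print(secs)  # 43200, i.e. 12 hours
```

From there, billing in any increment is just a division of the seconds total, which matches the "$bill_in_whatever_increment_you_want" remark above.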
14:45 <tobberydberg> event driven only, and I think that will cover the billing bits
14:46 <tobberydberg> so, using ceilometer for that mnaser? Reading off the rabbit queue?
14:46 <mnaser> nope, reading the db :\
14:46 <tobberydberg> neutron?
14:46 <mnaser> not billing anything for that
14:47 <tobberydberg> ok. I totally agree that reading off the database is very easy for a lot of the resources, but not for all
14:47 <tobberydberg> object storage is another thing that will be hard to track that way
14:48 <tobberydberg> dynamic size stuff
14:48 <mnaser> well i was thinking for that sort of thing, parsing logs might be the way to go
14:49 <mnaser> logs are pretty much events of download/upload i guess
14:49 <wondra> Reading the DB is bad. There's no clear contract. Any OpenStack project can change it and break the code.
14:49 <tobberydberg> true
14:52 <ncastele> agree with wondra, the db contract is not stable. But it's the easiest and least impacting way to read data
14:52 <tobberydberg> So, just a few more minutes left of today's meeting; it would be good to sum it up and find action points for next meeting
14:52 <ncastele> I tried a PoC reading from the Nova APIs as an admin, and I quickly reached some limits
14:53 <wondra> But the deployment tools are trailing the OpenStack releases. Maybe a billing product that reads the database could only exist for 1 year old releases...
14:53 <tobberydberg> #action All - continue to define what we really need to track
14:55 <tobberydberg> what else? more concrete suggestions on collection methods? events? DB queries? whatnot?
14:55 <tobberydberg> Should we try to book a meeting for next week as well, to keep up the pace before all the vacations?
14:56 <tobberydberg> I'm happy to take a meeting same time next week
15:00 <wondra> Not against it. I have said most of what I know, though.
15:01 <tobberydberg> well, time is up for today ... any opinions on the above before we close down?
15:01 <tobberydberg> Thanks wondra ... have you added your needs for metrics to the etherpad as well?
15:01 <witek> a good understanding of what exactly has to be collected is important
15:02 <wondra> yes, I have.
15:02 <tobberydberg> yes, I think so too, and that is what we need to base the tooling etc. on to be successful
15:02 <tobberydberg> +1 wondra
15:03 <tobberydberg> Ok. So I'll end today's meeting. I'll send out a reminder for a meeting next week, and we continue with once per week for a few more weeks and see what we can get out of that
15:03 <tobberydberg> Thanks for today folks!
15:04 <witek> thanks, I'm on vacation, see you in 3 weeks
15:05 <tobberydberg> #endmeeting
15:05 *** openstack changes topic to "New meeting time!! Thursday odd weeks at 1400 UTC in this channel!!"
15:05 <openstack> Meeting ended Thu Jun  6 15:05:17 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:05 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/publiccloud_wg/2019/publiccloud_wg.2019-06-06-14.10.html
15:05 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/publiccloud_wg/2019/publiccloud_wg.2019-06-06-14.10.txt
15:05 <openstack> Log:            http://eavesdrop.openstack.org/meetings/publiccloud_wg/2019/publiccloud_wg.2019-06-06-14.10.log.html
15:05 <ncastele> thanks for the meeting (sorry for the end of the meeting, I had a bit of an emergency on the infra :o )
15:06 <ncastele> Goal for next meeting is to specify our needs for collecting, got it :)
15:12 *** wondra has quit IRC
16:19 *** hberaud is now known as hberaud|gone
16:32 *** witek has quit IRC
16:54 *** ricolin has quit IRC
17:02 *** mrhillsman is now known as openlab
17:05 *** openlab is now known as codebauss
17:13 *** codebauss is now known as openlab
17:14 *** openlab is now known as codebauss
17:15 *** codebauss is now known as openlab
17:16 *** openlab is now known as codebauss
17:24 *** codebauss is now known as mrhillsman
17:33 *** jamesmcarthur has quit IRC
17:50 *** gtema_ has quit IRC
18:56 *** jamesmcarthur has joined #openstack-publiccloud
19:12 *** jamesmcarthur has quit IRC
19:14 *** jamesmcarthur has joined #openstack-publiccloud
19:55 *** jamesmcarthur has quit IRC
19:56 *** jamesmcarthur has joined #openstack-publiccloud
20:02 *** ncastele has quit IRC
20:24 *** jamesmcarthur has quit IRC
20:27 *** jamesmcarthur has joined #openstack-publiccloud
20:29 *** jamesmcarthur_ has joined #openstack-publiccloud
20:31 *** jamesmcarthur has quit IRC
21:03 *** jamesmcarthur_ has quit IRC
21:05 *** jamesmcarthur has joined #openstack-publiccloud
21:16 *** jamesmcarthur_ has joined #openstack-publiccloud
21:18 *** jamesmcarthur has quit IRC
21:36 *** jamesmcarthur_ has quit IRC
21:37 *** jamesmcarthur has joined #openstack-publiccloud
21:46 *** jamesmcarthur has quit IRC
23:13 *** jamesmcarthur has joined #openstack-publiccloud
23:16 *** jamesmcarthur has quit IRC
23:17 *** jamesmcarthur has joined #openstack-publiccloud
23:42 *** jamesmcarthur has quit IRC
23:45 *** jamesmcarthur has joined #openstack-publiccloud
23:57 *** jamesmcarthur has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!