15:00:02 #startmeeting Ceilometer
15:00:02 #meetingtopic Ceilometer
15:00:02 #chair nijaba
15:00:02 #link http://wiki.openstack.org/Meetings/MeteringAgenda
15:00:03 Meeting started Thu Nov 29 15:00:02 2012 UTC. The chair is nijaba. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:07 The meeting name has been set to 'ceilometer'
15:00:09 Current chairs: nijaba
15:00:12 ATTENTION: please keep discussion focused on topic until we reach the open discussion topic
15:00:12 Hello everyone! Show of hands, who is around for the ceilometer meeting?
15:00:12 o/
15:00:17 o/
15:00:25 o/
15:00:31 o/
15:01:04 #topic actions from previous meeting
15:01:04 #topic eglynn drive fleshing out of nova-virt API to completion
15:01:18 So lots of circular discussion on the upstream ML
15:01:31 o/
15:01:38 and we discussed this further on #openstack-metering yesterday
15:01:46 the new preferred approach is to code directly to the underlying hypervisor layer
15:01:54 (e.g. the libvirt API)
15:02:09 so as to avoid the release mgmt / versioning complexities of nova releasing a separate library
15:02:09 how does this differ from what we have today?
15:02:35 we currently code to the hypervisor driver in nova
15:02:49 (not to the libvirt-python API)
15:03:04 ah, k
15:03:05 the latter is packaged as a separate library
15:03:13 (independently versioned etc.)
15:03:31 o/
15:03:36 but would we be able to retrieve openstack context that way (ie meta-data)?
15:03:37 we only use a tiny chunk of the nova hypervisor driver
15:03:38 does libvirt control Xen & qemu VMs in addition to KVM?
15:03:53 n0ano: it can do
15:04:10 so the idea would be to also take a similar approach with xenapi etc.
15:04:11 does that mean it doesn't right now?
15:04:19 no
15:04:22 n0ano: it's one of 2 possibilities for xen, and it is the case for lxc
15:04:34 OK, but what about qemu?
15:04:35 it can do, if configured to do so
15:05:10 eglynn: not sure you got to answer my q:
15:05:12 yeah qemu too
15:05:12 but would we be able to retrieve openstack context that way (ie meta-data)?
15:05:31 nijaba: so that's one wrinkle
15:05:39 a big one...
15:05:48 the libvirt domain representation doesn't give us the flavor or display name
15:06:05 those are pretty important details for billing
15:06:13 (both of which we need for metering; certainly the flavor for the instance meter is needed, display name could be dropped maybe...)
15:06:20 hmmm... important as in essential, yes
15:06:21 so we'll still have to call out to nova-api to get this info
15:06:28 (though we may not have to call out to it on every polling cycle)
15:06:31 otoh, if we query nova for a list of instances first we wouldn't have to worry about the formula for vm names, right?
15:06:37 i.e. only when we see a new instance UUID
15:07:14 eglynn: ah, and then we could retrieve the info through standard api calls?
15:07:22 how would we see that? through notifications?
15:07:36 nijaba: standard API, as in nova public API?
15:07:38 dhellmann: Through novaclient or notification, either way
15:07:53 eglynn: yep?
15:07:57 nijaba: sure
15:08:11 nijaba: (that's how we list the instances currently)
15:08:16 does the nova api allow queries for instances on a given host?
15:08:23 (i.e. we no longer go direct to the nova DB as before)
15:08:41 dhellmann: that's the current method, nova client to get all instances on a given host
15:08:53 yjiang5_away: current where? in ceilometer?
15:09:05 I thought we were using an internal API to talk to nova now.
15:09:06 yep it does
15:09:23 dhellmann: yes, I remember we get instances for one host through novaclient
15:09:27 dhellmann: not to grab the instance list, that goes thru the public nova API
15:09:32 dhellmann ^^^
15:09:39 eglynn: ah, I thought everything was internal
15:09:54 ok, so we have a way to get the data without relying on internals, we just need to work out how often
15:09:58 so it used to be grabbed via the nova DB, but that changed recently
15:10:18 dhellmann: so I'm thinking every time we see a new instance UUID
15:10:23 I suggest we start simple and continue to ask every time we start to poll, then optimize as a separate step after we have the libvirt wrapper built
15:10:33 yep reasonable
15:10:40 eglynn: I don't know what you mean by "see a new instance UUID", though
15:10:49 what bit of code is seeing it?
15:11:18 dhellmann: when libvirt reports an instance UUID that wasn't included in the instances described by the *last* call to the public nova API
15:11:28 aha, so libvirt knows the uuid?
15:11:29 right, we don't currently maintain any list, IIRC...
15:11:34 I thought it only had a "name" or something
15:11:35 dhellmann: yep
15:12:13 dhellmann: a nova uuid, and a libvirt ID, and a formatted instance-000000%x type name
15:12:14 easy, then, we can just ask libvirt for instances and then ask nova for details as we discover new ones
15:12:21 exactomundo!
15:12:32 eglynn: we need to check the nova code invoked by ceilometer currently, to see if nova added anything after it got the information from libvirt
15:12:44 so, if I understand this, we're now requiring libvirt for things to work; if you configure to use xenapi you won't get this info
15:12:45 ok, so who takes the action to transform the agent that way?
15:13:03 n0ano: the idea is to do the same for xenapi also
15:13:09 n0ano: it's already like this anyway. we currently only support libvirt
15:13:19 now this may not be quite as neat with the xenapi version
15:13:28 (where there's some RRD file parsing needed to get the hypervisor stats reliably)
15:13:37 * eglynn needs to look into that in more detail
15:13:53 yjiang5_away: I didn't understand the question
15:14:03 just so long as we don't forget the hypervisor APIs
15:14:18 s/hypervisor/other hypervisor
15:14:18 so I have a prototype of this working for libvirt, need to polish a bit more before proposing for review
15:14:25 n0ano: agree
15:14:40 should have a patch tmrw
15:14:42 eglynn: currently we invoke nova code to fetch libvirt information, right? Did nova add anything before returning to ceilometer? If yes, that added stuff should be handled by ceilometer then.
15:14:59 eglynn: cool, thanks. should you action yourself on this, or do you need more than a week?
15:14:59 n0ano: yep, we want to support the other hypervisors, eventually
15:15:04 hi
15:15:11 hey jd__!
15:15:29 :)
15:15:49 yjiang5_away: so I've removed the direct nova usage, instead calling the libvirt API directly, but based on the original nova code (the v. small subset we use...)
15:15:57 eglynn: is there a blueprint for this stuff? that may make more sense than an action item
15:16:12 yjiang5_away: so I don't think I've missed any nova "secret sauce"
15:16:12 dhellmann: true
15:16:18 dhellmann: I'll file a BP
15:16:20 eglynn: great.
15:16:23 thanks
15:16:26 eglynn: cool, thanks
15:16:29 np!
15:16:31 shall we move on?
15:16:43 cool
15:16:45 also, eglynn, double extra plus thanks for taking the lead on this. I'm *so* tired of nova breaking us.
15:16:56 yeah, I hear ya!
15:17:04 #topic yjiang5 to start a thread on transformer
15:17:17 * nijaba thanks eglynn warmly too
15:17:28 nijaba: yes, sent mail to ML for discussion
15:17:48 any conclusion to report?
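[Editor's note: the instance-discovery scheme agreed above — ask libvirt for domain UUIDs, and call out to the nova public API only for UUIDs not seen on a previous polling cycle — could be sketched as below. The callables `list_domain_uuids` and `fetch_instance_details` are hypothetical stand-ins for the real libvirt / novaclient calls, not actual ceilometer code.]

```python
def discover_new_instances(known, list_domain_uuids, fetch_instance_details):
    """Resolve nova details (flavor, display name) only for new instances.

    known                  -- set of UUIDs already resolved via the nova API
    list_domain_uuids      -- callable returning the instance UUIDs that the
                              hypervisor (e.g. libvirt) currently reports
    fetch_instance_details -- callable(uuid) -> dict of nova-side metadata

    Returns a dict mapping each newly seen UUID to its details, and adds
    those UUIDs to `known` so later polling cycles skip the nova call.
    """
    new_details = {}
    for uuid in list_domain_uuids():
        if uuid not in known:  # only hit the nova API for unseen UUIDs
            new_details[uuid] = fetch_instance_details(uuid)
            known.add(uuid)
    return new_details
```

On a second polling cycle with the same domains, the function returns an empty dict — no nova API traffic, matching the "only when we see a new instance UUID" optimisation eglynn describes.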
15:18:12 I think we now have a good overview of what we want
15:18:13 no objection to the transformer method, only concern is dhellmann thinks it's too flexible
15:18:28 yeah, how can "too flexible" be a concern, really, dhellmann ;)
15:18:46 dhellmann: really ? ;)
15:18:51 I'm still a little worried about the notion of chaining transformers, and the complexity of configuring that. But I'm content to wait for a patch.
15:19:17 I plan to use this method to finish the CW publisher and then send it out for discussion. At that time, we will have a solid base patch for future discussion.
15:19:36 works for me
15:19:39 the more complicated it is, the more likely someone will have trouble setting it up
15:19:40 yjiang5_away: cool
15:19:49 yjiang5_away: excellent
15:20:02 dhellmann: yes, so I think mostly they should use the default one.
15:20:17 yjiang5_away: action?
15:20:24 yjiang5_away: right. but then what's the point of building a configuration system? :-)
15:20:55 dhellmann: some advanced users, or someone may want to add special metrics themselves through configuration
15:20:58 dhellmann: evolution! :)
15:21:32 nijaba: yes, my action to send patches for transformer
15:21:50 yjiang5_away: OK. I just prefer to add features one at a time. Multiple publishers first, then deployer configuration for them.
15:21:51 #action yjiang5_away to send patches for transformer
15:21:51 so in terms of configuration, it should be mostly pre-canned, right?
15:21:55 #topic eglynn to report on synaps' decision
15:22:02 the synaps guys have come to a conclusion
15:22:12 and they want to come on board
15:22:16 w00t!
15:22:19 \o/
15:22:25 +1
15:22:29 +1 :)
15:22:40 but have limited bandwidth initially (other internal demands on their time)
15:22:40 great :)
15:22:52 don't we all
15:22:52 so I'm proposing to have the conversation now on the upstream ML
15:23:02 that's fine. Let's make sure we help them feel "at home"
15:23:04 (re-)invite them on board and discuss the mechanics of bringing their code under the ceilo umbrella
15:23:10 dhellmann: yes, I'm now working on multiple publishers
15:23:39 so we'll have a good starting point I think for a standalone monitoring solution
15:23:58 I have a laundry list of initial tasks that I'll translate to blueprints
15:24:34 #action eglynn kick off Synaps discussion on upstream ML
15:24:44 #topic nijaba to get rid of roadmap page content to point to lp
15:24:44 This was done, see:
15:24:44 #link http://wiki.openstack.org/EfficientMetering/RoadMap
15:25:06 #topic dhellmann test more lenient anyjson support for folsom and grizzly compatibility
15:25:14 I think this was done and merged
15:25:36 yes, that's in now
15:25:36 dhellmann: correct?
15:25:40 or at least up for review
15:25:46 perfect, thanks!
15:25:52 #topic dhellmann update user-api blueprint
15:25:55 just checked and it has merged
15:26:18 I did update that blueprint with a link to a wiki page where I described the work a bit
15:26:37 dhellmann: http://wiki.openstack.org/spec-ceilometer-user-api?
15:26:49 yes
15:26:57 #link http://wiki.openstack.org/spec-ceilometer-user-api
15:27:32 any action, apart from coding, left?
15:27:40 no, I don't think so
15:27:47 I'm waiting to start coding until my WSME conversion is completed
15:27:53 thanks!
15:27:55 baby steps...
15:27:56 #topic dhellmann update pecan port blueprint
15:28:06 I've updated the blueprint with some details
15:28:09 dhellmann: you had a lot of actions!
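[Editor's note: a minimal sketch of the transformer chaining discussed under the transformer topic above — the design yjiang5 is prototyping and dhellmann worries is "too flexible". Class and function names here are illustrative only, not the actual ceilometer transformer API.]

```python
class ScaleTransformer:
    """Multiply a sample's volume by a fixed factor (e.g. unit conversion)."""

    def __init__(self, factor):
        self.factor = factor

    def handle(self, sample):
        out = dict(sample)            # don't mutate the caller's sample
        out["volume"] *= self.factor
        return out


class DropSmallTransformer:
    """Drop samples below a threshold by returning None."""

    def __init__(self, threshold):
        self.threshold = threshold

    def handle(self, sample):
        return sample if sample["volume"] >= self.threshold else None


def run_pipeline(sample, transformers):
    """Pass a sample through each transformer in turn before publishing.

    A transformer may return None to drop the sample, which short-circuits
    the rest of the chain.
    """
    for transformer in transformers:
        sample = transformer.handle(sample)
        if sample is None:
            return None
    return sample
```

The configuration complexity dhellmann raises comes from the fact that each deployment could wire up a different list of transformers per meter; the "mostly pre-canned" answer is to ship a sensible default chain.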
15:28:23 I have the port to pecan complete, but ran into an issue with WSME so haven't finished that part yet
15:28:41 I hope to have that done before I go on vacation next weekend
15:28:41 #link http://wiki.openstack.org/spec-ceilometer-api-server-pecan-wsme
15:29:07 then we can look at asalkeld's ideas for changes to the API and terminology (most of which I really like)
15:29:22 sounds good
15:29:37 cool
15:29:43 "action week"
15:29:50 #topic zykes to report status on Bufunfa / BillingStack port
15:29:55 * dhellmann is cramming work in before going out for 2 weeks
15:30:10 zykes- there?
15:30:27 * nijaba thinks dhellmann won't be able to do any work after having dinner with jd and him
15:30:45 does not look like he is around, let's move on
15:30:49 * dhellmann wasn't planning on it
15:30:57 #topic Bug squashing day
15:30:57 It was proposed to organize a bug squashing day, or bugfest...
15:31:12 so, when should this be?
15:31:28 what's our bug queue length like?
15:31:28 just before g2 release?
15:31:42 just before g2, during g3, both?
15:31:45 it's good to have a pre-prepared list of low hanging fruit
15:32:04 (for noobs who happen upon the bug squashing day ...)
15:32:09 eglynn: 64 open bugs of all sorts
15:32:17 nijaba: :))
15:32:43 so we should triage this queue and try to identify some nice self-contained fixes
15:32:59 a first one I think before g2 would be a good idea
15:33:20 (and resist the temptation to fix them ourselves!)
15:33:27 jd__: agree
15:33:56 eglynn: only 9 of them need triaging (new)
15:34:11 cool
15:34:20 30 are either new, confirmed or triaged, quite a few of them wishlist
15:34:58 so at least two things we can achieve with such a bug day: 1. drive quality 2. find some new active contributors
15:35:09 for #2, wishlist low-priority bugs are fine
15:35:13 12 are effort-s, too
15:35:37 ok, so g2 is jan 10. bug day on jan 7th (monday)?
15:35:55 hmmm, that seems a bit tight
15:36:08 maybe late in the week before?
15:36:18 3rd or 4th?
15:36:18 eglynn: I am afraid of hangovers or vacations if we do that the week before
15:36:26 (otherwise we may be getting jumpy about regressions)
15:36:40 nijaba: a-ha, OK, forgot about that ;)
15:37:03 oh I think the 3rd or 4th should be almost hangover free
15:37:05 not sure about vacations though
15:37:38 ok, let's vote then ;)
15:37:40 anyway, both are fine for me
15:37:41 so if we go with the 7th, some of the bug day fixes will prolly miss g2
15:37:49 (but maybe that's OK)
15:38:06 (g3 will be along soon enough...)
15:38:09 eglynn: agree
15:38:13 #startvote when to do bugday? jan4, jan7
15:38:14 Begin voting on: when to do bugday? Valid vote options are jan4, jan7.
15:38:15 Vote using '#vote OPTION'. Only your last vote counts.
15:38:25 #vote jan7
15:38:30 #vote jan7
15:38:37 #vote jan4
15:38:38 #vote jan4
15:38:39 #vote jan4
15:38:53 #vote jan4
15:38:56 20 sec countdown
15:39:09 #endvote
15:39:10 Voted on "when to do bugday?" Results are
15:39:11 jan4 (4): jd__, n0ano, dhellmann, eglynn
15:39:12 jan7 (2): yjiang5_away, nijaba
15:39:28 eat that, January 7th!
15:39:30 #agreed bugday on jan 4th
15:39:42 * n0ano we have a mandate :-)
15:40:05 ok, I guess we'll define the date for g3 later?
15:40:14 based on our experience?
15:40:27 yep makes sense
15:40:31 nijaba: +1
15:40:40 maybe not so close to the deadline next time :-)
15:40:42 let's move on then
15:40:50 #topic Review blueprints and progress
15:41:16 so, I think we should define a close date for blueprint validation
15:41:22 g2 and g3 are coming up fast
15:41:43 so I think if we have not agreed on a bp before dec 15th, we should probably can it....
15:41:58 that seems reasonable
15:42:05 what's the process for "agreeing"?
15:42:20 discussion on the list?
15:42:25 I guess
15:42:27 that's a deadline for g2 only? (or for grizzly in general?)
15:42:37 we don't have a formal agreement process, but consensus on the features and how to achieve them is what we have done before
15:42:46 eglynn: in general
15:42:49 k
15:43:03 ok, I just want to make sure I get the agreements I need
15:44:07 #startvote agree on dec 15th for blueprint "freeze"? yes, no, abstain
15:44:08 Begin voting on: agree on dec 15th for blueprint "freeze"? Valid vote options are yes, no, abstain.
15:44:09 Vote using '#vote OPTION'. Only your last vote counts.
15:44:16 #vote yes
15:44:25 #vote yes
15:44:31 #vote yes
15:44:32 #vote yes
15:44:32 #vote yes
15:44:35 #vote yes
15:44:55 #endvote
15:44:56 Voted on "agree on dec 15th for blueprint "freeze"?" Results are
15:44:57 yes (6): n0ano, jd__, nijaba, eglynn, yjiang5_away, dhellmann
15:45:09 #agreed blueprint freeze on dec 15th
15:45:24 #topic Discuss suggested re-narrowing of project scope for grizzly versus user-oriented monitoring
15:45:45 so this topic was motivated by markmc's observations on the ML: http://lists.openstack.org/pipermail/openstack-dev/2012-November/003348.html
15:45:50 I think this was proposed by someone outside of the project
15:45:59 I am currently not too afraid of this
15:46:06 what do you guys think?
15:46:17 especially now that we have a bp freeze
15:46:53 I think some of this was brought about by the "thrashing" of the discussion of how to handle monitoring
15:47:09 well Synaps will give us a (user-oriented) monitoring service, and multi-publish will give the ability to push metrics into that
15:47:20 yes, so now we have a more defined direction
15:47:42 but I agree that extending ourselves to also cover system oriented monitoring / instrumentation now would be a bridge too far
15:47:51 (for grizzly anyway)
15:48:08 eglynn: on this I tend to agree, but we don't even have the start of a bp for it
15:48:14 by system oriented monitoring I mean cloud-operator-focused
15:48:15 yes, I think we need to start differentiating between short term and long term goals for these new features
15:48:20 so I was seeing it as discussions for h
15:48:23 I'm more interested in system monitoring but I'm more than willing to address that post-grizzly
15:48:40 cool
15:48:42 but we should definitely continue the discussion
15:48:50 agreed
15:48:54 yep
15:49:04 ok, cool!!!
15:49:09 discussion - yes, actively push code upstream - not just yet
15:49:09 agreed
15:49:20 #topic multi-publisher blueprint
15:49:26 until there's any code, that's just kind of useless discussion anyway :)
15:49:44 I'm working on the patch now
15:49:48 I think we are getting close to having an agreement on this one, right?
15:49:56 cool
15:49:58 nijaba: yes
15:49:59 yeah, yjiang5_away is doing the grunt work :)
15:50:06 thanks yjiang5_away
15:50:14 :)
15:50:17 seconded!
15:50:30 #topic multi-dimensions bp
15:50:43 link?
15:50:46 so this is a new one that jd and I came up with
15:51:08 #link http://wiki.openstack.org/Ceilometer/blueprints/multi-dimensions
15:51:10 #link https://blueprints.launchpad.net/ceilometer/+spec/multi-dimensions
15:51:15 hehe
15:51:54 so the question left on this one is whether we need to express complex dimensions with & and | or not
15:52:11 I think we could propose only & for a first version
15:52:22 I think too
15:52:24 agreed
15:52:26 nijaba: +1
15:52:27 and extend it later if needed
15:52:36 or it can be achieved with multiple requests easily for now
15:52:43 I don't care for the syntax described, but the intent is good
15:53:12 #action nijaba to update the bp to specify complex requests in a future version
15:53:22 dhellmann: feel free to propose a syntax :)
15:53:43 nijaba: something more closely resembling a regular GET request would be easier to handle
15:54:12 dhellmann: that reminds me of what jd was suggesting
15:54:31 &key=string&key=string....
15:55:06 nijaba: right. let me work up an example using a syntax WSME supports :-)
15:55:23 ok, I'll wait for your example then. thanks
15:55:29 the controller method would get an array of dimension objects as an argument if we do it right
15:55:38 perfect
15:55:41 that was the idea
15:56:13 #action dhellmann to prepare example of dimension query syntax that will work with WSME
15:56:55 #topic Discuss adopting asalkeld's client implementation officially
15:57:12 I am actually all for this proposal
15:57:16 anything blocking?
15:57:25 what's asalkeld's version?
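[Editor's note: the repeated `&key=string&key=string` GET form jd suggests above, with each pair implicitly AND-ed, can be parsed with the standard library alone. This is a sketch of the idea under discussion, not dhellmann's eventual WSME-backed controller signature.]

```python
from urllib.parse import parse_qsl


def parse_dimensions(query_string):
    """Turn 'flavor=m1.tiny&host=node1' into a list of (key, value)
    dimension pairs, all implicitly AND-ed together.

    Repeated keys are preserved in order, so a controller receiving
    these pairs can build the "array of dimension objects" mentioned
    in the meeting.
    """
    return parse_qsl(query_string)
```

Supporting | (OR) later would need a richer syntax than flat key=value pairs, which is why the meeting settles on & only for a first version.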
15:57:38 so when zykes- asked about CLIs a few days ago, I was surprised there were so many
15:57:38 * dhellmann looks for link
15:57:40 I thought dhellmann had one under his foot
15:57:49 #link https://github.com/asalkeld/python-ceilometerclient
15:57:56 jd__: ours is much more primitive
15:58:08 asalkeld used the "standard" client model
15:58:18 much more complete
15:58:39 we really need a client lib
15:58:39 ok :)
15:58:40 sounds like the one to row in behind
15:58:48 anyone against?
15:59:01 nope
15:59:05 didn't look at the code, but I'm probably not against anyway
15:59:06 * nijaba needs to run to deliver a speech, sorry to be in a hurry
15:59:10 I'm obviously in favor
15:59:13 can we put that into openstack core/incubated?
15:59:33 that's the idea, to have another git repo setup for it with CI integration, etc.
15:59:34 jd__: normally the client is a separate project?
15:59:39 yjiang5_away: yes
15:59:52 #agreed adopt asalkeld's client officially
15:59:55 dhellmann: perfect, what do we have to do for that?
16:00:03 jd__: dunno
16:00:06 yjiang5: separate repo, but same overall project?
16:00:11 talk to the infra team, I guess
16:00:15 #action move asalkeld's client into openstack incubation/core
16:00:23 do you guys mind continuing the discussion on our chan? I need to run :/
16:00:26 that needs an owner
16:00:31 #action jd__ move asalkeld's client into openstack incubation/core
16:00:31 nijaba: #chair me, I'll finish
16:00:32 nijaba: I think we're done
16:00:40 #chair jd__
16:00:41 or maybe not
16:00:41 Current chairs: jd__ nijaba
16:00:54 run nijaba! run!
16:00:57 jd__: thanks, did not know I could do that during the meeting
16:01:04 * nijaba waves and runs
16:01:26 * eglynn wonders if we've any sharable presentation collateral?
16:01:55 (giving a talk to an Irish openstack user group next week on ceilo/monitoring etc.)
16:02:10 #topic open discussion
16:02:14 http://www.meetup.com/OpenStack-Ireland/events/92413652/
16:02:17 eglynn: I remember jd__ sent some in IRC before
16:02:29 eglynn: I've got the ones we used with nijaba a few weeks ago
16:02:49 https://docs.google.com/presentation/d/1i30roVZp00Wvo46F4k5CT98sw2uMgaf5Lh3bSfiQ-Cg/edit#slide=id.p
16:02:50 jd__, yjiang5: cool
16:03:18 thanks, I may borrow liberally ;)
16:03:37 jd__: can we put it into the ceilometer launchpad?
16:04:02 yjiang5_away: good idea
16:04:30 sure
16:04:51 jd__: thanks.
16:05:09 yjiang5_away: even better, let's add a link to our docs
16:05:38 dhellmann: yep, and eglynn can add his new one after he finishes the talk :)
16:05:39 speaking of docs, the jenkins job to build them is now working
16:05:51 yjiang5: yep, will do!
16:05:54 #link http://docs.openstack.org/developer/ceilometer/
16:06:04 I opened a bug to fix the styling
16:06:37 awesome
16:06:46 does someone have time to talk to the doc team about whether something will link to us?
16:07:23 I don't know what we have to do to get a link on the project list, for example
16:07:31 bribe someone I guess
16:07:53 somebody for some #action ?
16:08:15 dhellmann: So basically contact the doc team?
16:08:41 yjiang5_away: yes, we need someone to email them and ask what to do next
16:08:54 dhellmann: I'm glad to help
16:09:19 great! want to take an #action item?
16:09:29 dhellmann: yes, but I don't know how to #action yet :$
16:09:39 just type it out :-)
16:09:54 yjiang5_away: #action yjiang description
16:09:59 yep
16:10:09 #action talk with doc team to link to ceilometer docs
16:10:36 you forgot your nickname after #action I think :)
16:10:38 #action yjiang5 talk with doc team to link to ceilometer docs
16:10:43 +1
16:10:49 that one seems right!
16:10:56 anything else for today?
16:11:08 :)
16:11:39 last call!
16:11:40 and that's a wrap I think ...
16:11:45 I'm done
16:11:49 #endmeeting