16:01:08 #startmeeting
16:01:09 #meetingname ceilometer
16:01:09 Meeting started Thu May 24 16:01:08 2012 UTC. The chair is jd___. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:09 #link https://lists.launchpad.net/openstack/msg12156.html
16:01:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:11 The meeting name has been set to 'ceilometer'
16:01:19 #topic actions from previous meetings
16:01:50 flacoste: something to add for your action item from last meeting?
16:01:57 dhellmann: also ? :)
16:02:12 sorry I'm late
16:02:17 no problem
16:02:29 jd___: nope, comments sent to the list
16:02:30 if we have time at the end can we talk about a way to share experimental code?
16:02:35 jd___: and i think we reached agreement
16:02:43 o/
16:02:53 dhellmann: yep
16:02:59 thanks
16:03:22 #info flacoste comments sent to the list
16:03:41 #topic messaging queue system to use
16:03:49 now let's discuss the real stuff :)
16:03:52 :-)
16:04:20 as I think I mentioned on the list, we should try to avoid dictating a specific implementation and limit ourselves to basic requirements for a message bus
16:04:26 #link https://lists.launchpad.net/openstack/msg11937.html
16:04:40 dhellmann: is the nova.rpc mechanism enough?
16:04:48 I like the idea of using it, personally
16:04:54 if I can get this "worker" change approved it should do exactly what we need
16:05:16 I posted a new patch today with qpid tests but I need to push for more reviews
16:05:23 well, your change seems to be on a good path
16:05:59 and if I understood correctly we'll get qpid/zmq/rabbit for free by using nova.rpc so everybody is likely to be happy?
16:06:31 that is also my understanding
16:07:08 sounds good
16:07:15 ok so is everybody happy to say that we agree to use the nova.rpc mechanism and let the user choose the messaging bus he wants?
16:07:47 with the stipulation that when nova.rpc moves into the common library we should actually use *that* version
16:07:59 depending on the schedule, of course
16:08:02 yeah, obviously :)
16:08:21 ok, I'm happy with that decision
16:08:43 #agreed use nova.rpc as a messaging bus and let the user choose what he wants behind it (qpid, rabbit, zmq…)
16:08:57 #agreed use nova.rpc from openstack.common when it moves there
16:09:07 anything else?
16:09:43 do we need to make sure the requirements nick posted to the list are met by the systems supported by nova.rpc?
16:09:46 I think they are, but...
16:10:05 well, not all of them are
16:10:11 by all queues
16:10:22 true. if the user has persistent queues disabled in rabbit then we can't guarantee delivery.
16:10:36 flacoste, did you have another example?
16:10:37 and the ha story of rabbit isn't that great, last i heard
16:10:46 well, if the user shoots himself in the foot… :)
16:10:53 what about zmq?
16:10:53 it required shared storage
16:10:56 I keep hearing that, but no one has any details. I'm not an expert, so I don't know one way or the other.
16:11:10 hm I don't think zmq has persistence
16:11:12 my understanding is that it's easier to build a ha message queue system with zmq
16:11:21 but yeah, i think it lacks some other requirements
16:11:35 so maybe that delivery requirement shouldn't be *our* requirement, but it may be a *user* requirement that we should support
16:11:35 well zmq doesn't really have a "queue" (independent service that "stores" the messages)
16:11:48 right, clayg, that's my understanding
16:11:50 as nijaba wrote "Not sure this list is exhaustive or viable" so it's likely that you can't satisfy *every* point with only one message queue
16:11:57 you'll have to make trade-offs
16:11:57 someone could build a message queue server with zmq but it isn't one by itself
16:12:04 exactly
16:12:30 but I don't think we should decide which trade-off to make for the user
16:12:35 so nova.rpc is a good call :)
16:12:36 +1
16:13:17 anything else?
16:13:30 jd___: where are you at on your nova branch to support an independent volume service? Do the volume tables get ripped out? Will nova-compute be the only component to talk to volumes? What's the attach workflow? Who keeps track of what's attached where?
16:13:45 ^ any of those
16:13:45 clayg: Error: "any" is not a valid command.
16:13:50 whoa...
16:14:32 is that work related to ceilometer?
16:14:34 clayg: I don't follow you here?
16:14:59 am I in the wrong meeting?
16:15:12 maybe. :-) this is the metering group meeting
16:15:12 when will nova be ready to consume an independent volume service (e.g. cinder)
16:15:24 I guess I am in the wrong meeting then :D
16:15:31 I think you're in the wrong meeting :D
16:16:02 ok, so, volumes aside :D anything else? :)
16:16:18 experimental branches?
16:16:27 ok let's change topic then
16:16:35 #topic message bus usage described in architecture proposal V1
16:16:50 I don't think dhellmann has anything to add about it :)
16:16:55 heh
16:17:33 I haven't worked out the implementation details of sending the actual metering messages, yet. Should those be cast() calls?
16:17:54 dhellmann: good question
16:18:06 or should we use a topic publisher like the notifications do?
16:18:19 I'm more in favor of copying notifications
16:18:20 it seems like metering messages are another case where we might want multiple subscribers to see all of the messages
16:18:28 yeah, I'm leaning that way, too
16:18:47 it's just messaging so…
16:18:55 I understand that metering messages will be sent over a topic, but will the data also be exposed/stored in a database?
16:19:11 mnaser, yes
16:19:12 mnaser: that's the job of the collector, yes
16:19:29 Fantastic, I've been working with the Nova notification system and it's a much better way to request information.
16:19:50 dhellmann: do we use only one topic?
16:19:53 the collector will write the info to a database and the api service will allow for some basic queries
16:20:26 we could do it like the notifications do and publish multiple times to "metering" and "metering + counter_id"
16:20:32 so metering and metering.instance for example
16:20:52 makes sense
16:20:55 that doubles the traffic, but the exchange should just discard the message if no one is listening
16:21:07 #info we could do it like notifications do and publish multiple times to "metering" and "metering + counter_id"
16:22:26 anything else on this?
16:22:45 I find the current architecture good enough and clear so… :)
16:22:52 good work from dhellmann \o/ :)
16:23:00 thanks!
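A minimal sketch of the dual-topic publication pattern discussed above (publish each counter on "metering" and on "metering.<counter name>"). It assumes a generic publish(topic, message) callable rather than any particular nova.rpc API; publish_counter() and the counter fields are illustrative only, not the actual ceilometer code:

```python
# Illustrative sketch only: publish(), publish_counter(), and the
# counter fields are assumptions, not the actual ceilometer code.

def publish_counter(publish, counter):
    """Send one metering message on the generic topic and on a
    per-counter topic, mirroring how nova notifications fan out."""
    message = {
        'counter_name': counter['name'],       # e.g. 'instance'
        'counter_volume': counter['volume'],
        'resource_id': counter['resource_id'],
        'project_id': counter['project_id'],
        'timestamp': counter['timestamp'],
    }
    # Generic topic: any collector subscribed to 'metering' sees it.
    publish('metering', message)
    # Per-counter topic, e.g. 'metering.instance', so a consumer can
    # subscribe to a single counter type only.
    publish('metering.%s' % counter['name'], message)


def _debug_publish(topic, message):
    # Stand-in publisher for testing; a real one would hand the
    # message to whatever backend nova.rpc is configured to use.
    print('%s: %r' % (topic, message))


publish_counter(_debug_publish,
                {'name': 'instance', 'volume': 1,
                 'resource_id': 'fake-uuid', 'project_id': 'fake-tenant',
                 'timestamp': '2012-05-24T16:20:00Z'})
```

As noted in the discussion, publishing twice doubles the traffic, but the exchange simply drops the copy that no one is subscribed to.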
16:23:08 The backend storage, are there plans to use RRD, or how exactly are you thinking of doing it?
16:23:32 we haven't worked that out yet, I think that's the topic we need to discuss this week if I remember the meeting schedule correctly
16:23:40 mnaser: we don't know yet, there's a meeting for this later in June (see http://wiki.openstack.org/Meetings/MeteringAgenda)
16:23:49 #topic Open discussion
16:24:11 free fight time
16:24:22 dhellmann: experimental branches then?
16:24:28 we've had some trouble with sharing experimental code through gerrit because of the rules about test coverage
16:24:35 I see. I can try and figure that out, because we currently have a full collector service that uses nova (only works with the XenServer driver) and also exposes data using a REST API + keystone authentication .. I'll try to see how we can bring the Java code to Python and maybe help
16:24:43 We are using RRD to store the data however
16:24:51 personally I think it's too early to be so strict with testing, but I'm OK with it if we can work out another way to share code
16:24:55 maybe just github branches?
16:25:42 I'm worried that if we use separate branches we might end up with messy merges later, esp. with rebasing
16:25:59 mnaser: if you can share your experience with us we'd be glad indeed
16:26:17 +1, it would be great to hear about your experiences with that
16:26:20 dhellmann: I'm on your side on this but heh…
16:26:37 I've already lost much time because of the merges and rebases I had to do
16:26:39 jd___: I'll be looking forward to future meetings, I'll be adding them to my calendar
16:26:45 like I said, I'm OK with keeping the gerrit code "clean" and tested, but -- right
16:26:50 mnaser: great, thanks!
16:26:59 mnaser: good!
16:27:43 jd___, maybe when we agree we like an experimental branch we rebase it and submit it, then delete the branch on github?
16:28:08 I guess we can just leave all of the experimental branches out there
16:28:26 the problem is when we base code on something experimental
16:28:27 I don't know the best way to handle it
16:28:33 right
16:28:42 I mean rebasing on something that has been rebased is a nightmare, even with git
16:28:52 well, maybe we're reaching a point where this problem isn't going to be so severe
16:29:01 dhellmann: this is what I think
16:29:08 it has been a problem for one or two big commits
16:30:05 yeah, true
16:30:05 but the current architecture seems solid enough for now
16:30:16 I don't think we'll encounter the problem again
16:30:28 and if we think it might come up, we can have more discussions on the mailing list
16:30:35 yep
16:30:38 I wasn't looking for something formal, just some ideas
16:30:48 ok, I'm happy with that
16:30:53 what else do we have?
16:30:54 :)
16:31:06 Does the project share the main OpenStack mailing list or does it have a specific one? If it's the main one, are there any tags in the subjects so I can set up a filter?
16:31:19 mnaser: [metering] on the main list
16:31:21 we use the main mailing list and messages are tagged with [metering] in the subject
16:31:22 dhellmann: do you have an idea of what you'll work on in the next few days, if you work on this?
16:31:28 Perfect, thank you.
16:31:53 I am going to work on more notification converters
16:32:09 I have the branch with plugins for the polling agent, too
16:32:20 ok
16:32:30 I've pushed a plugin for floating IPs today
16:32:48 our alpha 1 period ends next week and I need to try to get at least a branch going that logs events to the console, if not sending metering messages in some format
16:32:56 excellent, I'll have a look after lunch
16:33:10 ok
16:33:26 the topic for next week is "API message format" but I'm going to just do something simple for now and plan to change it later if we don't like what I put together :-)
16:33:48 good :)
16:34:22 how do we want to track to-do items? bugs in launchpad?
16:34:32 I feel like we're at the point where we could open some tickets for things like these plugins
16:34:33 sounds like a good idea
16:34:39 yep
16:34:53 #action jd___ open tickets in launchpad for plugins, etc…
16:35:05 I'll do that so we can assign tickets and not duplicate the work
16:35:10 good plan
16:35:25 Just as a suggestion, would it be good to propose a nova API extension that provides metrics that Ceilometer can consume (and each driver can write their own code to provide those metrics)?
16:35:56 that's more or less what the plugins are for
16:36:16 I see
16:36:19 you can add code to the agent that polls, and the results are published so the collector can see them
16:36:35 and you can add listeners in the collector for events like notifications
16:36:55 one goal was to reduce the number of changes needed within the other projects
16:37:14 we may need some, eventually, but we want as few as possible
16:37:36 jd___, we need tickets to have the collector listen for notifications from services other than nova
16:37:39 I see, so a dependency on going through another project is not something that is preferred in this case?
16:37:59 right, we only want to push changes into the other projects if there is no way to avoid it
16:38:17 #action jd___ open ticket to have the collector listen for notifications from services other than nova
16:38:27 and we want those things to be as general as possible (so, having them send general notifications is OK but they should not send metering data directly)
16:38:28 dhellmann: you mean like glance I guess?
16:38:35 glance and quantum at least
16:38:43 maybe swift?
16:38:52 we might need to poll swift, I'm not sure
16:39:05 I see, so for example, if we have some nova-compute nodes that are running XenServer, would we have to add them all individually to Ceilometer (forgive my silly questions)
16:39:17 basically, for each counter we identified, figure out which service has the data and make sure we are listening to its notification stream
16:39:20 I don't know quantum
16:39:27 swift does not use RPC
16:39:42 mnaser, no need to apologize for asking questions!
16:39:58 quantum is the new networking stuff
16:40:05 I feel like they're basic questions about the infrastructure of it so sorta "go read the arch docs" :)
16:40:16 dhellmann: I know what it is but not how it works ;)
16:40:27 mnaser, you might want to read over http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1 but we can still discuss here
16:40:32 ah, ok, jd___
16:40:37 I don't either :-_
16:40:39 :-)
16:40:40 lol
16:40:51 anyway quantum is out of scope for now
16:41:01 ok. I'm going to need it, but I can track that myself.
16:41:10 your call ;)
16:41:35 if you do more than what is planned it's good too I guess! :)
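A minimal sketch of the two extension points described above: an agent-side pollster that gathers counters per instance, and a collector-side handler that turns service notifications into counters. The class names, method signatures, event types, and the manager.inspector helper are assumptions for illustration; the real plugin interface was still being worked out at the time of this meeting.

```python
# Hypothetical interfaces only: names, signatures, and fields are
# illustrative, not the real ceilometer plugin API.

class ComputePollster(object):
    """Agent-side plugin: called periodically for each instance on the
    compute node; the agent publishes whatever counters it returns."""

    def get_counters(self, manager, instance):
        raise NotImplementedError()


class CPUPollster(ComputePollster):
    """Example pollster, e.g. backed by libvirt or the XenServer API."""

    def get_counters(self, manager, instance):
        # manager.inspector is a hypothetical per-hypervisor helper.
        cpu_time = manager.inspector.get_cpu_time(instance)
        yield {'name': 'cpu', 'type': 'cumulative',
               'volume': cpu_time, 'resource_id': instance['uuid']}


class InstanceNotificationHandler(object):
    """Collector-side plugin: converts a service notification
    (nova, glance, quantum, ...) into metering counters."""

    event_types = ['compute.instance.create.end',
                   'compute.instance.delete.end']

    def process_notification(self, notification):
        payload = notification['payload']
        yield {'name': 'instance', 'type': 'delta', 'volume': 1,
               'resource_id': payload.get('instance_id')}
```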
16:41:47 mnaser: a ceilometer agent runs on the compute node and polls the hypervisor for details through libvirt (or the other drivers) and we also catch notifications for events like creating or deleting instances
16:42:01 Gotcha, I see, I figured as much when reading.
16:42:18 jd___ we also need tickets for adding polling for hypervisor drivers that do not use libvirt
16:42:29 In that case, I can help write up the XenServer plugins/poller, as I already have most of the work done for it to be honest.
16:42:37 yeah, maybe mnaser will be able to help on that :)
16:42:38 excellent!
16:42:48 #action jd___ we also need tickets for adding polling for hypervisor drivers that do not use libvirt
16:42:49 that worked out nicely
16:43:15 mnaser: when the ticket is opened, feel free to at least describe what technique you used to poll info for XenServer
16:43:23 it'd be awesome
16:43:36 jd___: Will do, there are numerous ways as well, so I'll bring up the options and then a decision can be made
16:43:44 mnaser: perfect
16:43:48 sounds like a good approach
16:43:59 that's all I have for this week. is there anything else to discuss?
16:44:13 nothing for me
16:44:31 I'll close the meeting, mnaser, feel free to join #openstack-metering if you want to discuss more with us
16:44:39 In there :)
16:44:47 good meeting jd___
16:44:51 and mnaser
16:44:57 thanks guys!
16:45:00 #endmeeting