16:00:18 #startmeeting
16:00:19 Meeting started Wed Jun 27 16:00:18 2012 UTC. The chair is jgriffith_. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:33 #topic status update
16:00:46 So for those that don't know... we're really close
16:00:49 hi
16:00:57 looks like one of the big two landed?
16:01:04 clayg: Yep
16:01:21 So the only one that's really holding things up at this point is mine :(
16:01:28 heh
16:01:36 https://review.openstack.org/#/c/8073/
16:02:39 hello
16:02:57 hrmmm... ec2 tests?
16:03:21 clayg: yes
16:03:39 I'm stuck on terminate instances when a volume is attached
16:03:50 There's a failing rpc call
16:03:52 do you know what needs to be fixed, or do you need help tracking it down?
16:04:26 clayg: I could use help tracking it down
16:04:44 jgriffith_: so is the detach not happening on terminate?
16:04:58 clayg: I have it narrowed down to compute/api and I think it's calling volume manager somewhere, but it's like spaghetti
16:05:17 rnirmal: I suspect it's actually the detach call
16:05:54 jgriffith_: steps to reproduce?
16:06:43 sleepsonthefloor: run_tests.sh nova.tests.api.ec2.test_cinder_cloud:CinderCloudTestCase.test_stop_start_with_volume
16:07:20 or .....:CinderCloudTestCase.test_stop_with_attached_volume
16:08:27 So some background on this test....
16:08:59 In the setup it stubs out all of the api calls to volume/fake.API
16:09:27 I think we need both tests still since folks will be using nova-volume as-is for a while
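
For anyone picking this up before the hack day: the setUp stubbing described above would look roughly like the sketch below. This is a conceptual illustration only; the fake module and the exact stub call are assumptions, not quotes from the patch under review.

    # Conceptual sketch of the test setup described above: every volume
    # API call the EC2 cloud controller makes is replaced with a fake.
    # The fake_volume module name here is hypothetical.
    from nova import test
    from nova import volume
    from nova.tests import fake_volume  # hypothetical fake API module

    class CinderCloudTestCase(test.TestCase):
        def setUp(self):
            super(CinderCloudTestCase, self).setUp()
            # stubout-style replacement of the real volume API
            self.stubs.Set(volume, 'API', fake_volume.API)
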
16:10:45 Ok... so if anybody has time to look at that and help me figure out what's up, that would be great
16:11:01 Once we get this change set pushed we're pretty much there for F2, I believe
16:11:26 #topic cinder hack day
16:11:49 So Piston Cloud has been kind enough to host a Cinder Hack Day in SF next week
16:12:07 It will be at the Piston Cloud offices in San Francisco on Tuesday, July 3rd
16:12:29 that's cool
16:12:32 Anybody who wants to write some Cinder code is welcome and encouraged to attend
16:12:44 clayg: Yeah, it should be really cool
16:13:06 The idea is to generate some interest, get some new recruits and build a bit of momentum
16:13:32 There are some folks from Rising Tide, RackSpace and others that will be there
16:14:07 Depending on how this goes, we may want to think about similar events in the future
16:14:22 Even doing it via IRC remotely, maybe?
16:15:01 it'd be a lot easier to get some Ceph devs there if it were on IRC, I'd like to see that
16:15:17 rturk: Understood
16:15:23 (since we're in LA mostly)
16:15:45 There are timezone considerations for us, but it sounds like a good idea
16:16:01 yeah
16:16:13 So would there be interest in having an IRC channel up for the event next week?
16:16:21 absolutely!
16:16:29 yup
16:16:34 Ok, I'll see what I can do and keep folks updated
16:16:38 cool
16:16:51 So this assumes that you'll be setting the day aside to work on Cinder code
16:17:18 agreed - just saves on the travel time
16:17:36 rturk: yep, sounds good
16:17:51 #action jgriffith to set up IRC channel for Cinder Hack Day next week
16:18:22 I'll send an email out with info on this
16:18:51 #topic plans for existing nova volume code
16:19:14 Ok, so this is something that maybe isn't as clear to folks as it should be
16:19:32 I just want to go through it as a team here to make sure we're all singing the same song
16:20:04 So the goal should be to have Cinder 'ready' for F2, and in OpenStack as the default volume service for the Folsom release
16:20:14 Existing nova volume code will NOT go away yet
16:20:23 It will still be there for folks that rely on it
16:20:40 But I would like to see it go away by the G release, depending on how things go
16:20:57 Also, I do have SERIOUS concerns about maintaining parity between both services
16:21:16 My thought is Cinder moves forward, nova-v stays where it is
16:21:28 Anybody have different ideas/thoughts in mind?
16:21:38 Or concerns, objections...
16:22:09 Makes sense
16:22:29 sounds reasonable
16:23:00 Ok... there was some misconception I think that we were going to completely rip out nova-v for Folsom
16:23:14 This caused some stress I think, and rightfully so
16:23:54 #topic priorities for Cinder
16:24:15 I wanted to get some input from folks on what to attack first for Cinder after F2
16:24:32 in particular this might be good for laying out the hack day
16:24:52 I have some ideas of my own (boot from volume)
16:25:04 But wanted to open up for other ideas/input here
16:25:10 BTW...
16:25:27 When I say BFV I'm referring to the Ceph document as the model
16:25:40 If you haven't seen this I'll dig up a copy and send it out
16:26:16 I'd like to see all the status stuff cleaned up
16:26:34 rnirmal: details?
16:26:39 things like when is a delete possible, an attach/detach etc... there have been several bugs around that
16:26:52 rnirmal: Oh yes... EXCELLENT point
16:27:01 just status of the volumes in general.. it's a mess right now
16:27:15 jgriffith_: sorry, had a walk-up; I think nova should adopt a "no new features in volumes" policy at f-2, and have _major_ deprecation warnings pending removal in g-1 or g-2
16:27:33 ^ or thereabouts
16:27:33 clayg: Error: "or" is not a valid command.
16:27:49 ubirtbot: I hate you
16:27:51 clayg: I would agree with that
16:27:56 clayg: I think that's what vishy had agreed on early on... only bug fixes go back into nova-volumes
16:28:09 clayg: You and virtbot have quite a relationship, I've noticed!
16:28:14 meh, "security" fixes - bugs are "expected behavior" ;)
16:28:22 haha
16:29:07 rnirmal: So back to your point
16:29:09 ok, sorry that's it from the peanut gallery, I'll try to look at that cloud controller test
16:29:19 clayg: NP
16:29:35 I agree, the bugs will be the priority
16:29:59 yeah, simple cleanup like adding a status class or something... we refer to status everywhere as just string literals
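
As a concrete illustration of the cleanup suggested at 16:29:59, something like the following could replace the scattered string literals. The class name and helper are hypothetical, though the status strings themselves are the ones the code already uses.

    # Hypothetical constants class gathering the volume status strings
    # that are currently repeated as literals throughout the code.
    class VolumeStatus(object):
        CREATING = 'creating'
        AVAILABLE = 'available'
        ATTACHING = 'attaching'
        IN_USE = 'in-use'
        DELETING = 'deleting'
        ERROR = 'error'
        ERROR_DELETING = 'error_deleting'

    # Checks like "when is a delete possible" then live in one place:
    def can_delete(volume):
        return volume['status'] in (VolumeStatus.AVAILABLE,
                                    VolumeStatus.ERROR)
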
16:30:09 I think the only "important" one missing from the list right now is the snapshot one (I'll have to look it up again)
16:30:42 rnirmal: Agreed
16:30:46 and this one:
16:30:49 #link https://bugs.launchpad.net/nova/+bug/1008866
16:30:51 Launchpad bug 1008866 in nova "Creating volume from snapshot on real/production/multicluster installation of OpenStack is broken" [Medium,Triaged]
16:30:56 urm... which "list"
16:31:16 clayg: Sorry...
16:31:20 #linkk https://bugs.launchpad.net/cinder/+bugs
16:31:40 Hmmm... I thought virtbot would yell at me for typing poorly
16:31:44 jgriffith_: yup, that one's a much bigger one for the default driver.... I don't think that's a problem for the other drivers
16:32:56 rnirmal: I definitely place it high on the priority list
16:33:17 Ok... so I think there's more than enough work laid out just in the defects/enhancements
16:33:41 delete issues are going to be the biggest win IMO
16:34:02 anything folks would like to add at this point?
16:34:18 Or need some clarification on something?
16:34:33 one more thing... there's going to be folks around to review and approve changes for the hack day, right?
16:34:36 jgriffith_: everything I care about is in that doc you mentioned
16:35:15 rturk: :) You and me both...
16:35:21 rturk: To a point
16:35:42 rturk: We'll never get the stuff in the doc to work well without fixing the items rnirmal mentioned
16:36:03 nod
16:36:17 Ok... I guess this is a good time for open discussion?
16:36:20 jgriffith_: one more.. I seem to have a huge list today... are there any plans to work on the docs?
16:36:33 rnirmal: Thanks for reminding me!!!
16:36:46 I chatted with annegentle yesterday about that very topic
16:37:05 I have a list of todos regarding the wiki pages and doc updates
16:37:32 I've targeted F3 and will be opening up entries in the docs project for them
16:38:00 #topic docs
16:38:09 Might as well have a docs topic, eh...?
16:38:14 sure
16:38:24 so what you mentioned is dev docs + api docs, right?
16:38:30 rnirmal: correct
16:38:54 rnirmal: with an emphasis on api, config and maintenance docs as the priority
16:39:09 I would LOVE to get some good dev docs out there
16:39:26 but right now would prioritize people being able to use/run Cinder
16:39:30 I think we are at a place to generate the sphinx docs
16:39:40 not sure if it has any content :)
16:40:29 rnirmal: Sounds like a good thing for you to volunteer for ;)
16:41:11 yeah, I'll do some
16:41:23 :)
16:41:27 Sorry, I couldn't resist
16:41:47 So here's what I have as a list of official docs:
16:41:53 1. migration (Essex to Folsom)
16:41:53 2. Initial install/config
16:41:53 3. API usage
16:41:58 4. Cinderclient
16:42:00 5. some sort of design description/review
16:42:32 Anybody see anything blatantly missing?
16:43:06 nope, looks good
16:43:18 excellent....
16:43:31 Ok... now I think that's all I had
16:43:36 #topic open discussion
16:44:00 anybody have anything?
16:44:07 so looks like all the devstack changes got merged in
16:44:08 dhellmann: and I would like to talk about metering (ceilometer)
16:44:29 nijaba: Yes!! Sorry, I almost forgot somehow
16:44:42 Just as an update for everybody....
16:45:01 I talked with the ceilometer team a bit last week about metering in Cinder
16:45:26 Particularly usage/create/delete stats on a per-tenant basis
16:45:39 nijaba: I'll let you and dhellmann elaborate...
16:45:54 so, what we need to fetch is twofold
16:46:11 1/ we need to collect usage per tenant on a regular basis
16:46:21 it seems that the best approach for this is to use the api
16:46:36 but that leaves the question of how to authenticate ceilometer
16:47:19 since our architecture offers local agents, we are proposing to deploy one on the same hosts where cinder is deployed
16:47:24 thus making local calls
16:47:53 nijaba: so that's going to be polling the cinder api
16:47:58 2/ we need to capture create/modify/delete volume events
16:48:07 rnirmal: yes, that would be the idea
16:48:33 for 2/ it seems that you are not currently generating any events on any queue
16:48:49 so we would like to test the idea of us patching cinder to do so
16:49:03 yeah, we had some events added to nova-volumes... but the review got abandoned in cinder
16:49:11 that's something we should be able to get back in
16:49:36 nijaba: this is similar to notification events in nova
16:49:41 or essentially the same
16:49:56 rnirmal: yes, very much
16:50:26 we would just be proposing some patches to do so, so that we can capture them from our agent and send them to our collector
16:50:55 rnirmal: but if there are patches already in the works, we would gladly have a look at them :)
16:51:25 nijaba: #link https://review.openstack.org/#/c/7517/
16:51:48 rnirmal: great!
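
For context on 2/: the abandoned nova-volumes change linked above added usage notifications, and re-proposed for cinder it might look roughly like the sketch below, modeled on nova's notifier. The module path, helper name, and payload fields are assumptions here, not part of that review.

    # Rough sketch of emitting create/delete volume notifications so a
    # ceilometer agent can capture them off the queue. Module path and
    # exact signature follow nova's notifier and are assumptions here.
    from cinder.openstack.common.notifier import api as notifier_api

    def notify_usage(context, volume, event_suffix):
        payload = {
            'volume_id': volume['id'],
            'tenant_id': volume['project_id'],
            'size': volume['size'],
            'status': volume['status'],
        }
        notifier_api.notify(context,
                            notifier_api.publisher_id('volume'),
                            'volume.%s' % event_suffix,  # e.g. 'create.end'
                            notifier_api.INFO,
                            payload)
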
16:52:08 so I take it you guys would see 1 and 2 from a good perspective?
16:52:26 nijaba: +1
16:52:29 I'm ok with 2... I'm a little concerned about the polling in 1
16:52:41 rnirmal: we are very open to other suggestions
16:52:54 nijaba: I'm assuming the frequency of the polling is configurable....
16:52:58 I haven't followed the ceilometer discussions
16:53:02 jgriffith_: it is, yes
16:53:15 nijaba: Then I'm ok with it, as long as it's user configurable
16:53:17 on a per-agent basis
16:53:24 so I don't think I can provide any valuable input without understanding that
16:53:48 rnirmal: what would you need?
16:54:02 nijaba: so that would be getting usage for all the tenants, right?
16:54:11 rnirmal: yes
16:54:31 so this would have to be some sort of an admin-only api call
16:54:35 the idea is to be able to collect all information so that a billing engine can be fed from a single source
16:54:44 https://launchpad.net/ceilometer
16:55:16 rnirmal: yes, admin-type call, so that we are not exposing other users' information
16:55:26 ok, that sounds good
16:56:03 authentication would be done by the fact that our agent is local, if that's good enough
16:56:36 Personally I'd prefer a second, admin-only endpoint
16:56:50 DuncanT: +100 on that
16:56:57 Can do whatever auth people want then (by port, wsgi plugin, firewall, whatever)
16:57:08 DuncanT: +1
16:57:16 DuncanT: sounds indeed much better
16:57:33 DuncanT: so that would be a second patch for us to propose, I guess?
16:57:39 DuncanT: it's easy to do now.. just duplicate the service and run it on a different endpoint with only the required extensions
16:57:45 but not clean
16:57:51 * dhellmann is sorry he's late
16:58:04 a second admin API does make more sense
16:58:31 rnirmal: Sounds like we can get something stood up for nijaba et al. to start using, and change the internals ourselves later, then?
16:58:44 DuncanT: yeah, that's what I was thinking
16:58:47 are the openstack projects moving away from communicating internally via the RPC mechanism, or did cinder just decide not to implement that? I'm not sure about the history.
16:59:01 for now we can just have an extension with admin_only policies for them to use
16:59:09 dhellmann: cinder just didn't do it (yet)
16:59:21 jgriffith_, ok
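
A minimal sketch of the interim approach described at 16:59:01 (an extension gated to admins so one tenant's usage is never exposed to another, the concern raised at 16:55:16). The controller, context key, and aggregation are illustrative assumptions, not an existing API.

    # Hypothetical admin-only usage extension for the metering agent to
    # poll; all names here are illustrative.
    import webob.exc

    from cinder import db

    class VolumeUsageController(object):
        def index(self, req):
            context = req.environ['cinder.context']
            # Reject non-admin callers outright.
            if not context.is_admin:
                raise webob.exc.HTTPForbidden()
            # Aggregate volume size per tenant from the volumes table.
            usage = {}
            for vol in db.volume_get_all(context):
                usage[vol['project_id']] = (usage.get(vol['project_id'], 0)
                                            + vol['size'])
            return {'volume_usage': usage}
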
17:00:18 btw, we are hanging out in #openstack-metering (as well as #openstack) if you guys have more questions
17:00:35 great, thanks nijaba
17:00:42 thank you!
17:00:49 thanks for your support, everyone!
17:00:57 +1
17:01:22 Thanks dhellmann and nijaba
17:01:38 By the way... for folks that decide to try the tests I mentioned earlier....
17:01:54 There's a merge problem you'll need to fix in nova/tests/api/ec2/test_cinder_cloud.py
17:02:54 s/from nova import rpc/from nova.openstack.common import rpc/
17:03:04 Sorry about that....
17:03:26 Alright... anybody have anything else?
17:03:54 if anything comes up, or you have questions about hack day next week, hit me on IRC
17:04:11 Somebody "stole" my nick so you can find me as jgriffith_ or jdg
17:04:22 jgriffith_: will do, thx
17:04:37 Ok... thanks everyone!!!!
17:04:41 #endmeeting
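
For anyone trying the tests mentioned above: the s/// fix given at 17:02:54 is this one-line import change in nova/tests/api/ec2/test_cinder_cloud.py:

    # broken import left over from the merge:
    from nova import rpc
    # fixed import (rpc now lives under openstack.common):
    from nova.openstack.common import rpc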