16:03:22 #startmeeting
16:03:23 Meeting started Wed Jun 20 16:03:22 2012 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:24 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:03:26 A similar cinder-manage command would, I think, be useful - it is safer than manually grubbing in the database, and requiring root to use it (to get the db creds from the .conf file) makes it fairly safe
16:03:48 here too
16:03:48 DuncanT: Absolutely
16:03:52 dricco: Hello
16:04:00 #topic status update
16:04:29 For those that saw my email about testing out cinder with my github, you may have run into some problems :(
16:04:40 It seems glance isn't getting cloned properly
16:05:19 For those that tried it (clayg), I apologize for the confusion
16:05:29 Anyway...
16:05:51 The good news is euca- commands are working now
16:06:15 I've added tests to devstack for attach/detach, snapshot and delete
16:06:31 I think we're pretty close to being ready to go out into the wild
16:07:02 I'm pushing to make Cinder the default in devstack after the F2 release
16:07:26 After that we have a lot of work to do in terms of features
16:07:59 Anybody else tried running anything, or have any updates on current status?
16:08:08 I won't speak for sleepsonthefloor :)
16:08:36 jgriffith: finishing up moving the snapshot api changes to an extension
16:08:43 * heckj wanders in to lurk
16:08:54 may have to update cinder client as well
16:09:11 after that, it would seem that nova tests are the main task to be done?
16:09:26 sleepsonthefloor: Sounds good... I had some updates in the client, but moving those to an extension may mean another turn on them
16:09:36 sleepsonthefloor: Yes, nova tests... I'm still working on those
16:09:40 We've got a working devstack instance going here, including working euca commands...
not got our driver port working on it yet, but working on it
16:09:52 DuncanT: cool
16:10:35 sleepsonthefloor: I almost have a fake cinder service finished
16:10:39 In terms of the merge timeline, the relevant keystone changes are in
16:10:45 I also put up https://github.com/Funcan/nova-volume-end2end though it needs some tweaks to work with a recent euca2ools library
16:11:27 sleepsonthefloor: So what I was thinking was get everything merged, saving Nova for last
16:11:55 For devstack, merge everything with n-vol as default, then submit a patch to switch the default after that
16:12:40 any thoughts from anybody else on the ordering there?
16:13:27 jgriffith: makes sense to me. once the snapshot api extension and related client work are done, cinder and cinder client should be ready. hopefully those land today/tomorrow
16:13:55 sleepsonthefloor: Great... and I'm hoping for close to the same timeline for nova
16:14:14 The nova tests may still have some holes, but they should be sufficient by then
16:14:32 DuncanT: Do you want to talk a bit about your end2end tests?
16:15:32 clayg: Make up your mind :)
16:15:33 Sure, they (currently) use the euca api to create instances and volumes, write data to them, move the volume to different instances and check the result
16:15:57 jgriffith: i can't stay, i just wanted to pop in and say I'm playing with cinder (got all my issues worked out last night)
16:15:58 There is a similar test for snapshots there
16:16:10 I'll try to watch for messages that show up in yellow
16:16:47 DuncanT: very cool... any thoughts about moving this into devstack, or working with the ci team to do something, i.e. a gate test?
16:16:50 I'm going to try and find bugs in cinder, track down all the patches so I can reproduce cinder in a multinode xen/nova setup - and maybe write some unittests!
16:17:03 clayg: Awesome!!
16:17:30 Certainly we can look at that, yeah.
Devstack integration should be fairly easy
16:17:43 DuncanT: A bit of it is already there in euca.sh, but I like the actual data transfer addition
16:17:58 Also intend to have a version for the nova/cinder api soon
16:18:08 I need to step away for one minute... sorry, keep talking, I'll catch up
16:18:26 Never mind... I'm good
16:18:55 DuncanT: Sounds great, keep us updated
16:19:28 DuncanT: Come across any issues thus far, or are things looking pretty good?
16:19:57 Not found anything yet
16:20:19 Ok, sounds good...
16:20:30 Will be hitting it a bit harder as I get our driver wired up to it
16:20:41 One thing we need to think about is the whole specifying-device-mount-point issue...
16:21:03 AFAICT this doesn't really work in n-vol or cinder
16:22:07 Ok... anything else for status updates? Questions?
16:22:25 Under KVM the mount point specification is entirely bogus with nova today... and in a difficult-to-explain-to-customers way
16:22:48 DuncanT: exactly, I'd like to figure out a way to address that
16:23:06 DuncanT: not critical right now, but I want it on our radar screen
16:24:03 Ok... onward
16:24:10 #topic hack day in SF
16:24:39 So I don't know if everybody here saw the email from Joshua M about a cinder hack day in SF?
16:25:03 DuncanT: I know this probably isn't realistic for you :(
16:25:30 I was wondering if any folks from inktank, redhat etc. were lurking about today that were on that list/interested
16:25:51 Also renuka, you're out that way, would you be interested in such an event?
16:25:58 yea im in
16:26:00 jgriffith: when is the hackday?
16:26:40 heckj: Well, he sent out the email last week and was talking yesterday timeframe :)
16:27:05 heh
16:27:07 I thought it would be good to bring it up here this week and gauge interest before I get on a plane :)
16:27:35 I'm in Seattle, but am kind of interested. Wouldn't be able to contribute a whole lot right off the bat...
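DuncanT's end2end flow (create instances and volumes, write data, move the volume between instances, check the result) can be sketched as a toy in-memory simulation. This is illustrative only, not the actual nova-volume-end2end code: the class and function names and the instance names are made up, and the real tests drive the euca API against a live cloud rather than Python objects.

```python
import hashlib

# Toy in-memory simulation of the end2end check described above:
# write data to a volume on one instance, move the volume to another
# instance, and verify the data survived. The real tests do this via
# euca commands against live instances; all names here are hypothetical.

class FakeVolume:
    def __init__(self):
        self.data = b""
        self.attached_to = None

def attach(vol, instance_id):
    if vol.attached_to is not None:
        raise RuntimeError("volume already attached to %s" % vol.attached_to)
    vol.attached_to = instance_id

def detach(vol):
    vol.attached_to = None

def checksum(vol):
    # the real test compares written vs. read data; a digest stands in here
    return hashlib.sha256(vol.data).hexdigest()

vol = FakeVolume()
attach(vol, "instance-a")            # hypothetical instance name
vol.data = b"payload written on instance-a"
expected = checksum(vol)
detach(vol)
attach(vol, "instance-b")            # move the volume to a second instance
assert checksum(vol) == expected     # data must survive the move
```

The same shape extends naturally to the snapshot variant DuncanT mentions: snapshot after the write, restore on the second instance, compare checksums.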
16:27:41 Also, one question I had was whether it would be more productive to wait until the basic integration was done and out there
16:27:45 I'd love to come along but I'm not sure it is practical :-(
16:28:09 DuncanT: Yeah, I figured... although maybe we could all come to Ireland? I've wanted to visit for a while now.
16:28:24 integration might be a good idea
16:28:34 That sounds like a great plan to me... it's a lovely place
16:28:52 renuka: Yeah, I think folks could contribute a lot more once there's a stable foundation
16:29:39 So let's put it this way... if we set such a thing up, say in July after F2, would there be enough interest from folks here to show up and get together?
16:29:42 * heckj is happy to wait and learn
16:30:14 i'm in whichever way
16:31:02 Ok, I think I'll try and coordinate something. There are enough people in the area that it could be fun/useful
16:31:30 I'll keep folks posted, but I don't think it makes as much sense to do it before we're integrated
16:31:55 I'd like to get some things like Boot From Volume knocked out in that type of setting
16:31:59 :)
16:32:33 speaking of which... boot from volume now works for xen :)
16:32:40 code in review
16:33:00 renuka: Awesome!
16:33:14 renuka: Which review was that? I've looked at a couple from you lately
16:33:37 https://review.openstack.org/#/c/8156/ It's mostly in the compute space
16:33:55 but i guess it is of interest to volume people :)
16:34:06 renuka: very much so :)
16:35:01 Ok...
16:35:13 We've diverged a bit
16:35:19 #topic open discussion
16:35:33 Fire away!
16:36:15 ha! I'm going to get off easy this week!
16:36:38 One question related to a recent mailing list thread
16:36:45 DuncanT: shoot
16:37:04 Any thoughts on whether delete should work on states other than available & error?
16:37:53 Deleting an 'attached' volume seems odd...
you could do an implicit detach, but I suspect anybody trying to delete an attached disk has typo'd
16:38:01 DuncanT: my thought is no, however there's the thing about being able to force a state
16:38:36 There are ways to get stuck in a state where the nova database thinks the device is attached but nothing else does
16:38:57 nova-manage force-attach can be useful there sometimes, but an end-user can't run that
16:39:22 Similarly it is possible to get stuck in creating or attaching with no way out as the end user
16:39:32 DuncanT: Yes, so we need a way to "fix" that, but in terms of what you can delete I think it should be dependent on the states you mention
16:39:50 Also, that brings up the question of deleting volumes with snapshots again :)
16:40:36 Yeah... currently our code lets you, but then the snapshots become useless because snapshots have no provider location :-(
16:40:40 DuncanT: We'll need to put together a blueprint for adding something like "correct-state" to the api/client
16:41:17 We've certainly got usecases where users would rather not pay for both the volume and the snapshot when all they need is the snapshot
16:41:33 DuncanT: Yeah, I think we may change the definition and implementation of snapshots in the future based on past conversations
16:41:57 Sounds good to me
16:42:03 DuncanT: that seems to be a matter of billing policy
16:42:49 i think some hypervisors might be storing snapshots as deltas... that is the reason they don't allow deleting the original volume
16:43:05 renuka: It's also a matter of conceptual model... if the user only wants to keep a golden snapshot, why shouldn't they be able to delete the volume that the snapshot was created from?
16:43:27 I realise what semantics a given implementation supports can vary
16:43:30 DuncanT: renuka: I seem to recall we started this topic at length a while back :)
16:43:43 I proposed the "golden copy" model be a clone
16:44:28 jgriffith: I don't remember coming up with a model mapped onto an api...
I do remember the discussion
16:44:57 Should I send out an email about it and give people time to think?
16:45:11 DuncanT: no, we didn't resolve it
16:45:47 DuncanT: Probably a good idea, but it's not something I intend to try and solve in the next week or two
16:45:56 DuncanT: Just to be fair/honest
16:46:16 Although if others have time/bandwidth to tackle it and are ready, that's another story
16:46:48 Oh, I don't expect a speedy resolution, I just want to keep the discussion alive so we don't get stuck with not changing anything due to lack of agreement on where we'd like to go
16:47:02 My concern would be that changes in how we do things like BFV etc. may impact it, so I want to be able to really focus everyone with a big picture in place
16:47:19 DuncanT: I agree with that completely
16:48:38 DuncanT: sending out something to the ML is probably a good idea; if you want to run with it, sounds good to me
16:49:11 Ok, I'll put something together
16:50:05 Anybody else have anything?
16:51:10 Next week should be exciting, we'll hit our first big milestone :)
16:51:28 Alrighty, thanks everyone!
16:51:33 #endmeeting
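The delete-semantics thread from open discussion boils down to two rules: normal deletes only from 'available' or 'error' (with some privileged "correct-state" escape hatch for stuck volumes), and a snapshot-dependency check whose answer depends on whether the backend stores snapshots as deltas. The sketch below is purely illustrative of that discussion, not actual cinder code: the function names, the `force` flag, and the dict shape are all assumptions; only the status strings follow the conventions used in the meeting.

```python
# Sketch of the delete rules discussed in open discussion. Status names
# ('available', 'error', 'in-use', 'creating', 'attaching') follow the
# meeting's usage; everything else here is a hypothetical illustration.

DELETABLE_STATES = {"available", "error"}

def can_delete(status, force=False):
    """Normal deletes only from 'available' or 'error'; a privileged
    force flag (the hypothetical "correct-state" idea) covers volumes
    stuck in 'creating' or 'attaching' with no way out for the user."""
    return force or status in DELETABLE_STATES

def check_snapshot_dependency(volume_id, snapshots_by_volume, snapshots_are_deltas):
    """If the backend stores snapshots as deltas against the volume,
    deleting the parent would orphan them; backends with independent
    snapshots can allow the 'keep only the golden snapshot' model.
    snapshots_by_volume maps volume id -> list of snapshot ids."""
    deps = snapshots_by_volume.get(volume_id, [])
    if deps and snapshots_are_deltas:
        raise ValueError("volume %s has dependent snapshots: %s" % (volume_id, deps))
```

Whether deleting a parent volume should ever be allowed when snapshots exist is exactly the per-backend semantic variation DuncanT and renuka raise; a policy function like this just makes the choice explicit.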