16:01:19 #startmeeting cinder
16:01:20 Meeting started Wed Mar 27 16:01:19 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:23 The meeting name has been set to 'cinder'
16:01:24 hehe :)
16:01:30 eharney: To the rescue!
16:01:45 Ok... again I neglected to update the wiki page
16:01:52 lemme make sure nobody else added anything :)
16:02:17 Nope :)
16:02:23 #topic RC3
16:02:36 So you all likely saw we did a quick turn on an RC3
16:02:49 hopefully that's it for Grizzly release candidates
16:03:02 we found a few bugs this week that we added there, but nothing too serious
16:03:14 mostly the quotas stuff (thanks guitarzan)
16:03:32 I also threw in the fix for checking the tenant in the context for quota updates
16:04:04 I didn't test this as much as I would've liked, but it merged cleanly with our other changes and in a quick test everything looked good
16:04:25 Anybody have anything regarding milestone-proposed/RC they want to mention?
16:05:47 jgriffith: the NetApp bug is not an easy fix
16:05:55 I will wait until Havana to fix it
16:06:14 bswartz: sounds good... go ahead and target it in Launchpad if you would please (H1)
16:06:20 I've just got more client bugs
16:06:21 oh ok
16:06:26 DuncanT: :)
16:06:33 So FYI the quota changes:
16:07:18 https://review.openstack.org/#/c/25490/
16:07:28 https://review.openstack.org/#/c/25326
16:07:39 https://review.openstack.org/#/c/25251
16:07:43 phew
16:08:00 jgriffith: ah, that is exactly the issue I've been working on. lol
16:08:05 jgriffith: I'm not sure I can set the target milestone -- it's not clear in the bug UI
16:08:10 For my next trick: figure out how to cherry-pick all of those back into one commit for Folsom
16:08:24 bswartz: No worries, I'll have a look at it
16:08:28 bswartz: bug number?
16:09:16 j_king: :) Hopefully you haven't spent a ton of time on it
16:09:26 j_king: sorry about that
16:09:35 initially suo said he wouldn't be able to do it
16:09:42 then it came in last week and got buried
16:09:46 1139129
16:10:14 bswartz: done
16:10:26 jgriffith: reviewing it now and noticing that we pretty much made the exact same changes. no worries.
16:10:35 my patch has been sitting as a draft
16:10:38 bswartz: I'm wondering if you have permissions to see that "Milestone" column?
16:10:46 and I've been taking too long at it
16:10:52 j_king: sorry about that
16:10:57 jgriffith: no worries. :)
16:11:15 yeah, I can see it, I just can't edit
16:11:28 bswartz: ahh... ok
16:11:40 Ok, so in summary... I think we have a pretty good RC
16:12:07 The only core issue still outstanding is secure delete on snapshots
16:12:08 * bswartz is bereft of power on Launchpad
16:12:13 i.e. we're not doing it
16:12:25 but given the Precise bug this may be for the best
16:12:51 and I believe rushi may have had issues on Quantal as well
16:12:53 :(
16:13:22 So at this point if folks can test and document, that would be fantastic
16:13:39 I mostly want to look at installing via packages at this point
16:14:07 make sure we've got things well documented in openstack-docs for things like how to configure multiple back-ends, and of course your respective drivers
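
For the multi-backend documentation point above, a minimal cinder.conf sketch of the Grizzly enabled_backends layout (the backend names, volume groups, and backend-name strings here are illustrative):

    [DEFAULT]
    # each name listed here refers to a config group below
    enabled_backends = lvm-gold,lvm-silver

    [lvm-gold]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes-gold
    volume_backend_name = LVM_iSCSI_GOLD

    [lvm-silver]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes-silver
    volume_backend_name = LVM_iSCSI_SILVER

A volume type carrying a volume_backend_name extra spec then steers creates to the matching backend through the filter scheduler.
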
16:14:12 speaking of which...
16:14:30 https://review.openstack.org/#/c/25490/
16:14:46 I separated the cinder admin docs and driver files
16:15:02 please take a look and make sure I didn't break any of your stuff in the process :)
16:15:51 One other thing I noticed last night...
16:16:10 It seems the sync of oslo-requires nuked eventlet from test-requires
16:16:31 Not an issue regarding release, but if you're trying to do testing and notice something funky there, that's why
16:16:39 I'll push a patch to get that put back in later
16:16:53 jgriffith: I can put in an update to openstack-infra to build the admin docs.
16:16:59 it's still in pip-requires, but it's needed for tests as well
16:17:10 thingee: I have one in review I think, let me dig up the URL
16:17:12 jgriffith: where is the new doc located? So it is not in http://docs.openstack.org/trunk/openstack-compute/admin/content/ch_volumes.html anymore?
16:17:13 thingee: that would be great (since I don't know how to do that anyway) :)
16:17:24 annegentle: perfect :)
16:17:33 xyang_: that's what thingee is referring to: making that happen
16:17:35 :)
16:17:55 thingee: https://review.openstack.org/#/c/25384/ yep, it merged
16:18:12 thingee: next step, edit the index.html in openstack-manuals/www where you want it to be linked
16:18:20 xyang_: it'll likely be moved to http://docs.openstack.org/trunk/openstack-block-storage/admin/content
16:18:29 annegentle: and thingee are my heroes!
16:18:34 thingee: ok. thanks
16:18:48 jgriffith: aw shucks
16:18:53 * jgriffith is very excited to finally have Cinder docs!
16:19:35 Ok... so any questions/concerns on RC and docs?
16:19:51 #topic encryption
16:20:02 Not sure how many folks have been following the discussion on the ML
16:20:17 I have some concerns about how this is planned to be implemented in Nova
16:20:37 particularly for those of us that have or will have back-ends that implement encryption themselves
16:21:09 I talked with Laura the other day and it looks like the design will allow us to specify encryption on volume create
16:21:30 and we can put hooks in to modify which version/implementation is used, so that should be ok
16:21:49 I still don't like it being in Nova, full stop
16:21:52 I still have questions about snapshots and clones; I'm labbing some stuff out with LVM to see how this may or may not work
16:21:56 DuncanT: agreed!
16:22:12 DuncanT: I've objected on the ML as well as personally to Laura
16:22:21 DuncanT: The plan at this point....
16:22:35 I'm just looking for the thread... I'm way behind on the list
16:22:35 I'm going to accept the proposed encryption session for the Cinder track
16:22:45 there's also going to be one on the Nova track
16:23:06 I'm going to work with russellb and hopefully have it so the Nova session isn't on a Cinder day
16:23:15 so we can participate in both sessions and vice versa
16:23:33 jgriffith: +1
16:23:44 DuncanT: the thread has digressed considerably into a discussion on key management
16:24:02 DuncanT: but the proposed patch is up, everyone should take a look
16:24:02 I'll catch up and wade in :-)
16:24:29 I -1'd one patch ages ago, haven't been keeping up since, sorry
16:24:39 DuncanT: no worries
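
On the back-end encryption concern above, a hypothetical Python sketch of the kind of hook jgriffith mentions; every name here is invented for illustration and is not the API from the proposed patch:

    # Hypothetical sketch only; names are invented, not the proposed patch's API.
    def pick_encryptor(driver_stats):
        """Choose a host-side encryption implementation, or skip it entirely
        when the back end reports that it already encrypts at rest."""
        if driver_stats.get('native_encryption'):
            return None        # back end handles it; no Nova-side encryptor
        return 'cryptsetup'    # placeholder selector for a host-side scheme

    # a driver that encrypts natively would advertise it in its stats:
    stats = {'volume_backend_name': 'encrypting-array', 'native_encryption': True}
    assert pick_encryptor(stats) is None
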
16:24:41 jgriffith: what day is Cinder day?
16:24:52 jgriffith: and do you want the Nova one after the Cinder one?
16:24:58 russellb: As it stands now I believe it's still Thursday
16:25:05 ah ok
16:25:13 Nova first it is
16:25:19 russellb: I don't have a strong preference, and I think Cinder being Thursday limits the options a bit
16:25:22 :)
16:25:24 russellb: thanks :)
16:25:28 np
16:25:51 jgriffith: do you have a link to the proposed patch?
16:26:02 xyang_: looking....
16:26:41 jgriffith: annegentle: Sorry I'm late; I was working away on the docs and lost track of time :) I'll have new patches up today for the feedback I got back from Anne.
16:28:33 xyang_: hmm... I've lost the patch :(
16:28:43 xyang_: I'll have to dig through the ML to find it again
16:28:56 jgriffith: I see this one: https://review.openstack.org/#/c/21262/. It is abandoned.
16:29:02 jgriffith: what do you guys think about doing volume rate-limiting in Nova?
16:29:26 winston-d: how do you mean? IO?
16:29:34 jgriffith: yup, IO
16:29:43 winston-d: I'm violently opposed
16:29:46 :)
16:29:53 :)
16:30:17 winston-d: We want to do rate limiting somewhere. Got any proposals?
16:30:56 winston-d: We were certainly looking at doing it in the hypervisor, but the hypervisor support for it looks rather flakey
16:31:00 DuncanT: winston-d: so I'd be more concerned about the implementation here
16:31:07 DuncanT: ultimately I think it's best/easiest to do that on the Nova/hypervisor side
16:31:37 jgriffith: It would certainly be easy to come up with something ugly :-)
16:31:50 DuncanT: haha :)
16:32:10 So we can definitely investigate...
16:32:32 but in my experience it's a pretty complex problem, and I'd be curious about the demand
16:32:55 It's also lower on my priority list compared to things like encryption, migration of volumes, multi-attach, etc.
16:33:06 We have what we think is a requirement for it, but no serious attempts at implementation or testing
16:34:05 alright, we should talk about it then
16:34:20 winston-d: DuncanT: maybe the two of you could outline exactly what you have in mind
16:34:24 the use case, etc.
16:34:35 jgriffith: sure. will talk to DuncanT
16:34:35 Sure.
16:34:59 I do think there are some other things that are higher priority, but maybe that's not accurate
16:35:18 and of course there's the obvious solution that already exists :)
16:35:40 Oh, we want all of the above too, and the moon on a stick please, and two ponies
16:35:52 three ponies!!
16:35:53 (The obvious solution? Source rate limiting?)
16:35:54 and a unicorn
16:36:00 DuncanT: SolidFire :)
16:36:11 sorry... couldn't resist
16:36:13 volume_type
16:36:22 :)
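
For reference on the hypervisor-side option DuncanT calls flakey: libvirt's domain XML does expose per-disk throttling via an <iotune> element, along these lines (the device and limit values are illustrative):

    <disk type='block' device='disk'>
      <source dev='/dev/sdb'/>
      <target dev='vdb' bus='virtio'/>
      <!-- cap this disk at roughly 10 MB/s and 200 IOPS total -->
      <iotune>
        <total_bytes_sec>10485760</total_bytes_sec>
        <total_iops_sec>200</total_iops_sec>
      </iotune>
    </disk>

Whether QEMU honors these limits consistently across versions is exactly the flakiness question raised above.
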
16:36:33 Ok... since we're talking sessions
16:36:38 #topic summit sessions
16:36:55 http://summit.openstack.org/
16:37:12 DuncanT: I saw there was a proposal for testing. I was thinking of making this a bit more low-key and unconference.
16:37:18 We have 17 proposals and 11 slots
16:37:27 thingee: +1
16:37:32 DuncanT: you good with that?
16:38:06 Also I'd like to look at consolidating a couple that may be related
16:38:22 Fine by me
16:38:34 I'm fine with having it in a bar somewhere if necessary...
16:38:41 DuncanT: +1
16:38:42 bar :)
16:38:50 best unconference evar
16:38:59 with a Pliny in hand
16:39:05 DuncanT: I'd also like to combine your give/take proposal with Avishay's migration proposal
16:39:18 I think they'll tie together, if that's ok with you?
16:39:40 Ok. They're not very related, but I've no problem with sharing a slot... it worked out mostly fine last year
16:40:04 DuncanT: hmmm.... you don't think we could leverage the work for the two?
16:40:20 these two are almost identical: http://summit.openstack.org/cfp/details/146 and http://summit.openstack.org/cfp/details/152
16:40:25 give/take is purely ownership; it doesn't affect anything except the database
16:41:01 jgriffith: It doesn't need a full session though
16:41:06 DuncanT: agreed, but I think it *sort of* falls in the category of dynamic volumes, so to speak
16:41:20 and Avishay agreed to combine those two.
16:41:21 DuncanT: Ok, I'll look at combining, or seeing if something more logical falls out
16:41:25 winston-d: looking
16:41:45 winston-d: yes, for sure, those will fold into one
16:42:23 I think that will put us in pretty good shape
16:42:28 The pluggability one requires no code changes at all but might turn into a political bunfight
16:42:31 the only other one is the plugins one from Chuck
16:42:38 DuncanT: :)
16:42:46 and the independent scheduler one (http://summit.openstack.org/cfp/details/97) isn't valid, since the scheduler is already independent.
16:42:50 I haven't gotten a good feel for the level of interest there
16:43:04 I think I'll drop it if we're out of sessions
16:43:12 but I'd like to talk to folks about it in Portland
16:43:14 Seems reasonable
16:43:25 explain a bit more about what I have in mind and what Chuck is thinking there
16:43:59 Ok... anything else on sessions?
16:44:06 Anything folks think we need to add?
16:44:48 maybe we can also talk about the volume/task state machine in unconference?
16:44:57 winston-d: OHHHH yes!
16:45:13 winston-d: does this have to do with co-ordination?
16:45:15 winston-d: I'd like to see if we can have a true session for that
16:45:29 * j_king interested in distributed co-ordination problems in general
16:45:32 I'll see if I can futz the schedule around to make that happen
16:45:40 +1 million
16:45:43 cool
16:45:53 Clean forgot about that, despite having pages of notes on the subject
16:46:19 jgriffith: I'll get you some more details on the multi-attach next week and we can decide if we need an unconference session or to piggyback on your read-only multi-attach
16:46:44 kmartin: I was thinking we keep that as a formal session and collapse read-only into it
16:47:03 jgriffith: ok
16:47:06 kmartin: I think R/O would just be an option on a volume and should be pretty straightforward
16:47:16 the bulk of the discussion should be multi-attach
16:47:49 I'll write something up next week and forward it on to you
16:48:34 I brought up the possibility of a python-paxos implementation at the ZooKeeper talk at PyCon for this sort of thing. Not sure if it's applicable to what you're talking about, but if there's an ML thread about it, I'll hop in. (Can't be at the summit.)
16:49:09 j_king: I'd be interested
16:49:20 j_king: say whaaaa?
16:49:23 j_king: why not?
16:50:09 jgriffith: 4-month-old daughter to take care of, and I already went to PyCon. ;)
16:50:24 plus not currently employed
16:50:38 j_king: ahh... those are good reasons
16:51:26 alright, anybody have anything else?
16:51:38 #topic cinderclient
16:51:59 Just a heads up: prioritize testing and reviews on cinderclient if you could over the next couple of days
16:52:12 I'd like to release to PyPI on Friday at the latest
16:52:21 ok... if there's nothing else
16:52:35 we'll finish at least a *little* bit early
16:52:51 thanks everybody... as always, grab me on IRC if you want to talk about something
16:52:59 #end meeting
16:53:06 #endmeeting
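
As a footnote to the co-ordination thread j_king raised, a minimal sketch of a distributed lock using the kazoo ZooKeeper client; the ensemble address, lock path, and identifier are illustrative:

    from kazoo.client import KazooClient

    # connect to an assumed ZooKeeper ensemble
    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    # one lock node per volume, so only one worker mutates its state at a time
    lock = zk.Lock('/cinder/locks/volume-1234', 'worker-1')
    with lock:    # blocks until the lock is acquired
        pass      # perform the volume state transition here

    zk.stop()
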