16:00:47 #startmeeting
16:00:48 Meeting started Wed Aug 1 16:00:47 2012 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:05 Hey everyone
16:01:09 hi there
16:01:10 Hi
16:01:11 hi
16:01:28 o/
16:01:35 Hi
16:01:39 Ok... so the first item on the agenda was last week's action items
16:01:46 hi
16:01:49 We didn't have any, so that's pretty easy :)
16:01:54 lol
16:02:00 ship it
16:02:02 :)
16:02:06 lol
16:02:20 So....
16:02:24 #topic status
16:02:40 Lots of activity the past week as you've all probably noticed
16:02:41 hello
16:02:47 Mostly bug finding/fixing
16:02:51 dtynan: Hey there
16:03:12 Hi
16:03:45 ohai
16:03:46 We've got most of the ones that creiht found handled
16:03:47 :)
16:04:25 Most of the critical ones are in process and should land in the next couple of days
16:04:38 hah
16:04:55 creiht: I'll expect to see more bugs in the coming days :)
16:04:59 indeed
16:05:33 I'd like to go through and do some triage on the list at some point as a team rather than do it arbitrarily myself
16:05:40 good idea
16:05:51 Folks up for doing that now?
16:05:58 Sure
16:06:06 ok
16:06:08 excellent
16:06:08 yup
16:06:11 ok
16:06:17 #topic bugs
16:06:25 https://bugs.launchpad.net/cinder
16:07:05 So in terms of ones that don't have fixes committed...
16:07:26 #1008866
16:07:44 Good progress is being made on that one and I think it will land today or tomorrow
16:08:02 Unfortunately unit tests are failing so that will need to be addressed
16:08:48 Rongze: are you around?
16:09:52 I guess not
16:10:04 hehe... sorry, had a phone call
16:10:07 Ok
16:10:28 Quotas management, bug 1023311
16:10:35 I haven't worked on this for a bit
16:10:49 It's closer but still not getting the extension loaded for some reason
16:11:14 I'll be tied up trying to get some migration stuff figured out for the next few days so won't get back to this
16:11:46 The code is still in my repo https://github.com/j-griffith/cinder and j-griffith/python-cinderclient
16:11:57 If anybody is bored and wants to take a shot at finishing it up :)
16:12:32 944383 and 970409
16:12:41 Anybody working on these?
16:13:06 How about this...
16:13:21 Anybody want to sign up to work on recover/cleanup volume in attaching state?
16:13:34 That will be a nova and cinder fix
16:14:00 Ok, we'll let it sit for a bit
16:14:24 The second one is to allow backends to delete volumes with snapshots
16:14:33 There was a lot of interest from this group in having that
16:14:38 Anybody planning to work it?
16:15:06 Hmmm...
16:15:07 I can do 944383 after I finish AZ work, maybe next week.
16:15:14 winston-d: thanks
16:15:20 We were looking at snapshots
16:15:30 winston-d: If you would assign yourself to the bug in launchpad that would be great!
16:15:33 Not sure of the status ATM
16:15:43 DuncanT: Same on your side, if you want to assign an owner that would help a lot
16:15:56 jgriffith, sure.
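
Regarding bug 1023311 above ("still not getting the extension loaded"): for whoever picks it up, a minimal sketch of what a Folsom-era extension descriptor looks like. The class, alias, and namespace here are hypothetical, and the module layout is an assumption based on the nova-derived tree of the time, not the actual code in jgriffith's branch.

    # Hypothetical sketch of a Cinder API extension descriptor (Folsom-era
    # layout assumed). The standard loader walks the contrib/ package and
    # expects a class whose name is the module name with the first letter
    # capitalized (e.g. quota_management.py -> Quota_management); a
    # mismatch there is a common reason an extension never loads.
    from cinder.api.openstack import extensions

    class Quota_management(extensions.ExtensionDescriptor):
        """Admin quota management (illustrative names only)."""
        name = "QuotaManagement"
        alias = "os-quota-management"
        namespace = "http://docs.openstack.org/volume/ext/quotas/api/v1"
        updated = "2012-08-01T00:00:00+00:00"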
16:16:13 We can always defer/close them if need be, but right now we have a bunch of stuff with no owner :)
16:17:19 Bug #1004328 - Known old KVM issue, I don't expect we'll be fixing it in any near timescale
16:17:21 Launchpad bug 1004328 in nova "mountpoint doesn't work when a volume is attached to an instance" [High,Confirmed] https://launchpad.net/bugs/1004328
16:17:48 DuncanT: Would you mind stepping through the list, I need to leave for a moment
16:18:02 Sure
16:18:52 Unless somebody is way more clever than me, 1004328 is going to end up as 'by design'
16:19:08 We could look at taking the device name out of the API maybe?
16:19:15 Thoughts?
16:20:07 1004382 - Stuck to attached when a volume is detached from an instance
16:20:30 Basically saying that there is no detaching state to match attaching
16:20:41 taking the device name out seems like best option till we can fix kvm
16:20:59 Seems like a reasonable state to add if somebody has time to fix
16:21:11 DuncanT: I'm for taking it out, unless vish or somebody else has a solution
16:21:12 yeah, I've been wanting a detaching state for a while
16:21:28 it would help a lot of issues
16:21:34 Making it optional is back-compatible I think?
16:22:14 we'll have to generate the device name and pass to libvirt
16:22:29 Ok, I changed it to Confirmed/Medium for now
16:22:36 In Cinder
16:22:36 as you have to supply a device name to libvirt when doing attach
16:23:30 creiht: Are you volunteering for the detaching state? ;-)
16:23:47 I have no time to work on it right now sorry :/
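
Since nobody signed up just now, a rough sketch of what the missing 'detaching' state amounts to, mirroring the existing 'attaching' state. Function and field names below are assumptions for illustration, not the actual nova/cinder code paths.

    # Illustrative sketch only: mark a volume 'detaching' before the
    # hypervisor detach runs, so a crash mid-detach leaves a visible
    # state instead of the volume sitting in 'in-use' forever.
    def begin_detaching(db, context, volume_id):
        volume = db.volume_get(context, volume_id)
        if volume['status'] != 'in-use':
            raise ValueError("volume %s is not attached" % volume_id)
        db.volume_update(context, volume_id, {'status': 'detaching'})

    def finish_detaching(db, context, volume_id):
        # Runs after the hypervisor reports the detach completed.
        db.volume_update(context, volume_id,
                         {'status': 'available', 'attach_status': 'detached'})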
16:24:13 So back to #1004328
16:24:39 I think the answer is we leave it as is and push to a fix in libvirt
16:24:57 That way if it is ever fixed in libvirt we have the capability
16:25:14 We just need to document it somewhere really well that it doesn't *work* and why
16:25:17 Agreed?
16:25:58 yes
16:25:58 I agree
16:26:04 Seems reasonable, though making the field optional in the API also seems like a good idea
16:26:07 DuncanT: thoughts?
16:26:18 It is meaningless so why require it...
16:26:24 Ok, I can live with optional
16:26:30 That seems like a good compromise
16:26:51 Only question is does the choice of default matter?
16:26:53 Not sure how xen et al will cope with the optional though
16:27:13 Ping Renuka I guess?
16:27:15 renuka: around?
16:27:22 I am here
16:27:38 Any thoughts on making the mount point optional on the attach call?
16:27:50 ie how it might impact xen?
16:27:51 I guess I raised this bug.
16:27:54 oh
16:28:00 I think we could generate one ourselves
16:28:01 I don't think that will work for xen
16:28:02 but dunno
16:28:41 we basically convert it to a device number underneath
16:28:56 gotta run.... sorry
16:28:56 renuka: So do you *care* what it is?
16:29:04 creiht: cya
16:29:36 i probably don't
16:29:53 but by *what* do you mean we will have the same defaults across?
16:30:09 or do you mean we let the virt layer find the next available mount point
16:30:23 cos a default will not work for attaching 2 volumes, say
16:30:37 So something like: attach_volume(xxxxx, mount_point='/dev/none')
16:30:46 I think we mean 'let the virt layer find an available mount point'
16:30:49 I would assume the virt layer should find the next available if it's unspecified
16:30:50 renuka: Yeah, that would be the problem
16:31:14 chalupaul: good point, that's what it's actually doing anyway
16:31:50 Trouble here is now this is a *nova/compute* change, not a cinder change
16:31:54 Shall we pencil in making it optional as the proposed solution then get as many hypervisors as possible to test?
16:31:55 hm
16:32:27 DuncanT: I'm good with that but we'll need to provide some details on how this works as renuka's question pointed out
16:32:47 Yup, it looks like a suck-it-and-see bug...
16:32:52 what is the problem with fixing this directly in libvirt?
16:33:05 So we'd actually leave it as None in the api call as the default and let the virt layer decide what to do if it's None
16:33:23 jgriffith: I'd suggest so, yes
16:33:27 that's the way i'd like to see it, and make it optional in the api.
16:33:32 jgriffith: Might require libvirt patches
16:34:01 I'll update the bug with this proposal if you want?
16:34:09 jgriffith: it will still be a libvirt bug, right? I mean what if the user does specify the mountpoint
16:34:17 so I'm not gone yet, from the xen side it is nice from the api to know what nodes are mounted where
16:34:38 perhaps kvm could be patched to update the actual mount point if it uses a different one?
16:34:56 kvm has no way of getting at that information
16:35:02 ahh
16:35:24 the other odd one in this case is windows instances
16:35:33 renuka: I guess we just document that specifying the mount point is broken on kvm but works on xen (and maybe others)?
16:35:59 DuncanT: in that case, we don't need an API change at all, if we are documenting
16:36:28 renuka: We can still make it optional - many people will probe by label or uuid anyway I guess, whatever the hypervisor
16:36:45 heh good point
16:36:53 DuncanT: sure, that change is good to have. Just saying it doesn't fix the bug
16:37:44 Ok, looks like we have a vague agreement, I'll update the bug and see how libvirt reacts to 'none'...
16:38:32 We have a reasonable explanation of the kvm behaviour that I can propose for the docs too
16:38:52 DuncanT: will look for the doc patch
16:38:58 Is it possible to hack around it? Like attach a dummy at the in-between points
16:39:09 if you look at the libvirt docs, they explicitly state that it's a hint, not necessarily the target used
16:39:33 http://libvirt.org/formatdomain.html#elementsDisks
16:39:56 From a kvm point of view, it must be unique but can otherwise be anything you like
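
For reference, the element under discussion: on attach, nova hands libvirt a disk XML fragment like the sketch below (the source path is invented for illustration). The libvirt docs linked above note that the target dev is not guaranteed to match the name the guest OS assigns; it is treated as a device ordering hint, which is the root of bug 1004328 on KVM.

    # Sketch of the disk XML passed to libvirt on attach; the source
    # path is made up. Per the linked libvirt docs, <target dev=...> is
    # only an ordering hint to the guest, not a guaranteed device name.
    disk_xml = """
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/disk/by-path/...-lun-1'/>
      <target dev='vdc' bus='virtio'/>
    </disk>
    """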
16:41:51 DuncanT: was that in response to what I asked?
16:41:59 Ok, I'm leaning towards deferring it
16:42:14 Noting it's a libvirt issue and leaving it at that
16:42:20 though if you make it optional, would it force an api version bump?
16:42:34 creiht: optional shouldn't in my opinion
16:42:36 Not sure... it is fully back-compatible
16:42:42 jgriffith: So we are sure we can't hack around it then?
16:42:54 renuka: That was at jdurgin
16:43:19 jgriffith: yeah same as my opinion, but the ppb was just voting on api versioning stuff yesterday right?
16:43:32 I think we can but I wonder if it's the *right* thing to do at this point
16:43:32 DuncanT: no wonder I couldn't make sense of it :P Anyway, what do people think about putting a dummy device in the in-between mountpoints
16:43:48 renuka: Not sure what you mean?
16:43:57 i guess the question is can we
16:44:12 i feel like if we tried to hack around it, we'd just be unhappy with the results
16:44:19 What do you mean by a dummy device?
16:44:21 no, because we can't guarantee that it gets put in-between
16:44:22 DuncanT: That way, the next mountpoint will be the one requested
16:44:37 creiht: Yeah, we might get hammered on that
16:44:48 jdurgin: Doesn't it take consecutive ones?
16:44:57 not necessarily
16:45:02 creiht: My view on it however is that the signature can be used exactly the same
16:45:40 the guest can do anything it likes with its block devices
16:46:13 Non-kvm guests usually get told what the hypervisor hinted at... kvm guests don't
16:46:22 yeah I agree, I'm just pointing out that it would be worthwhile to check to make sure
16:46:51 you can't assume the guest will respect the hint though
16:46:57 the other wonky thing with the device for attach is that for windows instances the device means nothing but it still has to be there
16:46:59 jdurgin: surely it is following some logic, not picking an arbitrary mount point (here, it seems like consecutive). Have you seen it do otherwise?
16:47:43 There is a windows-style device ID you can use rather than /dev/[sv]d*
16:49:07 Ok, here's my proposal
16:49:26 If somebody feels strongly enough about changing this they can grab it and propose a solution
16:49:34 renuka: I don't think it's worth trying to hack around the lower-level api that doesn't guarantee naming - it can be done by udev if you really want guaranteed names
16:49:37 Otherwise I say it's out of scope for F3
16:49:59 jgriffith: Fair
16:50:17 We have bigger fish to fry as they say :)
16:50:37 makes sense
16:50:43 I don't like the behavior but I don't know that it warrants all of the time and effort to change it right now
16:50:55 It is also not a regression
16:51:04 the api always gets discussion deep into the weeds ;)
16:51:16 chalupaul: that's for sure
16:51:30 DuncanT: Good point, it's not a *bug* really but an enhancement request
16:51:38 At least from cinder's perspective
16:51:58 Document the limitation, stick it on the wishlist, Next :-)
16:52:12 DuncanT: That's my proposal
16:52:19 I'll even volunteer to do the docs patch
16:52:24 lol
16:52:29 DuncanT: Sold!
16:52:58 Modify the bug, assign it to you and resolve it via docs
16:53:32 Doing so now
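
To pin down what "make it optional" would mean at the API layer, a minimal sketch of the direction agreed above. The function and helper names are hypothetical, not the real nova signatures, and the device returned remains only a hint to KVM guests as discussed.

    # Minimal sketch, assuming a hypothetical attach handler: accept
    # device=None, let the virt layer pick the next free device name,
    # and record whatever was actually used. Not the real nova code.
    def attach_volume(virt_driver, instance, volume_id, device=None):
        if device is None:
            # e.g. walk vdb, vdc, ... skipping names already attached
            device = virt_driver.get_next_device_name(instance)
        virt_driver.attach_volume(instance, volume_id, device)
        return device  # still only an ordering hint on KVM

Guests that need guaranteed names can use udev rules or probe by label/uuid, as noted above, regardless of what the hypervisor was told.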
16:53:50 Ok, I'll just quickly summarize some of the remaining issues in case folks want some open floor time
16:54:08 bug #1017266
16:54:09 Launchpad bug 1017266 in cinder "Building docs fails" [Undecided,In progress] https://launchpad.net/bugs/1017266
16:54:20 There was a proposed fix for this one that failed jenkins
16:54:30 I've asked the author to resubmit with no luck
16:54:32 have hit an issue with my volume usage metering blueprint
16:54:37 if we have a sec
16:54:43 I'll just duplicate the work and resubmit myself
16:54:51 cian_: Yes, I'll hurry up :)
16:55:09 bug #1021605
16:55:10 Launchpad bug 1021605 in horizon "Is cinderclient being used by horizon?" [Undecided,Invalid] https://launchpad.net/bugs/1021605
16:55:13 the periodic task in compute/manager.py in nova
16:55:17 I think this is a non-issue and should be closed
16:55:39 calls self.volume_api.get_all(context)
16:55:48 passing in an admin context
16:55:58 The big one still out there is Bug #1023755
16:55:59 Launchpad bug 1023755 in cinder "Unable to delete the volume snapshot" [Undecided,New] https://launchpad.net/bugs/1023755
16:56:08 but an admin context doesn't have a keystone token or service catalog
16:56:23 so it cannot create a python-cinderclient to talk to cinder
16:56:37 cian_: Just a sec
16:57:16 Vincent_Hou: I'd like to take another look at this one after the tgtadmin changes land
16:57:27 ok
16:57:32 Vincent_Hou: I'm concerned because it sounds like you didn't see this on nova-vol?
16:57:50 i see it in nova-vol
16:57:56 Oh... ok
16:58:02 better news
16:58:08 Anyway, we'll keep looking at it
16:58:11 just different ubuntu versions give different results
16:58:17 Yeah
16:58:29 Ok... I'll wrap up the bug talks for now
16:58:35 #topic open discussion
16:58:40 cian_: Ok
16:58:55 so from what I can see we have no way currently of talking to cinder internally in nova without going through KS
16:59:22 glance talks to swift by having an admin user created in KS
16:59:56 hrmmph
17:00:03 glance then fetches all its images from swift using this user
17:01:15 cian_: Can you add the admin user to ks/cinder in the same manner?
17:02:13 I can have a look
17:02:18 cian_: TBH I'll have to look at it later to better understand what you're seeing
17:02:38 cian_: If you want to send me some more info on exactly what you mean I can have a closer look
17:03:19 anything else?
17:03:53 jgriffith: will do
17:03:57 Ok, thanks everyone
17:04:01 thx
17:04:06 Sorry I was pulled back and forth out of the meeting today
17:04:12 DuncanT: Thanks for covering for me
17:04:19 np
17:04:25 #endmeeting
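
A note on cian_'s open-discussion question: the glance-style approach suggested above amounts to authenticating python-cinderclient with configured service credentials instead of the request context's (missing) token. A minimal sketch, assuming Folsom-era python-cinderclient; the cinder_admin_* option names are invented for illustration, not real nova config flags.

    # Sketch only: build a cinderclient from configured admin credentials
    # (glance/swift style), since the admin RequestContext used by the
    # periodic task carries no keystone token or service catalog.
    from cinderclient.v1 import client as cinder_client

    def get_admin_cinderclient(conf):
        # conf.cinder_admin_* are hypothetical option names
        return cinder_client.Client(conf.cinder_admin_user,
                                    conf.cinder_admin_password,
                                    conf.cinder_admin_tenant_name,
                                    conf.keystone_auth_url)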