16:02:16 #startmeeting cinder
16:02:17 Meeting started Wed Jan 30 16:02:16 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:20 The meeting name has been set to 'cinder'
16:02:24 o/
16:02:45 rushiagr: hey~
16:02:47 Hey everyone!
16:02:54 Nice to see folks all ready to go :)
16:03:00 Here's the agenda: http://wiki.openstack.org/CinderMeetings
16:03:18 We'll also set aside some time to talk with hub_cap about multi-backend
16:03:25 <3
16:03:28 So let's get at it..
16:03:35 #topic status of blue-prints
16:04:12 Check out the g3 link
16:04:15 https://launchpad.net/cinder/+milestone/grizzly-3
16:05:00 So we have a TON of bp's once again!
16:05:28 I just wanted to make sure folks are making progress here
16:05:31 A lot of them seem to be driver-related, wouldn't get worried about those
16:05:34 thingee: let's start with you :)
16:05:34 and we're getting close to the deadline, right?
16:05:41 avishay: easy for you to say :0
16:05:58 jgriffith: :)
16:05:58 JM2: we'll get to that.. but yes
16:06:14 maybe thingee is busy... let's see... avishay ?
16:06:20 jgriffith: haven't started cinder client v2 work. already speced it out
16:06:35 jgriffith: not more than a days worth of work.
16:06:42 jgriffith: yes?
16:06:45 thingee: ok, great so we're on track
16:06:57 thingee: err... just mean that it's still a go
16:06:59 though it's cinder v2 work is not targetted anymore?
16:07:08 cinderclient*
16:07:34 ummmmm
16:07:37 thingee: which one
16:07:46 can doc get in after g3?
16:07:58 jgriffith: please do...
16:07:59 ahhh
16:08:03 jgriffith: https://blueprints.launchpad.net/python-cinderclient/+spec/cinderclient-v2-support
16:08:07 Yeah the client stuff is a wreck
16:08:23 thingee: I'll fix that right now
16:09:04 as for docs for v1 and v2. I spent a great deal last weekend on this, especially learning to get oxygen to do what I need it to do and trying to create consistent docs like the other projects.
16:09:04 thingee: ahh... I remember, I have no milestones enabled for cinderclient proj in launchpad :)
16:09:16 thingee: isn't that fun :)
16:09:39 So it sounds like no big concerns with getting yor BP's in for G3 correct?
16:10:11 freeze is next week, I can switch to have cinderclient v2 done, and then focus back on docs since people might have opinions on wordage
16:10:27 jgriffith: I don't have doubts
16:10:28 thingee: That would be awesome if you have the time
16:10:33 Sounds great
16:10:38 eharney: around?
16:10:41 yup
16:10:45 :)
16:10:59 So what do you think about LIO (other than hating me for me suggestion of migrations) :)
16:11:19 so, i'm getting back to the LIO stuff this week, going to look into migrations and commit changes from the latest review comments
16:11:28 eharney: sounds good
16:11:35 i am not too clear on what the migration thing will end up looking like, but going to investigate and see where i end up
16:11:36 eharney: If we can't get the migration path that's ok
16:11:54 yeah, hoping i can at least document how to do it
16:11:57 eharney: I'm not planning to switch default target or anything yet anyway
16:12:00 right
16:12:08 but it would be handy if folks could migrate existing volumes to try it out
16:12:22 it might just be an admin extension in cinderclient
16:12:31 Or worst case cinder-manage
16:12:51 hmm, yeah
16:12:59 TBH, I don't care... that's the one type of thing that I think cinder-manage is still useful for
16:13:23 seems reasonable to me
16:13:31 I'd like to see it pretty much go away some day but it's handy for db updates and this sort of thing.... but anyway I'm rambling
16:13:32 what is migrations about?
16:13:46 avishay: how to move from tgtd to LIO for the iSCSI target backend
16:13:52 avishay: I'd like to be able to convert existing systems using tgtd to LIO targets
16:13:59 eharney: gotcha
16:14:01 jgriffith: thakns
16:14:06 may be a stretch... but worht a shot
16:14:09 worth
16:14:18 hub_cap: :)
16:14:23 oh waitt...
16:14:26 eharney: anything else :0
16:14:51 +1 for cinder-manage
16:15:13 not at this point
16:15:24 Ok thanks!
16:15:30 hub_cap: around?
16:15:32 hai
16:15:36 sry was talking to rnirmal
16:15:44 hub_cap: need me to come back to ya?
16:15:48 naw im good
16:15:52 hub_cap: So whatya think about multi-backends?
16:16:10 imma bout to push a 2nd review w/ all the necesssaries for volume
16:16:18 SWEEEETTTTTT!!!
16:16:23 I'm stoked man....
16:16:29 but i still need to work out the weight/cost stuff in the scheduler
16:16:34 bah!
16:16:36 :)
16:16:44 but now it can say if u passed me lvm, i can get you to Q lvmblah
16:16:47 need any help from winston-d ?
16:16:52 whens the cutoff?
16:16:59 next week...
16:17:04 but there is an exception process
16:17:04 like END of next week?
16:17:07 or what?
16:17:20 hub_cap: does the multiback end also work for other non-lvm drivers?
16:17:24 xyang: it will
16:17:36 i just coded lvm cuz it was a "hey look at the review see if you like it"
16:17:39 il add to the rest
16:17:42 hub_cap: you're far enough along that I'm not worried about the deadline here
16:17:52 k jgriffith im not worried too much about it iether
16:18:01 cool cool
16:18:03 im fixing up some tests i broke :) and ill push in like ~1 hr
16:18:06 I'm excited!
16:18:12 jgriffith: calmate ;)
16:18:18 hehe
16:18:23 lol
16:18:27 i will review tomorrow
16:18:31 alright.. anything else?
16:18:32 ill be aorund in cinder to chat about it, cool avishay
16:18:43 hub_cap: sounds good
16:18:47 Ok....
16:18:51 DuncanT: ping!
16:18:52 :)
16:18:56 yup, i'll remove tomorrow.
16:19:05 s/remove/review
16:19:07 winston-d: remove :)
16:19:10 :)
16:19:14 don't do that...
16:19:15 :)
16:19:16 jgriffith: i have a few loose ends to finish up - the disconnect iscsi thing we discussed, generic iscsi implementation for backup/restore maybe
16:19:38 avishay: you have another patch coming?
16:19:40 avishay: ahhh yes... I got to you but then skipped and didn't come back around :)
16:19:49 xyang: yes
16:19:56 avishay: You probably would like me to finish that patch I started for the san.py changes :)
16:20:05 jgriffith: that would be great :)
16:20:07 avishay: I'll do that today if folks promise to actually review it ;)
16:20:23 * jgriffith is foreshadowing an upcoming topic for the meeting
16:20:35 jgriffith: i'm also working on an update for the storwize/svc driver for all the new features
16:20:41 avishay: can you tell me the function name that I need to implement to check if there are luns on target
16:20:44 i will review everything
16:20:57 Ok... doesn't seem DuncanT is available at the moment...
16:21:00 xyang: as soon as i start coding it (tomorrow?) i will :)
16:21:05 ok
16:21:06 Any HP folks around that are working on the swift backup patch?
16:21:10 Duncan's on vacation today
16:21:12 jgriffith: I think DuncanT is away traveling now
16:21:16 frankm_: ahh... thanks :)
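(For context on the multi-backend discussion above: what hub_cap describes eventually surfaces to operators as per-backend sections in cinder.conf, with the scheduler routing each request to the message queue of the chosen backend, which is what the "if u passed me lvm, i can get you to Q lvmblah" remark refers to. The fragment below is only a sketch of that idea, assuming the enabled_backends / volume_backend_name style the feature ended up using; the backend names, volume groups, and the "acme" driver path are invented for illustration.)

```ini
# Hypothetical cinder.conf fragment: two LVM backends plus a vendor backend,
# each served by its own cinder-volume instance on the same host.
[DEFAULT]
enabled_backends = lvm-fast,lvm-slow,acme-san

[lvm-fast]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_fast
volume_group = cinder-volumes-fast

[lvm-slow]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_slow
volume_group = cinder-volumes-slow

[acme-san]
# Placeholder driver path -- substitute a real driver class.
volume_driver = cinder.volume.drivers.acme.AcmeISCSIDriver
volume_backend_name = ACME
```

(A volume type carrying an extra spec such as volume_backend_name=LVM_fast would then pin new volumes to that backend; deciding among backends that match is the scheduler weight/cost work hub_cap says is still outstanding.)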
16:21:19 good for him
16:21:25 jgriffith: he's on my side of the pond :)
16:21:25 but now you're in the hot seat :)
16:21:28 no problem :-)
16:21:30 ah...
16:21:35 frankm_: hehe...
16:21:36 me too!
16:21:37 :)
16:21:44 ahhh... well hello smulcahy !
16:22:01 so I'm working through the review comments I got on the latest patch set for volume backups
16:22:02 alright... you guys have any updates on this? Seems to have stalled after the last go around
16:22:23 frankm_: cool... any issues? Any ETA's?
16:22:29 frankm_: or not that far yet :)
16:22:39 no major issues so far
16:22:58 frankm_: great... any chance we'll see something updated today?
16:23:12 I'd like to merge without possibly having fully resolved the scheduler question - does that seem reasonable?
16:23:15 should have a new patch set tomorrow I hope
16:23:32 frankm_: tomorrow sounds good...
16:23:42 I think thats part of a bigger discussion beyond just the backup/restore functionality
16:23:57 smulcahy: refresh my memory please...
16:24:09 frankm: if we change the swift service to a different backend service and change the backup service flag, does it work for a different backend?
16:25:13 crickets....
16:25:32 some of the backup operations are passed out by the chance scheduler atm - which will not work for a driver without iscsi, but hardcoding all requests to the original host associated with it directly would seem to introduce a spof
16:26:04 smulcahy: ahhh yes
16:26:05 and this would seem to be an issue for other operations also - presumably the longer term fix is to determine the driver capabilities in the scheduler before deciding where to send the request
16:26:19 smulcahy: yeah, I think that's acceptable
16:26:20 but that seems bigger than "implement a basic volume backup service" ;)
16:26:24 smulcahy: it's not just capabilties - it's connectivity
16:26:44 smulcahy: ok, I think we can wiggle that around later
16:26:47 avishay: yes, I'm being imprecise in my problem description
16:26:58 but don't want to be mistaken for someone that fully understands this problem :)
16:26:59 and I think it's *ok* to go out with limited use cases and build on them
16:27:13 smulcahy: haha
16:27:17 if we're in the iscsi world and everyone is connected to everyone, it's OK. LVM is definitely a problem and Fibre Channel might be too (might have more limited connectivity)
16:27:34 LVM uses iSCSI so no problem there )
16:27:36 :)
16:27:38 xyang: there are no other backends - what we've done is write the hooks to plug in other backends - but it may need some tweaking when someone introduces another backend
16:27:48 Just temporary connections like we do for clones, image copies etc
16:27:55 but I digress....
16:27:58 xyang: not clear on whether its worth doing anything more until we see what another backend service looks like though
16:28:02 smulcahy: ok thanks
16:28:14 jgriffith: right, as long as the implementation in the driver goes via iscsi and not direct - maybe that's a good workaround for now
16:28:15 ok... smulcahy frankm_ anything else?
16:28:24 avishay: true dat!
16:28:34 smulcahy: we are looking into it
16:28:41 ah, our initial implementation may use local_path
16:28:49 Ok... next
16:28:56 jgriffith: smulcahy: maybe instead of the current LVM backup/restore implementation, make a generic iSCSI implementation for everyone?
16:28:56 xyang: ohla
16:29:16 avishay: so should we do that before merging or get what we have merged?
16:29:17 that way scheduling is not an issue for anyone
16:29:31 smulcahy: that's up to jgriffith :P
16:29:42 xyang: interesting - what will the backend talk to? a second backend would certainly help flush out rough edges in the plugging model
16:29:42 jgriffith: just see if it is possible to use a different backup service
16:29:47 avishay: smulcahy I would agree if you can do the generic iSCSI implementation that's ideal
16:30:00 the iscsi attach code just got merged today, so it shouldn't be too much work
16:30:25 (i hope) :D
16:30:26 smulcahy: it will be EMC's backup appliance. don't know if it is easy to use the framework you setup
16:31:21 xyang: I expect it might need some incremental improvement - but better to do so with real examples rather than try to anticipate everything I think
16:31:47 smulcahy: we'll find out:)
16:31:51 the API is very basic right now and should allow basic functionality to new drivers - I think that is the best way
16:32:06 in time, we can expand the API as necessary
16:32:29 avishay: agreed, that was the intention
16:32:42 smulcahy: ok... let's see if we can get the iSCSI case in.
16:33:00 I just took another look at the code and once the clone patch lands it should be a good template for you to use
16:33:31 Next... EMC drivers :)
16:33:36 xyang: you're up :)
16:33:50 xyang: ?
16:33:51 smulcahy: if you need help with the iSCSI stuff let me know - I planned to do it in addition to the LVM implementation, but it makes more sense to do INSTEAD of the LVM implementation
16:33:56 the bp?
16:34:10 xyang: So there's a couple
16:34:22 One being the Fibre patch
16:34:28 The other being the isilon patch
16:34:28 jgriffith: we are working on FC driver and Isilon
16:34:31 avishay: will definitely need some examples to work off here I think - I guess we'll see when the clone patch lands - hopefully it should be obvious how to approach it
16:34:35 in good status
16:34:39 xyang: Do you have any status updates?
16:34:43 but still waiting for legal
16:34:43 good progress on both?
16:34:43 jgriffith: what clone patch?
16:34:52 ahh... stupid lawyers
16:34:56 yes, good progress on both
16:35:03 For FC, there's dependency
16:35:17 ok, I'll update the BP's to reflect your statement :)
16:35:29 xyang: Still waiting on the FC nova changes to be reviewed
16:35:34 on HP's code. takes forever for the review to go through
16:35:35 jgriffith: i think we might need some flexibility in adding FC to drivers - we can't test until the nova code goes in
16:35:41 kmartin, yes
16:36:02 I got draft code form kmartin and it helps a lot.
16:36:08 xyang: avishay understood
16:36:16 The draft is great, but need to test as well :)
16:36:27 Yes
16:36:43 jgriffith: I have an issue with the FC mindset if we have a few minutes
16:36:44 I should be able to get a new patch up today with the FC driver feedback that I got...thanks everyone for looking at it
16:36:45 The only thing I would point out is the statement I made in the HP patch
16:36:51 sure
16:37:09 I think we should be extracting out any shared code possible into a fibre_channel.py
16:37:13 parent class
16:37:31 Something to enable other folks to use if they have FC devices going forward
16:37:46 I need to look again at how you guys handled provider location etc
16:37:55 but I'll get back to that later
16:37:58 jgriffith: sure, will work on that today.
16:38:01 kmartin: xyang make sense?
16:38:04 I don't understand why every connection type will be a new class
16:38:07 yeah
16:38:09 kmartin: great... thanks
16:38:13 jgriffith: sure. I'll hear from legal on 2/7. hope that's not too late
16:38:20 avishay: why not?
16:38:32 If a back-end supports both, then we need two Cinder instances?
16:38:43 huh?
16:38:46 xyang: wow, your legal team actually gives you dates?
16:38:47 Why not just see what the connection is, and initialize_connection will do the right thing?
16:39:09 avishay: that's fine...
16:39:23 bswartz: I'm not dealing with them directly. that's what I heard:)
16:39:30 avishay: what I was getting at is any generic FC operations should be in a seperate module that can be imported
16:39:48 jgriffith: +1
16:39:55 jgriffith: and if i want to do that, i inherit from both the iscsidriver class and the fcdriver class?
16:40:06 avishay: Sure.. that's your call
16:40:07 +1
16:40:15 personally blending the drivers seems a bit silly to me
16:40:22 avishay: I think hmna responded to you comment in the draft
16:40:29 I think they should be separate
16:40:29 I would prefer to see inheritance and overriding methods where needed
16:40:31 I think ideally there should only be VolumeDriver, and the rest are helper methods
16:40:53 OK, I guess I'm the minority here :)
16:40:53 avishay: hmmm.....
16:41:13 avishay: I think your point has meritt, but I am not sure I want to tackle that in Grizzly
16:41:43 avishay: also, to be clear I'm not saying there needs to be an FC-driver class
16:41:48 jgriffith: That's fine. I'm planning to keep my driver as 1 class. If people want 3+ classes that's fine, as long as it's not restrictive
16:41:54 avishay: I'm saying an FC module more in line with your suggestion here
16:42:02 jgriffith: OK great
16:42:03 avishay: fair
16:42:21 but from a maintenance perspective inheritance is our friend :)
16:42:28 jgriffith: agreed
16:42:29 that's all I'm getting at
16:42:40 OK, enough on this :P
16:42:46 Ok... we're probably in agreeemnt and don't even know it :)
16:42:51 exactly
16:42:52 :)
16:43:07 Alright...
16:43:16 bswartz: rushiagr you're up
16:43:28 bswartz: rushiagr looks like your rewrites are about done
16:43:39 bswartz: rushiagr Just needs to get through review
16:43:48 jgriffith: you talking about the drivers?
16:44:02 bswartz: Yeah, talking about the direct driver additions
16:44:13 sorry... that wasn't very clear
16:44:15 the WIP for the NAS stuff should be in the next few days -- I don't have an exact date yet but I will VERY SOON
16:44:30 FYI, I'm just going down the list of the BP's on the G3 page if you're having trouble following me
16:44:33 direct drivers ok
16:44:38 That's why I provided the link at the beginning of the topic :)
16:44:53 bswartz: Ok
16:44:58 jgriffith: yes, really soon
16:45:00 bswartz: how are the bugs coming ?
16:45:35 bswartz: I'm under the impression that most are handled in these two BP's, is that the case?
16:45:40 bswartz: NAS stuff == new NAS service for guests?
16:45:48 I believe we're down to only a few bugs
16:46:01 bswartz: https://bugs.launchpad.net/cinder/+bugs?field.tag=netapp
16:46:02 avishay: yes -- it's what I presented at the last conference
16:46:04 avishay: right
16:46:12 bswartz: rushiagr: OK great
16:46:20 Jgrifithh:u talking about direct drivers or NAS enhancements?
16:46:30 bswartz: I'm not thrilled about introducing a bunch of new code if we're not even fixing the existing code
16:46:34 jgriffith: 2 in progress, two will be done in a week
16:47:01 in progress == reviews pending
16:47:05 rushiagr: Ok, so I'm going to target them all to G3... fair?
16:47:18 jgriffith: yes
16:47:31 great, those should be our priority right now IMO
16:48:08 rushiagr: have you triaged https://bugs.launchpad.net/cinder/+bug/1098581
16:48:09 Launchpad bug 1098581 in cinder "Netapp: create_volume_from_snapshot of a different size" [Undecided,New]
16:49:01 rushiagr: Ok... I'll catch you after the meeting and we can synch up on these :)
16:49:06 jgriffith: we have a possible workaround for that -- the cause is well understood
16:49:08 jgriffith: i am working on that bug, sorry, i forgot to assign it to myself
16:49:14 jgriffith: sure
16:49:20 rushiagr: :) No problem
16:49:43 rushiagr: Can you go ahead an update the status and add some notes to it when you get a chance please ?
16:49:55 jgriffith: sure, will do
16:50:07 Ok... we're running out of time as usual :)
16:50:14 #topic requirements for new drivers
16:50:37 So we talked about this a bit but I was going to post a formal policy on this
16:50:47 Wanted to mention it to folks here before doing so ;)
16:51:09 My take on this is that anybody is welcome of course, however...
16:51:22 For a new driver to be accepted it must meet minimum functionality requirements
16:51:39 Those requirements are basicly the base operations in the LVM driver
16:52:01 Ideally you'll offer *more* and have optimizations that make your back-end better
16:52:08 it would be great to see this written somewhere
16:52:15 I mean plain english, not just code :)
16:52:20 but you must support the basic API calls: create/delete/attach/snapshot etc
16:52:32 jgriffith: so once we get backups merged, every new driver will be required to implement that functionality?
16:52:34 jgriffith: do you have a stance on drivers that depend on additional nova features which are pending?
16:52:38 JM2: Agreed... I'll do it today unless there's a mutiny here ;)
16:52:42 jgriffith: not saying thats a bad thing, just wondering
16:52:50 bswartz: example?
16:52:56 like the coraid AoE stuff
16:53:25 bswartz: so IMO that's a seperate issue from cinder functionality
16:53:44 the question is would we accept the driver BEFORE the nova patches are accepted, or would we wait?
16:53:58 bswartz: we'd wait
16:54:31 There's not much value IMO in having a driver in there that doesn't work with the rest of the project
16:54:52 agree as long as the backend supports the basic APIs, they should be included in the first relase of the driver
16:54:55 jgriffith: when you publish the policy it's probably worth explicitly stating that
16:55:11 bswartz: good point.. noted
16:55:29 alright, I'll see about putting a wiki up today and sending a link out on the openstack-dev ML
16:55:41 #topic closing out new drivers
16:55:44 what about smulcahy 's question about backup/restore for new drivers, and other future features
16:55:55 if emulation of eg. snapshots (with a slow copy) is allowed, it can useful to state it
16:56:00 +be
16:56:10 JM2: Yup, I'll clarify that
16:56:19 I don't care how you implement it.... just that it's there
16:56:24 ok
16:56:32 even if it's slow, ugly, uses duck tape etc :)
16:56:51 avishay: the logic for backup/restore should be easy to implement for a new driver, but do you want to *require* that for every driver?
16:56:51 jgriffith: backup/restore, other new features for new drivers?
16:56:57 (no duck was harmed while cooking the scality driver)
16:57:15 avishay: Since backup restore isn't settled yet I don't think it's fair to try and make requirements for folks yet
16:57:23 JM2: hehe... good to know!
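(To make the "minimum functionality" requirement above concrete: a new driver is expected to cover the same base operations the LVM reference driver implements. The skeleton below is only an illustration, not a real driver; the class and vendor names are made up, and the method list reflects the Grizzly-era cinder.volume.driver interface as commonly understood, so treat the exact signatures as assumptions rather than the official policy text.)

```python
# Illustrative skeleton only -- not a real driver.  Each method corresponds
# to one of the "base operations" a new driver must support; everything
# raises NotImplementedError until the backend-specific logic is filled in.
from cinder.volume import driver


class ExampleISCSIDriver(driver.ISCSIDriver):
    """Hypothetical minimal driver for a fictional 'Example' array."""

    def do_setup(self, context):
        """Connect to the backend; called once when cinder-volume starts."""
        pass

    def check_for_setup_error(self):
        """Verify the backend is reachable and usable."""
        pass

    def create_volume(self, volume):
        raise NotImplementedError()

    def delete_volume(self, volume):
        raise NotImplementedError()

    def create_snapshot(self, snapshot):
        raise NotImplementedError()

    def delete_snapshot(self, snapshot):
        raise NotImplementedError()

    def create_volume_from_snapshot(self, volume, snapshot):
        raise NotImplementedError()

    def create_export(self, context, volume):
        raise NotImplementedError()

    def ensure_export(self, context, volume):
        raise NotImplementedError()

    def remove_export(self, context, volume):
        raise NotImplementedError()

    def initialize_connection(self, volume, connector):
        """Return the target details the attaching host needs (attach path)."""
        raise NotImplementedError()

    def terminate_connection(self, volume, connector, **kwargs):
        raise NotImplementedError()
```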
16:57:27 * jgriffith likes ducks
16:57:33 avishay: make sense?
16:57:39 * JM2 loves duck liver
16:57:45 * jgriffith mmmmmm
16:57:51 FC SAN update
16:57:55 ? ;)
16:57:59 zykes-: sorry you'll have to wait
16:58:06 zykes-: or read the meeting notes
16:58:08 jgriffith: definitely makes sense for now, but the question is if it will ever be a requirement...what about old drivers that won't implement new features?
16:58:12 but please don't hijack my meeting this week :)
16:58:53 avishay: correct...
16:59:08 avishay: my point was backup/restore isn't going to be part of those base requirements
16:59:26 avishay: we can figure out what to do going forward in Havana
16:59:27 * JM2 sighs in relief
16:59:45 JM2: Yeah, that would be unfair, especially since the code hasn't landed yet IMO
16:59:54 avishay: does that make sense? Or am I still not being clear?
17:00:10 alright... two more things an 1 minute left
17:00:14 jgriffith: OK, i think the policy is good for now, and for Havana we'll need something more well-defined
17:00:17 Oh.. no minutes left
17:00:34 I'm planning to stop accepting BP's for new drivers after this week
17:00:43 jgriffith: yes sir
17:01:08 I should've published something for this earlier, and in the future I'll probably set an earlier timeline like halfway through H3.
17:01:13 Anyway... just wanted to give a heads up
17:01:27 #topic reviews
17:01:53 Everybody please, go to https://review.openstack.org/#/q/status:open+cinder,n,z
17:02:03 Do a review, do 2, do 3...
17:02:07 do whatever you can
17:02:28 The longer things sit the more trouble we have with merge conflicts etc
17:02:33 let's keep things rollin here please
17:02:39 and we're out of time...
17:02:52 Remember openstack-cinder or openstack-dev
17:02:59 morning
17:03:01 Most of the folks here are there at all hours of the day
17:03:11 and now I'll turn it over to the Xen team
17:03:15 #endmeeting
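(Returning to the fibre_channel.py idea from the EMC/HP exchange earlier in the log: the suggestion was to collect generic FC plumbing in one importable module so each FC-capable driver does not re-implement it, while avishay keeps a single driver class and pulls the helpers in as needed. The sketch below only illustrates that shape; every name in it is hypothetical, and the connection_info layout shown is an assumption rather than something taken from the merged FC code.)

```python
# fibre_channel.py -- hypothetical shared helpers for FC-capable drivers.
# Nothing here is real Cinder code; it only illustrates the "shared module
# that drivers import" idea discussed in the meeting.


def build_fc_connection_info(target_wwns, target_lun, initiator_wwpns):
    """Assemble a connection_info dict for initialize_connection().

    Drivers that already know the target WWNs and LUN for a volume can call
    this instead of hand-rolling the dict format.  The key names here are
    assumed, not authoritative.
    """
    return {
        'driver_volume_type': 'fibre_channel',
        'data': {
            'target_wwn': target_wwns,
            'target_lun': target_lun,
            'initiator_target_map': dict((wwpn, target_wwns)
                                         for wwpn in initiator_wwpns),
        },
    }


class FibreChannelHelperMixin(object):
    """Mixin a driver can combine with its iSCSI class, matching avishay's
    preference for one driver class handling multiple connection types."""

    def initialize_fc_connection(self, volume, connector):
        # Backend-specific lookup, then a shared formatting step.
        target_wwns, lun = self._get_fc_target_info(volume, connector)
        return build_fc_connection_info(target_wwns, lun,
                                        connector.get('wwpns', []))

    def _get_fc_target_info(self, volume, connector):
        """Must be provided by the concrete driver for its own array."""
        raise NotImplementedError()
```

(Whether this ends up as free functions, a mixin, or a full parent class is exactly the open design question in the log; the point of the sketch is only that the shared pieces live in one place that any FC driver can import.)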