16:01:43 #startmeeting cinder
16:01:44 Meeting started Wed Feb 20 16:01:43 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:48 The meeting name has been set to 'cinder'
16:01:53 o/
16:01:58 hi
16:02:01 hi
16:02:01 hi
16:02:03 hi
16:02:07 hi
16:02:08 hi!
16:02:15 wow... full house this am
16:02:26 alright, I suspect a busy meeting so let's get at it
16:02:40 #topic Shared Storage service updates
16:02:45 bswartz: You're up
16:03:05 Okay, all of the code has been in review for the last several days
16:03:16 there is one piece not in though -- the LVM drivers
16:03:33 there were some changes requested during the initial review last week
16:03:48 and we won't be able to complete those (with unit test coverage) by today
16:04:16 so we're asking that the new service be accepted with NetApp drivers only, and that the LVM drivers be granted an extension until next week
16:04:26 comments?
16:04:48 hi
16:05:28 I think that would be acceptable
16:05:30 I want to emphasize that complete LVM drivers were part of the original submission, and the changes being done are in response to feedback
16:06:25 anyone else have any input here?
16:06:30 I believe OpenStack has an exception process, if the core members are okay with it
16:06:46 kmartin: yes, however I have some info around that
16:06:51 so what drivers can be used as a Share back-end, besides NetApp and LVM?
16:07:07 those are the only ones that exist today
16:07:21 bswartz: actually NetApp is the only one that exists
16:07:25 we hope other vendors with NAS-capable storage backends will write shared storage drivers too
16:07:44 bswartz: It doesn't seem you've received much support on this so far
16:07:45 do other vendors plan to do so?
16:08:00 JM1: ?
16:08:12 xyang_: ?
16:08:22 winston-d: we are thinking about it too
16:08:25 I have not heard from any other vendors yet -- we have been focused on getting the design implemented
16:08:33 the push for drivers will come next
16:08:41 who's going to make that push?
16:08:46 we will
16:08:51 sorry, I didn't follow the shared storage discussion
16:09:04 NetApp plans to be very involved in the ongoing maintenance of the cinder-share stuff
16:09:05 not sure this fits with our current model, but I need to check
16:09:19 bswartz: just curious, but if you haven't been communicating with other vendors, is the design general enough for others to come in?
16:09:20 JM1: and everybody else: https://review.openstack.org/#/c/21290/
16:09:23 I'm also wondering how much the design will need to change to accommodate other vendors?
16:09:39 yes, the design is very generic -- similar to how generic the block interface is
16:09:47 rushiagr: ah ok, that's what it's about
16:10:05 rushiagr: thx
16:10:08 so yes, it would make much sense for us to integrate here, but honestly I never quite understood the use case for this
16:10:29 What about the actual plumbing and making it work - network security and the like? I'm still worried that I've seen no designs for that side
16:10:42 eharney: you asked for the changes to the CIFS LVM driver -- do you have any feelings about delivering it after the G3 deadline?
16:10:51 JM1: yes, that's my feeling too: 'don't understand the use case'
16:11:03 bswartz: I do apologize on my part, but I'm a bit behind on the review. I'd say 1/3 of the way done. rushiagr, would you be available today in case I have comments on the review?
16:11:10 DuncanT1: those are things we will address in Havana; there are still many use cases enabled by the existing code
16:11:22 thingee: sure
16:11:32 bswartz: so the way I see it then, it's not ready until Havana
16:11:35 bswartz: i don't really have much of an opinion on that
16:11:41 jgriffith: +1
16:11:59 jgriffith: for every use case that doesn't involve VLANs, there is no network plumbing required
16:12:01 I will say that i also find myself not really understanding the use cases and intent of this service in general
16:12:03 is this a dumb question: nova integration?
16:12:25 guitarzan: nope, not at all
16:12:50 guitarzan: there's not a lot to be done with nova -- with NAS, the instances can talk directly to the storage backend over CIFS/NFS
16:12:53 bswartz: I don't think there are many 'real' installations that have VMs and infrastructure networks as a flat, un-protected space?
16:13:07 bswartz: which brings me back to the question of why they need a service then?
16:13:15 bswartz: FWIW, we announced the plans for fibre channel support at the Folsom summit, rallied interested companies (5 to 6), and have been meeting with them weekly ever since for a common solution that will meet everyone's needs
16:13:15 DuncanT1: in public clouds no, but in private clouds yes
16:13:21 bswartz: interesting... we don't allow guests access to our storage network directly
16:14:18 bswartz: I suspect any such design is fragile at best. All of the packaged private clouds I've seen (which is far from all of them) use network isolation
16:14:29 as a cinder user I would be frustrated to see a feature in grizzly that is only available with a proprietary technology and no free software alternative.
16:14:52 dachary: I agree -- I would not advocate putting in the changes with no plans to also include LVM drivers
16:15:02 bswartz: does any other cloud infrastructure software (such as VMware vCloud or CloudStack) provide a NAS service?
16:15:28 an alternative proposal would be to grant an extension on the whole change, waiting until the LVM drivers are included
16:15:54 keep in mind there's an entire service you're proposing goes into Grizzly that will have ZERO testing
16:16:01 winston-d_: I'm not aware of any; this is an opportunity to put OpenStack ahead of the rest
16:16:09 No gate tests, no tempest tests, no devstack tests etc
16:16:35 I'd really like to see something, even a beer mat design, for how this can be made to work with segregated networks without major rework...
16:16:51 jgriffith: those are things we want to work on -- it's hard to put those in before there is something to test
16:17:07 bswartz: ok. thx for the info. i'm interested to learn the real use cases in either public/private cloud. I know NAS is very useful but don't know how it is (or will be) used in the cloud.
16:17:10 It would have to have some type of test to be accepted
16:17:29 bswartz: understood, but I suspect the TC would kick my butt if I dropped an entire service with zero integration into the openstack ecosystem
16:18:01 jgriffith: +1
16:18:04 jgriffith: what I don't want to do is try to introduce everything with a big bang -- big bang changes never happen
16:18:20 bswartz: for any other change I would be fine to make an exception, but this is a pretty big change.
16:18:22 there has to be an incremental approach to actually build software
16:18:42 jgriffith: Have you run it by the TC yet?
16:18:58 as a side note, we are already quite interested in shared storage, if we can make it work :)
16:19:05 But we can be incremental over the 6 months of a release far more smoothly than we can between releases... People expect releases to be fairly complete
16:19:08 I believe that laying the foundation for new APIs and reference impls is the right first step
16:19:44 i.e. merge what is there on day one of H rather than the last day of G
16:19:58 bswartz: agreed, but the incremental approach needs to include tests from the beginning
16:20:01 DuncanT1: +1
16:20:21 DuncanT1: +1
16:20:32 i think some of us are concerned that the reference impl is a proprietary back-end.
16:20:46 DuncanT1: +1
16:20:58 DuncanT1: The beginning is obviously better than the end. We did have a complete submission at the beginning of Grizzly, and it unfortunately has taken us this long to refactor the code to address the concerns of the core team
16:21:50 how about if we update our submission with the NFS-based LVM driver today? The CIFS LVM driver still needs more time.
16:22:14 Alright, we're at 20+ minutes on this topic
16:22:23 I'm going to say it's not ready
16:22:43 and I'm still not even convinced of the use case/need for it anyway
16:22:51 but there's way too much risk here
16:22:53 jgriffith: will you grant an extension until next week and reconsider?
16:23:04 bswartz: I don't have that power
16:23:32 bswartz: FFEs come from the TC (i.e. Thierry) and I can assure you the answer would be no given the facts here
16:23:46 so this means we'll be stuck until Havana opens up?
16:24:27 until RC1 is out, yes
16:24:49 I have to say I'm disappointed, but I won't argue it any further
16:24:56 I respect the will of the community
16:24:58 jgriffith: bswartz: we can have the NFS part in at least, so that people can have a look and comment. Maybe we can disable this service for now, but let the code in with a warning?
16:25:43 rushiagr: disable how?
16:26:02 rushiagr: Nah, I think that's a sneaky way to get your code in unofficially :)
16:26:06 rushiagr: I don't see the point of having it in a major stable release if it's incomplete.
16:26:17 winston-d_: I mean, not have it running by default
16:26:29 Ok, we really should move on.
16:26:53 jgriffith: okay..
16:27:16 #topic G3 status
16:27:24 bswartz: can we talk after the meeting?
16:27:38 rushiagr: that (what service is being run) is really not controlled by the code itself unless you remove bin/cinder-share; that sounds very odd to me.
16:27:45 So it looks like we're in OK shape, everything is in the review process
16:28:11 remember the gates take a long time now, so please don't delay/hesitate on reviews and turnarounds
16:28:17 DuncanT1: any update for backups?
16:28:28 thingee: yes, i have another hour after this
16:28:57 jgriffith: We're just fighting unit tests after rebasing against the multi-backend stuff... hopefully a few hours off
16:29:26 DuncanT1: alright
16:29:33 We pushed another patch earlier today; it should have addressed most of the comments
16:29:49 smulcahy: yeah, I saw that but it fails unit tests
16:29:58 thanks to the oslo dump, as DuncanT1 mentioned
16:30:26 thingee: any ideas on the V2 client switch?
16:30:36 Head of tree fails unit tests if you run them certain ways that always used to work, which is slowing me down quite a bit
16:30:47 DuncanT1: Yeah, I'm getting to that
16:30:59 jgriffith: I'll need to recheck and make sure something is just not failing at gate
16:31:20 thingee: looking last night, it seems it's a bona fide failure
16:31:39 thingee: again, I think we're being bitten by the new config/versioning common changes here
16:31:49 jgriffith: is TearDown still called for unit tests? doesn't seem to be called any more
16:32:14 xyang_: TBH they've completely jacked everything up so badly I don't even know what it's doing or not doing anymore
16:32:29 xyang_: Is that why the xml file was all of a sudden left behind for you, you expect?
16:32:40 jgriffith: yes
16:32:48 xyang_: makes sense
16:33:02 xyang_: considering I wasn't seeing that before
16:33:18 xyang_: note I logged a couple of bugs for you; if you could take care of those it would be great
16:33:32 jgriffith: yes, working on them
16:33:45 xyang_: for the temp file we should be using tempdir or cleaning up manually anyway, but it is peculiar
16:33:48 xyang_: thanks
16:34:12 jgriffith: can I just use "/tmp" as the path?
16:34:27 eharney: Looks like you turned the LIO changes around, I'll look at them shortly
16:34:44 xyang_: You can, but check out using the tempdir module
[see the tempfile sketch after the end of the log]
16:34:56 jgriffith: sure
16:35:16 jgriffith: yep, just had to rebase a little
16:35:21 eharney: :)
16:35:31 a lot of rebasing going on :)
16:35:50 I think those are the big issues...
16:36:11 Everybody that can: we just need to keep on top of reviews and keep things moving
16:36:16 Please help out if you can
16:36:33 #topic questions/issues
16:36:55 Yes, unit tests are borked in local envs; I'll try and figure that out and update everyone
16:37:02 Anybody else have anything?
16:37:04 :-)
16:38:04 I would like to make some integration tests for the multibackend feature with tempest
16:38:13 I didn't find such tests
16:38:22 what do you think about that?
16:38:30 is it a good area to explore?
16:38:33 jgallard: I love the idea!
16:38:47 great! :)
16:38:50 jgallard: So we'll need to modify devstack as well, of course
16:39:07 jgallard: if you need pointers on how all that works and where it lives, lemme know
16:39:09 Yes, I think so
16:39:16 We will have a look at some backup tests for tempest too
16:39:17 jgallard: although today won't be a good day for that :)
[see the multi-backend config sketch after the end of the log]
16:39:17 I had an action last week to populate the list of volume stats. I placed them on the https://wiki.openstack.org/wiki/Cinder wiki; I think I documented them correctly but would like someone to look at them
16:39:26 DuncanT1: yes please :)
16:39:27 jgriffith: ok! thanks a lot ;-)
16:40:03 kmartin: nice
16:40:04 kmartin: the 'storage_protocol' looks very vague to me
16:40:21 JM1: that's kind of intentional :)
16:40:25 and I still wonder how/if it is used
16:41:02 I see it as an implementation detail of the driver, but you may enlighten me
16:41:31 I could add a note about it matching the driver volume type?
16:41:34 let's discuss what capabilities/stats to report for drivers at the coming summit.
16:41:54 winston-d_: +1
16:42:00 winston-d_: +1
16:42:09 winston-d_: +1
16:42:13 winston-d_: +1
16:42:19 JM1: This is something that we plan to formalize
16:42:30 no plan to attend the summit on my side, but I will read your notes on this ;)
16:42:31 JM1: right now it's kinda "loose", based on some needs we had
16:42:46 I think most drivers have volume stats now, so people could look at them for an example
16:42:55 jgriffith: ok, and I suppose the needs aren't clear yet
16:43:02 JM1: :)
16:43:08 JM1: some are, some aren't
16:43:15 ok, that's fine with me
16:43:28 until then we can have placeholders and improve as needed
16:44:02 JM1: did you read this? https://etherpad.openstack.org/p/cinder-driver-modification-for-filterscheduler
16:44:35 winston-d_: nope, didn't know about it, thanks for the pointer
16:44:49 and finally, i have a doc about the filter scheduler.
16:45:15 don't know how to share it with you all except by explicitly adding each one of you to the google doc's share list.
16:45:25 winston-d_: that looks great. thanks!
16:45:44 winston-d_: make it viewable by anyone with the link
16:45:46 and post the link
16:47:08 jgriffith: sure. let me try to find the
16:47:44 winston-d_: interesting, we don't fit in any of the proposed "protocols"
16:48:29 JM1: what protocol do you use?
16:48:47 JM1: sorry.. not sure of your affiliation/device
16:48:51 all, here's the doc for the filter scheduler: https://docs.google.com/document/d/1fDXaBD9B7D6A_5RCitJ8mMTPv9nliHKNex8K8-WhYlQ/edit?usp=sharing
16:48:54 winston-d_: can you also explain how multiple volume drivers use volume_type in your doc?
16:49:01 jgriffith: we provide a FUSE filesystem, which talks to our servers with our own proprietary protocol
16:49:13 Oh... yes, now I know your patch :)
16:49:13 xyang_: check out my filter scheduler doc i just posted.
16:49:21 xyang_: sure
[see the volume-type routing sketch after the end of the log]
16:49:42 JM1: Ceph would be the closest in terms of protocol, I believe
16:49:57 jgriffith: we will provide NFS in a release this year, but it's not there yet
16:50:02 JM1: https://review.openstack.org/#/c/22400/1/cinder/volume/drivers/rbd.py
16:50:09 JM1: sure
16:50:15 well, Ceph is different in many ways
16:50:24 JM1: yes, understood
16:50:27 and even if we were close, we'd still be slightly different
16:50:39 I don't see how useful this information could be to the scheduler
16:50:48 please do give me feedback on anything you find unclear, badly written, or full of typos.
16:50:51 JM1: What I'm saying is, as far as "storage_protocol" goes, just do as Ceph did and put your own custom entry for now
[see the get_volume_stats sketch after the end of the log]
16:51:04 jgriffith: ah yes, sure
16:51:14 Hi all, I'm very late :)
16:51:17 JM1: Yes, you made that clear to me the other day when you said our architecture was shit
16:51:20 :)
16:51:27 did I say this?
16:51:38 I try to say things softer usually :)
16:51:43 haha
16:51:47 (even when I mean it like that)
16:51:51 LOL
16:52:16 alrighty... anybody have anything else pressing?
16:52:25 * DuncanT1 is intrigued to hear all of that rant some time
16:53:10 jgriffith: I will have a process question related to my nova patch
16:53:14 but that may be off topic here
16:53:22 Sure, hit me up later
16:53:28 great
16:53:28 let me know if there are any more use cases around the filter scheduler or scheduling you would like to see in the document.
16:53:39 Ok... everyone, thank you very much.
16:53:50 Thanks John
16:53:51 Off to figure out how to fix the unit test debacle.
16:54:06 I'll be around if anybody needs anything or wants to help with stuff :)
16:54:11 #endmeeting
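
Sketches follow, expanding on technical points raised in the meeting; they are illustrations only, not code from the Cinder tree.

Tempfile sketch (re: the temp file exchange at 16:33:45-16:34:44). The "tempdir module" presumably refers to Python's standard tempfile module, which hands out unique paths and makes cleanup explicit rather than hard-coding "/tmp". A minimal sketch, assuming a plain unittest-style test; the class name, file name, and assertion are hypothetical:

    import os
    import shutil
    import tempfile
    import unittest

    class HypotheticalDriverTestCase(unittest.TestCase):
        def setUp(self):
            # mkdtemp() creates a unique private directory instead of
            # writing into a shared, hard-coded "/tmp" path.
            self.tempdir = tempfile.mkdtemp()

        def tearDown(self):
            # Explicit cleanup, so nothing is left behind even if the
            # framework's tearDown handling changes underneath us.
            shutil.rmtree(self.tempdir, ignore_errors=True)

        def test_writes_xml(self):
            path = os.path.join(self.tempdir, 'config.xml')
            with open(path, 'w') as f:
                f.write('<config/>')
            self.assertTrue(os.path.exists(path))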
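
Multi-backend config sketch (re: jgallard's tempest question at 16:38:04 and the rebasing mentioned at 16:28:57). Multi-backend is driven by cinder.conf, which is what devstack would have to generate for such tests. A hedged sketch, assuming the Grizzly-era enabled_backends / volume_backend_name options and LVM driver path; the group names and volume groups below are invented:

    [DEFAULT]
    # Each name points at a config group below; a cinder-volume
    # instance is run per backend.
    enabled_backends = lvm-1,lvm-2

    [lvm-1]
    volume_group = cinder-volumes-1
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI

    [lvm-2]
    volume_group = cinder-volumes-2
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI_b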
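
get_volume_stats sketch (re: the stats list at 16:39:17 and the storage_protocol advice at 16:50:51). Each driver reports a stats dict that the filter scheduler consumes, and a custom storage_protocol entry is acceptable, as the Ceph/RBD driver does (see the review linked at 16:50:02). A minimal sketch of the dict's shape, assuming the keys documented on the wiki page kmartin mentioned; the class and all values are placeholders:

    class ExampleDriver(object):
        """Hypothetical driver, for illustration only."""

        def get_volume_stats(self, refresh=False):
            # A real driver would query its backend for these numbers;
            # the values below are made up.
            return {
                'volume_backend_name': 'example-backend',
                'vendor_name': 'ExampleVendor',
                'driver_version': '1.0',
                # Custom protocol string, as RBD does with 'ceph'.
                'storage_protocol': 'example-fuse',
                'total_capacity_gb': 100,
                'free_capacity_gb': 42,
                'reserved_percentage': 0,
                'QoS_support': False,
            }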
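
Volume-type routing sketch (re: xyang_'s question at 16:48:54). Under the filter scheduler, a volume type's extra specs are matched against the stats each backend reports, and volume_backend_name is the usual key for steering a type to one backend. A hedged sketch of the operator workflow with the cinder CLI of that era; the type name "gold" is invented:

    # Create a type and pin it to one backend's reported name
    cinder type-create gold
    cinder type-key gold set volume_backend_name=LVM_iSCSI

    # Volumes of this type are scheduled only to backends
    # reporting volume_backend_name=LVM_iSCSI
    cinder create --volume-type gold 10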