16:01:43 <jgriffith> #startmeeting cinder
16:01:44 <openstack> Meeting started Wed Feb 20 16:01:43 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:48 <openstack> The meeting name has been set to 'cinder'
16:01:53 <thingee> o/
16:01:58 <eharney> hi
16:02:01 <xyang_> hi
16:02:01 <jgallard> hi
16:02:03 <JM1> hi
16:02:07 <bswartz> hi
16:02:08 <rushiagr> hi!
16:02:15 <jgriffith> wow... full house this am
16:02:26 <jgriffith> alright, I suspect a busy meeting so let's get at it
16:02:40 <jgriffith> #topic Shared Storage service updates
16:02:45 <jgriffith> bswartz: You're up
16:03:05 <bswartz> Okay, all of the code has been in review for the last several days
16:03:16 <bswartz> there is one piece not in though -- the LVM drivers
16:03:33 <bswartz> there were some changes requested during the initial review last week
16:03:48 <bswartz> and we won't be able to complete those (with unit test coverage) by today
16:04:16 <bswartz> so we're asking that the new service be accepted with NetApp drivers only, and that the LVM drivers be granted an extension until next week
16:04:26 <bswartz> comments?
16:04:48 <dachary> hi
16:05:28 <kmartin> I think that would be acceptable
16:05:30 <bswartz> I want to emphasize that complete LVM drivers were part of the original submission, and the changes being done are to respond to feedback
16:06:25 <jgriffith> anyone else have any input here?
16:06:30 <kmartin> I believe OpenStack has an exception process, if the core members are okay with it
16:06:46 <jgriffith> kmartin: yes however I have some info around that
16:06:51 <winston-d_> so what drivers can be used as Share back-end, beside NetApp and LVM?
16:07:07 <bswartz> those are the only ones that exist today
16:07:21 <jgriffith> bswartz: actually NetApp is the only one that exists
16:07:25 <bswartz> we hope other vendors with NAS-capable storage backends will write shared storage drivers too
16:07:44 <jgriffith> bswartz: It doesn't seem you've received much support on this so far
16:07:45 <winston-d_> do other vendors plan to do so?
16:08:00 <winston-d_> JM1: ?
16:08:12 <winston-d_> xyang_: ?
16:08:22 <xyang_> winston-d: we are thinking about it too
16:08:25 <bswartz> I have not heard from any other vendors yet -- we have been focused on getting the design implemented
16:08:33 <bswartz> the push for drivers will come next
16:08:41 <jgriffith> who's going to make that push?
16:08:46 <bswartz> we will
16:08:51 <JM1> sorry I didn't follow the shared storage discussion
16:09:04 <bswartz> NetApp plans to be very involved in the ongoing maintenance of the cinder-share stuff
16:09:05 <JM1> not sure this fits with our current model, but I need to check
16:09:19 <thingee> bswartz: just curious, but if you haven't been communicating with other vendors, is the design pretty general for others to come in?
16:09:20 <rushiagr> JM1: and everybody else: https://review.openstack.org/#/c/21290/
16:09:23 <DuncanT1> I'm also wondering how much the design will need to change to accommodate other vendors?
16:09:39 <bswartz> yes, the design is very generic -- similar to how generic the blocks interface is
16:09:47 <JM1> rushiagr: ah ok, that's what it's about
16:10:05 <bswartz> rushiagr: thx
16:10:08 <JM1> so yes, it would make a lot of sense for us to integrate here, but honestly I never quite understood the use case for this
16:10:29 <DuncanT1> What about the actual plumbing and making it work - network security and the likes? I'm still worried that I've seen no designs for that side
16:10:42 <bswartz> eharney: you asked for the changes to the CIFS LVM driver -- do you have any feelings about delivering it after the G3 deadline?
16:10:51 <winston-d_> JM1: yes, that's my feeling 'don't understand the use case' too
16:11:03 <thingee> bswartz: I do apologize on my part, but I'm a bit behind on the review. I'd say I'm about a third of the way done. rushiagr, would you be available today in case I have comments on the review?
16:11:10 <bswartz> DuncanT1: those are things we will address in Havana, there are still many use cases enabled by the existing code
16:11:22 <rushiagr> thingee: sure
16:11:32 <jgriffith> bswartz: so the way I see it then it's not ready until Havana
16:11:35 <eharney> bswartz: i don't really have much of an opinion on that
16:11:41 <DuncanT1> jgriffith: +1
16:11:59 <bswartz> jgriffith: for every use case that doesn't involve VLANs, there is no network plumbing required
16:12:01 <eharney> I will say that i also find myself not really understanding the use cases and intent of this service in general
16:12:03 <guitarzan> is this a dumb question: nova integration?
16:12:25 <jgriffith> guitarzan: nope, not at all
16:12:50 <bswartz> guitarzan: there's not a lot to be done with nova -- with NAS, the instances can talk directly to the storage backend over CIFS/NFS
16:12:53 <DuncanT1> bswartz: I don't think there are many 'real' installations that have VMs and infrastructure networks as a flat, un-protected space?
16:13:07 <jgriffith> bswartz: which brings me back to the question of why they need a service then?
16:13:15 <kmartin> bswartz: FWIW, we announced the plans for fibre channel support at the Folsom summit, rallied interested companies (5 to 6), and have been meeting with them weekly ever since on a common solution that will meet everyone's needs
16:13:15 <bswartz> DuncanT1: in public clouds no, but in private clouds yes
16:13:21 <guitarzan> bswartz: interesting... we don't allow guests access to our storage network directly
16:14:18 <DuncanT1> bswartz: I suspect any such design is fragile at best. All of the packaged private clouds I've seen (which is far from all of them) use network isolation
16:14:29 <dachary> as a cinder user I would be frustrated to see a feature in grizzly that is only available with a proprietary technology and no free software alternative.
16:14:52 <bswartz> dachary: I agree -- I would not advocate putting in the changes with no plans to also include LVM drivers
16:15:02 <winston-d_> bswartz: does any other cloud infrastructure software (such as VMware vCloud or CloudStack) provide a NAS service?
16:15:28 <bswartz> an alternative proposal would be to grant an extension on the whole change, waiting until the LVM drivers are included
16:15:54 <jgriffith> keep in mind there's an entire service you're proposing go into Grizzly that will have ZERO testing
16:16:01 <bswartz> winston-d_: I'm not aware of any, this is an opportunity to put OpenStack ahead of the rest
16:16:09 <jgriffith> No gate tests, no tempest tests, no devstack tests etc
16:16:35 <DuncanT1> I'd really like to see something, even a beer mat design, for how this can be made to work with segregated networks without major rework...
16:16:51 <bswartz> jgriffith: those are things we want to work on -- it's hard to put those in before there is something to test
16:17:07 <winston-d_> bswartz: ok. thx for the info. i'm interested to learn the real use cases in either public or private clouds. I know NAS is very useful but don't know how it is (or will be) used in a cloud.
16:17:10 <kmartin> It would have to have some type of test to be accepted
16:17:29 <jgriffith> bswartz: understood, but I suspect the TC would kick my butt if I dropped an entire service with zero integration into the openstack ecosystem
16:18:01 <thingee> jgriffith: +1
16:18:04 <bswartz> jgriffith: what I don't want to do is try to introduce everything with a big bang -- big bang changes never happen
16:18:20 <thingee> bswartz: for any other change I would be fine to make an exception, but this is a pretty big change.
16:18:22 <bswartz> there has to be an incremental approach to actually build software
16:18:42 <kmartin> jgriffith: Have you ran it by the TC yet?
16:18:58 <guitarzan> as a side note, we are already quite interested in shared storage, if we can make it work :)
16:19:05 <DuncanT1> But we can be incremental over the 6 months of a release far more smoothly than we can between releases... People expect releases to be fairly complete
16:19:08 <bswartz> I believe that laying the foundation for new APIs and reference impls is the right first step
16:19:44 <DuncanT1> i.e. merge what is there on day one of H rather than the last day of G
16:19:58 <kmartin> bswartz: agreed, but the incremental approach needs to include tests from the beginning
16:20:01 <dachary> DuncanT1: +1
16:20:21 <kmartin> DuncanT1: +1
16:20:32 <winston-d_> i think some of us are concerned that the reference impl is a proprietary back-end.
16:20:46 <jgallard> DuncanT1: +1
16:20:58 <bswartz> DuncanT1: The beginning is obviously better than the end. We did have a complete submission at the beginning of Grizzly, and it unfortunately has taken us this long to refactor the code to address the concerns of the core team
16:21:50 <bswartz> how about if we update our submission with the NFS based LVM driver today? The CIFS LVM driver still needs more time.
16:22:14 <jgriffith> Alright, we're at 20+ minutes on this topic
16:22:23 <jgriffith> I'm going to say it's not ready
16:22:43 <jgriffith> and I'm still not even convinced of the use case/need for it anyway
16:22:51 <jgriffith> but there's way too much risk here
16:22:53 <bswartz> jgriffith: will you grant an extension until next week and reconsider?
16:23:04 <jgriffith> bswartz: I don't have that power
16:23:32 <jgriffith> bswartz: FFE's come from the TC (ie Thierry) and I can assure you the answer would be no given the facts here
16:23:46 <bswartz> so this means we'll be stuck until Havana opens up?
16:24:27 <ttx> until RC1 is out, yes
16:24:49 <bswartz> I have to say I'm disappointed, but I won't argue it any further
16:24:56 <bswartz> I respect the will of the community
16:24:58 <rushiagr> jgriffith: bswartz: we can have the NFS part in at least, so that people can have a look and comment. Maybe we can disable the service for now, but let the code in with a warning?
16:25:43 <winston-d_> rushiagr: disable how?
16:26:02 <jgriffith> rushiagr: Nah, I think that's a sneaky way to get your code in unofficially :)
16:26:06 <thingee> rushiagr: I don't understand the point of it being in a major stable release if it's incomplete.
16:26:17 <rushiagr> winston-d_: I mean, not have it running by default
16:26:29 <jgriffith> Ok, we really should move on.
16:26:53 <rushiagr> jgriffith: okay..
16:27:16 <jgriffith> #topic G3 status
16:27:24 <thingee> bswartz: can we talk after the meeting?
16:27:38 <winston-d_> rushiagr: what service is being run really isn't controlled by the code itself unless you remove bin/cinder-share -- and that sounds very odd to me.
16:27:45 <jgriffith> So it looks like we're in OK shape, everything is in the review process
16:28:11 <jgriffith> remember the gates take a long time now, so please don't delay or hesitate on reviews and turnarounds
16:28:17 <jgriffith> DuncanT1: any update for backups?
16:28:28 <bswartz> thingee: yes i have another hour after this
16:28:57 <DuncanT1> jgriffith: We're just fighting unit tests after rebasing against the multi-backend stuff... hopefully a few hours off
16:29:26 <jgriffith> DuncanT1: alright
16:29:33 <smulcahy> We pushed another patch earlier today, it should have addressed most of the comments
16:29:49 <jgriffith> smulcahy: yeah, I saw that but it fails unit tests
16:29:58 <jgriffith> thanks to the oslo dump as DuncanT1 mentioned
16:30:26 <jgriffith> thingee: any ideas on the V2 client switch?
16:30:36 <DuncanT1> Head of tree fails unit tests if you run them certain ways that always used to work, which is slowing me down quite a bit
16:30:47 <jgriffith> DuncanT1: Yeah, I'm getting to that
16:30:59 <thingee> jgriffith: I'll need to recheck and make sure it's not just something failing at the gate
16:31:20 <jgriffith> thingee: looking last night, it seems it's a bona fide failure
16:31:39 <jgriffith> thingee: again, I think we're being bit by the new config/versioning common changes here
16:31:49 <xyang_> jgriffith: is tearDown still called for unit tests? it doesn't seem to be called any more
16:32:14 <jgriffith> xyang_: TBH they've completely jacked everything up so bad I don't even know what it's doing or not doing anymore
16:32:29 <jgriffith> xyang_: do you expect that's why the xml file was suddenly left behind for you?
16:32:40 <xyang_> jgriffith: yes
16:32:48 <jgriffith> xyang_: makes sense
16:33:02 <jgriffith> xyang_: considering I wasn't seeing that before
16:33:18 <jgriffith> xyang_: note I logged a couple of bugs for you if you could take care of those it would be great
16:33:32 <xyang_> jgriffith: yes, working on them
16:33:45 <jgriffith> xyang_: for the temp file we should be using tempdir or cleaning up manually anyway, but it is peculiar
16:33:48 <jgriffith> xyang_: thanks
16:34:12 <xyang_> jgriffith: can I just use "/tmp" as the path?
16:34:27 <jgriffith> eharney: Looks like you turned the LIO changes around, I'll look at them shortly
16:34:44 <jgriffith> xyang_: You can, but checkout using the tempdir module
16:34:56 <xyang_> jgriffith: sure
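For context, a minimal sketch of the tempdir-based cleanup being suggested here, using Python's standard tempfile module (the class and file names are illustrative, not from Cinder's test suite):

    import os
    import shutil
    import tempfile
    import unittest

    class DriverTestCase(unittest.TestCase):
        def setUp(self):
            # Give each test its own scratch directory instead of writing to /tmp directly.
            self.tempdir = tempfile.mkdtemp()

        def tearDown(self):
            # Remove everything the test wrote, even if an assertion failed first.
            shutil.rmtree(self.tempdir, ignore_errors=True)

        def test_writes_temp_file(self):
            path = os.path.join(self.tempdir, 'volume.xml')
            with open(path, 'w') as f:
                f.write('<volume/>')
            self.assertTrue(os.path.exists(path))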
16:35:16 <eharney> jgriffith: yep just had to rebase a little
16:35:21 <jgriffith> eharney: :)
16:35:31 <jgriffith> a lot of rebasing going on  :)
16:35:50 <jgriffith> I think those are the big issues...
16:36:11 <jgriffith> Everybody that can we just need to keep on top of reviews and keep things moving
16:36:16 <jgriffith> Please help out if you can
16:36:33 <jgriffith> #topic questions/issues
16:36:55 <jgriffith> Yes, unit tests are borked in local envs, I'll try and figure that out and update everyone
16:37:02 <jgriffith> Anybody else have anything?
16:37:04 <DuncanT1> :-)
16:38:04 <jgallard> I would like to write some integration tests for the multi-backend feature with tempest
16:38:13 <jgallard> I didn't find any such tests
16:38:22 <jgallard> what do you think about that?
16:38:30 <jgallard> is it a good area to explore?
16:38:33 <jgriffith> jgallard: I love the idea!
16:38:47 <jgallard> great ! :)
16:38:50 <jgriffith> jgallard: So we'll need to modify devstack as well of course
16:39:07 <jgriffith> jgallard: if you need pointers on how all that works and where it lives lemme know
16:39:09 <jgallard> Yes I think so
16:39:16 <DuncanT1> We will have a look at some backup tests for tempest too
16:39:17 <jgriffith> jgallard: although today won't be a good day for that :)
16:39:17 <kmartin> I had an action last week to populate the list of volume stats. I placed them on the https://wiki.openstack.org/wiki/Cinder wiki; I think I documented them correctly but would like someone to look them over
16:39:26 <jgriffith> DuncanT1: yes please :)
16:39:27 <jgallard> jgriffith: ok ! thanks a lot ;-)
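For background on the multi-backend feature those tempest tests would exercise, here is a hedged sketch of the Grizzly-era cinder.conf layout (the backend section names, volume groups, and backend names are illustrative):

    [DEFAULT]
    enabled_backends = lvm1,lvm2

    [lvm1]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI
    volume_group = cinder-volumes-1

    [lvm2]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI_b
    volume_group = cinder-volumes-2

A test could then create one volume type per volume_backend_name and assert that volumes of each type land on the expected backend.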
16:40:03 <jgriffith> kmartin: nice
16:40:04 <JM1> kmartin: the 'storage_protocol' looks very vague to me
16:40:21 <jgriffith> JM1: that's kind of intentional :)
16:40:25 <JM1> and I still wonder how/if it is used
16:41:02 <JM1> I see it as an implementation detail of the driver, but you may enlighten me
16:41:31 <kmartin> I could add a note about it matching the driver volume type?
16:41:34 <winston-d_> let's discuss what capabilities/stats drivers should report at the coming summit.
16:41:54 <jgriffith> winston-d_: +1
16:42:00 <kmartin> winston-d_: +1
16:42:09 <thingee> winston-d_: +1
16:42:13 <xyang_> winston-d_: +1
16:42:19 <jgriffith> JM1: This is something that we plan to formalize
16:42:30 <JM1> no plan to attend the summit on my side, but I will read your notes on this ;)
16:42:31 <jgriffith> JM1: right now it's kinda "loose" based on some needs we had
16:42:46 <kmartin> I think most drivers have volume stats now, so people could look at them for an example
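As an illustration, a hedged sketch of the kind of stats dict a driver's get_volume_stats() reports in the Grizzly-era style; the exact keys vary by driver and every value below is a placeholder:

    def get_volume_stats(self, refresh=False):
        """Return capability/stats info the filter scheduler uses for placement."""
        if refresh or not self._stats:
            self._stats = {
                'volume_backend_name': 'MyBackend',  # matched against volume-type extra specs
                'vendor_name': 'ExampleVendor',
                'driver_version': '1.0',
                'storage_protocol': 'iSCSI',
                'total_capacity_gb': 1024,
                'free_capacity_gb': 512,
                'reserved_percentage': 0,
                'QoS_support': False,
            }
        return self._stats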
16:42:55 <JM1> jgriffith: ok, and I suppose the needs aren't clear yet
16:43:02 <jgriffith> JM1: :)
16:43:08 <jgriffith> JM1: some are, some aren't
16:43:15 <JM1> ok, that's fine with me
16:43:28 <JM1> until then we can have placeholders and improve as needed
16:44:02 <winston-d_> JM1: did you read this? https://etherpad.openstack.org/p/cinder-driver-modification-for-filterscheduler
16:44:35 <JM1> winston-d_: nope, didn't know about it, thanks for the pointer
16:44:49 <winston-d_> and finally, i have a doc about the filter scheduler.
16:45:15 <winston-d_> don't know how to share it with you all except by explicitly adding each one of you to the Google doc's share list.
16:45:25 <xyang_> winston-d_: that looks great.  thanks!
16:45:44 <jgriffith> winston-d_: make it viewable by anyone with link
16:45:46 <jgriffith> and post the link
16:47:08 <winston-d_> jgriffith: sure. let me try to find the
16:47:44 <JM1> winston-d_: interesting, we don't fit in any of the proposed "protocols"
16:48:29 <jgriffith> JM1: what protocol do you use?
16:48:47 <jgriffith> JM1: sorry.. not sure of your affiliation/device
16:48:51 <winston-d_> all, here's the doc for filter scheduler: https://docs.google.com/document/d/1fDXaBD9B7D6A_5RCitJ8mMTPv9nliHKNex8K8-WhYlQ/edit?usp=sharing
16:48:54 <xyang_> winston-d_: can you also explain how multiple volume drivers use volume_type in your doc?
16:49:01 <JM1> jgriffith: we provide a FUSE filesystem, which talks to our servers with our own proprietary protocol
16:49:13 <jgriffith> Oh... yes, now I know your patch :)
16:49:13 <winston-d_> xyang_: check out my filter scheduler doc i just posted.
16:49:21 <xyang_> winston-d_: sure
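For anyone following along, a short hedged example of how volume types tie multi-backend and the filter scheduler together (the type name is made up):

    cinder type-create fastlvm
    cinder type-key fastlvm set volume_backend_name=LVM_iSCSI
    cinder create --volume-type fastlvm --display-name vol1 10

The scheduler's CapabilitiesFilter then only passes back-ends whose reported volume_backend_name matches the type's extra spec.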
16:49:42 <jgriffith> JM1: Ceph would be the closest in terms of protocol I believe
16:49:57 <JM1> jgriffith: we will provide NFS in a release this year, but it's not there yet
16:50:02 <jgriffith> JM1: https://review.openstack.org/#/c/22400/1/cinder/volume/drivers/rbd.py
16:50:09 <jgriffith> JM1: sure
16:50:15 <JM1> well, Ceph is different in many ways
16:50:24 <jgriffith> JM1: yes understood
16:50:27 <JM1> and even if we were close, we would still be slightly different
16:50:39 <JM1> I don't see how useful to the scheduler this information could be
16:50:48 <winston-d_> please do give me feedback on anything you find unclear, badly written, or typo'd -- anything.
16:50:51 <jgriffith> JM1: What I'm saying is, as far as the "storage_protocol" goes, just do as Ceph did and put in your own custom entry for now
16:51:04 <JM1> jgriffith: ah yes, sure
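A minimal sketch of that custom entry, inside the stats dict shown earlier ('myfs' is a placeholder value; the RBD driver does the same with 'ceph'):

    # In get_volume_stats(): report a driver-specific protocol name for now.
    self._stats['storage_protocol'] = 'myfs'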
16:51:14 <avishay> Hi all, I'm very late :)
16:51:17 <jgriffith> JM1: Yes, you made that clear to me the other day when you said our architecture was shit
16:51:20 <jgriffith> :)
16:51:27 <JM1> did I say this?
16:51:38 <JM1> I try to say things softer usually :)
16:51:43 <jgriffith> haha
16:51:47 <JM1> (even when I mean it like that)
16:51:51 <jgriffith> LOL
16:52:16 <jgriffith> alrighty... anybody have anything else pressing?
16:52:25 * DuncanT1 is intrigued to hear all of that rant some time
16:53:10 <JM1> jgriffith: I will have a process question related to my nova patch
16:53:14 <JM1> but that may be off topic here
16:53:22 <jgriffith> Sure, hit me up later
16:53:28 <JM1> great
16:53:28 <winston-d_> let me know if there are any more use cases around the filter scheduler or scheduling you would like to see in the document.
16:53:39 <jgriffith> Ok... everyone thank you very much.
16:53:50 <DuncanT1> Thanks John
16:53:51 <jgriffith> Off to figure out how to fix the unit test debacle.
16:54:06 <jgriffith> I'll be around if anybody needs anything or wants to help with stuff :)
16:54:11 <jgriffith> #endmeeting