16:01:41 <DuncanT> #startmeeting Cinder
16:01:42 <openstack> Meeting started Wed Feb  6 16:01:41 2013 UTC.  The chair is DuncanT. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:45 <openstack> The meeting name has been set to 'cinder'
16:01:48 <DuncanT> Lo all
16:01:53 <JM1> hi
16:01:53 <bswartz> hi
16:01:54 <hemna_> :)
16:01:54 <winston-d_> hi
16:01:56 <xyang_> hi
16:01:59 <rushiagr> hi
16:02:06 <thingee> o/
16:02:13 <eharney> hi
16:02:55 <DuncanT> JGriffith is away, so he's asked me to chair. There's the bare bones of an agenda at http://wiki.openstack.org/CinderMeetings as usual but PM me or shout up if there's something you want to discuss
16:03:37 <hemna_> So I saw avishay's comments on the issues he's having with FC
16:03:53 <hemna_> I'm going to try and see if I can dig up a QLogic HBA today at work and try and reproduce his issues.
16:04:13 <DuncanT> FC still isn't merged, correct?
16:04:18 <hemna_> correct
16:04:26 <DuncanT> (sorry, been away for a bit, still playing catchup)
16:04:39 <hemna_> it's been getting a good amount of reviews from the nova guys lately though
16:04:51 <DuncanT> Sounds like testing is progressing anyway, which is great
16:04:56 <hemna_> yup
16:05:04 <DuncanT> Anything else you need
16:05:05 <DuncanT> ?
16:05:13 <hemna_> don't think so
16:05:34 <DuncanT> Good stuff
16:06:44 <DuncanT> The only blueprint I'm involved with is volume backup... we're fixing uuids and iscsi attach, and a new review will be up shortly. I don't understand the comments from ronenkat but I'm hoping they'll get back to me with more detail
16:07:52 <hemna_> need more eyes for the review?
16:08:15 <bswartz> DuncanT: is volume backup working completely now?
16:08:41 <DuncanT> bswartz: Multi-node is broken in some cases until the iscsi attach change turns up
16:08:41 <thingee> DuncanT: some of the questions I've been asking about coverage with the backup manager seemed to get missed last time. Francis was saying it has 100% but I just can't see that in the coverage report.
16:09:08 <bswartz> DuncanT: cool
16:09:14 <DuncanT> thingee: He's going to mail you about that... definitely not seeing what you're seeing
16:09:33 <smulcahy> thingee: We've re-run the coverage report on fresh devstacks with the branch and we're getting very different coverage reports
16:09:43 <smulcahy> thingee: So not sure why the discrepancy
16:10:10 <DuncanT> hemna: More eyes are always good, particularly as we think the next patch will be pretty much done, except possibly some testing issues
16:10:11 <thingee> it must be something weird in my env. I'll try a fresh repo this time instead of just a new venv
16:10:19 <thingee> smulcahy, DuncanT ^
16:10:39 <DuncanT> thingee: Thanks for that
16:11:07 <hemna_> coolio
16:11:28 <DuncanT> Any other blueprints to comment on? Multi backend scheduler?
16:11:49 <smulcahy> thingee: thanks - yeah, maybe try from scratch because we're not seeing your coverage results.
16:11:50 <thingee> DuncanT: o/
16:11:50 <winston-d_> yup
16:12:06 <DuncanT> thingee: Yup
16:12:21 <thingee> DuncanT: cinderclient v2 is up for review.
16:12:45 <thingee> DuncanT: jgriffith mentioned he wanted args to be consistent. I'll make a comment to the review and switch back to wip
16:12:45 <DuncanT> Ooo, hadn't spotted that
16:12:46 <winston-d_> i have some comments on the multi back-end volume service patch, but haven't gone through the whole patch yet. i'll talk to hub_cap offline
16:12:58 <rushiagr> and so is 'NAS as a separate service' code
16:13:08 <thingee> DuncanT: I'm going to deprecate the other arg style for a release so everyone is happy :)
16:13:31 <DuncanT> thingee: People are never happy
16:13:38 <thingee> DuncanT: I'm happy
16:14:04 <DuncanT> We've a general plea for reviews... quite a few open, and quite a few review comments with no response
16:14:05 <thingee> DuncanT: docs are coming along. not too worried about "feature freeze" deadline with 'em ;)
16:14:30 <DuncanT> https://review.openstack.org/#/q/status:open+project:openstack/cinder,n,z
16:14:53 <thingee> DuncanT: v1 doc is just about done. will be doing that more and maybe starting v2 over the weekend
16:15:01 <DuncanT> https://review.openstack.org/#/q/status:open+project:openstack/python-cinderclient,n,z
16:15:21 <DuncanT> thingee: Good stuff. Will take a look at the v2 client stuff asap
16:16:05 <bswartz> DuncanT: is there any way we, as reviewers, can prioritize what to review?
16:16:31 <thingee> bswartz: yea hang on
16:16:45 <thingee> bswartz: https://launchpad.net/cinder/+milestone/grizzly-3
16:16:47 <winston-d_> bswartz: i think those reviews that are targeted to G3 should come first
16:16:52 <thingee> bswartz: whatever is in code review
16:17:00 <DuncanT> bswartz: I tend to go with stuff I've previously commented on that has been updated, followed by G3 stuff
16:17:17 <bswartz> okay, all good suggestions
16:17:26 <DuncanT> bswartz: I also encourage people to shout up when they've something they feel is being ignored
16:17:42 <bswartz> it would be ideal if there was a way to minimize overlapping review work so everything gets equal coverage, but perhaps that's not possible
16:17:58 <bswartz> DuncanT: +1 that works too
16:18:18 <xyang_> what about reviews for bug fixes?  they are not targeted, but have to go in, right?
16:18:45 <bswartz> xyang_: technically, bugfixes could go in after G-3
16:19:10 <xyang_> bswartz: ok
16:19:19 <thingee> xyang_: you can ping core devs available in #openstack-cinder too
16:19:26 <thingee> or #openstack-dev
16:19:32 <bswartz> that's not a reason to ignore them, but they feel lower priority to me than new features
16:19:54 <thingee> DuncanT: what else?
16:20:02 <xyang_> thingee: ok
16:20:10 <DuncanT> #topic Policy on what's required for a new driver
16:20:10 <Yada> what about bugs which impact BP devs (volume_create issues) ?
16:20:44 <DuncanT> Yada: If you think something needs bumping up the priority, your best bet is to poke people in #openstack-cinder
16:21:20 <DuncanT> Yada: Most of the core team hang about in there, and are more likely to be responsive to people being keen
16:21:21 <Yada> We will, because it may block our cinder BP approval: currently not working anymore (was ok days ago) ;-)
16:22:08 <Yada> Working on it to dig in and provide as much info as possible
16:22:34 <DuncanT> Yada: Will follow up with you in the cinder room after the meeting if you want
16:22:46 <Yada> no worries
16:23:12 <DuncanT> So John posted something to the openstack mailing list a week and a bit ago about minimum features in new drivers
16:23:54 <DuncanT> I can't find it right now but the gist was that new drivers should be at least as functional as the LVM one is at time of merging, unless they explain why they can't be
16:24:12 <thingee> DuncanT: I haven't been able to find it either. Maybe that's why he feels he got no reply :P
16:24:21 <DuncanT> Since nobody replied, this is likely to become policy unless somebody complains sharpish
16:24:33 <DuncanT> thingee: I did find it earlier... it was a reply down a thread
16:25:09 <DuncanT> Ah ha, in the thread "[Openstack] List of Cinder compatible devices"
16:25:21 <DuncanT> "Having to go through and determine what feature is or is not  supported per driver is EXACTLY what I want to avoid. If we go down the  path of building a matrix and allowing partial integration it's going to  create a huge mess and IMO the user experience is going to suffer  greatly.  Of course a driver can do more than what's on the list, but I  think this is the minimum requirement and I've been pushing back on  submissions based o
16:25:24 <rushiagr> the problem is partly that his sentences come at the very bottom of the mail
16:25:39 <kmartin> Yeah, I missed it as well...maybe he could add it to http://wiki.openstack.org/Cinder
16:25:46 <Yada> Yep he replied to an email from : Xiazhihui (Hashui, IT) <xiazhihui09@huawei.com>
16:25:49 <bswartz> DuncanT: so that policy implies that as new features are added to the LVM driver, all of the other drivers have to catch up eventually -- can we say something about how quickly that needs to happen?
16:26:16 <DuncanT> bswartz: I'd like a statement about that too, but not sure how to word it
16:26:39 <JM1> it would also be useful to list said features
16:26:43 <DuncanT> bswartz: Certainly I'd like a policy where we can threaten to drop unmaintained drivers
16:26:46 <bswartz> JM1: +1
16:26:58 <DuncanT> JM1: +1
16:27:52 <DuncanT> JM1: Any such list becomes stale if it is external to the code, but certainly a list of 'as of xxx date, the minimum feature list is...'
16:28:04 <DuncanT> Anybody got a problem with the concept?
16:28:19 <kmartin> clearly list each feature that needs to be implemented and add it to http://wiki.openstack.org/Cinder, since all new developers tend to start there
16:28:19 <winston-d_> nope, sounds good to me
16:28:31 <JM1> kmartin: +1
16:28:40 <xyang_> +1
16:28:43 <rushiagr> kmartin: +1, good point
16:28:53 <Yada> Based on my understanding and chat with John it is : Volume create | delete | attach | detach + Snapshot create | delete + Create Volume from Snapshot
16:28:58 <bswartz> also, if a driver needs to be updated to comply with a new feature, does that update count as a bugfix, or does it need a blueprint, and milestone, etc?
16:29:11 <DuncanT> bswartz: bugfix in general I think
16:29:14 <JM1> how about volume to/from image?
16:29:36 <xyang_> there's a generic function now in driver.py
16:29:42 <DuncanT> JM1: That and clone are there now in LVM, so I guess they are needed for a new driver
16:29:43 <kmartin> Each new release the list of features should be revisited and updated
16:29:54 <xyang_> I'm testing that function, but have to override something to get it to work
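For reference, the minimum feature set discussed above maps onto the driver interface roughly as in the sketch below. This is a hedged outline assuming the Grizzly-era cinder.volume.driver.VolumeDriver base class; FooDriver and its stub bodies are hypothetical placeholders, and exact signatures may differ slightly from the tree at the time.

```python
# Hedged sketch of the minimum a new driver was expected to implement,
# assuming the Grizzly-era VolumeDriver base class. FooDriver is a
# hypothetical backend; bodies are stubs, not a real implementation.
from cinder.volume import driver


class FooDriver(driver.VolumeDriver):
    """Minimum feature set: volume/snapshot CRUD plus attach plumbing."""

    def create_volume(self, volume):
        pass  # provision volume['size'] GB on the backend

    def delete_volume(self, volume):
        pass

    def create_snapshot(self, snapshot):
        pass

    def delete_snapshot(self, snapshot):
        pass

    def create_volume_from_snapshot(self, volume, snapshot):
        pass

    # Clone and Glance image copy are in the LVM driver as of Grizzly,
    # so under the proposed policy they join the minimum set.
    def create_cloned_volume(self, volume, src_vref):
        pass

    def copy_image_to_volume(self, context, volume, image_service, image_id):
        pass

    def copy_volume_to_image(self, context, volume, image_service, image_meta):
        pass

    # Attach/detach are driven through the export and connection hooks.
    def ensure_export(self, context, volume):
        pass

    def create_export(self, context, volume):
        pass

    def remove_export(self, context, volume):
        pass

    def initialize_connection(self, volume, connector):
        pass  # return the connection_info dict for the transport

    def terminate_connection(self, volume, connector, **kwargs):
        pass
```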
16:30:54 <kmartin> JM1: some of the new features should have a little lag time for the drivers to be updated, like the next release.
16:31:35 <DuncanT> Next milestone release or next full release?
16:32:13 <kmartin> Next full release, some features are not completed until the last sprint and its hard for all the drivers to get updated that quickly
16:32:32 <bswartz> kmartin: +1
16:32:50 <Yada> And what about new BPs? It will be "fair" if the same rules apply to all and if it does not block BP validation IMHO
16:33:07 <DuncanT> Fair enough, though I think we should strongly encourage quicker updates where we can
16:33:42 <DuncanT> Yada: I don't understand the question sorry
16:34:08 <kmartin> DuncanT: I agree, strongly encouraged but not required
16:34:09 <bswartz> I know speaking for NetApp, we have development schedules, and it's not always easy to make time for stuff that comes up at the last minute. However if driver changes to comply with new features count as bugfixes then that relaxes the deadline to get them done.
16:34:44 <Yada> I mean: if all agree on the minimum cinder features supported, then can we apply the same for new BPs instead of asking new BPs to commit to all the features I listed above
16:34:58 <xyang_> any new feature needs legal approval too, that could take very long
16:35:30 <kmartin> Have to remember some of these features may require legal approval from the bigger companies...and we all know how fast that happens
16:35:47 <DuncanT> kmartin: I work for HP too, I know your pain ;-)
16:35:54 <kmartin> xyang_: :) beat me to it
16:36:17 <xyang_> kmartin:  :)
16:36:26 <DuncanT> Yada: That makes sense, though sometimes it is a matter of taste... we can discuss exceptions at these meetings
16:36:47 <DuncanT> Right, it sounds like we have general agreement. Any volunteers to draft it on the wiki?
16:37:44 <DuncanT> Anybody at all?
16:38:09 <kmartin> Hell, I'll do it
16:38:13 <DuncanT> :-)
16:38:33 <DuncanT> #action kmartin to draft new driver policy for the wiki
16:38:34 <winston-d_> phew
16:38:57 <DuncanT> So the last item on our agenda is...
16:39:03 <DuncanT> #topic AZs (again)
16:39:11 <jgriffith> sorry... but legal ain't my problem :)
16:39:21 <thingee> he lives
16:39:26 <jgriffith> :)
16:39:36 <thingee> DuncanT: are we educated in this topic now?
16:39:45 <DuncanT> thingee: I don't think so, no
16:40:19 <thingee> make that an action item :P...someone should take the lead and get that figured out
16:40:54 * jgriffith pretends he's not back yet :)
16:41:06 <DuncanT> We have our own ideas, but it comes down to 'There is an AZ field in several parts of our API. What do we want it to mean?'
16:41:13 <jgriffith> I'll look at getting something documented
16:41:17 <jgriffith> DuncanT: not that simple
16:41:26 <jgriffith> DuncanT: It actually has a distinct meaning
16:41:34 <jgriffith> DuncanT: Particularly in the context of EC2
16:41:47 <jgriffith> DuncanT: You can only attach volumes to instances in the same AZ
16:41:53 <winston-d_> so who raised this issue?
16:41:59 <DuncanT> winston-d_: Me
16:42:17 <bswartz> jgriffith: we assigned all the action items to you while you were gone
16:42:23 <jgriffith> haha :)
16:42:30 <DuncanT> jgriffith: I'm not sure of the details of the EC2 API
16:42:30 <bswartz> jk
16:42:32 <winston-d_> then you should educate us, or at least give a problem statement
16:42:50 <jgriffith> winston-d_: who me?
16:43:02 <jgriffith> winston-d_: I'm not the one who asked what they were :)
16:43:10 <avishay> Hi all, sorry I'm (very) late
16:43:13 <winston-d_> DuncanT: ^^
16:43:19 <jgriffith> winston-d_: Ohh... :)
16:43:40 <DuncanT> winston-d_: The problem is that the fields in the API currently don't do much in relation to the same fields in the nova api
16:43:57 <DuncanT> They have no clear meaning, and inconsistent behaviour
16:44:04 <xyang_> avishay:  hi.  can I talk to you after the meeting?  I'm merging with your changes but have issues
16:44:15 <avishay> xyang_: of course
16:44:17 <jgriffith> DuncanT: hmmm... interesting
16:44:46 <jgriffith> So I'll look at getting this documented and cleared up a bit
16:44:49 <winston-d_> what kind of consistency are you looking for ?
16:44:56 <DuncanT> Nova seems to treat them as a specific specialisation of aggregates that the scheduler handles specially
16:45:08 <jgriffith> DuncanT: That's new :(
16:45:17 <jgriffith> DuncanT: so we have to play some catch up
16:45:44 <jgriffith> The disparity you see currently is because they've moved forward with aggregates and such
16:45:51 <DuncanT> winston-d_: A definition of what they mean, and what the limitations are (e.g. can an instance in az-xyz mount a volume in az-abc?)
16:46:48 <DuncanT> (I hope the answer to that ends up being 'no', but currently it isn't enforced (a draft patch from a colleague to fix that is in the queue) and no two people seem to entirely agree)
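The enforcement DuncanT mentions would, in outline, be a check along these lines — a purely hypothetical sketch (the helper name and call site are invented; only cinder.exception.InvalidVolume is assumed real, and the actual draft patch may differ):

```python
# Hypothetical sketch of the EC2-style rule discussed above: a volume
# may only be attached to an instance in the same AZ.
from cinder import exception


def check_attach_az(instance_az, volume):
    """Reject cross-AZ attach attempts."""
    volume_az = volume['availability_zone']
    if volume_az != instance_az:
        raise exception.InvalidVolume(
            reason='volume is in AZ %s but instance is in AZ %s' %
                   (volume_az, instance_az))
```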
16:46:58 <winston-d_> the 2nd part of the question is controlled by Nova API or Cinder API?
16:47:11 <DuncanT> winston-d_: Both
16:47:27 <DuncanT> winston-d_: Can you clone a volume between AZs? (pure cinder)
16:47:41 <DuncanT> winston-d_: Attach is a decision for both
16:47:47 <DuncanT> winston-d_: There are other questions
16:48:27 <DuncanT> #action jgriffith and DuncanT to look at documenting availability zones
16:48:46 <jgriffith> :)
16:48:27 <winston-d_> how do OPS people think about it?  do they think nova/cinder should allow such an action?
16:49:03 <winston-d_> since AZ is defined by them
16:49:04 <DuncanT> I have no idea which providers are using availability zones. We are; I'm pretty sure Rackspace don't
16:49:20 <jgriffith> So the best advice I can provide for a quick overview is lookat AWS
16:49:21 <winston-d_> AWS did
16:49:35 <DuncanT> winston-d_: There is no definition of an AZ, so different people have totally different models in mind
16:49:35 <jgriffith> That's what it was initially modeled after
16:49:53 <winston-d_> DuncanT: that is the real problem i guess
16:49:56 <DuncanT> winston-d_: Cells have removed some of the confusion I think
16:50:10 <jgriffith> DuncanT: I'd argue that cells introduced more confusion but anyway :)
16:50:21 <winston-d_> DuncanT: but cells are transparent to the end user (aka the API)
16:50:28 <DuncanT> winston-d_: We (HP) don't want cross az mounting
16:50:36 <jgriffith> OK... time out
16:50:44 <jgriffith> No sense beating on this right now
16:50:49 <DuncanT> winston-d_: Indeed, so pan-cell mounting is a case of 'it should just work'
16:50:56 <jgriffith> DuncanT: and jgriffith will flesh this out and doc it for folks
16:51:28 <rushiagr> AWS has cells too? or is it a new concept by us folks?
16:51:45 <DuncanT> rushiagr: Unknown since they aren't user visible
16:51:50 <winston-d_> rushiagr: no, we don't know since it's transparent to end users
16:51:50 <jgriffith> Well never mind then... carry on :)
16:52:10 <winston-d_> jgriffith: :)
16:52:22 <DuncanT> So, any other business? We've ten minutes left
16:52:32 <DuncanT> #topic any other business
16:52:50 <DuncanT> jgriffith?
16:53:10 <jgriffith> I don't have much, but I haven't gone through everything you guys covered yet
16:53:21 <jgriffith> My main thing is the usual plea for reviews :)
16:53:30 <jgriffith> We're getting a pretty good back-log again
16:53:40 <jgriffith> Just a note on stable/folsom
16:53:57 <jgriffith> Those patches need to be reviewed/approved by OSLO core team
16:54:51 <rushiagr> jgriffith: Status of blueprint: NAS as a separate service. WIP submitted.
16:55:09 <jgriffith> rushiagr: Saw that... thanks!
16:55:13 <DuncanT> With core team discussions coming up, I expect people will be extra keen on reviews ;-)
16:55:23 <jgriffith> rushiagr: It helps a TON to have something in progress for folks to work on
16:55:43 <DuncanT> rushiagr: Got a link to that?
16:56:03 <rushiagr> DuncanT: https://review.openstack.org/#/c/21290/
16:56:20 <DuncanT> Cheers
16:56:37 <avishay> I'd like to bring up a topic to start thinking about - a framework for certifying hardware
16:57:07 <rushiagr> jgriffith: I know, it's better than having a multi-thousand-line code drop at the last moment
16:57:14 <winston-d_> wow, big topic
16:57:41 <bswartz> avishay: rackspace is working on something like that
16:57:45 <avishay> The Nova FC code doesn't happen to work with my HBA.  We'll try to fix that.  But there should be a way to certify hardware (HBAs, controllers, etc.).
16:57:52 <bswartz> are you in touch with them?
16:58:09 <avishay> bswartz: No.  I'd appreciate any pointers.
16:58:21 <bswartz> avishay: I will get some and get back to you
16:58:28 <avishay> bswartz: thanks a lot
16:58:51 <bswartz> It's called Alamo
16:58:53 <DuncanT> Would be good to hear about those plans too
16:59:00 <jgriffith> I'd rather focus first on black-box driver qualification
16:59:25 <jgriffith> But I agree... if we're going down the paths folks seem to be taking us these days, hardware may start to become an issue
16:59:25 <bswartz> Alamo has a driver+hardware qualification suite
16:59:28 <avishay> jgriffith: That too
17:00:27 <xyang_> bswartz: Alamo doesn't cover unreleased code though.  I think avishay is asking about that
17:00:30 <avishay> jgriffith: sounds something to discuss at the summit
17:00:39 <avishay> xyang_: not necessarily
17:00:49 <bswartz> I think it's reasonable to say that the cinder core team will NOT worry about hardware qualification, and we will leave that to distros and vendors who support this stuff?
17:01:04 <jgriffith> avishay: +1
17:01:10 <avishay> xyang_: but there should be some "official" test suite that vendors can run to make sure their HW works with OpenStack
17:01:10 <DuncanT> +1million
17:01:24 <jgriffith> bswartz: I would agree up to a point
17:01:44 <jgriffith> bswartz: since we're going to introduce things like FC we have to be slightly more pro-active I think
17:01:47 <jgriffith> think
17:01:49 <DuncanT> The trouble with 'official tests' is they turn into 'it passes the test suite, cinder must be broken'
17:02:05 <jgriffith> TBH to me that just means... "supported HBA/driver list"
17:02:17 <jgriffith> bswartz: but I would agree, that should fall to the vendors who want/use FC
17:02:20 <xyang_> avishay:  good idea
17:02:25 <DuncanT> Supported by whom?
17:02:29 <jgriffith> bswartz: else from my perspective, take FC out
17:02:38 <jgriffith> DuncanT: Supported is a bad choice of words
17:02:50 <DuncanT> :-)
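As a rough illustration of the black-box qualification avishay and jgriffith are after, a vendor-runnable suite might exercise the minimum feature set end to end through the public API — a hypothetical sketch using python-cinderclient v1 (no such official suite existed at the time; credentials, names, and the wait_for helper are placeholders):

```python
# Hypothetical black-box qualification pass: exercise the minimum
# feature set against a real backend via the public API.
import time

from cinderclient.v1 import client


def wait_for(manager, resource_id, status, timeout=120):
    """Poll until the resource reaches the given status."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if manager.get(resource_id).status == status:
            return
        time.sleep(2)
    raise RuntimeError('timed out waiting for %s' % status)


# Placeholder credentials and endpoint.
c = client.Client('user', 'password', 'tenant', 'http://keystone:5000/v2.0')

vol = c.volumes.create(1, display_name='cert-test')
wait_for(c.volumes, vol.id, 'available')

snap = c.volume_snapshots.create(vol.id, display_name='cert-snap')
wait_for(c.volume_snapshots, snap.id, 'available')

clone = c.volumes.create(1, snapshot_id=snap.id)
wait_for(c.volumes, clone.id, 'available')

# Clean up in reverse order.
c.volumes.delete(clone.id)
c.volume_snapshots.delete(snap.id)
c.volumes.delete(vol.id)
```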
17:03:06 <bswartz> time check, we're about to get booted
17:03:06 <DuncanT> I've been spending too much time around lawyers ;-)
17:03:12 <jgriffith> haha
17:03:17 <DuncanT> Any final words?
17:03:29 <JM1> "rosebug"
17:03:38 <DuncanT> #endmeeting