16:02:28 <DuncanT-> #startmeeting Cinder
16:02:28 <openstack> Meeting started Wed Mar  5 16:02:28 2014 UTC and is due to finish in 60 minutes.  The chair is DuncanT-. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:29 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:31 <openstack> The meeting name has been set to 'cinder'
16:02:44 <DuncanT-> https://wiki.openstack.org/wiki/CinderMeetings#Agenda_for_next_meeting as usual
16:02:56 <glenng> Greetings all :-)
16:03:25 <DuncanT-> Who's here?
16:03:32 <xyang1> hi
16:03:34 <bswartz1> I'm idling mostly
16:03:37 <akerr> hi
16:03:38 <winston-d> o/
16:03:40 <mtanino> Hi
16:03:42 <avishay> hello
16:03:53 <sneelakantan> hi
16:03:56 <philr> Hi
16:04:01 <DuncanT-> #topic i-3 status
16:04:03 <coolsvap> Hello
16:04:29 <avishay> https://launchpad.net/cinder/+milestone/icehouse-3
16:04:33 <DuncanT-> Very briefly: Looks like all the blueprints that were explicitly targeted are done
16:05:13 <glenng> A lot of bug fixes too.
16:05:18 <DuncanT-> The netapp thing is merged but launchpad hasn't caught up yet
16:05:29 <bswartz1> stuck in gate last I checked
16:05:35 <bswartz1> the job was up to 12.5 hours
16:05:53 <DuncanT-> Ah, ok. Hopefully it will get through, we can kick it if it doesn't
16:05:55 <eharney> there are a couple Cinder changes in the gate but not too far off now
16:06:02 <DuncanT-> Any other comments / feedback / worries?
16:06:05 <winston-d> bswartz1: we've seen much worse, be patient. :)
16:06:09 <bswartz1> heh
16:06:27 <bswartz1> thanks for the +2s
16:06:31 <DuncanT-> I'm calling the three day hackathon a success.
16:06:50 <DuncanT-> I'll add feedback on that to the end of the agenda
16:07:11 <DuncanT-> For now, I think Vishy wants the stage....
16:07:20 <winston-d> kudos to Mike~
16:07:30 <DuncanT-> #topic Volume replication
16:07:33 <glenng> *agrees*
16:07:33 <DuncanT-> winston-d ++
16:07:39 <avishay> vishy == avishay? :)
16:08:05 <avishay> So volume replication unfortunately didn't make it to Icehouse
16:08:24 <avishay> ronenkat will be taking over the code for juno
16:08:49 <avishay> he has sent out an email to the mailing list with a conference call to discuss the design
16:09:06 <avishay> ideally we'd like feedback in the next few weeks so that there is something concrete to discuss in atlanta
16:09:23 <avishay> ronenkat: do you want to add anything?
16:09:27 <DuncanT-> That conference call is in 50 minutes I think?
16:09:39 <ronenkat> avishay: just to clarify, the focus of the call is on disaster recovery not just replication
16:09:50 <akerr> is there a link to that email?  i have 600 unread ML emails :(
16:10:26 <avishay> akerr: http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg18224.html
16:10:33 <akerr> avishay: thanks
16:10:36 <avishay> akerr: sure
16:10:45 <philr> avishay: I just want to note, that we at LINBIT intend to hook DRBD into any replication hooks that will come to cinder
16:10:47 * jungleboyj has finally dug out and made it into the office.
16:11:13 <avishay> philr: will that be with the LVM driver?
16:11:16 <DuncanT-> philr: I'm looking forward to hearing more about that
16:12:05 <philr> We have been working for some months on a thing called "drbdmanage", that does drbd config files for nodes...That in turn uses LVM as of today
16:12:22 <philr> Though we are prepared to have other backends
16:12:33 <DuncanT-> Shall we hop to that as a topic now?
16:12:42 <philr> We had libstoragemanagement in mind, but I guess we will be able to do cinder instead as well
16:12:42 <avishay> philr: it would be great if you could make that happen - if you could implement DRBD+LVM that would probably be the reference implementation
16:12:44 <DuncanT-> #topic DRBD/drbdmanage driver for cinder
16:13:06 <DuncanT-> Any blueprints or other reading material?
16:13:11 <DuncanT-> (or any code)...
16:13:22 <philr> No blueprints as of today
16:13:44 <philr> We plan to do blueprints within the next two weeks
16:13:46 <avishay> philr: it would be great if you could review the replication blueprint + code and see that it works for you
16:14:00 <avishay> philr: https://review.openstack.org/#/c/64026/
16:14:16 <philr> avishay: Thanks for the link. We will do that
16:14:26 <avishay> philr: great
16:14:45 <DuncanT-> philr: Is there any code yet, or is it still largely conceptual?
16:14:59 <eharney> philr: i have had some libstoragemgmt work going on for a bit myself
16:15:20 <eharney> philr: for Cinder, that is
16:15:42 <philr> The code we have right now is "drbdmanage" ... That is not cinder specific. It is a generic manage-a-drbd-cluster code.
16:16:00 <philr> http://git.drbd.org/gitweb.cgi?p=drbdmanage.git;a=summary
16:16:39 <philr> ...but all that is the "ground work" for our cinder integration.
16:16:49 <avishay> philr: great stuff
16:17:41 <philr> PS: Last release is 3 weeks old, we will do the next one on Friday (March 7), and then concentrate on the actual cinder work.
16:18:15 <DuncanT-> I'll take a nose through that code, and I look forward to seeing the cinder work
16:18:19 <avishay> OK great, please keep us updated
16:18:34 <philr> BTW, we plan to use D-BUS from our cinder bits to drbdmanage.
16:19:12 <DuncanT-> That should be... interesting. No good reason not to that I'm aware of, just not seen it attempted before
16:19:32 <avishay> Yep...
16:19:45 <avishay> philr: please include that in your blueprint along with the reasoning behind it
16:19:51 <philr> ok.
16:20:34 <philr> Ok, Then, I am all set for today.
16:20:46 <avishay> philr: good stuff, thanks!
16:20:57 <mtanino> My turn?
16:21:28 <DuncanT-> Yup
16:21:40 <mtanino> Thanks.
16:21:40 <sneelakantan> DuncanT-: Could you give me a few mins at the end? Wanted to bring up a blueprint for icehouse discussion.
16:21:44 <mtanino> I proposed new LVM driver for Juno.
16:21:46 <DuncanT-> #topic New LVM
16:21:49 <mtanino> https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage
16:21:57 <DuncanT-> sneelakantan: Noted
16:22:01 <mtanino> Could anyone review or comment for about the spec of my blueprint?
16:22:26 <mtanino> And then I would like to get an approval for this blueprint.
16:22:34 <eharney> mtanino: so, i'm interested, i've been working on LIO and targetd support for Cinder
16:22:39 <avishay> mtanino: the blueprint was hard for me to understand.  is the proposal to add FC to the LVM driver?  it seems like more than that?
16:22:41 <eharney> but i haven't looked at this for long enough to wrap my head around it yet
16:22:48 <DuncanT-> Don't worry too much about approval, that doesn't actually mean a lot
16:23:19 <eharney> this assumes a remote LVM storage setup though?
16:24:08 <mtanino> avishay, Not to add FC code in the LVM driver.
16:25:24 <mtanino> In this blueprint, condition of the environment is FC or iSCSI connected storage.
16:25:28 <rushiagr> sorry, late
16:25:34 <KurtMartin> mtanino: so a brand new LVM driver that supports both iSCSI and FC
16:26:09 <avishay> wait...so this is LVM on top of SAN storage?  for example, taking a Storwize/3PAR/whatever volume, and using that as a PV?
16:26:41 <eharney> since it says it presupposes virtual storage using iSCSI targetd, it sounds like this is a targetd driver and not an LVM driver
16:27:06 <mtanino> KurtMartin, Yes.
16:27:07 <avishay> i thnk we just gave 3 different interpretations to the same paragraph of text
16:27:14 <avishay> this is how religious wars start... :)
16:27:21 <KurtMartin> agree it's not clear
16:27:43 <DuncanT-> Are you planning a summit session on this? Sounds like there's lots of people who want to hear details
16:27:50 <DuncanT-> mtanino: ^^^
16:27:58 <bswartz1> +1
16:28:05 <mtanino> I mean, current LVM driver is required to use iSCSI target. Right?
16:28:12 <avishay> mtanino: correct
16:28:13 <eharney> or iser/rdma
16:28:39 <mtanino> But in a SAN environment, some users do not want to use an iSCSI target and would rather access the backend storage directly.
16:29:19 <avishay> mtanino: what do you mean by "directly"?
16:29:25 <mtanino> OK.
16:29:44 <mtanino> The driver does not operate the backend storage.
16:30:08 <mtanino> Just create LV and VG on top of mounted LU on a server.
16:30:28 <mtanino> Is my explanation good?...
16:30:33 <avishay> mtanino: and the LU is a volume on SAN storage?
16:30:43 <eharney> what is running on the server?
16:30:47 <mtanino> avishay, Yes.
16:31:52 <mtanino> eharney, what do you mean?..
16:32:50 <eharney> so if the driver doesn't operate the backend storage (meaning different from the current LVM driver which does, i think) -- what does?
16:32:51 <avishay> i THINK the idea is: you have a SAN volume attached to a Nova host, and the volume is actually a VG. and then you create an LV on that VG.  i don't get why, but i think that's the idea.
16:33:11 <DuncanT-> Sounds like you're mounting the same iscsi / fc lun in lots of places, and being careful to keep the creates in one, but letting the clients (compute nodes) do direct access?
16:33:22 <mtanino> avishay, Thanks. That's right.
16:33:38 <DuncanT-> Avoids importing and re-exporting a LUN through a server I guess?
16:34:02 <eharney> basically you transport data via some LVM clustering idea instead of an iSCSI target. ok, i think i get the picture
16:34:39 <DuncanT-> Seems reasonable to me. I'd love to hear more at the design summit if you can make it, mtanino?
16:34:42 <mtanino> DuncanT-, Yes. Cinder node and Nova compute mount same volume.
16:35:36 <mtanino> DuncanT-, Thanks. I will have a plan to join a summit.
16:35:36 <avishay> mtanino: will you attend the openstack summit in atlanta?
16:35:44 <avishay> great
16:35:44 <eharney> maybe consider adding a diagram or two to the blueprint explaining all the parts and how they're connected, just to make it simpler to understand
16:35:52 <avishay> eharney: +1
16:35:52 <mtanino> So, can I have a session for the design summit?
16:36:23 <mtanino> eharney, Thanks for your comment.
16:36:26 <DuncanT-> mtanino: Apply when the slots open. I'm certainly keen to hear more
16:36:33 <eharney> as am i
16:36:38 <KurtMartin> me too
16:36:38 <jungleboyj> eharney: +1
16:36:39 <mtanino> DuncanT-, OK.
16:36:52 <DuncanT-> #action mtanino To tell us more at the design summit
16:37:06 <DuncanT-> Any more comments on that?
16:37:18 <eharney> sounds good to me for now
16:37:28 <mtanino> eharney, Thanks.
16:37:53 <DuncanT-> Ok, sneelakantan wanted a spot
16:38:04 <sneelakantan> thanks
16:38:04 <DuncanT-> #topic sneelakantan blueprint
16:38:05 <avishay> sneelakantan: topic?
16:38:10 <DuncanT-> Got a link please?
16:38:59 <DuncanT-> sneelakantan: Hello?
16:39:10 * jungleboyj hears crickets
16:39:37 <DuncanT-> We can come back to that I guess....
16:39:54 <DuncanT-> #topic any other business
16:39:58 <DuncanT-> Any more for any more?
16:40:16 <hemna_> morning
16:40:24 <jungleboyj> Can I make a clarification on Feature Freeze?
16:40:25 <hemna_> miss anything ?
16:40:55 <DuncanT-> jungleboyj: You can try...
16:41:02 <jungleboyj> DuncanT-: Thank you sir.
16:41:27 <sneelakantan> damn! had connection trouble with freenode.
16:41:34 <jungleboyj> For changes that are currently in flight but are still in review.  Will those continue to be reviewed or are they going to be -2'd?
16:41:40 <DuncanT-> sneelakantan: NP... we'll come back to you in a sec
16:41:57 <jungleboyj> sneelakantan: Sorry, I temporarily stole the floor.
16:42:14 <sneelakantan> jungleboyj: pls go ahead
16:42:28 <DuncanT-> jungleboyj: Unless they have an exception, I think they've missed now
16:42:49 <DuncanT-> jungleboyj: I'm not the authority on that though
16:42:54 <avishay> I guess only jgriffith knows who has exceptions
16:43:16 <DuncanT-> There's a formal process for exceptions, but I don't remember what it is
16:43:30 <jungleboyj> DuncanT-: Ok, I will have to talk to jgriffith then.  Was just wondering about https://review.openstack.org/#/c/70465/
16:43:37 <eharney> note that icehouse-3 is supposed to be cut on Thursday.
16:43:54 <jungleboyj> Since that has been in flight but hasn't been +A'd yet.
16:43:54 <DuncanT-> Bug fixes still welcome, but I intend to watch out for people turning features into bugs to sneak through... that hurt us a bit at the end of H
16:44:30 <eharney> jungleboyj: it's depending on an outdated dep currently
16:44:35 <DuncanT-> jungleboyj: https://review.openstack.org/#/c/75740/6 needs an update before we can get that in
16:44:56 <jungleboyj> eharney: Right.  I am fixing its dep at the moment and then need to update it.
16:45:16 <DuncanT-> I'm happy to review it after the meeting, and jgriffith can make a final call I guess...
16:45:34 <DuncanT-> The I3 tag isn't cut yet, so it could still be got in
16:45:43 <jungleboyj> eharney: DuncanT- We can take that into the cinder room.  Just wanted clarification since I know that Nova was -2'ing stuff.
16:45:51 <DuncanT-> Cool
16:46:05 <DuncanT-> #topic sneelakantan blueprint
16:46:14 <sneelakantan> a very similar request for this blueprint
16:46:15 <sneelakantan> https://blueprints.launchpad.net/cinder/+spec/vmdk-storage-policy-volume-type
16:46:20 <sneelakantan> This is something I've been working on since Dec
16:46:32 <jungleboyj> DuncanT-: Thanks guys!
16:46:33 <sneelakantan> As far as I remember it was always targeted for icehouse-3, but today I notice that it has been moved out
16:46:45 <sneelakantan> I did not receive a notification either
16:46:49 <sneelakantan> Any idea what could have happened?
16:47:06 <sneelakantan> The code is ready in 4 patches and has been reviewed many times
16:47:14 <sneelakantan> https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/vmdk-storage-policy-volume-type,n,z
16:47:21 <DuncanT-> sneelakantan: Not sure, sorry. Might well qualify for a freeze exception
16:47:35 <DuncanT-> sneelakantan: You need jgriffith again for that
16:47:40 <sneelakantan> hmm.. ok.
16:47:43 <sneelakantan> will do that.
16:47:49 <sneelakantan> DuncanT-: Thanks.
16:48:11 <DuncanT-> It seems well contained
16:49:49 <KurtMartin> sneelakantan: the FFE process is like how nova does it...send a request to openstack-dev mailing list with a prefix of "[Cinder] FFE Request: ".  Nova has some other requirements like it needs to be sponsored by two cores
16:50:00 <DuncanT-> https://review.openstack.org/#/c/73165/ Concerns me that it is not back-compatible, but if it's only not compatible with something we haven't merged yet then I'm less worried
16:50:42 <sneelakantan> KurtMartin: ok will raise FFE once it starts for cinder
16:51:09 <KurtMartin> sneelakantan: nova's already started
16:51:13 <DuncanT-> Right, ten minutes left
16:51:34 <avishay> sneelakantan: don't wait, just do it :)
16:51:39 <DuncanT-> Any more eyes on those patches is probably useful
16:51:40 <avishay> or not
16:51:43 <DuncanT-> lol
16:51:50 <DuncanT-> #topic Any other business?
16:52:03 <jungleboyj> avishay: :-)
16:52:14 <avishay> reminder: DR call in 9 minutes for those who are interested
16:52:20 <DuncanT-> sneelakantan:  (16:51:21) avishay: sneelakantan: don't wait, just do it :)
16:52:35 <ronenkat> should I ask for FFE for client code? https://review.openstack.org/#/c/72743 or maybe just get it approved?
16:52:40 <sneelakantan> DuncanT-: ok thanks.
16:52:49 <DuncanT-> Client code doesn't get frozen ronenkat
16:52:53 <jungleboyj> ronenkat: Client code is different.
16:52:54 <avishay> ronenkat: client is on a different schedule than cinder itself
16:53:07 <ronenkat> where can I find that schedule?
16:53:17 <avishay> ronenkat: inside jgriffith's head i believe :)
16:53:21 <DuncanT-> The schedule is 'when jgriffith hits the button'
16:53:30 <ronenkat> :0
16:53:32 <ronenkat> :)
16:53:38 <jungleboyj> :-)
16:53:56 <DuncanT-> Bugging him until he does a release just to shut you up has been known to work....
16:54:10 * jungleboyj doesn't want to see that being extracted.
16:54:13 <DuncanT-> But that is best left until *after* your work is merged ;-)
16:54:21 <winston-d> client release schedule is in jgriffith's mind
16:54:34 <DuncanT-> Any more for any more?
16:55:36 * coolsvap feels DuncanT's any more any more has got great appeal :)
16:55:53 <DuncanT-> Finishing 5 minutes early then I guess. We can always carry on in the normal channel
16:56:06 <DuncanT-> #endmeeting