16:02:28 #startmeeting Cinder
16:02:28 Meeting started Wed Mar 5 16:02:28 2014 UTC and is due to finish in 60 minutes. The chair is DuncanT-. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:31 The meeting name has been set to 'cinder'
16:02:44 https://wiki.openstack.org/wiki/CinderMeetings#Agenda_for_next_meeting as usual
16:02:56 Greetings all :-)
16:03:25 Who's here?
16:03:32 hi
16:03:34 I'm idling mostly
16:03:37 hi
16:03:38 o/
16:03:40 Hi
16:03:42 hello
16:03:53 hi
16:03:56 Hi
16:04:01 #topic i-3 status
16:04:03 Hello
16:04:29 https://launchpad.net/cinder/+milestone/icehouse-3
16:04:33 Very briefly: Looks like all the blueprints that were explicitly targeted are done
16:05:13 A lot of bug fixes too.
16:05:18 The netapp thing is merged but launchpad hasn't caught up yet
16:05:29 stuck in gate last I checked
16:05:35 the job was up to 12.5 hours
16:05:53 Ah, ok. Hopefully it will get through, we can kick it if it doesn't
16:05:55 there are a couple Cinder changes in the gate but not too far off now
16:06:02 Any other comments / feedback / worries?
16:06:05 bswartz1: we've seen much worse, be patient. :)
16:06:09 heh
16:06:27 thanks for the +2s
16:06:31 I'm calling the three-day hackathon a success.
16:06:50 I'll add feedback on that to the end of the adgenda
16:06:55 agenda
16:07:11 For now, I think Vishy wants the stage....
16:07:20 kudos to Mike~
16:07:30 #topic Volume replication
16:07:33 *agrees*
16:07:33 winston-d ++
16:07:39 vishy == avishay? :)
16:08:05 So volume replication unfortunately didn't make it into Icehouse
16:08:24 ronenkat will be taking over the code for juno
16:08:49 he has sent out an email to the mailing list with a conference call to discuss the design
16:09:06 ideally we'd like feedback in the next few weeks so that there is something concrete to discuss in atlanta
16:09:23 ronenkat: do you want to add anything?
16:09:27 That conference call is in 50 minutes I think?
16:09:39 avishay: just to clarify, the focus of the call is on disaster recovery, not just replication
16:09:50 is there a link to that email? i have 600 unread ML emails :(
16:10:26 akerr: http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg18224.html
16:10:33 avishay: thanks
16:10:36 akerr: sure
16:10:45 avishay: I just want to note that we at LINBIT intend to hook DRBD into any replication hooks that come to cinder
16:10:47 * jungleboyj has finally dug out and made it into the office.
16:11:13 philr: will that be with the LVM driver?
16:11:16 philr: I'm looking forward to hearing more about that
16:12:05 We have been working for some months on a thing called "drbdmanage", which does drbd config files for nodes... That in turn uses LVM as of today
16:12:22 Though we are prepared to have other backends
16:12:33 Shall we hop to that as a topic now?
16:12:42 We had libstoragemanagement in mind, but I guess we will be able to do cinder instead as well
16:12:42 philr: it would be great if you could make that happen - if you could implement DRBD+LVM that would probably be the reference implementation
16:12:44 #topic DRBD/drbdmanage driver for cinder
16:13:06 Any blueprints or other reading material?
16:13:11 (or any code)...
16:13:22 No blueprints as of today
16:13:44 We plan to do blueprints within the next two weeks
16:13:46 philr: it would be great if you could review the replication blueprint + code and see that it works for you
16:14:00 philr: https://review.openstack.org/#/c/64026/
16:14:16 avishay: Thanks for the link. We will do that
16:14:26 philr: great
16:14:45 philr: Is there any code yet, or is it still largely conceptual?
16:14:59 philr: i have had some libstoragemgmt work going on for a bit myself
16:15:20 philr: for Cinder, that is
16:15:42 The code we have right now is "drbdmanage"... That is not cinder specific. It is generic manage-a-drbd-cluster code.
16:16:00 http://git.drbd.org/gitweb.cgi?p=drbdmanage.git;a=summary
16:16:39 ...but all that is the "ground work" for our cinder integration.
16:16:49 philr: great stuff
16:17:41 PS: The last release is 3 weeks old, we will do the next one on Friday (March 7), and then concentrate on the actual cinder work.
16:18:15 I'll take a nose through that code, and I look forward to seeing the cinder work
16:18:19 OK great, please keep us updated
16:18:34 BTW, we plan to use D-BUS from our cinder bits to drbdmanage.
16:19:12 That should be... interesting. No good reason not to that I'm aware of, just not seen it attempted before
16:19:32 Yep...
16:19:45 philr: please include that in your blueprint along with the reasoning behind it
16:19:51 ok.
16:20:34 Ok, then, I am all set for today.
16:20:46 philr: good stuff, thanks!
16:20:57 My turn?
16:21:28 Yup
16:21:40 Thanks.
16:21:40 DuncanT-: Could you give me a few mins at the end? Wanted to bring up a blueprint for icehouse discussion.
16:21:44 I proposed a new LVM driver for Juno.
16:21:46 #topic New LVM
16:21:49 https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage
16:21:57 sneelakantan: Noted
16:22:01 Could anyone review or comment on the spec of my blueprint?
16:22:26 And then I would like to get approval for this blueprint.
16:22:34 mtanino: so, i'm interested, i've been working on LIO and targetd support for Cinder
16:22:39 mtanino: the blueprint was hard for me to understand. is the proposal to add FC to the LVM driver? it seems like more than that?
16:22:41 but i haven't looked at this for long enough to wrap my head around it yet
16:22:48 Don't worry too much about approval, that doesn't actually mean a lot
16:23:19 this assumes a remote LVM storage setup though?
16:24:08 avishay, No, not to add FC code to the LVM driver.
16:25:24 In this blueprint, the assumed environment is FC- or iSCSI-connected storage.
16:25:28 sorry, late
16:25:34 mtanino: so a brand new LVM driver that supports both iSCSI and FC
16:26:09 wait... so this is LVM on top of SAN storage? for example, taking a Storwize/3PAR/whatever volume, and using that as a PV?
16:26:41 since it says it presupposes virtual storage using iSCSI targetd, it sounds like this is a targetd driver and not an LVM driver
16:27:06 KurtMartin, Yes.
16:27:07 i think we just gave 3 different interpretations to the same paragraph of text
16:27:14 this is how religious wars start... :)
16:27:21 agree it's not clear
16:27:43 Are you planning a summit session on this? Sounds like there are lots of people who want to hear details
16:27:50 mtanino: ^^^
16:27:58 +1
16:28:05 I mean, the current LVM driver is required to use an iSCSI target. Right?
16:28:12 mtanino: correct
16:28:13 or iser/rdma
16:28:39 But in a SAN environment, some users do not want to use an iSCSI target and want to access the backend storage directly.
16:29:19 mtanino: what do you mean by "directly"?
16:29:25 OK.
16:29:44 The driver does not operate the backend storage.
16:30:08 It just creates LVs and a VG on top of a mounted LU on a server.
16:30:28 Does my explanation make sense?...
16:30:33 mtanino: and the LU is a volume on SAN storage?
16:30:43 what is running on the server?
16:30:47 avishay, Yes.
16:31:52 eharney, what do you mean?..
16:32:50 so if the driver doesn't operate the backend storage (meaning different from the current LVM driver which does, i think) -- what does?
16:32:51 i THINK the idea is: you have a SAN volume attached to a Nova host, and the volume is actually a VG. and then you create an LV on that VG. i don't get why, but i think that's the idea.
16:33:11 Sounds like you're mounting the same iscsi / fc lun in lots of places, and being careful to keep the creates in one, but letting the clients (compute nodes) do direct access?
16:33:22 avishay, Thanks. That's right.
16:33:38 Avoids importing and re-exporting a LUN through a server I guess?
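[Editor's note: a minimal sketch of the shared-LVM idea as the discussion above converged on it. A SAN LU is attached to both the Cinder node and the compute nodes; only the Cinder node performs LVM metadata operations (lvcreate), while compute nodes open the resulting LV device directly over FC/iSCSI, with no software iSCSI target in between. All names here (the VG name, the helper functions) are illustrative assumptions, not part of mtanino's blueprint or Cinder's actual driver API.]

```python
# Hypothetical sketch of the shared-LVM flow; names are illustrative only.

# VG created once by an admin on the SAN-backed LU that every node can see,
# e.g. pvcreate /dev/mapper/mpatha && vgcreate cinder_shared_vg /dev/mapper/mpatha
VG_NAME = "cinder_shared_vg"

def lvcreate_cmd(volume_name, size_gb, vg=VG_NAME):
    """Command the Cinder (control) node would run to create a volume.

    Only this node touches LVM metadata, so concurrent creates stay in
    one place, as described in the meeting.
    """
    return ["lvcreate", "-n", volume_name, "-L", "%dG" % size_gb, vg]

def local_device_path(volume_name, vg=VG_NAME):
    """Device path a compute node uses to reach the LV directly.

    Because the underlying LU is attached to the compute node over FC or
    iSCSI, the LV is visible locally; no target export/import is needed.
    """
    return "/dev/%s/%s" % (vg, volume_name)

if __name__ == "__main__":
    print(" ".join(lvcreate_cmd("volume-1234", 10)))
    print(local_device_path("volume-1234"))
```

This captures why the approach "avoids importing and re-exporting a LUN through a server": the data path is the SAN fabric itself, and the only coordination needed is keeping LVM metadata writes on a single node.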
16:34:02 basically you transport data via some LVM clustering idea instead of an iSCSI target. ok, i think i get the picture
16:34:39 Seems reasonable to me. I'd love to hear more at the design summit if you can make it, mtanino?
16:34:42 DuncanT-, Yes. The Cinder node and Nova compute mount the same volume.
16:35:36 DuncanT-, Thanks. I plan to join the summit.
16:35:36 mtanino: will you attend the openstack summit in atlanta?
16:35:44 great
16:35:44 maybe consider adding a diagram or two to the blueprint explaining all the parts and how they're connected, just to make it simpler to understand
16:35:52 eharney: +1
16:35:52 So, can I have a session at the design summit?
16:36:23 eharney, Thanks for your comment.
16:36:26 mtanino: Apply when the slots open. I'm certainly keen to hear more
16:36:33 as am i
16:36:38 me too
16:36:38 eharney: +1
16:36:39 DuncanT-, OK.
16:36:52 #action mtanino To tell us more at the design summit
16:37:06 Any more comments on that?
16:37:18 sounds good to me for now
16:37:28 eharney, Thanks.
16:37:53 Ok, sneelakantan wanted a spot
16:38:04 thanks
16:38:04 #topic sneelakantan blueprint
16:38:05 sneelakantan: topic?
16:38:10 Got a link please?
16:38:59 sneelakantan: Hello?
16:39:10 * jungleboyj hears crickets
16:39:37 We can come back to that I guess....
16:39:54 #topic any other business
16:39:58 Any more for any more?
16:40:16 morning
16:40:24 Can I make a clarification on Feature Freeze?
16:40:25 miss anything?
16:40:55 jungleboyj: You can try...
16:41:02 DuncanT-: Thank you sir.
16:41:27 damn! had connection trouble with freenode.
16:41:34 For changes that are currently in flight but are still in review: will those continue to be reviewed or are they going to be -2'd?
16:41:40 sneelakantan: NP... we'll come back to you in a sec
16:41:57 sneelakantan: Sorry, I temporarily stole the floor.
16:42:14 jungleboyj: pls go ahead
16:42:28 jungleboyj: Unless they have an exception, I think they've missed now
16:42:49 jungleboyj: I'm not the authority on that though
16:42:54 I guess only jgriffith knows who has exceptions
16:43:16 There's a formal process for exceptions, but I don't remember what it is
16:43:30 DuncanT-: Ok, I will have to talk to jgriffith then. Was just wondering about https://review.openstack.org/#/c/70465/
16:43:37 note that icehouse-3 is supposed to be cut on Thursday.
16:43:54 Since that has been in flight but hasn't been +A'd yet.
16:43:54 Bug fixes still welcome, but I intend to watch out for people turning features into bugs to sneak through... that hurt us a bit at the end of H
16:44:30 jungleboyj: it's depending on an outdated dep currently
16:44:35 jungleboyj: https://review.openstack.org/#/c/75740/6 needs an update before we can get that in
16:44:56 eharney: Right. I am fixing its dep at the moment and then need to update it.
16:45:16 I'm happy to review it after the meeting, and jgriffith can make a final call I guess...
16:45:34 The I3 tag isn't cut yet, so it could still get in
16:45:43 eharney: DuncanT- We can take that into the cinder room. Just wanted clarification since I know that Nova was -2'ing stuff.
16:45:51 Cool
16:46:05 #topic sneelakantan blueprint
16:46:14 a very similar request for this blueprint
16:46:15 https://blueprints.launchpad.net/cinder/+spec/vmdk-storage-policy-volume-type
16:46:20 This is something I've been working on since Dec
16:46:32 DuncanT-: Thanks guys!
16:46:33 As far as I remember it was always targeted for icehouse-3, but today I notice that it has been moved out
16:46:45 I did not receive a notification either
16:46:49 Any idea what could have happened?
16:47:06 The code is ready in 4 patches and has been reviewed many times
16:47:14 https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/vmdk-storage-policy-volume-type,n,z
16:47:21 sneelakantan: Not sure, sorry. Might well qualify for a freeze exception
16:47:35 sneelakantan: You need jgriffith again for that
16:47:40 hmm.. ok.
16:47:43 will do that.
16:47:49 DuncanT-: Thanks.
16:48:11 It seems well contained
16:49:49 sneelakantan: the FFE process is like how nova does it... send a request to the openstack-dev mailing list with a prefix of "[Cinder] FFE Request: ". Nova has some other requirements, like it needs to be sponsored by two cores
16:50:00 https://review.openstack.org/#/c/73165/ Concerns me that it is not back-compatible, but if it's only incompatible with something we haven't merged yet then I'm less worried
16:50:42 KurtMartin: ok will raise FFE once it starts for cinder
16:51:09 sneelakantan: nova's already started
16:51:13 Right, ten minutes left
16:51:34 sneelakantan: don't wait, just do it :)
16:51:39 Any more eyes on those patches is probably useful
16:51:40 or not
16:51:43 lol
16:51:50 #topic Any other business?
16:52:03 avishay: :-)
16:52:14 reminder: DR call in 9 minutes for those who are interested
16:52:20 sneelakantan: (16:51:21) avishay: sneelakantan: don't wait, just do it :)
16:52:35 should I ask for an FFE for client code? https://review.openstack.org/#/c/72743 or maybe just get it approved?
16:52:40 DuncanT-: ok thanks.
16:52:49 Client code doesn't get frozen ronenkat
16:52:53 ronenkat: Client code is different.
16:52:54 ronenkat: client is on a different schedule than cinder itself
16:53:07 what can I find that schedule?
16:53:17 ronenkat: inside jgriffith's head i believe :)
16:53:21 The schedule is 'when jgriffith hits the button'
16:53:22 (correction) where can I find that schedule
16:53:30 :0
16:53:32 :)
16:53:38 :-)
16:53:56 Bugging him until he does a release just to shut you up has been known to work....
16:54:10 * jungleboyj doesn't want to see that being extracted.
16:54:13 But that is best left until *after* your work is merged ;-)
16:54:21 the client release schedule is in jgriffith's mind
16:54:34 Any more for any more?
16:55:36 * coolsvap feels DuncanT-'s "any more for any more" has got great appeal :)
16:55:53 Finishing 5 minutes early then I guess. We can always carry on in the normal channel
16:56:06 #endmeeting