16:00:23 #startmeeting Cinder
16:00:24 Meeting started Wed Jul 18 16:00:23 2018 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:27 The meeting name has been set to 'cinder'
16:00:40 o/
16:00:42 hello
16:00:46 hello
16:01:00 courtesy ping: jungleboyj DuncanT diablo_rojo, diablo_rojo_phon, rajinir tbarron xyang xyang1 e0ne gouthamr thingee erlon tpsilva patrickeast tommylikehu eharney geguileo smcginnis lhx_ lhx__ aspiers jgriffith moshele hwalsh felipemonteiro lpetrut lseki _alastor_
16:01:06 o/
16:01:07 <_alastor_> o/
16:01:14 hi
16:01:15 Hi
16:01:26 hi
16:01:31 hello
16:01:38 Hello everyone.
16:01:54 Happy Wednesday.
16:02:11 hi! o/
16:02:13 o/
16:02:23 Shall we begin? Looks like we have a good crowd.
16:02:44 #topic announcements
16:02:55 We have the PTG planning etherpad.
16:03:06 #link https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018
16:03:21 People have started indicating if they plan to attend in there.
16:03:31 Thank you to all of you who have done that.
16:03:50 Would appreciate everyone doing that so that we know how many people to expect there.
16:04:17 Also, I have done an os-brick release .... two actually.
16:04:30 I had one last week, 2.5.2
16:04:44 We got a few more bug fixes in this week and I have proposed 2.5.3 https://review.openstack.org/583562
16:05:26 #link https://review.openstack.org/583562
16:05:34 hi
16:05:34 And some notes on schedule:
16:05:37 hi
16:05:45 #link https://releases.openstack.org/rocky/schedule.html
16:06:01 This week is Cinder Feature Proposal Freeze.
16:06:27 I don't think that is a big issue for anyone.
16:06:38 Next week, however, is feature freeze.
16:06:49 Are there any features that need review that we have missed?
16:07:10 We have a couple with the client freeze coming next week.
16:07:35 smcginnis: Ok.
I will need to take a look at the client patches.
16:07:41 #link https://review.openstack.org/532702/
16:07:46 We went through os-brick and those were good.
16:08:20 jungleboyj: Well, not so much just the client patches as the server side patches need to land if we want them, and if we do we need to get it done in time to get the client side support in as well.
16:08:35 # https://review.openstack.org/#/c/559397/ Client side attachment mode patch
16:08:43 #link https://review.openstack.org/#/c/559397/ Client side attachment mode patch
16:09:05 #link https://review.openstack.org/#/c/533564/ Transfer snapshots
16:09:19 yeah, transfer snapshots
16:09:23 #link https://review.openstack.org/#/c/577611/ Transfer snapshots client patch
16:09:40 Unfortunately it's another race for which one gets which mv.
16:09:44 it has a few -1s and a merge conflict :(
16:10:20 So it might help things if we coordinate this a little and a) decide if we want them both in rocky, and b) work on getting them merged in order to get the MVs right and things landed in time.
16:10:22 :-(
16:10:50 smcginnis: So, I think we definitely want the Attachment mode in.
16:10:55 That would be the first priority.
16:11:07 My thoughts too, so I started with updating that one yesterday.
16:11:26 The Transfer Snapshot one has been around a while. Would be good to get it in.
16:11:34 It already needs an MV update so I would make it second.
16:11:59 Both would be really good to get in, but the snapshot issue is something that's been there for awhile, so I don't think missing one more release is the end of the world. But really good if we can get it in this release.
16:12:15 smcginnis: Agreed.
16:12:31 I was going to update that one too, but didn't want to conflict with the attachment one or assume that it will get the next version before the first one has landed.
16:13:55 smcginnis: Ok. So, let's try to get the attachment mode one in today and then get through the other one if possible tomorrow/Friday?
16:14:09 ++
16:14:28 ++
16:14:30 Ok. I will look at the attachment mode after the meeting.
16:14:35 +1
16:14:51 Would be good if jgriffith can take a quick look at my updates on that too, if possible.
16:15:00 I would say no -1 unless there is a major issue. Can always do a follow up fix.
16:15:10 jgriffith: ^^^
16:15:19 jungleboyj: True, we can always fix up little issues afterwards.
16:16:12 smcginnis: Thanks for driving that work.
16:16:23 Just trying to move things along.
16:16:30 It is appreciated.
16:16:37 Ya know, since jgriffith is such a deadbeat and all. :P
16:16:52 Anything else along those lines?
16:17:01 Such a deadbeat.
16:17:05 Nothing else that I'm aware of at the moment.
16:18:02 Ok, and nothing else on the release schedule?
16:19:03 Ok. Moving on.
16:19:15 #topic Rocky Priorities Review
16:19:25 So, we have kind-of already taken care of this.
16:19:45 I have started creating the Stein section of the document.
16:19:49 #link https://etherpad.openstack.org/p/cinder-spec-review-tracking
16:20:02 Please take a look; if you have concerns or changes, go ahead and make them.
16:20:33 Any questions there?
16:20:59 #topic Weekly Harassment of geguileo
16:21:05 :)
16:21:08 sorry, no news
16:21:13 geguileo: I reviewed the document on HA.
16:21:15 but I'll update the spec today
16:21:21 jungleboyj: awesome!
16:21:24 geguileo: Good work there. Thank you!
16:21:44 jungleboyj: I'll look at your review later and update the spec with all the pending reviews
16:22:03 geguileo: Sounds great. That would be good to get in as part of this release too.
16:22:12 jungleboyj: +1
16:22:21 geguileo: Regardless, that is good progress.
16:22:29 I'll ping all reviewers once I submit the new patch
16:22:43 Sounds like a plan.
16:22:52 We can move on then.
16:23:06 #topic About readding the block device driver
16:23:09 Rambo:
16:23:24 https://docs.google.com/spreadsheets/d/1DzmktV7IRyXyv2BqZ2iCUDLjB-r2M-WMniXbPrTsOMw/edit?usp=sharing
16:23:25 The results show LVM's IOPS is twice as bad as the block device driver's. Moreover, the block device driver is better than LVM+LIO in principle; LVM+LIO has to go through the network.
16:23:42 If anyone still believes LVM+LIO is better than the block device driver, please show me your test data. Thank you!
16:23:49 as a former BDD maintainer, I vote to not re-add this again :(
16:24:00 #link https://docs.google.com/spreadsheets/d/1DzmktV7IRyXyv2BqZ2iCUDLjB-r2M-WMniXbPrTsOMw/edit?usp=sharing
16:24:08 Rambo: you can use this driver as an out-of-tree driver
16:24:10 If someone can figure out a way to make it work with the minimum required feature set for a Cinder driver, feel free to propose it.
16:24:16 e0ne: ++
16:24:38 smcginnis: ++
16:25:02 why?
16:25:22 Why use it as an out of tree driver?
16:25:43 minimum feature set, infra limitations for CI, not really a cloud solution
16:25:51 e0ne: ++ (again)
16:25:54 smcginnis: Is it possible for the driver to be made to meet the minimum requirements?
16:25:59 in my opinion, the BlockDeviceDriver is more suitable than any other solution for data processing scenarios.
16:26:10 jungleboyj: I don't believe so, but if someone wants to figure that out, more power to them.
16:26:13 jungleboyj, smcginnis: it's absolutely doable, AFAIR
16:26:25 e0ne: Ok.
16:26:26 Well, it would take a lot of work.
16:26:50 but it will be very slow doing snapshots and storage usage will not be optimal
16:27:01 So, it sounds like the answer here is pretty simple.
16:27:05 Yep
16:27:24 Rambo: Nothing stops you from using it out of tree, and if you want it in tree you can propose it with the appropriate requirements met.
16:27:47 Rambo: I can share a doc with you on how to use it out of tree, if that's an acceptable use case for you
16:28:32 ok, thank you, but the bdd is better than lvm
16:29:02 why don't you agree to re-add it?
16:29:09 Rambo: NVMe is a good replacement for BDD, but it requires new hardware
16:29:27 Rambo: Because we have a list of minimum functionality to be a driver.
16:29:30 Rambo: We've told you a few times why.
16:29:37 The BDD does not meet those requirements.
16:30:33 oh
16:30:35 Rambo: So, e0ne can send you information on how to use it for your purposes.
16:30:54 #link https://review.openstack.org/#/c/398739/
16:30:55 please share the doc, thank you
16:30:59 If you want to add it back in tree then you will need to rework the driver to meet the minimum requirements.
16:31:26 ok, I will learn
16:31:36 afair, it met the minimum requirements, but CI wasn't stable
16:31:56 e0ne: Hmmm, that might be true too.
16:32:01 I don't remember for sure.
16:32:13 Will the community agree to merge the BlockDeviceDriver into the Cinder repository again if our company provides the maintainer and CI?
16:32:13 I was struggling with CI for months :(
16:32:31 e0ne: It couldn't do snapshots either.
16:32:40 Rambo: I think we would be open to it.
16:32:46 smcginnis: we used dd to make snapshots :(
16:32:51 Rambo: You are welcome to propose it.
16:33:05 smcginnis, e0ne: any objection?
16:33:52 jungleboyj: there are no objections from my side. we've got the same requirements for all drivers
16:34:04 e0ne: Ok, that was kind of my thought.
16:34:18 Rambo: what's your company?
16:34:18 I will not -2 it, but jgriffith could
16:34:27 Ok. So I think we have answered the questions there and covered that topic.
16:34:41 ok, let me think it over
16:34:49 e0ne: Possibly. He is AWOL right now though. :-)
16:34:56 Rambo: Ok. What is your company?
16:34:57 the unitedstack
16:35:08 do you know it?
16:35:17 from my point of view, NVMe is better but it requires some additional expensive hardware
16:35:42 Rambo: No, thanks :)
16:35:43 e0ne: Yeah, lots of people going that direction.
16:35:59 what is NVMe? Can you tell me more?
16:36:04 thank you
16:36:19 Rambo: we've got an NVMeT driver
16:36:27 Rambo: https://en.wikipedia.org/wiki/NVM_Express
16:36:47 oh
16:36:51 Rambo: it requires NVMe SSDs and NICs with RDMA support
16:36:58 We can take that discussion to the channel after the meeting.
16:37:06 Sorry, I don't know it
16:37:22 ok
16:37:22 jungleboyj: ++
16:37:29 #topic SPDK NVMf target and volume drivers in Rocky (Stein?) release as part of NVMf support
16:37:36 mszwed: All you.
16:37:41 I've been working on Storage Performance Development Kit (SPDK) NVMe-oF volume and target drivers as an extension of the kernel NVMf driver which was merged not so long ago. I would like to propose adding these drivers to the kernel NVMf blueprint and releasing them with Rocky.
16:38:05 mszwed: It's way past the deadline for Rocky, unfortunately.
16:38:15 it's pretty late for it in Rocky
16:38:24 * e0ne likes NVMe stuff a lot
16:38:29 mszwed: Way too late.
16:38:31 #link https://releases.openstack.org/rocky/schedule.html#r-cinder-driver-deadline
16:38:36 I see
16:38:38 Would love to get it in early in Stein though.
16:38:39 mszwed: btw, do you have CI for it?
16:38:47 not yet
16:39:01 mszwed: we can't merge it without CI :(
16:39:10 e0ne:
16:39:12 ++
16:39:15 but I was talking with the people responsible for NVMeT and they want to cooperate to make CI for the SPDK drivers
16:39:17 e0ne: we decided to share the MLNX CI
16:39:41 tushargohad: cool!
16:39:54 e0ne, smcginnis: the patches have been up for review for quite a while
16:40:08 mszwed: Have you read through the docs here? https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver
16:40:14 and we were wondering if this can be treated as an alternative NVMe-oF implementation
16:40:19 smcginnis: yes
16:40:50 mszwed: I would focus on that CI requirement then, and then hopefully we can get it in early in Stein.
16:40:55 e0ne, smcginnis: mszwed developed this in parallel with the NVMeT driver; it just never made it to review
16:40:59 smcginnis: ++
16:41:43 tushargohad: you know, I really want to get it merged, but the deadline was more than a month ago
16:41:47 tushargohad: Ok, but there is no CI and this was not held up as a driver to get on our review list.
16:42:00 https://etherpad.openstack.org/p/cinder-spec-review-tracking
16:42:03 #link https://etherpad.openstack.org/p/cinder-spec-review-tracking
16:42:08 tushargohad: I'll be happy to help you and mszwed get it merged early in Stein
16:42:17 Some review can happen beforehand, but until there is at least a sign of CI being set up, it's not likely to get much attention.
16:42:28 smcginnis: +1
16:42:35 So, we can get it on that list now and work to get it into Stein.
16:42:52 List or not, the documentation for adding a new driver needs to be followed.
16:42:56 smcginnis, jungleboyj, e0ne: sounds good. Let me review the CI status -- the last time the MLNX folks, e0ne, and us at Intel reviewed this, MLNX CI was what was going to be used
16:43:02 smcginnis: ++
16:43:22 tushargohad: Ok. Thanks.
16:43:39 we may not have a gap on the CI side unless there is a requirement specifically for the SPDK implementation (technically there isn't -- kernel and SPDK implementations can be tested in the same setup)
16:44:02 yes, they follow the same rules
16:44:25 so it should work with both implementations
16:44:36 A CI needs to run a test with a running deployment using that configuration.
16:45:28 OK, let's (me and tushargohad) work on CI. Is their a chance to still have it in Rocky if we have CI up and running, let's say, next week?
16:45:29 smcginnis: ok thanks ... mszwed and I will review
16:45:40 mszwed: No
16:45:47 *their/there
16:45:58 tushargohad: No.
16:46:10 Too large a change to get in at this point in the process.
16:46:25 Too far past the published deadline.
16:46:26 ok
16:47:01 jungleboyj, smcginnis: ok, no problem -- we'll appreciate it if this gets reviews ahead of Stein
16:47:13 mszwed: We will be publishing the schedule for Stein before too long, so you will want to pay attention to the new driver deadline.
16:47:41 It should get more review attention once the driver requirements are met.
16:47:49 smcginnis: ++
16:47:53 Ok, so moving on.
16:47:54 jungleboyj: Ok, I will do
16:48:01 jungleboyj: let's mention the deadline for target drivers, the same as for drivers
16:48:12 #topic Open Discussion
16:48:17 e0ne: ++
16:48:25 Anyone have other topics for today?
16:49:23 Going ... going ....
16:49:40 Gone.
16:49:52 see you next week!
16:49:59 Ok, so let's wrap up. Sean and I will work on getting those last features in.
16:50:08 Please start testing and fixing bugs.
16:50:17 Look forward to talking to you all next week.
16:50:24 #endmeeting