14:02:11 #startmeeting cinder
14:02:11 Meeting started Wed Jul 3 14:02:11 2024 UTC and is due to finish in 60 minutes. The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:11 The meeting name has been set to 'cinder'
14:02:13 o/
14:02:15 #topic roll call
14:02:17 o/
14:02:21 o/ hello everyone
14:02:24 hi
14:02:31 hi
14:02:35 hi
14:02:46 o/
14:03:27 o/
14:05:12 ok, let's see
14:05:17 #announcements
14:05:31 this week is the D-2 milestone
14:05:53 we have extended the spec freeze deadline
14:06:01 * jbernard digs for the link
14:06:25 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/4VF7SOL2RFA2PYE5LUM6WKLP6J6NH3MR/
14:06:49 i added the weekend to account for the holidays for some this week
14:07:02 certainly myself, this week is complete chaos over here
14:07:03 i think we meant to say July 07th right?
14:07:18 yes, what did we say?
14:07:26 June 7th?
14:07:34 yep, i blew it :P
14:07:40 :D
14:07:45 s/June/July
14:07:51 join #openstack-meeting-alt
14:08:01 ybenshim: you're already here :)
14:08:13 no worries, it should be clear enough, or we can add a reply to that thread correcting it
14:08:18 #action update the list to correct spec freeze typo
14:09:06 also
14:09:26 this week, according to our schedule, is the new driver merge deadline
14:09:57 we currently have 2 proposed:
14:10:09 #link https://review.opendev.org/c/openstack/cinder/+/921020
14:10:12 ^ ceastor
14:10:14 and
14:10:20 #link https://review.opendev.org/c/openstack/cinder/+/923016
14:10:24 ^ zte vstorage
14:10:40 there has not been any response on ceastor
14:10:51 and there is currently no CI
14:11:20 i don't see CI on ZTE either
14:11:25 nor do i
14:12:08 we may miss the deadline for those on this cycle
14:12:59 and we can begin looking at E for these, that should be enough time to implement CI
14:13:51 there is one other question raised related to new drivers, from raghavendra
14:13:55 #link https://review.opendev.org/c/openstack/cinder/+/919424
14:14:17 ^ this adds NVMe FC support to the HPE 3par driver
14:14:33 the question was whether or not this is a new driver, or an addition to an existing one
14:14:51 i believe this one does have CI coverage from HEP
14:14:56 s/HEP/HPE
14:16:16 This isn't technically a new driver, just adding a new dataplane to existing driver code, so as long as there is a CI reporting for the new dataplane it should be OK
14:16:17 i think as the patch is adding an nvme.py file to drivers, it looks like a new driver to me
14:16:30 simondodsley: ok, i'm fine with that
14:16:50 either way, it's passing unit tests and 3rd party CI
14:16:58 do we support nvme FC? -- from an os-brick perspective
14:17:36 whoami-rajat: an excellent question :)
14:18:22 i'm assuming some baseline amount of functionality is present, else their CI would not be passing
14:18:37 no - there is no support for nvme-fc in os-brick
14:18:55 so actually that driver can't work
14:19:46 is there a related patch to os-brick for that? I'm confused how this is expected to merge without os-brick support
14:19:49 their CI is passing, but clicking on the logs redirects to a github repo which has no content
14:20:06 there is no os-brick patch for nvme-fc.
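
[Context on the above: os-brick builds attach support out of per-protocol connector classes, so a driver whose transport has no connector class cannot attach volumes regardless of what the array-side code does. A minimal sketch of the consumer side, assuming os-brick's public InitiatorConnector.factory() API; the 'NVME_FC' protocol name is hypothetical:]

    # Nova/Cinder obtain an attach connector by protocol name; only
    # protocols with a registered connector class can be built.
    from os_brick.initiator import connector

    fc = connector.InitiatorConnector.factory(
        'FIBRE_CHANNEL',        # SCSI over FC -- supported today
        root_helper='sudo')

    # There is no NVMe-FC connector class registered, so a hypothetical
    # protocol name such as 'NVME_FC' has nothing to resolve to and the
    # factory call fails -- hence "that driver can't work" for
    # attach/detach:
    # connector.InitiatorConnector.factory('NVME_FC', root_helper='sudo')
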
14:20:32 then there you have it :)
14:20:59 #action add comment to 3par patch (nvme-fc) to request os-brick support and CI coverage
14:20:59 Pure are providing Red Hat with an NVMe-FC capable array so they can develop one if they want, but currently Gorka is not allowed to develop it, even though he wants to
14:21:48 ok
14:21:55 there are specific nvme calls for nvme-fc that os-brick does not yet have. Pure are waiting on this too
14:23:06 simondodsley: thanks for the context
14:23:43 #topics
14:23:56 a couple of drivers have been deprecated
14:24:07 most notably the gluster driver
14:24:15 #link https://review.opendev.org/c/openstack/cinder/+/923163
14:24:31 yeah, just wanted to mention this, it's the gluster backup driver
14:24:36 we got rid of the volume driver a while back
14:24:55 seems like a good time to do this unless anyone has a reason not to
14:25:56 simondodsley: a clarification about that - are you sure it's that Gorka is not "allowed" to?
14:26:46 Red Hat won't give him the time to do it as they don't believe there are any customers requesting this protocol. He even went and spent his own money on a Gen5 NVMe-FC card
14:26:47 eharney: +1
14:27:37 simondodsley: that's not the story, but I'd advise you to talk with him
14:27:49 talking again
14:28:01 that's what he told me a few months ago
14:28:11 i'll re-chat with him
14:29:05 I think I was summoned...
14:29:10 What's up?
14:29:39 status of os-brick for nvme-fc please...
14:29:51 geguileo: a question came up regarding support in os-brick for NVMe-FC
14:30:07 Oh yeah, it's not supported
14:30:36 I bought an HBA that supported it and could connect one port to the other to work on this
14:30:45 is there any interest or motivation from others to add support? (in your view)
14:30:46 But unfortunately priorities didn't help
14:31:07 geguileo: so, in your opinion, a third party CI running nvme-fc is probably not really working if it doesn't include os-brick patches
14:31:12 jbernard: I think customers will eventually ask for it
14:31:29 rosmaita: It cannot work, because it cannot attach volumes
14:31:33 geguileo: curious, which HBA did you get?
14:31:39 exactly
14:31:56 jbernard: Once customers start asking for it there will be a justification for me to work on this instead of the other things customers are asking for
14:32:41 jbernard: HPE SN1100Q 16Gb 2p FC HBA
14:33:08 raghavendrat: are there any efforts on your side to work on this? - as it will be needed in order to land the existing patch to 3par
14:33:33 geguileo: thanks
14:33:39 Right now, if none of our customers are going to use it, then it's not a priority, and I don't have enough cycles right now to work on things just because I want to
14:34:00 that's understandable
14:34:17 i'm hoping to capture the current state as best we can in this meeting so we can refer back later
14:35:23 one more minute and we can move on to review requests
14:35:26 as i said, Pure are in the process of providing a storage array to Red Hat free of charge that is NVMe-FC capable to help with this effort when it gets started
14:37:19 if i understand correctly, with nvme-fc, only create/delete volume will work
14:37:34 attach/detach volume won't work. right?
14:37:41 raghavendrat: Correct
14:37:55 Creating a snapshot will also work, although it won't do any good since it won't have any data
14:38:03 Manage/unmanage will work as well
14:38:25 for now, HPE is ok with basic nvme-fc support... we can look at attach/detach later.
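
[A sketch of why the split falls where it does: create/delete/snapshot/manage are array-management calls, while attach is a handshake between the driver and os-brick. Method names follow Cinder's volume driver interface; the class name, 'nvme_fc' connection type, and data keys below are hypothetical:]

    # Hypothetical driver stub illustrating the attach contract.
    class HPE3PARNVMeFCDriver(object):      # hypothetical name

        def create_volume(self, volume):
            # Talks only to the array's management API -- no host-side
            # connector involved, so this works without os-brick changes.
            ...

        def initialize_connection(self, volume, connector):
            # The 'driver_volume_type' returned here must match a
            # connector os-brick can build; no such type exists for
            # NVMe-FC, so Nova could never complete an attachment.
            return {'driver_volume_type': 'nvme_fc',  # unsupported today
                    'data': {'target_nqn': '...'}}    # illustrative only
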
14:38:26 Basically only things unrelated to nvme-fc will work
14:38:59 raghavendrat: The thing is that it doesn't make sense to have an NVMe-FC driver in Cinder when OpenStack doesn't support the technology
14:39:14 It would lead to confusion
14:40:14 raghavendrat: we need to have basic functionality, which includes attach and detach, to be able to meet the bar for inclusion; that's my view at least
14:40:17 Though it'd be good to hear other cores' opinions
14:40:32 raghavendrat, so the CI job excludes tests related to attachment? the logs are currently unavailable on the git repo mentioned in the CI job, so it would be better if we can upload them in a similar format to other CIs
14:40:33 A couple of NetApp customers are waiting for Cinder NVMe FCP
14:40:35 jbernard: Yeah, my view as well
14:40:55 msaravan: Are they RH customers?
14:41:19 Because last time I asked, Pure's customers interested in it were not RH customers
14:41:21 attach/detach are mandatory operations for a driver to merge https://docs.openstack.org/cinder/latest/reference/support-matrix.html
14:41:24 One is RH, and the other is using Juju
14:41:35 thanks geguileo: for sharing details... whoami-rajat: i will look at the CI again
14:42:04 @geguileo: I can share more details offline if required..
14:42:22 msaravan: let's talk offline to see if we can get some traction
14:42:28 offline == after the meeting
14:42:41 @geguileo: sure, thank you..
14:44:49 thanks jbernard and simondodsley
14:45:31 raghavendrat: no problem, it would be nice to see these pieces come together
14:45:55 #topic review requests
14:46:05 there are a few:
14:46:15 #link https://etherpad.opendev.org/p/cinder-dalmatian-meetings#L101
14:46:46 feel free to raise any specific points on any of them
14:47:10 if anyone has cycles, these are in need of attention this week
14:48:31 Luzi: your specs are on my list as well
14:48:38 thank you
14:48:42 it's probably worth mentioning that we have patches for a CVE working their way through the gates this morning: https://bugs.launchpad.net/cinder/+bug/2059809
14:49:53 eharney: oh yes!
14:50:23 I don't think there is anything actionable remaining from cinder, just something to be aware of
14:50:40 just baby-sitting gate failures
14:51:01 and thanks to rosmaita and eharney for working on that
14:51:25 #topic open discussion
14:52:46 jbernard: how do I make sure that my spec is targeted for D-2? https://review.opendev.org/c/openstack/cinder-specs/+/862601
14:53:33 just wanted to highlight my patch, been sitting for a while, it's a small one, already has a +2, and is needed for SDK support of default volume types https://review.opendev.org/c/openstack/cinder/+/920308
14:53:41 If anyone has a chance, I'd appreciate any thoughts on https://bugs.launchpad.net/cinder/+bug/2070475, just to identify if it looks like a bug or a deployment issue
14:53:53 shouldn't take more than 2 mins for a core to approve :D
14:56:36 zaitcev: i think being targeted to D means that it is subject to the milestones and schedule for the current release
14:56:44 zaitcev: does that answer your question?
14:57:43 jbernard: So, what is left for me to do? Just ask for more (re-)reviews?
14:57:49 andrewbonney: can you add that to the etherpad?
14:57:57 andrewbonney: https://etherpad.opendev.org/p/cinder-dalmatian-meetings#L101
14:58:17 andrewbonney: even though it's not a patch, it helps to have those requests all in one place
14:58:41 Sure, would it be best under this week or a future one?
14:58:50 zaitcev: beg, barter, pester, whatever you need to do :)
14:59:36 andrewbonney: this one is fine, just so that it has more visibility
14:59:49 Ok
15:00:21 andrewbonney: at least for me, i go back through the etherpad and action items later in the week
15:00:41 ok, that's time!
15:00:46 thanks everyone
15:00:49 #endmeeting