14:02:11 <jbernard> #startmeeting cinder
14:02:11 <opendevmeet> Meeting started Wed Jul  3 14:02:11 2024 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:11 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:11 <opendevmeet> The meeting name has been set to 'cinder'
14:02:13 <Luzi> o/
14:02:15 <jbernard> #topic roll call
14:02:17 <simondodsley> o/
14:02:21 <jbernard> o/ hello everyone
14:02:24 <whoami-rajat> hi
14:02:31 <eharney> hi
14:02:35 <msaravan> hi
14:02:46 <tosky> o/
14:03:27 <rosmaita> o/
14:05:12 <jbernard> ok, lets see
14:05:17 <jbernard> #announcements
14:05:31 <jbernard> this week is the D-2 milestone
14:05:53 <jbernard> we have extended the spec freeze deadline
14:06:01 * jbernard digs for the link
14:06:25 <jbernard> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/4VF7SOL2RFA2PYE5LUM6WKLP6J6NH3MR/
14:06:49 <jbernard> i added the weekend to account for the holidays for some this week
14:07:02 <jbernard> certainly for me, this week is complete chaos over here
14:07:03 <whoami-rajat> i think we meant to say July 07th right?
14:07:18 <jbernard> yes, what did we say?
14:07:26 <whoami-rajat> June 7th?
14:07:34 <jbernard> yep, i blew it :P
14:07:40 <whoami-rajat> :D
14:07:45 <jbernard> s/June/July
14:07:51 <ybenshim> join #openstack-meeting-alt
14:08:01 <jbernard> ybenshim: you're already here :)
14:08:13 <whoami-rajat> no worries, it should be clear enough, or we can add a reply to that thread correcting it
14:08:18 <jbernard> #action update the list to correct spec freeze typo
14:09:06 <jbernard> also
14:09:26 <jbernard> this week, according to our schedule, is new driver merge deadline
14:09:57 <jbernard> we currently have 2 proposed:
14:10:09 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/921020
14:10:12 <jbernard> ^ ceastor
14:10:14 <jbernard> and
14:10:20 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/923016
14:10:24 <jbernard> ^ zte vstorage
14:10:40 <jbernard> there has not been any response on ceastor
14:10:51 <jbernard> and there is currently no CI
14:11:20 <whoami-rajat> i don't see CI on ZTE either
14:11:25 <jbernard> nor do i
14:12:08 <jbernard> we may miss the deadline for those on this cycle
14:12:59 <jbernard> and we can begin looking at E for these, that should be enough time to implement CI
14:13:51 <jbernard> there is one other question raised related to new drivers, from raghavendra
14:13:55 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/919424
14:14:17 <jbernard> ^ this adds NVMe FC support to the HPE 3par driver
14:14:33 <jbernard> the question was whether this is a new driver, or an addition to an existing one
14:14:51 <jbernard> i believe this one does have CI coverage from HEP
14:14:56 <jbernard> s/HEP/HPE
14:16:16 <simondodsley> This isn't technically a new driver, just adding a new dataplane to existing driver code, so as long as there is a CI reporting for the new dataplane it should be OK
14:16:17 <jbernard> i think since the patch adds an nvme.py file to drivers, it looks like a new driver to me
14:16:30 <jbernard> simondodsley: ok, im fine with that
14:16:50 <jbernard> either way, it's passing unit tests and 3rd party CI
14:16:58 <whoami-rajat> do we support nvme FC ? -- from os-brick perspective
14:17:36 <jbernard> whoami-rajat: an excellent question :)
14:18:22 <jbernard> im assuming some baseline amount of functionality is present, else their CI would not be passing
14:18:37 <simondodsley> no - there is no support for nvme-fc in os-brick
14:18:55 <simondodsley> so actually that driver can't work
14:19:46 <jbernard> is there a related patch to os-brick for that? I'm confused how this is expected to merge without os-brick support
14:19:49 <whoami-rajat> their CI is passing, but clicking on the logs redirects to a github repo which has no content
14:20:06 <simondodsley> there is no os-brick patch for nvme-fc.
14:20:32 <jbernard> then there you have it :)
14:20:59 <jbernard> #action add comment to 3par patch (nvme-fc) to request os-brick support and CI coverage
14:20:59 <simondodsley> Pure are providing Red Hat with an NVMe-FC capable array so they can develop one if they want, but currently Gorka is not allowed to develop it, even though he wants to
14:21:48 <raghavendrat> ok
14:21:55 <simondodsley> there are specific nvme calls for nvme-fc that os-brick does not yet have. Pure are waiting on this too
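[Note for readers of this log: attach/detach in OpenStack is handled by os-brick connector classes that are looked up by protocol on the host side. A minimal sketch of that lookup, assuming os-brick raises for a protocol it ships no connector for; the 'nvme_fc' protocol string below is hypothetical:

    from os_brick.initiator import connector

    # Fibre Channel has a connector in os-brick, so this lookup succeeds.
    fc = connector.InitiatorConnector.factory('FIBRE_CHANNEL', root_helper='sudo')

    # There is currently no NVMe-over-FC connector, so a lookup for such a
    # protocol is expected to raise (protocol name is an assumption here).
    try:
        connector.InitiatorConnector.factory('nvme_fc', root_helper='sudo')
    except Exception as exc:
        print('no connector available: %s' % exc)

Without such a connector, a backend driver alone cannot provide working attach/detach, which is the gap being discussed here.]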
14:23:06 <jbernard> simondodsley: thanks for the context
14:23:43 <jbernard> #topic driver deprecations
14:23:56 <jbernard> a couple of drivers have been deprecated
14:24:07 <jbernard> most notably the gluster driver
14:24:15 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/923163
14:24:31 <eharney> yeah just wanted to mention this, it's the gluster backup driver
14:24:36 <eharney> we got rid of the volume driver a while back
14:24:55 <eharney> seems like a good time to do this unless anyone has a reason not to
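[Note for readers of this log: deprecating a backup driver normally means it keeps working for one more cycle while warning operators at load time, plus a release note; removal comes later. A rough sketch of the usual warning pattern, assuming the oslo versionutils helper; the class below mirrors the existing GlusterFS backup driver but is illustrative, not the content of the linked review:

    from oslo_log import log as logging
    from oslo_log import versionutils

    from cinder.backup.drivers import posix

    LOG = logging.getLogger(__name__)

    class GlusterfsBackupDriver(posix.PosixBackupDriver):
        def __init__(self, context):
            versionutils.report_deprecated_feature(
                LOG,
                'The GlusterFS backup driver is deprecated and will be '
                'removed in a future release.')
            # ... existing mount/setup logic unchanged ...
            super().__init__(context)
]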
14:25:56 <tosky> simondodsley: a clarification about that - are you sure it's Gorka not "allowed" to?
14:26:46 <simondodsley> Red Hat won't give him the time to do it as they don't believe there are any customers requesting this protocol. He even went and spent his own money on a Gen5 NVMe-FC card
14:26:47 <jbernard> eharney: +1
14:27:37 <tosky> simondodsley: that's not the story, but I'd advise talking with him
14:27:49 <tosky> talking again
14:28:01 <simondodsley> that's what he told me a few months ago
14:28:11 <simondodsley> i'll re chat with him
14:29:05 <geguileo> I think I was summoned...
14:29:10 <geguileo> What's up?
14:29:39 <simondodsley> status of os-brick for nvme-fc please...
14:29:51 <jbernard> geguileo: a question came up regarding support in os-brick for NVMe-FC
14:30:07 <geguileo> Oh yeah, it's not supported
14:30:36 <geguileo> I bought an HBA that supported it and could connect one port to the other to work on this
14:30:45 <jbernard> is there any interest or motivation from others to add support? (in your view)
14:30:46 <geguileo> But unfortunately priorities didn't help
14:31:07 <rosmaita> geguileo: so, in your opinion, a third party CI running nvme-fc is probably not really working if it doesn't include os-brick patches
14:31:12 <geguileo> jbernard: I think customers will eventually ask for it
14:31:29 <geguileo> rosmaita: It cannot work, because it cannot attach volumes
14:31:33 <jbernard> geguileo: curious, which HBA did you get?
14:31:39 <simondodsley> exactly
14:31:56 <geguileo> jbernard: Once customers start asking for it there will be a justification for me to work on this instead of the other things customers are asking for
14:32:41 <geguileo> jbernard: HPE SN1100Q 16Gb 2p FC HBA
14:33:08 <jbernard> raghavendrat: are there any efforts on your side to work on this? - as it will be needed in order to land the existing patch to 3par
14:33:33 <jbernard> geguileo: thanks
14:33:39 <geguileo> Right now, if none of our customers are going to use it, then it's not a priority, and I don't have enough cycles right now to work on things just because I want to
14:34:00 <jbernard> that's understandable
14:34:17 <jbernard> i'm hoping to capture the current state as best we can in this meeting so we can refer back to it later
14:35:23 <jbernard> one more minute and we can move onto review requests
14:35:26 <simondodsley> as i said Pure are in the process of providing a storage array to Red Hat free of charge that is NVMe-FC capable to help with this effort when it gets started
14:37:19 <raghavendrat> if i understand correctly, with nvme-fc, only create/delete volume will work
14:37:34 <raghavendrat> attach/detach volume won't work. right ?
14:37:41 <geguileo> raghavendrat: Correct
14:37:55 <geguileo> Creating a snapshot will also work, although it won't do any good since it won't have any data
14:38:03 <geguileo> Manage/unmanage will work as well
14:38:25 <raghavendrat> for now, HPE is ok with basic nvme-fc support... we can look at attach/detach later.
14:38:26 <geguileo> Basically only things unrelated to nvme-fc will work
14:38:59 <geguileo> raghavendrat: The thing is that it doesn't make sense to have an NVMe-FC driver in Cinder when OpenStack doesn't support the technology
14:39:14 <geguileo> It would lead to confusion
14:40:14 <jbernard> raghavendrat: we need to have basic functionality, which includes attach and detach, to be able to meet the bar for inclusion; that's my view at least
14:40:17 <geguileo> Though it'd be good to hear other cores opinions
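[Note for readers of this log: the split described here maps onto the Cinder volume driver interface: control-plane calls (create/delete/snapshot/manage) are completed by the backend itself, while initialize_connection() is where os-brick has to take over on the host. A hypothetical skeleton to illustrate; the driver name and the driver_volume_type string are assumptions, not part of the proposed patch:

    from cinder.volume import driver

    class HypotheticalNVMeFCDriver(driver.VolumeDriver):
        def create_volume(self, volume):
            # Control plane only: talks to the array API, works without os-brick.
            ...

        def initialize_connection(self, volume, connector):
            # Data path: Nova/os-brick must have a connector matching the
            # driver_volume_type returned here, otherwise attach fails.
            return {'driver_volume_type': 'nvme_fc',  # assumed string
                    'data': {}}  # target details omitted

This is why attach/detach, part of the minimum bar for a new driver, cannot work until os-brick gains NVMe-FC support.]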
14:40:32 <whoami-rajat> raghavendrat, so the CI job excludes tests related to attachment? the logs are currently unavailable on the git repo mentioned in the CI job, so it would be better if we can upload them in a similar format as other CIs
14:40:33 <msaravan> A couple of NetApp customers are waiting for Cinder NVMe FCP support
14:40:35 <geguileo> jbernard: Yeah, my view as well
14:40:55 <geguileo> msaravan: Are they RH customers?
14:41:19 <geguileo> Because last time I asked Pure's customers interested in it were not RH customers
14:41:21 <whoami-rajat> attach detach are mandatory operations for a driver to merge https://docs.openstack.org/cinder/latest/reference/support-matrix.html
14:41:24 <msaravan> One is RH, and the other is using Juju
14:41:35 <raghavendrat> thanks geguileo: for sharing details... whoami-rajat: i will look at CI again
14:42:04 <msaravan> @geguileo : I can share more details offline if required..
14:42:22 <geguileo> msaravan: let's talk offline to see if we can get some traction
14:42:28 <geguileo> offline == after meeting
14:42:41 <msaravan> @geguileo : sure, thank you..
14:44:49 <raghavendrat> thanks jbernard and simondodsley
14:45:31 <jbernard> raghavendrat: no problem, it would be nice to see these pieces come together
14:45:55 <jbernard> #topic review requests
14:46:05 <jbernard> there are a few:
14:46:15 <jbernard> #link https://etherpad.opendev.org/p/cinder-dalmatian-meetings#L101
14:46:46 <jbernard> feel free to raise any specific points on any of them
14:47:10 <jbernard> if anyone has cycles, these are in need of attention this week
14:48:31 <jbernard> Luzi: your specs are on my list as well
14:48:38 <Luzi> thank you
14:48:42 <eharney> it's probably worth mentioning that we have patches for a CVE working their way through the gates this morning:  https://bugs.launchpad.net/cinder/+bug/2059809
14:49:53 <jbernard> eharney: oh yes!
14:50:23 <jbernard> I don't think there is anything actionable remaining on the cinder side, just something to be aware of
14:50:40 <eharney> just baby-sitting gate failures
14:51:01 <jbernard> and thanks to rosmaita and eharney for working on that
14:51:25 <jbernard> #topic open discussion
14:52:46 <zaitcev> jbernard: how do I make sure that my spec is targeted for D-2? https://review.opendev.org/c/openstack/cinder-specs/+/862601
14:53:33 <whoami-rajat> just wanted to highlight my patch, it's been sitting for a while, it's a small one, already has a +2, and is needed for SDK support of default volume types https://review.opendev.org/c/openstack/cinder/+/920308
14:53:41 <andrewbonney> If anyone has a chance, I'd appreciate any thoughts on https://bugs.launchpad.net/cinder/+bug/2070475, just to identify if it looks like a bug or a deployment issue
14:53:53 <whoami-rajat> shouldn't take more than 2 mins for a core to approve :D
14:56:36 <jbernard> zaitcev: i think being targeted to D means that it is subject to the milestones and schedule for the current release
14:56:44 <jbernard> zaitcev: does that answer your question?
14:57:43 <zaitcev> jbernard: So, what is left for me to do? Just ask for more (re-)reviews?
14:57:49 <jbernard> andrewbonney: can you add that to the etherpad?
14:57:57 <jbernard> andrewbonney: https://etherpad.opendev.org/p/cinder-dalmatian-meetings#L101
14:58:17 <jbernard> andrewbonney: even though it's not a patch, it helps to have those requests all in one place
14:58:41 <andrewbonney> Sure, would it be best under this week or a future one?
14:58:50 <jbernard> zaitcev: beg, barter, pester, whatever you need to do :)
14:59:36 <jbernard> andrewbonney: this one is fine, just so that it has more visibility
14:59:49 <andrewbonney> Ok
15:00:21 <jbernard> andrewbonney: at least for me, i go back through the etherpad and action items later in the week
15:00:41 <jbernard> ok, that's time!
15:00:46 <jbernard> thanks everyone
15:00:49 <jbernard> #endmeeting