raghavendrat | hi | 14:01 |
ccokeke[m] | hello | 14:02 |
akawai | o/ | 14:02 |
jbernard | #startmeeting cinder | 14:02 |
opendevmeet | Meeting started Wed Jul 3 14:02:11 2024 UTC and is due to finish in 60 minutes. The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:02 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:02 |
opendevmeet | The meeting name has been set to 'cinder' | 14:02 |
Luzi | o/ | 14:02 |
jbernard | #topic roll call | 14:02 |
simondodsley | o/ | 14:02 |
jbernard | o/ hello everyone | 14:02 |
whoami-rajat | hi | 14:02 |
eharney | hi | 14:02 |
msaravan | hi | 14:02 |
tosky | o/ | 14:02 |
rosmaita | o/ | 14:03 |
jbernard | ok, lets see | 14:05 |
jbernard | #announcements | 14:05 |
jbernard | this week is the D-2 milestone | 14:05 |
jbernard | we have extended the spec freeze deadline | 14:05 |
* jbernard digs for the link | 14:06 | |
jbernard | #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/4VF7SOL2RFA2PYE5LUM6WKLP6J6NH3MR/ | 14:06 |
jbernard | i added the weekend to account for the holidays for some this week | 14:06 |
jbernard | certainly myself, this week is complete chaos over here | 14:07 |
whoami-rajat | i think we meant to say July 07th right? | 14:07 |
jbernard | yes, what did we say? | 14:07 |
whoami-rajat | June 7th? | 14:07 |
jbernard | yep, i blew it :P | 14:07 |
whoami-rajat | :D | 14:07 |
jbernard | s/June/July | 14:07 |
ybenshim | join #openstack-meeting-alt | 14:07 |
jbernard | ybenshim: you're already here :) | 14:08 |
whoami-rajat | no worries, it should be clear enough, or we can add a reply to that thread correcting it | 14:08 |
jbernard | #action update the list to correct spec freeze typo | 14:08 |
jbernard | also | 14:09 |
jbernard | this week, according to our schedule, is the new driver merge deadline | 14:09 |
jbernard | we currently have 2 proposed: | 14:09 |
jbernard | #link https://review.opendev.org/c/openstack/cinder/+/921020 | 14:10 |
jbernard | ^ ceastor | 14:10 |
jbernard | and | 14:10 |
jbernard | #link https://review.opendev.org/c/openstack/cinder/+/923016 | 14:10 |
jbernard | ^ zte vstorage | 14:10 |
jbernard | there has not been any response on ceastor | 14:10 |
jbernard | and there is currently no CI | 14:10 |
whoami-rajat | i don't see CI on ZTE either | 14:11 |
jbernard | nor do i | 14:11 |
jbernard | we may miss the deadline for those on this cycle | 14:12 |
jbernard | and we can begin looking at E for these, which should be enough time to implement CI | 14:12 |
jbernard | there is one other question raised related to new drivers, from raghavendra | 14:13 |
jbernard | #link https://review.opendev.org/c/openstack/cinder/+/919424 | 14:13 |
jbernard | ^ this adds NVMe FC support to the HPE 3par driver | 14:14 |
jbernard | the question was whether or not this is a new driver, or an addition to an existing one | 14:14 |
jbernard | i believe this one does have CI coverage from HEP | 14:14 |
jbernard | s/HEP/HPE | 14:14 |
simondodsley | This isn't technically a new driver, just adding a new dataplane to existing driver code, so as long as there is a CI reporting for the new dataplane it should be OK | 14:16 |
jbernard | i think as the patch is adding an nvme.py file to drivers, it looks like a new driver to me | 14:16 |
jbernard | simondodsley: ok, im fine with that | 14:16 |
jbernard | either way, it's passing unit tests and 3rd party CI | 14:16 |
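[Editor's aside: for context on the new-dataplane-vs-new-driver question above, here is a minimal, hypothetical sketch — not the actual HPE patch or real Cinder class names — of the pattern simondodsley describes: per-transport driver classes live in separate files but reuse one shared backend class, so a new nvme.py adds a dataplane rather than a new backend.]

```python
# Hypothetical layout, for illustration only -- not the real HPE 3PAR modules.
# The "new dataplane" pattern: one class holds the array management logic,
# and thin per-transport driver classes differ mainly in the connection info
# they hand back for attach.

class ExampleArrayCommon:
    """Shared backend logic reused by every transport variant."""

    def create_volume(self, name, size_gb):
        # Would call the array's management API; stubbed here.
        return {'name': name, 'size': size_gb}


class ExampleFCDriver:
    """Existing FC dataplane (what an fc.py module would hold)."""

    def __init__(self):
        self.common = ExampleArrayCommon()

    def initialize_connection(self, volume, connector):
        return {'driver_volume_type': 'fibre_channel', 'data': {}}


class ExampleNVMeFCDriver:
    """Proposed NVMe-FC dataplane (the new nvme.py in the review).

    Same backend logic via ExampleArrayCommon; only the transport differs.
    Attach still needs a matching os-brick connector (discussed below).
    """

    def __init__(self):
        self.common = ExampleArrayCommon()

    def initialize_connection(self, volume, connector):
        # Transport name is illustrative; NVMe-FC has no established value yet.
        return {'driver_volume_type': 'nvmeof', 'data': {}}
```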
whoami-rajat | do we support nvme FC ? -- from os-brick perspective | 14:16 |
jbernard | whoami-rajat: an excellent question :) | 14:17 |
jbernard | im assuming some baseline amount of functionality is present, else their CI would not be passing | 14:18 |
simondodsley | no - there is no support for nvme-fc in os-brick | 14:18 |
simondodsley | so actually that driver can't work | 14:18 |
jbernard | is there a related patch to os-brick for that? I'm confused how this is expected to merge without os-brick support | 14:19 |
whoami-rajat | their CI is passing, but clicking on the logs redirects to a github repo which has no content | 14:19 |
simondodsley | there is no os-brick patch for nvme-fc. | 14:20 |
jbernard | then there you have it :) | 14:20 |
jbernard | #action add comment to 3par patch (nvme-fc) to request os-brick support and CI coverage | 14:20 |
simondodsley | Pure are providing Red Hat with an NVMe-FC capable array so they can develop one if they want, but currently Gorka is not allowed to develop it, even though he wants to | 14:20 |
raghavendrat | ok | 14:21 |
simondodsley | there are specific nvme calls for nvme-fc that os-brick does not yet have. Pure are waiting on this too | 14:21 |
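[Editor's aside: to make geguileo's and simondodsley's point concrete, a rough, purely illustrative sketch — not os-brick's real code or API — of why such a backend can create volumes but not attach them: the connection info a driver returns has to resolve to a host-side connector for that transport, and no NVMe-FC connector exists yet.]

```python
# Illustrative only -- a simplified stand-in for os-brick's connector lookup,
# not its actual API. Attaching a volume means resolving the transport named
# in the driver's connection info to a connector that can find the device on
# the host; NVMe-FC has no such connector today.

SUPPORTED_TRANSPORTS = {
    'iscsi': 'ISCSIConnector',
    'fibre_channel': 'FibreChannelConnector',
    'nvmeof': 'NVMeOFConnector',   # NVMe over TCP/RDMA fabrics
    # No NVMe-FC entry: the FC-specific nvme calls simondodsley mentions
    # are not implemented, so there is nothing to resolve to.
}


def get_connector(connection_info):
    transport = connection_info['driver_volume_type']
    if transport not in SUPPORTED_TRANSPORTS:
        raise ValueError(f"unsupported transport: {transport}")
    return SUPPORTED_TRANSPORTS[transport]


# Create/delete/snapshot never hit this path, so they still "work";
# any attach attempt for an NVMe-FC backend would fail here:
# get_connector({'driver_volume_type': 'nvme_fc', 'data': {}})  # ValueError
```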
jbernard | simondodsley: thanks for the context | 14:23 |
jbernard | #topics | 14:23 |
jbernard | a couple of drivers have been deprecated | 14:23 |
jbernard | most notably the gluster driver | 14:24 |
jbernard | #link https://review.opendev.org/c/openstack/cinder/+/923163 | 14:24 |
eharney | yeah just wanted to mention this, it's the gluster backup driver | 14:24 |
eharney | we got rid of the volume driver a while back | 14:24 |
eharney | seems like a good time to do this unless anyone has a reason not to | 14:24 |
tosky | simondodsley: a clarification about that - are you sure it's Gorka not "allowed" to? | 14:25 |
simondodsley | Red Hat won't give him the time to do it as they don't believe there are any customers requesting this protocol. He even went and spent his own money on a Gen5 NVMe-FC card | 14:26 |
jbernard | eharney: +1 | 14:26 |
tosky | simondodsley: that's not the story, but I'd advise talking with him | 14:27 |
tosky | talking again | 14:27 |
simondodsley | that's what he told me a few months ago | 14:28 |
simondodsley | i'll re chat with him | 14:28 |
geguileo | I think I was summoned... | 14:29 |
geguileo | What's up? | 14:29 |
simondodsley | status of os-brick for nvme-fc please... | 14:29 |
jbernard | geguileo: a question came up regarding support in os-brick for NVMe-FC | 14:29 |
geguileo | Oh yeah, it's not supported | 14:30 |
geguileo | I bought an HBA that supported it and could connect one port to the other to work on this | 14:30 |
jbernard | is there any interest or motivation from others to add support? (in your view) | 14:30 |
geguileo | But unfortunately priorities didn't help | 14:30 |
rosmaita | geguileo: so, in your opinion, a third party CI running nvme-fc is probably not really working if it doesn't include os-brick patches | 14:31 |
geguileo | jbernard: I think customers will eventually ask for it | 14:31 |
geguileo | rosmaita: It cannot work, because it cannot attach volumes | 14:31 |
jbernard | geguileo: curious, which HBA did you get? | 14:31 |
simondodsley | exactly | 14:31 |
geguileo | jbernard: Once customers start asking for it there will be a justification for me to work on this instead of the other things customers are asking for | 14:31 |
geguileo | jbernard: HPE SN1100Q 16Gb 2p FC HBA | 14:32 |
jbernard | raghavendrat: are there any efforts on your side to work on this? - as it will be needed in order to land the existing patch to 3par | 14:33 |
jbernard | geguileo: thanks | 14:33 |
geguileo | Right now, if none of our customers are going to use it, then it's not a priority, and I don't have enough cycles right now to work on things just because I want to | 14:33 |
jbernard | that's understandable | 14:34 |
jbernard | im hoping to capture the current state as best we can in this meeting so we can refer back later | 14:34 |
jbernard | one more minute and we can move onto review requests | 14:35 |
simondodsley | as i said Pure are in the process of providing a storage array to Red Hat free of charge that is NVMe-FC capable to help with this effort when it gets started | 14:35 |
raghavendrat | if i understand correctly, with nvme-fc, only create/delete volume will work | 14:37 |
raghavendrat | attach/detach volume won't work. right ? | 14:37 |
geguileo | raghavendrat: Correct | 14:37 |
geguileo | Creating a snapshot will also work, although it won't do any good since it won't have any data | 14:37 |
geguileo | Manage/unmanage will work as well | 14:38 |
raghavendrat | for now, HPE is ok with basic nvme-fc support... we can look at attach/detach later. | 14:38 |
geguileo | Basically only things unrelated to nvme-fc will work | 14:38 |
geguileo | raghavendrat: The thing is that it doesn't make sense to have an NVMe-FC driver in Cinder when OpenStack doesn't support the technology | 14:38 |
geguileo | It would lead to confusion | 14:39 |
jbernard | raghavendrat: we need to have basic functionality, which includes attach and detach, to be able to meet the bar for inclusion; that's my view at least | 14:40 |
geguileo | Though it'd be good to hear other cores opinions | 14:40 |
whoami-rajat | raghavendrat, so the CI job excludes tests related to attachment? the logs are currently unavailable on the git repo mentioned in the CI job, so it would be better if you can upload them in a similar format as other CIs | 14:40 |
msaravan | A couple of NetApp customers are waiting for Cinder NVMe-FC support | 14:40 |
geguileo | jbernard: Yeah, my view as well | 14:40 |
geguileo | msaravan: Are they RH customers? | 14:40 |
geguileo | Because last time I asked Pure's customers interested in it were not RH customers | 14:41 |
whoami-rajat | attach/detach are mandatory operations for a driver to merge: https://docs.openstack.org/cinder/latest/reference/support-matrix.html | 14:41 |
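[Editor's aside: a paraphrased sketch of the core methods the support matrix expects a new driver to implement — the linked matrix is the authoritative list, and the names below are the usual Cinder driver interface rather than anything specific to this review. The point is that attach/detach map to initialize_connection/terminate_connection, which is exactly where missing os-brick NVMe-FC support bites.]

```python
# Paraphrased from the support matrix linked above -- check the matrix for
# the authoritative list. Attach/detach are part of the required surface,
# and those are the calls that need a working os-brick transport.

class MinimalVolumeDriverInterface:
    def create_volume(self, volume): ...
    def delete_volume(self, volume): ...
    def create_snapshot(self, snapshot): ...
    def delete_snapshot(self, snapshot): ...
    def create_volume_from_snapshot(self, volume, snapshot): ...
    def create_cloned_volume(self, volume, src_vref): ...
    def extend_volume(self, volume, new_size): ...

    def initialize_connection(self, volume, connector):
        """Attach: return connection info that os-brick can act on."""

    def terminate_connection(self, volume, connector, **kwargs):
        """Detach: clean up the export for this connector."""
```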
msaravan | One is RH, and the other is using Juju | 14:41 |
raghavendrat | thanks geguileo: for sharing details... whoami-rajat: i will look at CI again | 14:41 |
msaravan | @geguileo : I can share more details offline if required.. | 14:42 |
geguileo | msaravan: let's talk offline to see if we can get some traction | 14:42 |
geguileo | offline == after meeting | 14:42 |
msaravan | @geguileo : sure, thank you.. | 14:42 |
raghavendrat | thanks jbernard and simondodsley | 14:44 |
jbernard | raghavendrat: no problem, it would be nice to see these pieces come together | 14:45 |
jbernard | #topic review request | 14:45 |
jbernard | there are a few: | 14:46 |
jbernard | #link https://etherpad.opendev.org/p/cinder-dalmatian-meetings#L101 | 14:46 |
jbernard | feel free to raise any specific points on any of them | 14:46 |
jbernard | if anyone has cycles, these are in need of attention this week | 14:47 |
jbernard | Luzi: your specs are on my list as well | 14:48 |
Luzi | thank you | 14:48 |
eharney | it's probably worth mentioning that we have patches for a CVE working their way through the gates this morning: https://bugs.launchpad.net/cinder/+bug/2059809 | 14:48 |
jbernard | eharney: oh yes! | 14:49 |
jbernard | I don't think there is anything actionable remaining from cinder, just something to be aware of | 14:50 |
eharney | just baby-sitting gate failures | 14:50 |
jbernard | and thanks to rosmaita and eharney for working on that | 14:51 |
jbernard | #topic open discussion | 14:51 |
zaitcev | jbernard: how do I make sure that my spec is targeted for D-2? https://review.opendev.org/c/openstack/cinder-specs/+/862601 | 14:52 |
whoami-rajat | just wanted to highlight my patch, been sitting for a while, it's a small one, already has a +2 and is needed for SDK support of default volume types https://review.opendev.org/c/openstack/cinder/+/920308 | 14:53 |
andrewbonney | If anyone has chance, I'd appreciate any thoughts on https://bugs.launchpad.net/cinder/+bug/2070475, just to identify if it looks like a bug or a deployment issue | 14:53 |
whoami-rajat | shouldn't take more than 2 mins for a core to approve :D | 14:53 |
jbernard | zaitcev: i think being targeted to D means that it is subject to the milestones and schedule for the current release | 14:56 |
jbernard | zaitcev: does that answer your question? | 14:56 |
zaitcev | jbernard: So, what is left for me to do? Just ask for more (re-)reviews? | 14:57 |
jbernard | andrewbonney: can you add that to the etherpad? | 14:57 |
jbernard | andrewbonney: https://etherpad.opendev.org/p/cinder-dalmatian-meetings#L101 | 14:57 |
jbernard | andrewbonney: even though it's not a patch, it helps to have those requests all in one place | 14:58 |
andrewbonney | Sure, would it be best under this week or a future one? | 14:58 |
jbernard | zaitcev: beg, barter, pester, whatever you need to do :) | 14:58 |
jbernard | andrewbonney: this one is fine, just so that it has more visibility | 14:59 |
andrewbonney | Ok | 14:59 |
jbernard | andrewbonney: at least for me, i go back through the etherpad and action items later in the week | 15:00 |
jbernard | ok, that's time! | 15:00 |
jbernard | thanks everyone | 15:00 |
jbernard | #endmeeting | 15:00 |
opendevmeet | Meeting ended Wed Jul 3 15:00:49 2024 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:00 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-07-03-14.02.html | 15:00 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-07-03-14.02.txt | 15:00 |
opendevmeet | Log: https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-07-03-14.02.log.html | 15:00 |
whoami-rajat | thanks everyone! | 15:00 |
whoami-rajat | andrewbonney, are you running the online data migrations before doing the db sync? | 15:04 |
whoami-rajat | added a comment to the LP bug | 15:07 |
andrewbonney | Ta. I'll double check and feed anything back there | 15:11 |
andrewbonney | Is the upgrade guide at https://docs.openstack.org/cinder/latest/admin/upgrades.html still accurate in that respect? Migrations appear to be a post-maintenance step | 15:13 |
whoami-rajat | andrewbonney, those migrations were added in the Yoga release so it looks like they were never run | 15:16 |
andrewbonney | Ok. They certainly should have run as they're included as a step in the openstack ansible role, but that gives me something to investigate on a deployment which hasn't been upgraded yet | 15:38 |