14:03:26 <jbernard> #startmeeting cinder
14:03:26 <opendevmeet> Meeting started Wed Nov 19 14:03:26 2025 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:27 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:27 <opendevmeet> The meeting name has been set to 'cinder'
14:03:41 <jbernard> courtesy reminder: jungleboyj rosmaita smcginnis tosky whoami-rajat m5z e0ne geguileo eharney jbernard hemna fabiooliveira yuval tobias-urdin adiare happystacker dosaboy hillpd msaravan sp-bmilanov Luzi sfernand simondodsley  zaubea nileshthathagar flelain wizardbit agalica
14:03:48 <jbernard> #topic roll call
14:03:51 <raghavendrat> hi
14:03:56 <erlon> \o
14:03:57 <jbernard> o/
14:04:01 <agalica> o/
14:04:02 <seunghunlee> hi
14:04:09 <agalica> (finally made it at 6am :D)
14:04:18 <jbernard> ouch :/
14:04:39 <cardoe> o/
14:04:55 <rosmaita> o/
14:04:58 <sfernand> hi
14:06:22 <whoami-rajat> hi
14:07:52 <jbernard> #link https://etherpad.opendev.org/p/cinder-gazpacho-meetings
14:07:58 <jbernard> etherpad for this meeting^
14:08:12 <jbernard> it looks like this will be a quick meeting today
14:08:46 <jbernard> #topic announcements
14:08:53 <jbernard> not much :)
14:08:55 <rosmaita> i got one
14:08:56 <jbernard> we're here:
14:09:02 <jbernard> #link https://releases.openstack.org/gazpacho/schedule.html
14:09:15 <jbernard> in R-19
14:09:28 <jbernard> rosmaita: fire away
14:09:50 <rosmaita> Festival of Reviews on Friday, 1400-1600 UTC
14:09:50 <rosmaita> #link https://meetings.opendev.org/#Cinder_Festival_of_Reviews
14:10:16 <agalica> I'm attending, so if you wanna be one of the cool kids you can join us
14:10:18 <rosmaita> we had decent attendance at the last one 2 weeks ago
14:10:24 <rosmaita> let's try to set a record!
14:10:47 <rosmaita> agalica will be there, what more do you want?
14:10:56 <agalica> haha
14:11:23 <jbernard> i have an appointment that may cause me to miss it, i might try to work on the patch list some either today or tomorrow
14:11:34 <whoami-rajat> I won't be there since I'm on leave, but feel free to leave some patches that require my sight
14:11:56 <jbernard> yes, direct all queries to rajat's inbox please ;)
14:12:38 <lutimura> hey guys! sorry i'm late. i have a couple of changes that i'd love to get some feedback on. should i add them to the etherpad above?
14:12:45 <whoami-rajat> s/all/some :D
14:13:03 <erlon> is it this friday?
14:13:11 <agalica> erlon: yes
14:13:28 <jbernard> lutimura: https://etherpad.opendev.org/p/cinder-gazpacho-reviews
14:13:38 <agalica> jon, you beat me by like 20ms
14:13:58 <jbernard> :)
14:14:10 <erlon> I was counting on the ics from https://meetings.opendev.org/#Cinder_Festival_of_XS_Reviews
14:14:12 <opendevreview> Eric Harney proposed openstack/cinder master: Replace _add_to_threadpool with native threads  https://review.opendev.org/c/openstack/cinder/+/959383
14:14:23 <opendevreview> Eric Harney proposed openstack/cinder master: Add Slow Fake volume driver  https://review.opendev.org/c/openstack/cinder/+/959384
14:14:27 <jbernard> rosmaita: ^ re ics, you updated that, no?
14:14:43 <rosmaita> erlon: that's the old one, been replaced by https://meetings.opendev.org/#Cinder_Festival_of_Reviews
14:14:46 <erlon> it shows me an event on Dec 5
14:15:16 <jbernard> the festival has been modified to include a bit more than just XS reviews
14:15:28 <jbernard> so we changed the name to reflect
14:15:28 <rosmaita> yeah, there was an issue with the ics file, for some reason i couldn't get it to display the meetings in november
14:15:32 <opendevreview> Eric Harney proposed openstack/cinder master: RBD changes for eventlet removal  https://review.opendev.org/c/openstack/cinder/+/959385
14:15:32 <rosmaita> i forgot about that
14:15:57 <rosmaita> but yeah, if you are showing dec 5, i think you have the new ics , since that's the first friday in december
14:16:11 <rosmaita> to remind people: the festival is now 2x a month
14:16:17 <rosmaita> first friday and third friday
14:16:23 <rosmaita> 1400-1600 UTC
14:17:25 <erlon> @rosmaita Do you have a gerrit query with the items we will be reviewing?
14:17:28 <rosmaita> i'll send an email to the mailing list to remind people, because if they're relying on the ics, they won't know about friday's meeting
14:18:00 <rosmaita> erlon: it's on the etherpad that's linked from https://meetings.opendev.org/#Cinder_Festival_of_Reviews
14:18:23 <rosmaita> #link https://etherpad.opendev.org/cinder-festival-of-reviews
14:18:47 <erlon> nice
14:18:49 <erlon> thanks
14:19:22 <rosmaita> the "new" dashboard may not show much stuff, since a priority has to be set for a review to show up in there
14:19:57 <rosmaita> but there's a link to the "old" dashboard if you show up early or something
14:21:17 <jbernard> ill try to set some priorities prior to friday
14:21:50 <jbernard> #topic noonedeadpunk 's cinder-backup patches
14:22:00 <jbernard> noonedeadpunk: ^ do you happen to be around?
14:22:03 <noonedeadpunk> o/
14:22:21 <jbernard> https://review.opendev.org/c/openstack/cinder/+/962909 and https://review.opendev.org/c/openstack/cinder/+/959425
14:22:45 <jbernard> we wanted to include these in the PTG sessions, but scheduling was hard
14:22:47 <noonedeadpunk> (there are also respective specs proposed for them)
14:23:15 <noonedeadpunk> https://review.opendev.org/c/openstack/cinder-specs/+/958838 and https://review.opendev.org/c/openstack/cinder-specs/+/962306
14:23:29 <jbernard> do these simply need reviews, or is there something that requires discussion?
14:23:30 <noonedeadpunk> not sure if they are needed, but I created them just in case :)
14:24:22 <noonedeadpunk> Well, it would be nice to validate whether these make sense at all from a project perspective
14:24:36 <noonedeadpunk> but I think mainly reviews from my standpoint
14:25:11 <noonedeadpunk> as we're approaching spec freeze afaik, and would be nice to have progress this cycle
14:25:43 <rosmaita> noonedeadpunk: i will try to get comments on those specs in the next day or so ... feel free to ping me on friday if you haven't seen any action
14:25:53 <jayaanand> need core approval for support matrix update https://review.opendev.org/c/openstack/cinder/+/964479
14:26:02 <noonedeadpunk> awesome, thanks!
14:27:23 <jbernard> #topic cardoe has some things
14:27:32 <jbernard> cardoe: take it away
14:27:39 <cardoe> heh I'm noisy. :)
14:29:19 <jbernard> this looks like followup from our ptg discussion
14:29:20 <cardoe> Just continuing to work with a NetApp device that I've got. For those that weren't at the PTG, we build KVM and ESXi hypervisors in Ironic and then attach them to a storage backend. Those KVM and ESXi setups represent a traditional OpenStack and then whatever VMware calls their thing.
14:29:58 <cardoe> My stuff is all test envs right now but the real setup is actually the Zuul infrastructure in the future.
14:30:24 <cardoe> Running 2025.1 in one env with a NetApp and there's a handful of issues that I was trying to get backported and chase down.
14:30:38 <cardoe> We're working exclusively with NVMe
14:31:07 <cardoe> The ultimate goal will be to make the Nova/Ironic/Cinder flow work nicely with NVMe.
14:31:53 <cardoe> Outside of the backports the two bugs we're seeing are the slow behavior from https://review.opendev.org/c/openstack/cinder/+/962085
14:31:58 <erlon> So I suppose the flow works well with iSCSI and other drivers today
14:32:17 <cardoe> Nova/Ironic/Cinder works with iSCSI in 1 specific flow today.
14:33:01 <cardoe> TheJulia and I are working on a spec improvement to provide full network storage device function from Ironic
14:33:26 <erlon> hmm, right
14:33:37 <erlon> Are you guys working on a CI job for that?
14:34:08 <TheJulia> We've had one on ironic for *ages*
14:34:28 <erlon> NVMe?
14:34:54 <TheJulia> no, can't do that yet, iscsi only. The issue is the labeling usage pattern as it relates to the data and cinder driver inconsistency in field values
14:35:14 <cardoe> erlon: it works but you need a pile of patches to hardcoded strings
14:35:34 <TheJulia> we basically need cleanup to be performed and consistency reached as it relates to nova's interaction and values.
14:35:39 <cardoe> like netapp wants a string of "NVMe" while other parts of Cinder pass around "nvmeof"
14:36:06 <cardoe> Nova hardcodes a few dict keys it gets from os-brick which don't necessarily line up.
14:36:26 <cardoe> Those rough edges I'm working on.
14:36:38 <cardoe> I'm happy to contribute hardware
14:36:43 <TheJulia> (a list in os-brick of expected keys would sort of be ideal to iterate through, realistically)
14:36:49 <jbernard> cardoe: yes please
14:37:05 <erlon> Right, I'm thinking of the target scenario for a running Nova/Ironic/Cinder job
14:37:06 <rosmaita> https://opendev.org/openstack/cinder/src/branch/master/cinder/common/constants.py#L58
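To illustrate the mismatch cardoe describes (a driver wanting the string "NVMe" while other code passes around "nvmeof"), a hypothetical normalization helper might look like the sketch below. The variant spellings here are illustrative assumptions, not the authoritative lists in cinder/common/constants.py:

```python
# Hypothetical sketch (not Cinder code): map the protocol spellings
# different drivers report onto one canonical key, so comparisons don't
# depend on exact strings like "NVMe" vs "nvmeof".

# Illustrative variant sets; the real lists live in constants.py.
_PROTOCOL_VARIANTS = {
    "iscsi": {"iscsi"},
    "nvmeof": {"nvmeof", "nvme", "nvme-of"},
    "fc": {"fc", "fibre_channel", "fibre channel"},
}

def normalize_protocol(value: str) -> str:
    """Return the canonical key for a driver-reported protocol string."""
    lowered = value.strip().lower()
    for canonical, variants in _PROTOCOL_VARIANTS.items():
        if lowered in variants:
            return canonical
    return lowered  # unknown protocols pass through unchanged

print(normalize_protocol("NVMe"))    # nvmeof
print(normalize_protocol("nvmeof"))  # nvmeof
```

This is roughly the shape of TheJulia's suggestion above: a shared list of expected values that consumers can iterate, instead of hardcoded strings scattered across services.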
14:37:35 <cardoe> Out of all software I deal with Zuul is the most confusing beast so I'll need help.
14:37:41 <TheJulia> Except, other services can't be really expected to import cinder as a library
14:38:54 * TheJulia lets cardoe talk and stops interjecting
14:39:04 <cardoe> no you're right.
14:39:20 <TheJulia> I'm just pointing to that sharp annoying bit that threads hang on ;)
14:39:23 <cardoe> All I'm really trying to ask is on the performance issue and the os_type issue
14:39:52 <cardoe> So performance-wise, the patch that landed works if you've got cinder-volume configured with 1 SVM (which is how the NetApp driver is written) and you've got multiple pools on that SVM.
14:40:21 <cardoe> But if you then add another backend (which per the Cinder docs should be another SVM), performance tanks.
14:40:50 <whoami-rajat> rosmaita, isn't that the storage_protocol and not the driver_volume_type returned to os-brick?
14:40:51 <cardoe> You add a 3rd backend to the same NetApp cluster? You cannot do a get_volume_stats() without getting Cinder throwing a warning that it took too long.
14:41:51 <cardoe> So really wanting to understand the best way to work at improving this. Cause I can harass NetApp through a support channel since they want us to buy their gear.
14:42:01 <cardoe> Or if I should be trying to engage via bugs against Cinder.
14:42:06 <whoami-rajat> cardoe, netapp improved that to some extent with this https://review.opendev.org/c/openstack/cinder/+/964191
14:42:50 <cardoe> Yeah that works fine with 1 SVM and multiple pools.
14:42:55 <whoami-rajat> looks like some of the performance and dedup stats weren't useful for scheduling, so they're now cached and only refreshed on a long configurable interval
14:42:59 <cardoe> It doesn't do much as soon as you define 2 backends.
14:43:13 <cardoe> I've already got that patch pulled in.
14:43:21 <whoami-rajat> ack
14:43:39 <cardoe> The other issue I've run into is the os_type. https://bugs.launchpad.net/cinder/+bug/2131104
14:44:04 <cardoe> volume attachment create takes an "os-type" field but the NetApp driver doesn't respect that.
14:44:16 <cardoe> It actually only utilizes config flags that aren't documented.
14:44:38 <cardoe> But the issue is that os-type needs to be passed to the NetApp on the volume creation AND on the attachment
14:45:02 <cardoe> I don't see a way in Cinder to pass along os-type on the volume create. Should that be something that's part of the volume type?
14:45:20 <cardoe> If so I'm happy to work on that patch to make it do that.
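As a hedged sketch of the idea cardoe is floating (carrying os-type on the volume type so the driver sees it at create time as well as attach time), the usual Cinder mechanism would be a volume-type extra spec. The extra-spec key "netapp:os_type" below is an assumption for illustration, not an existing Cinder or NetApp interface:

```python
# Hypothetical sketch: a driver-side helper that reads an os-type from a
# volume type's extra specs at volume-create time. The key
# "netapp:os_type" and the default "linux" are illustrative assumptions.
def os_type_from_volume_type(extra_specs: dict, default: str = "linux") -> str:
    """Pick the os-type a driver would pass to the backend on create."""
    return extra_specs.get("netapp:os_type", default)

print(os_type_from_volume_type({"netapp:os_type": "vmware"}))  # vmware
print(os_type_from_volume_type({}))                            # linux
```

The attractive part of the extra-spec route is that the same value is available on both the create and attach paths, which is exactly the consistency the bug above is asking for.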
14:45:55 <cardoe> As far as the volume_stats performance issue, I'm just looking for some docs or details on WHAT cinder really wants from the volume_stats. It seems each driver provides some different details.
14:46:25 <cardoe> Sorry I provided a lot of context. I'm not looking to throw a bunch of "here's issues Cinder Community! Go fix it."
14:46:52 <cardoe> What I'm looking for is some direction on the path we should take and I'll task developer resources to contribute the changes.
14:47:05 <cardoe> Unless the better approach is to raise this via a NetApp support channel.
14:47:08 <cardoe> That's what I'm asking.
14:47:14 <cardoe> </end>
14:47:38 <rosmaita> cardoe: https://opendev.org/openstack/cinder/src/branch/master/cinder/interface/volume_driver.py#L68-L172
14:47:47 <rosmaita> (about volume_stats)
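The interface module rosmaita links documents what drivers are expected to return from get_volume_stats(). A minimal illustrative shape, with made-up values (the full key set and semantics are in cinder/interface/volume_driver.py, and drivers may report many more keys):

```python
# Illustrative shape of a get_volume_stats() result; all values here are
# invented for the example. Per-pool capacity keys are what the
# scheduler primarily consumes.
def get_volume_stats(refresh=False):
    return {
        "volume_backend_name": "netapp-backend-1",  # from cinder.conf
        "vendor_name": "NetApp",
        "driver_version": "1.0.0",
        "storage_protocol": "NVMe-oF",
        "pools": [
            {
                "pool_name": "pool_a",
                "total_capacity_gb": 1024,
                "free_capacity_gb": 512,
                "reserved_percentage": 0,
                "thin_provisioning_support": True,
            },
        ],
    }

stats = get_volume_stats()
print(stats["pools"][0]["free_capacity_gb"])
```

This also hints at why each driver "provides some different details": beyond the core capacity fields, most of the dict is optional capability reporting.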
14:49:28 <jbernard> i need to (am willing to) look into the volume type logic to offer some direction on os-type
14:49:57 <cardoe> rosmaita: that is helpful thank you. I was trying to find it in the docs.
14:50:45 <rosmaita> yeah, we've been trying to consolidate the info in one place, but no one knows which is the one place
14:51:15 <agalica> sounds like a standard consolidation to me
14:52:21 <jbernard> #action jbernard os-type on volume create
14:52:27 <jbernard> ok to move on?
14:52:32 <cardoe> yes thank you.
14:53:00 <cardoe> Oh. the backports... what's the best way to add things to a queue that should get an eyeball for consideration?
14:53:59 <jbernard> cardoe: same etherpad, that seems reasonable to me
14:54:07 <jbernard> https://etherpad.opendev.org/p/cinder-gazpacho-reviews
14:54:19 <cardoe> will do. thank you. sorry for the long windedness.
14:55:08 <jbernard> #topic erlon / https://review.opendev.org/c/openstack/os-brick/+/955379
14:55:33 <erlon> hey
14:56:27 <erlon> We discussed this one on the ptg as well. I just wanted to bring some attention to it, since we don't want to miss the release this time.
14:56:41 <erlon> its in the etherpad already
14:57:09 <erlon> @whoami-rajat if you can add it to your personal list as well, I'd appreciate it
14:57:32 <erlon> but other eyes are also welcome
14:59:42 <jbernard> ok, will do
14:59:59 <jbernard> lutimura has added some review requests too, ill move those to the other pad too
15:00:16 <jbernard> we're at time, last call for things
15:00:20 <raghavendrat> In Dalmatian zuul is failing. Mainly the test cinder-grenade-mn-sub-volbak (voting) is failing.
15:00:35 <raghavendrat> Brian had a look & suggested that it could be a setuptools problem.
15:00:46 <raghavendrat> It would be great if i can get any additional pointers to resolve this. Thanks
15:00:55 <agalica> Zuul has been having issues for 2 weeks now - lots of timeouts and stuff
15:01:04 <raghavendrat> backport: https://review.opendev.org/c/openstack/cinder/+/964558
15:01:13 <raghavendrat> Error traceback:
15:01:24 <raghavendrat> AttributeError: module 'setuptools.build_meta' has no attribute 'get_requires_for_build_editable'. Did you mean: 'get_requires_for_build_sdist'?
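A generic diagnostic sketch for the traceback above: get_requires_for_build_editable is one of the PEP 660 editable-install hooks, which only newer setuptools exposes, so an old setuptools resolved on the job's node would reproduce exactly this AttributeError. Checking for the hook directly is a quick first step (this is a general Python check, not tied to the grenade job):

```python
# Check whether the installed setuptools exposes the PEP 660
# editable-build hook whose absence causes the AttributeError above.
import importlib

try:
    build_meta = importlib.import_module("setuptools.build_meta")
    has_editable_hook = hasattr(build_meta, "get_requires_for_build_editable")
except ImportError:
    has_editable_hook = False  # setuptools not installed at all

print("PEP 660 editable-build hooks available:", has_editable_hook)
```

If this prints False on the failing node, comparing the setuptools version there against the working branches would be the next thing to look at.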
15:03:05 <rosmaita> i think that was happening in master for a while a few weeks back, but i can't remember if we fixed it or if it was a ubuntu package issue
15:03:15 <rosmaita> i think it was a packaging thing
15:03:30 <rosmaita> which makes it perplexing why it is happening in 2024.2 now
15:04:06 <raghavendrat> ok
15:04:28 <rosmaita> raghavendrat: it's only on that cinder-grenade-mn-sub-volbak job, is that right?
15:04:38 <rosmaita> https://zuul.opendev.org/t/openstack/build/45c3350dc5114d2bb0ac32d1a70f1eb5
15:05:06 <raghavendrat> other two jobs are failing. but those are non voting
15:05:20 <raghavendrat> s/jobs/tests/
15:05:40 <rosmaita> ok, that reminds me ... Sean put up a patch to remove non-voting jobs from the stable branches, i think we should do that
15:05:59 <rosmaita> here's the job history: https://zuul.opendev.org/t/openstack/builds?job_name=cinder-grenade-mn-sub-volbak&project=openstack%2Fcinder&branch=stable%2F*&skip=0
15:06:04 <rosmaita> (on the stable branches)
15:06:37 <rosmaita> looks like only stable/2024.2 is having the issue
15:07:07 <raghavendrat> hhmm
15:08:41 <rosmaita> the job definition doesn't look different between branches: https://zuul.opendev.org/t/openstack/job/cinder-grenade-mn-sub-volbak
15:10:51 <jbernard> ok, i think we can wrap, is that cool?
15:10:51 <rosmaita> the regular grenade job looks OK in stable/2024.2: https://zuul.opendev.org/t/openstack/builds?job_name=grenade&project=openstack%2Fcinder&branch=stable%2F2024.2&skip=0
15:11:42 <rosmaita> yeah, i am just thinking out loud here
15:11:50 <rosmaita> (i thought the meeting was over)
15:12:00 <opendevreview> Zachary Mark Raines proposed openstack/cinder master: Add 512e/4k disk geometry configuration  https://review.opendev.org/c/openstack/cinder/+/658283
15:12:02 <raghavendrat> thank you Brian
15:13:33 <rosmaita> ok, maybe here's something to try: the "regular" job runs on ubuntu jammy
15:13:46 <rosmaita> https://zuul.opendev.org/t/openstack/job/grenade
15:16:10 <rosmaita> bummer, looks like the failing job is also running on jammy: https://zuul.opendev.org/t/openstack/build/8aaf5ab45a61491ab1dd74d9d7f571ae/log/job-output.txt#52
15:16:42 <jbernard> ok,
15:16:46 <jbernard> #endmeeting