14:03:26 #startmeeting cinder
14:03:26 Meeting started Wed Nov 19 14:03:26 2025 UTC and is due to finish in 60 minutes. The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:27 The meeting name has been set to 'cinder'
14:03:41 courtesy reminder: jungleboyj rosmaita smcginnis tosky whoami-rajat m5z e0ne geguileo eharney jbernard hemna fabiooliveira yuval tobias-urdin adiare happystacker dosaboy hillpd msaravan sp-bmilanov Luzi sfernand simondodsley zaubea nileshthathagar flelain wizardbit agalica
14:03:48 #topic roll call
14:03:51 hi
14:03:56 \o
14:03:57 o/
14:04:01 o/
14:04:02 hi
14:04:09 (finally made it at 6am :D)
14:04:18 ouch :/
14:04:39 o/
14:04:55 o/
14:04:58 hi
14:06:22 hi
14:07:52 #link https://etherpad.opendev.org/p/cinder-gazpacho-meetings
14:07:58 etherpad for this meeting^
14:08:12 it looks like this will be a quick meeting today
14:08:46 #topic announcements
14:08:53 not much :)
14:08:55 i got one
14:08:56 we're here:
14:09:02 #link https://releases.openstack.org/gazpacho/schedule.html
14:09:15 in R-19
14:09:28 rosmaita: fire away
14:09:50 Festival of Reviews on Friday, 1400-1600 UTC
14:09:50 #link https://meetings.opendev.org/#Cinder_Festival_of_Reviews
14:10:16 I'm attending, so if you wanna be one of the cool kids you can join us
14:10:18 we had decent attendance at the last one 2 weeks ago
14:10:24 let's try to set a record!
14:10:47 agalica will be there, what more do you want?
14:10:56 haha
14:11:23 i have an appointment that may cause me to miss it, i might try to work on the patch list some either today or tomorrow
14:11:34 I won't be there since I'm on leave, but feel free to leave some patches that require my attention
14:11:56 yes, direct all queries to rajat's inbox please ;)
14:12:38 hey guys! sorry i'm late. i have a couple of changes that i'd love to get some feedback on. should i add them to the etherpad above?
14:12:45 s/all/some :D
14:13:03 is it this friday?
14:13:11 erlon: yes
14:13:28 lutimura: https://etherpad.opendev.org/p/cinder-gazpacho-reviews
14:13:38 jon, you beat me by like 20ms
14:13:58 :)
14:14:10 I was counting on the ics from https://meetings.opendev.org/#Cinder_Festival_of_XS_Reviews
14:14:12 Eric Harney proposed openstack/cinder master: Replace _add_to_threadpool with native threads https://review.opendev.org/c/openstack/cinder/+/959383
14:14:23 Eric Harney proposed openstack/cinder master: Add Slow Fake volume driver https://review.opendev.org/c/openstack/cinder/+/959384
14:14:27 rosmaita: ^ re ics, you updated that, no?
14:14:43 erlon: that's the old one, been replaced by https://meetings.opendev.org/#Cinder_Festival_of_Reviews
14:14:46 it shows me an event on Dec 5
14:15:16 the festival has been modified to include a bit more than just XS reviews
14:15:28 so we changed the name to reflect that
14:15:28 yeah, there was an issue with the ics file, for some reason i couldn't get it to display the meetings in november
14:15:32 Eric Harney proposed openstack/cinder master: RBD changes for eventlet removal https://review.opendev.org/c/openstack/cinder/+/959385
14:15:32 i forgot about that
14:15:57 but yeah, if you are showing dec 5, i think you have the new ics, since that's the first friday in december
14:16:11 to remind people: the festival is now 2x a month
14:16:17 first friday and third friday
14:16:23 1400-1600 UTC
14:17:25 @rosmaita Do you have a gerrit query with the items we will be reviewing?
14:17:28 i'll send an email to the mailing list to remind people, because if they're relying on the ics, they won't know about friday's meeting
14:18:00 erlon: it's on the etherpad that's linked from https://meetings.opendev.org/#Cinder_Festival_of_Reviews
14:18:23 #link https://etherpad.opendev.org/cinder-festival-of-reviews
14:18:47 nice
14:18:49 thanks
14:19:22 the "new" dashboard may not show much stuff, since a priority has to be set for a review to show up in there
14:19:57 but there's a link to the "old" dashboard if you show up early or something
14:21:17 ill try to set some priorities prior to friday
14:21:50 #topic noonedeadpunk's cinder-backup patches
14:22:00 noonedeadpunk: ^ do you happen to be around?
14:22:03 o/
14:22:21 https://review.opendev.org/c/openstack/cinder/+/962909 and https://review.opendev.org/c/openstack/cinder/+/959425
14:22:45 we wanted to include these in the PTG sessions, but scheduling was hard
14:22:47 (there are also respective specs proposed for them)
14:23:15 https://review.opendev.org/c/openstack/cinder-specs/+/958838 and https://review.opendev.org/c/openstack/cinder-specs/+/962306
14:23:29 do these simply need reviews, or is there something that requires discussion?
14:23:30 not sure if they are needed, but I created them just in case :)
14:24:22 Well, it would be nice to validate whether these make sense at all from a project perspective
14:24:36 but I think mainly reviews from my standpoint
14:25:11 as we're approaching spec freeze afaik, and it would be nice to have progress this cycle
14:25:43 noonedeadpunk: i will try to get comments on those specs in the next day or so ... feel free to ping me on friday if you haven't seen any action
14:25:53 need core approval for support matrix update https://review.opendev.org/c/openstack/cinder/+/964479
14:26:02 awesome, thanks!
14:27:23 #topic cardoe has some things
14:27:32 cardoe: take it away
14:27:39 heh I'm noisy. :)
14:29:19 this looks like followup from our ptg discussion
14:29:20 Just continuing to work with a NetApp device that I've got. For those that weren't at the PTG, we build KVM and ESXi hypervisors in Ironic and then attach them to a storage backend. Those KVM and ESXi setups represent a traditional OpenStack and then whatever VMware calls their thing.
14:29:58 My stuff is all test envs right now but the real setup is actually the Zuul infrastructure in the future.
14:30:24 Running 2025.1 in one env with a NetApp and there's a handful of issues that I was trying to get backported and chase down.
14:30:38 We're working exclusively with NVMe
14:31:07 The ultimate goal will be to make the Nova/Ironic/Cinder flow work nicely with NVMe.
14:31:53 Outside of the backports, the two bugs we're seeing are the slow behavior from https://review.opendev.org/c/openstack/cinder/+/962085
14:31:58 So I suppose the flow works well with iSCSI and other drivers today
14:32:17 Nova/Ironic/Cinder works with iSCSI in 1 specific flow today.
14:33:01 TheJulia and I are working on a spec improvement to provide full network storage device function from Ironic
14:33:26 hmm, right
14:33:37 Are you guys working on a CI job for that?
14:34:08 We've had one on ironic for *ages*
14:34:28 NVMe?
14:34:54 no, can't do that yet, iscsi only.
The issue is the labeling usage pattern as it relates to the data, and cinder driver inconsistency in field values
14:35:14 erlon: it works but you need a pile of patches to hardcoded strings
14:35:34 we basically need cleanup to be performed and consistency reached as it relates to nova's interaction and values.
14:35:39 like netapp wants a string of "NVMe" while other parts of Cinder pass around "nvmeof"
14:36:06 Nova hardcodes a few dict keys it gets from os-brick which don't necessarily line up.
14:36:26 Those rough edges I'm working on.
14:36:38 I'm happy to contribute hardware
14:36:43 (a list in os-brick of expected keys would sort of be ideal to iterate through, realistically)
14:36:49 cardoe: yes please
14:37:05 Right, I'm thinking of the target scenario for a running Nova/Ironic/Cinder job
14:37:06 https://opendev.org/openstack/cinder/src/branch/master/cinder/common/constants.py#L58
14:37:35 Out of all software I deal with, Zuul is the most confusing beast, so I'll need help.
14:37:41 Except other services can't really be expected to import cinder as a library
14:38:54 * TheJulia lets cardoe talk and stops interjecting
14:39:04 no you're right.
14:39:20 I'm just pointing to that sharp annoying bit that threads hang on ;)
14:39:23 All I'm really trying to ask about is the performance issue and the os_type issue
14:39:52 So performance wise, the patch landed if you've got cinder-volume configured with 1 SVM (which is how the NetApp driver is written) and you've got multiple pools on that SVM.
14:40:21 But if you then add another backend (which per the Cinder docs should be another SVM), performance tanks.
14:40:50 rosmaita, isn't that the storage_protocol and not the driver_volume_type returned to os-brick?
14:40:51 You add a 3rd backend to the same NetApp cluster? You cannot do a get_volume_stats() without Cinder throwing a warning that it took too long.
14:41:51 So really wanting to understand the best way to work at improving this. Cause I can harass NetApp through a support channel since they want us to buy their gear.
14:42:01 Or if I should be trying to engage via bugs against Cinder.
14:42:06 cardoe, netapp improved that to some extent with this https://review.opendev.org/c/openstack/cinder/+/964191
14:42:50 Yeah that works fine with 1 SVM and multiple pools.
14:42:55 looks like some of the performance and dedup stats weren't useful for scheduling, so they're cached now for a long, configurable interval
14:42:59 It doesn't do much as soon as you define 2 backends.
14:43:13 I've already got that patch pulled in.
14:43:21 ack
14:43:39 The other issue I've run into is the os_type. https://bugs.launchpad.net/cinder/+bug/2131104
14:44:04 volume attachment create takes an "os-type" field but the NetApp driver doesn't respect that.
14:44:16 It actually only utilizes config flags that aren't documented.
14:44:38 But the issue is that os-type needs to be passed to the NetApp on the volume creation AND on the attachment
14:45:02 I don't see a way in Cinder to pass along os-type on the volume create. Should that be something that's part of the volume type?
14:45:20 If so I'm happy to work on that patch to make it do that.
14:45:55 As far as the volume_stats performance issue, I'm just looking for some docs or details on WHAT cinder really wants from the volume_stats. It seems each driver provides some different details.
14:46:25 Sorry I provided a lot of context. I'm not looking to throw a bunch of "here's issues Cinder Community! Go fix it."
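An illustrative aside on the volume_stats question above: hedging heavily, since the exact set of fields varies per driver and the interface file linked a few lines below is the authoritative reference, the commonly reported shape of the stats dict looks roughly like this sketch (the backend name and numbers are made up for illustration):

```python
# Illustrative sketch only (not copied from any particular driver): the
# general shape of the stats dict a driver's get_volume_stats() returns,
# which the scheduler's filters and weighers consume.
example_stats = {
    "volume_backend_name": "netapp-nvme",   # hypothetical backend name
    "vendor_name": "NetApp",
    "driver_version": "1.0.0",
    # Protocol strings vary across drivers ("NVMe-oF", "nvmeof", "NVMe"),
    # which is part of the inconsistency discussed earlier in the log.
    "storage_protocol": "NVMe-oF",
    "pools": [
        {
            "pool_name": "pool-a",
            "total_capacity_gb": 1024,
            "free_capacity_gb": 512,
            "reserved_percentage": 0,
            "provisioned_capacity_gb": 300,
            "max_over_subscription_ratio": 20.0,
            "thin_provisioning_support": True,
            "thick_provisioning_support": False,
            "QoS_support": False,
            "multiattach": True,
        },
    ],
}
```

The per-pool capacity and capability fields are what scheduling mostly keys off, which is also why slow stats collection across multiple backends/SVMs (the performance issue described above) hurts.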
14:46:52 What I'm looking for is some direction on the path we should take, and I'll task developer resources to contribute the changes.
14:47:05 Unless the better approach is to raise this via a NetApp support channel.
14:47:08 That's what I'm asking.
14:47:14
14:47:38 cardoe: https://opendev.org/openstack/cinder/src/branch/master/cinder/interface/volume_driver.py#L68-L172
14:47:47 (about volume_stats)
14:49:28 i need to (am willing to) look into the volume type logic to offer some direction on os-type
14:49:57 rosmaita: that is helpful, thank you. I was trying to find it in the docs.
14:50:45 yeah, we've been trying to consolidate the info in one place, but no one knows which is the one place
14:51:15 sounds like a standard consolidation to me
14:52:21 #action jbernard os-type on volume create
14:52:27 ok to move on?
14:52:32 yes thank you.
14:53:00 Oh, the backports... what's the best way to add things to a queue that should get an eyeball for consideration?
14:53:59 cardoe: same etherpad, that seems reasonable to me
14:54:07 https://etherpad.opendev.org/p/cinder-gazpacho-reviews
14:54:19 will do. thank you. sorry for the long-windedness.
14:55:08 #topic erlon / https://review.opendev.org/c/openstack/os-brick/+/955379
14:55:33 hey
14:56:27 We discussed this one at the PTG as well. I just wanted to bring some attention to it, since we don't want to miss the release this time.
14:56:41 it's in the etherpad already
14:57:09 @whoami-rajat if you can add it to your personal list as well, I'd appreciate it
14:57:32 but other eyes are also welcome
14:59:42 ok, will do
14:59:59 lutimura has added some review requests too, ill move those to the other pad too
15:00:16 we're at time, last call for things
15:00:20 In Dalmatian zuul is failing. Mainly the test cinder-grenade-mn-sub-volbak (voting) is failing.
15:00:35 Brian had a look & suggested that it could be a setuptools problem.
15:00:46 It would be great if i can get any additional pointers to resolve this. Thanks
15:00:55 Zuul has been having issues for 2 weeks now - lots of timeouts and stuff
15:01:04 backport: https://review.opendev.org/c/openstack/cinder/+/964558
15:01:13 Error traceback:
15:01:24 AttributeError: module 'setuptools.build_meta' has no attribute 'get_requires_for_build_editable'. Did you mean: 'get_requires_for_build_sdist'?
15:03:05 i think that was happening in master for a while a few weeks back, but i can't remember if we fixed it or if it was an ubuntu package issue
15:03:15 i think it was a packaging thing
15:03:30 which makes it perplexing why it is happening in 2024.2 now
15:04:06 ok
15:04:28 raghavendrat: it's only the cinder-grenade-mn-sub-volbak job, is that right?
15:04:38 https://zuul.opendev.org/t/openstack/build/45c3350dc5114d2bb0ac32d1a70f1eb5
15:05:06 other two jobs are failing, but those are non-voting
15:05:20 s/jobs/tests/
15:05:40 ok, that reminds me ... Sean put up a patch to remove non-voting jobs from the stable branches, i think we should do that
15:05:59 here's the job history: https://zuul.opendev.org/t/openstack/builds?job_name=cinder-grenade-mn-sub-volbak&project=openstack%2Fcinder&branch=stable%2F*&skip=0
15:06:04 (on the stable branches)
15:06:37 looks like only stable/2024.2 is having the issue
15:07:07 hhmm
15:08:41 the job definition doesn't look different between branches: https://zuul.opendev.org/t/openstack/job/cinder-grenade-mn-sub-volbak
15:10:51 ok, i think we can wrap, is that cool?
15:10:51 the regular grenade job looks OK in stable/2024.2: https://zuul.opendev.org/t/openstack/builds?job_name=grenade&project=openstack%2Fcinder&branch=stable%2F2024.2&skip=0
15:11:42 yeah, i am just thinking out loud here
15:11:50 (i thought the meeting was over)
15:12:00 Zachary Mark Raines proposed openstack/cinder master: Add 512e/4k disk geometry configuration https://review.opendev.org/c/openstack/cinder/+/658283
15:12:02 thank you Brian
15:13:33 ok, maybe here's something to try: the "regular" job runs on ubuntu jammy
15:13:46 https://zuul.opendev.org/t/openstack/job/grenade
15:16:10 bummer, looks like the failing job is also running on jammy: https://zuul.opendev.org/t/openstack/build/8aaf5ab45a61491ab1dd74d9d7f571ae/log/job-output.txt#52
15:16:42 o,
15:16:46 #endmeeting
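An illustrative aside on the setuptools traceback from the 2024.2 grenade discussion above: get_requires_for_build_editable is one of the PEP 660 editable-install hooks, which (as far as I recall) only appears in relatively recent setuptools releases, so one hedged guess is that the failing job ends up with an older or pinned setuptools in whatever environment performs the editable install. A minimal check along these lines could confirm or rule that out; this is a generic diagnostic sketch, not something run during the meeting:

```python
# Generic diagnostic sketch: run inside the environment the failing job uses
# for editable installs. If the PEP 660 hook is missing, the setuptools there
# predates editable-install support, matching the AttributeError in the log.
from importlib.metadata import version
from setuptools import build_meta

print("setuptools version:", version("setuptools"))
print("has get_requires_for_build_editable:",
      hasattr(build_meta, "get_requires_for_build_editable"))
```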