14:00:31 <whoami-rajat> #startmeeting cinder
14:00:31 <opendevmeet> Meeting started Wed Mar  1 14:00:31 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:31 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:31 <opendevmeet> The meeting name has been set to 'cinder'
14:00:36 <whoami-rajat> #topic roll call
14:00:49 <thiagoalvoravel> o/
14:00:50 <enriquetaso> hello
14:01:39 <tosky> o/
14:01:51 <MatheusAndrade[m]> o/
14:01:57 <rosmaita> o/
14:02:13 <HelenaDantas[m]> o/
14:02:18 <lucasmoliveira059> o/
14:02:28 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:03:42 <whoami-rajat> hello everyone
14:03:49 <whoami-rajat> let's get started
14:03:54 <whoami-rajat> #topic announcements
14:03:59 <whoami-rajat> RC1 this week
14:04:04 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032438.html
14:04:10 <whoami-rajat> #link https://review.opendev.org/c/openstack/releases/+/875397
14:04:36 <whoami-rajat> I've created an etherpad to prioritize patches for RC1 and RC2, please add your patches here
14:04:41 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-antelope-fixes-rc
14:05:14 <whoami-rajat> during RC1, stable/2023.1 will be cut and master will become 2023.2
14:05:34 <whoami-rajat> any changes merged after RC1 need to be backported to stable/2023.1 to be included in the 2023.1 release
14:05:43 <felipe_rodrigues> o/
14:06:28 <whoami-rajat> in summary, please add the patches (bug fixes) that you think are important and need to go into the 2023.1 Antelope release
14:06:48 <whoami-rajat> next, Vancouver PTG attendance
14:06:55 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032478.html
14:07:29 <whoami-rajat> this is a follow-up on our discussion last week regarding people interested in attending the in-person PTG in Vancouver
14:07:44 <whoami-rajat> NetApp and Pure Storage teams have shown their interest
14:08:03 <whoami-rajat> if you're planning to attend, please reply to the email saying so, so that we have a record of who is planning to attend
14:08:53 <whoami-rajat> next, TC + PTL results are out
14:09:00 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032442.html
14:09:14 <whoami-rajat> congratulations rosmaita on being re-elected to the TC for another term!
14:09:28 <rosmaita> thanks!
14:09:41 <jungleboyj> ++  Congrats!
14:10:12 <rosmaita> will most likely be my last time, so if anyone is interested or curious about serving on the TC, reach out and I'll be happy to fill you in
14:10:32 <enriquetaso> congrats!
14:11:20 <whoami-rajat> and just stating it formally in case people didn't notice, I will be the PTL again for 2023.2 (Bobcat)
14:11:23 <whoami-rajat> rosmaita++
14:11:32 <enriquetaso> whoami-rajat++
14:11:36 <rosmaita> whoami-rajat: congratulations!
14:12:10 <jungleboyj> Congratulations and thank you!
14:12:46 <whoami-rajat> thanks!
14:12:54 <whoami-rajat> and the last announcement for today, we have a new TC chair - Kristi Nikolla
14:13:00 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032499.html
14:13:11 <whoami-rajat> gmann stepped down as TC chair after serving for 2 years
14:13:24 <whoami-rajat> so gmann++ for serving as TC chair!
14:13:45 <jungleboyj> ++
14:14:24 <whoami-rajat> that's all for announcements
14:14:53 <whoami-rajat> would anyone like to announce something? sometimes I miss news
14:15:53 <whoami-rajat> guess not
14:15:59 <whoami-rajat> let's move to topics then
14:16:03 <whoami-rajat> #topic Multiattach issue
14:16:08 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032389.html
14:16:27 <whoami-rajat> this was initially raised on the ML so i will briefly describe the problem
14:16:47 <whoami-rajat> we used to support creating multiattach volumes by passing multiattach=True in the volume create request body
14:17:30 <whoami-rajat> this was discouraged because users might accidentally create MA volumes without setting up a cluster-aware FS
14:17:35 <whoami-rajat> which can eventually lead to data loss
14:17:46 <whoami-rajat> we switched to using multiattach volume types to create MA volumes
14:17:58 <whoami-rajat> since volume types are created by admin users
14:18:25 <whoami-rajat> the deprecation was done in Queens but we kept the old behavior for compatibility
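For context, the volume-type based way of requesting a multiattach volume looks roughly like the sketch below (a minimal python-cinderclient example; the endpoint, credentials, type name, and microversion are illustrative assumptions, not taken from the meeting):

```python
# Minimal sketch, assuming python-cinderclient and keystoneauth1 are installed
# and the caller has admin rights to create volume types. All names,
# credentials, and the microversion below are hypothetical.
from keystoneauth1 import loading, session
from cinderclient import client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',   # hypothetical endpoint
    username='admin', password='secret',    # hypothetical credentials
    project_name='admin',
    user_domain_id='default', project_domain_id='default')
cinder = client.Client('3.50', session=session.Session(auth=auth))

# Admin creates a volume type carrying the multiattach extra spec.
ma_type = cinder.volume_types.create('multiattach')
ma_type.set_keys({'multiattach': '<is> True'})

# Users then get a multiattach volume by choosing that type, instead of
# passing multiattach=True in the volume-create request body.
vol = cinder.volumes.create(size=10, volume_type=ma_type.id, name='ma-vol')
```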
14:18:27 <enriquetaso> I think there's an upstream bug for this:
14:18:29 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/2008259
14:18:39 <whoami-rajat> thanks enriquetaso
14:18:57 <whoami-rajat> so I proposed 2 changes, one for cinder (to remove the support) and one for tempest (to update the test to use the new way)
14:19:41 <enriquetaso> sounds good
14:19:53 <whoami-rajat> this morning i had a discussion with gmann about this; his concern is that this change breaks backward compatibility and should be done with a microversion (MV)
14:20:11 <whoami-rajat> my argument was we don't want to keep the old behavior since that's a bug
14:20:24 <whoami-rajat> i need to find logs of that discussion (i will look for that)
14:20:29 <whoami-rajat> but you can find the details in the tempest patch
14:20:31 <whoami-rajat> #link https://review.opendev.org/c/openstack/tempest/+/875372
14:20:38 <whoami-rajat> and also on the ML
14:20:39 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-March/032502.html
14:21:17 <whoami-rajat> found the discussion
14:21:18 <whoami-rajat> #link https://meetings.opendev.org/irclogs/%23openstack-qa/%23openstack-qa.2023-03-01.log.html#t2023-03-01T01:25:48
14:23:30 <whoami-rajat> so i wanted the cinder team's input on what the way forward should be: 1) remove the compat code altogether, or 2) keep the compat code and handle it with a new microversion
14:24:01 <rosmaita> i am against the microversioning
14:24:14 <rosmaita> this is a data loss issue, so we don't want people doing it
14:24:22 <whoami-rajat> in case of 2) users will still be able to create multiattach volumes the old way (which we clearly don't want)
14:24:54 <whoami-rajat> ack, I have the same thought
14:24:58 <jungleboyj> I would agree with rosmaita. If it is a data loss issue and shouldn't have been possible in the first place, then it should be fixed.
14:25:12 <jungleboyj> Not MV'ed.
14:25:21 <rosmaita> from the API ref, it looks like the 'multiattach' in the request body has been there since 3.0 ?  https://docs.openstack.org/api-ref/block-storage/v3/?expanded=create-a-volume-detail#volumes-volumes
14:25:48 <whoami-rajat> yeah, it was carried over from v2 API
14:26:17 <whoami-rajat> and we deprecated it with a MV 3.50 (introducing the volume type way)
14:26:23 <rosmaita> text says "Note that support for multiattach volumes depends on the volume type being used."
14:26:40 <rosmaita> so if you include --multiattach and the VT doesn't allow it, what happens?
14:27:19 <whoami-rajat> i think it takes either of those values; if "multiattach" is set in the request the volume type doesn't matter
14:27:34 <whoami-rajat> multiattach or extra_specs.get("multiattach"...
14:27:59 <whoami-rajat> which is again an issue that you rightly pointed out: we're not honoring the volume type
14:28:00 <rosmaita> well, we could say that's a bug, and change it to keep multiattach in the request body, but reject the request if the VT doesn't allow it?
14:28:15 <rosmaita> then no API change, but correct behavior?
14:28:41 <rosmaita> would be kind of stupid, but would be backward compatible
14:29:27 <whoami-rajat> hmm, but it would still break volume creation for people passing multiattach=True
14:29:38 <rosmaita> only sometimes
14:29:44 <rosmaita> :D
14:29:50 <whoami-rajat> I don't think users use both ways simultaneously
14:30:11 <whoami-rajat> if they start using the correct volume type, multiattach automatically becomes redundant
14:30:14 <whoami-rajat> but i see your point
14:30:27 <rosmaita> me neither, probably better to just say, this is unsafe, we no longer allow it
14:30:40 <whoami-rajat> we're keeping the API request consistent but changing its behavior on the backend (is that an acceptable change?)
14:31:00 <rosmaita> well, if it doesn't break tempest, no one will notice
14:31:43 <whoami-rajat> tempest will break since they create the volume with only multiattach=True
14:32:09 <whoami-rajat> not providing a volume type at all (so it would take the default type, which is not MA)
14:32:22 <rosmaita> gotcha
14:34:24 <whoami-rajat> so I think the consensus is we don't want to go the microversion way right?
14:34:46 <rosmaita> what's supposed to happen if you have "multiattach": false on a VT that supports multiattach?
14:35:17 <rosmaita> or maybe, what does happen currently?
14:35:36 <whoami-rajat> it has an OR operator, so it takes either of those values
14:36:08 <whoami-rajat> here https://review.opendev.org/c/openstack/cinder/+/874865/2/cinder/volume/flows/api/create_volume.py#b499
14:36:28 <rosmaita> thanks
14:36:50 <whoami-rajat> https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L496-L500
14:38:39 <rosmaita> ok, so currently, if you say --multiattach=False in the request, but the VT allows it, you get multiattach ... am i right about that?
14:38:58 <whoami-rajat> rosmaita, yes correct
14:38:58 <rosmaita> line 500 in https://review.opendev.org/c/openstack/cinder/+/874865/2/cinder/volume/flows/api/create_volume.py#b499
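To make the behavior just confirmed concrete, here is a simplified paraphrase of the OR logic around those lines (the helper name below is hypothetical; see the linked create_volume.py for the actual code):

```python
# Simplified paraphrase of the logic around create_volume.py L496-L500;
# the function name is hypothetical and the real code differs in detail.
def resolve_multiattach(req_multiattach, volume_type):
    """Decide whether the new volume will be multiattach.

    The request-body flag and the volume type's extra spec are OR'ed
    together, so either one alone makes the volume multiattach, and
    multiattach=False in the request cannot override an MA volume type.
    """
    extra_specs = (volume_type or {}).get('extra_specs', {})
    type_ma = extra_specs.get('multiattach', '') == '<is> True'
    return req_multiattach or type_ma

# Cases discussed above:
#   resolve_multiattach(True,  non_ma_type) -> True  (request flag wins: the bug)
#   resolve_multiattach(False, ma_type)     -> True  (volume type wins via OR)
```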
14:39:32 <rosmaita> ok, then we definitely need to remove 'multiattach' from the volume-create request
14:40:11 <rosmaita> i'll try to write something coherent on your tempest patch and see if i can convince gmann
14:40:38 <whoami-rajat> that would be great, thanks!
14:41:02 <rosmaita> thanks for the discussion and explaining this
14:41:22 <whoami-rajat> np, thanks for all the valuable feedback
14:41:56 <whoami-rajat> so that's all the topics we had for today
14:42:00 <whoami-rajat> let's move to open discussion
14:42:05 <whoami-rajat> #topic open discussion
14:42:41 <eharney> please review this rbd fix: https://review.opendev.org/c/openstack/cinder/+/865855
14:43:50 <Tony_Saad> Hey guys, we have these Antelope Dell driver bug fixes and were wondering if there are any blockers: https://review.opendev.org/c/openstack/cinder/+/768105 https://review.opendev.org/c/openstack/cinder/+/797970 https://review.opendev.org/c/openstack/cinder/+/821739 https://review.opendev.org/c/openstack/cinder/+/858370
14:45:03 <whoami-rajat> eharney, added a comment, it's missing a release note
14:47:26 <whoami-rajat> I don't think we have much for open discussion so let's close early
14:47:35 <whoami-rajat> remember to review the bug fixes important for RC1 and RC2
14:47:41 <whoami-rajat> also add your patches on the RC etherpad
14:47:46 <eharney> we should really look at https://review.opendev.org/c/openstack/cinder/+/873249
14:48:00 <eharney> do we use the review-priority flag?
14:48:47 <Tony_Saad> Hey guys, we have these Antelope Dell driver bug fixes and were wondering if there are any blockers: https://review.opendev.org/c/openstack/cinder/+/768105 https://review.opendev.org/c/openstack/cinder/+/797970 https://review.opendev.org/c/openstack/cinder/+/821739 https://review.opendev.org/c/openstack/cinder/+/858370
14:49:48 <rosmaita> eharney: ++ on 873249
14:49:51 <whoami-rajat> eharney, we do, and there should be a link to track that, but please do set it
14:49:54 <whoami-rajat> seems important
14:50:02 <eharney> whoami-rajat: i set it weeks ago
14:51:05 <whoami-rajat> oh i see, I will find out the tracker link then and we can target that
14:52:40 <whoami-rajat> hmm these config values haven't been working since the beginning
14:55:42 <whoami-rajat> added it to the RC tracker etherpad
14:55:56 <enriquetaso> Tony_Saad, I've left a comment on one patch
14:56:32 <Tony_Saad> thanks!
14:56:57 <enriquetaso> I'll read more about the context and review 873249
15:00:04 <whoami-rajat> we're out of time, thanks everyone for attending!
15:00:06 <whoami-rajat> #endmeeting