15:01:31 <enriquetaso> #startmeeting cinder_bs
15:01:31 <opendevmeet> Meeting started Wed May  4 15:01:31 2022 UTC and is due to finish in 60 minutes.  The chair is enriquetaso. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:31 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:31 <opendevmeet> The meeting name has been set to 'cinder_bs'
15:01:46 <enriquetaso> Hello, 5 new bugs were reported this period.
15:01:55 <enriquetaso> #link https://etherpad.opendev.org/p/cinder-bug-squad-meeting
15:01:55 <enriquetaso> #link http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028404.html
15:02:01 <enriquetaso> #topic  Temporary volume accepts deletion while it is used
15:02:09 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1970768
15:02:09 <enriquetaso> A temporary volume is created when an attached volume is being backed up. This temporary volume can be deleted via the DELETE API because its status is 'available'.
15:02:09 <enriquetaso> The fix is already merged on master.
15:02:36 <enriquetaso> #link https://review.opendev.org/c/openstack/cinder/+/826949
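For context, a minimal Python sketch of the kind of guard described above, assuming the delete path can recognise a backup-created temporary volume via an admin-metadata key; the 'temporary' key name and the function are illustrations only and may not match how the merged fix actually tracks these volumes:

    from cinder import exception

    def reject_temporary_volume_delete(volume):
        # Assumed marker: a 'temporary' admin-metadata key set by the backup
        # flow; the merged fix may track temporary volumes differently.
        admin_meta = volume.admin_metadata or {}
        if admin_meta.get('temporary') == 'True':
            raise exception.InvalidVolume(
                reason='temporary volume created for a backup is still in use')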
15:02:36 <enriquetaso> Moving on to a related bug
15:02:43 <enriquetaso> #topic Temporary volume could be deleted with force
15:02:43 <enriquetaso> Fix proposed to master
15:03:06 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1971483
15:04:02 <enriquetaso> #link https://review.opendev.org/c/openstack/cinder/+/830901
15:06:32 <enriquetaso> Moving on
15:06:35 <enriquetaso> #topic  Volume reset-state API validation state checking is incorrect
15:06:41 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1970624
15:06:56 <enriquetaso> eharney, code from I0a53dfee "Reset state robustification for volume os-reset_status" aims to reject volume state updates to "error_deleting" and "detaching" but fails to do so due to a typo.
15:07:08 <enriquetaso> Original fix:
15:07:08 <enriquetaso> #link https://review.opendev.org/c/openstack/cinder/+/773985
15:07:08 <enriquetaso> Fix proposed to master:
15:07:08 <enriquetaso> #link https://review.opendev.org/c/openstack/cinder/+/839416
15:07:18 <eharney> yeah i'm going to rework the patch for this based on rosmaita's input
15:07:29 <eharney> 773985 isn't an original fix, it's where the bug was introduced
15:07:39 <enriquetaso> oh, my bad
15:08:01 <enriquetaso> sounds good, cinder team, please review :)
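As a rough illustration of the check being discussed (a sketch only, not the patch under review): the intent is that reset-state never moves a volume into states reserved for internal transitions.

    # Disallowed targets taken from the bug description above; the function
    # name and the exception used are illustrative, not the actual patch.
    DISALLOWED_RESET_TARGETS = frozenset(('error_deleting', 'detaching'))

    def validate_reset_status(new_status):
        if new_status in DISALLOWED_RESET_TARGETS:
            raise ValueError('resetting volume status to %r is not allowed'
                             % new_status)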
15:08:18 <enriquetaso> moving on
15:08:41 <enriquetaso> #topic rbd_store_chunk_size in megabytes is an unwanted limitation
15:08:52 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1971154
15:09:01 <enriquetaso> The report requests some changes regarding rbd_store_chunk_size. The reporter proposed two alternatives.
15:09:14 <enriquetaso> I need help understanding whether this makes sense or whether I should ask more questions about the problem.
15:09:30 <eharney> well there are restrictions on what we can set it to, since glance and cinder in many configurations need to agree on the chunk size
15:10:20 <eharney> we should look into this area, there are some deficiencies we still need to address w/ RBD as far as sector sizes too  (512 vs 4k)
15:10:43 <eharney> but i don't think the suggestion to just add a new config value that lets deployers set whatever is necessarily the right answer
15:11:30 <eharney> as usual, we don't have much concrete performance data to analyze, so more info on the actual problem would be helpful
15:12:56 <enriquetaso> OK, so this bug is not trivial then.
15:13:06 <enriquetaso> I'll ask for concrete data and point all this out
15:13:22 <eharney> right, it's a good idea to improve this, but it needs a lot of thought
15:13:40 <enriquetaso> makes sense
15:14:02 <rosmaita> i think his question #2 is a good one
15:14:49 <eharney> it is
15:15:21 <rosmaita> all i can think is it's leftover from the days before there were dedicated pools?
15:17:30 <eharney> well, specifying the chunk size helps prevent situations where you end up with images that can't be moved between pools during migration, cinder<->glance, etc
15:17:43 <eharney> but i don't have a nice clear answer at the moment
15:18:17 <rosmaita> gotcha
15:20:33 <enriquetaso> OK, so this looks like something interesting to look into in the future. I can bring it back later; as Eric pointed out, it needs more thought.
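For reference, the option under discussion is defined in the Cinder RBD driver roughly as below, paraphrased here as an oslo.config sketch; the 4 MB default reflects the driver at the time of writing, and the megabyte granularity is exactly the limitation the reporter wants relaxed:

    from oslo_config import cfg

    # Paraphrased sketch of the existing option, not copied verbatim from the
    # driver: the integer value is interpreted in megabytes.
    RBD_OPTS = [
        cfg.IntOpt('rbd_store_chunk_size',
                   default=4,
                   help='Volumes will be chunked into objects of this size '
                        '(in megabytes).'),
    ]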
15:20:45 <enriquetaso> Last but not least
15:20:50 <enriquetaso> #topic Could not find any paths for the volume
15:20:58 <enriquetaso> #link https://bugs.launchpad.net/os-brick/+bug/1961613
15:21:08 <enriquetaso> In an environment using NVMeoF with SPDK, when an instance is shut off or hard restarted, it is not able to find the volume again. The volume is visible on the node with "nvme list" but nova reports: "Could not find any paths for the volume."
15:21:19 <enriquetaso> geguileo, I'm not familiar with NVMeoF. The reporter mentioned that this could be an os-brick problem. I think the bug report makes sense.
15:21:48 <geguileo> it's an os-brick problem
15:22:10 <geguileo> it's already reported, and I will be fixing it shortly
15:22:19 <enriquetaso> \o/
15:22:19 <geguileo> (there's a patch proposed now, but I'm making changes)
15:22:33 <enriquetaso> sure, thanks Gorka
15:22:39 <geguileo> I'm saying it without looking at the bug report
15:22:49 <enriquetaso> :P
15:23:11 <geguileo> I mean, it can be one of 2 cases...
15:23:16 <geguileo> that I'm fixing...
15:23:28 <geguileo> let's not forget that I'm fixing close to 12 nvmeof bugs...
15:23:43 <geguileo> this is most likely the one where you cannot call os-brick with an already existing subsystem
15:24:03 <geguileo> mixed with not disconnecting a subsystem on volume_disconnect
15:24:08 <eharney> we should probably just reevaluate any nvmeof bugs after geguileo's current stuff lands
15:24:13 <geguileo> if they wait for 10 minutes it probably works   lol
15:24:26 <enriquetaso> of course, well, i can bring it back later when you update the patches, sounds like the right thing to do
15:24:33 <geguileo> because the nvmeof timeout will kick in and remove it...
15:24:39 <rosmaita> those bugs are great, by the time you get help from support, there is no problem
15:24:48 <geguileo> lol
15:25:11 <enriquetaso> ha
15:25:19 <enriquetaso> OK, that's all i have for today's meeting
15:25:25 <enriquetaso> thanks!
15:25:34 <rosmaita> i have 2 quick questions
15:25:39 <enriquetaso> sure
15:26:00 <rosmaita> i came across this when i was looking into eharney's patch that he mentioned earlier
15:26:04 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/839825
15:26:26 <rosmaita> i didn't file a bug, but wonder if i should
15:27:07 <eharney> does it have any observable behavior impacts?
15:27:41 <rosmaita> not now, my concern is that if someone actually uses this and then backports the usage, it will break without this patch
15:27:47 <rosmaita> (does that make sense?)
15:28:19 <eharney> i think so, but i wouldn't be inclined to file a bug for it myself
15:28:30 <rosmaita> ok, works for me
15:28:33 <enriquetaso> no bug then
15:29:06 <rosmaita> ok, my other issue is test-only, so i won't file a bug for it either
15:29:41 <rosmaita> ok, sorry i wasted our time
15:29:54 <enriquetaso> don't worry
15:30:02 <enriquetaso> :D
15:30:08 <enriquetaso> OK, running out of time
15:30:15 <enriquetaso> #endmeeting