15:00:22 <enriquetaso> #startmeeting cinder_bs
15:00:22 <opendevmeet> Meeting started Wed Nov  9 15:00:22 2022 UTC and is due to finish in 60 minutes.  The chair is enriquetaso. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:22 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:22 <opendevmeet> The meeting name has been set to 'cinder_bs'
15:00:25 <enriquetaso> Hey!
15:00:42 <rosmaita> o/
15:00:58 <enriquetaso> We have 6 bugs to discuss today
15:01:07 <enriquetaso> Full list of bugs
15:01:10 <enriquetaso> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031133.html
15:01:21 <enriquetaso> #topic  Fail to unify allocation_capacity_gb values among multiple Active-Active Cinder-Volume services
15:01:26 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1995204
15:01:33 <enriquetaso> Summary: In Active-Active deployments, allocated_capacity_gb returns the partial stats of a single host instead of the total allocated capacity of the whole cluster.
15:01:46 <enriquetaso> As far as I can tell: allocated capacity reflects the provisioned volumes created within a single pool [1]. If this is correct, that's why the command returns a different stat each time it's used. However, this isn't ideal and there should be an alternative for A/A clusters.
15:01:46 <enriquetaso> [1] https://specs.openstack.org/openstack/cinder-specs/specs/queens/provisioning-improvements.html
15:02:01 <enriquetaso> Another thing to highlight is that drivers usually report their stats per pool. For drivers that don't report per-pool stats, cinder uses the data from the special fixed pool created by _count_allocated_capacity().
15:02:09 <enriquetaso> This can be seen both as a bug and as a new feature.
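As an aside, a minimal sketch (plain Python, not cinder code; the host/pool structures here are hypothetical) of why a single service's allocated_capacity_gb is only a partial view in an Active-Active cluster:

    # Hypothetical per-pool stats as two cinder-volume services in the same
    # cluster might each report them; every service only counts the volumes
    # it provisioned itself.
    host_a_pools = [{"name": "pool1", "allocated_capacity_gb": 100}]
    host_b_pools = [{"name": "pool1", "allocated_capacity_gb": 250}]

    def allocated_on_one_host(pools):
        # What a single service reports: only its own share of the allocations.
        return sum(p["allocated_capacity_gb"] for p in pools)

    def allocated_in_cluster(per_host_pools):
        # What the reporter expects to see: the cluster-wide total.
        return sum(allocated_on_one_host(pools) for pools in per_host_pools)

    print(allocated_on_one_host(host_a_pools))                  # 100 (partial)
    print(allocated_in_cluster([host_a_pools, host_b_pools]))   # 350 (total)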
15:02:56 <enriquetaso> I know geguileo has been working on this topic for a while. However, maybe another cinder member would like to comment on this
15:06:55 <enriquetaso> OK, I'll leave some comments on the bug report
15:07:50 <enriquetaso> #topic cinderclient against wallaby fails to create snapshot
15:07:54 <enriquetaso> #link https://bugs.launchpad.net/python-cinderclient/+bug/1995883
15:07:59 <enriquetaso> Using the latest cinderclient 9.1.0 against the cinder wallaby release, creating a snapshot fails with an invalid input error for the field/attribute 'force': the value is None, and None is not of type 'boolean' or 'string'.
15:08:31 <enriquetaso> There were two proposed fixes, but one was abandoned a couple of hours ago
15:08:43 <enriquetaso> Fix proposed to master
15:08:44 <enriquetaso> #link https://review.opendev.org/c/openstack/python-cinderclient/+/864027
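For illustration only, a sketch of the general idea behind such a fix (hypothetical helper code, not the actual cinderclient patch): omit optional fields whose value is None so the server-side schema, which accepts boolean or string for 'force', never receives a null:

    def build_snapshot_body(volume_id, name=None, description=None, force=None):
        # Only include 'force' when the caller actually set it; sending
        # "force": null is rejected because None is not 'boolean' or 'string'.
        snapshot = {"volume_id": volume_id, "name": name, "description": description}
        if force is not None:
            snapshot["force"] = force
        return {"snapshot": snapshot}

    print(build_snapshot_body("vol-1"))              # no 'force' key at all
    print(build_snapshot_body("vol-1", force=True))  # 'force': True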
15:08:55 <enriquetaso> I left a comment rosmaita
15:09:01 <rosmaita> ok, thanks!
15:09:11 <enriquetaso> moving on
15:09:16 <enriquetaso> #topic Failed to create multiple instances with boot volumes at the same time in version 20.0.2.dev11.
15:09:21 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1995863
15:09:28 <enriquetaso> The reporter is not able to boot multiple instances from multiple boot volumes at the same time. The reporter mentioned that using a lower version like cinder 19.1.1 solves the problem.
15:09:47 <enriquetaso> not sure how to proceed
15:09:58 <enriquetaso> Maybe a volunteer to reproduce this?
15:10:10 <enriquetaso> I'm not sure if I can catch this with unittest
15:10:36 <eharney> the NoneType errors hint at a driver bug
15:10:58 <rosmaita> enriquetaso: probably not, eharney: my thoughts as well
15:11:51 <rosmaita> so 20.x is yoga, and 19.x is zena
15:12:47 <enriquetaso> xena
15:13:05 <enriquetaso> eharney, what does that mean?
15:13:33 <eharney> it means that the 3par driver owners should look at it
15:13:50 <enriquetaso> great, thanks eharney
15:14:07 <enriquetaso> #action(enriquetaso): update bug report
15:14:10 <enriquetaso> moving on
15:14:27 <rosmaita> looks like no change in the 3par driver between yoga and xena
15:14:28 <enriquetaso> #topic Backup fails with VolumeNotFound but not set to error
15:14:32 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1996049
15:14:51 <enriquetaso> The below is for the Xena release; the actual race should be semi-fixed in Zed by the patch for bug https://bugs.launchpad.net/cinder/+bug/1970768
15:14:51 <enriquetaso> When the backup fails, we try to set the volume status, but if the volume does not exist we fail on that, so the backup is never set to 'error' and stays in the 'creating' state forever until a cloud admin fixes the status.
15:15:06 <enriquetaso> Fix proposed to master:
15:15:07 <enriquetaso> #link https://review.opendev.org/c/openstack/cinder/+/864100
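A rough sketch of the idea under discussion (hypothetical exception and helper names, not the actual patch): a missing source volume should not prevent the backup record itself from being moved to 'error':

    class VolumeNotFound(Exception):
        """Stand-in for cinder's volume-not-found exception."""

    def get_volume(volume_store, volume_id):
        if volume_id not in volume_store:
            raise VolumeNotFound(volume_id)
        return volume_store[volume_id]

    def fail_backup(backup, volume_store, reason):
        # Try to restore the source volume's status, but swallow a not-found
        # error so the backup still ends up in 'error' instead of 'creating'.
        try:
            get_volume(volume_store, backup["volume_id"])["status"] = "available"
        except VolumeNotFound:
            pass
        backup["status"] = "error"
        backup["fail_reason"] = reason

    volumes = {}  # the source volume has already been deleted
    backup = {"id": "b1", "volume_id": "v1", "status": "creating"}
    fail_backup(backup, volumes, "backup failed")
    print(backup["status"])  # error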
15:15:58 <eharney> nice find, the fix looks sensible from a quick look
15:16:37 <enriquetaso> yes.. i was looking at it but we should be careful
15:18:29 <rosmaita> good point, but it looks like we are already saving and reraising the exception that caused the backup to fail
15:19:01 <rosmaita> so i think it's probably OK to just swallow the volume-not-found
15:20:45 <enriquetaso> sounds fine.. maybe we can continue the discussion on the patch
15:20:52 <opendevreview> Eric Harney proposed openstack/python-cinderclient master: Tests: add coverage for shell group_type_update  https://review.opendev.org/c/openstack/python-cinderclient/+/863303
15:21:03 <rosmaita> sure
15:21:07 <enriquetaso> thanks rosmaita
15:21:44 <enriquetaso> OK, I have marked the last two bugs as incomplete, but I think they're worth mentioning:
15:21:48 <enriquetaso> #topic  reimage results to stuck in downloading state
15:21:52 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1995838
15:21:57 <enriquetaso> The user reported that when trying to "reimage" a volume (I'm not quite sure if they are trying to reuse the volume with a different image) with an image bigger than the volume allows, the volume gets stuck in the downloading state instead of failing.
15:22:27 <enriquetaso> maybe i should ask the reporter if the images are encrypted, because that would involve the rekey, right? eharney
15:22:47 <enriquetaso> not sure if that helps but..
15:23:11 <eharney> i suspect it isn't that
15:23:21 <eharney> we should try this w/ nova reimage and see what happens
15:23:41 <eharney> (to get some logs, etc)
15:24:26 <opendevreview> Eric Harney proposed openstack/os-brick master: encryptors: Unbind LuksEncryptor and CryptsetupEncryptor  https://review.opendev.org/c/openstack/os-brick/+/791271
15:24:26 <enriquetaso> #action: try this w/ nova reimage and see what happens (to get some logs, etc)
15:24:29 <enriquetaso> thanks eharney
15:24:34 <enriquetaso> #topic  Volume State Update Failed After Backup Completed
15:24:38 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1996039
15:24:49 <enriquetaso> I've marked this bug report as incomplete until the reporter provides more information about the version and backend being used.
15:24:49 <enriquetaso> Summary:
15:24:49 <enriquetaso> Generally, after the backup is created, the volume's status is reset to what it was before the backup. However, if we create a backup of a volume with status 'in-use' (attached to an instance) and the instance gets deleted while the backup is being created, the volume status is erroneously set back to 'in-use' instead of staying 'available'.
15:24:49 <enriquetaso> When the instance is deleted, the volume is detached and its status is set to 'available'. After that, the volume backup completes and the volume's status is reset to 'in-use'. This is what causes the bug.
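A simplified sketch of the race (plain Python; statuses and helpers are hypothetical, not cinder code): restoring the pre-backup status unconditionally clobbers the 'available' status set by the detach that happened while the backup was running. The 'checked' variant is just one possible mitigation, sketched for contrast:

    def finish_backup_naive(volume, previous_status):
        # Always restore the status captured before the backup started,
        # even if the volume changed state in the meantime (the reported bug).
        volume["status"] = previous_status

    def finish_backup_checked(volume, previous_status):
        # Only restore the pre-backup status if the volume is still in the
        # transient 'backing-up' state; otherwise keep the newer status.
        if volume["status"] == "backing-up":
            volume["status"] = previous_status

    vol = {"id": "v1", "status": "backing-up"}
    vol["status"] = "available"  # instance deleted mid-backup, volume detached

    finish_backup_naive(vol, previous_status="in-use")
    print(vol["status"])  # in-use  (wrong: the volume is no longer attached)

    vol["status"] = "available"
    finish_backup_checked(vol, previous_status="in-use")
    print(vol["status"])  # available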
15:25:51 <eharney> this is presumably around the whole issue of trying to use the volume status field to communicate the success/failure of backup operations
15:25:58 <eharney> if we're doing that, we should see if we can stop doing that
15:26:28 <enriquetaso> good point, haven't thought in that..
15:30:09 <enriquetaso> OK, I'll add a comment on the bug regarding this so we can focus on why it's really failing
15:30:18 <enriquetaso> thanks everyone
15:30:24 <enriquetaso> #endmeeting