15:00:22 #startmeeting cinder_bs
15:00:22 Meeting started Wed Nov 9 15:00:22 2022 UTC and is due to finish in 60 minutes. The chair is enriquetaso. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:22 The meeting name has been set to 'cinder_bs'
15:00:25 Hey!
15:00:42 o/
15:00:58 We have 6 bugs to discuss today
15:01:07 Full list of bugs
15:01:10 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031133.html
15:01:21 #topic Fail to unify allocation_capacity_gb values among multiple Active-Active Cinder-Volume services
15:01:26 #link https://bugs.launchpad.net/cinder/+bug/1995204
15:01:33 Summary: In Active-Active deployments, allocated_capacity_gb returns the partial stats of the hosts instead of returning the total cluster's allocated capacity.
15:01:46 As far as I can tell: allocated capacity returns the provisioned volumes created by a single pool. [1] If this is correct, that's why the command returns a different stat every time it's used. However, this doesn't look cool and there should be an alternative for A/A clusters.
15:01:46 [1] https://specs.openstack.org/openstack/cinder-specs/specs/queens/provisioning-improvements.html
15:02:01 Another thing to highlight is that drivers usually report their stats by pool. For drivers that are not reporting their stats by pool, cinder uses the data from the special fixed pool created by _count_allocated_capacity().
15:02:09 This can be seen both as a bug and as a new feature.
15:02:56 I know geguileo has been working on this topic for a while. However, maybe another cinder member would like to comment on this
15:06:55 OK, I'll leave some comments on the bug report
15:07:50 #topic cinderclient against wallaby fails to create snapshot
15:07:54 #link https://bugs.launchpad.net/python-cinderclient/+bug/1995883
15:07:59 Using the latest cinderclient (9.1.0) against a cinder Wallaby release fails to create a snapshot because of an invalid input for the field/attribute 'force': the value is None, and None is not of type 'boolean' or 'string'.
15:08:31 There were two proposed fixes, but one was abandoned a couple of hours ago
15:08:43 Fix proposed to master
15:08:44 #link https://review.opendev.org/c/openstack/python-cinderclient/+/864027
15:08:55 I left a comment rosmaita
15:09:01 ok, thanks!
15:09:11 moving on
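For context on the first topic above (bug 1995204): each cinder-volume service tallies allocated_capacity_gb per pool from the volumes it manages, so in an Active/Active cluster each service reports only its own partial total. The sketch below is a rough, hypothetical approximation of that kind of tally; the function name and data layout are illustrative and are not the actual _count_allocated_capacity() code.

```python
# Rough, hypothetical illustration of a per-pool allocated-capacity tally.
# Each cinder-volume service would run something like this over the volumes
# it manages, so no single report covers the whole A/A cluster.

def count_allocated_capacity(volumes):
    """Sum volume sizes (GiB) per pool for one cinder-volume service.

    volumes: iterable of dicts with 'pool' and 'size' keys (illustrative
    layout only, not cinder's real data model).
    """
    allocated = {}
    for vol in volumes:
        allocated[vol['pool']] = allocated.get(vol['pool'], 0) + vol['size']
    return allocated


# Two A/A services managing different volumes report different partial stats:
print(count_allocated_capacity([{'pool': 'pool-a', 'size': 100},
                                {'pool': 'pool-a', 'size': 20}]))  # {'pool-a': 120}
print(count_allocated_capacity([{'pool': 'pool-b', 'size': 40}]))  # {'pool-b': 40}
```

Aggregating those partial tallies into a single cluster-wide allocated_capacity_gb is essentially what the bug report is asking for.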
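For the cinderclient snapshot bug discussed above (1995883): the request body ends up containing "force": null, which the Wallaby API schema rejects because it only accepts a boolean or a string. A minimal sketch of the general idea behind a fix, using a hypothetical helper rather than the actual cinderclient code:

```python
# Hypothetical illustration of the fix idea: only send the 'force' key when
# the caller actually set it, so older APIs never see "force": null.

def build_snapshot_body(volume_id, name=None, description=None, force=None):
    snapshot = {
        'volume_id': volume_id,
        'name': name,
        'description': description,
    }
    if force is not None:
        # Wallaby's schema only accepts a boolean or a string here, so a
        # bare None must never reach the API.
        snapshot['force'] = force
    return {'snapshot': snapshot}


# build_snapshot_body('vol-1234') omits 'force' entirely, so validation
# passes; build_snapshot_body('vol-1234', force=True) behaves as before.
```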
15:09:16 #topic Failed to create multiple instances with boot volumes at the same time in version 20.0.2.dev11.
15:09:21 #link https://bugs.launchpad.net/cinder/+bug/1995863
15:09:28 The reporter is not able to boot multiple instances from multiple boot volumes at the same time. The reporter mentioned that trying a lower version like cinder 19.1.1 solves the problem.
15:09:47 not sure how to proceed
15:09:58 Maybe a volunteer to reproduce this?
15:10:10 I'm not sure if I can catch this with a unit test
15:10:36 the NoneType errors hint at a driver bug
15:10:58 enriquetaso: probably not, eharney: my thoughts as well
15:11:51 so 20.x is yoga, and 19.x is zena
15:12:47 xena
15:13:05 eharney, what does it mean?
15:13:33 it means that the 3par driver owners should look at it
15:13:50 great, thanks eharney
15:14:07 #action(enriquetaso): update bug report
15:14:10 moving on
15:14:27 looks like no change in the 3par driver between yoga and xena
15:14:28 #topic Backup fails with VolumeNotFound but not set to error
15:14:32 #link https://bugs.launchpad.net/cinder/+bug/1996049
15:14:51 The below is for the Xena release; the actual race should be semi-fixed in Zed by the patch for bug https://bugs.launchpad.net/cinder/+bug/1970768
15:14:51 When the backup fails, we try to set the volume status, but if the volume does not exist we fail on that, so the backup is never set to "error" and stays in the "creating" state forever until a cloud admin fixes the status.
15:15:06 Fix proposed to master:
15:15:07 #link https://review.opendev.org/c/openstack/cinder/+/864100
15:15:58 nice find, the fix looks sensible from a quick look
15:16:37 yes.. i was looking at it but we should be careful
15:18:29 good point, but it looks like we are already saving and reraising the exception that caused the backup to fail
15:19:01 so i think it's probably OK to just swallow the volume-not-found
15:20:45 sounds fine.. maybe we can continue the discussion on the patch
15:20:52 Eric Harney proposed openstack/python-cinderclient master: Tests: add coverage for shell group_type_update https://review.opendev.org/c/openstack/python-cinderclient/+/863303
15:21:03 sure
15:21:07 thanks rosmaita
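For the backup cleanup bug above (1996049), a minimal sketch of the approach discussed in the meeting: tolerate a missing volume during cleanup so the backup record can still be flipped to 'error'. The helper names below are hypothetical stand-ins, not the code from the proposed patch; only exception.VolumeNotFound is a real cinder exception.

```python
from cinder import exception  # exception.VolumeNotFound is the real cinder exception


def finish_failed_backup(context, backup, volume_id, volume_api):
    """Mark a failed backup as 'error' even if its source volume is gone.

    volume_api.reset_status_after_backup is a hypothetical helper standing
    in for whatever the backup manager actually calls.
    """
    try:
        volume_api.reset_status_after_backup(context, volume_id)
    except exception.VolumeNotFound:
        # The volume was deleted while the backup was running; swallow this
        # (as discussed above) so the backup does not sit in 'creating'
        # forever. The original failure is still saved and re-raised by the
        # surrounding code.
        pass
    backup.status = 'error'
    backup.save()
```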
15:21:44 OK, I have marked the last two bugs as incomplete, but I think they're worth mentioning:
15:21:48 #topic reimage results to stuck in downloading state
15:21:52 #link https://bugs.launchpad.net/cinder/+bug/1995838
15:21:57 The user reported that when trying to "reimage" a volume (I'm not quite sure if they are trying to reuse the volume with a different image) with an image bigger than the volume allows, the volume gets stuck in the downloading state instead of failing.
15:22:27 maybe i should ask the reporter if the images are encrypted because that would involve the rekey, right? eharney
15:22:47 not sure if that helps but..
15:23:11 i suspect it isn't that
15:23:21 we should try this w/ nova reimage and see what happens
15:23:41 (to get some logs, etc)
15:24:26 Eric Harney proposed openstack/os-brick master: encryptors: Unbind LuksEncryptor and CryptsetupEncryptor https://review.opendev.org/c/openstack/os-brick/+/791271
15:24:26 #action: try this w/ nova reimage and see what happens (to get some logs, etc)
15:24:29 thanks eharney
15:24:34 #topic Volume State Update Failed After Backup Completed
15:24:38 #link https://bugs.launchpad.net/cinder/+bug/1996039
15:24:49 I've marked this bug report as incomplete until the reporter provides more information about the version and backend being used.
15:24:49 Summary:
15:24:49 Generally, after a backup is created, the volume's status is reset to whatever it was before the backup. However, if we create a backup of a volume with status 'in-use' (attached to an instance) and the instance gets deleted while the backup is being created, the status is wrongly set back to 'in-use' instead of staying 'available'.
15:24:49 When an instance is deleted, the volume is detached and its status is set to 'available'. After that, the volume backup completes and the volume's status is reset to 'in-use'. That is what causes the bug.
15:25:51 this is presumably around the whole issue of trying to use the volume status field to communicate the success/failure of backup operations
15:25:58 if we're doing that, we should see if we can stop doing that
15:26:28 good point, haven't thought of that..
15:30:09 OK, I'll add a comment on the bug regarding this so we can focus on why it's really failing
15:30:18 thanks everyone
15:30:24 #endmeeting