15:02:50 #startmeeting cinder_bs
15:02:50 Meeting started Wed Oct 12 15:02:50 2022 UTC and is due to finish in 60 minutes. The chair is enriquetaso. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:50 The meeting name has been set to 'cinder_bs'
15:02:55 Welcome to the bug meeting.
15:02:59 We have 6 bugs for this meeting.
15:03:06 Full list of bugs:
15:03:12 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030819.html
15:03:34 #topic Cinder fails to backup/snapshot/clone/extend volumes when the pool is full
15:03:38 #link https://bugs.launchpad.net/cinder/+bug/1992493
15:03:45 Summary: Volumes should automatically get migrated if the operation fails due to insufficient space in the existing pool.
15:04:08 walt, do you want to add any comments?
15:04:24 sounds interesting, will probably need a spec
15:04:41 I'm not sure if I should mark this bug as "wishlist" instead of "medium", because it looks like an improvement
15:04:54 eharney, ok, then it's not a bug
15:05:57 Moving on :)
15:06:22 #topic [Netapp Ontap] iSCSI multipath flush fail with stderr: map in use
15:06:27 #link https://bugs.launchpad.net/os-brick/+bug/1992289
15:06:39 This error happens when I live-migrate or resize an instance to another compute host. I also checked the SAN configuration to ensure the corresponding LUN was completely changed to the new host's initiator and LUN-ID.
15:06:39 OpenStack version: Victoria, with the NetApp ONTAP driver for the SAN.
15:06:55 After this error, the instance is marked ERROR; it needs a state reset and a manual "dmsetup remove -f " before the VM can be started again.
15:08:32 I'm lost here, isn't it a nova bug?
15:11:24 OK, i'll check later
15:11:33 #topic Multipath extend volume not working on running VM randomly
15:11:38 #link https://bugs.launchpad.net/os-brick/+bug/1992296
15:11:42 Summary:
15:11:50 1. Extend the volume from Horizon.
15:11:50 2. The action log in the portal shows the volume extended successfully.
15:11:50 3. Check lsblk in the VM OS: the corresponding device size was not updated correctly, and nova-compute shows the error: Stdout: 'timeout receiving packet\n'
15:12:06 Workaround: use the command "virsh blockresize" to push the new volume size to the VM.
15:13:40 I'll try to reproduce this problem with the cinder CLI
15:17:18 Moving on
15:17:23 #topic [SVf] : lsportip needs to fetch IPs with `host` flag
15:17:27 #link https://bugs.launchpad.net/cinder/+bug/1992160
15:17:38 Summary: During IBM Storwize driver initialization, the lsportip command stores the node IP details.
15:17:39 Currently, lsportip fetches all the IP addresses without any filter value.
15:17:39 The bug is assigned to Kumar Kanishka. No patch proposed to master yet.
15:17:56 hi, yes this bug was raised by me
15:18:06 code changes are in progress
15:18:19 by Kumar Kanishka
15:19:04 harshailani++
15:19:06 thanks!
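
A rough sketch of the manual recovery described above for bug 1992289, assuming admin/root access on the compute host; the instance UUID and the multipath map name are placeholders, not values taken from the bug report.

    # Reset the instance out of ERROR state (placeholder instance UUID).
    openstack server set --state active <instance-uuid>
    # Force-remove the stale device-mapper multipath map that could not be
    # flushed during the live migration / resize (placeholder WWID map name).
    dmsetup remove -f /dev/mapper/<multipath-wwid>
    # Start the instance again.
    openstack server start <instance-uuid>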
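
Similarly, a sketch of the "virsh blockresize" workaround mentioned for bug 1992296, run on the compute host that hosts the guest; the libvirt domain name, target device, and size are placeholders.

    # List the domain's block devices to find the target name (placeholders).
    virsh domblklist instance-00000042
    # Push the new size of the already-extended Cinder volume into the guest.
    virsh blockresize instance-00000042 vdb 100G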
15:19:16 Moving on
15:19:22 this bug was raised because of a customer requirement: they want the lsportip output to return only the node IPs that have the host flag set to yes
15:19:54 enriquetaso++
15:20:03 harshailani, sounds good, i'll keep an eye on it and I'll review it :)
15:20:12 awesome, thanks
15:20:21 sorry, i'm moving too fast
15:20:45 no problem
15:21:53 we can move on :)
15:22:20 :)
15:22:27 #topic cinder backup S3 driver failure: signed integer is greater than maximum
15:22:31 #link https://bugs.launchpad.net/cinder/+bug/1992292
15:22:44 Summary: After setting backup_sha_block_size_bytes = 32768, creating a backup that is greater than the maximum fails with `S3 connection failure: signed integer is greater than maximum`.
15:23:46 eharney, shouldn't this fail or show a message?
15:24:14 enriquetaso: it does fail, i think
15:24:43 i looked into this a bit, it's a limitation in the ssl/http client, we need to split the data into smaller chunks to avoid it
15:25:24 that config setting is defined for the posix driver, does the s3 driver actually use it?
15:27:33 backup_s3_block_size is the option i think
15:28:24 oh, true, backup_s3_block_size for s3 and backup_sha_block_size_bytes for posix
15:28:44 i'll mention it
15:30:21 guess it won't matter, the default for backup_s3_block_size is 32768
15:30:27 i think the option in the bug is right
15:30:41 s3 is based on the chunkeddriver, and the backtrace shows it using that sha code
15:32:10 anyway, it's a valid bug, we should probably advise, "if you are creating a backup of a volume bigger than X, set this option to Y"
15:33:38 so, this could be: add a message in the docs with ^ and then we have the fix: chunk the data into smaller pieces..
15:35:18 OK, i think it makes sense
15:35:48 or maybe bigger chunks? is that file getting so big because it's listing a hash for each chunk?
15:36:04 rosmaita: correct
15:37:21 sure
15:39:09 OK, moving on
15:39:11 last bug
15:39:18 #topic cinder backup incremental when volume change size
15:39:22 #link https://bugs.launchpad.net/cinder/+bug/1992293
15:39:27 1. Create a full backup of a volume with a size of 50 GB
15:39:27 2. Extend the original volume to 100 GB
15:39:27 3. Create an incremental backup of the same volume id --> creation fails
15:40:14 If the volume is extended, should the incremental backup fail?
15:40:39 I marked it as incomplete because I think the reporter should share the backup driver and some logs
15:40:52 so, not much to say
15:42:52 #topic open discussion
15:43:02 Feel free to share your bugs now
15:46:41 OK, looks like no bugs :)
15:46:47 Thanks!!
15:46:50 #endmeeting
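
For the S3 backup bug (1992292) discussed above, an illustrative cinder.conf snippet: backup_s3_block_size and backup_sha_block_size_bytes are the two options named in the discussion (both default to 32768), but the larger value shown is only an example of the proposed "volume bigger than X, set this option to Y" advice, not a threshold verified in the meeting.

    [DEFAULT]
    backup_driver = cinder.backup.drivers.s3.S3BackupDriver
    # Block size, in bytes, at which the S3 driver tracks and hashes changes
    # for incremental backups (default 32768). A larger block keeps the
    # per-backup sha256 list smaller for very large volumes; example value
    # only, and backup_s3_object_size must stay a multiple of it.
    backup_s3_block_size = 2097152
    # The similarly named option below applies to the posix driver, not to S3.
    # backup_sha_block_size_bytes = 32768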
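
And a rough reproduction of the last bug (1992293) with the cinder CLI, following the three steps from the report; the volume and backup names are placeholders, and whether step 3 fails is expected to depend on the backup driver, which the reporter has been asked to share.

    # 1. Create a 50 GB volume and take a full backup of it.
    cinder create --name incr-test 50
    cinder backup-create --name full-bkp incr-test
    # 2. Extend the original volume to 100 GB.
    cinder extend incr-test 100
    # 3. Take an incremental backup of the same volume (reported to fail).
    cinder backup-create --incremental --name incr-bkp incr-test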