15:02:50 <enriquetaso> #startmeeting cinder_bs
15:02:50 <opendevmeet> Meeting started Wed Oct 12 15:02:50 2022 UTC and is due to finish in 60 minutes.  The chair is enriquetaso. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:50 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:50 <opendevmeet> The meeting name has been set to 'cinder_bs'
15:02:55 <enriquetaso> Welcome to the bug meeting.
15:02:59 <enriquetaso> We have 6 bugs for this meeting.
15:03:06 <enriquetaso> Full list of bugs:
15:03:12 <enriquetaso> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030819.html
15:03:34 <enriquetaso> #topic Cinder fails to backup/snapshot/clone/extend volumes when the pool is full
15:03:38 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1992493
15:03:45 <enriquetaso> Summary: Volumes should automatically get migrated if the operation fails due to insufficient space in the existing pool.
15:04:08 <enriquetaso> walt, do you want to add any comments?
15:04:24 <eharney> sounds interesting, will probably need a spec
15:04:41 <enriquetaso> I'm not sure if I should mark this bug as "wishlist" instead of "medium" because it looks like an improvement
15:04:54 <enriquetaso> eharney, ok, then it's not a bug
15:05:57 <enriquetaso> Moving on :)
15:06:22 <enriquetaso> #topic  [Netapp Ontap] iSCSI multipath flush fail with stderr: map in use
15:06:27 <enriquetaso> #link https://bugs.launchpad.net/os-brick/+bug/1992289
15:06:39 <enriquetaso> This error happens when I live-migrate or resize an instance to another compute host. I also checked the SAN configuration to ensure the corresponding LUN was completely remapped to the new host's initiator and LUN ID.
15:06:39 <enriquetaso> OpenStack version: Victoria, with the NetApp ONTAP driver for SAN.
15:06:55 <enriquetaso> After this error, the instance is marked ERROR; you need to reset its state and then manually run "dmsetup remove -f <mapper-id>" before starting the VM.
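A minimal sketch of that recovery sequence, assuming placeholder names (<instance-uuid>, <mapper-id>) and that the usual nova/openstack clients are available; exact steps will vary per deployment:

    # clear the ERROR state on the affected instance (placeholder UUID)
    nova reset-state --active <instance-uuid>
    # remove the stale multipath map left behind by the failed flush
    dmsetup remove -f <mapper-id>
    # start the VM again
    openstack server start <instance-uuid>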
15:08:32 <enriquetaso> I'm lost here, isn't it a nova bug?
15:11:24 <enriquetaso> OK, i'll check later
15:11:33 <enriquetaso> #topic Multipath extend volume not working on running VM randomly
15:11:38 <enriquetaso> #link https://bugs.launchpad.net/os-brick/+bug/1992296
15:11:42 <enriquetaso> Summary:
15:11:50 <enriquetaso> 1. Extend the volume from Horizon.
15:11:50 <enriquetaso> 2. The action log from the portal shows the volume was extended successfully.
15:11:50 <enriquetaso> 3. Check lsblk in the VM OS: the corresponding device size was not updated. nova-compute shows the error: Stdout: 'timeout receiving packet\n'
15:12:06 <enriquetaso> Workaround: use the command "virsh blockresize" to push the new volume size to the VM.
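A rough sketch of that workaround on the compute host, with hypothetical domain/target names (instance-00000001, vda) and an example size:

    # locate the libvirt domain backing the instance
    virsh list --all
    # push the new size down to the guest block device (names and size are examples)
    virsh blockresize instance-00000001 vda 100G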
15:13:40 <enriquetaso> I'll try to reproduce this problem with the cinder CLI
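A possible reproduction outline with the cinder CLI, using a placeholder volume name; whether it actually triggers the issue likely depends on the multipath setup:

    # extend an attached volume to an example size of 100 GB
    cinder extend test-vol 100
    # then run lsblk inside the guest to see whether the new size shows up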
15:17:18 <enriquetaso> Moving on
15:17:23 <enriquetaso> #topic [SVf] : lsportip needs to fetch IPs with `host` flag
15:17:27 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1992160
15:17:38 <enriquetaso> Summary: during IBM Storwize driver initialization, the lsportip command is used to store the node IP details.
15:17:39 <enriquetaso> Currently, lsportip fetches all the IP addresses without any filter value.
15:17:39 <enriquetaso> The bug is assigned to Kumar Kanishka. No patch has been proposed to master yet.
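Purely for illustration, a hypothetical shell filter over the Spectrum Virtualize CLI output, assuming lsportip exposes a host column and (only for this example) that it is the last field:

    # keep only the port IP rows whose host flag is 'yes' (column position is assumed)
    ssh superuser@<cluster-ip> 'lsportip -delim :' | awk -F: '$NF == "yes"'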
15:17:56 <harshailani> hi, yes this bug was raised by me
15:18:06 <harshailani> code changes are in progress
15:18:19 <harshailani> by Kumar Kanishka
15:19:04 <enriquetaso> harshailani++
15:19:06 <enriquetaso> thanks!
15:19:16 <enriquetaso> Moving on
15:19:22 <harshailani> this bug was raised because of a customer requirement: the customer came to us stating that the lsportip output should only return the node IPs that have the host flag set to yes
15:19:54 <harshailani> enriquetaso++
15:20:03 <enriquetaso> harshailani, sounds good, i'll keep an eye and I'll review it :)
15:20:12 <harshailani> awesome, thanks
15:20:21 <enriquetaso> sorry i'm moving too fast
15:20:45 <harshailani> no problem
15:21:53 <harshailani> we can move on :)
15:22:20 <enriquetaso> :)
15:22:27 <enriquetaso> #topic  cinder backup S3 driver failure: signed integer is greater than maximum
15:22:31 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1992292
15:22:44 <enriquetaso> Summary: after setting backup_sha_block_size_bytes = 32768, creating a backup large enough to exceed the maximum fails with `S3 connection failure: signed integer is greater than maximum`.
15:23:46 <enriquetaso> eharney, shouldn't this fail or show a message ?
15:24:14 <eharney> enriquetaso: it does fail, i think
15:24:43 <eharney> i looked into this a bit, it's a limitation in the ssl/http client, we need to chunk data into smaller chunks to avoid it
15:25:24 <rosmaita> that config setting is defined for the posix driver, does the s3 driver actually use it?
15:27:33 <eharney> backup_s3_block_size is the option i think
15:28:24 <enriquetaso> oh, true, backup_s3_block_size for s3 and backup_sha_block_size_bytes for posix
15:28:44 <enriquetaso> i'll mention it
15:30:21 <rosmaita> guess it won't matter, the default for backup_s3_block_size is 32768
15:30:27 <eharney> i think the option in the bug is right
15:30:41 <eharney> s3 is based on the chunkeddriver, and the backtrace shows it using that sha code
15:32:10 <eharney> anyway, it's a valid bug, we should probably advise, "if you are creating a backup of a volume bigger than X, set this option to Y"
15:33:38 <enriquetaso> so, we could add a note in the docs with ^ and then we have the fix: chunk the data into smaller chunks
15:35:18 <enriquetaso> OK, i think it makes sense
15:35:48 <rosmaita> or maybe bigger chunks? is that file getting so big because it's listing a hash for each chunk?
15:36:04 <eharney> rosmaita: correct
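To make that advice concrete, a hedged cinder.conf sketch; the value is an example only, and (per the discussion above) the assumption is that backup_s3_block_size is the sha block size the chunked driver uses, so a larger value means fewer per-chunk hashes and a smaller sha file to upload:

    [DEFAULT]
    # example only: raise the sha block size so the per-chunk hash file stays small
    # (default is 32768; the right value depends on how large the backed-up volumes are)
    backup_s3_block_size = 1048576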
15:37:21 <enriquetaso> sure
15:39:09 <enriquetaso> OK, moving on
15:39:11 <enriquetaso> last bug
15:39:18 <enriquetaso> #topic  cinder backup incremental when volume change size
15:39:22 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1992293
15:39:27 <enriquetaso> 1. Create a full backup with a volume size of 50 GB
15:39:27 <enriquetaso> 2. Extend the original volume to 100 GB
15:39:27 <enriquetaso> 3. Create an incremental backup of the same volume id --> creation fails
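The reproduction with the cinder CLI would look roughly like this (placeholder names; the failing step presumably depends on the backup driver in use):

    cinder create --name inc-test 50                            # 50 GB volume
    cinder backup-create --name full-bkp inc-test               # full backup
    cinder extend inc-test 100                                  # grow the volume to 100 GB
    cinder backup-create --incremental --name inc-bkp inc-test  # reported to fail here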
15:40:14 <enriquetaso> If the volume is extended, should the incremental backup fail?
15:40:39 <enriquetaso> I marked it as incomplete because I think the reporter should share the backup driver and some logs
15:40:52 <enriquetaso> so, not much to say
15:42:52 <enriquetaso> #topic open discussion
15:43:02 <enriquetaso> Feel free to share your bugs now
15:46:41 <enriquetaso> OK, looks like no bugs :)
15:46:47 <enriquetaso> Thanks!!
15:46:50 <enriquetaso> #endmeeting