15:00:32 <enriquetaso> #startmeeting cinder_bs
15:00:32 <opendevmeet> Meeting started Wed Jun 16 15:00:32 2021 UTC and is due to finish in 60 minutes.  The chair is enriquetaso. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:32 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:32 <opendevmeet> The meeting name has been set to 'cinder_bs'
15:00:47 <enriquetaso> Hello, welcome to the cinder bug meeting.
15:00:55 <rosmaita> o/
15:01:05 <enriquetaso> #topic bug_1: 'LVM Critical CI problems?'
15:01:15 <enriquetaso> From the cinder meeting earlier:
15:01:16 <enriquetaso> <eharney> we should probably write a new bug for retrying all of the other lvm commands that can segfault that weren't covered by 1901783, since that bug already has backports spanning a few branches:
15:01:26 <enriquetaso> What topics should the new bug report cover? For LVM delete volumes we have https://bugs.launchpad.net/cinder/+bug/1901783/ . Should I reopen that bug, or should I wait?
15:02:02 <rosmaita> looking
15:02:02 <eharney> that bug already has patches shipped across a handful of branches which are helpful, so we shouldn't reopen it
15:02:35 <eharney> we need a broader bug to cover the other various lvm calls we make that also occasionally crash
15:03:02 <rosmaita> eharney: i lost my link to your patch
15:03:16 <rosmaita> https://review.opendev.org/c/openstack/cinder/+/772126
15:03:20 <eharney> well i wrote https://review.opendev.org/c/openstack/cinder/+/772126 but that basically isn't helpful for this particular fix
15:03:37 <rosmaita> oh
15:03:41 <enriquetaso> calls like create volume, create/delete snapshot, create/delete backup, eharney?
15:04:05 <eharney> the hope was that calling lvdisplay w/ --readonly would avoid the crash, but it crashes anyway
15:04:34 <eharney> enriquetaso: everything that calls lvdisplay from cinder/brick/local_dev/lvm.py to start
15:04:53 <eharney> also calls to lvdisplay from os_brick/local_dev/lvm.py
15:05:09 <eharney> that's 7 methods or so just for lvdisplay
15:05:52 <enriquetaso> cool
15:05:53 <eharney> we know this hits calls to "lvs" and "lvdisplay", i don't know if it also will hit "vgs" and "lvcreate" etc or not
15:06:35 <eharney> i think we can implement the retry for lvs/lvdisplay and see what else shows up
15:06:47 <rosmaita> so is the theory that "lvdisplay --readonly" works, we just don't have the readonly flag in enough places?
15:07:00 <eharney> my guess was that it would help, but it didn't help
15:07:12 <eharney> i just think we should do it for reasons unrelated to this bug so the patch is still up
15:07:30 <eharney> (should probably remove the Related-Bug tag)
15:07:58 <rosmaita> yeah, that would help
15:08:07 <eharney> what i mean is, i saw lvdisplay also crash w/ the --readonly flag at some point after i submitted it
15:08:31 <rosmaita> i agree it makes sense to use readonly if that's all we need
15:09:58 <enriquetaso> #action(enriquetaso): check if this hits vgs and lvcreate as well.
15:10:06 <eharney> so we need to replicate https://review.opendev.org/c/openstack/cinder/+/783660 for lvs/lvdisplay calls
15:10:19 <eharney> and then look at the lvm code and see what else calls get_lv_info
15:11:10 <opendevreview> Eric Harney proposed openstack/cinder master: LVM: Use --readonly for lvdisplay in lv_has_snapshot  https://review.opendev.org/c/openstack/cinder/+/772126
15:12:10 <enriquetaso> OK, sounds like a plan
15:13:12 <enriquetaso> i can do the replication if nobody is working on that already
15:13:37 <eharney> i think nobody is
15:14:12 <rosmaita> not me!
15:14:15 <enriquetaso> #action(enriquetaso): replicate https://review.opendev.org/c/openstack/cinder/+/783660 for lvs/lvdisplay calls and then  look at the lvm code and see what else calls get_lv_info
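A minimal sketch of the retry approach discussed above, assuming tenacity for the retry loop and oslo's processutils for execution; the command, retry policy, and function name are illustrative, not the contents of patch 783660. The grounded detail is retrying an LVM query when the binary is killed by SIGSEGV (signal 11, which subprocess reports as exit code -11):

    # Hypothetical sketch only -- not the actual cinder patch.
    from oslo_concurrency import processutils as putils
    import tenacity

    def _is_segfault(exc):
        # death-by-signal shows up as a negative exit code;
        # SIGSEGV is signal 11, hence -11
        return (isinstance(exc, putils.ProcessExecutionError)
                and exc.exit_code == -11)

    @tenacity.retry(retry=tenacity.retry_if_exception(_is_segfault),
                    wait=tenacity.wait_fixed(1),
                    stop=tenacity.stop_after_attempt(3),
                    reraise=True)
    def get_lv_info(vg_name):
        # illustrative lvs invocation; the real code lives in
        # cinder/brick/local_dev/lvm.py and goes through rootwrap
        out, _err = putils.execute('env', 'LC_ALL=C', 'lvs',
                                   '--noheadings', '--unit=g',
                                   '-o', 'vg_name,name,size', vg_name,
                                   run_as_root=True, root_helper='sudo')
        return out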
15:14:29 <enriquetaso> great
15:14:35 <enriquetaso> moving on...
15:14:41 <enriquetaso> Cinder has 3 reported bugs related to documentation; there are two I'd like to talk about very quickly:
15:14:47 <eharney> also, elastic-recheck queries for "code: 139" (segfault) ...
15:15:31 <enriquetaso> should I track them as well? I didn't understand that part in the cinder meeting earlier
15:16:07 <eharney> well i think a few people have spent time rediscovering this issue where lvm tools crash
15:16:39 <eharney> the premise of the elastic recheck system is that we can avoid that by just identifying long-occurring known failures like this automatically
15:18:05 <rosmaita> did melanie have a link to a query in the original bug?
15:18:38 <eharney> there are a couple of queries in there that examine syslog for crashes but i suspect we actually want to catch it from the cinder volume log
15:19:16 <eharney> it will still crash and show up in syslog with our retry fixes
15:20:07 <eharney> so using those queries would result in unrelated failures being tagged as this...
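For reference, elastic-recheck signatures are per-bug YAML files containing a Lucene query; a hypothetical signature along the lines eharney suggests (matching the failure in the cinder volume log rather than syslog) might look like the following, where the message string and log filename are assumptions, not a tested query:

    # hypothetical queries/<bug-number>.yaml
    query: >-
      message:"Segmentation fault" AND tags:"screen-c-vol.txt"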
15:21:57 <enriquetaso> Any other considerations for this topic?
15:23:04 <enriquetaso> OK
15:23:19 <enriquetaso> thanks eharney !
15:23:26 <enriquetaso> and Brian
15:23:29 <eharney> i think it'd be useful to know if it only happens on certain platforms, but that's less important than patching it up probably
15:24:50 <enriquetaso> sure
15:25:05 <enriquetaso> #topic bug_2 'Still see v2 endpoints after disabling per documentation'
15:25:11 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1928947
15:25:18 <enriquetaso> Bug description:
15:25:19 <enriquetaso> The documentation states: "Block Storage V2 API has been deprecated. To ensure Cinder does not use the V2 API, update enable_v2_api=false and enable_v3_api=true in your cinder.conf file." However, after a successful deployment, running 'openstack catalog list' (sourced as overcloudrc) still shows v2 cinder endpoints.
15:25:40 * enriquetaso couldn't find where the documentation states that, so assumed it's here: [1]https://github.com/openstack/cinder/blob/master/cinder/api/__init__.py#L38
15:26:25 <enriquetaso> rosmaita: Currently devstack doesn't show v2 anymore, but I've tried to disable v3 and it still shows up. Is there any way to disable it in cinder.conf?
15:26:26 <eharney> turning off enable_v2_api isn't going to change whether a deployment tool creates endpoints for v2
15:26:49 <opendevreview> Rajat Dhasmana proposed openstack/cinder master: Fix: Schema validation for attachment create API  https://review.opendev.org/c/openstack/cinder/+/783389
15:26:49 <enriquetaso> oh ok
15:27:03 <eharney> so i assume this isn't a cinder bug
15:27:37 <enriquetaso> so there's a misunderstanding here and the bug is invalid
15:27:54 <rosmaita> yes, i think so, i'll put a comment on it
15:28:19 <opendevreview> Rajat Dhasmana proposed openstack/python-cinderclient master: Make instance_uuid optional in attachment create  https://review.opendev.org/c/openstack/python-cinderclient/+/783628
15:28:21 <enriquetaso> thanks!
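To restate the distinction eharney draws: the options quoted in the bug report only control which APIs cinder itself serves; the catalog entries live in keystone and are created by the deployment tool, so a stale v2 endpoint has to be removed there (e.g. with 'openstack endpoint delete <endpoint-id>'):

    # cinder.conf -- controls what cinder serves, not what the
    # keystone catalog advertises
    [DEFAULT]
    enable_v2_api = false
    enable_v3_api = true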
15:28:28 <enriquetaso> Last question:
15:28:37 <enriquetaso> #topic bug_3: 'Block Storage API V3 (CURRENT) in cinder - wrong URL for backup-detail'
15:28:41 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/1930526
15:28:47 <enriquetaso> Bug description: the API URL for 'Import a backup' is wrong (https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=import-a-backup-detail):
15:28:48 <enriquetaso> the URL should be fixed to /v3/{project_id}/backups/import_record
15:29:09 <enriquetaso> * I was going to reproduce it now, but maybe a cinder member already knows whether this is correct.
15:30:24 <eharney> hmm
15:30:39 <eharney> do we support importing to an existing backup?
15:31:44 <eharney> i think the bug is correct if you look at the URLs in cinder/tests/unit/api/contrib/test_backups.py
15:32:00 <eharney> probably a bad copy/paste from export_record
15:33:18 <enriquetaso> judging from the usage (cinder backup-import <backup_service> <backup_url>), cinder doesn't allow specifying an existing backup
15:33:36 <eharney> makes sense
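For the record, the export/import pair as exercised by test_backups.py: export is a GET on a specific backup, while import creates a new backup record from the exported metadata, which is why the import URL takes no backup id:

    GET  /v3/{project_id}/backups/{backup_id}/export_record
    POST /v3/{project_id}/backups/import_record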
15:34:43 <enriquetaso> #action(enriquetaso): submit a patch for 1930526 fixing the bad copy/paste from export_record
15:34:57 <enriquetaso> #topic Open Discussion
15:35:05 <enriquetaso> any other bugs to discuss today? :)
15:36:03 <rosmaita> nothing from me, thanks sofia
15:36:09 <eharney> no
15:36:26 <enriquetaso> then... that's all I have for today's meeting. Thank you!
15:36:31 <enriquetaso> See you next week
15:36:39 <enriquetaso> #endmeeting