13:59:19 <pdeore> #startmeeting glance
13:59:19 <opendevmeet> Meeting started Thu Sep 14 13:59:19 2023 UTC and is due to finish in 60 minutes.  The chair is pdeore. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:59:19 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:59:19 <opendevmeet> The meeting name has been set to 'glance'
13:59:19 <pdeore> #topic roll call
13:59:19 <pdeore> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
13:59:40 <pdeore> o/
13:59:52 <mrjoshi> o/
14:00:06 <abhishekk> o/
14:00:43 <pdeore> let's wait a few minutes for others to join
14:01:09 <pdeore> we have a short agenda for today
14:01:51 <dansmith> o/
14:02:26 <pdeore> ok, let's start
14:02:33 <pdeore> #topic Release/periodic jobs updates
14:02:41 <pdeore> We are in rc1 release week
14:02:58 <croelandt> o/
14:03:06 <pdeore> we tried to get the sqlalchemy 2.0 patches in, but unfortunately due to gate failures a few of them are still pending
14:03:33 <pdeore> So, instead of waiting for the remaining ones, I've now updated rc1 release patch with latest commit hash
14:03:36 <pdeore> #link https://review.opendev.org/c/openstack/releases/+/894658/2
14:03:49 <abhishekk> ack
14:04:17 <abhishekk> one more patch got in during last hour
14:04:41 <pdeore> yeah that's the last hash i updated ...
14:04:51 <abhishekk> ack
14:05:08 <pdeore> moving ahead
14:05:29 <pdeore> As everyone knows we have virtual PTG during last week of October...
14:05:41 <pdeore> I have created the PTG planning etherpad and added some topics; please add any topics you want to discuss during the PTG
14:05:48 <pdeore> #link https://etherpad.opendev.org/p/caracal-ptg-glance-planning
14:06:13 <abhishekk> I have added a couple of topics
14:06:44 <pdeore> yeah Thanks for adding those :)
14:07:00 <dansmith> ++ on the osc topic
14:08:16 <dansmith> on the weighing mechanism,
14:08:38 <dansmith> something added in B can't be removed in C because of SLURP, unless it's not something the operator needs to handle or prepare for
14:08:50 <dansmith> isn't there config for that?
14:09:01 * dansmith struggles to remember
14:09:09 <abhishekk> yes there is; we can mark it deprecated in C
14:09:17 <abhishekk> and then remove it in D ?
14:09:45 <dansmith> yeah
14:09:54 <dansmith> I would mark it as deprecated in B if we could still do that
14:10:08 <dansmith> even though you can technically do one cycle like that, two would be better if we know
14:10:10 <abhishekk> Because I am not really sure how many people are using the location strategy
14:10:17 <dansmith> is it too late to sneak that in?
14:10:20 <dansmith> ack
14:10:53 <abhishekk> yes, we can do it in C and remove it in D
14:11:25 <dansmith> okay
14:11:27 <abhishekk> I will update the same in PTG etherpad
14:11:41 <pdeore> ack
14:11:50 <dansmith> oh right, it's already thursday
14:12:24 <abhishekk> yes
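For context on the deprecation plan sketched above (mark deprecated in one cycle, remove a cycle or two later so SLURP upgraders see the warning), the operator-facing side would look roughly like this; a hedged sketch that assumes the option under discussion is the existing `location_strategy` setting in glance-api.conf:

```
# glance-api.conf -- sketch only; assumes the option in question is the
# existing [DEFAULT] location_strategy setting.
[DEFAULT]
# DEPRECATED in C, planned for removal in D. Doing it across two cycles
# (rather than B->C) means operators on the SLURP upgrade path get at
# least one full cycle of deprecation warnings before removal.
location_strategy = location_order
```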
14:13:48 <pdeore> ok, moving ahead
14:14:02 <pdeore> Periodic jobs all green except fips jobs failure
14:14:16 <abhishekk> pdeore, I guess we missed highlighting this in the release highlights?
14:14:18 <abhishekk> RBD: Trash image when snapshots prevent deletion
14:14:31 <pdeore> the glance-multistore-cinder-import-fips job is failing since it uses CentOS, so a temporary workaround has been submitted by abhishekk,
14:14:48 <pdeore> abhishekk, ohh ohh yeahh
14:15:39 <pdeore> so is it possible to do that now?
14:15:45 <abhishekk> you can
14:16:12 <pdeore> in the same rc1 release patch? or a separate cycle highlights patch?
14:16:59 <abhishekk> separate, or the same patch will also do
14:17:31 <pdeore> ok
14:17:38 <pdeore> I will do it right after the meeting
14:17:53 <pdeore> Thanks for highlighting :)
14:18:30 <pdeore> ok, coming back to the fips job failure
14:18:32 <pdeore> #link https://review.opendev.org/c/openstack/glance/+/893420  - Set GLOBAL_VENV to false for centos
14:18:39 <abhishekk> np, just remembered it; I thought we had missed merging it
14:19:49 <pdeore> no, it was merged I think around the m2 week
14:19:59 <abhishekk> yes
14:20:54 <pdeore> croelandt, dansmith could you please have a look at this fix ^ for the fips job failure?
14:21:07 <dansmith> yeah I thought they were going to change that at the parent job level
14:21:17 <dansmith> let's ask and if not we can merge that
14:21:29 <abhishekk> not done yet
14:21:33 <dansmith> oh, nm
14:21:40 <dansmith> we don't have a parent job for that one I guess, so whatever
14:23:46 * abhishekk am I disconnected?
14:24:12 <pdeore> dansmith, we have parent job for that
14:24:17 <pdeore> abhishekk, no :)
14:24:23 <abhishekk> ack
14:24:40 <dansmith> pdeore: it doesn't inherit from a fips-specific parent job
14:25:05 <pdeore> hmm yeah
14:25:56 <pdeore> ok, let's move to next topic
14:26:03 <pdeore> #topic Move cinder-multistore job to n-v for time being?
14:26:16 <abhishekk> +1 for me
14:26:34 <abhishekk> even though we reduced concurrency, which dansmith is not in favor of
14:26:43 <abhishekk> there are still some timeouts in that job
14:27:16 <abhishekk> So I think we can move it to non-voting and consult with cinder team on solution
14:27:16 <dansmith> I'll be in favor if we see consistent OOMs where there are multiple qemu processes running
14:27:38 <dansmith> earlier in the week the cinder team was seeing that on some of their jobs
14:27:51 <abhishekk> yes
14:27:53 <dansmith> but either way, concurrency=3 is too low, 4 is what it was until recently
14:27:55 <pdeore> yeah
14:28:01 <croelandt> what do we think about https://review.opendev.org/c/openstack/glance_store/+/894514 ?
14:28:14 <dansmith> so I'd be more in favor of that or getting buy-in from gmann to go back to 4 globally
14:28:15 <croelandt> Is that something that may help with some of the timeouts we're seeing?
14:28:25 <abhishekk> no
14:28:42 <abhishekk> oh wait
14:28:54 <dansmith> what I'm concerned about is that if everyone makes concurrency changes to all their jobs, we lose the ability to compare the same job across projects for failures and we stop being able to change the knob in one place
14:29:57 <abhishekk> agree
14:30:26 <croelandt> +1
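The concurrency knob being debated above lives in the job definition; a hedged sketch of what a per-job override looks like (the job name and parent here are illustrative, but `tempest_concurrency` follows the convention of the devstack-tempest base jobs):

```
# .zuul.yaml -- sketch; job name/parent are illustrative.
- job:
    name: glance-multistore-cinder-import
    parent: tempest-integrated-storage
    vars:
      # Overriding this per-job is exactly what the discussion above
      # cautions against: it breaks cross-project comparability, and
      # the knob can no longer be changed in one (parent) place.
      tempest_concurrency: 4
```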
14:30:42 <abhishekk> croelandt, even though we merged it, I think we need to release a store library with that fix
14:31:31 <abhishekk> we can have a DNM patch depending on this, and use the store from git for the cinder job to check first?
14:32:25 <croelandt> abhishekk: we'd also need to set rados_connect_timeout for our jobs since the default value is -1 and not 0
14:32:26 <pdeore> +1
14:33:23 <abhishekk> croelandt, ack
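Setting the timeout croelandt mentions would land in the job's glance-api.conf; a sketch assuming the option keeps its current name, `rados_connect_timeout`, in glance_store's rbd driver:

```
# glance-api.conf -- sketch; [glance_store] options come from glance_store's
# rbd driver.
[glance_store]
default_backend = rbd
# Per the discussion above, the default does not give the bounded connect
# time the jobs want, so it must be set explicitly (value in seconds).
rados_connect_timeout = 30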
14:35:30 <pdeore> ok, so that's it from me for today, let's move to open discussions
14:35:32 <pdeore> #topic Open Discussions
14:36:23 <abhishekk> Nothing from me
14:36:46 <pdeore> abhishekk, is the change for fixing the image race condition still valid? #link https://review.opendev.org/c/openstack/tempest/+/892731
14:37:12 <abhishekk> yes, for short term I think this is best option
14:38:05 <pdeore> ack
14:38:17 <dansmith> you never answered my question on that
14:38:29 <dansmith> do we really allow people to specify the image uuid?
14:38:46 <abhishekk> yes
14:38:55 <abhishekk> sorry I missed it
14:39:05 <dansmith> that's pretty high on the "never do that" list of secure best practices ;)
14:39:18 <dansmith> perhaps it's worth considering deprecating that ability?
14:39:44 <abhishekk> I don't know the reason, but if someone deletes an image by mistake they can re-create it with the same UUID
14:40:03 <dansmith> right, I'm sure that's the reason. It's still a terrible idea ;)
14:40:27 <abhishekk> may be we can discuss this in PTG and invite brian for discussion
14:40:30 <dansmith> Consider there's a big main image on a siteack
14:40:35 <dansmith> oops.. ack ;)
14:40:56 <abhishekk> I was also against it :D but it was not entertained during the H cycle
14:42:01 <dansmith> okay
14:42:27 <pdeore> ok, let's discuss more on this in PTG
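The re-create-with-the-same-UUID behavior discussed above relies on the Images API accepting a client-supplied `id`. A minimal stdlib-only sketch (the helper name is hypothetical, not a Glance API) of the validation burden a server takes on once it accepts client-chosen IDs:

```python
import uuid

def normalize_client_image_id(image_id):
    """Validate and canonicalize a client-supplied image ID.

    Hypothetical helper: accepting client-chosen IDs means the server
    must validate the format itself (and, beyond this sketch, handle
    collisions with existing or recently deleted images -- part of why
    the practice is discouraged above).
    """
    try:
        parsed = uuid.UUID(image_id)
    except (ValueError, AttributeError, TypeError):
        raise ValueError("image id must be a valid UUID: %r" % (image_id,))
    return str(parsed)
```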
14:43:01 <pdeore> does anyone have anything else to discuss?
14:43:34 <abhishekk> no problem
14:44:53 <pdeore> alright, let's conclude for the day !
14:45:01 <pdeore> Thanks everyone for joining !
14:45:19 <pdeore> #endmeeting