13:59:19 #startmeeting glance
13:59:19 Meeting started Thu Sep 14 13:59:19 2023 UTC and is due to finish in 60 minutes. The chair is pdeore. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:59:19 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:59:19 The meeting name has been set to 'glance'
13:59:19 #topic roll call
13:59:19 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
13:59:40 o/
13:59:52 o/
14:00:06 o/
14:00:43 lets wait few minutes for others to join
14:01:09 we have short agenda for today
14:01:51 o/
14:02:26 ok, let's start
14:02:33 #topic Release/periodic jobs updates
14:02:41 We are in rc1 release week
14:02:58 o/
14:03:06 we tried to get the sqlalchemy 2.0 patches in but unfortunately due to gate failures there are still few of them pending
14:03:33 So, instead of waiting for the remaining ones, I've now updated rc1 release patch with latest commit hash
14:03:36 #link https://review.opendev.org/c/openstack/releases/+/894658/2
14:03:49 ack
14:04:17 one more patch got in during last hour
14:04:41 yeah that's the last hash i updated ...
14:04:51 ack
14:05:08 moving ahead
14:05:29 As everyone knows we have virtual PTG during last week of October...
14:05:41 I have created the PTG planning etherpad and added some topics, please add the topics which you want to discuss during PTG
14:05:48 #link https://etherpad.opendev.org/p/caracal-ptg-glance-planning
14:06:13 I have added couple of topics
14:06:44 yeah Thanks for adding those :)
14:07:00 ++ on the osc topic
14:08:16 on the weighing mechanism,
14:08:38 adding something in B can't be removed in C because of SLURP unless it's not something the operator needs to handle or prepare for
14:08:50 isn't there config for that?
14:09:01 * dansmith struggles to remember
14:09:09 yes there is we can mark it deprecated in C
14:09:17 and then remove it in D ?
14:09:45 yeah
14:09:54 I would mark it as deprecated in B if we could still do that
14:10:08 even though you can technically do one cycle like that, two would be better if we know
14:10:10 Because I am not pretty much sure how many people are using location strategy
14:10:17 is it too late to sneak that in?
14:10:20 ack
14:10:53 yes, we can do it in C and remove it in D
14:11:25 okay
14:11:27 I will update the same in PTG etherpad
14:11:41 ack
14:11:50 oh right, it's already thursday
14:12:24 yes
14:13:48 ok, moving ahead
14:14:02 Periodic jobs all green except fips jobs failure
14:14:16 pdeore, I guess we missed to highlight this in release highlights?
14:14:18 RBD: Trash image when snapshots prevent deletion
14:14:31 glance-multistore-cinder-import-fips job is failing since it uses centos, so the temporary workaround is submitted by abhishekk,
14:14:48 abhishekk, ohh ohh yeahh
14:15:39 so is it possible to do that now ?
14:15:45 you can
14:16:12 in the same rc1 release patch ? or separate cycle highlights patch ?
14:16:59 separate or same patch also do
14:17:31 ok
14:17:38 I will do it right after the meeting
14:17:53 Thanks for highlighting :)
14:18:30 ok, coming back to the fips job failure
14:18:32 #link https://review.opendev.org/c/openstack/glance/+/893420 - Set GLOBAL_VENV to false for centos
14:18:39 np, just remembered it, I thought we missed it to merge
14:19:49 no it was merged i think around m2 week
14:19:59 yes
14:20:54 croelandt, dansmith could you please have a look at this fix ^ for fips job failure ?
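For context on the deprecation path discussed above (mark the option deprecated in one release, remove it a cycle or two later to respect SLURP), here is a minimal sketch of how such an option could be flagged for removal with oslo.config. The option name, default, and release string are illustrative, not the exact Glance code.

```python
# Minimal sketch, assuming oslo.config: marking an illustrative
# "location_strategy"-style option as deprecated-for-removal so operators
# see a warning for at least one full SLURP cycle before it is dropped.
from oslo_config import cfg

opts = [
    cfg.StrOpt(
        'location_strategy',            # illustrative option name
        default='location_order',
        deprecated_for_removal=True,
        deprecated_reason='Superseded by the store weighing mechanism.',
        deprecated_since='2023.2',      # illustrative release value
        help='Strategy used to order image locations.',
    ),
]

CONF = cfg.CONF
CONF.register_opts(opts)
```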
14:21:07 yeah I thought they were going to change that at the parent job level
14:21:17 let's ask and if not we can merge that
14:21:29 not done yet
14:21:33 oh, nm
14:21:40 we don't have a parent job for that one I guess, so whatever
14:23:46 * abhishekk am I disconnected?
14:24:12 dansmith, we have parent job for that
14:24:17 abhishekk, no :)
14:24:23 ack
14:24:40 pdeore: it doesn't inherit from a fips-specific parent job
14:25:05 hmm yeah
14:25:56 ok, let's move to next topic
14:26:03 #topic Move cinder-multistore job to n-v for time being?
14:26:16 +1 for me
14:26:34 even though we reduced concurrency for which dansmith is not in favor
14:26:43 there are still some timeouts in that job
14:27:16 So I think we can move it to non-voting and consult with cinder team on solution
14:27:16 I'll be in favor if we see consistent OOMs where there are multiple qemu processes running
14:27:38 earlier in the week the cinder team was seeing that on some of their jobs
14:27:51 yes
14:27:53 but either way, concurrency=3 is too low, 4 is what it was until recently
14:27:55 yeah
14:28:01 what do we think about https://review.opendev.org/c/openstack/glance_store/+/894514 ?
14:28:14 so I'd be more in favor of that or getting buy-in from gmann to go back to 4 globally
14:28:15 Is that something that may help with some of the timeouts we're seeing?
14:28:25 no
14:28:42 oh wait
14:28:54 what I'm concerned about is that if everyone makes concurrency changes to all their jobs, we lose the ability to compare the same job across projects for failures and we stop being able to change the knob in one place
14:29:57 agree
14:30:26 +1
14:30:42 croelandt, even though we merged it I think we need to release a store library with that fix
14:31:31 we can have a dnm patch depending on this and use from git for cinder job to check first?
14:32:25 abhishekk: we'd also need to set rados_connect_timeout for our jobs since the default value is -1 and not 0
14:32:26 +1
14:33:23 croelandt, ack
14:35:30 ok, so that's it from me for today, let's move to open discussions
14:35:32 #topic Open Discussions
14:36:23 Nothing from me
14:36:46 abhishekk, the change for fixing the image race condition is still valid ? #link https://review.opendev.org/c/openstack/tempest/+/892731
14:37:12 yes, for short term I think this is best option
14:38:05 ack
14:38:17 you never answered my question on that
14:38:29 do we really allow people to specify the image uuid?
14:38:46 yes
14:38:55 sorry I missed it
14:39:05 that's pretty high on the "never do that" list of secure best practices ;)
14:39:18 perhaps it's worth considering deprecating that ability?
14:39:44 I don't know the reason but if someone deletes the image by mistake can create it with same UUID
14:40:03 right, I'm sure that's the reason. It's still a terrible idea ;)
14:40:27 may be we can discuss this in PTG and invite brian for discussion
14:40:30 Consider there's a big main image on a siteack
14:40:35 oops.. ack ;)
14:40:56 I was also against it :D but not entertained during H cycle
14:42:01 okay
14:42:27 ok, let's discuss more on this in PTG
14:43:01 anyone has anything else to discuss ?
14:43:34 no problem
14:44:53 alright, let's conclude for the day !
14:45:01 Thanks everyone for joining !
14:45:19 #endmeeting
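On the client-supplied image UUID point raised in open discussion: the Images v2 API accepts an optional "id" field on image create, which is what allows an accidentally deleted image to be recreated with the same UUID. A minimal sketch, assuming a reachable Glance endpoint and a valid Keystone token (endpoint, token, and UUID below are placeholders):

```python
# Minimal sketch: creating an image record with a caller-chosen UUID via the
# Images v2 REST API. Endpoint, token, and UUID values are placeholders.
import requests

GLANCE_ENDPOINT = 'http://controller:9292'   # placeholder endpoint
AUTH_TOKEN = '<keystone-token>'              # placeholder token

resp = requests.post(
    f'{GLANCE_ENDPOINT}/v2/images',
    headers={'X-Auth-Token': AUTH_TOKEN, 'Content-Type': 'application/json'},
    json={
        'id': '5c7a1d2e-9f4b-4e6a-8c3d-0123456789ab',  # caller-chosen UUID (placeholder)
        'name': 'recreated-image',
        'disk_format': 'qcow2',
        'container_format': 'bare',
    },
)
resp.raise_for_status()
print(resp.json()['status'])   # a newly created image starts in "queued"
```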