15:04:13 <noonedeadpunk> #startmeeting openstack_ansible_meeting
15:04:13 <opendevmeet> Meeting started Tue Oct 10 15:04:13 2023 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:04:13 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:04:13 <opendevmeet> The meeting name has been set to 'openstack_ansible_meeting'
15:04:17 <noonedeadpunk> #topic rollcall
15:04:18 <noonedeadpunk> o/
15:04:37 <damiandabrowski> hi!
15:05:06 <NeilHanlon> o/
15:05:17 <NeilHanlon> sorry, last meeting running long 🙃
15:06:27 <noonedeadpunk> #topic office hours
15:06:45 <noonedeadpunk> PTG.
15:07:38 <noonedeadpunk> I've booked a room for us on Tuesday, Oct 24, 14:00 - 17:00 UTC
15:07:53 <NeilHanlon> awesome
15:08:07 <noonedeadpunk> Is that fine for everyone, or do you have any input on how better to schedule it?
15:08:30 <NeilHanlon> that works fine for me
15:08:42 <NeilHanlon> #link https://ptg.opendev.org/ptg.html
15:08:52 <NeilHanlon> #link https://etherpad.opendev.org/p/oct2023-ptg-os-ansible
15:08:54 <noonedeadpunk> Another thing is that I didn't book operator hours this time, but then the TC wrote to the ML asking projects to do so.
15:09:12 <noonedeadpunk> I might re-name etherpad fwiw :)
15:09:18 <noonedeadpunk> I haven't populated it yet
15:09:20 <NeilHanlon> fair :)
15:09:24 <damiandabrowski> I'll be performing an openstack upgrade during that time, so not sure if I'll be able to join, but I'll try to
15:09:59 <noonedeadpunk> What do we think about operator hours? Do we see any benefit in running these?
15:10:58 <noonedeadpunk> They don't get a lot of attendance, though I can show up for an hour on Wednesday just to give ppl an opportunity to appear
15:11:00 <damiandabrowski> IIRC last time it was only you, me and amy :|
15:11:06 <noonedeadpunk> Though I don't think anybody will
15:11:34 <NeilHanlon> do we have a list of operators we can email directly and let them know about them?
15:11:43 <NeilHanlon> besides just posting on the -discuss list
15:12:01 <NeilHanlon> i feel as though they could be useful, if we got people to show up
15:12:02 <noonedeadpunk> I'm not sure really...
15:12:18 <noonedeadpunk> And besides openstack marketing...
15:12:55 <noonedeadpunk> But ok, let's try one last time.
15:13:02 <NeilHanlon> i can probably try and drum up some interest with Rocky, but. yeah. let's give it a shot :)
15:13:07 <noonedeadpunk> And maybe do that on Monday as it's pretty much free
15:13:09 <jamesdenton> TBH i end up with a conflict or simply forget. Apologies
15:13:32 <noonedeadpunk> and like 17UTC doesn't have a conflict with anything else yet
15:14:47 <NeilHanlon> sounds good to me
15:15:03 <noonedeadpunk> ok, good. I will book and send a ML
15:15:14 <noonedeadpunk> #action noonedeadpunk to book operator hour and send ML
15:15:54 <noonedeadpunk> Other than that, today we got debian 12 passing for metal jobs. It's failing on horizon though, and I'm spawning a sandbox to check what's there
15:16:28 <noonedeadpunk> CI stability is not great - we're having TIMEOUTs and broken CentOS mirrors now
15:16:36 <noonedeadpunk> So quite hard to land anything
15:16:52 <jrosser> i think there was some discussion in horizon irc about debian12 being broken with django 4
15:17:22 <noonedeadpunk> #link https://review.opendev.org/c/openstack/horizon/+/897310
15:18:10 <noonedeadpunk> I wonder why only Deb 12 is affected. Just py3.11?
15:19:05 <noonedeadpunk> As u-c are quite explicit about Django===3.2.18
15:20:41 <noonedeadpunk> So it's really interesting what's going on
15:20:51 <jrosser> well, debian12 might have other ideas about that
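(A quick way to check which Django actually landed would be to inspect the horizon venv rather than the system python - a minimal sketch, assuming OSA's usual /openstack/venvs/horizon-<tag> layout; adjust the glob for the deployed version:)

    # on the debian 12 node: what did pip really resolve inside the venv?
    /openstack/venvs/horizon-*/bin/pip show django
    /openstack/venvs/horizon-*/bin/python -c 'import django; print(django.get_version())'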
15:21:44 <noonedeadpunk> Then we have landed quite a few bugfixes, and some were already backported.
15:21:57 <noonedeadpunk> So I'm thinking of pushing bumps for stable branches soon
15:22:16 <noonedeadpunk> However, bump for master seems to be failing with weird nova issue
15:22:22 <noonedeadpunk> during upgrade check
15:22:46 <noonedeadpunk> #link https://review.opendev.org/c/openstack/openstack-ansible/+/897434
15:23:35 <noonedeadpunk> I haven't checked what's up yet, but it looks like a missing uuid for computes in /var/lib/nova/
15:24:11 <noonedeadpunk> `Compute node objects without service_id linkage were found in the database. Ensure all non-deleted compute services  have started with upgraded code.`
15:25:50 <noonedeadpunk> So that is a blocker for landing 2023.2 for sure and needs sorting out
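(For reference, that message comes from `nova-status upgrade check`; the underlying condition is roughly compute_nodes rows without a service_id. A sketch of how one could confirm that on a failed node - the SQL is illustrative, assuming nova's standard schema and database name:)

    # run the check by hand from the nova venv
    nova-status upgrade check
    # and look at the rows it is complaining about (illustrative query)
    mysql nova -e "SELECT id, hypervisor_hostname, service_id
                   FROM compute_nodes
                   WHERE deleted = 0 AND service_id IS NULL;"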
15:26:12 <noonedeadpunk> Also a very weird issue with the mariadb upgrade, which I wasn't able to reproduce
15:26:12 <jrosser> i did start today looking for bogus/old tasks in roles we use a lot
15:26:28 <jrosser> but it feels like that's really not going to be the solution to making CI faster
15:26:30 <noonedeadpunk> #link https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/894740
15:26:50 <noonedeadpunk> yeah, those are nice clean-ups
15:27:07 <noonedeadpunk> I wonder if we should revive parallel execution of things at least for CIs
15:27:43 <jrosser> i wonder if there's some 12 vs 12.1 stuff going on in the galera role
15:27:54 <noonedeadpunk> Like make some python script that would parse setup-infrastructure and setup-openstack and execute the openstack-ansible binary in threads...
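(A minimal sketch of that idea - the batch groupings below are hypothetical and would need to reflect real playbook dependencies, which is the hard part:)

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical batches: playbooks within one batch are assumed to be
    # independent of each other; batches run sequentially to preserve the
    # ordering between them.
    BATCHES = [
        ["os-keystone-install.yml"],
        ["os-glance-install.yml", "os-placement-install.yml"],
        ["os-nova-install.yml", "os-neutron-install.yml"],
    ]

    def run(playbook):
        # the openstack-ansible wrapper sets up env before ansible-playbook
        return subprocess.call(["openstack-ansible", playbook])

    for batch in BATCHES:
        with ThreadPoolExecutor(max_workers=len(batch)) as pool:
            results = list(pool.map(run, batch))
        if any(rc != 0 for rc in results):
            raise SystemExit("batch failed: {}".format(dict(zip(batch, results))))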
15:28:11 <noonedeadpunk> But it fails for jammy?
15:28:50 <noonedeadpunk> It somehow tries to install 10.6 instead of 10.11.5
15:29:06 <noonedeadpunk> like it uses the built-in repos, ignoring the pin priority
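(For context, that resolution depends on an apt pin winning over the Ubuntu archive; a sketch of what such a pin looks like - file name, origin host and priority here are illustrative, not the role's exact output:)

    # /etc/apt/preferences.d/MariaDB.pref (illustrative)
    # A priority above 1000 beats the archive default of 500 and also
    # permits downgrades; if the pin is missing or targets the wrong
    # origin, apt resolves mariadb-server to 1:10.6.x from jammy instead.
    Package: *
    Pin: origin downloads.mariadb.com
    Pin-Priority: 1001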
15:29:17 <jrosser> 894740 fails in repo server for jammy
15:30:08 <jrosser> oh well actually
15:30:21 <noonedeadpunk> is it? https://zuul.opendev.org/t/openstack/build/204c318c9e204e01a6f48064ab9060d7/log/job-output.txt#25724
15:30:39 <jrosser> it's 894561 where we need to look
15:30:42 <noonedeadpunk> 'mariadb-server=1:10.6.12-0ubuntu0.22.04.1'' failed: E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
15:31:49 <noonedeadpunk> well... that fails differently...
15:31:58 <jrosser> oh no /o\
15:32:11 <jrosser> it's that systemd_mount rescue task that keeps catching me out
15:32:15 <noonedeadpunk> but I wonder what mariadb it has installed
15:32:31 <noonedeadpunk> We should do smth about it, I guess...
15:32:42 <noonedeadpunk> but that's different topic
15:33:24 <noonedeadpunk> So. 894561 has 10.6.12 at the end
15:33:30 <noonedeadpunk> when it fails to re-bootstrap the cluster
15:33:55 <noonedeadpunk> https://zuul.opendev.org/t/openstack/build/5086e874a1dc4ef0b13c072e3e3d4886/log/logs/host/dpkg.log.txt#3032
15:35:11 <jrosser> wtf https://zuul.opendev.org/t/openstack/build/5086e874a1dc4ef0b13c072e3e3d4886/log/logs/host/dpkg.log.txt#2948
15:35:12 <noonedeadpunk> it somehow looks like the infra cache mirror just doesn't have the required version
15:35:27 <noonedeadpunk> it's before upgrade
15:35:33 <noonedeadpunk> on 2023.1
15:35:42 <noonedeadpunk> and then it gets removed in favor of 10.6
15:36:04 <jrosser> that line is like only 2 seconds before all the 10.6 stuff
15:36:36 <noonedeadpunk> huh
15:37:13 <jrosser> `install mariadb-common:all 1:10.11.2+maria~ubu2204 1:10.6.12-0ubuntu0.22.04.1`
15:37:19 <jrosser> ^ what is this i wonder
15:37:27 <noonedeadpunk> aha, and previous run is L1098
15:37:49 <noonedeadpunk> maybe we don't clean up enough?
15:38:02 <noonedeadpunk> and some more packages need to be wiped for the upgrade now
15:39:22 <noonedeadpunk> And L2931 it's being removed...
15:39:41 <noonedeadpunk> status half-installed mariadb-common
15:40:16 <noonedeadpunk> so no, it really installs 10.6 from the default repos, ignoring the mariadb one
15:40:46 <jrosser> maybe we need a patch that inserts a `fail:` at the point it should be cleaned up
15:40:57 <jrosser> and get a held node to see what actually is there
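(Something like this, presumably - task placement and wording are illustrative, combined with a zuul autohold so the node survives the failure:)

    # dropped into the galera_server upgrade path right after the
    # cleanup tasks suspected of being incomplete
    - name: Intentional stop for debugging
      ansible.builtin.fail:
        msg: "Tripwire: inspect dpkg/apt state for leftover mariadb packages"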
15:41:18 <noonedeadpunk> So I really wonder if smth is off with the repo proxy
15:41:42 <noonedeadpunk> https://zuul.opendev.org/t/openstack/build/5086e874a1dc4ef0b13c072e3e3d4886/log/logs/etc/host/apt/sources.list.d/MariaDB.list.txt
15:43:31 <jrosser> can always try to use `http://mirror.iad.rax.opendev.org:8080/MariaDB/mariadb-10.11.5/repo/ubuntu/` in a local build
15:44:02 <noonedeadpunk> is it available from outside?
15:45:04 <noonedeadpunk> but yeah, will check that
15:45:52 <noonedeadpunk> And hopefully I will be able to continue pushing stuff for quorum queues and identify more linter failures...
15:46:33 <jrosser> yes take the `-int` out of it
15:46:39 <noonedeadpunk> fwiw, horizon didn't fail in my sandbox
15:46:59 <noonedeadpunk> aha
15:48:21 <noonedeadpunk> so maybe horizon will feel better on 2023.2 branch...
15:49:22 <noonedeadpunk> or well... Maybe we're using specific SHA atm...
15:49:26 <noonedeadpunk> (in gates)
15:50:03 <noonedeadpunk> anyway...
15:50:14 <noonedeadpunk> anything else to raise/talk about?
15:52:02 <jrosser> we say it every year, but it would be good not to have a huge rush to release :)
15:52:19 <jrosser> so anything that can fix up the CI reliability a bit would be a bonus
15:56:06 <noonedeadpunk> Yeah, and each year I pretty much agree with that, but somehow it doesn't work out in the end :(
15:56:55 <noonedeadpunk> I think we really should not attempt to land smth extra other than what was already promised/agreed
15:57:17 <noonedeadpunk> And if the CI gods are nice, we shouldn't end up in a rush
16:00:07 <NeilHanlon> 🤞
16:00:12 <noonedeadpunk> #endmeeting