15:04:13 #startmeeting openstack_ansible_meeting
15:04:13 Meeting started Tue Oct 10 15:04:13 2023 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:04:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:04:13 The meeting name has been set to 'openstack_ansible_meeting'
15:04:17 #topic rollcall
15:04:18 o/
15:04:37 hi!
15:05:06 o/
15:05:17 sorry, last meeting running long 🙃
15:06:27 #topic office hours
15:06:45 PTG.
15:07:38 I've booked a room for us on Tuesday, Oct 24, 14:00 - 17:00 UTC
15:07:53 awesome
15:08:07 Is that fine for everyone, or do you have some input on how better to re-schedule it?
15:08:30 that works fine for me
15:08:42 #link https://ptg.opendev.org/ptg.html
15:08:52 #link https://etherpad.opendev.org/p/oct2023-ptg-os-ansible
15:08:54 Another thing is that I didn't book operator hours this time, but then the TC wrote a ML asking projects to do so.
15:09:12 I might re-name the etherpad fwiw :)
15:09:18 I haven't populated it yet
15:09:20 fair :)
15:09:24 I'll be performing an openstack upgrade during that time, so not sure if i'll be able to join, but i'll try to
15:09:59 What do we think about operator hours? Do we see any benefit from running these?
15:10:58 They don't have a lot of attendance, though I can appear for an hour on Wednesday just to make an opportunity for ppl to show up
15:11:00 IIRC last time it was only you, me and amy :|
15:11:06 Though I don't think anybody will
15:11:34 do we have a list of operators we can email directly and let them know about them?
15:11:43 besides just posting on the -discuss list
15:12:01 i feel as though they could be useful, if we got people to show up
15:12:02 I'm not sure really...
15:12:18 And besides openstack marketing...
15:12:55 But ok, let's try one last time.
15:13:02 i can probably try and drum up some interest with Rocky, but. yeah. let's give it a shot :)
15:13:07 And maybe do that on Monday as it's pretty much free
15:13:09 TBH i end up with a conflict or simply forget. Apologies
15:13:32 and like 17 UTC doesn't have a conflict with anything else yet
15:14:47 sounds good to me
15:15:03 ok, good. I will book it and send a ML
15:15:14 #action noonedeadpunk to book operator hour and send ML
15:15:54 Other than that, today we got debian 12 passing for metal jobs. It's failing on horizon though, and I'm spawning a sandbox to check what's there
15:16:28 CI stability is not great - we're having TIMEOUTs and broken CentOS mirrors now
15:16:36 So it's quite hard to land anything
15:16:52 i think there was some discussion in horizon irc about debian12 being broken with django 4
15:17:22 #link https://review.opendev.org/c/openstack/horizon/+/897310
15:18:10 I wonder why only Deb 12 is affected. Just py3.11?
15:19:05 As u-c are quite explicit about Django===3.2.18
15:20:41 So it's really interesting what's going on
15:20:51 well, debian12 might have other ideas about that
15:21:44 Then we have landed quite some bugfixes and some were already backported.
15:21:57 So I'm thinking of pushing bumps for stable branches soon
15:22:16 However, the bump for master seems to be failing with a weird nova issue
15:22:22 during upgrade check
15:22:46 #link https://review.opendev.org/c/openstack/openstack-ansible/+/897434
15:23:35 I haven't checked what's up yet, but that looks like a missing uuid for computes in /var/lib/nova/
15:24:11 `Compute node objects without service_id linkage were found in the database.
Ensure all non-deleted compute services have started with upgraded code.`
15:25:50 So that is a blocker for landing 2023.2 for sure and needs sorting out
15:26:12 Also a very weird issue with the mariadb upgrade, which I wasn't able to reproduce
15:26:12 i did start today looking for bogus/old tasks in roles we use a lot
15:26:28 but it feels like that's really not going to be the solution to making CI faster
15:26:30 #link https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/894740
15:26:50 yeah, those are nice clean-ups
15:27:07 I wonder if we should revive parallel execution of things, at least for CI
15:27:43 i wonder if there's some 12 vs 12.1 stuff going on in the galera role
15:27:54 Like make some python script that would parse setup-infrastructure and setup-openstack and execute the openstack-ansible binary in threads...
15:28:11 But it fails for jammy?
15:28:50 It somehow tries to install 10.6 instead of 10.11.5
15:29:06 like it uses the built-in repos, ignoring the pinned priority
15:29:17 894740 fails in the repo server for jammy
15:30:08 oh well actually
15:30:21 is it? https://zuul.opendev.org/t/openstack/build/204c318c9e204e01a6f48064ab9060d7/log/job-output.txt#25724
15:30:39 it's 894561 where we need to look
15:30:42 'mariadb-server=1:10.6.12-0ubuntu0.22.04.1'' failed: E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
15:31:49 well... that fails differently...
15:31:58 oh no /o\
15:32:11 it's that systemd_mount rescue task that keeps catching me out
15:32:15 but I wonder what mariadb it has installed
15:32:31 We should do smth about it I guess....
15:32:42 but that's a different topic
15:33:24 So. 894561 has 10.6.12 at the end
15:33:30 when it fails to re-bootstrap the cluster
15:33:55 https://zuul.opendev.org/t/openstack/build/5086e874a1dc4ef0b13c072e3e3d4886/log/logs/host/dpkg.log.txt#3032
15:35:11 wtf https://zuul.opendev.org/t/openstack/build/5086e874a1dc4ef0b13c072e3e3d4886/log/logs/host/dpkg.log.txt#2948
15:35:12 it somehow looks like the infra cache mirror just doesn't have the required version
15:35:27 it's before the upgrade
15:35:33 on 2023.1
15:35:42 and then it gets removed in favor of 10.6
15:36:04 that line is like only 2 seconds before all the 10.6 stuff
15:36:36 huh
15:37:13 `install mariadb-common:all 1:10.11.2+maria~ubu2204 1:10.6.12-0ubuntu0.22.04.1`
15:37:19 ^ what is this i wonder
15:37:27 aha, and the previous run is L1098
15:37:49 maybe we don't clean up enough?
15:38:02 and some more packages need to be wiped for the upgrade now
15:39:22 And at L2931 it's being removed...
15:39:41 status half-installed mariadb-common
15:40:16 so no, it really installs 10.6 from the default repos, ignoring the mariadb one
15:40:46 maybe we need a patch that inserts a `fail:` at the point it should be cleaned up
15:40:57 and get a held node to see what actually is there
15:41:18 So I really wonder if smth is off with the repo proxy
15:41:42 https://zuul.opendev.org/t/openstack/build/5086e874a1dc4ef0b13c072e3e3d4886/log/logs/etc/host/apt/sources.list.d/MariaDB.list.txt
15:43:31 can always try to use `http://mirror.iad.rax.opendev.org:8080/MariaDB/mariadb-10.11.5/repo/ubuntu/` in a local build
15:44:02 is it available from outside?
15:45:04 but yeah, will check that
15:45:52 And hopefully I will be able to continue pushing stuff for quorum queues and identify more linter failures....
15:46:33 yes, take the `-int` out of it
15:46:39 fwiw, horizon didn't fail in my sandbox
15:46:59 aha
15:48:21 so maybe horizon will feel better on the 2023.2 branch...
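
A minimal sketch of the parallel-execution idea raised at 15:27:54 follows: a python script that reads the playbooks imported by a top-level playbook such as setup-openstack.yml and runs each one through the openstack-ansible wrapper in a thread pool. The file name, the worker count, and the use of ThreadPoolExecutor are illustrative assumptions, and the flat pool deliberately ignores the ordering dependencies between playbooks that a real implementation would have to respect.

```python
#!/usr/bin/env python3
"""Sketch: run the playbooks imported by a top-level OSA playbook
through the openstack-ansible wrapper in a thread pool."""

import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

import yaml  # PyYAML


def list_playbooks(top_level):
    """Return the playbooks referenced via import_playbook, in file order."""
    with open(top_level) as handle:
        plays = yaml.safe_load(handle) or []
    return [p["import_playbook"] for p in plays if "import_playbook" in p]


def run_playbook(playbook):
    """Run a single playbook and return its exit code."""
    return subprocess.call(["openstack-ansible", playbook])


def main(top_level="setup-openstack.yml", workers=4):
    # NOTE: a flat pool ignores inter-playbook ordering; a real version
    # would need to group playbooks that can safely run concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rcs = list(pool.map(run_playbook, list_playbooks(top_level)))
    return 1 if any(rcs) else 0


if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:2]))
```

Whether this actually saves wall-clock time in CI depends on how many of the imported playbooks are really independent; keystone, for instance, has to be deployed before most of the other service playbooks can do anything useful.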
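On the related question of whether the proxied MariaDB mirror is reachable from outside and actually advertises 10.11.5, a quick local check could look like the sketch below. The base URL is the one quoted at 15:43:31; the dists/jammy/main/binary-amd64 path is an assumption based on the usual apt repository layout, not something confirmed from the logs. On a held node, `apt-cache policy mariadb-server` would show the same information from apt's side.

```python
#!/usr/bin/env python3
"""Sketch: fetch the proxied MariaDB mirror's Packages index and list the
versions it advertises, to rule the repo proxy in or out as the source of
the 10.6 fallback."""

import urllib.request

# Base URL taken from the meeting log; the suite/component path below is an
# assumption based on the standard apt repository layout.
MIRROR = "http://mirror.iad.rax.opendev.org:8080/MariaDB/mariadb-10.11.5/repo/ubuntu"
PACKAGES = MIRROR + "/dists/jammy/main/binary-amd64/Packages"
WANTED = "10.11.5"

with urllib.request.urlopen(PACKAGES, timeout=30) as resp:
    body = resp.read().decode("utf-8", errors="replace")

versions = [
    line.split(":", 1)[1].strip()
    for line in body.splitlines()
    if line.startswith("Version:")
]
matching = [v for v in versions if WANTED in v]
print("versions listed:", len(versions), "| matching", WANTED, ":", matching or "none")
```
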
15:49:22 or well... Maybe we're using a specific SHA atm...
15:49:26 (in gates)
15:50:03 anyway...
15:50:14 anything else to raise/talk about?
15:52:02 we say it every year, but it would be good not to have a huge rush to release :)
15:52:19 so anything that can fix up the CI reliability a bit would be a bonus
15:56:06 Yeah, each year I'm pretty much in agreement with that, but it somehow doesn't work out in the end :(
15:56:55 I think we really should not attempt to land smth extra other than what was already promised/agreed
15:57:17 And if the CI gods are nice - that should let us avoid being in a rush
16:00:07 🤞
16:00:12 #endmeeting