15:00:20 <noonedeadpunk> #startmeeting openstack_ansible_meeting
15:00:20 <opendevmeet> Meeting started Tue Sep 10 15:00:20 2024 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:20 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:20 <opendevmeet> The meeting name has been set to 'openstack_ansible_meeting'
15:00:24 <noonedeadpunk> #topic rollcall
15:00:26 <noonedeadpunk> o/
15:00:38 <hamburgler> o/
15:01:39 <NeilHanlon> o/
15:01:50 <jrosser> o/ hello
15:03:18 <noonedeadpunk> #topic office hours
15:03:36 <noonedeadpunk> so, noble test jobs finally merged
15:04:28 <noonedeadpunk> though we missed moving Noble along with the playbooks
15:05:12 <noonedeadpunk> and the fix failed intermittently in the gate and is currently in recheck
15:05:20 <noonedeadpunk> #link https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/928592/3
15:05:54 <noonedeadpunk> There is also a current issue with apache on metal
15:06:15 <noonedeadpunk> as we're using different MPMs across roles, which causes upgrade job failures
15:06:40 <noonedeadpunk> (once upgrade jobs track correct branch)
15:07:10 <noonedeadpunk> so whatever fix is needed should be backported to 2024.1
15:07:24 <jrosser> i found that by trying to understand the job failures in more depth
15:07:43 <noonedeadpunk> and i guess this should be kinda the last thing to backport before doing the first minor release
15:07:54 <noonedeadpunk> Ah, except the octavia thing that I realized just today
15:08:07 <noonedeadpunk> #link https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/928815
15:08:10 <jrosser> do we have broken apache/metal on 2024.1?
15:08:17 <noonedeadpunk> yeah
15:08:22 <jrosser> oh dear, ok
15:08:35 <noonedeadpunk> I think that a second run of the playbooks will break it
15:09:15 <jrosser> fixing the upgrade job branch could bring more CI trouble, just a release earlier
15:11:12 <noonedeadpunk> yeah, true
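To make the MPM mismatch described above concrete: a minimal diagnostic sketch that reads the MPM Apache actually loaded on a metal host and compares it against a single expected value. The expected MPM name ("event") and the use of apache2ctl -M are assumptions for illustration only, not taken from the OSA roles or from this meeting.

    #!/usr/bin/env python3
    """Check that a metal host's Apache runs the expected MPM."""
    import subprocess
    import sys

    EXPECTED_MPM = "event"  # hypothetical: whichever MPM the roles should agree on

    def loaded_mpm():
        # `apache2ctl -M` lists loaded modules; the active MPM appears as
        # e.g. "mpm_event_module (shared)".
        out = subprocess.run(
            ["apache2ctl", "-M"], capture_output=True, text=True, check=True
        ).stdout
        for line in out.splitlines():
            line = line.strip()
            if line.startswith("mpm_"):
                return line.split("_module")[0].removeprefix("mpm_")
        return None

    if __name__ == "__main__":
        mpm = loaded_mpm()
        if mpm != EXPECTED_MPM:
            print(f"MPM mismatch: loaded={mpm!r}, expected={EXPECTED_MPM!r}")
            sys.exit(1)
        print(f"OK: Apache is running the {mpm} MPM")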
15:14:22 <noonedeadpunk> so there are quite a few things to work on, but I'm not sure what needs deeper discussion
15:16:21 <jrosser> i found the horizon compress failure is not specifically an OSA issue
15:16:44 <noonedeadpunk> oh
15:17:02 <jrosser> it apparently occurs when installing UCA packages, as part of building Debian packages, and also in devstack
15:17:51 <jrosser> there is a bug which is now correctly assigned to the horizon project https://bugs.launchpad.net/horizon/+bug/2045394
15:18:53 <jrosser> i also spent some time looking at why jobs fail to get u-c when it should come from disk
15:19:14 <jrosser> and unfortunately that happens a lot in upgrade jobs, and there are insufficient logs collected
15:20:48 <jrosser> this (+ a backport) should address the log collection https://review.opendev.org/c/openstack/openstack-ansible/+/928790
15:20:59 <jrosser> but that is kind of hard to test
15:35:13 <noonedeadpunk> it looks reasonable enough
15:38:37 <jrosser> for the u-c errors it is clear that the code takes the code path for an https:// URL rather than a file:// one
15:39:19 <jrosser> but why it does that is not obvious yet - it could be that we have changed the way that the redirection of the URLs to files works between releases
15:39:54 <jrosser> so what is set up for the initial upgrade branch does not do the right thing for the target branch
15:40:48 <jrosser> i think this is the most likely explanation for those kinds of errors
15:47:35 <noonedeadpunk> so if that's for upgrade jobs only - that might be the case
15:47:46 <noonedeadpunk> as there we kind of ignore the zuul-provided repos
15:48:03 <noonedeadpunk> just to leave them in "original" state to preserve depends-on
15:48:30 <noonedeadpunk> which could explain why the upgrade on N-1 might try to do a web fetch of u-c
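To illustrate the file:// vs https:// decision being discussed: a minimal sketch that prefers an on-disk copy of upper-constraints and only falls back to a web fetch when it is missing. The local path and remote URL are hypothetical placeholders, not the actual OSA/Zuul redirection logic.

    #!/usr/bin/env python3
    """Prefer a local upper-constraints file over fetching it from the web."""
    from pathlib import Path

    # Hypothetical location where CI might stage the requirements repo on disk.
    LOCAL_UC = Path("/opt/openstack/requirements/upper-constraints.txt")
    REMOTE_UC = "https://releases.openstack.org/constraints/upper/2024.1"

    def resolve_constraints_url(local_copy: Path = LOCAL_UC, remote_url: str = REMOTE_UC) -> str:
        """Return a file:// URL if the local copy exists, else the https:// URL."""
        if local_copy.is_file():
            return local_copy.as_uri()
        return remote_url

    if __name__ == "__main__":
        print(resolve_constraints_url())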
15:53:44 <jrosser> how do i discover where the opensearch log collection service is?
15:53:51 <jrosser> ^ for CI jobs
15:56:51 <jrosser> ML says https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global
16:06:38 <noonedeadpunk> #endmeeting