15:00:20 #startmeeting openstack_ansible_meeting
15:00:20 Meeting started Tue Sep 10 15:00:20 2024 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:20 The meeting name has been set to 'openstack_ansible_meeting'
15:00:24 #topic rollcall
15:00:26 o/
15:00:38 o/
15:01:39 o/
15:01:50 o/ hello
15:03:18 #topic office hours
15:03:36 so, noble test jobs finally merged
15:04:28 though we've missed moving noble with playbooks
15:05:12 and the fix failed in the gate intermittently and is currently in recheck
15:05:20 #link https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/928592/3
15:05:54 There is also a current issue with apache on metal
15:06:15 as we're using different MPMs across roles, which causes upgrade job failures
15:06:40 (once upgrade jobs track the correct branch)
15:07:10 so whatever fix is needed should be backported to 2024.1
15:07:24 i found that by trying to understand the job failures in more depth
15:07:43 and i guess this should be kinda the last thing to backport before doing the first minor release
15:07:54 Ah, except the octavia thing that I realized just today
15:08:07 #link https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/928815
15:08:10 do we have broken apache/metal on 2024.1?
15:08:17 yeah
15:08:22 oh dear, ok
15:08:35 I think that a second run of the playbooks will break it
15:09:15 fixing the upgrade job branch could bring more CI trouble, just a release earlier
15:11:12 yeah, true
15:14:22 so there are quite a few things to work on, but not sure what needs deeper discussion
15:16:21 i found the horizon compress failure is not specifically an OSA issue
15:16:44 oh
15:17:02 it apparently occurs when installing UCA packages, as part of building debian packages, and also in devstack
15:17:51 there is a bug which is now correctly assigned to the horizon project https://bugs.launchpad.net/horizon/+bug/2045394
15:18:53 i also spent some time looking at why jobs fail to get u-c when that should come from disk
15:19:14 and unfortunately that happens a lot in upgrade jobs and there are insufficient logs collected
15:20:48 this (+ a backport) should address the log collection https://review.opendev.org/c/openstack/openstack-ansible/+/928790
15:20:59 but that is kind of hard to test
15:35:13 it looks reasonable enough
15:38:37 for the u-c errors it is clear that the code takes the path for the URL being https:// rather than file://
15:39:19 but why it does that is not obvious yet - it could be that we have changed the way that the redirection of the URLs to files works between releases
15:39:54 so what is set up for the initial upgrade branch does not do the right thing for the target branch
15:40:48 i think this is the most likely explanation for those kinds of errors
15:47:35 so if that is for upgrade jobs only - that might be the case
15:47:46 as there we kind of ignore zuul-provided repos
15:48:03 just to leave them in their "original" state to preserve depends-on
15:48:30 which could explain why an upgrade on N-1 might try to do a web fetch of u-c
15:53:44 how do i discover where the opensearch log collection service is?
15:53:51 ^ for CI jobs
15:56:51 ML says https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global
16:06:38 #endmeeting
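
The apache-on-metal problem discussed around 15:06 comes from different roles enabling different Apache MPMs on the same host: on Debian/Ubuntu only one MPM module may be enabled at a time, so a second playbook run that picks a different MPM breaks the config check. A minimal sketch of converging on one MPM is below; the module calls are real Ansible modules, but the play, group name, and choice of event MPM are illustrative assumptions, not the actual OSA fix.

    - name: Converge on a single Apache MPM (illustrative sketch only)
      hosts: metal_hosts          # hypothetical group name
      become: true
      tasks:
        - name: Disable the MPMs we do not want
          community.general.apache2_module:
            name: "{{ item }}"
            state: absent
            ignore_configcheck: true   # apache2ctl fails while no MPM is enabled
          loop:
            - mpm_prefork
            - mpm_worker

        - name: Enable the event MPM everywhere
          community.general.apache2_module:
            name: mpm_event
            state: present
          notify: restart apache2

      handlers:
        - name: restart apache2
          ansible.builtin.service:
            name: apache2
            state: restarted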
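
The u-c discussion from 15:38 onwards is about which code path is taken for the constraints URL: file:// when the zuul-prepared requirements checkout is visible, https:// otherwise. A minimal sketch of that kind of branching follows; every variable and path name here is an assumption for illustration, not the actual OSA code.

    # Sketch of url-vs-file selection for upper-constraints.
    # upper_constraints_url and zuul_src_path are hypothetical names.
    - name: Prefer the on-disk requirements checkout prepared by zuul
      ansible.builtin.set_fact:
        upper_constraints_url: >-
          {{ 'file://' ~ zuul_src_path ~ '/opendev.org/openstack/requirements/upper-constraints.txt'
             if zuul_src_path is defined
             else 'https://releases.openstack.org/constraints/upper/master' }}

In an upgrade job the N-1 checkout is deliberately left ignorant of the zuul-prepared repos (they are kept in their "original" state to preserve depends-on), so a condition like the one above evaluates false and the job falls back to the https:// fetch, which matches the behaviour described at 15:48.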
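
The log-collection gap raised at 15:19 is the kind of thing a post-run task can close, so that failed u-c fetches leave evidence behind. The sketch below shows the general shape only; the log path and task names are hypothetical and are not the contents of review 928790.

    - name: Collect additional OSA logs (illustrative post-run play)
      hosts: all
      tasks:
        - name: Fetch utility logs into the job log directory
          ansible.builtin.fetch:
            src: /var/log/ansible-logging/utility.log   # hypothetical path
            dest: "{{ zuul.executor.log_root }}/"
            flat: true
          failed_when: false   # never fail the post phase on a missing log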