14:01:10 <ykarel> #startmeeting neutron_ci
14:01:10 <opendevmeet> Meeting started Mon Sep 22 14:01:10 2025 UTC and is due to finish in 60 minutes.  The chair is ykarel. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:10 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:10 <opendevmeet> The meeting name has been set to 'neutron_ci'
14:01:15 <ykarel> Ping list: bcafarel, lajoskatona, slawek, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
14:01:20 <ralonsoh> hello
14:01:20 <ykarel> it's IRC today \o/
14:01:41 <bcafarel> good, as there is the OVN community meeting in parallel :)
14:04:41 <slaweq> o/
14:05:16 <lajoskatona> o/
14:05:30 <ykarel> Hello everyone, let's start with the topics
14:05:43 <ykarel> We didn't have an action item last week, so moving on
14:05:48 <ykarel> #topic Stable branches
14:05:54 <ykarel> Not much activity and all good except 2024.1 rally, which we will discuss later
14:06:03 <ykarel> bcafarel, anything to add ^?
14:06:30 <bcafarel> +1 and also just the usual heads-up that we now have the 2025.2 branch (in the backport chain)
14:06:59 <bcafarel> rest is overall good indeed
14:07:05 <ykarel> thx bcafarel for the updates
14:07:07 <ykarel> #topic Stadium projects
14:07:17 <lajoskatona> All green this week :-)
14:07:17 <ykarel> all green in weekly periodics
14:07:31 <ykarel> anything else to add lajoskatona ?
14:07:57 <lajoskatona> not much, there are some open patches in those repos, if you have time please check them :-)
14:09:06 <ykarel> ok thx
14:09:41 <ykarel> #topic Rechecks
14:10:03 <ykarel> we have some rechecks due to known intermittent issues, those will be discussed in the failures section
14:10:16 <ykarel> as for bare rechecks, we had 3/13 failures
14:10:35 <ykarel> notified the contributor about this to avoid bare rechecks
14:11:03 <ykarel> Now let's talk about the failures
14:11:05 <ykarel> #topic fullstack/functional
14:11:14 <ykarel> random functional job timeouts
14:11:22 <ykarel> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_a02/openstack/a0219a0f15bd4213ad80c2e2afb62f5a/job-output.txt
14:11:22 <ykarel> https://4012fe1b2f450bdd20fd-e30f275ad83ed88289c7399adb6c5ee6.ssl.cf2.rackcdn.com/openstack/770b8af756564f4491a4deebbc3e3864/job-output.txt
14:11:22 <ykarel> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9ee/openstack/9ee33d8cfca64cbc98bb081bff101afb/job-output.txt
14:11:22 <ykarel> Noticed at least that the db migrations are taking a long time in these
14:11:48 <ykarel> Is this something you have observed, or is there any recent change you can relate this to?
14:12:11 <ralonsoh> I saw this with the eventlet-removal changes
14:12:17 <ralonsoh> but nothing yet merged
14:12:54 <ykarel> yes, seen those in periodic and gate jobs, so not related to unmerged ones at least
14:13:00 <ykarel> ok, will report an issue to track this
14:13:10 <ykarel> #action ykarel to report issue with functional job timeouts
14:13:20 <ykarel> #topic Tempest/Scenario
14:13:37 <ykarel> - multinode failures: keystone timed out or "No route to host"
14:13:43 <ykarel> - https://553968d56d5c3ae1c42d-c01956f0d2337f9ef2061ae614307798.ssl.cf5.rackcdn.com/openstack/47d38463db2648138a4b6280d4b10155/job-output.txt
14:13:43 <ykarel> - https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9e2/openstack/9e2896f8e7734a25a439c8d204bccb3b/job-output.txt
14:13:43 <ykarel> - https://39d563aa6881906ba38a-c46c6711d422731404e9aacada362597.ssl.cf2.rackcdn.com/openstack/36407411f7b143f0bb60eb0011a5d886/job-output.txt
14:13:43 <ykarel> - https://c615f6cb2e225c62edd6-7e310815a60a2b301b989273943a8e9b.ssl.cf5.rackcdn.com/openstack/c1b86a008f2c4038906a6d2a10b829ce/job-output.txt
14:14:01 <ykarel> Asked in #openstack-infra, there are some recent changes on the infra side for this, so I will report a bug to track it
14:14:22 <ykarel> #action ykarel to report bug to track fixes of multinode job issues
14:14:29 <ykarel> #topic Rally jobs
14:14:37 <ykarel> 2024.1 broken https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-rally-task&project=openstack%2Fneutron&branch=stable%2F2024.1&skip=0
14:14:45 <ykarel> context in https://review.opendev.org/c/openstack/neutron/+/961490/1#message-1636456da98534b3fa11f2429fa8c2ce38781693
14:15:03 <ykarel> tobiko is branchless, and recent patches there broke this job in stable/2024.1
14:16:01 <ralonsoh> only for this stable branch?
14:16:15 <ykarel> considering 2024.1 is about to go unmaintained, we can remove these jobs or fix them for the time being by pinning these jobs to a working tobiko tag
14:16:24 <ykarel> yes only 2024.1
14:17:00 <ralonsoh> we can do the second, pinning tobiko
14:17:10 <ykarel> sorry not tobiko, but rally :)
14:17:17 <ralonsoh> yeah rally heheheh
14:17:57 <ykarel> ok
14:18:16 <ykarel> then we can push a change to pin it for now
14:18:52 <ralonsoh> I'll do it
14:18:55 <ykarel> to https://opendev.org/openstack/rally/src/tag/4.1.0/
14:18:56 <ykarel> ok thx
14:19:11 <ykarel> #action ralonsoh to pin rally in 2024.1 jobs
14:19:54 <ykarel> that's it for failures
14:19:55 <ykarel> #topic Grafana
14:20:07 <ykarel> let's have a quick look at the dashboard too https://grafana.opendev.org/d/f913631585/neutron-failure-rate
14:21:05 <ykarel> ok, we can see some failures in the multinode ones, as already discussed
14:21:29 <ykarel> the tempest-integrated gate one had failed with a kernel panic in the guest VM
14:21:46 <ykarel> apart from these, things look normal
14:21:48 <ykarel> anything to add?
14:22:00 <ralonsoh> not really
14:22:26 <lajoskatona> nothing from me
14:22:48 <ykarel> ok let's move on
14:22:49 <ykarel> #topic On Demand
14:22:55 <ykarel> anything else you would like to raise?
14:23:10 <ralonsoh> no
14:25:26 <ykarel> ok, in that case let's close early and give everyone almost 35 minutes back
14:25:31 <ykarel> thx all for joining
14:25:34 <ykarel> #endmeeting