14:01:42 #startmeeting neutron_ci
14:01:42 Meeting started Mon Jun 30 14:01:42 2025 UTC and is due to finish in 60 minutes. The chair is ykarel. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:42 The meeting name has been set to 'neutron_ci'
14:01:47 Ping list: bcafarel, lajoskatona, slawek, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
14:01:51 hi
14:02:00 o/
14:02:13 \o
14:02:22 o/
14:02:37 o/
14:02:46 is this video or only IRC?
14:03:11 IRC
14:03:52 ack
14:04:20 ok, let's start with the topics
14:04:33 these are from two weeks ago, as we didn't meet last week
14:04:35 #topic Actions from previous meetings
14:04:41 ralonsoh to check issue with test_create_bridges
14:04:52 #link https://review.opendev.org/c/openstack/neutron/+/952828
14:05:26 thx ralonsoh
14:05:29 ralonsoh to check https://bugs.launchpad.net/neutron/+bug/2114732
14:05:55 #link https://review.opendev.org/c/openstack/devstack/+/953398
14:06:04 testing patch: https://review.opendev.org/c/openstack/nova/+/953402
14:06:08 thx ralonsoh
14:06:09 the job is passing
14:06:13 gmaan, can you check ^
14:06:50 ykarel to report a bug for the test_direct_route_for_address_scope failure
14:07:04 reported https://bugs.launchpad.net/neutron/+bug/2115026
14:07:29 also found test_fip_connection_for_address_scope impacted for similar reasons, so included it in the bug report as well
14:08:12 moved these tests to run with concurrency 1; that helped for test_direct_route_for_address_scope, but we're still seeing it for test_fip_connection_for_address_scope
14:08:30 we can discuss while we look into the failures
14:08:45 #topic Stable branches
14:09:13 I am still behind on my reviews, but from what I saw the stable branches overall looked in good shape
14:09:31 thx bcafarel for the update
14:09:32 thanks to lajos, rodolfo, brian, and you for also keeping the queue short there :)
14:09:44 from the periodics we have the tobiko job broken; we can check that during the failure discussion
14:09:50 #topic Stadium projects
14:09:54 all green in periodics
14:09:59 lajoskatona, anything to add?
14:10:22 nothing from my side, all green this week as you said, fingers crossed :-)
14:10:59 #topic Rechecks
14:11:23 we still have a couple of rechecks due to random known issues, mostly related to functional tests
14:11:37 bare-recheck-wise we have 3/25
14:11:43 2 of them were in the same patch
14:11:51 let's keep avoiding bare rechecks
14:11:59 #topic fullstack/functional
14:12:50 test_fip_connection_for_address_scope
14:12:55 https://bugs.launchpad.net/neutron/+bug/2115026
14:13:32 for this case we're seeing the behavior described in https://bugs.launchpad.net/neutron/+bug/2115026/comments/3
14:13:53 i.e. after adding the ARP entry manually or pinging the router gateway port, FIP pings start working in the local reproducer
14:14:16 ^ I have observed this for test_direct_route_for_address_scope
14:14:20 that shouldn't be the normal behaviour, right?
14:14:55 yes, this is observed only when the issue reproduces
14:15:49 do you see what can trigger this behavior?
14:15:59 I recall seeing something in the past but can't find a bug for that
14:16:29 I'll try to find it tomorrow morning
14:16:38 ok thx ralonsoh
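
A minimal sketch of the manual workaround described above, as one might run it in a local reproducer. The namespace, device, gateway IP, and gateway MAC below are illustrative placeholders, not values from the bug; only the floating IP (19.4.4.11) and the ping options come from the log. This is not the actual test code.

    # hedged local-reproduction sketch; all names below are placeholders
    import subprocess

    NS = "test-namespace"         # hypothetical test namespace (differs per run)
    GW_IP = "19.4.4.1"            # hypothetical router gateway IP
    GW_MAC = "fa:16:3e:00:00:01"  # hypothetical gateway port MAC
    DEV = "eth0"                  # hypothetical device inside the namespace
    FIP = "19.4.4.11"             # floating IP from the failure output

    def in_ns(*cmd):
        """Run a command inside the test network namespace."""
        return subprocess.run(("ip", "netns", "exec", NS) + cmd,
                              capture_output=True, text=True)

    # Workaround 1 from the log: seed the neighbour (ARP) entry manually.
    in_ns("ip", "neigh", "replace", GW_IP, "lladdr", GW_MAC, "dev", DEV)
    # Workaround 2 from the log: ping the router gateway port first.
    in_ns("ping", "-W", "1", "-c", "1", GW_IP)

    # After either step, the FIP ping reportedly starts succeeding.
    print(in_ns("ping", "-W", "1", "-c", "3", FIP).stdout)
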
14:16:55 and for test_fip_connection_for_address_scope we see different behavior
14:17:09 i.e. the test fails with ProcessExecutionError("Exit code: 1; Cmd: ['ip', 'netns', 'exec', 'test-b619fe36-b473-4ea7-9103-35d5754d27ae', 'ping', '-W', 1, '-c', 3, '19.4.4.11']; Stdin: ; Stdout: PING 19.4.4.11 (19.4.4.11) 56(84) bytes of data.\n\n--- 19.4.4.11 ping statistics ---\n3 packets transmitted, 0 received, 100% packet loss, time 2036ms\n\n; Stderr: ")
14:17:24 if I run that manually, I see
14:18:42 duplicate packets: https://paste.openstack.org/show/bFDgf9Q7FFI8aYnvYwSx/
14:19:19 and rerunning the same command works fine: https://paste.openstack.org/show/bDq8b7xK1c5Lj9IwIogD/
14:19:45 do you recall something like that?
14:19:55 no, sorry
14:20:38 if anyone recalls something, please update it on the bug
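
For anyone trying to triage this locally, a small hedged sketch that runs the exact ping from the error twice and reports whether only the first attempt fails, matching the behavior described above. The namespace name is copied from the quoted error and will differ on every test run; this is a reproduction aid, not part of the test suite.

    # hedged reproduction sketch for the first-attempt-only ping failure
    import subprocess

    NS = "test-b619fe36-b473-4ea7-9103-35d5754d27ae"  # differs per run
    CMD = ["ip", "netns", "exec", NS,
           "ping", "-W", "1", "-c", "3", "19.4.4.11"]

    first = subprocess.run(CMD, capture_output=True, text=True)
    second = subprocess.run(CMD, capture_output=True, text=True)
    # Reported behavior: the first run fails (100% loss, or DUP packets
    # when run by hand), while the immediate rerun succeeds.
    print("first attempt rc:", first.returncode)
    print("second attempt rc:", second.returncode)
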
14:21:08 test_floatingip_mac_bindings random failures
14:21:19 handled with a timeout increase: https://review.opendev.org/q/Ia153b48aaec72e6a073b313ef2aea8efac6dbbae
14:21:29 #topic Tempest/Scenario
14:21:38 random failure in image customization
14:21:44 https://369db73f1617af64c678-28a40d3de53ae200fe2797286e214883.ssl.cf5.rackcdn.com/openstack/0fe5092db63d45c2882910c3dd641526/job-output.txt
14:21:50 attempted in the past but needs to be reworked: https://launchpad.net/bugs/2110191
14:22:13 that used to work for focal but not for jammy, so it needs to be checked; I will check that
14:22:26 #action ykarel to check https://launchpad.net/bugs/2110191 for jammy
14:23:05 also, for a few days last week etcd.service failed because the control process exited with an error code, but I'm not seeing it now, so I haven't reported a bug for it
14:23:43 but found an attempt to bump etcd with https://review.opendev.org/c/openstack/devstack/+/952755 from tkajinam
14:23:58 maybe that's related
14:24:25 somewhere frickler mentioned that it was some infra issue in the background
14:25:05 ok, not seeing it since the 26th, so I concluded it must have been resolved by something :)
14:25:18 #topic Periodic
14:25:26 mariadb jobs (centos-9-stream/debian-bookworm) broken by https://review.opendev.org/c/openstack/neutron/+/952819
14:25:36 Bug https://bugs.launchpad.net/neutron/+bug/2115629
14:25:47 ralonsoh pushed a fix for that; I will abandon the revert patch
14:26:12 devstack tobiko job broken in stable/2024.1 and 2024.2 with https://review.opendev.org/c/x/devstack-plugin-tobiko/+/952886
14:26:35 need to check with eolivare_ and slaweq on this
14:26:48 #action ykarel to report a bug for the tobiko job failure in stable
14:27:07 looking
14:27:33 thx slaweq, it can be checked offline, moving ahead
14:27:34 #topic Grafana
14:27:41 https://grafana.opendev.org/d/f913631585/neutron-failure-rate
14:27:47 let's have a quick look here too
14:29:24 so we no longer have fullstack jobs in master, so they can be cleaned from the dashboard
14:29:34 ralonsoh, no plan to bring those jobs back, right?
14:29:35 I'll push the patches
14:29:39 no, for now
14:29:43 k thx
14:30:00 #action ralonsoh to clean up fullstack jobs from the grafana dashboard
14:30:21 apart from that it looks normal: known failures and patch-specific failures
14:30:24 anything to add?
14:30:52 for fullstack there's also an etherpad to track the migration: https://etherpad.opendev.org/p/migrate_Neutron_fullstack_to_Tempest
14:31:04 thx lajoskatona
14:31:20 so if you have some time, you can grab tests there and mark them
14:31:41 I also started some analysis, but haven't had time since last week to finish it
14:31:59 ok
14:32:07 #topic On Demand
14:32:12 anything else you would like to raise?
14:33:06 nothing from me
14:35:11 all good for me
14:35:29 assuming the same for others :)
14:35:40 ok, in that case let's close early and give everyone almost 25 minutes back
14:35:40 thx everyone for joining
14:35:40 #endmeeting