14:01:42 <ykarel> #startmeeting neutron_ci
14:01:42 <opendevmeet> Meeting started Mon Jun 30 14:01:42 2025 UTC and is due to finish in 60 minutes. The chair is ykarel. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:42 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:42 <opendevmeet> The meeting name has been set to 'neutron_ci'
14:01:47 <ykarel> Ping list: bcafarel, lajoskatona, slawek, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
14:01:51 <ralonsoh> hi
14:02:00 <slaweq> o/
14:02:13 <mlavalle> \o
14:02:22 <lajoskatona> o/
14:02:37 <bcafarel> o/
14:02:46 <mlavalle> is this video or only IRC?
14:03:11 <ykarel> IRC
14:03:52 <lajoskatona> ack
14:04:20 <ykarel> k, let's start with the topics
14:04:33 <ykarel> these are from two weeks ago, as we didn't meet last week
14:04:35 <ykarel> #topic Actions from previous meetings
14:04:41 <ykarel> ralonsoh to check issue with test_create_bridges
14:04:52 <ralonsoh> #link https://review.opendev.org/c/openstack/neutron/+/952828
14:05:26 <ykarel> thx ralonsoh
14:05:29 <ykarel> ralonsoh to check https://bugs.launchpad.net/neutron/+bug/2114732
14:05:55 <ralonsoh> #link https://review.opendev.org/c/openstack/devstack/+/953398
14:06:04 <ralonsoh> testing patch: https://review.opendev.org/c/openstack/nova/+/953402
14:06:08 <ykarel> thx ralonsoh
14:06:09 <ralonsoh> job is passing
14:06:13 <ykarel> gmaan, can you check ^
14:06:50 <ykarel> ykarel to report bug for test_direct_route_for_address_scope failure
14:07:04 <ykarel> reported https://bugs.launchpad.net/neutron/+bug/2115026
14:07:29 <ykarel> also found test_fip_connection_for_address_scope impacted for similar reasons, so included that as well in the bug report
14:08:12 <ykarel> moved these tests to run with concurrency 1; that helped for test_direct_route_for_address_scope, but we are still seeing it for test_fip_connection_for_address_scope
14:08:30 <ykarel> we can discuss it while we look into the failures
14:08:45 <ykarel> #topic Stable branches
14:09:13 <bcafarel> I am still behind on my reviews, but from what I saw overall the stable branches looked in good shape
14:09:31 <ykarel> thx bcafarel for the update
14:09:32 <bcafarel> thanks to lajos, rodolfo, brian, and you for also keeping the queue short there :)
14:09:44 <ykarel> from periodic we have the tobiko job broken, we can check that during the failure discussion
14:09:50 <ykarel> #topic Stadium projects
14:09:54 <ykarel> all green in periodics
14:09:59 <ykarel> lajoskatona, anything to add?
14:10:22 <lajoskatona> nothing from my side, all green this week as you said, fingers crossed :-)
14:10:59 <ykarel> #topic Rechecks
14:11:23 <ykarel> we still have a couple of rechecks due to random known issues, mostly related to functional tests
14:11:37 <ykarel> bare-recheck-wise we have 3/25
14:11:43 <ykarel> 2 of them were in the same patch
14:11:51 <ykarel> let's keep avoiding bare rechecks
14:11:59 <ykarel> #topic fullstack/functional
14:12:50 <ykarel> test_fip_connection_for_address_scope
14:12:55 <ykarel> https://bugs.launchpad.net/neutron/+bug/2115026
14:13:32 <ykarel> for this case we are seeing the behavior described in https://bugs.launchpad.net/neutron/+bug/2115026/comments/3
14:13:53 <ykarel> i.e. after adding an ARP entry manually or pinging the router gateway port, FIP pings start working in the local reproducer
14:14:16 <ykarel> ^ i have observed this for test_direct_route_for_address_scope
14:14:20 <ralonsoh> that shouldn't be the normal behaviour, right?
14:14:55 <ykarel> yes, this is observed only when the issue reproduces
14:15:49 <ykarel> do you see what can trigger this behavior?
14:15:59 <ykarel> i recall seeing something like this in the past but can't find a bug for it
14:16:29 <ralonsoh> I'll try to find it tomorrow morning
14:16:38 <ykarel> ok thx ralonsoh
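[editor's note: a minimal sketch of the manual workaround ykarel describes above, for anyone reproducing locally. The namespace name, gateway IP, MAC, and device name are placeholders, not values from bug 2115026; take the real ones from the failing test's namespaces. Only standard iproute2/ping commands are used:]

    # option 1: seed the neighbour cache by hand inside the test namespace
    # (placeholder namespace/IP/MAC/device, substitute values from the reproducer)
    sudo ip netns exec test-<ns-id> ip neigh replace <gateway-ip> lladdr <gateway-mac> dev <iface>

    # option 2: ping the router gateway port once, which also populates the cache
    sudo ip netns exec test-<ns-id> ping -c 1 <gateway-ip>

    # per the discussion above, the FIP ping then starts succeeding
    sudo ip netns exec test-<ns-id> ping -W 1 -c 3 <fip-address>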
14:16:55 <ykarel> and for test_fip_connection_for_address_scope we are seeing a different behavior
14:17:09 <ykarel> i.e. the test fails with ProcessExecutionError("Exit code: 1; Cmd: ['ip', 'netns', 'exec', 'test-b619fe36-b473-4ea7-9103-35d5754d27ae', 'ping', '-W', 1, '-c', 3, '19.4.4.11']; Stdin: ; Stdout: PING 19.4.4.11 (19.4.4.11) 56(84) bytes of data.\n\n--- 19.4.4.11 ping statistics ---\n3 packets transmitted, 0 received, 100% packet loss, time 2036ms\n\n; Stderr: ")
14:17:24 <ykarel> if i run that manually i see
14:18:42 <ykarel> duplicate (DUP!) packets: https://paste.openstack.org/show/bFDgf9Q7FFI8aYnvYwSx/
14:19:19 <ykarel> and rerunning the same command works fine: https://paste.openstack.org/show/bDq8b7xK1c5Lj9IwIogD/
14:19:45 <ykarel> do you recall something like that?
14:19:55 <ralonsoh> no, sorry
14:20:38 <ykarel> if anyone recalls something, please update it on the bug
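[editor's note: the first command below is the exact failing command from the ProcessExecutionError above, re-run by hand; the tcpdump line is an assumption about how one might chase the DUP! symptom, not something proposed in the meeting. Duplicate echo replies usually mean more than one interface/MAC answered the same request, which would fit the stale-ARP theory from the previous topic:]

    # re-run the failing check manually (namespace and IP from the error above)
    sudo ip netns exec test-b619fe36-b473-4ea7-9103-35d5754d27ae ping -W 1 -c 3 19.4.4.11

    # capture ARP and ICMP with link-level headers; replies arriving from two
    # different source MACs would point at a duplicate/flapping neighbour entry
    sudo ip netns exec test-b619fe36-b473-4ea7-9103-35d5754d27ae tcpdump -n -e -i any 'arp or icmp'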
14:21:08 <ykarel> test_floatingip_mac_bindings random failures
14:21:19 <ykarel> handled with a timeout increase: https://review.opendev.org/q/Ia153b48aaec72e6a073b313ef2aea8efac6dbbae
14:21:29 <ykarel> #topic Tempest/Scenario
14:21:38 <ykarel> random failure in image customization
14:21:44 <ykarel> https://369db73f1617af64c678-28a40d3de53ae200fe2797286e214883.ssl.cf5.rackcdn.com/openstack/0fe5092db63d45c2882910c3dd641526/job-output.txt
14:21:50 <ykarel> attempted in the past but needs to be reworked: https://launchpad.net/bugs/2110191
14:22:13 <ykarel> that used to work for focal but not for jammy, so it needs to be checked; i will check that
14:22:26 <ykarel> #action ykarel to check https://launchpad.net/bugs/2110191 for jammy
14:23:05 <ykarel> also for a few days last week etcd.service was failing because the control process exited with an error code, but we are not seeing it now, so i have not reported a bug for it
14:23:43 <ykarel> but found an attempt to bump etcd with https://review.opendev.org/c/openstack/devstack/+/952755 from tkajinam
14:23:58 <ykarel> maybe that's related
14:24:25 <lajoskatona> somewhere frickler mentioned that it was some infra issue in the background
14:25:05 <ykarel> ohkk, not seeing it since the 26th, so i concluded it must have been resolved by something :)
14:25:18 <ykarel> #topic Periodic
14:25:26 <ykarel> mariadb jobs (centos 9-stream/debian-bookworm) broken by https://review.opendev.org/c/openstack/neutron/+/952819
14:25:36 <ykarel> Bug https://bugs.launchpad.net/neutron/+bug/2115629
14:25:47 <ykarel> ralonsoh pushed a fix for that, i will abandon the revert patch
14:26:12 <ykarel> devstack tobiko job broken in stable/2024.1 and 2024.2 by https://review.opendev.org/c/x/devstack-plugin-tobiko/+/952886
14:26:35 <ykarel> need to check with eolivare_ and slaweq on this
14:26:48 <ykarel> #action ykarel to report bug for tobiko job failure in stable
14:27:07 <slaweq> looking
14:27:33 <ykarel> thx slaweq, it can be checked offline, moving ahead
14:27:34 <ykarel> #topic Grafana
14:27:41 <ykarel> https://grafana.opendev.org/d/f913631585/neutron-failure-rate
14:27:47 <ykarel> let's have a quick look here too
14:29:24 <ykarel> so we no longer have fullstack jobs in master, so they can be cleaned from the dashboard
14:29:34 <ykarel> ralonsoh, no plan to bring those jobs back, right?
14:29:35 <ralonsoh> I'll push the patches
14:29:39 <ralonsoh> no, for now
14:29:43 <ykarel> k thx
14:30:00 <ykarel> #action ralonsoh to clean up fullstack jobs from the grafana dashboard
14:30:21 <ykarel> apart from that it looks normal, just known failures and patch-specific failures
14:30:24 <ykarel> anything to add?
14:30:52 <lajoskatona> for fullstack there's also an etherpad to track it: https://etherpad.opendev.org/p/migrate_Neutron_fullstack_to_Tempest
14:31:04 <ykarel> thx lajoskatona
14:31:20 <lajoskatona> so if you have some time, you can grab tests there and mark them
14:31:41 <lajoskatona> I also started some analysis, but have had no time since last week to finish it
14:31:59 <ykarel> ok
14:32:07 <ykarel> #topic On Demand
14:32:12 <ykarel> anything else you would like to raise?
14:33:06 <lajoskatona> nothing from me
14:35:11 <bcafarel> all good for me
14:35:29 <ykarel> assuming the same for others :)
14:35:40 <ykarel> ok, in that case let's close early and give everyone almost 25 minutes back
14:35:40 <ykarel> thx everyone for joining
14:35:40 <ykarel> #endmeeting