14:02:24 <ykarel> #startmeeting neutron_ci
14:02:24 <opendevmeet> Meeting started Mon Sep  9 14:02:24 2024 UTC and is due to finish in 60 minutes.  The chair is ykarel. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:24 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:24 <opendevmeet> The meeting name has been set to 'neutron_ci'
14:02:32 <ykarel> Ping list: bcafarel, lajoskatona, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
14:02:46 <lajoskatona> o/
14:02:47 <ralonsoh> hello
14:03:01 <slaweq> hi
14:03:02 <lajoskatona> ykarel: IRC this time, am I right?
14:03:08 <ykarel> yes right
14:05:45 <ykarel> let's start with the action items
14:05:46 <ykarel> #topic Actions from previous meetings
14:05:54 <ykarel> slaweq to check test_metadata_proxy_rate_limiting_ipv6 ft failure
14:06:18 <slaweq> I looked into it but didn't see anything wrong there
14:06:34 <slaweq> so I proposed https://review.opendev.org/c/openstack/neutron/+/928136 to enable iptables debug in those L3 agent tests
14:06:47 <slaweq> but it is failing on various tests which I need to investigate
14:07:25 <ykarel> k thx slaweq for checking it, new logs should help to investigate it further
14:07:32 <slaweq> but it seems to me that the config overrides done in neutron/tests/functional/agent/l3/framework.py are not really visible in the L3 agent created during the tests
14:08:01 <slaweq> so I will try to fix this in my patch and hopefully with the new info we will know more about why this issue happens
14:08:09 <ykarel> ++
14:08:16 <ykarel> next one
14:08:18 <ykarel> ralonsoh to check test_fip_connection_for_address_scope ft failure
14:08:20 <slaweq> in the meantime we can mark this test as unstable if it keeps happening more often
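(Editor's note: a minimal, hypothetical illustration of the override-visibility pitfall slaweq describes above: an override applied to one oslo.config ConfigOpts instance is not seen by an agent built around a different instance. The option and group names below are placeholders, not the actual neutron options or test code.)

    # Hypothetical sketch (not neutron test code): an override set on one
    # ConfigOpts object is invisible to an agent created with another one.
    from oslo_config import cfg

    opts = [cfg.IntOpt('rate_limit', default=0)]  # placeholder option

    test_conf = cfg.ConfigOpts()                  # object the test fixture overrides
    test_conf.register_opts(opts, group='metadata_rate_limiting')
    test_conf.set_override('rate_limit', 10, group='metadata_rate_limiting')

    agent_conf = cfg.ConfigOpts()                 # separate object handed to the agent
    agent_conf.register_opts(opts, group='metadata_rate_limiting')

    print(test_conf.metadata_rate_limiting.rate_limit)   # 10 - what the test expects
    print(agent_conf.metadata_rate_limiting.rate_limit)  # 0  - what the agent actually sees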
14:08:33 <ralonsoh> Only saw 4 occurrences of this error (in the last 30 days):
14:09:08 <ralonsoh> https://opensearch.logs.openstack.org/_dashboards/app/data-explorer/discover#?_a=(discover:(columns:!(_source),isDirty:!f,sort:!()),metadata:(indexPattern:'94869730-aea8-11ec-9e6a-83741af3fdcd',view:discover))&_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_q=(filters:!(),query:(language:kuery,query:'message:%22test_fip_connection_for_address_scope%22%20and%20build_status:FAILURE'))
14:09:16 <ykarel> i think we have those logs only for 2 weeks
14:09:33 <ralonsoh> in any case, I think this is just a random error. There is no need to open a LP bug or investigate it (if there are no more occurrences)
14:09:48 <lajoskatona> yes, not possible to fetch older results from opensearch
14:09:55 <ralonsoh> I checked that last week and we had the same result
14:10:10 <ralonsoh> only 4 occurrences
14:10:22 <ralonsoh> and I can't reproduce it locally
14:11:03 <ykarel> k thx ralonsoh, we can watch for it if we see it again
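(Editor's note: the occurrence counts above come from the OpenSearch dashboard linked earlier; the sketch below shows roughly how such a count could be obtained programmatically. Host, index pattern and credentials are placeholders, not a documented API of the OpenStack log server.)

    # Hypothetical sketch: count FAILURE builds mentioning a test name in
    # the last 30 days via the standard OpenSearch _search API.
    import requests

    query = {
        "size": 0,
        "track_total_hits": True,
        "query": {"bool": {"filter": [
            {"match_phrase": {"message": "test_fip_connection_for_address_scope"}},
            {"match_phrase": {"build_status": "FAILURE"}},
            {"range": {"@timestamp": {"gte": "now-30d"}}},
        ]}},
    }

    resp = requests.post(
        "https://opensearch.example.org/logstash-*/_search",  # placeholder host and index
        json=query,
        auth=("readonly", "readonly"),                        # placeholder credentials
        timeout=30,
    )
    print(resp.json()["hits"]["total"]["value"])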
14:11:23 <ykarel> ralonsoh to check failures in postgres job
14:11:33 <ralonsoh> done: https://review.opendev.org/q/I55142ce21cec8bd8e2d6b7b8b20c0147873699da
14:11:57 <ralonsoh> I saw this error in a local environment
14:12:03 <ykarel> thx, all backports merged, stable periodics were also green with it
14:12:17 <ralonsoh> something that shouldn't happen because the port listing is protected by a reader context
14:12:29 <ralonsoh> but it is happening with postgresql
14:13:38 <ykarel> okk
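(Editor's note: a simplified sketch of the "reader context" pattern ralonsoh refers to: neutron wraps DB reads in neutron-lib's enginefacade reader transaction, so a port listing should see one consistent snapshot. The query below is illustrative, not the actual failing code path.)

    # Simplified sketch of a port listing guarded by a reader transaction.
    from neutron.db.models_v2 import Port
    from neutron_lib.db import api as db_api

    def list_ports(context, network_id):
        # CONTEXT_READER.using() opens a read-only transaction bound to the
        # request context; every query inside shares the same DB snapshot.
        with db_api.CONTEXT_READER.using(context):
            return context.session.query(Port).filter_by(
                network_id=network_id).all()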
14:13:47 <ykarel> #topic Stable branches
14:14:00 <ykarel> Bernard not around today
14:14:24 <ykarel> haven't seen any issues with the stable branch jobs
14:14:32 <ykarel> a couple of patches merged in stable last week
14:14:45 <ykarel> #topic Stadium projects
14:14:50 <ykarel> all green here too
14:14:55 <ykarel> lajoskatona anything to add?
14:14:57 <lajoskatona> yes, all green
14:14:59 <lajoskatona> :-)
14:15:25 <lajoskatona> I don't know about anything special for stadiums, the usual:
14:15:42 <lajoskatona> please also check the reviews on the stadium projects sometimes :-)
14:16:10 <lajoskatona> Due to the low activity, 30 mins per week can work I believe :-)
14:16:36 <lajoskatona> of course there are difficult things to reproduce, but that is another topic...
14:16:44 <lajoskatona> that's it from me for stadiums
14:16:47 <ykarel> ++ thx lajoskatona
14:17:25 <ykarel> #topic Rechecks
14:18:00 <ykarel> a couple of patches merged last week, some of those required a couple of rechecks
14:18:35 <ykarel> bare recheck wise it was good, 2/20 rechecks
14:19:16 <ykarel> 2 new gate failure bugs were reported
14:19:17 <ykarel> https://bugs.launchpad.net/neutron/+bug/2079048
14:19:22 <ykarel> https://bugs.launchpad.net/neutron/+bug/2078846
14:19:35 <ykarel> #topic fullstack/functional
14:19:41 <ykarel> test_arp_spoof_doesnt_block_ipv6
14:19:46 <ykarel> https://3e47488f5ba579f9e43a-ef216d720cde863031e2791348aab89c.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-oslo-master/d0183c5/testr_results.html
14:20:21 <ykarel> found an old issue https://bugs.launchpad.net/neutron/+bug/2003196
14:20:32 <ykarel> where some work was done by ralonsoh for this
14:20:39 <ralonsoh> only adding logs
14:21:14 <ralonsoh> I can check again the new errors (with the new logs)
14:21:23 <ykarel> yes please
14:22:38 <ykarel> #action ralonsoh to check logs for https://bugs.launchpad.net/neutron/+bug/2003196
14:22:47 <ykarel> OSError: [Errno 24] Too many open files: '/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/etc/neutron.conf'
14:22:56 <ykarel> seen this twice in functional jobs
14:23:04 <ykarel> https://14d65eceddbce78ddf51-8bfb5d70b83a273fa97d15d51d14f1ae.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-pyroute2-master/82341c6/testr_results.html
14:23:10 <ykarel> https://8d0785ad59ff61c89613-2093f8081c337530c67183f7eaa6e193.ssl.cf1.rackcdn.com/916387/3/check/neutron-functional-with-uwsgi/3dc046c/testr_results.html
14:24:17 <ykarel> first occurrence was 2 days ago, so we will likely see more occurrences
14:24:46 <ykarel> any ideas on this? or would anyone like to check it further?
14:24:55 <lajoskatona> I can check it
14:25:03 <ykarel> thx lajoskatona
14:25:24 <ykarel> #action lajoskatona to check ft failures for too many open files
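(Editor's note: OSError 24 (EMFILE) means the process hit its open file descriptor limit, which usually points at a descriptor leak in the tests rather than a too-low limit. Below is a generic diagnostic sketch, not part of the neutron test suite.)

    # Generic diagnostic: compare the number of open descriptors against the
    # per-process limit (Linux-specific /proc usage).
    import os
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    open_fds = len(os.listdir('/proc/self/fd'))
    print(f"open fds: {open_fds}, soft limit: {soft}, hard limit: {hard}")

    # While hunting the leak, the soft limit can be raised to the hard limit
    # so the run survives long enough to collect logs.
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))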
14:25:31 <ykarel> #topic Periodic
14:25:40 <ykarel> stable 2023.2/2024.1 periodics are running the wrong version of the linuxbridge job
14:25:50 <ykarel> proposed https://review.opendev.org/q/I57e43a828462d72859d6830f26007849874a58d7
14:25:58 <ykarel> please check when you get a chance
14:26:04 <ykarel> #topic Grafana
14:26:13 <ykarel> https://grafana.opendev.org/d/f913631585/neutron-failure-rate
14:30:00 <ykarel> gate queue had some failures, I checked those and none of them was a new issue
14:31:24 <ykarel> check queue has a couple of spikes, /me hasn't checked this week's check queue failures yet but they are likely patch specific, will keep an eye on it
14:31:32 <ykarel> anything to add?
14:31:49 <ralonsoh> not from me
14:31:58 <lajoskatona> nothing from me
14:32:44 <ykarel> #topic On Demand
14:32:55 <ykarel> anything you would like to raise here?
14:33:10 <lajoskatona> nope
14:34:04 <slaweq> nope
14:35:32 <ykarel> k thx, in that case let's close early and give everyone 25 minutes back
14:35:35 <ykarel> #endmeeting