15:00:10 <slaweq> #startmeeting neutron_ci
15:00:10 <opendevmeet> Meeting started Tue Oct  4 15:00:10 2022 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:10 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:10 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:00:12 <mlavalle> o/
15:00:28 <lajoskatona> o/
15:00:34 <ralonsoh> hi
15:00:58 <slaweq> hi
15:01:01 <slaweq> Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:01:10 <slaweq> please open it and we can start
15:01:59 <slaweq> there were no action items from last week
15:02:07 <slaweq> so let's go directly to the next topic
15:02:08 <slaweq> which is
15:02:12 <slaweq> #topic Stable branches
15:02:20 <slaweq> bcafarel any updates?
15:02:39 <bcafarel> o/ wow already my turn? :)
15:02:48 <slaweq> yeah :)
15:02:55 <lajoskatona> :D
15:03:00 <bcafarel> overall good health in stable branches (most of the recent backports went in directly)
15:03:22 <bcafarel> so continuing the "this is a good week" trend!
15:04:44 <slaweq> that's good news
15:04:45 <slaweq> thx
15:04:51 <slaweq> so next one
15:04:53 <slaweq> #topic Stadium projects
15:05:04 <slaweq> I saw that all periodic jobs are green for the last 2 weeks at least
15:05:09 <lajoskatona> all green, good omen for the next release
15:05:15 <lajoskatona> but I have a mixed topic
15:05:23 <slaweq> sure
15:05:25 <lajoskatona> the stable branches of some stadiums seem to be failing
15:05:38 <lajoskatona> I will check it later this week I hope
15:06:24 <slaweq> ok, please let us know if You need any help with it
15:07:00 <lajoskatona> slaweq: thanks
15:07:45 <slaweq> anything else regarding stadium CI?
15:07:50 <slaweq> or can we move on?
15:08:18 <lajoskatona> we can move on
15:08:22 <slaweq> ok
15:08:25 <slaweq> so next topic
15:08:28 <slaweq> #topic Grafana
15:08:34 <slaweq> #link https://grafana.opendev.org/d/f913631585/neutron-failure-rate
15:08:39 <slaweq> IMO all looks good there
15:09:00 <slaweq> we don't really have many patches in the gate/check queues recently so there isn't a lot of data there
15:09:12 <slaweq> but it looks good in general I think
15:09:21 <mlavalle> yeap
15:10:31 <slaweq> I think we can move on
15:10:47 <slaweq> #topic Rechecks
15:10:48 <slaweq> recheck numbers look good
15:11:19 <slaweq> this week we had 3 rechecks on average to get a patch merged
15:11:27 <slaweq> but it was only one patch, which was rechecked 3 times:
15:11:33 <slaweq> https://review.opendev.org/c/openstack/neutron/+/857490
15:12:07 <slaweq> and IIRC one of those failures was due to an ovn issue (ralonsoh fixed it last week already), one was related to live-migration (not neutron related)
15:12:27 <ralonsoh> yeah, right
15:12:29 <slaweq> and last one was our "old friend": https://79e217fdf27e44ce3f2d-e67def1c0f240be320274a3285fccc21.ssl.cf2.rackcdn.com/857490/3/check/neutron-tempest-plugin-openvswitch-iptables_hybrid/8a775c7/testr_results.html
15:12:41 <slaweq> and the failing test was neutron_tempest_plugin.scenario.test_floatingip.FloatingIPPortDetailsTest
15:12:51 <slaweq> in the past it was mostly failing in the linuxbridge job
15:13:00 <slaweq> but this time it failed in the openvswitch job too
15:13:15 <ralonsoh> ^^ https://bugs.launchpad.net/neutron/+bug/1991501
15:13:17 <lajoskatona> that is not a good sign
15:13:32 <ralonsoh> maybe related, I manually tested executing the pyroute2 code to delete the FIP
15:13:36 <ralonsoh> and I couldn't
15:13:46 <ralonsoh> check c#1
15:13:59 <opendevreview> Arnaud Morin proposed openstack/neutron master: Allow restoration of tun_ofports on agent restart  https://review.opendev.org/c/openstack/neutron/+/860270
15:14:08 <ralonsoh> (and not only with pyroute2 0.6.6, I tested with 0.7.2)
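[editor's note: for readers without the bug context, here is a minimal pyroute2 sketch of the kind of manual FIP deletion ralonsoh describes testing above — an illustration only, not Neutron's actual code; the namespace name, device name, and IP address are hypothetical]

    # Roughly what deleting a floating IP address via pyroute2 looks like;
    # all names below are made up for illustration.
    from pyroute2 import NetNS

    ns = NetNS('qrouter-0000')                  # hypothetical router namespace
    idx = ns.link_lookup(ifname='qg-aaaa')[0]   # hypothetical gateway device
    # FIPs are typically configured as /32 addresses on the gateway device
    ns.addr('del', index=idx, address='203.0.113.5', prefixlen=32)
    ns.close()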
15:14:43 <lajoskatona> the original bug was perhaps this one: https://bugs.launchpad.net/neutron/+bug/1799790
15:15:31 <slaweq> lajoskatona yes, it seems like that one again
15:15:53 <slaweq> ralonsoh I don't think it's the same pyroute2 issue, as there is no such error in the l3 agent logs there
15:16:03 <ralonsoh> right
15:16:44 <lajoskatona> ok so the 2 are not duplicates
15:16:47 <slaweq> anyway, for now it was just one hit, but maybe someone will have cycles to investigate it again
15:17:15 <lajoskatona> I had a patch in the past: https://review.opendev.org/c/openstack/neutron/+/827728
15:17:26 <lajoskatona> but I've long forgotten what I wanted to do with it :-)
15:18:05 <slaweq> lajoskatona maybe You can revisit it now :)
15:18:24 <lajoskatona> my bad, I should have closed my mouth :-)
15:18:29 <slaweq> haha
15:18:35 <lajoskatona> but I can of course
15:18:43 <slaweq> You made my day with this comment :)
15:20:11 <slaweq> ok, I think we can move on
15:20:52 <slaweq> #topic fullstack/functional
15:21:00 <slaweq> I found 2 failures in the functional jobs this week
15:21:12 <slaweq> first is (again) Interface not found in namespace
15:21:16 <slaweq> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_bd0/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-uwsgi-fips/bd0538a/testr_results.html
15:21:24 <slaweq> I will try to investigate that one again
15:21:45 <slaweq> #action slaweq to check recent occurrence of Interface not found in namespace failure
15:22:06 <slaweq> and the second one is the failed test neutron.tests.functional.agent.linux.test_ip_lib.IpMonitorTestCase.test_interface_added_after_initialization(no_namespace)
15:22:07 <slaweq> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d78/periodic/opendev.org/openstack/neutron/master/neutron-functional/d78cba5/testr_results.html
15:22:15 <slaweq> did You see something like that already?
15:23:21 <ralonsoh> let me check that
15:23:34 <ralonsoh> could be related to the fact that the interface is not created in a namespace
15:23:39 <ralonsoh> another test is deleting it
15:23:49 <ralonsoh> (we can have problems with this kind of testing)
15:24:11 <slaweq> true
15:24:40 <ralonsoh> to be honest, I think we can remove the no_namespace case
15:24:59 <ralonsoh> that introduces red herring errors only
15:25:02 <slaweq> ++
15:25:03 <ralonsoh> I'll update it
15:25:06 <slaweq> thx
15:25:19 <slaweq> #action ralonsoh to remove "no namespace" test cases from functional tests
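[editor's note: the "(no_namespace)" suffix in the failing test's name is produced by testscenarios-style parametrization; below is a minimal hedged sketch, not Neutron's actual test code, showing why a scenario running in the host's root namespace can race with other tests]

    # Illustrative only: each scenario runs as a separate test, with the
    # dict entries applied as instance attributes.
    import testscenarios
    import testtools

    class IpMonitorScenarioExample(testscenarios.WithScenarios,
                                   testtools.TestCase):
        scenarios = [
            # Root namespace: device names are global shared state, so a
            # parallel test deleting interfaces can race with this one.
            ('no_namespace', {'namespace': None}),
            # Private namespace: isolated from every other test.
            ('namespace', {'namespace': 'test-ns'}),
        ]

        def test_scenario_attribute_is_set(self):
            # The real test watches for an interface to appear; here we
            # only show that the scenario attribute is applied per run.
            self.assertTrue(hasattr(self, 'namespace'))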
15:25:48 <slaweq> with that I think we can move on to the next topic
15:25:58 <slaweq> for scenario jobs, we already discussed the failure which I had
15:26:03 <slaweq> so next topic is
15:26:08 <slaweq> #topic grenade
15:26:15 <slaweq> here I have one thing to discuss
15:26:20 <slaweq> According to the email https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030654.html we should probably add our grenade-multinode-tick-tick job to the check queue now
15:26:31 <slaweq> what do You think about it?
15:26:31 <lajoskatona> https://review.opendev.org/c/openstack/neutron/+/859991
15:26:51 <lajoskatona> I pushed a patch for it, please check; I am not sure if it is what we need
15:26:52 <slaweq> thx lajoskatona, You were faster than me :)
15:26:53 <ralonsoh> is it stable enough?
15:27:00 <slaweq> I think it is
15:27:04 <lajoskatona> the grenade jobs were not executed
15:27:07 <slaweq> it's in periodic queue for now
15:27:11 <slaweq> let me check it
15:27:11 <ralonsoh> ok then
15:27:30 <lajoskatona> I pushed a 2nd PS for it to remove the grenade zuul yaml from the irrelevant-files list
15:27:48 <lajoskatona> but only the normal grenade was executed, not the SLURP one
15:27:54 <bcafarel> are the periodic runs good enough, or should we run a few rechecks to check stability?
15:29:56 <slaweq> https://zuul.openstack.org/builds?job_name=neutron-ovs-grenade-multinode-tick-tick&project=openstack%2Fneutron&branch=master&skip=0
15:29:56 <slaweq> btw. we can also remove this job from the stable/zed branch
15:29:56 <slaweq> it's not needed there at all
15:29:56 <ralonsoh> right
15:29:56 <lajoskatona> +1
15:29:56 <ralonsoh> ^^ this job looks very stable
15:30:16 <slaweq> so that's all from me for today
15:30:19 <bcafarel> hmm that reminds me I need to verify our checklist on stable releases (like removing *-master jobs from stable/zed)
15:30:21 <slaweq> periodic jobs are all good
15:30:30 <slaweq> bcafarel++
15:30:37 <slaweq> anything else You want to discuss today?
15:30:47 <slaweq> or if not, we can finish 30 minutes earlier
15:30:52 <ralonsoh> fine for me
15:31:06 <bcafarel> let's keep the "short meetings" trend :)
15:31:07 <mlavalle> let's do it
15:31:11 <slaweq> ++
15:31:17 <slaweq> ok, thx for attending the meeting
15:31:19 <lajoskatona> \o/
15:31:20 <slaweq> and see You online
15:31:22 <slaweq> o/
15:31:22 <ralonsoh> bye
15:31:23 <slaweq> #endmeeting