15:00:10 #startmeeting neutron_ci
15:00:10 Meeting started Tue Oct 4 15:00:10 2022 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:10 The meeting name has been set to 'neutron_ci'
15:00:12 o/
15:00:28 o/
15:00:34 hi
15:00:58 hi
15:01:01 Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:01:10 please open it and we can start
15:01:59 there were no action items from last week
15:02:07 so let's go directly to the next topic
15:02:08 which is
15:02:12 #topic Stable branches
15:02:20 bcafarel any updates?
15:02:39 o/ wow already my turn? :)
15:02:48 yeah :)
15:02:55 :D
15:03:00 overall good health in stable branches (most of the recent backports went in directly)
15:03:22 so continuing the "this is a good week" trend!
15:04:44 that's good news
15:04:45 thx
15:04:51 so next one
15:04:53 #topic Stadium projects
15:05:04 I saw that all periodic jobs have been green for the last 2 weeks at least
15:05:09 all green, good omen for the next release
15:05:15 but I have a mixed topic
15:05:23 sure
15:05:25 the stable branches of some stadiums seem to be failing
15:05:38 I will check it later this week I hope
15:06:24 ok, please let us know if You will need any help with it
15:07:00 slaweq: thanks
15:07:45 anything else regarding stadium CI?
15:07:50 or can we move on?
15:08:18 we can move on
15:08:22 ok
15:08:25 so next topic
15:08:28 #topic Grafana
15:08:34 #link https://grafana.opendev.org/d/f913631585/neutron-failure-rate
15:08:39 IMO all looks good there
15:09:00 we don't really have many patches in gate/check queues recently so there is not a lot of data there
15:09:12 but it looks good in general I think
15:09:21 yeap
15:10:31 I think we can move on
15:10:47 #topic Rechecks
15:10:48 recheck numbers look good
15:11:19 this week we had 3 rechecks on average to get a patch merged
15:11:27 but it was only one patch which was rechecked 3 times:
15:11:33 https://review.opendev.org/c/openstack/neutron/+/857490
15:12:07 and IIRC one of those failures was due to an ovn issue (ralonsoh fixed it last week already), one was related to live-migration (not neutron related)
15:12:27 yeah, right
15:12:29 and the last one was our "old friend": https://79e217fdf27e44ce3f2d-e67def1c0f240be320274a3285fccc21.ssl.cf2.rackcdn.com/857490/3/check/neutron-tempest-plugin-openvswitch-iptables_hybrid/8a775c7/testr_results.html
15:12:41 with failed test neutron_tempest_plugin.scenario.test_floatingip.FloatingIPPortDetailsTest
15:12:51 in the past it was mostly failing in the linuxbridge job
15:13:00 but this time it failed in the openvswitch job too
15:13:15 ^^ https://bugs.launchpad.net/neutron/+bug/1991501
15:13:17 that is not a good sign
15:13:32 maybe related, I manually tested executing the pyroute2 code to delete the FIP
15:13:36 and I couldn't
15:13:46 check c#1
15:13:59 Arnaud Morin proposed openstack/neutron master: Allow restoration of tun_ofports on agent restart https://review.opendev.org/c/openstack/neutron/+/860270
15:14:08 (and not only with pyroute2 0.6.6, I tested with 0.7.2)
15:14:43 the original bug was perhaps this one: https://bugs.launchpad.net/neutron/+bug/1799790
15:15:31 lajoskatona yes, it seems like that one again
15:15:53 ralonsoh I don't think it's the same pyroute2 issue as there is no such error in the l3 agent logs there
15:16:03 right
15:16:44 ok so the 2 are not duplicates
15:16:47 anyway, for now it was just one hit, but maybe someone will have cycles to investigate it again
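For reference on the manual pyroute2 test mentioned at 15:13:32: below is a minimal sketch, assuming pyroute2's NetNS/addr API, of the kind of call that removes a FIP address from a gateway device. The namespace, device name and address are illustrative placeholders, not values from the failing job, and this is not the actual neutron L3 agent code.

```python
# Minimal sketch of deleting a FIP address with pyroute2; assumes the
# namespace and device already exist and the process has root privileges.
from pyroute2 import NetNS

NAMESPACE = "fip-example-ns"    # hypothetical FIP namespace
DEVICE = "fg-example"           # hypothetical gateway device name
FIP = "203.0.113.10"            # hypothetical floating IP address

with NetNS(NAMESPACE) as ns:
    # look up the interface index inside the namespace
    idx = ns.link_lookup(ifname=DEVICE)[0]
    # FIPs are configured as /32 addresses on the gateway device
    ns.addr("del", index=idx, address=FIP, prefixlen=32)
```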
15:17:15 I had a patch in the past: https://review.opendev.org/c/openstack/neutron/+/827728
15:17:26 but I have long forgotten what I wanted to do with it :-)
15:18:05 lajoskatona maybe You can revisit it now :)
15:18:24 my bad, I should have closed my mouth :-)
15:18:29 haha
15:18:35 but I can of course
15:18:43 You made my day with this comment :)
15:20:11 ok, I think we can move on
15:20:52 #topic fullstack/functional
15:21:00 I found 2 failures in the functional jobs this week
15:21:12 first is (again) "Interface not found in namespace"
15:21:16 https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_bd0/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-uwsgi-fips/bd0538a/testr_results.html
15:21:24 I will try to investigate that one again
15:21:45 #action slaweq to check recent occurrence of "Interface not found in namespace" failure
15:22:06 and the second one is the failed test neutron.tests.functional.agent.linux.test_ip_lib.IpMonitorTestCase.test_interface_added_after_initialization(no_namespace)
15:22:07 https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d78/periodic/opendev.org/openstack/neutron/master/neutron-functional/d78cba5/testr_results.html
15:22:15 did You see something like that already?
15:23:21 let me check that
15:23:34 could be related to the fact that the interface is not created in a namespace
15:23:39 another test is deleting it
15:23:49 (we can have problems with this kind of testing)
15:24:11 true
15:24:40 to be honest, I think we can remove the no_namespace case
15:24:59 that introduces red herring errors only
15:25:02 ++
15:25:03 I'll update it
15:25:06 thx
15:25:19 ralonsoh to remove "no namespace" test cases from functional tests
15:25:48 with that I think we can move on to the next topic
15:25:58 for scenario jobs, we already discussed the failure which I had
15:26:03 so next topic is
15:26:08 #topic grenade
15:26:15 here I have one thing to discuss
15:26:20 According to email https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030654.html we should probably add our grenade-multinode-tick-tick job to the check queue now
15:26:31 what do You think about it?
15:26:31 https://review.opendev.org/c/openstack/neutron/+/859991
15:26:51 I pushed a patch for it, please check it, I am not sure if it is what we need
15:26:52 thx lajoskatona, You were faster than me :)
15:26:53 is it stable enough?
15:27:00 I think it is
15:27:04 the grenade jobs were not executed
15:27:07 it's in the periodic queue for now
15:27:11 let me check it
15:27:11 ok then
15:27:30 I pushed a 2nd ps for it to remove the grenade zuul yaml from the irrelevant list
15:27:48 but only the normal grenade was executed, not the slurp one
15:27:54 are periodic runs good enough, or should we run a few recheck tests to check stability?
15:29:56 https://zuul.openstack.org/builds?job_name=neutron-ovs-grenade-multinode-tick-tick&project=openstack%2Fneutron&branch=master&skip=0
15:29:56 btw. we can also remove this job from the stable/zed branch
15:29:56 it's not needed there at all
15:29:56 right
15:29:56 +1
15:29:56 ^^ this job looks very stable
15:30:16 so that's all from me for today
15:30:19 hmm that reminds me I need to verify our checklist on stable releases (like removing *-master jobs from stable/zed)
15:30:21 periodic jobs are all good
15:30:30 bcafarel++
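On the question of whether the periodic runs are enough to judge the tick-tick job's stability: the zuul.openstack.org builds link above can also be queried programmatically. A rough sketch, assuming the standard Zuul REST endpoint /api/builds accepts the same filters as the UI query; the pass-rate helper and the limit of 50 builds are illustrative choices, not part of any existing tooling.

```python
# Sketch: compute the recent pass rate of a Zuul job from the builds API.
import json
import urllib.parse
import urllib.request

ZUUL_API = "https://zuul.openstack.org/api/builds"

def job_pass_rate(job_name, project="openstack/neutron",
                  branch="master", limit=50):
    """Return (passed, total) over the most recent builds of a job."""
    query = urllib.parse.urlencode({
        "job_name": job_name,
        "project": project,
        "branch": branch,
        "limit": limit,
    })
    with urllib.request.urlopen(f"{ZUUL_API}?{query}") as resp:
        builds = json.load(resp)
    results = [b.get("result") for b in builds]
    passed = sum(1 for r in results if r == "SUCCESS")
    return passed, len(results)

if __name__ == "__main__":
    passed, total = job_pass_rate("neutron-ovs-grenade-multinode-tick-tick")
    print(f"{passed}/{total} recent runs succeeded")
```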
15:30:37 anything else You want to discuss today?
15:30:47 or if not, we can finish 30 minutes earlier
15:30:52 fine for me
15:31:06 let's keep the "short meetings" trend :)
15:31:07 let's do it
15:31:11 ++
15:31:17 ok, thx for attending the meeting
15:31:19 \o/
15:31:20 and see You online
15:31:22 o/
15:31:22 bye
15:31:23 #endmeeting