15:00:04 #startmeeting neutron_ci
15:00:04 Meeting started Tue Jun 27 15:00:04 2023 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:04 The meeting name has been set to 'neutron_ci'
15:00:07 o/
15:00:09 ping bcafarel, lajoskatona, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
15:00:12 hello
15:00:17 o/
15:00:19 o/
15:00:51 Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:00:51 Please open now :)
15:01:02 I think we can start
15:01:04 #topic Actions from previous meetings
15:01:12 ykarel to check test failures in neutron-ovn-tempest-ipv6-only-ovs-release
15:01:29 Proposed https://review.opendev.org/c/openstack/neutron/+/885074
15:01:38 That I think is from a couple of weeks ago
15:01:44 yeah
15:01:49 Amit Uniyal proposed openstack/os-vif stable/yoga: set default qos policy https://review.opendev.org/c/openstack/os-vif/+/886710
15:01:55 we haven't had a CI meeting in June yet :)
15:02:02 it's our first this month
15:02:09 ack :)
15:02:12 just in time, it's not July yet
15:02:13 recently I saw more failures, will push a follow-up patch for that
15:02:27 ok
15:02:29 thx
15:02:32 https://bugs.launchpad.net/neutron/+bug/2007166/comments/9
15:03:36 thx ykarel
15:03:57 I think we can move on to the next topic then
15:04:10 #topic Stable branches
15:04:25 a quiet month overall :)
15:05:01 I have to check the latest backports (one antelope backport got TIMED_OUT on rally twice) but no open issues that I know of at the moment
15:05:41 those timeouts are a pretty common issue currently
15:05:53 and don't seem to be related to any branch/change/anything else
15:06:36 yep, that's what I am thinking too
15:06:43 so tl;dr all good
15:06:50 iirc we bumped the timeout in master recently for those jobs
15:06:58 not sure if backported to antelope
15:07:11 great, thx bcafarel
15:08:06 https://review.opendev.org/c/openstack/neutron/+/885045
15:08:40 nope, it's not backported
15:08:44 ok, time to do it
15:08:58 +1
15:09:00 Rodolfo Alonso proposed openstack/neutron stable/2023.1: Raise the timeout of "neutron-ovn-rally-task" to 9000 https://review.opendev.org/c/openstack/neutron/+/887056
15:09:06 :)
15:09:33 +2
15:09:48 ok, I think we can move on to the next topic then
15:09:51 #topic Stadium projects
15:09:58 lajoskatona any updates?
15:10:09 things seem to be quiet
15:10:27 for bagpipe I pushed one patch to fix issues with sqlalchemy 2:
15:10:32 Amit Uniyal proposed openstack/os-vif stable/xena: set default qos policy https://review.opendev.org/c/openstack/os-vif/+/886716
15:10:37 https://review.opendev.org/c/openstack/networking-bagpipe/+/887024
15:11:01 I also saw that networking-odl periodic jobs are failing and wanted to ask if we still need to run those jobs, as the project is deprecated
15:11:36 no, I'll delete them
15:11:38 I have to go back to it as it looked easy (just remove subtransactions=True), but something is missing and weird things happen sometimes without that keyword
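
(For context on the subtransactions=True removal mentioned just above: SQLAlchemy 2.x dropped that argument from Session.begin(), so code that relied on it to silently join an already-open transaction now has to handle that case itself. The sketch below is illustrative only, assuming a generic session helper; it is not the actual networking-bagpipe code under review.)

```python
# Illustrative sketch only, not the networking-bagpipe code under review.
from sqlalchemy.orm import Session


def add_in_transaction(session: Session, obj) -> None:
    # SQLAlchemy 1.x allowed:
    #     with session.begin(subtransactions=True):
    #         session.add(obj)
    # which silently joined an enclosing transaction if one was active.
    # SQLAlchemy 2.x removed the argument, and session.begin() now raises
    # if a transaction is already in progress, so both cases have to be
    # handled explicitly.
    if session.in_transaction():
        session.add(obj)        # join the caller's transaction
    else:
        with session.begin():   # open (and commit) our own transaction
            session.add(obj)
```
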
15:11:56 yes, we can delete those now
15:12:58 just one more thing for stadiums: please check the open patches for them, I try to keep an eye on them also :-)
15:12:59 #action ralonsoh to remove networking-odl periodic jobs
15:13:03 thx
15:14:37 qq, what jobs? the master branch has been deleted
15:14:43 do you have the names?
15:15:24 https://zuul.openstack.org/buildset/bdb18cb84e3e411daaf23f7cb86ad1c5
15:15:43 those are periodic jobs run for networking-odl
15:15:51 hmmm, rodolfo is right, those should be deleted now
15:15:53 at least those jobs were run this Saturday
15:17:08 the deletion patch was merged yesterday so it should be ok, and that was the last time we saw those jobs :-)
15:17:16 ahh, ok then
15:17:40 the AP can be changed and assigned to me, to check if we have some common repositories with jobs for ODL
15:18:36 #action lajoskatona to check if we have some common repositories with jobs for ODL
15:18:50 I will forget about the AI on ralonsoh :)
15:18:50 ok ok, these are the stable branches
15:19:10 I'll remove the periodic executions from the stable branches too
15:21:00 ++
15:21:04 ok, I think we can move on
15:21:08 next topic
15:21:14 #topic Grafana
15:21:19 ok
15:21:39 generally it looks fine
15:22:08 I see that jobs are at a pretty high failure rate, but in most cases those are timeouts, which we see everywhere
15:22:54 anything else regarding grafana anyone wants to add?
15:23:03 do you know perhaps if there is a common background for the timeouts?
15:23:36 I mean an issue in infrastructure or similar
15:23:41 no, I don't know
15:23:49 but I will ask later today on the tc meeting about it
15:24:06 ok, thanks
15:24:53 ok, next topic
15:24:54 #topic Rechecks
15:25:05 generally we are doing a lot of rechecks again
15:25:17 https://etherpad.opendev.org/p/neutron-ci-meetings#L41
15:25:30 there was an issue in FTs last week
15:25:38 in the last few weeks it took more than 2 rechecks on average to get anything merged
15:26:43 that's all from me about rechecks
15:27:06 if there are no other questions/comments I think we can move on
15:27:12 good for me
15:27:20 #topic fullstack/functional
15:27:31 here I found just one potentially interesting issue in the last few days
15:27:36 test_update_minimum_bandwidth_queue
15:27:40 https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_709/886992/1/check/neutron-functional-with-uwsgi/70996ce/testr_results.html
15:28:18 I don't think it was related to the patch on which it was run
15:28:22 https://review.opendev.org/c/openstack/neutron/+/886992
15:28:50 but the issue seems like something that should be reproducible (assertion error)
15:29:10 it failed due to a different "queue_num" value
15:29:14 right
15:29:28 did You see something like that already?
15:30:03 no, unless some other test is interfering, but each OVS instance is independent for each test
15:30:06 if I'm not wrong
15:30:20 ovs is the same for all tests
15:30:27 even the DB?
15:30:29 it creates separate bridges for tests
15:30:46 yes, I think that the ovs db is shared across all tests
15:30:53 which just run it once and then use it
15:31:09 so this could be a problem, let me check if I can find what other test is affecting it
15:31:11 and we are using devstack code to run ovs
15:31:22 thx ralonsoh
15:31:34 You can open an LP for that if You think it's a valid issue
15:31:45 I can do it, thanks!
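
(For context on the shared-OVS discussion just above: the functional tests run against a single OVS instance and database but give every test its own, uniquely named bridge. The sketch below illustrates that isolation pattern with plain ovs-vsctl calls; it is an assumption-laden stand-in, not the actual neutron test fixtures.)

```python
# Rough sketch of per-test bridge isolation, not the actual neutron fixtures.
import subprocess
import uuid


class IsolatedBridge:
    """Give each test its own bridge on the shared OVS instance.

    Ports attached to this bridge cannot collide with other tests, but rows
    in shared tables of the single OVS DB (e.g. QoS/Queue) are still global,
    which is one way a stray row from another test could shift values such
    as the next queue number a test observes.
    """

    def __init__(self, prefix: str = "test") -> None:
        # Keep the name under the 15-character interface-name limit.
        self.name = f"{prefix}-{uuid.uuid4().hex[:8]}"

    def __enter__(self) -> "IsolatedBridge":
        subprocess.run(
            ["ovs-vsctl", "--may-exist", "add-br", self.name], check=True)
        return self

    def __exit__(self, *exc) -> None:
        subprocess.run(
            ["ovs-vsctl", "--if-exists", "del-br", self.name], check=True)
```

A test would use it as `with IsolatedBridge() as br:` and attach its ports only to `br.name`, leaving teardown to the context manager.
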
15:32:03 thank You
15:32:26 #action ralonsoh to check failed test_update_minimum_bandwidth_queue functional test
15:32:32 #topic Tempest/Scenario
15:32:49 first on the list is test_hotplug_nic, added by ykarel
15:32:58 but he already mentioned that earlier today
15:33:03 https://bugs.launchpad.net/neutron/+bug/2007166
15:33:36 other than that, I saw one issue with connectivity in the linuxbridge job:
15:33:38 https://1e3bcd2e249bfc8aee18-7f77ff85d71aba60d48e1d6b772dec0b.ssl.cf2.rackcdn.com/885999/9/check/neutron-tempest-plugin-linuxbridge/93f0305/testr_results.html
15:34:40 now I have a question about this LB job
15:35:30 looking at https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1&viewPanel=16&from=now-30d&to=now it seems that this job is a bit less stable than other scenario jobs, especially in the last weeks
15:35:46 do You think it's already time to maybe move this job from check to e.g. the periodic queue?
15:36:23 I'm ok with this, we agreed on that and have made this public during the last cycles
15:36:38 that we are no longer actively supporting LB nor fixing CI errors
15:36:43 +1
15:36:55 ok, I will propose a patch for that
15:36:58 perfect
15:37:11 #action slaweq to move LB scenario job to periodic queue
15:37:32 I also noticed some connectivity issues in the grenade job https://zuul.opendev.org/t/openstack/build/5cdd01584e844e839b28fcfc273537ae
15:37:57 it could be (again) slow nodes and nothing else
15:38:05 no no
15:38:11 these are from a specific patch
15:38:26 these tests (I don't know why) are always failing
15:38:32 ahh, ok
15:38:36 https://review.opendev.org/c/openstack/neutron/+/883907
15:38:38 so false alarm then :)
15:38:47 I don't know what is wrong in this patch...
15:38:51 I didn't think that this patch could be related
15:38:53 (help is welcome)
15:39:22 please ping me tomorrow, I can try to take a look at this patch
15:39:26 thanks!
15:39:39 ok, that's all on this topic from me
15:39:44 any other questions/comments?
15:39:51 or issues to discuss maybe
15:39:55 nope
15:40:18 so let's move on to the last topic
15:40:19 #topic Periodic
15:40:37 here I found out that neutron-functional-with-oslo-master and neutron-functional-with-sqlalchemy-master have been failing for at least a few days
15:40:42 Bug reported https://bugs.launchpad.net/neutron/+bug/2025126
15:40:48 failure examples:
15:40:49 patch proposed
15:40:53 https://zuul.openstack.org/build/55a065238b784ac28e91469d2acce3da
15:40:53 https://zuul.openstack.org/build/2d8d000b62a1448d984eab7059d677a7
15:40:57 https://review.opendev.org/c/openstack/neutron/+/886961
15:41:01 ralonsoh that's fast :)
15:41:18 ^^ this is the problem of testing with master branches
15:41:25 that we need to revert previous fixes
15:41:53 at least we are catching such issues fairly quickly, without breaking our gate :)
15:42:01 exactly
15:42:23 and that's all I had for today
15:42:37 anything else related to CI You want to discuss maybe?
15:42:49 all good
15:42:51 or if not, I will give You back a few minutes
15:43:05 nothing from me either
15:43:42 thx for attending the meeting and see You online
15:43:42 #endmeeting