15:01:17 #startmeeting neutron_ci
15:01:17 Meeting started Tue Apr 25 15:01:17 2023 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:17 The meeting name has been set to 'neutron_ci'
15:01:21 o/
15:01:28 bcafarel, lajoskatona, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
15:01:29 hello
15:01:32 o/
15:01:36 ci meeting is starting :)
15:01:39 hi
15:01:55 o/
15:02:24 o/
15:02:30 Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:02:47 #topic Actions from previous meetings
15:02:51 slaweq to check dhcp issue in fullstack tests
15:02:57 I still didn't have time for this
15:03:00 sorry
15:03:06 #action slaweq to check dhcp issue in fullstack tests
15:03:12 I will try this week
15:03:19 next one
15:03:21 mtomaska to check and fix "no such process" error in kill cmd
15:03:39 I posted patch https://review.opendev.org/c/openstack/neutron/+/880893
15:03:51 thx
15:03:55 Also the AI from a week ago needs a final review
15:03:58 I will add it to my review queue
15:04:01 https://review.opendev.org/c/openstack/neutron/+/878549
15:04:24 thank you
15:04:49 and the last one from last week
15:04:50 slaweq to open bug regarding shelve/unshelve failures
15:04:58 I opened https://bugs.launchpad.net/nova/+bug/2016967
15:05:13 but nobody from the nova team checked it
15:06:11 #topic Stable branches
15:06:30 bcafarel is in a different meeting but he told me that all seems good with stable branches this week
15:06:42 anything You have to add there, or can we move on to next topics?
15:07:38 nothing
15:07:45 ok, let's move on then
15:07:49 #topic Stadium projects
15:08:30 odl and bagpipe's periodic jobs are red this week
15:08:58 could be related to the py3.8 issue?
15:09:03 or the mirror issues?
15:09:05 yes, for bagpipe I think it is only non-master, from sfc, but I have to check in detail
15:09:27 for ODL I had no time to check, but it is also sqlalchemy 2.0
15:09:34 ok
15:09:43 I don't think it is related to py38
15:10:15 and nothing more from me
15:10:56 ok, thx
15:11:06 #topic Grafana
15:11:30 here I see that all our neutron-tempest-plugin jobs are still broken
15:11:58 also functional and py38 jobs were broken yesterday, but those are already good
15:12:15 so grafana shows us real data :-)
15:12:22 yeah
15:12:27 as those were/are really broken
15:12:41 and our gate is broken now
15:12:56 wait for https://review.opendev.org/c/openstack/requirements/+/881433/2 (and the previous patch)
15:14:13 yes, once https://review.opendev.org/c/openstack/requirements/+/881466 merges, scenario jobs should be green again
15:14:40 that will be great
15:14:45 thx guys for working on all those issues
15:15:14 anything else regarding grafana? or can we move on?
15:15:22 let's move on
15:15:52 #topic Rechecks
15:16:14 avg rechecks from last week looked good (0.33)
15:16:52 bare rechecks: 9/22 were bare, so quite a lot but not dramatic :)
15:17:19 if you find bare rechecks, comment on that in the patch
15:17:33 sure
15:17:34 ++
15:17:35 I don't think all neutron developers are aware of that
15:18:00 or sometimes we need reminders
15:18:09 good idea, I will keep my eyes open
15:18:35 thx all
15:18:40 ok, let's move on
15:18:41 #topic fullstack/functional
15:18:48 here I have only one issue to mention
15:18:54 neutron.tests.functional.agent.ovn.extensions.test_qos_hwol.OVSInterfaceEventTestCase.test_port_creation_and_deletion
15:18:58 https://7e36fd2cde2eb81dcf41-647b4a42ed353e16c17ad589257e07eb.ssl.cf5.rackcdn.com/877831/13/check/neutron-functional-with-uwsgi/0610ffa/testr_results.html
15:18:58 https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4d6/878527/9/check/neutron-functional-with-uwsgi/4d6adb2/testr_results.html
15:19:06 I've pushed a patch
15:19:13 those are 2 examples, but I think I saw the same error a few more times
15:19:25 https://review.opendev.org/c/openstack/neutron/+/880934
15:20:07 if that patch doesn't work, I'll mark it as unstable
15:20:20 thx
15:20:22 I added it to my review list
15:20:26 for tomorrow morning
15:21:43 me too
15:21:44 so we can move on I guess
15:21:45 #topic Tempest/Scenario
15:21:46 here I don't have any specific issues for today
15:22:07 but ralonsoh wanted to talk about the migration of neutron-tempest-plugin jobs to Jammy
15:22:22 is the nova patch mentioned by ykarel the only thing we need?
15:22:26 well, I think we discussed that during the last meeting
15:22:30 yes, this patch
15:22:37 and ykarel is testing that in neutron
15:22:38 one sec
15:22:57 https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/881391
15:22:58 also I prepared https://etherpad.opendev.org/p/jammy-issues to have the details together
15:23:46 linux bridge tests seem unstable
15:23:51 more than OVS or OVN
15:23:56 Am I right?
15:24:32 so we would need the nova patch + a jobs rework on the neutron side (the current idea is to split slow tests into a separate job), and as per the results it seems to work better
15:24:42 is that related to the py38/nested virt issues?
15:24:49 yes it is
15:24:57 so create new jobs for slow tests?
15:25:03 ralonsoh, yes, till now I've seen 1 failure and that is in linuxbridge https://6b56f02e616b126f480f-032ac4cea892347a8142b5cb4ce2d8a7.ssl.cf2.rackcdn.com/881391/3/check/neutron-tempest-plugin-linuxbridge-4/00c563b/testr_results.html
15:25:07 with concurrency = 1?
15:25:29 current tests are with concurrency = 2 and working fine
15:25:35 ok
15:25:47 but do we want to have slow jobs for each variant? Like openvswitch, ovn, openvswitch-iptables_hybrid, linuxbridge?
15:25:55 so 4 new jobs running on every patch?
15:25:59 or how?
15:26:03 so if there is some other idea than slow jobs, we can rework
15:26:03 we can execute them in periodic only
15:26:40 as a temporary solution?
15:27:01 so far you have identified 6 tests
15:27:11 are the times mentioned in https://etherpad.opendev.org/p/jammy-issues from nodes with nested virt really enabled?
15:27:26 I don't think it takes that much time currently
15:27:38 slaweq, those are on non-nested-virt nodes, or nested-virt ones (with qemu, not kvm)
15:27:54 ahh, ok
15:27:57 nested virt results are far better, like 2-3 times better
15:28:06 so because it is using qemu instead of kvm it is so slow
15:28:11 yes
15:28:15 good
15:29:05 so what do you think about having these 4 new jobs executing the slow tests?
15:29:49 are "slow" tests the same ones as those which require the advanced image?
15:30:05 yes, most of them are those only
15:30:16 ok, that makes sense
15:30:27 but where do we execute these jobs?
15:30:28 I would then move those new jobs to the periodic queue probably
15:30:33 ^^ right
15:30:46 I don't think we should have 4 more jobs in the check/gate queue
15:30:50 for sure no
15:30:52 it's a lot of resources on every patch
15:31:45 ok, and when this libvirt/nested issue is fixed we can move them back to the regular check/gate queue?
15:32:17 no, I don't think so
15:32:29 yes, if the team is ok I think we can move back; from the infra side it's not recommended to rely on those nodes as they are best effort only
15:32:43 as those worked great for almost a year for us
15:32:51 but this policy will change, if I'm not wrong
15:33:06 if it's a policy change then we have no option
15:33:34 I mean it is not recommended to use these nodes (or to have them as non-voting)
15:33:42 ok, so let's do this
15:33:55 ack
15:34:08 ykarel, will You propose patches?
15:34:25 slaweq, I will discuss it in the nova meeting today and update the nova patch as per that
15:34:31 and then update the neutron jobs
15:34:45 ++ thx
15:34:51 till then I will just have the tests in a test patch
15:34:54 thanks ykarel
15:35:05 #action ykarel to discuss with nova team and update neutron-tempest-plugin jobs
15:35:10 thanks!
15:35:32 and that's all I had for today
15:35:42 any other CI topics You want to discuss today?
15:35:48 no thanks
15:36:00 nothing from me
15:36:14 just a small one
15:36:23 https://review.opendev.org/c/openstack/neutron/+/881464
15:36:34 ah yes, sure
15:37:16 approved :)
15:37:28 qq
15:37:38 should we remove the py38 jobs there?
15:37:53 IMO not while we still support py38
15:37:54 I know we didn't migrate yet
15:37:58 ok then
15:38:11 good question, and shall we push the changes for the stadiums? or would that break the tempest jobs?
15:38:15 it's just a UT job, so no big deal IMO to keep it for a bit longer
15:38:23 ok, so keep everything else
15:38:25 py39 should work in stadium
15:38:52 I'll check it with a dnm patch
15:39:04 thx lajoskatona
15:39:21 #action lajoskatona to check stadium projects with py39 with a dnm patch
15:39:23 if the openstack-python3 template is used, py39 jobs should already be running in stadium
15:39:45 openstack-python3-jobs
15:40:31 k, I see the openstack-python3-jobs-neutron template is being used
15:40:37 yes, agree; what I am not sure about is what will happen with tempest if the project says "hey, I am min py39"
15:41:11 for stadiums we have fewer tempest tests, so it should be ok
15:41:31 https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/project-templates.yaml#L802
15:42:05 lajoskatona, are stadium projects also running jobs on focal?
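For context on the slow-job split and the periodic-only placement agreed above, here is a minimal sketch of what one such variant could look like in a Zuul config. The job name, parent, and the test-selection regex are illustrative assumptions, not the actual patch ykarel is preparing; the real jobs may select the "slow" (advanced-image) tests differently.

```yaml
# Hypothetical sketch only: names and the regex are assumptions,
# not the patch discussed in the meeting.
- job:
    name: neutron-tempest-plugin-openvswitch-slow
    parent: neutron-tempest-plugin-openvswitch
    vars:
      # run only the tests tagged as slow (mostly the advanced-image scenarios)
      tempest_test_regex: '(\[.*\bslow\b.*\])'

- project:
    # keep the slow variants out of check/gate; run them in periodic only
    periodic:
      jobs:
        - neutron-tempest-plugin-openvswitch-slow
        # ...plus one job each for the ovn, linuxbridge and
        # openvswitch-iptables_hybrid variants
```

One job per backend variant, all in the periodic pipeline, matches the "4 new jobs, but not on every patch" outcome discussed above.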
15:42:10 thanks
15:42:21 ykarel: hmmm, good question, I have to check that
15:42:31 yes, they are inheriting from neutron and n-t-p
15:43:05 or manually setting the nodeset
15:43:07 to focal
15:43:28 ok, if there is no reason to run on focal then those can be moved to jammy
15:43:50 ok, I'll propose patches and let's see
15:43:55 +1
15:44:00 trial and error :-)
15:44:03 ++
15:44:37 anything else for today?
15:44:52 if not, I will give You 15 minutes back today
15:45:12 ok, so thanks for attending the meeting and have a nice week
15:45:14 #endmeeting
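On the focal-to-jammy question for the stadium projects discussed just before the meeting closed: a stadium repository that pins its nodeset explicitly could be moved with an override like the sketch below. The job name is made up for illustration, and it assumes the standard openstack-single-node-jammy nodeset is available; stadium jobs that simply inherit from the neutron / neutron-tempest-plugin parents would pick up a new nodeset automatically once the parents move.

```yaml
# Hypothetical example: "networking-example-tempest" is not a real job,
# it just illustrates replacing an explicit focal nodeset with jammy.
- job:
    name: networking-example-tempest
    parent: neutron-tempest-plugin-base
    nodeset: openstack-single-node-jammy
```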