15:01:17 <slaweq> #startmeeting neutron_ci
15:01:17 <opendevmeet> Meeting started Tue Apr 25 15:01:17 2023 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:17 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:17 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:01:21 <mlavalle> o/
15:01:28 <slaweq> bcafarel, lajoskatona, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
15:01:29 <ralonsoh> hello
15:01:32 <mtomaska> o/
15:01:36 <slaweq> ci meeting is starting :)
15:01:39 <slaweq> hi
15:01:55 <lajoskatona> o/
15:02:24 <ykarel> o/
15:02:30 <slaweq> Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:02:47 <slaweq> #topic Actions from previous meetings
15:02:51 <slaweq> slaweq to check dhcp issue in fullstack tests
15:02:57 <slaweq> I still didn't have time for this
15:03:00 <slaweq> sorry
15:03:06 <slaweq> #action slaweq to check dhcp issue in fullstack tests
15:03:12 <slaweq> I will try this week
15:03:19 <slaweq> next one
15:03:21 <slaweq> mtomaska to check and fix "no such process" error in kill cmd
15:03:39 <mtomaska> I posted patch https://review.opendev.org/c/openstack/neutron/+/880893
15:03:51 <slaweq> thx
15:03:55 <mtomaska> Also the AI from a week ago needs final review
15:03:58 <slaweq> I will add it to my review queue
15:04:01 <mtomaska> https://review.opendev.org/c/openstack/neutron/+/878549
15:04:24 <mtomaska> thank you
15:04:49 <slaweq> and the last one from last week
15:04:50 <slaweq> slaweq to open bug regarding shelve/unshelve failures
15:04:58 <slaweq> I opened https://bugs.launchpad.net/nova/+bug/2016967
15:05:13 <slaweq> but nobody from the nova team checked it
15:06:11 <slaweq> #topic Stable branches
15:06:30 <slaweq> bcafarel is in a different meeting but he told me that all seems good with stable branches this week
15:06:42 <slaweq> anything You want to add here, or can we move on to the next topics?
15:07:38 <ralonsoh> nothing
15:07:45 <slaweq> ok, let's move on then
15:07:49 <slaweq> #topic Stadium projects
15:08:30 <slaweq> odl and bagpipe's periodic jobs are red this week
15:08:58 <ralonsoh> could be related to the py3.8 issue?
15:09:03 <ralonsoh> or the mirror issues?
15:09:05 <lajoskatona> yes, for bagpipe I think it is only the non-master branches of sfc, but I have to check in detail
15:09:27 <lajoskatona> for ODL, I had no time to check, but it is also the sqlalchemy 2.0 issue
15:09:34 <slaweq> ok
15:09:43 <lajoskatona> I don't think it is related to py38
15:10:15 <lajoskatona> and nothing more from me
15:10:56 <slaweq> ok, thx
15:11:06 <slaweq> #topic Grafana
15:11:30 <slaweq> here I see that all our neutron-tempest-plugin jobs are still broken
15:11:58 <slaweq> also functional and py38 jobs were broken yesterday but those are already good
15:12:15 <lajoskatona> so grafana shows us real data :-)
15:12:22 <slaweq> yeah
15:12:27 <lajoskatona> as those were/are really broken
15:12:41 <slaweq> and our gate is broken now
15:12:56 <ralonsoh> wait for https://review.opendev.org/c/openstack/requirements/+/881433/2 (and the previous patch)
15:14:13 <ykarel> yes once https://review.opendev.org/c/openstack/requirements/+/881466 merges scenario jobs should be green again
15:14:40 <slaweq> that will be great
15:14:45 <slaweq> thx guys for working on all those issues
15:15:14 <slaweq> anything else regarding grafana? or can we move on?
15:15:22 <mlavalle> let's move on
15:15:52 <slaweq> #topic Rechecks
15:16:14 <slaweq> avg rechecks from last week looked good (0.33)
15:16:52 <slaweq> bare rechecks: 9/22 were bare, so quite a few but not dramatic :)
15:17:19 <ralonsoh> if you find bare rechecks, comment that in the patch
15:17:33 <slaweq> sure
15:17:34 <slaweq> ++
15:17:35 <ralonsoh> I don't think all neutron developers are aware of that
15:18:00 <mlavalle> or sometimes we need reminders
15:18:09 <lajoskatona> good idea, I will keep my eyes open
15:18:35 <slaweq> thx all
15:18:40 <slaweq> ok, let's move on
15:18:41 <slaweq> #topic fullstack/functional
15:18:48 <slaweq> here I have only one issue to mention
15:18:54 <slaweq> neutron.tests.functional.agent.ovn.extensions.test_qos_hwol.OVSInterfaceEventTestCase.test_port_creation_and_deletion
15:18:58 <slaweq> https://7e36fd2cde2eb81dcf41-647b4a42ed353e16c17ad589257e07eb.ssl.cf5.rackcdn.com/877831/13/check/neutron-functional-with-uwsgi/0610ffa/testr_results.html
15:18:58 <slaweq> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4d6/878527/9/check/neutron-functional-with-uwsgi/4d6adb2/testr_results.html
15:19:06 <ralonsoh> I've pushed  a patch
15:19:13 <slaweq> those are 2 examples but I think I saw the same error a few more times
15:19:25 <ralonsoh> https://review.opendev.org/c/openstack/neutron/+/880934
15:20:07 <ralonsoh> if that patch doesn't work, I'll mark it as unstable
15:20:20 <slaweq> thx
15:20:22 <slaweq> I added it to review list
15:20:26 <slaweq> for tomorrow morning
15:21:43 <lajoskatona> me too
15:21:44 <slaweq> so we can move on I guess
15:21:45 <slaweq> #topic Tempest/Scenario
15:21:46 <slaweq> here I don't have any specific issues for today
15:22:07 <slaweq> but ralonsoh wanted to talk about migration of neutron-tempest-plugin jobs to Jammy
15:22:22 <slaweq> is the nova patch mentioned by ykarel the only thing we need?
15:22:26 <ralonsoh> well, I think we have discussed that during the last meeting
15:22:30 <ralonsoh> yes, this patch
15:22:37 <ralonsoh> and ykarel is testing that in neutron
15:22:38 <ralonsoh> one sec
15:22:57 <ralonsoh> https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/881391
15:22:58 <ykarel> also I prepared https://etherpad.opendev.org/p/jammy-issues to have the details together
15:23:46 <ralonsoh> linux bridge tests seem unstable
15:23:51 <ralonsoh> more than OVS or OVN
15:23:56 <ralonsoh> Am I right?
15:24:32 <ykarel> so we would need the nova patch + a jobs rework on the neutron side (current idea is to split the slow tests into a separate job), and as per the results it seems to work better
15:24:42 <lajoskatona> is that related to the py38/nested virt issues?
15:24:49 <ralonsoh> yes it is
15:24:57 <ralonsoh> so create new jobs for slow tests?
15:25:03 <ykarel> ralonsoh, yes, till now I've seen 1 failure and that is in linuxbridge https://6b56f02e616b126f480f-032ac4cea892347a8142b5cb4ce2d8a7.ssl.cf2.rackcdn.com/881391/3/check/neutron-tempest-plugin-linuxbridge-4/00c563b/testr_results.html
15:25:07 <ralonsoh> with concurrency = 1?
15:25:29 <ykarel> current tests are with concurrency = 2 and working fine
15:25:35 <ralonsoh> ok
15:25:47 <slaweq> but do we want to have slow jobs for each variant? Like openvswitch, ovn, openvswitch-iptables_hybrid, linuxbridge?
15:25:55 <slaweq> so 4 new jobs running on every patch?
15:25:59 <slaweq> or how?
15:26:03 <ykarel> so if there is some other idea than slow jobs, we can rework
15:26:03 <ralonsoh> we can execute them in the periodic queue only
15:26:40 <lajoskatona> as a temporary solution?
15:27:01 <ralonsoh> so far you have identified 6 tests
15:27:11 <slaweq> are those times mentioned in https://etherpad.opendev.org/p/jammy-issues from nodes with nested virt actually enabled?
15:27:26 <slaweq> I don't think it takes that much time currently
15:27:38 <ykarel> slaweq, those are from non-nested virt nodes, or nested-virt nodes (with qemu, not kvm)
15:27:54 <slaweq> ahh, ok
15:27:57 <ykarel> nested virt results are far better, like 2-3 times better
15:28:06 <slaweq> so it is so slow because it is using qemu instead of kvm
15:28:11 <ykarel> yes
15:28:15 <slaweq> good
15:29:05 <ralonsoh> so what do you think about having these 4 new jobs executing the slow tests?
15:29:49 <slaweq> are "slow" tests the same ones as those which require the advanced image
15:29:50 <slaweq> ?
15:30:05 <ykarel> yes most of them are those only
15:30:16 <slaweq> ok, that makes sense
15:30:27 <ralonsoh> but where do we execute these jobs?
15:30:28 <slaweq> I would then move those new jobs to periodic queue probably
15:30:33 <ralonsoh> ^^ right
15:30:46 <slaweq> I don't think we should have 4 more jobs in check/gate queue
15:30:50 <ralonsoh> for sure no
15:30:52 <slaweq> it's a lot of resources on every patch
15:31:45 <lajoskatona> ok, and when this libvirt/nested virt issue is fixed can we move them back to the regular check/gate queue?
15:32:17 <ralonsoh> no, I don't think so
15:32:29 <ykarel> yes, if the team is ok I think we can move back; from the infra side it's not recommended to rely on those nodes as they are best effort only
15:32:43 <ykarel> as those worked great for almost a year for us
15:32:51 <ralonsoh> but this policy will change, if I'm not wrong
15:33:06 <ykarel> if it's policy change then we have no option
15:33:34 <ralonsoh> I mean it is not recommended to use these nodes (or we should have them as non-voting)
15:33:42 <slaweq> ok, so lets do this
15:33:55 <lajoskatona> ack
15:34:08 <slaweq> ykarel will You propose patches?
15:34:25 <ykarel> slaweq, I will discuss it in the nova meeting today and update the nova patch accordingly
15:34:31 <ykarel> and then update neutron jobs
15:34:45 <slaweq> ++ thx
15:34:51 <ykarel> till then I will just have the tests in a test patch
15:34:54 <lajoskatona> thanks ykarel
15:35:05 <slaweq> #action ykarel to discuss with nova team and update neutron-tempest-plugin jobs
15:35:10 <mlavalle> thanks!
15:35:32 <slaweq> and that's all I had for today
15:35:42 <slaweq> any other CI topics You want to discuss today?
15:35:48 <ralonsoh> no thanks
15:36:00 <lajoskatona> nothing from me
15:36:14 <ykarel> just a small one
15:36:23 <ykarel> https://review.opendev.org/c/openstack/neutron/+/881464
15:36:34 <ralonsoh> ah yes, sure
15:37:16 <slaweq> approved :)
15:37:28 <ralonsoh> qq
15:37:38 <ralonsoh> should we remove the py38 jobs there?
15:37:53 <slaweq> IMO not as long as we still support py38
15:37:54 <ralonsoh> I know we didn't migrate yet
15:37:58 <ralonsoh> ok then
15:38:11 <lajoskatona> good question, and shall we push the changes for the stadiums? or would that break the tempest jobs?
15:38:15 <slaweq> it's just a UT job so IMO it's no big deal to keep it for a bit longer
15:38:23 <lajoskatona> ok, so keep everything else
15:38:25 <ralonsoh> py39 should work in stadium
15:38:52 <lajoskatona> I will check it with a DNM patch
15:39:04 <slaweq> thx lajoskatona
15:39:21 <slaweq> #action lajoskatona to check with dnm patch stadium projects with py39
15:39:23 <ykarel> if the openstack-python3 template is used, py39 jobs should already be running in stadium
15:39:45 <ykarel> openstack-python3-jobs
15:40:31 <ykarel> ok, I see the openstack-python3-jobs-neutron template is being used
15:40:37 <lajoskatona> yes, agree; what I am not sure about is what will happen with tempest if the project says "hey, my minimum is py39"
15:41:11 <lajoskatona> for stadiums we have fewer tempest tests, so it should be ok
15:41:31 <ykarel> https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/project-templates.yaml#L802
15:42:05 <ykarel> lajoskatona, are stadium projects also running jobs on focal?
15:42:10 <lajoskatona> thanks
15:42:21 <lajoskatona> ykarel: hmmm, good question, I have to check that
15:42:31 <ralonsoh> yes, they are inheriting from neutron and n-t-p
15:43:05 <ralonsoh> or manually setting the nodeset
15:43:07 <ralonsoh> to focal
15:43:28 <ykarel> ok if there is no reason to run on focal then those can be moved to jammy
15:43:50 <lajoskatona> ok, I will propose the patches and let's see
15:43:55 <ykarel> +1
15:44:00 <lajoskatona> trial and error :-)
15:44:03 <slaweq> ++
15:44:37 <slaweq> anything else for today?
15:44:52 <slaweq> if not, I will give You 15 minutes back today
15:45:12 <slaweq> ok, so thanks for attending the meeting and have a nice week
15:45:14 <slaweq> #endmeeting