15:00:06 <slaweq> #startmeeting neutron_ci
15:00:06 <opendevmeet> Meeting started Tue Jun 29 15:00:06 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:06 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:06 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:00:12 <ralonsoh> hi
15:00:15 <slaweq> hi again :)
15:00:18 <obondarev> hi
15:00:32 <slaweq> First of all Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:01:15 <slaweq> ok, let's start
15:01:18 <slaweq> #topic Actions from previous meetings
15:01:25 <slaweq> slaweq to check failing update mtu fullstack test: https://bugs.launchpad.net/neutron/+bug/1933234
15:01:32 <slaweq> I checked it today
15:02:14 <slaweq> and I think I know what is going on there; I described it in a comment on LP: https://bugs.launchpad.net/neutron/+bug/1933234/comments/2
15:02:31 <slaweq> but to confirm that I will need additional logs in the L3 agent: https://review.opendev.org/c/openstack/neutron/+/798648
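As an aside, "additional logs" in a neutron agent generally means oslo.log debug statements. A minimal sketch of that kind of logging, assuming hypothetical function and variable names (this is not the actual patch in review 798648):

```python
# Illustrative sketch only, not the change in review 798648; the
# function and variable names here are hypothetical.
from oslo_log import log as logging

LOG = logging.getLogger(__name__)


def set_device_mtu(device_name, old_mtu, new_mtu):
    # Log enough detail to reconstruct the ordering of MTU updates
    # when debugging races like the one suspected in bug 1933234.
    LOG.debug("Updating MTU of device %(dev)s from %(old)s to %(new)s",
              {'dev': device_name, 'old': old_mtu, 'new': new_mtu})
```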
15:02:51 <opendevreview> Ilya Chukhnakov proposed openstack/neutron-specs master: [WIP] Add Node-Local Virtual IP Spec  https://review.opendev.org/c/openstack/neutron-specs/+/797798
15:03:05 <lajoskatona> Hi
15:03:18 <slaweq> obondarev regarding Your question, I don't think logging those port IDs there will be useful in that specific case, but I can add it
15:03:43 <opendevreview> Ilya Chukhnakov proposed openstack/neutron-specs master: [WIP] Add Node-Local Virtual IP Spec  https://review.opendev.org/c/openstack/neutron-specs/+/797798
15:04:52 <obondarev> slaweq: got it
15:06:11 <slaweq> and that's basically all regarding this issue for now
15:06:27 <opendevreview> Mamatisa Nurmatov proposed openstack/neutron master: use payloads for PORT AFTER_DELETE events  https://review.opendev.org/c/openstack/neutron/+/797004
15:06:28 <slaweq> if it happens often, I will propose marking this test as unstable for now
15:06:38 <slaweq> but maybe it will not be necessary :)
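For reference, neutron carries a helper for exactly this in neutron/tests/base.py: the unstable_test decorator, which turns a failure of a known-flaky test into a skip. A short usage sketch with a hypothetical test class and method:

```python
# Sketch of marking a flaky test as unstable; the test class and
# method names are hypothetical.
from neutron.tests import base


class TestL3AgentMTU(base.BaseTestCase):

    @base.unstable_test("bug 1933234")
    def test_update_mtu(self):
        ...  # the flaky assertion lives here
```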
15:06:51 <slaweq> I think we can move on
15:06:56 <slaweq> next was:
15:06:59 <slaweq> lajoskatona to check with infra status of the tap-as-a-service move
15:08:15 <opendevreview> Merged openstack/neutron-tempest-plugin master: Add a test for overlapping SG rules  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/520251
15:08:19 <opendevreview> Merged openstack/neutron-tempest-plugin master: Revert "Skip scenario tests if HA router will not be active"  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/796858
15:08:39 <slaweq> lajoskatona are You here?
15:08:51 <opendevreview> Merged openstack/neutron stable/ussuri: Updates for python3.8  https://review.opendev.org/c/openstack/neutron/+/793417
15:09:36 <lajoskatona> sorry, yes
15:10:05 <lajoskatona> I asked around ( https://meetings.opendev.org/irclogs/%23openstack-infra/%23openstack-infra.2021-06-24.log.html#t2021-06-24T16:58:18 )
15:10:18 <lajoskatona> sorry, my IRC client is playing tricks on me
15:11:14 <lajoskatona> so basically whenever the move/rename happens, all open patches will be moved
15:12:04 <slaweq> so IIUC we are good to merge and send patches as we want now, right?
15:12:11 <lajoskatona> exactly
15:12:23 <slaweq> great, thx lajoskatona :)
15:13:34 <slaweq> ok, that's all actions from previous meeting
15:13:37 <slaweq> #topic Stadium projects
15:13:55 <slaweq> lajoskatona any other updates about the stadium's CI?
15:14:32 <lajoskatona> the only one is from the mail http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023321.html about fixing the broken zuul config
15:15:04 <lajoskatona> perhaps that is mostly networking-odl rocky; amotoki started to check that, thanks for that
15:15:29 <amotoki> all jobs are failing. perhaps we can drop them
15:15:31 <slaweq> I saw amotoki volunteered for that on the ML so I didn't check it
15:15:37 <lajoskatona> https://review.opendev.org/c/openstack/networking-odl/+/798298
15:15:57 <amotoki> but a requirements change is also required and the requirements-check job fails too. I need to ask the requirements team about it.
15:17:01 <slaweq> maybe we should simply make networking-odl rocky EOL now?
15:17:04 <slaweq> wdyt?
15:17:07 <lajoskatona> elod mentioned this morning that isort is not in the rocky requirements, and that one is broken too, so it's hard
15:18:38 <lajoskatona> yeah, for ODL we seem to have even a liberty branch (from http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023347.html & http://paste.openstack.org/show/806995/ )
15:18:39 <amotoki> lajoskatona: yes. we don't have a proper isort in the requirements, so if we'd like to fix it we also need a requirements change, but that job is failing (it is not specific to the neutron team)
15:19:02 <lajoskatona> amotoki: exactly
15:19:31 <lajoskatona> amotoki: so perhaps we can clean up these old branches, not sure what is needed for that
15:20:30 <amotoki> I am okay with EOL'ing networking-odl rocky and older
15:21:13 <slaweq> me too :)
15:21:15 <amotoki> lajoskatona: what do you mean by 'clean'? EOL?
15:21:23 <slaweq> do You want me to start that process?
15:21:23 <lajoskatona> yes
15:21:36 <slaweq> or do You want to do it?
15:21:43 <lajoskatona> but that means we put a tag on it and the branch is deleted, or am I wrong?
15:21:49 <amotoki> slaweq: me?
15:21:53 <lajoskatona> I can
15:22:05 <amotoki> I can help with it
15:22:15 <lajoskatona> amotoki: thanks
15:22:46 <amotoki> I think we can first drop the jobs which cause the zuul configuration errors, along with the failing jobs.
15:23:26 <amotoki> in parallel, we can move forward with the EOL process for the older networking-odl branches.
15:23:32 <lajoskatona> amotoki: ok
15:23:36 <slaweq> sounds good for me
15:23:58 <slaweq> so amotoki will You propose the removal of those broken jobs?
15:24:06 <amotoki> slaweq: yes
15:24:14 <slaweq> and lajoskatona You will start EOL'ing them, correct?
15:24:22 <lajoskatona> slaweq: yes
15:24:47 <amotoki> I will also check zuul errors in other repos this week.
15:24:51 <slaweq> #action amotoki to clean failing jobs in networking-odl rocky and older
15:25:11 <slaweq> #action lajoskatona to start EOL process for networking-odl rocky and older
15:25:18 <slaweq> thank You both :)
15:26:21 <slaweq> anything else or can we move on?
15:27:09 <lajoskatona> yes
15:27:34 <slaweq> lajoskatona "yes, something else" or "yes, we can move on"? :D
15:27:35 <amotoki> none from me either
15:28:03 <lajoskatona> sorry we can move
15:28:14 <slaweq> ok :)
15:28:35 <slaweq> As we don't have bcafarel today, I think we can skip the stable branches topic
15:28:50 <slaweq> I didn't see any serious issues there last week
15:28:53 <slaweq> #topic Grafana
15:29:30 <slaweq> https://grafana.opendev.org/d/BmiopeEMz/neutron-failure-rate?orgId=1
15:31:29 <slaweq> I see that neutron-tempest-slow-py3 is trending high today, but there was a small number of runs, so maybe it's nothing really serious
15:31:41 <slaweq> worth keeping an eye on it for now
15:31:58 <slaweq> other than that, things look pretty ok
15:32:32 <slaweq> do You see any issues there?
15:33:46 <slaweq> ok, so let's move on
15:33:48 <slaweq> #topic fullstack/functional
15:34:05 <slaweq> I noticed (again) functional test failures with db migration timeouts this week:
15:34:11 <slaweq> https://099638a2437d4a88b01b-b7b49a3857e4a968cf2542b58172db3c.ssl.cf2.rackcdn.com/798156/1/check/neutron-functional-with-uwsgi/6d49cc5/testr_results.html
15:34:21 <slaweq> ralonsoh may be interesting for You :)
15:34:40 <ralonsoh> I saw it, yes
15:34:48 <slaweq> as I think You recently removed the skip-if-timeout decorator from those tests, right?
15:34:51 <ralonsoh> I'll keep an eye on the CI
15:34:56 <slaweq> thx
15:35:00 <ralonsoh> some weeks ago
15:35:15 <slaweq> for now I saw it only once, so maybe it's nothing really serious, just a very overloaded node
15:35:15 <ralonsoh> and we don't execute those tests in parallel
15:35:21 <slaweq> but worth being aware of it
15:35:24 <ralonsoh> agree
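For reference, a "skip if timeout" decorator of the kind discussed here converts a test timeout into a skip instead of a failure. A minimal sketch of the pattern, assuming the timeout surfaces as a fixtures.TimeoutException (the removed helper may have differed in details):

```python
# Minimal sketch of a "skip on timeout" decorator of the kind discussed
# above; it may differ from the exact helper that was removed.
import functools
import unittest

import fixtures


def skip_if_timeout(reason):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            try:
                return f(self, *args, **kwargs)
            except fixtures.TimeoutException:
                # On a very overloaded CI node a DB migration test can
                # exceed its timeout; skip it instead of failing the run.
                raise unittest.SkipTest(reason)
        return wrapper
    return decorator
```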
15:35:47 <slaweq> and that's all regarding functional tests
15:35:59 <slaweq> for fullstack I only saw this error with the mtu update
15:36:07 <slaweq> but that is already reported and in progress
15:36:12 <slaweq> so I think we can move on
15:36:14 <slaweq> #topic Tempest/Scenario
15:36:20 <slaweq> neutron-tempest-slow-py3 - the nameservers issue again
15:36:25 <slaweq> https://8a9b6f5914f633048b01-596b3ee18079cc72e9aa5b1ed231f9fc.ssl.cf5.rackcdn.com/795117/12/check/neutron-tempest-slow-py3/6518f8d/testr_results.html
15:36:46 <ralonsoh> again?
15:36:48 <slaweq> ralonsoh is Your fix for that kind of issue already merged?
15:36:56 <ralonsoh> let me check, I think so
15:37:08 <ralonsoh> I'll ping you later
15:37:16 <slaweq> ralonsoh yes, that one is from yesterday, so it's a fresh thing :)
15:37:38 <slaweq> ok, please let me know if we should reopen our bug for that
15:37:40 <slaweq> can be tomorrow
15:37:56 <ralonsoh> ah ok
15:38:05 <ralonsoh> but we use cirros there, right?
15:38:29 <ralonsoh> my patch introduced the use of resolvectl
15:38:29 <lajoskatona> not ubuntu?
15:38:35 <ralonsoh> that is not in cirros, only in advanced images
15:38:40 <ralonsoh> yes, it's in ubuntu
15:39:06 <slaweq> in this test we use cirros
15:39:14 <slaweq> this is tempest test
15:39:21 <slaweq> not neutron_tempest_plugin
15:39:25 <ralonsoh> yeah...
15:39:36 <lajoskatona> oh, and there we use only cirros....
15:39:41 <ralonsoh> I think this is just a problem with the OS
15:39:42 <slaweq> and in tempest there is no such "advanced image" thing
15:39:44 <slaweq> it's only in neutron-tempest-plugin
15:39:58 <ralonsoh> yes... now I realized that
15:40:35 <ralonsoh> I'll see if, in cirros, there is another way to check that
15:40:57 <slaweq> thx
15:41:11 <lajoskatona> and the reason behind using only cirros is to keep tempest fast?
15:41:12 <slaweq> ralonsoh to check if there is a better way to check dns nameservers in cirros
15:41:16 <slaweq> #action ralonsoh to check if there is a better way to check dns nameservers in cirros
15:41:34 <slaweq> lajoskatona yes, with ubuntu and without nested virt it's a nightmare
15:41:43 <slaweq> we had it like that in neutron_tempest_plugin jobs in the past
15:41:57 <slaweq> and that's why we added this advanced_image option, to use ubuntu only for some tests
15:41:58 <slaweq> not for all
15:42:19 <lajoskatona> ok, thanks
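Since resolvectl is not available in cirros, one guest-agnostic alternative is to read /etc/resolv.conf over SSH, which works on both cirros and ubuntu. A hedged sketch in the tempest scenario-test style; ssh_client and dns_server are assumed names, not the actual fix:

```python
# Hedged sketch of a guest-agnostic nameserver check; "ssh_client" is
# assumed to be a tempest RemoteClient and "dns_server" the expected
# nameserver IP -- both are assumptions, not the actual fix.
def _assert_nameserver_configured(self, ssh_client, dns_server):
    # /etc/resolv.conf exists on both cirros and ubuntu guests, unlike
    # resolvectl, which is only present in the "advanced" images.
    resolv_conf = ssh_client.exec_command('cat /etc/resolv.conf')
    self.assertIn('nameserver %s' % dns_server, resolv_conf)
```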
15:43:21 <slaweq> ok, that's basically all I had for You today
15:43:37 <slaweq> generally CI looks pretty ok, I've seen many patches merged recently without rechecks
15:43:47 <slaweq> great job guys with all those CI issues :)
15:43:49 <slaweq> thank You!
15:44:23 <slaweq> do You have anything else You would like to discuss today?
15:44:30 <slaweq> or if not, we can finish a bit earlier today
15:45:31 <ralonsoh> I'm fine
15:46:02 <lajoskatona> me too
15:46:11 <slaweq> ok, thx for attending the meeting
15:46:13 <ralonsoh> bye
15:46:16 <slaweq> and have a great week
15:46:17 <slaweq> o/
15:46:21 <slaweq> #endmeeting