15:00:14 <slaweq> #startmeeting neutron_ci
15:00:15 <openstack> Meeting started Wed Sep 23 15:00:14 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:19 <openstack> The meeting name has been set to 'neutron_ci'
15:00:21 <ralonsoh> hi
15:01:08 <lajoskatona> Hi
15:01:27 <bcafarel> o/
15:01:40 <slaweq> ok, I think we can start
15:01:47 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:01:48 <slaweq> Please open now :)
15:02:36 <slaweq> #topic Actions from previous meetings
15:02:42 <slaweq> ralonsoh to check timing out neutron-ovn-tempest-full-multinode-ovs-master jobs - bug https://bugs.launchpad.net/neutron/+bug/1886807
15:02:53 <openstack> slaweq: Error: Could not gather data from Launchpad for bug #1886807 (https://launchpad.net/bugs/1886807). The error has been logged
15:03:18 <ralonsoh> slaweq, sorry, I didn't start this task
15:03:34 <slaweq> ralonsoh: np
15:03:42 <slaweq> should I assign this one to You for next week?
15:03:47 <ralonsoh> yes
15:03:49 <bcafarel> probably the bot complaining launchpad is slooow
15:04:05 <slaweq> #action ralonsoh to check timing out neutron-ovn-tempest-full-multinode-ovs-master jobs - bug https://bugs.launchpad.net/neutron/+bug/1886807
15:04:15 <openstack> slaweq: Error: Could not gather data from Launchpad for bug #1886807 (https://launchpad.net/bugs/1886807). The error has been logged
15:04:15 <slaweq> thx ralonsoh
15:04:17 <slaweq> ok, next one
15:04:19 <slaweq> ralonsoh to check issue with pep8 failures like https://zuul.opendev.org/t/openstack/build/6c8fbf9b97b44139bf1d70b9c85455bb
15:04:56 <ralonsoh> yes, I pushed a patch for this one
15:05:03 <ralonsoh> and Oleg another one on top of mine
15:05:04 <ralonsoh> let me check
15:05:27 <ralonsoh> https://review.opendev.org/#/c/748594/
15:05:39 <ralonsoh> https://review.opendev.org/#/c/749320/
15:05:49 <ralonsoh> No more issues reported
15:06:10 <slaweq> thx ralonsoh :)
15:06:21 <slaweq> ok, and the last one from last week
15:06:23 <slaweq> bcafarel to check failing periodic fedora job
15:06:52 * bcafarel looks for links
15:07:24 <bcafarel> short version: fedora jobs are broken on f31, infra aims to switch to f32
15:07:39 <slaweq> bcafarel: and on 32 it works, right?
15:07:47 <slaweq> if yes, I'm ok with that
15:08:02 <bcafarel> with https://review.opendev.org/#/c/750292 as depends-on that job works
15:08:07 <tosky> f31 is out of support in ~2 months anyway
15:08:17 <tosky> one month after the release of f33
15:08:31 <slaweq> bcafarel++ ok, thx for checking
15:08:41 <bcafarel> but the devstack fedora job fails in the linked review on neutron-dhcp-agent issues (quite a few "No such file or directory" errors on pid and conf files...)
15:10:16 <slaweq> ouch
15:10:18 <slaweq> https://zuul.opendev.org/t/openstack/build/d7d455eadaac4f5081fca5dd7197b6ea/log/controller/logs/screen-q-dhcp.txt?severity=4
15:10:49 <slaweq> it could be something with python 3.8 maybe
15:10:57 <slaweq> as f32 is using that version of python already
15:11:43 <slaweq> hmm, ubuntu focal also uses python 3.8
15:12:00 <bcafarel> focal is on 3.8 too (just checked)
15:12:03 <slaweq> and we have there errors like https://zuul.opendev.org/t/openstack/build/af52abc771a541548d8e528af79a524d/log/controller/logs/screen-q-dhcp.txt?severity=4
15:12:10 <slaweq> but job passed
15:12:26 <slaweq> anyone wants to report bug and investigate that?
15:12:48 <ralonsoh> maybe next week
15:13:20 <slaweq> ok, so I will report a bug and we will see next week
15:13:27 <slaweq> it's not critical for us I think
15:13:40 <slaweq> but worth having it tracked
15:13:57 <slaweq> #action slaweq to report bug with neutron-dhcp-agent and Fedora 32
15:13:58 <ralonsoh> TBH, this should not be logged as an error
15:14:15 <slaweq> ralonsoh: so maybe that's a red herring
15:14:22 <ralonsoh> we are just checking if the interface exists and has a MAC
15:14:30 <ralonsoh> could be, yes
15:15:04 <lajoskatona> Then perhaps warning would be enough?
15:15:11 <slaweq> or even debug
15:15:21 <ralonsoh> info
15:15:28 <ralonsoh> we return false
15:15:34 <ralonsoh> not an exception
15:15:40 <ralonsoh> we should inform about this
15:15:55 <ralonsoh> that could be used in an active wait, waiting for the interface to be ready
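A minimal sketch of the pattern ralonsoh describes: the check returns False instead of raising and logs at info, so a caller can use it in an active wait until the interface is ready. The names (get_mac, interface_ready, wait_for_interface) and the sysfs lookup are assumptions for illustration, not Neutron's actual ip_lib helpers:

```python
import logging
import time

LOG = logging.getLogger(__name__)


def get_mac(device):
    # Assumed lookup via sysfs for illustration; Neutron's real code
    # goes through its own ip_lib/privsep machinery instead.
    try:
        with open('/sys/class/net/%s/address' % device) as f:
            return f.read().strip()
    except OSError:
        return None


def interface_ready(device):
    # Returns False rather than raising: a missing interface is expected
    # while the port is still being plugged, so this is logged at info,
    # not error.
    mac = get_mac(device)
    if mac is None:
        LOG.info("Interface %s does not exist or has no MAC yet", device)
        return False
    return True


def wait_for_interface(device, timeout=60, interval=2):
    # Active wait: poll the predicate until it succeeds or we time out.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if interface_ready(device):
            return
        time.sleep(interval)
    raise RuntimeError("interface %s not ready after %s seconds"
                       % (device, timeout))
```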
15:16:39 <lajoskatona> I ran a quick search on logstash and as far as I can see it happens quite often, but the jobs are passing
15:16:45 <slaweq> ok, but still there are failed tests on that job
15:16:56 <slaweq> and we have to check why so many tests are failing due to ssh issues
15:17:02 <slaweq> I will report this bug and we will see
15:17:18 <bcafarel> could it be similar to failing tests we see in focal?
15:17:19 <slaweq> ralonsoh: if You have time please report a bug for this logging thing
15:17:23 <ralonsoh> sure
15:17:32 <slaweq> bcafarel: nah, those are other tests
15:17:40 <slaweq> so I don't think it's the same thing really
15:18:34 <bcafarel> it would have been too easy to have one single root cause :(
15:18:46 <slaweq> bcafarel: exactly
15:18:53 <slaweq> ok, I think we can move on now
15:18:56 <slaweq> #topic Switch to Ubuntu Focal
15:19:05 <slaweq> Etherpad: https://etherpad.opendev.org/p/neutron-victoria-switch_to_focal
15:19:09 <slaweq> I updated it a bit today
15:19:32 <slaweq> I know that gmann is going to pull the trigger and switch all devstack based jobs to focal today
15:20:01 <slaweq> so we may have problems with some jobs and in such case we should propose patch(es) to switch unstable jobs to bionic temporarily
15:20:09 <slaweq> and then focus on debugging those issues
15:20:10 <gmann> yeah these are in the gate now: https://review.opendev.org/#/c/731207/ https://review.opendev.org/#/c/734700/
15:21:25 <slaweq> for unstable neutron-tempest-plugin scenario jobs I proposed today https://review.opendev.org/#/c/753552/
15:21:32 <slaweq> hopefully it will help
15:21:34 <slaweq> but lets see
15:22:41 <bcafarel> if https://review.opendev.org/#/c/738163/ passes fine on top of it, that will be nice
15:22:50 <bcafarel> sorry https://review.opendev.org/#/c/748367
15:23:02 <bcafarel> (too many tabs with "Focal" in it...)
15:23:47 <slaweq> thx bcafarel for taking care of it
15:23:55 <slaweq> any other updates regarding this topic?
15:24:47 <bcafarel> did the review-priority patches get merged? after the jobs are switched it may take a bit longer to merge stuff
15:25:12 <slaweq> bcafarel: which review-priority patches?
15:25:44 <bcafarel> https://review.opendev.org/746347 / https://review.opendev.org/747774
15:26:29 <bcafarel> hopefully we can get them in despite all the focal switch + ovs hash issues
15:26:31 <lajoskatona> no they are failing with the ovs build issue :-(
15:27:05 <slaweq> yep, both failed on ovn jobs :/
15:27:42 <slaweq> ok, I think we can move on
15:27:45 <slaweq> #topic Stadium projects
15:27:54 <slaweq> anything new regarding ci of stadium projects?
15:28:59 <lajoskatona> nothing from my side
15:29:23 <slaweq> ok, thx lajoskatona
15:29:25 <slaweq> so lets move on
15:29:30 <slaweq> #topic Stable branches
15:29:40 <slaweq> anything here? bcafarel?
15:29:41 <bcafarel> we may see some activity there from focal issues we haven't spotted yet, but so far so good
15:29:46 <bcafarel> (late answer for stadium projects)
15:30:31 <bcafarel> for stable, a few rechecks needed here and there, but backports got in nicely over the last week (all the way back to queens)
15:31:30 <ralonsoh> bcafarel, hmmmmm
15:31:31 <ralonsoh> https://review.opendev.org/#/c/752387/
15:31:35 <ralonsoh> still waiting!!
15:31:53 <ralonsoh> ok, sorry, the OVN CI jobs
15:32:11 <slaweq> :)
15:32:30 <slaweq> ok, so lets move on
15:32:31 <bcafarel> ah ussuri yep it probably is broken now
15:32:32 <slaweq> #topic Grafana
15:33:57 <slaweq> #link http://grafana.openstack.org/d/Hj5IHcSmz/neutron-failure-rate?orgId=1
15:34:35 <slaweq> what worries me is the neutron-functional-with-uwsgi job, which has been failing pretty often recently
15:34:53 <slaweq> but when I was checking some jobs' results today, it was mostly some ovs or db timeouts
15:34:59 <ralonsoh> agree with this
15:35:09 <slaweq> so it looked to me like this may just be overloaded infra
15:35:50 <slaweq> so unless You have some specific example of issues to investigate there, I would say we wait a few more days and see how it goes
15:37:41 <slaweq> and that is basically all from my side for today
15:38:21 <slaweq> do You have anything else You want to discuss today?
15:38:26 <ralonsoh> no thanks
15:38:32 <slaweq> if not I will give You 20 minutes back :)
15:39:06 <slaweq> ok, so thx for attending the meeting
15:39:11 <slaweq> and see You online :)
15:39:13 <slaweq> o/
15:39:15 <bcafarel> o/ and let's see how focal goes
15:39:18 <lajoskatona> Bye
15:39:19 <slaweq> #endmeeting