15:00:36 <slaweq> #startmeeting neutron_ci
15:00:36 <openstack> Meeting started Wed Sep 16 15:00:36 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:40 <openstack> The meeting name has been set to 'neutron_ci'
15:01:27 <slaweq> hi
15:01:55 <lajoskatona> Hi
15:02:01 <bcafarel> o/
15:02:32 <slaweq> ok, I don't think we will have more people here, so let's start and do that quickly
15:02:33 <slaweq> :)
15:02:40 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:02:57 <slaweq> #topic Actions from previous meetings
15:03:15 <slaweq> there are only 2 actions on ralonsoh from last week
15:03:21 <slaweq> ralonsoh to check timing out neutron-ovn-tempest-full-multinode-ovs-master jobs - bug https://bugs.launchpad.net/neutron/+bug/1886807
15:03:22 <openstack> Launchpad bug 1886807 in neutron "neutron-ovn-tempest-full-multinode-ovs-master job is failing 100% times" [High,Confirmed] - Assigned to Maciej Jozefczyk (maciejjozefczyk)
15:03:31 <slaweq> and     ralonsoh to check issue with pep8 failures like https://zuul.opendev.org/t/openstack/build/6c8fbf9b97b44139bf1d70b9c85455bb
15:03:56 <slaweq> but I don't think he did anything with this last week, so I will just assign it to him again for next week as a reminder
15:04:08 <slaweq> #action ralonsoh to check timing out neutron-ovn-tempest-full-multinode-ovs-master jobs - bug https://bugs.launchpad.net/neutron/+bug/1886807
15:04:17 <slaweq> #action ralonsoh to check issue with pep8 failures like https://zuul.opendev.org/t/openstack/build/6c8fbf9b97b44139bf1d70b9c85455bb
15:04:25 <slaweq> and we can move on to the next topic
15:04:30 <bcafarel> +1
15:04:30 <slaweq> #topic Switch to Ubuntu Focal
15:04:38 <slaweq> bcafarel: any updates on that one?
15:04:46 <slaweq> as I saw You were playing a bit with this
15:05:06 <bcafarel> things could be worse, but so far the switch on "simple" jobs is not too bad
15:05:17 <bcafarel> mostly lower-constraints and sometimes pep8 that needs new hacking
15:05:39 <slaweq> ok
15:05:48 <slaweq> and do You know how the scenario jobs are doing?
15:06:11 <bcafarel> most should be visible in https://review.opendev.org/#/q/topic:migrate-to-focal+status:open (neutron*, network*, os-ken) and some will need to be backported to the victoria branch (once it exists)
15:06:54 <bcafarel> at least for the neutron-tempest-plugin tests it doesn't look too bad https://review.opendev.org/#/c/738163/
15:07:06 <bcafarel> sorry, wrong link - this is the neutron DNM patch
15:07:42 <bcafarel> https://review.opendev.org/#/c/748367/ - at some point most were passing, I need to check how it goes on the latest rechecks
15:08:00 <slaweq> it is almost all red
15:08:02 <slaweq> :/
15:08:49 <bcafarel> it was almost all green 2 rechecks ago :(
15:08:51 <slaweq> I didn't check Your recent changes in that patch
15:08:56 <slaweq> :)
15:09:07 <bcafarel> I will take a look, at least stable branches should still be passing
15:09:50 <bcafarel> mostly ansible syntax fixes so nothing big (in theory)
15:09:55 <slaweq> ok
15:10:01 <slaweq> thx for taking care of it bcafarel
15:10:21 <slaweq> today I also saw error in ovn job: https://zuul.opendev.org/t/openstack/build/3f0a572da4f343ceb5289f510029606f
15:10:28 <slaweq> did You see it before too?
15:11:11 <bcafarel> np, lajoskatona and others also helped in this fix flurry!
15:11:45 <bcafarel> "5.4.0-47-generic/build is version 5.4.55" hmm interesting
15:11:54 <bcafarel> but it is the first time I've seen it
15:12:24 <slaweq> it failed on dnm patch https://review.opendev.org/#/c/738163/
15:13:20 <slaweq> bcafarel: please keep me updated about this, I think we should speed this work up now
15:13:30 <slaweq> I will also try to focus on it tomorrow
15:14:05 <bcafarel> slaweq: sure thing, it would be nice to get most fixes in before the victoria branch is cut
15:14:13 <slaweq> yes
15:14:20 <slaweq> and lajoskatona thx a lot for help with that too :)
15:14:25 <bcafarel> python-neutronclient, neutron-lib and os-ken will already need backports to get the branch working, let's keep the list short :)
15:14:39 <lajoskatona> slaweq, bcafarel: no problem
15:14:48 <slaweq> yes, I know
15:15:03 <slaweq> and we have about 1.5-2 weeks before branching in other repos :/
15:15:15 <lajoskatona> slaweq: by the way, https://review.opendev.org/739672 is for the bgpvpn focal migration
15:15:55 <slaweq> lajoskatona: ok, I will review it tonight
15:16:20 <lajoskatona> slaweq: thanks
15:16:46 <slaweq> I think we can move on
15:16:58 <slaweq> to the next topic now
15:17:05 <slaweq> #topic Stadium projects
15:17:22 <slaweq> do You have anything related to stadium projects?
15:17:55 <lajoskatona> not really
15:18:20 <lajoskatona> not stadium, but it may be interesting that networking-l2gw finally found its resting place in x/
15:18:21 <bcafarel> already covered in focal topic for me (most need some small fixes)
15:18:35 <lajoskatona> I think I just fixed its zuul jobs for focal
15:18:42 <slaweq> lajoskatona++
15:18:58 <bcafarel> nice, I think I saw it mentioned in networking-midonet fixes
15:19:24 <slaweq> yeah, and networking-midonet ci is also getting better recently :)
15:19:29 <lajoskatona> yes, midonet depends on l2gw and on tap-as-a-service as well (odl does the same)
15:20:15 <tosky> they somehow managed to port the zuulv3 jobs
15:21:14 <slaweq> tosky: yes, I need to review https://review.opendev.org/#/c/749857/ now
15:22:01 <tosky> the problem is that review depends on two reviews which may need to be merged together, or at least the initial one should be fixed
15:23:08 <slaweq> tosky: are You talking about https://review.opendev.org/#/c/751652/ ?
15:23:45 <lajoskatona> and I think at some point they needed this one as well: https://review.opendev.org/752058
15:23:46 <tosky> which depends on https://review.opendev.org/#/c/738046/
15:23:51 <lajoskatona> but this one is on its way to being merged
15:24:31 <tosky> uh, is https://review.opendev.org/#/c/738046/ merged with a -1?
15:24:37 <tosky> was it forcibly merged?
15:25:00 <slaweq> tosky: it seems so
15:25:04 <tosky> oh
15:25:09 <tosky> then it's easier
15:25:12 <bcafarel> \o/
15:25:12 <slaweq> :)
15:25:49 <tosky> it's just https://review.opendev.org/#/c/751652 + https://review.opendev.org/#/c/749857
15:26:22 <tosky> please try to help with them before next week, that will avoid painful backports to victoria
15:26:48 <slaweq> tosky: yes, I will review tonight or tomorrow morning
15:26:55 <tosky> thanks slaweq
15:27:02 <slaweq> I hope that now I will have a bit more time for that :)
15:27:49 <tosky> and sorry if I went from being your friendly neighbourhood zuulv3 helper to being a nightmare :)
15:27:49 <slaweq> ok, with that I think we can move on to the next topic
15:28:06 <slaweq> tosky: no, You are not a nightmare for sure ;)
15:28:24 <slaweq> thx for moving this work forward and forcing us to do so
15:28:40 <slaweq> #topic Stable branches
15:28:51 <slaweq> Ussuri dashboard: http://grafana.openstack.org/d/pM54U-Kiz/neutron-failure-rate-previous-stable-release?orgId=1
15:28:53 <slaweq> Train dashboard: http://grafana.openstack.org/d/dCFVU-Kik/neutron-failure-rate-older-stable-release?orgId=1
15:29:27 <bcafarel> I did not check them much recently, nothing major from what I know
15:30:43 <slaweq> from the dashboards it looks like there weren't many patches in the queues recently
15:31:38 <bcafarel> no, I have a few in my backlog, but it was relatively quiet (everybody is busy with focal and victoria I guess)
15:32:01 <slaweq> and downstream ;)
15:32:10 <bcafarel> that too :)
15:32:15 <slaweq> ok, so let's move on
15:32:19 <slaweq> next topic
15:32:21 <slaweq> #topic Grafana
15:32:29 <slaweq> #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:33:40 <slaweq> from the dashboard I see that many jobs are on high failure rate recently
15:34:10 <slaweq> but when I was checking specific patches I noticed many DNM patches with e.g. the migration to focal or some other things, and those jobs failed mostly there
15:34:25 <slaweq> so I personally didn't see any real big problems there
15:34:34 <slaweq> but maybe You know about something that I missed :)
15:35:59 <bcafarel> if there are any, they're probably only minor issues - we got patches merged recently without xx rechecks
15:36:15 <slaweq> yes
15:36:31 <slaweq> and I also didn't find any new serious issues when I was checking specific jobs
15:36:46 <slaweq> so I don't really have any specific failures to discuss today
15:37:41 <slaweq> just one thing about periodic job neutron-ovn-tempest-ovs-master-fedora
15:37:50 <slaweq> it has been failing with RETRY_LIMIT for 4 days
15:37:55 <slaweq> like https://zuul.openstack.org/build/19c0388f912648768375c06c40cfce6b
15:38:00 <slaweq> anyone wants to check it?
15:38:41 <bcafarel> sigh this job switching to RETRY_LIMIT is more regular than a Swiss clock
15:38:49 <slaweq> LOL
15:38:52 <bcafarel> I can add it to my pile
15:39:01 <slaweq> you made my day with that comment :D
15:39:10 <bcafarel> happy to help :)
15:39:38 <lajoskatona> :-)
15:40:05 <slaweq> I'm lucky I wasn't drinking coffee just now, otherwise I would need to clean my desk :P
15:40:15 <slaweq> thx for taking care of it
15:40:26 <slaweq> #action bcafarel to check failing periodic fedora job
15:40:57 <slaweq> ok, and with that I don't have anything else for today
15:41:13 <slaweq> do You want to discuss something, or can I give You 20 minutes back?
15:41:33 <bcafarel> second option is tempting
15:41:40 <slaweq> I knew it :P
15:41:47 <slaweq> ok, thx for attending the meeting
15:41:53 <slaweq> and see You online
15:41:55 <slaweq> #endmeeting