15:01:24 <slaweq> #startmeeting neutron_ci
15:01:25 <openstack> Meeting started Tue May 18 15:01:24 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:26 <slaweq> hi
15:01:28 <openstack> The meeting name has been set to 'neutron_ci'
15:01:35 <bcafarel> o/
15:01:36 <slaweq> please give me 5 minutes before we start
15:01:42 <bcafarel> looks like bot is back :)
15:02:22 <lajoskatona> Hi
15:02:23 <ralonsoh> hi
15:02:25 <obondarev> o/
15:03:50 <ralonsoh> in the meantime: https://review.opendev.org/c/openstack/neutron/+/791983
15:04:07 <ralonsoh> this is not triggering FTs, so I pushed another patch on top of it
15:04:12 <ralonsoh> https://review.opendev.org/c/openstack/neutron/+/791991/1
15:04:13 <slaweq> ok, I'm back and we can start
15:05:14 <slaweq> ralonsoh: +2 :)
15:05:16 <slaweq> thx for that patch
15:05:19 <bcafarel> not too hard to review :)
15:05:24 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:05:34 <slaweq> please open that link and we can move on
15:05:50 <openstack> slaweq: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.
15:05:54 <slaweq> #topic Actions from previous meetings
15:06:03 <slaweq> ralonsoh to skip test_keepalived_spawns_conflicting_pid_vrrp_subprocess functional tests
15:06:17 <ralonsoh> https://review.opendev.org/c/openstack/neutron/+/791986
15:06:24 <ralonsoh> I've tried pushing another patch
15:06:31 <ralonsoh> https://review.opendev.org/c/openstack/neutron/+/791243
15:06:39 <ralonsoh> but that seems to be a bad idea
15:06:45 <ralonsoh> still trying to properly fix this problem
15:07:26 <slaweq> so let's go with the skip for now and we can revert it when the bug is fixed
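[editor's note: a temporary skip like the one agreed above can be sketched as below — a minimal illustration with a stand-in test class, not the actual patch under review]

```python
import unittest


class KeepalivedManagerTestCase(unittest.TestCase):
    """Illustrative stand-in for the functional test class discussed above."""

    @unittest.skip("Temporarily skipped while the keepalived/VRRP bug is "
                   "investigated; revert this skip once a proper fix lands")
    def test_keepalived_spawns_conflicting_pid_vrrp_subprocess(self):
        # Would exercise the flaky keepalived/VRRP code path; never runs
        # while the skip decorator is in place.
        self.fail("flaky code path")
```

The point of the pattern is that the skip carries the revert instruction in its reason string, so the later revert patch is easy to find.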
15:08:19 <slaweq> ok, next one
15:08:21 <slaweq> lajoskatona to switch networking-sfc jobs to ovs explicitly
15:09:05 <lajoskatona> yeah, I changed zuul where it was necessary, and to be sure, set it explicitly even in places like odl:
15:09:17 <lajoskatona> https://review.opendev.org/q/topic:%22ovn-devstack%22+(status:open%20OR%20status:merged)
15:09:32 <lajoskatona> oh, these are all the ovn-change related patches
15:09:50 <lajoskatona> https://review.opendev.org/q/topic:%2522ovn-devstack%2522+(status:open+OR+status:merged)+owner:self
15:11:26 <slaweq> thx lajoskatona
15:11:33 <slaweq> I will check the patches which I haven't reviewed yet
15:12:02 <slaweq> ok, next one
15:12:04 <lajoskatona> thanks
15:12:05 <slaweq> slaweq to pin neutron-tempest-plugin for train after it becomes EM
15:12:12 <slaweq> patch https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/791494
15:12:35 <slaweq> please check it when You have a few minutes
15:13:09 <slaweq> and the last one
15:13:11 <slaweq> slaweq to check functional job timeouts
15:13:19 <slaweq> I was checking some of the jobs from last week
15:13:35 <slaweq> there is already a bug https://bugs.launchpad.net/neutron/+bug/1889781 opened for that for some time
15:13:35 <openstack> Launchpad bug 1889781 in neutron "Functional tests are timing out" [High,Confirmed]
15:13:48 <slaweq> today I added new comment there https://bugs.launchpad.net/neutron/+bug/1889781/comments/5
15:14:21 <slaweq> in most of those cases when it times out, it looks like everything works fine, tests are passing, and then there is e.g. a 1h gap in the logs
15:14:32 <slaweq> and the job times out
15:14:39 <slaweq> so it hangs for 1h or more
15:14:56 <lajoskatona> I remember similar logs
15:14:59 <slaweq> and the next logs appear when it has already timed out and been cleaned up by zuul
15:15:35 <slaweq> I remember that we had a similar issue with unit tests (or functional tests, I don't remember exactly) and stestr a long time ago
15:15:52 <slaweq> and the workaround for that was to limit generated output from the tests
15:16:29 <slaweq> if that is still the case here, maybe https://review.opendev.org/c/openstack/neutron-lib/+/779720 will help with that
15:16:30 <lajoskatona> You mean changing log level or similar?
15:16:46 <bcafarel> or filtering some verbose logs out
15:16:51 <bcafarel> (if I recall correctly)
15:16:53 <slaweq> lajoskatona: I mean not logging e.g. warnings or things like that to stderr/stdout
15:16:59 <lajoskatona> ok
15:17:20 <slaweq> but maybe I'm wrong and maybe there is something else happening there
15:17:27 <slaweq> I really don't know :/
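[editor's note: the kind of output limiting discussed above can be sketched as below — a standalone illustration of routing warnings through logging and only emitting errors to stderr, not the actual neutron-lib patch]

```python
import io
import logging


def quiet_test_logging(level=logging.ERROR):
    """Keep test stderr small: capture warnings.warn() into logging and
    only emit records at >= `level`, so a chatty test suite does not
    flood the runner's captured output stream."""
    logging.captureWarnings(True)      # warnings.warn() -> 'py.warnings' logger
    handler = logging.StreamHandler()  # defaults to sys.stderr
    handler.setLevel(level)
    root = logging.getLogger()
    root.handlers = [handler]          # replace any noisier handlers
    root.setLevel(logging.DEBUG)       # loggers still record everything
    return handler
```

Whether less stdout/stderr actually avoids the hang depends on the stestr/subunit interaction being the culprit, which the discussion above leaves open.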
15:20:25 <slaweq> ok, I think we can move on
15:20:48 <slaweq> if You have any ideas about the timeout issue, feel free to send patches :)
15:20:58 <slaweq> #topic Stadium projects
15:21:07 <slaweq> lajoskatona: any updates about stadium's ci?
15:22:24 <lajoskatona> there is the ovn-change topic, which we covered
15:22:57 <lajoskatona> and I have seen doc build issues, I have os-ken as an example, have to check if it appears elsewhere
15:23:16 <slaweq> ouch, I didn't notice that
15:23:18 <lajoskatona> this is for os-ken: https://review.opendev.org/c/openstack/os-ken/+/791682
15:23:40 <ralonsoh> good catch
15:24:20 <slaweq> +2 and +W now :)
15:24:24 <slaweq> that was fast
15:24:32 <slaweq> thx lajoskatona for fixing that
15:24:38 <bcafarel> ah that's why I saw quite a few sphinx/"doc fix" patches pop up in my gerrit mails then
15:26:17 <slaweq> anything else or can we move on?
15:26:40 <lajoskatona> ohh, one more small thing
15:27:12 <lajoskatona> I think today at the infra meeting the taas rename will be a topic (https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Upcoming_Project_Renames )
15:27:46 <ralonsoh> are we going back to quantum?
15:27:54 <lajoskatona> Infra needs a gerrit maintenance window for the rename, but otherwise there's no issue with it
15:28:27 <slaweq> lajoskatona: do You need my +1 for that?
15:28:31 <lajoskatona> :-Dno, but tap-as-a-service will be moved from x/ to openstack/
15:28:44 <slaweq> so it will be part of the neutron stadium now, right?
15:28:47 <bcafarel> that was fast
15:28:48 <lajoskatona> this is the patch for the rename : https://review.opendev.org/c/openstack/project-config/+/790093
15:29:00 <lajoskatona> don't know, you can vote on it
15:29:15 <slaweq> I will, sure
15:29:37 <lajoskatona> For stadium we have to go through the checklist, but the 1st step is to have the repo under openstack/
15:30:20 <lajoskatona> This is the etherpad for it: https://etherpad.opendev.org/p/make_taas_stadium
15:30:46 <slaweq> lajoskatona: I added one comment there
15:30:54 <lajoskatona> it is just a list of notes on what to check, there are interesting things like introducing ovo for taas and similar
15:30:58 <slaweq> but it can be done as follow up also
15:31:04 <lajoskatona> thanks, that's it for stadium
15:31:18 <slaweq> thx
15:31:23 <slaweq> next topic then
15:31:25 <slaweq> #topic Stable branches
15:31:33 <slaweq> bcafarel: anything new worth discussing here?
15:32:07 <bcafarel> I took a few days off so still catching up, but from what I've seen CI is good
15:32:30 <bcafarel> I have a few tempest plugin patches for stable branches for your eyes, 1 sec
15:33:01 <bcafarel> https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/786657 to fix rocky and clean pending backports, https://review.opendev.org/c/openstack/neutron/+/786010 to drop -master jobs from wallaby (and save some CI resources)
15:33:47 <slaweq> ahh, I can't approve any of them :/
15:33:55 <slaweq> so You need ralonsoh and lajoskatona for that
15:33:59 <ralonsoh> sure
15:34:14 <bcafarel> :)
15:34:46 <slaweq> ok, so next topic then
15:34:48 <slaweq> #topic Grafana
15:34:53 <slaweq> #link https://grafana.opendev.org/d/BmiopeEMz/neutron-failure-rate?orgId=1
15:37:27 <slaweq> I don't see anything bad from the grafana
15:38:15 <slaweq> do You see anything wrong that You want to talk about?
15:39:14 <ralonsoh> no thanks
15:39:48 <slaweq> so lets move on to the next topic
15:39:54 <slaweq> #topic fullstack/functional
15:40:04 <slaweq> regarding functional jobs, we already discussed the timeouts
15:40:14 <slaweq> and I didn't really find any other new issues worth discussing now
15:40:30 <slaweq> but regarding fullstack I found one new issue which repeats pretty often
15:40:35 <slaweq> https://bugs.launchpad.net/neutron/+bug/1928764
15:40:35 <openstack> Launchpad bug 1928764 in neutron "Fullstack test TestUninterruptedConnectivityOnL2AgentRestart failing often with LB agent" [Critical,Confirmed]
15:40:48 <slaweq> can You check if it maybe rings a bell for You? :)
15:40:59 <slaweq> it's related to linuxbridge
15:43:08 <ralonsoh> always with flat network
15:43:16 <slaweq> nope
15:43:22 <slaweq> sometimes with vxlan iirc
15:43:26 <ralonsoh> right
15:43:30 <slaweq> but always same test and LB agent
15:45:31 <slaweq> anyway, if You have some cycles to look at it, that would be great
15:45:42 <slaweq> if not, I will probably mark that test as unstable temporarily
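[editor's note: marking a test as unstable can be sketched as below — a simplified standalone decorator in the spirit of neutron's test helpers, not neutron's actual implementation; the class and test names are illustrative]

```python
import functools
import unittest


def unstable_test(reason):
    """Mark a known-flaky test: a failure becomes a skip that carries the
    bug reference, while a pass still counts as a pass."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            try:
                return func(self, *args, **kwargs)
            except unittest.SkipTest:
                raise  # an intentional skip passes through unchanged
            except Exception as exc:
                raise unittest.SkipTest(
                    "Marked as unstable (%s), failure: %s" % (reason, exc))
        return wrapper
    return decorator


class DemoTest(unittest.TestCase):
    @unstable_test("bug 1928764")
    def test_flaky(self):
        raise AssertionError("intermittent failure")
```

This keeps the gate green without deleting the test, and the bug reference makes it easy to find and un-mark once the root cause is fixed.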
15:47:47 <slaweq> ok, and that's all what I had for today's meeting
15:48:02 <slaweq> regarding other jobs, I didn't find any new serious issues
15:48:09 <bcafarel> that's nice
15:48:19 <slaweq> in the periodic jobs only the tobiko job is failing, I will check that when I have some time
15:48:27 <slaweq> but that's nothing critical for sure
15:48:46 <slaweq> so if You don't have anything else for today, I will give You a few minutes back now
15:50:24 <slaweq> ok, thx for attending the meeting today
15:50:27 <ralonsoh> bye!
15:50:30 <slaweq> have a nice evening!
15:50:32 <slaweq> bye
15:50:35 <bcafarel> enjoy the minutes back :) o/
15:50:37 <slaweq> #endmeeting