16:00:29 <mlavalle> #startmeeting neutron_ci
16:00:30 <openstack> Meeting started Tue Jun 11 16:00:29 2019 UTC and is due to finish in 60 minutes.  The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:31 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:34 <openstack> The meeting name has been set to 'neutron_ci'
16:00:45 <mlavalle> Hi there
16:00:58 <haleyb> o/
16:01:28 <mlavalle> Today we are going to have a brief meeting, since I have a hard stop at 30 minutes after the hour
16:02:04 <mlavalle> #topic Actions from previous meetings
16:02:09 <njohnston_> o/
16:02:29 <mlavalle> #undo
16:02:30 <openstack> Removing item from minutes: #topic Actions from previous meetings
16:03:03 <mlavalle> before going on, let's open Grafana http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:03:22 <mlavalle> #topic Actions from previous meetings
16:03:48 <mlavalle> mlavalle to debug neutron-tempest-plugin-dvr-multinode-scenario failures (bug 1830763)
16:03:49 <openstack> bug 1830763 in neutron "Debug neutron-tempest-plugin-dvr-multinode-scenario failures" [High,Confirmed] https://launchpad.net/bugs/1830763 - Assigned to Miguel Lavalle (minsel)
16:04:59 <mlavalle> I made a little bit of progress here.
16:05:44 <mlavalle> The test case that fails the most is test_connectivity_through_2_routers
16:05:57 <mlavalle> and I can reproduce it in my local environment
16:06:23 <mlavalle> so I am debugging it, although no definite root cause has been determined yet
16:06:32 <mlavalle> so I will continue working on this
16:07:47 <mlavalle> #action mlavalle to debug neutron-tempest-plugin-dvr-multinode-scenario failures (bug 1830763) reproducing most common failure: test_connectivity_through_2_routers
16:07:48 <openstack> bug 1830763 in neutron "Debug neutron-tempest-plugin-dvr-multinode-scenario failures" [High,Confirmed] https://launchpad.net/bugs/1830763 - Assigned to Miguel Lavalle (minsel)
16:08:22 <mlavalle> next is slaweq to check logs with failed test_dvr_ha_router_unbound_from_agents functional test
16:08:51 <mlavalle> he checked logs: http://logs.openstack.org/78/653378/7/check/neutron-functional/c5ac6a3/testr_results.html.gz
16:09:05 <mlavalle> and it seems the test was running on a slow VM
16:09:51 <mlavalle> since we have a shortened meeting today, let's go straight to
16:09:54 <mlavalle> #topic Grafana
16:10:22 <njohnston_> looks like the midonet co-gating job is broken as of Sunday
16:10:36 <haleyb> https://review.opendev.org/#/c/664614/
16:10:47 <haleyb> i just proposed that, might help with some of it
16:11:46 <haleyb> doh, and it's going to need an update, sigh
16:13:26 <haleyb> i will hack at it...
16:13:26 <mlavalle> njohnston_: yeap
16:14:59 <mlavalle> it also seems to me that we had a little bit of misbehavior today with neutron-tempest-plugin-scenario-linuxbridge
16:15:22 <mlavalle> not sure it's really a problem yet, but we should probably keep an eye on it
16:16:06 <mlavalle> and then also functional tests seem unhappy
16:16:16 <njohnston_> Another quick question - I see the designate scenario job is voting in the check queue but it is absent from the gate queue, should we think about adding it to gate?  Was it removed for a reason?
16:16:50 <mlavalle> I don't remember if it has ever voted in the gate queue
16:17:27 <mlavalle> I am not sure we need it in the gate
16:17:35 <mlavalle> why do you think it should be there?
16:17:42 <njohnston_> I don't feel strongly about the designate job one way or another, but it seemed like an anomalous status so I thought I'd see if others have opinions
16:18:24 <mlavalle> I don't see a reason, but if others come up with a good one, I wouldn't oppose it
16:18:40 <njohnston_> sounds good to me
16:19:10 <mlavalle> ok, let's look at functional now
16:19:30 <mlavalle> #topic fullstack/functional
16:20:14 <mlavalle> We have a new critical bug here: https://bugs.launchpad.net/neutron/+bug/1832307
16:20:15 <openstack> Launchpad bug 1832307 in neutron "Functional test neutron.tests.functional.agent.linux.test_ip_lib.IpMonitorTestCase.test_add_remove_ip_address_and_interface is failing" [Critical,Confirmed]
16:20:44 <mlavalle> this is with python2.7
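For context, the failing functional test adds and removes an IP address and an interface while neutron's IP monitor watches for the changes. Below is a rough, standalone illustration of that kind of operation, assuming the pyroute2 library and root privileges; it is not the neutron test code, and the interface name and address are placeholders.

    from pyroute2 import IPRoute

    # Illustrative only: create a dummy interface, add and remove an
    # address on it, then delete the interface. The real functional test
    # drives this through neutron's ip_lib helpers and asserts that the
    # IP monitor observes each change. Requires root privileges.
    ipr = IPRoute()
    try:
        ipr.link("add", ifname="test-ipmon0", kind="dummy")
        idx = ipr.link_lookup(ifname="test-ipmon0")[0]
        ipr.link("set", index=idx, state="up")

        ipr.addr("add", index=idx, address="192.0.2.10", prefixlen=24)
        ipr.addr("del", index=idx, address="192.0.2.10", prefixlen=24)
    finally:
        # Clean up the dummy interface even if something above failed.
        for idx in ipr.link_lookup(ifname="test-ipmon0"):
            ipr.link("del", index=idx)
        ipr.close()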
16:21:40 <mlavalle> any takers?
16:23:08 <mlavalle> ok
16:23:43 <mlavalle> in the case of fullstack, we have a couple of test cases failing:
16:23:46 <mlavalle> neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost
16:24:56 <mlavalle> There is a patch to mark it unstable: https://review.opendev.org/#/c/660592/
16:25:17 <mlavalle> so take a look
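For reference, "marking a test unstable" generally means wrapping it in a decorator that turns a failure into a skip while the tracking bug is investigated, so the intermittent failure stops blocking the gate. The sketch below is a minimal illustration of that pattern, not the code from the patch under review; the class name, the bug reference, and the forced failure are placeholders.

    import functools
    import unittest


    def unstable_test(reason):
        # Convert a failure of the decorated test into a skip, keeping the
        # reason (typically a bug reference) visible in the test output.
        def decorator(func):
            @functools.wraps(func)
            def wrapper(self, *args, **kwargs):
                try:
                    return func(self, *args, **kwargs)
                except unittest.SkipTest:
                    raise
                except Exception as exc:
                    raise unittest.SkipTest(
                        "%s marked unstable (%s), failure was: %s"
                        % (self.id(), reason, exc))
            return wrapper
        return decorator


    class FakeHAL3AgentTest(unittest.TestCase):
        # Placeholder test class, not the real fullstack test.

        @unstable_test("tracked in a launchpad bug")
        def test_ha_router_restart_agents_no_packet_lost(self):
            # The real test restarts the L3 agents and checks for packet
            # loss; this deliberate failure just demonstrates the skip.
            self.fail("simulated intermittent failure")


    if __name__ == "__main__":
        unittest.main()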
16:25:47 <mlavalle> and then we also have neutron.tests.fullstack.test_l3_agent.TestLegacyL3Agent.test_north_south_traffic
16:26:56 <mlavalle> any comments?
16:27:16 <haleyb> i think we should merge the patch for the unstable job at least
16:27:24 <mlavalle> I agree
16:27:37 <njohnston_> agreed
16:28:05 <mlavalle> ok, let's go to
16:28:11 <mlavalle> #topic Open discussion
16:28:22 <mlavalle> anything else we should discuss today?
16:28:48 <haleyb> https://review.opendev.org/#/c/640812/
16:29:15 <haleyb> This was a proposed change by njohnston_ to make the ovn-tempest job voting in the neutron queue
16:29:24 <mlavalle> yes I saw it
16:29:27 <haleyb> i missed we even had it there...
16:29:30 <njohnston_> The OVN co-gating job is much more reliable now than it was
16:29:46 <haleyb> i looked over 30 days and it was pretty good
16:31:21 <mlavalle> talk to dalvarez to remove his -1?
16:31:47 <haleyb> ack
16:32:14 <haleyb> mlavalle: it's the bottom of the hour, don't want to keep you
16:32:22 <mlavalle> ok
16:32:26 <mlavalle> closing meeting
16:32:37 <mlavalle> #endmeeting