15:00:26 <slaweq> #startmeeting neutron_ci
15:00:27 <openstack> Meeting started Wed Mar  4 15:00:26 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:28 <slaweq> hi
15:00:29 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:30 <ralonsoh> hi
15:00:32 <openstack> The meeting name has been set to 'neutron_ci'
15:00:35 <njohnston> o/
15:00:44 <slaweq> please give me 3 minutes before we start
15:02:08 <bcafarel> that gave me time to find my way to this channel
15:02:52 <slaweq> ok, I'm ready now
15:03:04 <slaweq> first of all
15:03:05 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:03:10 <slaweq> #topic Actions from previous meetings
15:03:21 <slaweq> first one:
15:03:23 <slaweq> slaweq to drop neutron-tempest-dvr job and finally replace it with neutron-tempest-dvr-ha-multinode-full
15:03:28 <slaweq> Done: https://review.opendev.org/#/c/710071/
15:03:52 <slaweq> and the second one
15:03:54 <slaweq> ralonsoh to check periodic neutron-ovn-tempest-ovs-master-fedora job's failures
15:04:16 <ralonsoh> yes, that problem was caused by an OVN test failure
15:04:25 <ralonsoh> actually by an OVN problem
15:04:32 <ralonsoh> this is now solved and released
15:04:50 <slaweq> thx ralonsoh
15:06:31 <slaweq> I saw this test passing yesterday, for example
15:06:45 <slaweq> and that's all actions from last week
15:06:53 <slaweq> #topic Stadium projects
15:07:05 <slaweq> standardize on zuul v3
15:07:06 <slaweq> Etherpad: https://etherpad.openstack.org/p/neutron-train-zuulv3-py27drop
15:07:23 <slaweq> I think that my 2 patches for neutron-vpnaas jobs are ready for review
15:07:49 <njohnston> excellent, I will take a look
15:08:00 <slaweq> njohnston: thx
15:08:38 <slaweq> from the other projects, I see we are still missing patches for networking-odl and midonet
15:08:59 <slaweq> and that would be all IMO
15:09:47 <bcafarel> next we can start tackling W goals then? ;)
15:10:23 <slaweq> but the community seems to be too slow for us :P
15:11:37 <slaweq> anything else related to the stadium projects?
15:11:41 <slaweq> or can we move on?
15:12:12 <bcafarel> nothing from me
15:13:19 <slaweq> ok, so let's move on
15:13:27 <slaweq> #topic Grafana
15:13:33 <slaweq> #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:14:45 <slaweq> generally this week it isn't bad
15:15:16 <slaweq> functional jobs went down a bit
15:15:46 <slaweq> non-voting ovn jobs are going down nicely too
15:16:16 <slaweq> neutron-tempest-plugin jobs are below a 10% failure rate (except the dvr multinode one, which is still non-voting)
15:16:50 <slaweq> but I again collected some stats about the number of rechecks from the last few weeks
15:16:51 <slaweq> week 5 of 2020 - 3.9
15:16:53 <slaweq> week 6 of 2020 - 5.8
15:16:55 <slaweq> week 7 of 2020 - 1.75
15:16:57 <slaweq> week 8 of 2020 - 5.4
15:16:59 <slaweq> week 9 of 2020 - 5.6
15:17:01 <slaweq> week 10 of 2020 - 3.3
15:17:03 <slaweq> and tbh that sucks :/
15:17:40 <bcafarel> master only I suppose?
15:18:26 <slaweq> bcafarel: yes, master branch only
15:18:42 <slaweq> but I'm afraid that for stable branches it may be even worse
15:18:51 <bcafarel> oh yes
15:19:00 <bcafarel> so if that's master-only, yes, these are not nice numbers
15:19:16 <bcafarel> stable branches were quite unstable so far in 2020, but well, mostly for known reasons
15:19:32 <slaweq> yes, I know about the reasons
15:19:39 <slaweq> but still - that sucks :/
15:19:50 <slaweq> I don't know how but we should do better
15:19:54 <slaweq> :)
15:20:26 <slaweq> ok, anything else related to grafana?
15:22:01 <slaweq> ahh, one small thing - we need to remove the neutron-tempest-dvr job from grafana now, I will send a patch for that
15:22:13 <slaweq> #action slaweq to remove neutron-tempest-dvr job from grafana
15:22:29 <slaweq> ok, so let's go to the next topic
15:22:31 <slaweq> #topic fullstack/functional
15:22:46 <slaweq> as usual, I went through failures from the last few days
15:22:57 <slaweq> and I didn't find too many (new) problems in fact
15:23:16 <slaweq> for the functional job there is bug https://bugs.launchpad.net/neutron/+bug/1865453
15:23:17 <openstack> Launchpad bug 1865453 in neutron "neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before fails randomly" [High,Confirmed]
15:23:24 <slaweq> failures like:
15:23:25 <slaweq> https://1af8099e455ea1c4b83e-70dbf706fcc5aea923a17246b1623369.ssl.cf1.rackcdn.com/710558/4/check/neutron-functional/a4c7cdd/testr_results.html
15:23:27 <slaweq> https://87ace9534a53ef030fdb-5ceaba4a59a2578b108bb2f456b59bef.ssl.cf5.rackcdn.com/710782/3/check/neutron-functional/fcf5a14/testr_results.html (different test from the same module)
15:23:31 <ralonsoh> this is already solved
15:23:37 <ralonsoh> that was a problem in OVN
15:23:39 <maciejjozefczyk> ralonsoh, no, this is different case
15:23:47 <ralonsoh> ah ok sorry
15:23:57 <ralonsoh> ah yes
15:24:07 <maciejjozefczyk> the case about port type has been fixed, but slaweq found other logs from the same class ;(
15:24:09 <slaweq> ralonsoh: it's an error on the assertion of uuids
15:24:15 <maciejjozefczyk> yes
15:24:21 <maciejjozefczyk> I'm gonna take a look at this one
15:24:21 <slaweq> maciejjozefczyk: you're welcome :P
15:24:25 <maciejjozefczyk> slaweq, ;)
15:24:36 <slaweq> thx maciejjozefczyk
15:24:46 <slaweq> #action maciejjozefczyk to take a look at https://bugs.launchpad.net/neutron/+bug/1865453
15:24:47 <openstack> Launchpad bug 1865453 in neutron "neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before fails randomly" [High,Confirmed]
15:25:06 <slaweq> now You are really volunteered for it this week ;)
15:25:33 <ralonsoh> maybe this problem is due to a race condition, creating several ports at the same time in different tests
15:25:42 <ralonsoh> but we can discuss this in launchpad
15:26:08 <slaweq> sure ralonsoh, that works for me too
15:27:27 <maciejjozefczyk> slaweq, we'll figure out what's wrong, potentially a race condition
15:27:36 <slaweq> thx maciejjozefczyk and ralonsoh :)
15:27:39 <slaweq> ok, let's move on
15:27:57 <slaweq> Deletion of namespace issue (again)
15:27:58 <slaweq> https://97c37a5ba6a0beef442c-e131daf8cc02ece8b285c96d25e8c4eb.ssl.cf2.rackcdn.com/709110/2/check/neutron-functional/dad905e/testr_results.html
15:28:06 <slaweq> ralonsoh: You should be familiar with this :/
15:28:27 <ralonsoh> I'll take a look at this again
15:28:39 <slaweq> it's from this morning
15:29:28 <slaweq> and that's all about functional job from me for today
15:29:36 <slaweq> do You have anything else You want to discuss?
15:29:57 <ralonsoh> no
15:30:20 <njohnston> I saw one failure today in neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts, checking to see if it happens elsewhere https://87ace9534a53ef030fdb-5ceaba4a59a2578b108bb2f456b59bef.ssl.cf5.rackcdn.com/710782/3/check/neutron-functional/fcf5a14/testr_results.html
15:30:35 <njohnston> otherwise nothing
15:30:48 <slaweq> njohnston: I think it's the same issue as in https://bugs.launchpad.net/neutron/+bug/1865453
15:30:49 <openstack> Launchpad bug 1865453 in neutron "neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before fails randomly" [High,Confirmed] - Assigned to Maciej Jozefczyk (maciej.jozefczyk)
15:31:03 <ralonsoh> most likely
15:31:36 <slaweq> so if there is nothing else, let's move on
15:31:38 <slaweq> #topic Tempest/Scenario
15:31:56 <slaweq> neutron_tempest_plugin.scenario.test_multicast.MulticastTestIPv4 failing often on deletion of server
15:31:57 <slaweq> https://e485228b4bd099d97917-b4e19c98e06eb23bd3ecd1208eab0884.ssl.cf2.rackcdn.com/703376/14/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid/35cab23/testr_results.html
15:32:07 <slaweq> I already saw it a couple of times
15:32:20 <slaweq> is there anyone who wants to take a look at this?
15:32:32 <ralonsoh> I can
15:32:38 <slaweq> thx ralonsoh
15:32:43 <slaweq> will You also open an LP bug for that?
15:32:51 <ralonsoh> (I think lucas sent a patch for this one, I need to check)
15:32:55 <ralonsoh> slaweq, sure
15:33:01 <slaweq> would be great, thx
15:33:12 <slaweq> #action ralonsoh to check "neutron_tempest_plugin.scenario.test_multicast.MulticastTestIPv4 failing often on deletion of server"
15:33:25 <slaweq> and another issue which I found is
15:33:28 <slaweq> Problem with logging of console output:
15:33:29 <slaweq> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_dc4/706152/11/check/neutron-tempest-plugin-scenario-openvswitch/dc49892/testr_results.html
15:33:40 <slaweq> and that's probably caused by my patch which was merged recently
15:33:45 <slaweq> so I will take a look at it
15:34:07 <ralonsoh> oops, that's mine
15:34:16 <slaweq> yours?
15:34:29 <ralonsoh> I think so, I'll check that
15:34:34 <slaweq> ok, thx a lot
15:34:44 <slaweq> that's probably some trivial issue to fix
15:34:52 <slaweq> and it's not the reason why the test failed
15:35:05 <slaweq> but that error happened after the failure of ssh or ping
15:35:22 <slaweq> but it would be good to get it fixed so we can see the console output
15:36:03 <slaweq> regarding the random ssh issues in dvr jobs, I still haven't had time to check it locally
15:36:17 <ralonsoh> (ok, not mine, I added the next method, not this one)
15:36:26 <slaweq> ralonsoh: ok, I will check that
15:36:44 <slaweq> #action slaweq to check problem with console output in scenario test
15:37:11 <slaweq> ok, and that's all I have for today actually
15:37:29 <slaweq> do You have anything else regarding scenario jobs, or ci in general?
15:37:38 <ralonsoh> no
15:38:20 <bcafarel> nope
15:38:30 <slaweq> great
15:38:36 <slaweq> so let's finish a bit early today
15:38:46 <slaweq> thx for attending and see You next week
15:38:48 <slaweq> o/
15:38:51 <slaweq> #endmeeting