16:00:24 #startmeeting neutron_ci
16:00:25 Meeting started Tue Jun 19 16:00:24 2018 UTC and is due to finish in 60 minutes. The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:29 The meeting name has been set to 'neutron_ci'
16:00:35 Hi there!
16:02:19 o/
16:02:25 hi
16:02:28 hey
16:02:42 so you guys are stuck with me today
16:03:02 Poland is playing Senegal and slaweq is watching the game
16:03:07 Go Poland!
16:03:43 #topic Actions from previous meeting
16:04:09 slaweq to debug failing test_ha_router_restart_agents_no_packet_lost fullstack test
16:04:41 slaweq reported a bug: https://bugs.launchpad.net/neutron/+bug/1776459
16:04:42 Launchpad bug 1776459 in neutron "TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost fullstack fails" [High,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:04:55 and hasn't been able to get to the root cause
16:06:10 so the test was marked as unstable for the time being
16:06:36 and there is a patch to help debugging the problem: https://review.openstack.org/#/c/575710/
16:06:37 patch 575710 - neutron - [Fullstack] Ensure connectivity to ext gw before a...
16:09:46 mlavalle was going to talk to the OVO team about the constant time test case
16:09:54 I did ;-)
16:10:11 lujinluo agreed to take it over and fix it
16:10:21 she is out on vacation this week
16:10:32 but will work on it as soon as she comes back
16:11:31 #topic Grafana
16:11:48 Let's look at Grafana: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:15:13 question about the gate and check queues: does the gate queue failure rate apply to runs after the patch is approved?
16:15:27 One thing that stands out is the 50% failure rate of unit tests in the gate queue. Is it the OVO test mentioned above^^^^?
16:16:23 I have noticed in the past week a lot of random failures in unit tests in the gate queue; often a recheck would result in a failure in a different set of tests
16:17:00 for example look at how many times https://review.openstack.org/#/c/570244/ had to be rechecked
16:17:01 patch 570244 - neutron - Use OVO in ml2/test_db (MERGED)
16:17:59 yeah
16:18:17 slaweq sent me an email with some of his comments for this meeting
16:18:22 and he mentions the same
16:18:38 he asks that if we see an issue more than two times
16:18:49 let's file a bug so we can work on it
16:19:37 sounds good
16:20:41 I am also noticing that we have a couple of panels with no data in Grafana
16:21:07 that's odd, I have data in all of mine
16:22:03 ahh great
16:22:10 so that's my connection
16:22:16 I think I see data in all panels too
16:22:18 good
16:22:36 #topic Stable branches
16:23:28 It seems we have a few issues with the stable branches
16:24:00 In Queens: https://bugs.launchpad.net/neutron/+bug/1777190
16:24:01 Launchpad bug 1777190 in neutron "All neutron-tempest-plugin-api jobs for stable/queens fails" [Critical,Fix committed] - Assigned to Slawek Kaplonski (slaweq)
16:24:30 I W+ the fix for this one yesterday
16:25:33 In Pike and Ocata we have https://bugs.launchpad.net/neutron/+bug/1777506
16:25:34 Launchpad bug 1777506 in neutron "neutron-rally job failing for stable/pike and stable/ocata" [Critical,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:26:59 There are a couple of fixes proposed:
16:27:03 https://review.openstack.org/#/c/576451/
16:27:03 patch 576451 - neutron (stable/pike) - Use rally 0.12.1 release for stable/pike patches
16:27:19 https://review.openstack.org/576567
16:27:19 patch 576567 - neutron (stable/ocata) - Use rally 0.12.1 release for stable/pike patches
16:27:35 Let's take a look at them, haleyb, if you have some time
16:28:31 ok
16:29:25 slaweq also indicated that we should probably take a closer look at the stable branch queues to make sure they are in working order
16:30:07 right now we have haleyb and me as stable reviewers
16:30:18 I think we need another one
16:30:37 mlavalle: garyk has powers too
16:31:07 but we do need another
16:31:20 haleyb: I noticed that one patch of yours got a +2 from me several weeks ago
16:31:29 and it has been sitting there since then
16:31:46 ok, let's fix that situation
16:31:53 +1
16:32:08 and let's keep a closer eye on the stable queues
16:32:34 #topic Open Discussion
16:32:43 Anything else to discuss today?
16:33:04 would it make sense to set up a grafana dashboard for the stable jobs?
16:33:37 yet another checklist for releasing, but it would definitely give more visibility to error trends for stable releases
16:33:45 njohnston: not a bad idea
16:33:55 what do you think haleyb?
16:34:36 maybe not, since the patches for stable branches are limited; I doubt it would be much maintenance overhead, but that's just my opinion
16:35:13 yes, that could be a good thing, the data must all be there somewhere in graphite
16:35:40 ok, I'll look at adding that
16:35:47 njohnston: thanks!
16:36:13 #action njohnston to look into adding Grafana dashboard for stable branches
16:36:22 anything else?
16:36:59 ok, thanks for attending
16:37:03 #endmeeting