16:00:11 #startmeeting neutron_ci
16:00:12 Meeting started Tue Aug 20 16:00:11 2019 UTC and is due to finish in 60 minutes. The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:15 The meeting name has been set to 'neutron_ci'
16:00:21 hello everyone
16:00:42 both Miguel and Slawek are on PTO/other meetings
16:00:57 I'll need your help here today!
16:01:44 o/
16:01:59 hi njohnston !
16:02:05 not sure how many others are going to be here... :-)
16:02:20 Miguel asked me to do the meeting
16:02:21 but I'll help however I can
16:02:26 thanks!
16:02:29 https://etherpad.openstack.org/p/neutron-ci-meetings
16:02:33 ok, let's go
16:02:42 #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:02:51 #topic Actions from previous meetings
16:03:15 according to the log, Miguel should be debugging this
16:03:22 #link https://bugs.launchpad.net/neutron/+bug/1838449
16:03:23 Launchpad bug 1838449 in neutron "Router migrations failing in the gate" [Medium,Confirmed] - Assigned to Miguel Lavalle (minsel)
16:03:41 I'll try to ping him this evening (for me) or tomorrow
16:03:58 next topic
16:04:02 #topic Stadium projects
16:04:13 Python 3 migration
16:04:23 #link https://etherpad.openstack.org/p/neutron_stadium_python3_status
16:04:36 njohnston, any update?
16:04:40 so the last job for bagpipe merged
16:04:59 but in the neutron team meeting amotoki said he would double-check other jobs
16:05:03 just to make sure
16:05:13 did you delete it from the etherpad?
16:05:16 I believe I did that, but things might have changed in the intervening months, not sure
16:05:25 I'd like to leave it until amotoki is happy
16:05:58 we can wait until next week for the feedback from amotoki
16:06:03 I don't see either yamamoto or lajoskatona online
16:06:13 for odl and midonet feedback (respectively)
16:06:14 njohnston: AFAIK bagpipe tox.ini has some jobs with basepython as python27. that's what I would like to clean up.
16:06:25 I will add info to the etherpad.
16:06:33 thank you very much amotoki
16:06:36 thanks amotoki !
16:07:01 so today no feedback from midonet or odl
16:07:02 yw
16:07:52 ok, I think we can move to tempest-plugins migration
16:07:59 #link https://etherpad.openstack.org/p/neutron_stadium_move_to_tempest_plugin_repo
16:08:22 fwaas tempest plugin migration is done, three cheers for slaweq for pushing that over the finish line while I was distracted
16:08:22 we have two pending patches
16:08:30 did the midonet test jobs end up getting fixed? I noticed them because they were a large chunk of e-r failures for package installations
16:09:07 clarkb, I think midonet is still failing in the CI
16:09:15 I'll ping yamamoto
16:09:33 ok so we have two patches
16:09:36 #link https://review.openstack.org/#/c/652099
16:09:36 https://review.opendev.org/#/c/652099/ - neutron-tempest-plugin - Move neutron-dynamic-routing BGP tests from stadium - 32 patch sets
16:09:48 for dynamic routing
16:09:56 and
16:09:59 #link https://review.openstack.org/#/c/649373
16:09:59 https://review.opendev.org/#/c/649373/ - neutron-tempest-plugin - Migrate neutron-vpnaas tests to neutron-tempest-pl... - 6 patch sets
16:10:05 for vpnaas
16:10:14 regarding midonet, I don't know if there was a successor to https://review.opendev.org/674313 after it was abandoned
16:10:14 patch 674313 - neutron - Fix networking-midonet CI job run on Neutron check... (ABANDONED) - 4 patch sets
16:11:11 tidwellr: can you comment on the neutron-dynamic-routing change?
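[Editor's note] The py27 leftover amotoki mentions is a `basepython` pin in tox environments. A hedged sketch of what that cleanup typically looks like — the section name `testenv:functional` is illustrative; the actual environments in networking-bagpipe's tox.ini may differ:

```ini
# Before: environment pinned to the python2 interpreter,
# so it keeps running py27 even after zuul jobs move to py3
[testenv:functional]
basepython = python2.7

# After: pin to python3 (or drop the line entirely and let the
# env name / default interpreter decide)
[testenv:functional]
basepython = python3
```

With the pin removed or moved to python3, `tox -e functional` runs under the python3 interpreter, matching the migrated zuul jobs.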
16:11:24 njohnston: sure
16:11:40 the vpnaas one has not been updated in 4 months so I think that is getting stale
16:12:05 for some reason we are uncovering some race conditions that cause test failures
16:12:13 njohnston, I would like to take care of the last one, but not this week
16:12:54 but let's ping mlavalle for this, because he is the owner
16:13:27 ok, something else in this topic (Stadium projects)?
16:13:40 there are also some weird setup and tear-down things going on with the neutron-dynamic-routing change. I've tried to avoid creating separate jobs to address this since that adds a lot of overhead, but that may be the best way forward for now
16:14:47 tidwellr, but what is still failing in this patch? https://review.opendev.org/#/c/652099
16:14:48 patch 652099 - neutron-tempest-plugin - Move neutron-dynamic-routing BGP tests from stadium - 32 patch sets
16:15:18 these failures are related to those race conditions in the tests themselves
16:16:13 ok, I see
16:16:18 there is an auto-scheduling mechanism that the tests asserting unscheduling of BGP speakers don't account for
16:17:05 not sure why it's an issue all of a sudden, we haven't had problems with this in the neutron-dynamic-routing jobs before
16:17:29 the stars have aligned I suppose
16:17:53 as you said, the test execution ordering maybe
16:18:14 that wouldn't cause the things I'm seeing
16:19:08 the test unschedules a BGP speaker and the auto-scheduler puts it back before the test gets to assert that peering status is offline
16:19:27 that's not related to test ordering, that's a problem with the tests themselves
16:19:54 you are talking about https://logs.opendev.org/99/652099/32/check/neutron-tempest-plugin-dynamic-routing-bgp-basic/f5ab227/testr_results.html.gz
16:19:59 test_check_advertised_multiple_tenant_network
16:20:20 correct
16:20:39 that race condition is what is causing this failure
16:21:42 tidwellr, I'll try to take a look at this tomorrow morning
16:21:50 I think I can fix it; it involves changing the assertions made by the test
16:22:04 I've been on vacation and drowning in other things recently
16:22:25 no problem
16:22:30 but I'm getting back on track this week so I can move this forward
16:22:41 thank you!
16:23:30 ok, so no more agenda on this topic, let's move to the next one
16:23:37 #topic Grafana
16:23:43 #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:23:58 I was on PTO last week
16:24:19 are you aware of something abnormal last week?
16:24:38 wow, has nothing run through the gate since Sunday? That seems unlikely.
16:25:40 hmmm, either we didn't merge anything or the jobs are stuck
16:25:40 looks like the postgres periodic job is having an issue
16:26:13 also I don't like how 25% to 50% is the new normal for the functional jobs these days
16:26:24 at least since 8/16
16:26:58 njohnston, you are right, it's quite high
16:27:18 the tox-cover job shows the same curve I believe
16:27:55 njohnston, the functional failures are py27
16:27:59 not py3
16:28:48 so right now the neutron-functional-python27 job is at 38% fail but the py3 neutron-functional job is at 35%
16:28:59 so I disagree that this is py27-only
16:29:55 let me look at the history of neutron patches and CI results
16:30:06 if I see a pattern there, I'll open a bug
16:30:19 thanks!
16:30:22 (sorry, I didn't review that in advance)
16:30:33 how do I write an action?
16:30:44 np, I didn't either (although I was surprised we were having a meeting)
16:30:59 do #action ralonsoh to do this and that
16:31:24 #action ralonsoh to review the CI (functional tests), search for error patterns and open bugs if needed
16:32:37 njohnston, do you have time today (or tomorrow) to take a look at the postgres-full errors?
16:34:21 ok, let's move to the next topic then
16:34:25 #topic fullstack/functional
16:34:31 there is nothing in the agenda
16:34:39 do you want to add something here?
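[Editor's note] The race tidwellr describes — the test asserts a BGP speaker is unscheduled, but the auto-scheduler puts it back before the assertion runs — is a single-shot assertion racing a background reconciler. A common mitigation, and one way "changing the assertions made by the test" could look, is to poll for the state the system converges to instead of asserting a transient one. This is a minimal sketch, not the actual neutron-dynamic-routing fix; `wait_until` and `speaker_rescheduled` are hypothetical names:

```python
import time


def wait_until(predicate, timeout=10, interval=0.5):
    """Poll predicate until it returns True, or give up after timeout seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


# Example: a condition that only becomes true after a few polls,
# mimicking the auto-scheduler eventually re-scheduling the speaker.
calls = {"n": 0}

def speaker_rescheduled():
    calls["n"] += 1
    return calls["n"] >= 3

assert wait_until(speaker_rescheduled, timeout=5, interval=0.01)
```

Instead of `assertFalse(is_scheduled())` immediately after unscheduling (which the auto-scheduler can invalidate at any moment), the test would wait for the stable post-reconciliation state. tempest's own `test_utils.call_until_true` follows the same pattern.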
16:35:00 I think we covered it already
16:35:05 perfect
16:35:11 I'll try to take a look at the postgres errors, yes
16:35:19 next topic then
16:35:21 #topic Tempest/Scenario
16:35:38 #action njohnston to look at errors in the neutron-tempest-postgres-full periodic job
16:35:54 slawek opened a bug for nova last week
16:35:56 #link https://bugs.launchpad.net/nova/+bug/1839961
16:35:58 Launchpad bug 1669468 in devstack "duplicate for #1839961 tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc fails intermittently in neutron multinode nv job" [Medium,Fix released] - Assigned to melanie witt (melwitt)
16:36:27 looks like this is a duplicate of https://bugs.launchpad.net/devstack/+bug/1669468
16:36:28 Launchpad bug 1669468 in devstack "tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc fails intermittently in neutron multinode nv job" [Medium,Fix released] - Assigned to melanie witt (melwitt)
16:36:47 and there is a patch merged
16:36:49 #link https://review.opendev.org/#/c/675721/
16:36:50 patch 675721 - devstack - Set console server host/address in nova-cpu.conf f... (MERGED) - 4 patch sets
16:36:58 I'll comment on this in slawek's patch
16:37:15 and multinode scenario failures are down to below 50%, which is great
16:38:31 you are better than me at reviewing this grafana dashboard (I usually wait for slawek to send me the bugs hehehehe)
16:38:53 :-) long years of ops experience
16:39:29 so, something else on the dashboard?
16:39:38 nothing from me
16:39:45 perfect!
16:39:57 ok, so last topic
16:39:59 #topic Open discussion
16:40:14 I had a fantastic PTO!
16:40:17 last week
16:40:29 that's all from me
16:40:30 congrats! I had a fantastic one the week before.
16:40:37 hehehe
16:41:10 So you and I, neither of us is going to Shanghai, right?
16:41:17 right
16:41:21 I'll ping mlavalle again about arrangements for remote participation
16:41:41 yes, that will be very useful
16:42:19 ok, that's all from me
16:42:22 ok, sorry for being such a newbie chairing this meeting, it's my first time
16:42:26 thank you all
16:42:31 o/
16:42:40 see you in #openstack-neutron
16:42:43 #endmeeting