16:00:24 #startmeeting neutron_ci
16:00:27 Meeting started Tue Mar 19 16:00:24 2019 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:30 The meeting name has been set to 'neutron_ci'
16:00:33 o/
16:00:34 hello
16:00:51 o/
16:00:53 o/
16:01:17 ok, let's start
16:01:25 first of all Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:01:27 Please open now :)
16:01:44 LOL
16:02:25 #topic Actions from previous meetings
16:02:36 first action:
16:02:38 ralonsoh to take a look at fullstack dhcp rescheduling issue https://bugs.launchpad.net/neutron/+bug/1799555
16:02:40 Launchpad bug 1799555 in neutron "Fullstack test neutron.tests.fullstack.test_dhcp_agent.TestDhcpAgentHA.test_reschedule_network_on_new_agent timeout" [High,Confirmed] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
16:03:01 I know that ralonsoh was looking into this one
16:03:27 but there was no data on why the network wasn't rescheduled to a new dhcp agent after one agent went down
16:03:49 so he sent a DNM patch https://review.openstack.org/#/c/643079/ to get some more logs
16:03:58 * mlavalle has to leave 45 minutes after the hour
16:04:23 so he will probably continue this work as he is assigned to the bug
16:04:34 next one was:
16:04:36 slaweq to talk with tmorin about networking-bagpipe
16:04:53 you sent an email, didn't you?
16:04:57 I sent an email to Thomas today because I couldn't catch him on irc
16:05:30 if he doesn't respond, I will probably start this work for the bagpipe project - it shouldn't be a lot of work to do
16:06:00 ok, and the last one from last week was:
16:06:02 ralonsoh to take a look at update_revises unit test failures
16:06:39 IIRC this patch should address this issue https://review.openstack.org/#/c/642869/
16:06:55 so thx ralonsoh, we should be good with this :)
16:07:30 yeap, it looks like it
16:07:42 and that was all the actions from the previous week
16:07:50 any questions/comments?
16:08:01 not from me
16:08:20 ok, let's go then to the next topic
16:08:22 #topic Python 3
16:08:27 Stadium projects etherpad: https://etherpad.openstack.org/p/neutron_stadium_python3_status
16:08:31 njohnston: any updates?
16:08:52 no updates on this for this week.
16:09:13 ok
16:09:15 I hope to spend more time with it later this week
16:09:28 sure, that is not urgent for now
16:09:31 so I volunteered last time
16:09:41 we have more important things currently IMO :)
16:09:41 for one of them
16:09:51 but the etherpad seems changed
16:10:05 mlavalle: wasn't that the tempest plugin work?
16:10:09 mlavalle: I think you volunteered for something else, trust me ;)
16:10:17 :)
16:10:27 yeah, you are right
16:10:42 mlavalle: you should tell that to my wife :P
16:10:50 LOL
16:10:50 LOL
16:11:13 ok, let's move on to the next topic
16:11:15 #topic Ubuntu Bionic in CI jobs
16:11:32 last week the patch which switched all legacy jobs to run on Bionic was merged
16:12:26 yep, it's here: https://review.openstack.org/#/c/641886/
16:12:39 in Neutron we are good with it
16:13:05 as for stadium projects, we need https://review.openstack.org/#/c/642456/ for the fullstack job in networking-bagpipe
16:13:25 slaweq: I still see ubuntu-xenial-2-node in .zuul.yaml ?
16:13:29 but that job is non-voting there, and even with this patch it's failing for some other reason
16:13:37 bcafarel: where?
16:14:13 https://github.com/openstack/neutron/blob/master/.zuul.yaml#L234
16:14:14 ahh, right bcafarel
16:14:32 so we need to switch our grenade jobs to run on bionic too
16:14:58 it's like that because we have a nodeset specified for them in our .zuul.yaml file
16:15:18 any volunteer to switch that to bionic?
16:15:25 if not, I can do that
16:15:38 since I raised it I can try to fix it :)
16:15:43 thx bcafarel
16:16:05 #action bcafarel to switch neutron-grenade multinode jobs to bionic nodes
16:16:29 among the other stadium projects there is also an issue with networking-midonet
16:17:06 should we ask yamamoto?
16:17:11 but they are aware of it: https://midonet.atlassian.net/browse/MNA-1344 so I don't think we should bother a lot with that
16:17:29 cool
16:17:30 mlavalle: yamamoto knows about this issue, I already mailed him some time ago
16:17:45 yes there was an ML thread
16:18:06 and that's basically all about the switch to bionic
16:18:21 questions/comments?
16:18:31 not from me
16:18:35 nope
16:18:47 ok, next topic then
16:18:49 #topic tempest-plugins migration
16:18:54 Etherpad: https://etherpad.openstack.org/p/neutron_stadium_move_to_tempest_plugin_repo
16:19:01 mlavalle: here you volunteered :)
16:19:14 any updates about that?
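The grenade fix bcafarel volunteered for amounts to repointing the jobs' explicit nodeset in neutron's .zuul.yaml from xenial to bionic. A rough sketch of what such a change could look like; the job name, parent, and exact nodeset labels here are assumptions for illustration, not the actual patch:

```yaml
# Hypothetical excerpt from neutron's .zuul.yaml: the multinode grenade
# job pinned a xenial nodeset, which kept it off Bionic after the
# project-wide switch. Moving it over means swapping the nodeset label.
- job:
    name: neutron-grenade-multinode
    parent: legacy-dsvm-base-multinode
    nodeset: ubuntu-bionic-2-node   # was: ubuntu-xenial-2-node
```

Jobs without an explicit nodeset picked up Bionic automatically from the base job, which is why only the pinned grenade jobs needed a follow-up patch.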
16:19:45 I intend to work on this towards the end of the week
16:19:48 I pushed a couple of changes for fwaas
16:19:49 I think njohnston is the furthest ahead there
16:20:19 I need to work on the zuul job definitions
16:20:26 yep, I saw your "super-WIP" patch today :)
16:20:33 it's pretty red
16:20:48 yeah
16:21:22 I'll fiddle with it later in the week
16:21:26 but great that you started this work already :)
16:21:31 thx njohnston
16:22:22 ok, so let's move on to the next topic then
16:22:24 #topic Grafana
16:22:39 #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:22:44 you have it open already :)
16:23:59 the worst thing IMO is the fullstack job in the check queue
16:24:06 which is at quite high numbers again
16:24:31 but today there is also a spike on the neutron-tempest-dvr-ha-multinode-full job
16:25:14 yeah, fullstack is where the action / problem is
16:25:25 mlavalle: yes
16:25:51 regarding neutron-tempest-dvr-ha-multinode-full I would say - let's wait and see how it goes
16:27:07 there weren't many job runs today, so this spike may just be a bad coincidence
16:27:53 on the good side: the functional jobs finally look good :)
16:28:02 any other comments on that?
16:28:08 not from me
16:28:12 or can we move on to talk about fullstack?
16:28:38 ok, let's move on then
16:28:44 let's move on
16:28:45 #topic fullstack/functional
16:28:59 I was looking into some fullstack failures today
16:29:11 and I identified basically 2 new (for me at least) issues
16:29:19 https://bugs.launchpad.net/neutron/+bug/1820865
16:29:20 Launchpad bug 1820865 in neutron "Fullstack tests are failing because of "OSError: [Errno 22] failed to open netns"" [Critical,Confirmed]
16:29:29 this one is a problem with opening netns
16:29:47 and the second is https://bugs.launchpad.net/neutron/+bug/1820870
16:29:48 Launchpad bug 1820870 in neutron "Fullstack tests are failing with error connection to rabbitmq" [High,Confirmed]
16:30:03 that one is related to some connectivity issues from the agents to rabbitmq
16:30:30 both happen quite often now, so I set them to Critical and High priority for now
16:30:54 but I think we need some volunteers for them, as I will not have cycles for both during this week
16:31:01 do we need manpower for them?
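Bug 1820865 surfaces as `OSError: [Errno 22] failed to open netns` (errno 22 is EINVAL) when a fullstack helper enters a network namespace. Without prejudging the root cause, a generic retry wrapper of the kind sometimes put around flaky namespace operations could look like this; it is a sketch for illustration, not the actual fix, and the function name is invented:

```python
import errno
import time


def retry_on_einval(func, attempts=3, delay=0.5):
    """Call ``func`` again if it raises OSError(EINVAL).

    Mirrors the "failed to open netns" symptom from bug 1820865:
    the namespace open intermittently fails with errno 22 (EINVAL),
    so a short retry loop can tell a transient race from a real error.
    """
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except OSError as exc:
            # Re-raise immediately for any other errno, or once the
            # retry budget is exhausted.
            if exc.errno != errno.EINVAL or attempt == attempts:
                raise
            time.sleep(delay)
```

If the failures stopped under such a wrapper, that would point at a race around namespace creation rather than a persistent kernel/CLI problem.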
16:31:11 mlavalle: yes, definitely
16:31:17 assign me one
16:31:37 the one you think is more important
16:31:50 please
16:32:01 I marked bug 1820865 as Critical because I think it happens more often
16:32:10 ok, I'll take it
16:32:18 great, thx mlavalle
16:32:43 * mlavalle assigned it to himself
16:32:56 if I have some time, I will try to take a look at the second one, but I can't promise that
16:33:13 so I will not assign it to myself yet
16:33:16 if I have time I will also try to get to the second one
16:33:31 I'll ping you if I get there
16:33:38 #action mlavalle to check https://bugs.launchpad.net/neutron/+bug/1820870
16:33:39 Launchpad bug 1820870 in neutron "Fullstack tests are failing with error connection to rabbitmq" [High,Confirmed]
16:33:58 #undo
16:33:59 Removing item from minutes: #action mlavalle to check https://bugs.launchpad.net/neutron/+bug/1820870
16:34:11 yeap, it's the other one
16:34:18 #action mlavalle to check https://bugs.launchpad.net/neutron/+bug/1820865
16:34:19 Launchpad bug 1820865 in neutron "Fullstack tests are failing because of "OSError: [Errno 22] failed to open netns"" [Critical,Confirmed] - Assigned to Miguel Lavalle (minsel)
16:34:30 #action slaweq/mlavalle to check https://bugs.launchpad.net/neutron/+bug/1820870
16:34:33 now it's good :)
16:34:37 yes
16:34:38 thx mlavalle for your help
16:35:15 I hope that when those 2 are fixed, we will be in better shape with fullstack too
16:35:20 any questions/comments?
16:35:37 nope
16:35:59 ok, let's move to the next topic then
16:36:01 #topic Tempest/Scenario
16:36:20 mlavalle: as https://review.openstack.org/#/c/636710/ is merged, did you send a patch to unmark the tests from https://bugs.launchpad.net/neutron/+bug/1789434 as unstable?
16:36:21 Launchpad bug 1789434 in neutron "neutron_tempest_plugin.scenario.test_migration.NetworkMigrationFromHA failing 100% times" [High,In progress] - Assigned to Miguel Lavalle (minsel)
16:36:39 slaweq: I'll do it today
16:37:08 mlavalle: thx
16:37:44 another thing related to this: neutron-tempest-plugin-dvr-multinode-scenario is still failing almost always
16:38:12 I was checking a couple of results of such failed jobs today
16:38:29 and there is no single reason. We probably need someone to go through some failed tests and report bugs for them.
16:38:36 any volunteers?
16:38:44 I want to do that
16:38:52 thx mlavalle :)
16:38:56 as long as I'm allowed to do it slowly
16:39:22 maybe you will be able to identify some groups of failures and report them as bugs so we can track them later
16:39:31 my goal is a little broader
16:39:41 I want to make that job stable generally
16:39:48 :)
16:39:59 that would definitely be great
16:40:40 #action mlavalle to debug reasons of neutron-tempest-plugin-dvr-multinode-scenario failures
16:41:04 ok, and that's all from me regarding scenario jobs
16:41:08 questions/comments?
16:41:56 none here
16:42:02 ok
16:42:12 so one last thing I wanted to mention
16:42:32 thx to njohnston we have fixed our first os-ken bug: https://storyboard.openstack.org/#!/story/2005142 - thx a lot njohnston :)
16:42:41 ++
16:43:05 and on this optimistic note I think we can finish our meeting a bit earlier today :)
16:43:15 Thanks everybody
16:43:15 and let mlavalle go where he needs to go
16:43:17 :)
16:43:21 o/
16:43:21 thanks for attending
16:43:26 #endmeeting