16:00:12 #startmeeting neutron_ci
16:00:13 Meeting started Tue Nov 19 16:00:12 2019 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:16 The meeting name has been set to 'neutron_ci'
16:00:16 hi (again)
16:00:17 o/
16:00:25 hi
16:01:21 ok, let's start
16:01:30 first of all
16:01:31 Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:01:46 please open it and we will move on
16:01:48 #topic Actions from previous meetings
16:01:55 slaweq to investigate failed neutron.tests.fullstack.test_qos.TestDscpMarkingQoSOvs
16:02:06 Bug reported: https://bugs.launchpad.net/neutron/+bug/1852724
16:02:06 Launchpad bug 1852724 in neutron "Fullstack test for dscp_marking_packets fails if first icmp is not send properly" [Medium,Fix released] - Assigned to Slawek Kaplonski (slaweq)
16:02:07 Patch proposed: https://review.opendev.org/694505
16:02:35 it's even merged already
16:02:46 nice
16:03:02 so next one
16:03:04 njohnston prepare etherpad to track stadium progress for zuul v3 job definition and py2 support drop
16:03:44 yeah, I did not get to that, I'll work on it today
16:03:50 ok
16:03:55 somehow I missed it in my todo list
16:03:59 so, just as a reminder
16:04:05 #action njohnston prepare etherpad to track stadium progress for zuul v3 job definition and py2 support drop
16:04:08 :)
16:04:19 and we will get back to this in the next meeting
16:04:29 next one
16:04:31 slaweq to take a look at connectivity issues after resize/migration
16:04:41 I did some analysis and described it in a comment on https://bugs.launchpad.net/neutron/+bug/1850557
16:04:41 Launchpad bug 1850557 in neutron "DHCP connectivity after migration/resize not working" [Medium,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:04:48 but I don't know exactly why it happens like that
16:05:21 as it happened at least a few times in the tempest-slow job, which runs tests serially, I will try to reproduce this issue locally
16:05:59 but tbh I didn't see it in CI results this week
16:07:12 ok, next one
16:07:18 njohnston delete old tests from neutron-dynamic-routing repo
16:07:51 https://review.opendev.org/695014
16:08:10 up for review
16:08:21 thx njohnston
16:08:32 I will review it as soon as zuul is happy :)
16:09:06 ok, next one
16:09:08 slaweq to move neutron-tempest-with-os-ken-master to zuulv3 syntax and switch it to running neutron-related tests only
16:09:15 Patch: https://review.opendev.org/694770
16:09:21 ready for review :)
16:09:26 late o/
16:09:35 hi bcafarel :)
16:09:53 and this is in fact the last "non-grenade" legacy job in the neutron repo IIRC
16:10:08 nice
16:10:31 next one was
16:10:33 slaweq to fix python version in tempest-slow-py3 job
16:10:42 Bug reported: https://bugs.launchpad.net/tempest/+bug/1853004
16:10:42 Launchpad bug 1853004 in tempest "tempest-slow-py3 job uses python 2.7 on subnodes" [Undecided,In progress] - Assigned to Slawek Kaplonski (slaweq)
16:10:48 Patch: https://review.opendev.org/694768
16:11:03 but I found out today that this is not limited to tempest-slow-py3
16:11:14 tempest-multinode-full-py3 also has this issue
16:11:24 and it is also addressed by patch 694768
16:11:42 but the neutron-tempest-plugin-multinode-dvr job has the same issue too
16:11:48 and the patch to fix it is here:
16:12:09 https://review.opendev.org/#/c/695013/
16:12:19 so please review :)
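For context on the subnode python fix discussed above: in zuul v3 multinode devstack jobs, the controller and the subnodes get their devstack settings separately, so python3 has to be requested for the subnode group as well. A minimal sketch of the kind of job change involved, assuming the standard devstack-tempest job layout (the exact parent and variables come from the tempest/devstack repos, and patch 694768 may differ in detail):

    - job:
        name: tempest-multinode-full-py3
        parent: tempest-multinode-full
        vars:
          devstack_localrc:
            USE_PYTHON3: true        # controller node
        group-vars:
          subnode:
            devstack_localrc:
              USE_PYTHON3: true      # without this, subnodes fall back to python 2.7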
16:12:56 ok, and the last one from last week is
16:12:57 njohnston to check failing NetworkMigrationFromHA in multinode dvr job
16:13:01 * bcafarel tries to keep up with the flurry of review links
16:13:08 I did not have a chance to look at that
16:13:52 will You have time to look at it this week maybe?
16:14:33 I will do my best
16:14:39 thx njohnston :)
16:14:48 I know You will
16:14:54 #action njohnston to check failing NetworkMigrationFromHA in multinode dvr job
16:15:14 those were all the actions from last week
16:15:26 anything else You want to add/ask maybe?
16:16:19 nope
16:16:27 ok, so let's move on
16:16:30 #topic Stadium projects
16:16:50 we already talked about the neutron-dynamic-routing patch
16:17:02 and I don't think we have anything else to talk about here today
16:17:25 agreed
16:17:27 in the next weeks we can track progress here on migrating jobs to zuulv3 and dropping py2 jobs
16:17:35 but for now I don't have anything else
16:18:06 but maybe You have something to talk about here?
16:18:23 Hi, I just realized that the CI meeting is in progress
16:18:29 hi lajoskatona :)
16:18:32 hello lajoskatona
16:18:39 I have one small networking-odl bug: https://review.opendev.org/668904
16:19:04 It is a unit test that fails sporadically
16:19:36 If you have some time to check it that would be helpful, sorry for adding new things to the pile
16:20:16 sorry, in this patch I don't see the UTs failing
16:20:41 ralonsoh: I think that this patch fixes the failing UT
16:20:42 :)
16:20:54 I need to read better.....
16:21:31 ralonsoh: yes, sorry for the bad wording, the UT that fails is networking-odl's test_feature_configs_does_not_mutate_default_features
16:21:44 ralonsoh: thanks anyway
16:21:44 thanks!
16:21:53 lajoskatona: I'm not an ODL expert but the patch seems reasonable to me
16:22:19 a one-liner to add init/reset is reasonable, yeah
16:22:36 slaweq: yeah, the good thing is that this is in python, not in Java at least :-)
16:23:02 lajoskatona: true :)
16:23:32 lajoskatona: I just +W'd your patch
16:23:59 slaweq: thanks, a lot fewer rechecks from this day on
16:24:10 :)
16:24:15 ok, let's move on
16:24:17 #topic Grafana
16:24:25 #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:24:47 as You may see there, tempest-slow-py3 and tempest-multinode-full-py3 have a high failure rate today
16:24:49 same for neutron-grenade
16:25:10 and all those issues are caused by nova's patch https://review.opendev.org/#/c/687954/
16:25:39 nova went too far with dropping py2 support
16:26:01 I know they were talking today about how to fix this for now
16:26:07 so hopefully it will be fine soon
16:26:51 hopefully
16:27:01 yep
16:27:02 slaweq: so let's hold off on rechecks?
16:27:10 lajoskatona: for now, yes
16:27:14 slaweq: ok
16:27:36 the neutron-grenade, tempest-slow-py3 and tempest-multinode-full-py3 jobs will fail for sure
16:27:58 tempest-slow-py3 and tempest-multinode-full-py3 should be fixed when my patch to switch them "fully" to py3 is merged
16:28:13 and for neutron-grenade we already have a patch to remove it from the neutron queue
16:28:35 so in fact if all those patches are merged, we should be good even without a fix on nova's side
16:29:06 if they need any more reviews, can you post the URLs?
16:29:07 btw, with one of those patches (devstack) USE_PYTHON3 will be True by default
16:29:08 https://review.opendev.org/#/c/649097/10/stackrc
16:29:49 excellent
16:30:07 njohnston: thx, I already sent links to those patches before
16:30:16 ah ok
16:30:18 nm
16:30:22 :)
16:31:18 other than that I think we are fine
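The neutron-grenade removal mentioned above is a .zuul.yaml change in the neutron repo: roughly, the legacy py2-based grenade job disappears from the check and gate queues while the multinode py3 variant stays. A sketch of the shape of that change, assuming the usual zuul project stanza (see https://review.opendev.org/#/c/694039/ for the actual patch):

    - project:
        check:
          jobs:
            - neutron-grenade-multinode
            # neutron-grenade dropped: py2-based, broken by nova's py2 removal
        gate:
          jobs:
            - neutron-grenade-multinode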
16:31:33 do you think we will need to squash the changes in order to get them through the gate? Or should we wait for Nova to clean up?
16:31:46 njohnston: no, we don't need to squash anything
16:32:08 ok
16:32:09 as the fix for tempest-slow-py3 and tempest-multinode-full-py3 is in the tempest repo: https://review.opendev.org/694768
16:32:15 ah
16:33:09 and the removal of neutron-grenade is in the neutron repo: https://review.opendev.org/#/c/694039/
16:33:34 so first the patch in tempest should land, and then our patch in neutron will be able to land
16:33:42 and then we should probably be good
16:34:02 but I'm not sure which version will be faster, that one or the fix on nova's side
16:34:50 ok, I think we can move on
16:35:10 basically this is the biggest issue for now, so I haven't prepared any other issues for today
16:35:22 but I want to talk a bit about scenario jobs
16:35:28 #topic Tempest/Scenario
16:36:00 a few days ago I sent an email: http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010838.html
16:36:13 with a comparison of single/multinode jobs
16:36:29 and also a comparison of tempest- and neutron-tempest-plugin- jobs
16:36:59 and I would like to discuss here a bit what You think we should do with those jobs
16:37:36 first, what are we going to do with LB?
16:37:57 because the future (or not) of LB could decide the fate of those jobs
16:38:02 some job with LB should still be there
16:38:23 even if we remove LB?
16:38:34 ralonsoh: for now we will not remove it for sure
16:38:50 we are starting some discussion about deprecating it
16:38:55 I know
16:38:56 but nothing else for now
16:39:08 and for now we have 2 jobs: neutron-tempest-plugin-scenario-linuxbridge and neutron-tempest-linuxbridge
16:39:23 maybe we could merge them into one job?
16:39:23 then we should keep it
16:39:42 we are doing different things there, but it could work
16:40:14 I'll check this
16:40:46 ralonsoh: what do You mean by "different things"?
16:41:08 the tests are different
16:41:34 yes, the tests are different
16:41:56 that's why I think that maybe we can run one job which runs tests from both the tempest and the neutron-tempest-plugin repos
16:42:20 that could speed up the CI
16:42:35 yes
16:42:45 and it would be one less job that can fail, maybe :)
16:43:26 but the second question is: do we really need to run all tempest tests for it?
16:43:42 as it is a single node job, it doesn't test things like live-migration, for example
16:44:34 so basically it runs a lot of nova-related tests and neutron-related tests
16:44:47 neutron-related things we can cover in the neutron-tempest-plugin job as well
16:45:27 and from the nova point of view, if all we really need to test is whether a VM can be spawned and reached over SSH, then we have this covered in the neutron-tempest-plugin tests also
16:45:43 so maybe we could keep only the neutron-tempest-plugin-scenario-linuxbridge job?
16:46:05 and it's exactly the same situation for neutron-tempest-iptables_hybrid
16:46:18 (I would need to check both jobs and the tests executed)
16:46:42 ralonsoh: sure, please check them and maybe reply to my email on the ML
16:46:47 slaweq,
16:46:49 ok
16:46:50 we can continue this discussion there
16:46:52 thx a lot
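A merged job along the lines discussed above would essentially be a single scenario job whose test regex pulls in the scenario tests from both repos. A rough sketch, with the job name, parent, and regex illustrative rather than a final definition:

    - job:
        name: neutron-tempest-plugin-scenario-linuxbridge
        parent: neutron-tempest-plugin-scenario
        vars:
          # run scenario tests from both neutron-tempest-plugin and tempest
          tempest_test_regex: '(^neutron_tempest_plugin\.scenario|^tempest\.scenario)'
          devstack_localrc:
            Q_AGENT: linuxbridge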
16:47:12 the next thing is grenade jobs
16:47:29 we have grenade-py3 and neutron-grenade-multinode in the check and gate queues
16:47:38 those jobs are the same
16:47:50 the only difference between them is that grenade-py3 is a single node job
16:48:07 can we maybe drop that one and use only the neutron-grenade-multinode job?
16:48:20 single node is not used in production
16:48:21 so yes
16:48:29 and I suppose we don't have any tests that *require* single node :)
16:48:37 (so also yes)
16:48:40 bcafarel: exactly :)
16:48:58 ok, so as I see agreement here, I will propose a patch to drop the single node job
16:49:00 thx
16:49:39 that's all from me about this topic
16:50:05 if You have any other opinions/ideas about it, please reply to the email and we can continue this discussion there
16:50:13 from other things
16:50:30 ok
16:50:50 I sent patch https://review.opendev.org/#/c/694049/ which will start running the queens jobs using neutron-tempest-plugin with a specific tag
16:51:10 so we can drop those jobs from the master branch queues
16:51:15 please review it also :)
16:51:45 ooh nice
16:52:05 and one last thing that I want to mention
16:52:21 I have a patch to switch the default deployment to uwsgi: https://review.opendev.org/#/c/694042/
16:52:30 but it has some problems with the grenade jobs
16:52:40 so I will investigate that and I hope it will be ready soon
16:52:46 cool
16:53:36 slaweq, https://review.opendev.org/#/c/694266/
16:53:46 which one is failing?
16:54:07 or you mean in https://review.opendev.org/#/c/694042/
16:54:21 the second one
16:54:33 perfect
16:54:46 ralonsoh: yes, it's failing in https://review.opendev.org/#/c/694042/
16:55:25 but I think I know why it is failing
16:55:34 I will just need to test that in the gate :)
16:55:49 ok, that's all from me for today
16:55:58 anything else You want to talk about today?
16:56:33 sounds like all action points from the PTG have patches in progress then?
16:56:41 nice work :)
16:56:41 bcafarel: I hope so
16:56:51 but if I missed anything, forgive me :)
16:57:04 no mercy
16:57:08 lol
16:57:26 ok, we are almost on time
16:57:33 thx for attending the meeting guys
16:57:40 and see You online o/
16:57:42 thanks!
16:57:43 o/
16:57:43 #endmeeting