16:00:47 #startmeeting neutron_ci
16:00:48 Meeting started Tue Oct 9 16:00:47 2018 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:51 The meeting name has been set to 'neutron_ci'
16:00:54 welcome (again) :)
16:00:55 o/
16:01:02 o/
16:01:30 #topic Actions from previous meetings
16:01:43 njohnston Reduce to a single fullstack job named 'neutron-fullstack' that calls python3 in tox.ini and runs on bionic nodeset
16:02:39 so for that I hijacked bcafarel's change https://review.openstack.org/#/c/604749/
16:03:21 I am tinkering with it to figure out the issues with the neutron-fullstack job
16:04:00 what issue exactly?
16:05:09 http://logs.openstack.org/49/604749/3/check/neutron-fullstack/612006a/job-output.txt.gz#_2018-10-04_01_32_01_503103
16:06:08 but do we still need to compile ovs in those tests?
16:06:18 looks like it is trying to compile OVS
16:06:30 I don't think so - that is not a change I introduced
16:06:37 I know
16:06:56 IIRC we had something like that because of some missing patch(es) in the Ubuntu ovs package
16:07:46 https://github.com/openstack/neutron/blob/master/neutron/tests/contrib/gate_hook.sh#L82
16:07:49 it's here
16:07:59 ah!
16:08:32 maybe we should check if this commit is already included in the version provided by Bionic/Xenial and then get rid of this
16:08:43 agreed
16:08:46 that's a good point
16:08:49 I can look into that
16:08:55 thx njohnston :)
16:09:12 #action njohnston look into whether fullstack compilation of ovs is still needed on bionic
16:09:30 thanks for a great memory slaweq!
16:09:38 yw njohnston :)
16:09:50 ok, lets move to the next one
16:09:53 njohnston Ask the infra team what ubuntu release Stein will be supported on
16:10:04 the answer is: bionic
16:10:09 very good email summary on that one
16:10:16 yes, thx njohnston
16:10:29 I didn't send out that email to all, so let me summarize it here.
16:11:04 I brought this up to the TC, noting the point that we should be moving on to bionic based on our governance documents
16:11:38 Given how zuulv3 has pushed the onus of CI job definition maintenance to the project teams, I asked if the TC would be announcing this change or otherwise managing it
16:12:04 This prompted a bit of discussion; the conclusion of which was that the TC doesn't have a ready answer - yet
16:12:33 #link http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-03.log.html#t2018-10-03T16:46:21
16:13:00 ok then, so what if e.g. we switch all our jobs to bionic but other projects still run xenial at the end of Stein? is that possible?
16:13:49 should we just go on and try to switch all our jobs or should we wait for some more "global" action plan?
16:13:58 The TC will need to mitigate that risk. They are still smarting after the last time this happened, when everyone was running xenial except Trove, which was back on trusty. So they know the danger better than we do.
16:14:22 I think we should go ahead and switch our jobs
16:14:27 yes
16:14:32 we do our bit
16:14:47 and let the TC worry about the global picture
16:14:56 we can warn them on the ML
16:15:04 when we are about to do it
16:15:06 so maybe we can first try to clone our jobs to the experimental queue, use them with Stein there and see how it looks
16:15:20 what do You think about it?
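(For reference, a minimal sketch of the package-version check discussed at 16:08, assuming the fix that gate_hook.sh compiles OVS for first landed in some known OVS release; MIN_OVS_VERSION below is a placeholder, the real value would have to be taken from the upstream commit referenced in gate_hook.sh:)

    # Sketch: does the distro-packaged OVS already contain the needed fix?
    # MIN_OVS_VERSION is a placeholder value.
    MIN_OVS_VERSION="2.9.0"
    pkg_version=$(dpkg-query -W -f='${Version}' openvswitch-switch 2>/dev/null)
    if [ -z "$pkg_version" ]; then
        echo "openvswitch-switch is not installed"
    elif dpkg --compare-versions "$pkg_version" ge "$MIN_OVS_VERSION"; then
        echo "packaged OVS $pkg_version already has the fix - compilation can be dropped"
    else
        echo "packaged OVS $pkg_version is too old - keep compiling from source"
    fi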
16:15:29 reasonable
16:15:35 note that we already have tests running on bionic - you can't test on py36 without bionic, so the openstack-tox-py36 job is already there
16:16:01 yes, but I'm thinking more about all the scenario/tempest/grenade jobs
16:16:07 but yes, that is a reasonable plan, especially with complicated things like scenario tests
16:16:19 and btw. what about grenade? should it run Rocky on Bionic and upgrade to Stein?
16:16:32 do You know maybe?
16:16:41 right, because we're not dealing with 100% pass vs 100% fail, we're dealing with fluctuations in fail trends a lot of the time
16:17:11 the QA team is working on grenade because of the python3 challenge, so I think we can wait for the fruits of their labor
16:17:54 I don't know the specific answer, but if I am not mistaken the QA PTL is on the TC
16:18:18 so it should not escape notice
16:18:19 yes, he is
16:18:28 gmann is QA PTL IIRC
16:18:33 yep
16:18:35 yes
16:18:44 ok then
16:18:54 so I think we should now create copies of our scenario/tempest/fullstack/functional jobs running on Bionic in the experimental queue, right?
16:19:10 yes
16:19:15 and then we will see what issues we have, so we will be able to plan this work somehow
16:19:23 +1
16:19:38 I can do this first step then :)
16:20:03 ok?
16:20:07 yes
16:20:14 thx mlavalle :)
16:20:18 I am grateful, thanks
16:20:20 #action slaweq will create scenario/tempest/fullstack/functional jobs running on Bionic in experimental queue
16:20:36 ok, then move on
16:20:38 njohnston to check and add missing jobs to grafana dashboard
16:21:17 https://review.openstack.org/#/c/607586/ merged
16:21:34 thx njohnston :)
16:21:48 ok, next one
16:21:49 * slaweq to prepare etherpad with list of jobs to move to Bionic
16:21:57 I started doing this today:
16:22:04 https://etherpad.openstack.org/p/neutron-ci-bionic
16:22:22 but as I do those experimental jobs I will also update this etherpad
16:22:30 sounds good
16:23:31 next was:
16:23:33 * slaweq will report bug about failing trunk scenario test
16:23:39 Done: https://bugs.launchpad.net/neutron/+bug/1795870
16:23:39 Launchpad bug 1795870 in neutron "Trunk scenario test test_trunk_subport_lifecycle fails from time to time" [High,Confirmed] - Assigned to Miguel Lavalle (minsel)
16:23:48 sorry mlavalle that I forgot to ping You about this :)
16:23:53 np
16:24:01 and the last one:
16:24:01 I'll work on it next
16:24:03 * mlavalle will check failing trunk scenario test
16:24:13 I guess it's not done, right? :P
16:24:16 yes
16:24:30 #action mlavalle will check failing trunk scenario test
16:24:42 I will add it for next week then
16:24:45 thx mlavalle
16:24:57 ok, so that was all from last week
16:25:00 next topic
16:25:02 #topic Grafana
16:25:07 http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:26:34 looking at grafana it looks like we are generally quite good
16:26:53 do You want to talk about something specific?
16:26:53 neutron-grenade-dvr-multinode falling fast in the check queue \o/
16:27:00 there's been a slowdown of activity over the past few days
16:27:22 njohnston: about this grenade job I want to talk a bit later :)
16:27:58 mlavalle: yes, it looks like there wasn't anything in the gate recently, most work was in the check queue
16:28:20 maybe also a bit of the Columbus Day holiday in the US
16:28:34 +1
16:28:35 might be
16:29:18 but basically it looks quite good overall
16:29:34 \o/
16:29:43 so lets talk about some specific job issues
16:29:51 (which are already well known)
16:29:53 #topic fullstack/functional
16:30:29 I don't know why but I think we are again being hit more often by the issue from https://bugs.launchpad.net/neutron/+bug/1687027 in functional tests
16:30:29 Launchpad bug 1687027 in neutron "test_walk_versions tests fail with "IndexError: tuple index out of range" after timeout" [High,Confirmed] - Assigned to Miguel Lavalle (minsel)
16:30:40 I saw it a couple of times during this week
16:30:48 e.g. http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/neutron-functional/e56b690/logs/testr_results.html.gz
16:31:00 it's a periodic job from today
16:32:38 should I prioritize this one over the previous one?
16:32:38 did You see it recently too?
16:32:49 I haven't seen it
16:32:54 lately
16:33:30 mlavalle: I think that this issue with the trunk tests is more important
16:33:40 this one is IMO strictly a test-only issue
16:33:46 yes, that's also my impression
16:33:53 and the trunk tests issue - we don't know, maybe it's a "real" bug
16:34:39 I will try to find a few minutes to maybe reproduce this one
16:34:56 if I find something, I will let You know at the next meeting :)
16:35:14 ok
16:35:24 #action slaweq will try to reproduce and triage https://bugs.launchpad.net/neutron/+bug/1687027
16:35:24 Launchpad bug 1687027 in neutron "test_walk_versions tests fail with "IndexError: tuple index out of range" after timeout" [High,Confirmed] - Assigned to Miguel Lavalle (minsel)
16:35:41 speaking about scenario jobs,
16:35:47 #topic Tempest/Scenario
16:35:56 manjeets: any update on https://bugs.launchpad.net/neutron/+bug/1789434 ?
16:35:57 Launchpad bug 1789434 in neutron "neutron_tempest_plugin.scenario.test_migration.NetworkMigrationFromHA failing 100% times" [High,Confirmed] - Assigned to Manjeet Singh Bhatia (manjeet-s-bhatia)
16:37:32 I guess he isn't here
16:37:41 nope
16:38:09 ok, let's wait one more week and maybe then we will have to start working on this
16:38:50 yes
16:39:22 so, lets move on to the next topic
16:39:25 #topic grenade
16:39:37 (my "favorite" one recently)
16:39:46 LOL
16:40:09 I spent about 3 days on debugging https://bugs.launchpad.net/neutron/+bug/1791989
16:40:09 Launchpad bug 1791989 in neutron "grenade-dvr-multinode job fails" [High,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:40:13 and I found nothing :/
16:40:21 well, almost nothing
16:41:16 I have a patch which prints a lot of information, like openflow rules, bridges in ovs, namespaces, network config in namespaces, iptables and so on
16:41:33 and I was comparing it between the moment when ping was not working and when it started working
16:41:37 and all was the same
16:42:15 haleyb proposed a patch to add permanent neighbors in fip and qrouter namespaces: https://review.openstack.org/#/c/608259/
16:42:19 but this didn't help
16:42:49 so I started ssh-ing to those nodes with the running job and looking at tcpdump when ping was not working
16:43:06 and I found that icmp requests are coming to the vm's tap device
16:43:20 and the vm sends an icmp reply but this reply gets lost somewhere in br-int
16:44:00 that sounds like flows not properly set up maybe
16:44:06 so today I added another thing to my dnm patch (https://review.openstack.org/#/c/602204/) to flush fdb entries from br-int when ping doesn't work the first time and then check if after this ping starts working
16:44:19 mlavalle: all flows were the same
16:44:32 openflow rules I mean
16:44:36 ok
16:45:00 so, I am trying to spot this issue again in my dnm patch and see if this flush will help or not
16:45:22 but since yesterday evening it looks like this job is working fine again
16:45:36 and it eventually starts working, which is why we started looking at arp. i'll await the testing...
16:45:44 * haleyb is late for the party
16:46:03 I can't spot it in my DNM patch - ping now works every time on the first attempt
16:47:04 and looking at kibana: http://logstash.openstack.org/#/dashboard/file/logstash.json?query=build_name:%5C%22neutron-grenade-dvr-multinode%5C%22%20AND%20build_status:FAILURE%20AND%20message:%5C%22die%2067%20'%5BFail%5D%20Couldn'%5C%5C''t%20ping%20server%5C%22
16:47:04 down to only 37% failure - makes me wonder if something was fixed elsewhere and our issue was just a manifestation http://grafana.openstack.org/d/Hj5IHcSmz/neutron-failure-rate?panelId=22&fullscreen&orgId=1
16:47:40 it looks like it really hasn't happened since yesterday
16:48:19 njohnston: yes, that's exactly what I think now
16:48:32 but TBH I'm really puzzled by this one :/
16:49:00 so, any questions/comments about this?
16:49:01 would it be useful to ping infra and ask if they fixed anything networking-related?
16:49:28 or moved their capacity around - drained all jobs from a cloud, perhaps?
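(For reference, a rough sketch of the kind of on-node checks described in the grenade debugging above, assuming they were run by hand on the compute node; TAP_DEV is a placeholder for the instance's tap device name:)

    # Sketch of the manual checks on the failing node; TAP_DEV is a placeholder.
    TAP_DEV=tapXXXXXXXX-XX
    # confirm icmp requests reach the tap device and replies are sent back
    sudo tcpdump -nei "$TAP_DEV" icmp
    # compare openflow rules between the broken and the working state
    sudo ovs-ofctl dump-flows br-int   # add -O OpenFlow13 if the default version is rejected
    # inspect and, as an experiment, flush the learned MAC table on br-int
    sudo ovs-appctl fdb/show br-int
    sudo ovs-appctl fdb/flush br-int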
16:49:29 njohnston: yes, I can ask them
16:49:45 interesting
16:49:47 that's all I can think of
16:51:01 I think I can only ask them if they maybe did/fixed something related recently and then wait to see how the results go in grafana over the next few days
16:51:46 I was also trying to reproduce it locally on a similar 2-node dvr env with rocky but I couldn't - all worked fine
16:53:28 ok, I think we can then move on to the next topic
16:53:34 #topic periodic
16:54:12 I just wanted to mention that last week we had the issue https://bugs.launchpad.net/neutron/+bug/1795878 which I found after the CI meeting so I didn't mention it then :)
16:54:12 Launchpad bug 1795878 in neutron "Python 2.7 and 3.5 periodic jobs with-oslo-master fails" [High,Fix released] - Assigned to Bernard Cafarelli (bcafarel)
16:54:20 it's now fixed https://review.openstack.org/607872
16:54:52 cool
16:55:13 and that's all related to periodic jobs from me
16:55:16 #topic Open discussion
16:55:26 I also want to mention one more thing
16:56:04 during the PTG I was at some QA team sessions and they asked us to move tempest plugins from neutron stadium projects to separate repos or to the neutron_tempest_plugin repo
16:56:23 mlavalle: which option would be better in Your opinion?
16:56:55 the list of plugins is at https://docs.openstack.org/tempest/latest/plugin-registry.html
16:57:24 I don't have a strong opinion
16:57:48 me neither
16:57:50 we share a lot of tests with the Stadium, don't we?
16:57:59 I don't know if a lot
16:58:15 since it is testing based on the API
16:58:17 maybe we should discuss it on the ML first then?
16:58:29 yes, let's ask them
16:58:42 mlavalle: will You send an email or should I?
16:58:44 That makes sense, since the stadium projects may not have a uniform way they approach this
16:59:13 I suspect many of them may not have the manpower to handle yet another repo
16:59:25 let's see what they say
16:59:32 yes, I'll send the message
16:59:38 ok, thx mlavalle
17:00:03 #action mlavalle to send an email about moving tempest plugins from stadium to separate repo
17:00:08 ok, we are out of time
17:00:12 thx for attending
17:00:15 #endmeeting