16:00:33 #startmeeting neutron_ci
16:00:34 Meeting started Tue Oct 31 16:00:33 2017 UTC and is due to finish in 60 minutes. The chair is ihrachys. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:37 The meeting name has been set to 'neutron_ci'
16:00:40 o/
16:00:50 wassuuuup
16:01:05 hi
16:01:09 Hi
16:01:24 #topic Actions from prev meeting
16:01:27 o/
16:01:50 "haleyb to follow up with infra on missing graphite data for zuulv3 jobs"
16:02:03 grafana is still empty
16:02:14 done - board is happy and neutron-lib merged as well
16:02:25 huh?
16:02:25 \o/
16:02:34 ihrachys: there are some that have no data, was told due to no failures (yet)
16:02:40 oh ok
16:02:50 http://grafana.openstack.org/dashboard/db/neutron-failure-rate ?
16:03:07 yeah, that one. I was misled by empty boards
16:03:30 thanks for getting it done!
16:03:32 i put a note at the top since i was confused as well, but graphite data seems to match
16:03:49 if we see a failure but don't see it in the boards then we have a problem
16:04:39 next AI was "ihrachys to follow up with Chandan on progress to split off tempest plugin"
16:04:40 does it mean it can't show 0 with zuulv3?
16:05:22 weird that periodic functional works even without failure
16:06:17 jlibosva: i think it can, just that it needs a FAILURE stat to do the math, and it's not present
16:06:30 ok, I trust you :)
16:07:02 ok. back to tempest plugin
16:07:03 i was told if i found a discrepancy to ping infra, that's all i know
16:07:04 sorry for noise, go on
16:07:34 I reached out to Chandan, and he told me he is going to work on it asap. that being said, I haven't seen any updates in gerrit since then
16:08:20 tempest plugin: https://review.openstack.org/#/c/506553/ and project-config: https://review.openstack.org/#/c/507038/
16:08:31 the latter was abandoned because the job will now live in another repo
16:09:06 I guess I will need to check with him again
16:09:18 at least we could speed up the tempest repo change
16:09:21 I haven't seen updates in gerrit either
16:09:30 #action ihrachys to follow up with Chandan about tempest split again
16:09:46 those are all AIs we had
16:09:49 #topic Grafana
16:09:52 http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:10:01 now that we have the board, let's have a look
16:10:09 are we going to talk about job migration later?
16:10:27 mlavalle, yeah I suppose
16:10:43 I can cover it during open topics if you want
16:10:51 carry on
16:11:18 fullstack and scenarios seem to be the old close-to-100%-failure fellas we all love
16:11:45 there is also high failure rate on dvr-ha and ovsfw jobs but I guess we will let them slide for now
16:12:09 #topic Fullstack
16:12:48 I believe we still have two major issues there: trunk cleanup and rootwrap returning wrong data from other commands
16:13:07 for the former, slaweq has this: https://review.openstack.org/#/c/514586/
16:13:25 slaweq couldn't attend this meeting so I'm forwarding his message he's going to work on that one asap :)
16:13:55 ok. do we have a final decision on the path forward?
16:14:06 I remember there was disagreement the last time
16:14:23 armax pushing for fullstack-patched executables and slaweq for a config option
16:14:46 but slaweq has a point that we'd need to run all patched services
16:15:05 so I assume we'll go with the config option
16:15:10 ok that's good.
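[Editor's illustrative sketch for the Grafana topic above: a minimal Python rendering of the failure-rate math being discussed. The metric names and the exact Graphite query are assumptions, not the real dashboard definition; the point is only that a job which has never reported a FAILURE datapoint has nothing to divide, so its panel shows as empty rather than 0%.]

    # Illustrative only: metric names and the query shape are assumptions.
    def failure_rate(success_points, failure_points):
        """Per-interval failure percentage, or None where the FAILURE
        series has no datapoint (the "empty board" case)."""
        rates = []
        for ok, fail in zip(success_points, failure_points):
            if fail is None:          # no FAILURE stat reported yet
                rates.append(None)    # nothing to plot, not 0%
            else:
                total = (ok or 0) + fail
                rates.append(100.0 * fail / total if total else None)
        return rates

    # A job that has only ever succeeded has no FAILURE datapoints at all:
    print(failure_rate([5, 7, 4], [None, None, None]))  # [None, None, None]
    print(failure_rate([5, 7, 4], [1, 0, 2]))           # [~16.7, 0.0, ~33.3]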
16:15:17 I'm also working slowly on the namespace isolation in parallel so then we can remove the config option
16:15:54 for the other issue - rootwrap daemon mixing outputs of commands - we have this attempt from toshii: https://review.openstack.org/#/c/514547/
16:16:44 I think the direction is correct. there were some smaller questions about whether we can avoid the eventlet dependency somehow
16:18:33 if someone can have another look, that would be great. also pulling someone from oslo at this point would probably help. I recollect they wanted to see a test case reproducing the issue to move forward.
16:18:38 which we have now
16:18:48 I'll have a look
16:19:25 I guess we can focus on those issues for fullstack and revisit any other failures next time if we make progress
16:19:30 #topic Scenarios
16:20:06 the latest dvr flavor result: http://logs.openstack.org/31/513831/1/check/legacy-tempest-dsvm-neutron-dvr-multinode-scenario/24754c9/logs/testr_results.html.gz
16:20:32 that's covered by https://bugs.launchpad.net/neutron/+bug/1717302
16:20:33 Launchpad bug 1717302 in neutron "Tempest floatingip scenario tests failing on DVR Multinode setup with HA" [High,Confirmed]
16:20:49 haleyb, I recollect the l3 subteam was going to discuss that
16:21:13 ihrachys: we have, it's still on swami's plate
16:21:40 "we have our best people working on it" :)
16:22:03 * ihrachys feels relieved now
16:22:08 swami will be in Sydney next week
16:22:25 * ihrachys stops feeling relieved
16:23:14 as for the linuxbridge job, here is another result: http://logs.openstack.org/31/513831/1/check/legacy-tempest-dsvm-neutron-scenario-linuxbridge/057e83a/logs/testr_results.html.gz
16:23:42 "Cannot 'detach_interface' instance 7722f257-3946-4a05-baee-4b69739d6547 while it is in vm_state building" in a bunch of failures
16:24:43 nova compute log is quite red: http://logs.openstack.org/31/513831/1/check/legacy-tempest-dsvm-neutron-scenario-linuxbridge/057e83a/logs/screen-n-cpu.txt.gz?level=TRACE
16:24:51 but it all seems related to VIF provisioning
16:26:30 oh in q-agt, I see this: "IpTablesApplyException: IPTables Rules did not converge"
16:26:40 I believe it's https://bugs.launchpad.net/neutron/+bug/1719711
16:26:41 Launchpad bug 1719711 in neutron "iptables failed to apply when binding a port with AGENT.debug_iptables_rules enabled" [High,Confirmed] - Assigned to Brian Haley (brian-haley)
16:26:54 haleyb, any updates on it?
16:27:23 ihrachys: i am still looking at that one, had it reproducing then lost power and now it doesn't happen locally
16:28:11 have you tried turning it off and on again? (c)
16:28:48 :)
16:29:05 ok, seems like both scenario issues have best people working on them
16:29:06 yes, that usually helps
16:29:32 #topic zuulv3 job migration
16:29:36 mlavalle, your floor
16:30:20 submitted the neutron-dsvm-api move to the Neutron tree:
16:30:36 #link https://review.openstack.org/#/c/516715
16:30:57 submitted the removal of the job definition to project-config:
16:31:24 #link https://review.openstack.org/#/c/516724/
16:32:04 and over the next few minutes I will submit a patch to openstack-zuul-jobs removing the old job from our project pipeline
16:32:21 These are the steps prescribed by the zuul V3 migration guide
16:32:42 so let's see if it works
16:32:44 mlavalle, you may want to link patches with needed-by/depends-on for context
16:32:54 I see andreas is puzzled
16:32:58 ihrachys: yes, I will do that soon
16:33:32 he was faster than me
16:33:45 why do we need two infra patches to replace a job?
16:33:54 isn't the pipeline definition in a single place?
16:34:03 apparently not
16:34:11 I'm following the guide
16:34:21 hm ok, weird :)
16:34:43 so once we have the api job settled, what's next?
16:34:44 I guess the first patch removes the definition added when they migrated the job
16:35:25 and then we take step 2
16:35:41 which is tweaking the moved job to take advantage of zuul v3
16:35:48 haven't gotten there yet
16:36:24 advantage? what do you mean? reworking the script triggered in some way?
16:36:44 well zuul v3 is supposed to have new features
16:36:55 I don't know if we can exploit any of that
16:36:59 but we have to explore
16:37:31 "Rework the jobs to be native v3 jobs"
16:37:40 it may take some time
16:37:56 if I were you, I would first get all jobs in-tree and then rework them
16:38:06 https://docs.openstack.org/infra/manual/zuulv3.html#reworking-legacy-jobs-to-be-v3-native
16:38:20 good advice
16:38:38 on a similar note, the new tempest plugin repo will share jobs with neutron
16:38:48 because they will now need to cross-gate
16:39:05 I believe those jobs will live in the neutron tree?
16:39:08 or the tempest plugin?
16:39:25 I am moving jobs to the Neutron tree
16:40:07 but those are gating neutron only, no?
16:40:17 yes
16:40:20 the case with tempest plugin jobs is different in that they belong to neutron and the new repo
16:40:51 I will have to take a look at that
16:43:23 ok
16:43:29 #topic Open discussion
16:43:34 anything else to discuss?
16:44:20 I guess no!
16:44:29 thanks everyone
16:44:32 #endmeeting
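[Editor's illustrative sketch for the Fullstack topic above, i.e. the rootwrap daemon mixing outputs of commands. This is a minimal, self-contained analogy using plain threads and queues, not the actual oslo.rootwrap client or the fix proposed in https://review.openstack.org/#/c/514547/: when concurrent callers share one request/response channel without synchronization, a caller can pick up the reply that belongs to a different command.]

    # Stand-in daemon and callers; names and structure are invented for
    # illustration and do not mirror oslo.rootwrap internals.
    import threading
    import queue

    request_q = queue.Queue()
    response_q = queue.Queue()

    def daemon():
        # Serves every caller over one shared response channel.
        while True:
            cmd = request_q.get()
            if cmd is None:
                return
            response_q.put("output of: " + cmd)

    def run_command(cmd, results):
        # No lock around the send/receive pair, so a concurrent caller
        # may consume the response meant for this command.
        request_q.put(cmd)
        results[cmd] = response_q.get()

    threading.Thread(target=daemon, daemon=True).start()
    results = {}
    callers = [threading.Thread(target=run_command, args=(c, results))
               for c in ("ip addr", "iptables-save")]
    for t in callers:
        t.start()
    for t in callers:
        t.join()
    request_q.put(None)
    print(results)  # outputs can end up attached to the wrong command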