16:00:26 #startmeeting neutron_ci
16:00:26 Meeting started Tue Oct 2 16:00:26 2018 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:28 hi
16:00:29 The meeting name has been set to 'neutron_ci'
16:00:58 o/
16:01:00 o/
16:01:55 let's wait a few minutes for haleyb, njohnston and others
16:02:18 no problem
16:02:27 hi
16:02:32 o/
16:02:55 ok, I think we can start now
16:03:00 #topic Actions from previous meetings
16:03:10 mlavalle will work on logstash query to find if issues with test_attach_volume_shelved_or_offload_server still happen
16:03:18 hi
16:03:30 I did work on that
16:03:48 http://logstash.openstack.org/#/dashboard/file/logstash.json?query=build_status:%5C%22FAILURE%5C%22%20AND%20project:%5C%22openstack%2Fneutron%5C%22%20AND%20message:%5C%22test_attach_volume_shelved_or_offload_server%5C%22%20AND%20tags:%5C%22console%5C%22&from=7d
16:04:10 I watched it over the past 7 days
16:04:23 as of today no hits whatsoever
16:04:40 last hit was https://review.openstack.org/#/c/601336
16:04:51 and as you can see, the problem is the patch
16:05:06 that test creates an instance and then ssh's to it
16:05:25 if we mess something up in a patch, sometimes we break ssh
16:05:31 and that's what we see
16:05:40 so it's not a bug
16:06:08 ok, so I think we can stop work on this one, at least for now
16:06:14 yes
16:06:17 +1
16:06:25 I hope it will not come back in the future :)
16:06:34 I'll preserve the query in case we need it in the future
16:06:42 ok
16:06:47 thx mlavalle for checking that
16:06:54 next one:
16:06:57 njohnston will debug why fullstack-py36 job is using py35
16:07:24 I think bcafarel beat me to the punch a bit on this one
16:07:52 https://review.openstack.org/#/c/604749/
16:08:29 but I think more change is needed
16:08:52 specifically I need to check this, but my impression was that you had to use the ubuntu-bionic nodeset to get a host that had 3.6 installed
16:09:16 so I
16:09:32 it looks like there is indeed no py36 in xenial: http://logs.openstack.org/49/604749/1/check/neutron-fullstack-python36/490122b/job-output.txt.gz#_2018-09-24_12_36_02_778606
16:09:37 so I will update that and see if we have more success with that nodeset
16:09:54 the review is all yours njohnston :)
16:10:33 I'll get that done during the meeting
16:10:35 maybe we should just do it as fullstack-python3 and not specify the exact version of python used in tests?
16:11:23 I would be all in favor of that... except that it is known that python 3.7 includes some non-backwards-compatible changes that may break things
16:11:38 at least that is what I heard from the infra guys
16:11:57 that is my only reservation
16:12:21 yep, but for now py37 is not even released, right?
16:12:51 if it is released and we have to test on distros with this version, we will add a new job then
16:12:53 It was released 2018-06-27 https://www.python.org/downloads/release/python-370/
16:12:59 not in current CI distros but it will be there at some point
16:13:41 also, many people still use xenial e.g. for development and to run fullstack tests locally (me for example), and it will not work for me if we specify exactly python36 in tox.ini, right?
16:14:29 ok, it's released but is it in any distro which we support?
16:14:41 That is a good point. So we should keep the old tox target around so people can test locally.
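(Editor's note: a minimal sketch of the tox.ini approach discussed above — keep the fullstack env on a generic python3 instead of pinning python3.5 or python3.6, so the gate nodeset or a developer's machine decides which interpreter is used. The env name, inherited setenv and stestr command below are assumptions for illustration, not the actual neutron tox.ini.)

```ini
# Hypothetical tox.ini fragment; the env name and commands are illustrative only.
[testenv:dsvm-fullstack]
# Generic python3: resolves to py3.5 on xenial and py3.6 on bionic,
# so local runs keep working while the gate picks the node image.
basepython = python3
setenv = {[testenv]setenv}
commands = stestr run {posargs}
```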
16:15:16 isn't the idea to leave generic python3 in tox.ini and set specific versions in gates/zuul?
16:16:08 bcafarel: I think that this would be better than specifying the exact version in tox.ini
16:16:27 bcafarel: I can alter your change to do that; right now it calls out 3.6 specifically: https://review.openstack.org/#/c/604749/1/tox.ini
16:16:43 bcafarel: but once we switch to bionic we can probably omit the dot-version
16:17:29 maybe we should consider just replacing the neutron-fullstack-python36 job with a neutron-fullstack job and simply running it with python3
16:17:35 what do You think about that?
16:17:48 yes, that makes sense
16:17:58 the exception now is python 2.7
16:18:04 we should get used to it
16:18:51 ok, I will change things so that we have a neutron-fullstack job that calls python3 in tox.ini but runs on the bionic nodeset. Is that an accurate capture of the consensus?
16:19:04 since we will drop the py2 one soon, makes sense indeed
16:19:39 njohnston: for me yes, if bionic is the release which OpenStack Stein will support of course :)
16:19:52 yes
16:20:08 * slaweq doesn't know exactly what versions of Ubuntu should be supported in Stein
16:20:58 slaweq: IIUC it is, I can get clarification on that from... I don't know who, I guess the infra guys.
16:20:58 ok, so let's do that and I hope it will work fine
16:21:13 ok, thx njohnston
16:21:17 njohnston: infra guys are a good source
16:21:23 If infra says that Stein needs to work on both bionic and xenial then we may need two jobs in a few cases
16:21:29 #action njohnston Reduce to a single fullstack job named 'neutron-fullstack' that calls python3 in tox.ini and runs on the bionic nodeset.
16:21:55 thx njohnston for writing the action, I just wanted to do it :)
16:22:01 #action njohnston Ask the infra team which ubuntu release Stein will be supported on
16:22:15 ok, let's move on to the next one
16:22:24 njohnston will send a patch to remove the fullstack py27 job completely
16:23:37 That is in, but it depends on the change to fix the fullstack 3.6 job
16:24:08 ok, in fact it will be done together if You change the python version to py3 in this job :)
16:24:13 So it has been sitting there. Given the previous discussion, I'll probably abandon it since we'll be changing the 2.7 job to 3.x instead
16:24:21 right-o
16:24:24 njohnston++
16:24:38 ok, let's move to the next topic then
16:24:40 #topic Grafana
16:24:48 Quick info
16:24:55 I have 2 patches:
16:25:17 https://review.openstack.org/#/c/607285/ - to update fullstack-python35 to py36, but it will probably be changed/abandoned as we will change those jobs
16:25:42 and second: https://review.openstack.org/#/c/583870/ to add the tempest-slow job to grafana, which is now missing from the dashboard
16:26:19 also, speaking about grafana I noticed that we are probably missing some other jobs too, e.g. openstack-tox-py36
16:26:34 slaweq: I also had a change to update fullstack-python35 to 36, I'll abandon it as well: https://review.openstack.org/#/c/605128/
16:26:37 is there any volunteer to check that and add the missing jobs to grafana?
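(Editor's note: a rough sketch of the single-job consensus captured in the #action above — one neutron-fullstack job pinned to the bionic nodeset, with the python version left to the generic python3 tox env. The job name is from the discussion, but the parent and variables are assumptions, not the actual neutron zuul configuration.)

```yaml
# Hypothetical Zuul job definition; parent and vars are illustrative only.
- job:
    name: neutron-fullstack
    parent: legacy-dsvm-base       # assumed parent; the real one may differ
    nodeset: ubuntu-bionic         # bionic nodes provide python 3.6
    vars:
      tox_envlist: dsvm-fullstack  # generic python3 env from tox.ini
```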
16:26:51 o/
16:27:25 njohnston: thx :)
16:27:39 #action njohnston to check and add missing jobs to the grafana dashboard
16:28:11 ahh, I forgot about the link
16:28:16 FYI, answer from fungi on the previous question: "the infra team isn't really in charge of making it happen, but https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions is interpreted to mean that projects should be switching their master branches to use ubuntu-bionic instead of ubuntu-xenial for jobs soon so they work out any remaining kinks before stein releases"
16:28:17 http://grafana.openstack.org/d/Hj5IHcSmz/neutron-failure-rate?orgId=1
16:28:58 njohnston: thx for checking that, so we should be good to switch fullstack to py36 and bionic
16:28:59 yep, one thing we probably want to consider is whether this should be treated like a de facto cycle goal
16:29:13 * slaweq wonders how many new issues we will have on the new Ubuntu release :)
16:29:21 (making sure projects are testing against bionic instead of xenial on master before some point in this cycle)
16:29:42 fungi: so are we switching in this cycle?
16:29:55 or just preparing for the switch
16:29:57 ?
16:30:25 mlavalle: our governance documents state we test on the latest ubuntu lts release
16:30:36 ack
16:30:42 so if we're not, we either need to fix that or update governance
16:31:27 previously we've considered it reasonable to stick with whatever the latest ubuntu lts version was at the start of the cycle, which for rocky was ubuntu xenial
16:32:01 ubuntu bionic was released partway into the rocky cycle, so we stuck with xenial through rocky but should switch to bionic in stein
16:32:13 so we should switch all our jobs to bionic in this cycle, right?
16:32:20 thanks for the clarification
16:32:25 ideally yes
16:32:51 ok, thx fungi
16:32:57 that would be in line with the community goal of py3.6 tests too (they need a distro with it)
16:33:25 during the ubuntu trusty to xenial transition trove ended up with a lot of their jobs still on trusty after other projects released using xenial and that made integration testing really challenging for them. ultimately they needed to update their stable branch testing to xenial which was nontrivial
16:34:21 so I can prepare an etherpad with a list of jobs which we should switch and we will track progress there, what do You think?
16:35:36 that's a good idea
16:35:49 we can review it weekly in this meeting
16:35:56 yes :)
16:36:08 fungi: I was going to write an email to the ML with the neutron stadium as the intended audience, but as a member of the TC perhaps this is something that should come from that level, at least to remind everyone to start making plans for an orderly transition
16:36:15 #action slaweq to prepare etherpad with list of jobs to move to Bionic
16:36:48 s/as a member of the TC/since you are a member of the TC/
16:38:27 ok, moving on?
16:38:39 yes go ahead
16:39:18 so getting back to grafana, we have some issues which I want to talk about but generally it's not very bad
16:39:45 what is worth mentioning is the fact that tempest jobs and scenario jobs have been doing quite well over the last week
16:40:30 that's great!
16:40:52 so let's talk about some issues which we have there
16:40:57 #topic fullstack/functional
16:40:58 * mlavalle flinches for the but....
16:41:10 LOL
16:41:55 There have been many errors like the one reported in https://bugs.launchpad.net/neutron/+bug/1795482 recently in fullstack
16:41:55 Launchpad bug 1795482 in neutron "Deleting network namespaces sometimes fails in check/gate queue with ENOENT" [High,In progress] - Assigned to Brian Haley (brian-haley)
16:42:02 and since the new pyroute2 version was released (0.5.3) there is an issue like in https://bugs.launchpad.net/neutron/+bug/1795548
16:42:02 Launchpad bug 1795548 in neutron "neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase.test_legacy_router_lifecycle* fail" [High,In progress] - Assigned to Brian Haley (brian-haley)
16:42:21 I mention both of them together as haleyb already proposed one patch for both: https://review.openstack.org/#/c/607009/
16:42:30 so we should be good after this is merged
16:43:22 anything else related to fullstack/functional jobs?
16:44:32 * mlavalle will review today
16:44:44 thx mlavalle
16:45:08 if there is nothing else, let's go to the next one
16:45:09 #topic Tempest/Scenario
16:45:31 Patch https://review.openstack.org/#/c/578796/ is waiting for review - it's the last job which is still done using legacy zuul templates,
16:45:35 please review it :)
16:47:16 * mlavalle added it to his pile
16:47:22 thx mlavalle
16:47:38 and one more thing, recently I saw a few times that one of the trunk tests is still failing
16:47:40 for example: one of the trunk tests is failing from time to time, example
16:47:46 http://logs.openstack.org/85/606385/4/check/neutron-tempest-plugin-dvr-multinode-scenario/e7a983b/logs/testr_results.html.gz
16:48:16 so my fix for the long bash command helped but there is still something else going on
16:48:29 slaweq: Since there are no more legacy jobs, does that mean that playbooks/legacy will go away?
16:49:04 njohnston: maybe I forgot to remove it :/
16:51:15 it'll be a satisfying follow-on change
16:51:26 njohnston: ok, thx
16:51:34 looks good, git-checking out the review, legacy/ went away here
16:52:03 oh it's in neutron
16:52:11 speaking about the trunk test issue, I will open a new bug for it but I don't know if I will have time to investigate it
16:52:30 I can take it
16:52:40 #action slaweq will report bug about failing trunk scenario test
16:52:46 ok, thx mlavalle
16:52:59 please ping me once you file the bug
16:53:06 #action mlavalle will check failing trunk scenario test
16:53:08 mlavalle: sure
16:53:11 thx a lot for the help
16:53:29 ok, moving on to the next one
16:53:31 #topic grenade
16:53:43 we still have this issue with the grenade-multinode-dvr job
16:54:24 what I observed yesterday was that the instance created by the grenade test only started pinging around 240-250 seconds after it was created
16:54:43 wow, that is pretty long
16:54:54 4 minutes
16:55:15 today I'm trying to add some additional logs to this job to check iptables/openflow/arp tables on both nodes and in all namespaces, but I didn't do it properly yet
16:55:44 yes, it's around 4 minutes and I don't know why it's like that
16:56:43 I will continue working on this one
16:57:06 ok, so that's all from me
16:57:26 I have one comment
16:58:11 boden is trying to add a vmware-nsx job to the check queue. But for non-stadium projects we said 3rd party CY, right?
16:58:17 CI^^^^
16:58:27 hi
16:58:35 I think it would be good to add it as a 3rd party job
16:58:42 munimeha1: hi
16:58:48 for tap-as-a-service inclusion into Neutron
16:58:49 it requires CI for Grenade coverage
16:58:56 what needs to be done?
16:59:39 We did, and I think the assumption was that we wouldn't have hardware to properly check 3rd party (i.e. no NSX switches).
16:59:54 correct
17:00:17 munimeha1: I think we are out of time now, can You ask about it on neutron-channel?
17:00:24 sure
17:00:26 #endmeeting
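(Editor's note: referring back to the grenade-multinode-dvr debugging discussed in the #topic grenade section above, the sketch below shows the kind of extra diagnostics one might collect on each node and in every network namespace. It is an illustration only, not the actual change slaweq was preparing; the br-int bridge name is an assumption.)

```bash
#!/bin/bash
# Illustrative debug collection only; not taken from the meeting or any patch.
set -x
iptables-save                        # host iptables rules
ovs-ofctl dump-flows br-int          # OpenFlow rules (assumes an integration bridge named br-int)
ip neigh show                        # host ARP/neighbour table
for ns in $(ip netns list | awk '{print $1}'); do
    echo "=== namespace: ${ns} ==="
    ip netns exec "${ns}" iptables-save
    ip netns exec "${ns}" ip neigh show
    ip netns exec "${ns}" ip addr show
done
```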