15:00:25 #startmeeting neutron_ci
15:00:26 Meeting started Wed Jul 15 15:00:25 2020 UTC and is due to finish in 60 minutes. The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:30 The meeting name has been set to 'neutron_ci'
15:00:34 o/
15:01:07 today I'll need a bit of help from you
15:01:13 half o/ (listening to an internal meeting in // )
15:01:22 I know...
15:02:17 ok, let's go
15:02:18 Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:02:22 Please open it now :)
15:02:29 (as usual)
15:02:33 #topic Actions from previous meetings
15:02:40 maciejjozefczyk to check neutron_tempest_plugin.scenario.test_connectivity.NetworkConnectivityTest.test_connectivity_through_2_routers in ovn jobs
15:02:52 heya
15:02:56 yes, I'm looking at this one
15:03:03 cool
15:03:13 I believe I tracked something: https://review.opendev.org/#/c/740491/
15:03:51 I see a job failing
15:03:54 it started failing somewhere around those refactors
15:03:54 https://github.com/ovn-org/ovn/commits/master
15:04:03 so this is not always happening
15:04:05 ovn-northd: Document OVS register usage in logical flows. +
15:04:22 yes, that's strange... for the stable release it works fine
15:04:32 I'm gonna continue investigating what this is
15:04:41 sure, thanks!
15:04:49 I was unable to reproduce it locally so I'm using the gate to do so
15:05:28 and did you have time for the other action?
15:05:29 maciejjozefczyk to check failing neutron-ovn-tempest-full-multinode-ovs-master job
15:06:20 that's the same
15:06:30 this scenario started failing in this particular job
15:06:34 ok, I'll remove it from the logs
15:06:36 kk
15:06:44 my bad, as I said, I'll need help today!
15:06:55 and the last one
15:06:56 slaweq to change condition in the TestNeutronServer to have better logging
15:07:08 I think we can wait two weeks for this one
15:07:31 something to add here?
15:07:59 ok
15:08:00 #topic Stadium projects
15:08:18 ralonsoh: I think this is for logging: https://review.opendev.org/740283
15:08:24 so ongoing
15:08:35 you are right
15:08:38 I didn't see that
15:08:42 I'll add it to the logs
15:09:20 ok, I'll check that after the meeting
15:09:26 about the grenade jobs
15:09:43 neutron-ovn has an ongoing patch
15:09:55 https://review.opendev.org/#/c/729591/
15:10:26 but it is failing due to an unknown variable TEMPEST_CONFIG
15:10:42 if I have time, I'll review this patch this week
15:11:08 Do we have another ongoing patch related to https://etherpad.openstack.org/p/neutron-train-zuulv3-py27drop?
15:11:10 I have a similar one for ODL, I have to go back to it
15:11:22 as I had some reviews from the QA team
15:11:31 link?
15:11:44 https://review.opendev.org/725647
15:11:50 I found it finally.
15:12:09 The job itself is failing, but it was failing previously as well
15:12:26 I had to ping the ODL community again, but they have not been very responsive recently
15:13:24 yeah, I can't help you there, but I can review it later today
15:13:52 ralonsoh: thanks
15:14:13 next item is IPv6-only CI
15:14:18 https://etherpad.opendev.org/p/neutron-stadium-ipv6-testing
15:14:45 but I see most of the patches still in WIP
15:14:51 and not very active
15:15:17 indeed, it has been some time since I last checked that one
15:15:35 as I remember, gmann was/is the driver and he is really busy nowadays
15:15:58 I know
15:16:19 (we need to focus on one single task and finish it)
15:16:53 +1 (happy ideal world)
15:17:06 ok, next topic
15:17:07 #topic Switch to Ubuntu Focal
15:17:15 #link https://etherpad.opendev.org/p/neutron-victoria-switch_to_focal
15:17:23 I rebased https://review.opendev.org/#/c/734304/
15:17:36 we merged the patch installing ovs from PyPI
15:17:51 that means the problem we had in this patch should be solved
15:18:24 (the CI is too busy and 734304 doesn't start)
15:19:04 #link https://review.opendev.org/#/c/737370/
15:19:15 is blocked due to https://review.opendev.org/#/c/734700/
15:19:48 until we have the jobs migrated in tempest, we can't continue in neutron
15:20:09 and the same for ODL
15:20:12 #link https://review.opendev.org/#/c/736703/
15:20:57 for tempest jobs a depends-on 734700 should work (at least that was the recommended way to test in the last progress report)
15:21:28 either that or I got confused in my review ids
15:22:26 bcafarel, and in Neutron the patch is working: https://review.opendev.org/#/c/737370/
15:22:33 "neutron-tempest-dvr-ha-multinode-full" is passing
15:23:26 I think we should just wait for the patch in tempest to land
15:24:40 do we have something else in this topic?
15:25:09 ok, let's move on then
15:25:11 #topic Stable branches
15:25:20 Ussuri dashboard: http://grafana.openstack.org/d/pM54U-Kiz/neutron-failure-rate-previous-stable-release?orgId=1
15:25:37 bcafarel, I'll need your expertise here
15:25:57 and the Train dashboard: http://grafana.openstack.org/d/dCFVU-Kik/neutron-failure-rate-older-stable-release?orgId=1
15:26:25 ok, so the good news part is: gates are back in shape in all (neutron) stable branches!
15:26:26 if I'm not wrong, the pep8 patches are merged
15:26:39 cool
15:27:06 which may be part of the gates backlog; I sent a few rechecks this morning on top of them
15:28:02 well, I see the recheck queue is almost at 100%
15:28:02 on the not-so-nice side, any other project wanting green stable gates needs to backport that isort fix until they release a new version
15:28:42 bcafarel, I'm tracking the pylint project; once they release a version supporting the new isort release, I'll update requirements
15:28:56 in all needed branches
15:29:19 ralonsoh++ quite a few folks will really appreciate that one, I think :)
15:29:32 apart from this, something remarkable this week in the stable branches?
15:29:56 not much, we should have more interesting data next week
15:30:04 (the gate fixes are too recent here)
15:30:08 sure
15:30:12 thanks a lot
15:30:38 ok, let's move on
15:30:42 #topic Grafana
15:30:50 #link http://grafana.openstack.org/d/Hj5IHcSmz/neutron-failure-rate?orgId=1
15:31:09 I don't see anything to be commented on here
15:31:56 well yes, neutron-ovn-tempest-ovs-master-fedora
15:32:00 still broken
15:32:27 * ralonsoh review neutron-ovn-tempest-ovs-master-fedora periodic job
15:32:42 I'll check it later
15:33:12 I don't have the cool script slawek uses to see the number of rechecks
15:34:08 anyway, something else to add here?
15:34:41 ok, next topic
15:34:43 #topic fullstack/functional
15:35:01 I filed a bug in oslo.privsep
15:35:05 #link https://bugs.launchpad.net/oslo.privsep/+bug/1887506
15:35:05 Launchpad bug 1887506 in oslo.privsep "Privileged daemon should not monkey patch "os", "threading" or "socket" libraries" [Undecided,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
15:35:40 the goal is to un-monkey-patch the privileged daemon to avoid the timeouts we have during the FT execution
15:35:57 #link https://review.opendev.org/#/c/740970/
15:36:55 wow, that looks nice
15:37:09 well, I know it works locally
15:37:16 but that means nothing, you know
15:37:37 I have https://review.opendev.org/#/c/741017/ to check this patch in the Neutron CI
15:38:01 I have another log from the previous meeting
15:38:02 neutron.tests.functional.test_server.TestWsgiServer.test_restart_wsgi_on_sighup_multiple_workers
15:38:06 https://f7a63aeb9edd557a2176-4740624f0848c8c3257f704064a4516f.ssl.cf2.rackcdn.com/736026/4/gate/neutron-functional/d7d5c47/testr_results.html
15:38:34 I don't see this error assigned, so I'll take it (if I have some time this week)
15:38:58 maybe have something like https://review.opendev.org/#/c/741017/ with all tests disabled (except functional/FT) and run a good number of rechecks?
15:39:33 hmmm, right!
15:39:33 that would give more confidence in the "un-monkey-patching fixes the timeouts" theory
15:39:55 you are right, I'll comment out the rest of the jobs there
15:40:35 I have nothing on the fullstack plate
15:40:39 do you have something?
15:41:09 ok, next topic
15:41:11 #topic Tempest/Scenario
15:41:17 #link https://review.opendev.org/#/c/736186/
15:41:24 slawek left this patch to be reviewed
15:41:41 aaaand I see that it was merged!
15:41:49 sorry, I'll remove it now
15:41:59 the best review, W+1
15:42:16 and the next one
15:42:19 #link https://review.opendev.org/#/c/739955/
15:42:33 it makes sense, I'll review it now
15:43:32 and we also have the problem with the ovn tempest job
15:43:36 but this is already addressed
15:43:56 something else here?
15:44:20 ok, and the last topic
15:44:21 not from me
15:44:33 #topic Periodic
15:44:34 http://zuul.openstack.org/buildsets?project=openstack%2Fneutron&pipeline=periodic&branch=master
15:44:43 as commented, the Fedora job
15:44:50 I'll try to review it this week
15:45:19 (any help will be welcomed)
15:45:30 ovn fedora periodic?
15:45:41 neutron-ovn-tempest-ovs-master-fedora
15:45:53 http://grafana.openstack.org/d/Hj5IHcSmz/neutron-failure-rate?orgId=1
15:46:10 In the "periodic jobs" window
15:46:51 opt/stack/ovn/controller/ovn-controller.c:505: undefined reference to `ovsdb_idl_reset_min_index'
15:47:04 https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-master-fedora
15:47:16 aight, ok
15:47:20 I'll fix that tomorrow morning
15:47:22 sorry for the late fix
15:47:26 hahahaha
15:47:27 np!
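
A quick way to summarize the recent history of that periodic job from the Zuul builds link above, without waiting for the Grafana dashboard: the sketch below assumes the public Zuul REST endpoint at https://zuul.openstack.org/api/builds, its job_name and limit query parameters, and a "result" field in each returned build; it is not part of any existing neutron tooling, just an illustration.

```python
# Hedged sketch: count the results of the last few runs of the failing
# periodic job via the (assumed) Zuul builds API, see the note above.
import collections

import requests

API = "https://zuul.openstack.org/api/builds"
params = {"job_name": "neutron-ovn-tempest-ovs-master-fedora", "limit": 30}

builds = requests.get(API, params=params, timeout=30).json()
counts = collections.Counter(build.get("result") for build in builds)
print(counts)  # e.g. Counter({'FAILURE': 28, 'SUCCESS': 2})
```

The same endpoint presumably also accepts a pipeline filter, which would match the periodic buildsets view linked earlier in this topic.
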
15:47:38 yeah, I saw this problem before
15:47:43 what is this?
15:47:47 it's a problem with the OVN_BRANCH / OVS_BRANCH combination
15:47:53 ahhhh ok
15:48:04 I saw we are using OVN 20.06 now
15:48:10 right?
15:48:17 it's not yet merged
15:48:35 ok
15:48:48 so in this job it is OVN_BRANCH=master and OVS_BRANCH=51e9479da62edb04a5be47a7655de75c299b9fa1
15:49:19 so it's broken because of that: OVN master refers to things that are not part of that OVS hash
15:49:22 I'll fix that tomorrow, thanks!
15:49:33 maciejjozefczyk, thanks to you
15:50:32 that's all for today
15:50:57 \o
15:50:58 do you have something else to add?
15:51:17 thank you all
15:51:21 #endmeeting
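
A closing note on the oslo.privsep bug (1887506) raised in the fullstack/functional topic: the sketch below only illustrates the general idea behind "un-monkey-patching" a process, assuming eventlet's monkey_patch() keyword semantics (modules explicitly passed as False are skipped while the remaining supported modules are still patched); it is not the actual patch proposed in https://review.opendev.org/#/c/740970/.

```python
# Illustration only: selective eventlet monkey patching.  The functional test
# timeouts are attributed to the privileged daemon inheriting the green
# (monkey-patched) os/threading/socket modules; keeping the native versions
# available is the idea behind bug 1887506.
import eventlet
from eventlet import patcher

# Modules explicitly passed as False are left untouched; everything else that
# eventlet supports (time, select, ...) is still patched.
eventlet.monkey_patch(os=False, thread=False, socket=False)

import socket

print(patcher.is_monkey_patched(socket))    # False - native socket kept
print(patcher.is_monkey_patched('select'))  # True  - still green-patched
```
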