15:00:25 <ralonsoh> #startmeeting neutron_ci
15:00:26 <openstack> Meeting started Wed Jul 15 15:00:25 2020 UTC and is due to finish in 60 minutes.  The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:27 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:30 <openstack> The meeting name has been set to 'neutron_ci'
15:00:34 <lajoskatona> o/
15:01:07 <ralonsoh> today I'll need a bit of help from you
15:01:13 <bcafarel> half o/ (listening to an internal meeting in // )
15:01:22 <ralonsoh> I know...
15:02:17 <ralonsoh> ok, let's go
15:02:18 <ralonsoh> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:02:22 <ralonsoh> Please open now :)
15:02:29 <ralonsoh> (as usual)
15:02:33 <ralonsoh> #topic Actions from previous meetings
15:02:40 <ralonsoh> maciejjozefczyk to check neutron_tempest_plugin.scenario.test_connectivity.NetworkConnectivityTest.test_connectivity_through_2_routers in ovn jobs
15:02:52 <maciejjozefczyk> heya
15:02:56 <maciejjozefczyk> yes, I'm looking at this one
15:03:03 <ralonsoh> cool
15:03:13 <maciejjozefczyk> I believe I tracked something https://review.opendev.org/#/c/740491/
15:03:51 <ralonsoh> I see a job failing
15:03:54 <maciejjozefczyk> it started failing somewhere around those refactors
15:03:54 <maciejjozefczyk> https://github.com/ovn-org/ovn/commits/master
15:04:03 <ralonsoh> so this is not always happening
15:04:05 <maciejjozefczyk> ovn-northd: Document OVS register usage in logical flows.
15:04:22 <maciejjozefczyk> yes, that's strange... for the stable release it works fine
15:04:32 <maciejjozefczyk> I'm going to continue investigating what this is
15:04:41 <ralonsoh> sure thanks!
15:04:49 <maciejjozefczyk> I was unable to reproduce it locally so I'm using gate to do so
15:05:28 <ralonsoh> and did you have time for the other action?
15:05:29 <ralonsoh> maciejjozefczyk to check failing neutron-ovn-tempest-full-multinode-ovs-master job
15:06:20 <maciejjozefczyk> that's the same
15:06:30 <maciejjozefczyk> this scenario started failing in this particular job
15:06:34 <ralonsoh> ok, I'll remove it from the logs
15:06:36 <maciejjozefczyk> kk
15:06:44 <ralonsoh> my bad, as I said, I'll need help today!
15:06:55 <ralonsoh> and the last one
15:06:56 <ralonsoh> slaweq to change condition in the TestNeutronServer to have better logging
15:07:08 <ralonsoh> I think we can wait two weeks for this one
15:07:31 <ralonsoh> something to add here?
15:07:59 <ralonsoh> ok
15:08:00 <ralonsoh> #topic Stadium projects
15:08:18 <lajoskatona> ralonsoh: I think this is for logging: https://review.opendev.org/740283
15:08:24 <lajoskatona> so ongoing
15:08:35 <ralonsoh> you are right
15:08:38 <ralonsoh> I didn't see that
15:08:42 <ralonsoh> I'll add it to the logs
15:09:20 <ralonsoh> ok, I'll check that after the meeting
15:09:26 <ralonsoh> about the grenade jobs
15:09:43 <ralonsoh> neutron-ovn has an ongoing patch
15:09:55 <ralonsoh> https://review.opendev.org/#/c/729591/
15:10:26 <ralonsoh> but it is failing due to an undefined TEMPEST_CONFIG variable
15:10:42 <ralonsoh> if I have time, I'll review this patch this week
15:11:08 <ralonsoh> Do we have another ongoing patch related to https://etherpad.openstack.org/p/neutron-train-zuulv3-py27drop?
15:11:10 <lajoskatona> I have a similar one for ODL, I have to go back to it
15:11:22 <lajoskatona> as I had some reviews from the QA team
15:11:31 <ralonsoh> link?
15:11:44 <lajoskatona> https://review.opendev.org/725647
15:11:50 <lajoskatona> I found it finally.
15:12:09 <lajoskatona> The job itself is failing, but it was failing previously as well
15:12:26 <lajoskatona> I had to ping the ODL community again, but they haven't been very responsive recently
15:13:24 <ralonsoh> yeah, I can't help you there, but I can review it later today
15:13:52 <lajoskatona> ralonsoh: thanks
15:14:13 <ralonsoh> next item is IPv6-only CI
15:14:18 <ralonsoh> https://etherpad.opendev.org/p/neutron-stadium-ipv6-testing
15:14:45 <ralonsoh> but I see most of the patches still in WIP
15:14:51 <ralonsoh> and not very active
15:15:17 <bcafarel> indeed it has been some time since I last checked that one
15:15:35 <lajoskatona> as I remember, gmann was/is the driver and he is really busy nowadays
15:15:58 <ralonsoh> I know
15:16:19 <ralonsoh> (we need to focus on one single task and finish it)
15:16:53 <lajoskatona> +1 (happy ideal world)
15:17:06 <ralonsoh> ok, next topic
15:17:07 <ralonsoh> #topic Switch to Ubuntu Focal
15:17:15 <ralonsoh> #link https://etherpad.opendev.org/p/neutron-victoria-switch_to_focal
15:17:23 <ralonsoh> I rebased https://review.opendev.org/#/c/734304/
15:17:36 <ralonsoh> we merged the patch installing ovs from PyPI
15:17:51 <ralonsoh> that means the problem we had in this patch should be solved
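(For context: the OVS Python bindings are published on PyPI as "ovs", so whether the merged patch pins them in requirements or installs them during job setup, the effect is roughly a line like the following; the version bound here is illustrative, not the one in the patch.)

    # requirements sketch, version bound is illustrative
    ovs>=2.12.0  # OVS Python bindings from PyPI, no source build needed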
15:18:24 <ralonsoh> (the CI is too busy and 734304 doesn't start)
15:19:04 <ralonsoh> #link https://review.opendev.org/#/c/737370/
15:19:15 <ralonsoh> is blocked due to https://review.opendev.org/#/c/734700/
15:19:48 <ralonsoh> until the jobs are migrated in tempest, we can't continue in neutron
15:20:09 <ralonsoh> and the same for ODL
15:20:12 <ralonsoh> #link https://review.opendev.org/#/c/736703/
15:20:57 <bcafarel> for tempest jobs depends-on 734700 should work (at least that was the recommended way to test in last progress report)
15:21:28 <bcafarel> either that or I got confused in my review ids
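(For reference, exercising the unmerged tempest change from a neutron patch is done with Zuul's Depends-On footer in the commit message, e.g.:)

    Depends-On: https://review.opendev.org/#/c/734700/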
15:22:26 <ralonsoh> bcafarel, and in Neutron the patch is working https://review.opendev.org/#/c/737370/
15:22:33 <ralonsoh> "neutron-tempest-dvr-ha-multinode-full" is passing
15:23:26 <ralonsoh> I think we should just wait for the patch in tempest to land
15:24:40 <ralonsoh> do we have something else in this topic?
15:25:09 <ralonsoh> ok, let's move then
15:25:11 <ralonsoh> #topic Stable branches
15:25:20 <ralonsoh> Ussuri dashboard: http://grafana.openstack.org/d/pM54U-Kiz/neutron-failure-rate-previous-stable-release?orgId=1
15:25:37 <ralonsoh> bcafarel, I'll need your expertise here
15:25:57 <ralonsoh> and in Train dashboard: http://grafana.openstack.org/d/dCFVU-Kik/neutron-failure-rate-older-stable-release?orgId=1
15:26:25 <bcafarel> ok so the good news part is, gates are back in shape in all (neutron) stable branches!
15:26:26 <ralonsoh> if I'm not wrong, the pep8 patches are merged
15:26:39 <ralonsoh> cool
15:27:06 <bcafarel> which may be adding to the gate backlog: I sent a few rechecks this morning on top of them
15:28:02 <ralonsoh> well, I see the recheck queue is almost at 100%
15:28:02 <bcafarel> on the not-so-nice side, any other project wanting green stable gates needs to backport that isort fix until they release a new version
15:28:42 <ralonsoh> bcafarel, I'm tracking the pylint project, once they release a version supporting the new isort release, I'll update requirements
15:28:56 <ralonsoh> in all needed branches
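(For context, the isort fix being backported is essentially a version cap in the linter requirements until a pylint release that supports the new isort is available; a sketch, with illustrative bounds:)

    # test-requirements sketch, version bounds are illustrative
    isort<5    # the new isort 5.x broke the currently pinned pylint release
    pylint     # bump once a release supporting isort 5 is out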
15:29:19 <bcafarel> ralonsoh++ quite a few folks will really appreciate that one I think :)
15:29:32 <ralonsoh> apart from this, something remarkable this week in the stable branches?
15:29:56 <bcafarel> not much, we should have more interesting data next week
15:30:04 <bcafarel> (gates fixes are too recent here)
15:30:08 <ralonsoh> sure
15:30:12 <ralonsoh> thanks a lot
15:30:38 <ralonsoh> ok, let's move on
15:30:42 <ralonsoh> #topic Grafana
15:30:50 <ralonsoh> #link http://grafana.openstack.org/d/Hj5IHcSmz/neutron-failure-rate?orgId=1
15:31:09 <ralonsoh> I don't see anything to be commented here
15:31:56 <ralonsoh> well, yes: neutron-ovn-tempest-ovs-master-fedora
15:32:00 <ralonsoh> still broken
15:32:27 * ralonsoh to review neutron-ovn-tempest-ovs-master-fedora periodic job
15:32:42 <ralonsoh> I'll check it later
15:33:12 <ralonsoh> I don't have the cool script slaweq uses to see the number of rechecks
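(slaweq's script isn't in the log; a rough, purely hypothetical sketch of one way to count "recheck" comments on a change through the Gerrit REST API, where the helper name and the example change number are illustrative:)

    import json
    import urllib.request

    GERRIT = "https://review.opendev.org"

    def count_rechecks(change_number):
        # The /detail endpoint includes the change messages; Gerrit prefixes
        # the JSON payload with ")]}'" to prevent XSSI, so drop the first line.
        with urllib.request.urlopen(f"{GERRIT}/changes/{change_number}/detail") as resp:
            body = resp.read().decode()
        data = json.loads(body.split("\n", 1)[1])
        return sum("recheck" in msg.get("message", "").lower()
                   for msg in data.get("messages", []))

    print(count_rechecks(740970))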
15:34:08 <ralonsoh> anyway, something else to add here?
15:34:41 <ralonsoh> ok, next topic
15:34:43 <ralonsoh> #topic fullstack/functional
15:35:01 <ralonsoh> I filed a bug against oslo.privsep
15:35:05 <ralonsoh> #link https://bugs.launchpad.net/oslo.privsep/+bug/1887506
15:35:05 <openstack> Launchpad bug 1887506 in oslo.privsep "Privileged daemon should not monkey patch "os", "threading" or "socket" libraries" [Undecided,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
15:35:40 <ralonsoh> the goal is to un-monkey-patch the privileged daemon to avoid the timeouts we have during the FTs execution
15:35:57 <ralonsoh> #link https://review.opendev.org/#/c/740970/
15:36:55 <bcafarel> wow that looks nice
15:37:09 <ralonsoh> well, I know it works locally
15:37:16 <ralonsoh> but that means nothing, you know
15:37:37 <ralonsoh> I have https://review.opendev.org/#/c/741017/ to check this patch in the Neutron CI
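(To make the idea concrete: the Neutron processes monkey-patch the standard library via eventlet, and the bug argues the forked privsep daemon should keep the original modules. A minimal sketch of the mechanism only, not the actual oslo.privsep patch, assuming eventlet is installed:)

    import eventlet

    # What the Neutron processes do at startup: green-patch the standard
    # library so os/threading/socket cooperate with eventlet.
    eventlet.monkey_patch()

    # eventlet can report what has been green-patched...
    for name in ("os", "threading", "socket"):
        print(name, "patched:", eventlet.patcher.is_monkey_patched(name))

    # ...and hand back the original, unpatched modules; the gist of the fix
    # is that the privileged daemon should use these instead of the green
    # versions, avoiding the timeouts seen in the functional tests.
    real_threading = eventlet.patcher.original("threading")
    real_os = eventlet.patcher.original("os")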
15:38:01 <ralonsoh> I have another log from the previous meeting
15:38:02 <ralonsoh> neutron.tests.functional.test_server.TestWsgiServer.test_restart_wsgi_on_sighup_multiple_workers
15:38:06 <ralonsoh> https://f7a63aeb9edd557a2176-4740624f0848c8c3257f704064a4516f.ssl.cf2.rackcdn.com/736026/4/gate/neutron-functional/d7d5c47/testr_results.html
15:38:34 <ralonsoh> I don't see this error assigned, so I'll take it (if I have some time this week)
15:38:58 <bcafarel> maybe have something like https://review.opendev.org/#/c/741017/ with all tests disabled (except functional/FT) and run a good number of rechecks?
15:39:33 <ralonsoh> hmmm right!
15:39:33 <bcafarel> that would give more confidence that the un-monkey-patching fixes the timeouts
15:39:55 <ralonsoh> you are right, I'll comment out the rest of the jobs there
15:40:35 <ralonsoh> I have nothing in the fullstack plate
15:40:39 <ralonsoh> do you have something?
15:41:09 <ralonsoh> ok, next topic
15:41:11 <ralonsoh> #topic Tempest/Scenario
15:41:17 <ralonsoh> #link https://review.opendev.org/#/c/736186/
15:41:24 <ralonsoh> slaweq left this patch to be reviewed
15:41:41 <ralonsoh> aaaand I see that was merged!
15:41:49 <ralonsoh> sorry, I'll remove it now
15:41:59 <bcafarel> the best review, W+1
15:42:16 <ralonsoh> and the next one
15:42:19 <ralonsoh> #link https://review.opendev.org/#/c/739955/
15:42:33 <ralonsoh> it makes sense, I'll review it now
15:43:32 <ralonsoh> and we have also the problem with the ovn tempest job
15:43:36 <ralonsoh> but this is already addressed
15:43:56 <ralonsoh> something else here?
15:44:20 <ralonsoh> ok, and last topic
15:44:21 <bcafarel> not from me
15:44:33 <ralonsoh> #topic Periodic
15:44:34 <ralonsoh> http://zuul.openstack.org/buildsets?project=openstack%2Fneutron&pipeline=periodic&branch=master
15:44:43 <ralonsoh> as commented, the Fedora job
15:44:50 <ralonsoh> I'll try to review it this week
15:45:19 <ralonsoh> (any help will be welcomed)
15:45:30 <maciejjozefczyk> ovn fedora periodic?
15:45:41 <ralonsoh> neutron-ovn-tempest-ovs-master-fedora
15:45:53 <ralonsoh> http://grafana.openstack.org/d/Hj5IHcSmz/neutron-failure-rate?orgId=1
15:46:10 <ralonsoh> In the "periodic jobs" window
15:46:51 <bcafarel> opt/stack/ovn/controller/ovn-controller.c:505: undefined reference to `ovsdb_idl_reset_min_index'
15:47:04 <maciejjozefczyk> https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-master-fedora
15:47:16 <maciejjozefczyk> aight. ok
15:47:20 <maciejjozefczyk> I'll fix that tomorrow morning
15:47:22 <maciejjozefczyk> sorry for late fix
15:47:26 <ralonsoh> hahahaha
15:47:27 <ralonsoh> np!
15:47:38 <maciejjozefczyk> yeah, I saw this problem before
15:47:43 <ralonsoh> what is this?
15:47:47 <maciejjozefczyk> it's a problem with the OVN_BRANCH / OVS_BRANCH combination
15:47:53 <ralonsoh> ahhhh ok
15:48:04 <ralonsoh> I saw we are using ovn 20.06 now
15:48:10 <ralonsoh> right?
15:48:17 <maciejjozefczyk> it's not yet merged
15:48:35 <maciejjozefczyk> ok
15:48:48 <maciejjozefczyk> so in this job it is OVN_BRANCH=master and OVS_BRANCH=51e9479da62edb04a5be47a7655de75c299b9fa1
15:49:19 <maciejjozefczyk> so it's broken because of that: OVN master refers to things that are not part of that OVS hash
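(For reference, these are devstack variables set in the job definition; a rough sketch of the shape of the fix, where the bumped OVS_BRANCH value is an assumption, the real one has to be a commit compatible with OVN master:)

    # zuul job vars sketch, the new OVS_BRANCH value is illustrative
    - job:
        name: neutron-ovn-tempest-ovs-master-fedora
        vars:
          devstack_localrc:
            OVN_BRANCH: master
            OVS_BRANCH: master   # was pinned to 51e9479da62..., which lacks symbols OVN master needs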
15:49:22 <maciejjozefczyk> I'll fix that tomorrow, thanks!
15:49:33 <ralonsoh> maciejjozefczyk, thanks to you
15:50:32 <ralonsoh> that's all for today
15:50:57 <maciejjozefczyk> \o
15:50:58 <ralonsoh> do you have something else to add?
15:51:17 <ralonsoh> thank you all
15:51:21 <ralonsoh> #endmeeting