15:00:14 #startmeeting neutron_ci
15:00:15 Meeting started Wed Sep 23 15:00:14 2020 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:19 The meeting name has been set to 'neutron_ci'
15:00:21 hi
15:01:08 Hi
15:01:27 o/
15:01:40 ok, I think we can start
15:01:47 Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:01:48 Please open now :)
15:02:36 #topic Actions from previous meetings
15:02:42 ralonsoh to check timing out neutron-ovn-tempest-full-multinode-ovs-master jobs - bug https://bugs.launchpad.net/neutron/+bug/1886807
15:02:53 slaweq: Error: Could not gather data from Launchpad for bug #1886807 (https://launchpad.net/bugs/1886807). The error has been logged
15:03:18 slaweq, sorry, I didn't start this task
15:03:34 ralonsoh: np
15:03:42 should I assign this one to You for next week?
15:03:47 yes
15:03:49 probably the bot complaining launchpad is slooow
15:04:05 #action ralonsoh to check timing out neutron-ovn-tempest-full-multinode-ovs-master jobs - bug https://bugs.launchpad.net/neutron/+bug/1886807
15:04:15 slaweq: Error: Could not gather data from Launchpad for bug #1886807 (https://launchpad.net/bugs/1886807). The error has been logged
15:04:15 thx ralonsoh
15:04:17 ok, next one
15:04:19 ralonsoh to check issue with pep8 failures like https://zuul.opendev.org/t/openstack/build/6c8fbf9b97b44139bf1d70b9c85455bb
15:04:56 yes, I pushed a patch for this one
15:05:03 and Oleg another one on top of mine
15:05:04 let me check
15:05:27 https://review.opendev.org/#/c/748594/
15:05:39 https://review.opendev.org/#/c/749320/
15:05:49 No more issues reported
15:06:10 thx ralonsoh :)
15:06:21 ok, and the last one from last week
15:06:23 bcafarel to check failing periodic fedora job
15:06:52 * bcafarel looks for links
15:07:24 short version fedora jobs are broken on f31, infra aims to switch to 32
15:07:39 bcafarel: and on 32 it works, right?
15:07:47 if yes, I'm ok with that
15:08:02 with https://review.opendev.org/#/c/750292 as depends-on that job works
15:08:07 f31 is out of support in ~2 months anyway
15:08:17 one month after the release of f33
15:08:31 bcafarel++ ok, thx for checking
15:08:41 but the devstack fedora job fails in linked review on neutron-dhcp-agent issues (quite a few "no such file or directory" on pid and conf files...)
15:10:16 ouch
15:10:18 https://zuul.opendev.org/t/openstack/build/d7d455eadaac4f5081fca5dd7197b6ea/log/controller/logs/screen-q-dhcp.txt?severity=4
15:10:49 it can be something with python 3.8 maybe
15:10:57 as f32 is using that version of python already
15:11:43 hmm, ubuntu focal also uses python 3.8
15:12:00 focal is on 3.8 too (just checked)
15:12:03 and we have there errors like https://zuul.opendev.org/t/openstack/build/af52abc771a541548d8e528af79a524d/log/controller/logs/screen-q-dhcp.txt?severity=4
15:12:10 but job passed
15:12:26 anyone wants to report bug and investigate that?
15:12:48 maybe next week
15:13:20 ok, so I will report a bug and we will see next week
15:13:27 it's not critical for us I think
15:13:40 but worth having it tracked
15:13:57 #action slaweq to report bug with neutron-dhcp-agent and Fedora 32
15:13:58 TBH, this should not be logged as an error
15:14:15 ralonsoh: so maybe that's a red herring
15:14:22 we are just checking if the interface exists and has MAC
15:14:30 could be, yes
15:15:04 Then perhaps warning should be enough?
15:15:11 or even debug
15:15:21 info
15:15:28 we return false
15:15:34 not an exception
15:15:40 we should inform about this
15:15:55 that could be used in an active wait, waiting for the interface to be ready
15:16:39 I ran a quick search on logstash and from what I see it happens quite often, but the jobs are passing
15:16:45 ok, but still there are failed tests on that job
15:16:56 and we have to check why so many tests are failing due to ssh issue
15:17:02 I will report this bug and we will see
15:17:18 could it be similar to failing tests we see in focal?
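[editor's note: the exchange above argues that a failed interface/MAC check should log at debug rather than error, because callers poll it in an active wait. A minimal sketch of that pattern, with hypothetical names (`interface_ready`, `wait_for_interface`, `get_mac`) that are illustrative only and not Neutron's actual API:]

```python
import logging
import time

LOG = logging.getLogger(__name__)

def interface_ready(get_mac, device):
    """Hypothetical check: True if the device exists and has a MAC."""
    mac = get_mac(device)
    if mac is None:
        # Not an error: callers poll this in an active wait, so a
        # missing device is an expected transient state (log at debug).
        LOG.debug("Device %s not ready yet (no MAC)", device)
        return False
    return True

def wait_for_interface(get_mac, device, timeout=10, interval=0.5):
    """Active wait until the interface is ready or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if interface_ready(get_mac, device):
            return True
        time.sleep(interval)
    return False
```

The check returns False instead of raising, matching the "we return false, not an exception" point above.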
15:17:19 ralonsoh: if You have time please report a bug for this logging thing
15:17:23 sure
15:17:32 bcafarel: nah, those are other tests
15:17:40 so I don't think it's the same thing really
15:18:34 it would have been too easy to have one single root cause :(
15:18:46 bcafarel: exactly
15:18:53 ok, I think we can move on now
15:18:56 #topic Switch to Ubuntu Focal
15:19:05 Etherpad: https://etherpad.opendev.org/p/neutron-victoria-switch_to_focal
15:19:09 I updated it a bit today
15:19:32 I know that gmann is going to pull the trigger and switch all devstack based jobs to focal today
15:20:01 so we may have problems with some jobs and in such a case we should propose patch(es) to switch unstable jobs to bionic temporarily
15:20:09 and then focus on debugging those issues
15:20:10 yeah these are on gate now https://review.opendev.org/#/c/731207/ https://review.opendev.org/#/c/734700/
15:21:25 for unstable neutron-tempest-plugin scenario jobs I proposed today https://review.opendev.org/#/c/753552/
15:21:32 hopefully it will help
15:21:34 but let's see
15:22:41 if https://review.opendev.org/#/c/738163/ passes fine on top of it it will be nice
15:22:50 sorry https://review.opendev.org/#/c/748367
15:23:02 (too many tabs with "Focal" in it...)
15:23:47 thx bcafarel for taking care of it
15:23:55 any other updates regarding this topic?
15:24:47 did the review-priority patches get merged? after the jobs are switched it may be a bit longer to merge stuff
15:25:12 bcafarel: which review-priority patches?
15:25:44 https://review.opendev.org/746347 / https://review.opendev.org/747774
15:26:29 hopefully we can get them in with all the focal switch + ovs hash issue
15:26:31 no they are failing with the ovs build issue :-(
15:27:05 yep, both failed on ovn jobs :/
15:27:42 ok, I think we can move on
15:27:45 #topic Stadium projects
15:27:54 anything new regarding ci of stadium projects?
15:28:59 nothing from my side
15:29:23 ok, thx lajoskatona
15:29:25 so lets move on
15:29:30 #topic Stable branches
15:29:40 anything here? bcafarel?
15:29:41 we may see some activity on not spotted focal issues there, but so far so good
15:29:46 (late answer for stadium projects)
15:30:31 for stable, a few rechecks here and there needed, but backports got in nicely in last week (up to queens)
15:31:30 bcafarel, hmmmmm
15:31:31 https://review.opendev.org/#/c/752387/
15:31:35 still waiting!!
15:31:53 ok, sorry, the OVN CI jobs
15:32:11 :)
15:32:30 ok, so lets move on
15:32:31 ah ussuri yep it probably is broken now
15:32:32 #topic Grafana
15:33:57 #link http://grafana.openstack.org/d/Hj5IHcSmz/neutron-failure-rate?orgId=1
15:34:35 what worries me is neutron-functional-with-uwsgi job which is failing pretty often recently
15:34:53 but when I was checking some jobs' results today, it was mostly some ovs or db timeouts
15:34:59 agree with this
15:35:09 so looked for me that this may be just overloaded infra maybe
15:35:50 so unless You have some specific example of issues to investigate there, I would say to wait few more days and see how it will be
15:37:41 and that is basically all from my side for today
15:38:21 do You have anything else You want to discuss today?
15:38:26 no thanks
15:38:32 if not I will give You 20 minutes back :)
15:39:06 ok, so thx for attending the meeting
15:39:11 and see You online :)
15:39:13 o/
15:39:15 o/ and let's see how focal goes
15:39:18 Bye
15:39:19 #endmeeting