15:00:12 #startmeeting neutron_ci
15:00:12 Meeting started Tue Jul 26 15:00:12 2022 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:12 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:12 The meeting name has been set to 'neutron_ci'
15:00:14 o/
15:00:16 o/
15:00:53 o/
15:01:10 let's wait for other folks too
15:01:24 ralonsoh: ykarel bcafarel ping, neutron CI meeting
15:01:38 si sorry
15:01:41 hi*
15:02:31 ok, let's start
15:02:36 Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:02:37 Please open now :)
15:02:44 #topic Actions from previous meetings
15:02:50 o/
15:02:53 slaweq to fix functional/fullstack failures on centos 9 stream: https://bugs.launchpad.net/neutron/+bug/1976323
15:03:02 still no progress with that one, but I will get to it :)
15:03:07 #action slaweq to fix functional/fullstack failures on centos 9 stream: https://bugs.launchpad.net/neutron/+bug/1976323
15:03:14 next one
15:03:16 slaweq to move fedora periodic job to centos9 stream
15:03:33 no progress yet, but I just started working on it a few minutes ago
15:03:39 so I hope I will have something soon
15:03:43 #action slaweq to move fedora periodic job to centos9 stream
15:04:02 next one
15:04:05 lajoskatona to open LP bugs related to the stable branches timeouts
15:04:32 I think I have, just a sec
15:04:59 https://review.opendev.org/q/119b82f1b10df50a696e936e80672c7ddb00436f
15:05:00 https://bugs.launchpad.net/neutron/+bug/1982206
15:05:06 thanks ralonsoh
15:05:14 ++
15:05:16 thx
15:05:28 next one then
15:05:29 mlavalle to report issue with propose-translation-update job and investigate that issue
15:05:37 I did investigate
15:05:53 this job is failing massively for all the projects
15:06:11 since this merged: https://review.opendev.org/c/zuul/zuul-jobs/+/846248
15:06:52 so I proposed a revert. Not because I necessarily thought that was the best solution, but to start a discussion: https://review.opendev.org/c/zuul/zuul-jobs/+/850917
15:07:01 Revert of revert :-)
15:07:12 there is an alternative proposal: https://review.opendev.org/c/openstack/project-config/+/850962
15:07:37 and indeed I think I got the conversation going: https://review.opendev.org/c/openstack/project-config/+/850962
15:07:48 so we are getting there
15:08:05 thx mlavalle for taking care of it
15:08:24 with that we checked all action items from last week
15:08:25 so most likely I will end up abandoning the revert of the revert and hopefully will find a solution with the latest patch from Ian
15:08:35 ++
15:08:41 +1
15:08:43 +1
15:08:52 #topic Stable branches
15:08:58 bcafarel: any updates here?
15:08:59 as I said, the revert of the revert was more a provocation than anything else
15:09:21 :-)
15:09:24 as long as the "revert of the revert of the revert..." line still fits on the screen :)
15:10:06 for stable branches please check this for train: https://review.opendev.org/c/openstack/requirements/+/850828
15:10:10 for stable branches, we have these timeout issues - some in UT, fixed by lajoskatona and to be backported, and others still being looked at, as they time out during installation of packages
15:10:32 https://bugs.launchpad.net/neutron/+bug/1982206 and https://bugs.launchpad.net/neutron/+bug/1982720 (that latter one breaking train grenade)
15:11:42 ahh ok, so we have a bug for the train issue, I forgot that
15:11:43 so grenade timeout happens only on train?
15:13:14 at least from what I have seen, yes
15:13:25 ~2 hours to get past the "Processing triggers for libc-bin" step apparently :/
15:13:53 https://zuul.opendev.org/t/openstack/build/1a4f23e400a3491b88b161e50878753a/log/job-output.txt#2498
15:14:06 regarding this grenade job, is this really a neutron issue?
15:14:57 good question
15:15:29 for me this doesn't seem like a neutron issue really
15:15:47 I can try to investigate it a bit more
15:16:01 if in Train we are still using 18.04, that means the default binary is python3.6
15:16:04 if I'm not wrong
15:16:12 so that error will be persistent
15:16:27 ralonsoh: yes, it can be
15:16:34 I will investigate it
15:16:36 #action slaweq to check neutron-grenade failures on stable/train: https://bugs.launchpad.net/neutron/+bug/1982720
15:16:39 thanks
15:16:48 thanks
15:16:48 seems https://ff10fb94d3bc910c8640-4a6b1d9d45ac1c9671941502c3743ab2.ssl.cf1.rackcdn.com/850828/1/check/neutron-grenade/1a4f23e/logs/devstack-gate-setup-workspace-old.txt is the actual error?
15:17:25 aha, so is it maybe because stable/stein is EM already?
15:17:40 ah right, could be
15:18:12 Merged openstack/ovn-octavia-provider stable/wallaby: Ensure members without subnet belong to VIP subnet or fail https://review.opendev.org/c/openstack/ovn-octavia-provider/+/850558
15:18:15 no, it's something different
15:18:44 anyway, let's not spend too much time on this today, I will check it and we will see
15:18:52 +1
15:18:54 IMO, we should not spend too much time on grenade on Train
15:19:09 +1
15:19:18 true
15:19:22 +1
15:19:23 ok, let's move on now
15:19:25 #topic Stadium projects
15:19:31 lajoskatona: anything new?
15:20:13 only bagpipe is failing, as I remember from this morning; I have to check it in detail
15:20:40 it's possible that only the periodic job failed
15:21:36 that's it for the stadiums
15:21:52 ok, thx
15:21:59 #topic Grafana
15:22:06 #link https://grafana.opendev.org/d/f913631585/neutron-failure-rate
15:22:42 there is not much there, as in the check queue we have pretty high numbers on some jobs
15:22:52 like e.g. multinode ovs jobs - issue related to os-vif
15:23:15 last week we also had an issue with pyroute2 which blocked our gate
15:23:27 yeah, both libraries
15:24:21 with both of those issues, I don't have much to say about rechecks either
15:24:32 as we didn't merge too many patches recently
15:24:51 Regarding bare rechecks:
15:24:52 +---------+---------------+--------------+-------------------+... (full message at https://matrix.org/_matrix/media/r0/download/matrix.org/TKTpTbrxBQEuJgvvIDGanBWX)
15:24:57 here are the stats
15:25:10 is it lower than last week?
15:25:16 not much
15:25:22 but I need to update this script finally
15:25:33 ok, thanks for taking care of it
15:25:38 as it is counting all rechecks from patches updated in the last X days
15:25:48 even rechecks which are older
15:26:06 and that's all from me for the rechecks and grafana
15:26:09 any questions/comments?
15:26:20 no thanks
15:27:08 if no, I think we can move on
15:27:13 to the last topic for today
15:27:17 #topic Periodic
15:27:22 here there is one new issue
15:27:27 openstack-tox-py39-with-oslo-master has been failing every day since 21.07.2022
15:27:36 I reported bug https://bugs.launchpad.net/neutron/+bug/1982818
15:27:45 and a failure example is https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9d6/periodic/opendev.org/openstack/neutron/master/openstack-tox-py39-with-oslo-master/9d672a3/testr_results.html
15:27:55 anyone want to check it maybe?
15:27:59 if not, I can take a look
15:28:01 yeah
15:28:07 maybe related to the new DB
15:28:09 I can check it
15:28:14 thx ralonsoh
15:28:30 #action ralonsoh to check failing openstack-tox-py39-with-oslo-master periodic job
15:28:41 I added a comment to the bug,
15:28:59 seems it's the last oslo.db release
15:29:13 ahhh yes
15:29:15 thx lajoskatona
15:30:01 The patch "a530cbf Remove the 'Session.autocommit' parameter" was the guilty one in my env
15:30:19 from oslo.db, but I had no more time to see what it really is
15:32:18 does anyone have any other topic related to neutron CI to discuss today?
15:32:30 if not, I will give You some time back today :)
15:33:05 nothing from me
15:33:14 fine for me
15:33:14 nothing from me
15:33:25 ok, thx for attending the meeting then
15:33:28 and see You all online
15:33:30 o/
15:33:31 bye
15:33:32 #endmeeting
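
Editor's note on the recheck-stats issue raised at 15:25:38: the script counts every recheck on any patch updated in the last X days, including rechecks that are themselves older than the window. A hypothetical sketch of the described fix follows; the function name and the (timestamp, message) data shape are illustrative, not the real script's API:

    from datetime import datetime, timedelta, timezone

    def count_recent_rechecks(comments, days=7):
        """Count only 'recheck' comments posted within the window.

        comments: iterable of (timestamp, message) pairs, e.g. parsed
        from Gerrit review comments (illustrative shape only).
        """
        cutoff = datetime.now(timezone.utc) - timedelta(days=days)
        return sum(
            1
            for ts, message in comments
            if ts >= cutoff and message.strip().lower().startswith("recheck")
        )

The key change is filtering on each recheck comment's own timestamp rather than on the patch's last-updated time.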
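Editor's note on the oslo.db change discussed at 15:30:01: SQLAlchemy 2.0 drops the legacy Session autocommit mode, which is why oslo.db removed the 'Session.autocommit' parameter; code that implicitly relied on autocommit semantics fails once the parameter is gone. A minimal sketch of the before/after pattern (not neutron's actual code):

    from sqlalchemy import create_engine, text
    from sqlalchemy.orm import sessionmaker

    engine = create_engine("sqlite://")

    # Legacy pattern, removed in SQLAlchemy 2.0:
    #   Session = sessionmaker(bind=engine, autocommit=True)
    Session = sessionmaker(bind=engine)

    with Session() as session:
        with session.begin():  # explicit transaction; commits on success
            session.execute(text("SELECT 1"))

Callers now have to open transactions explicitly (or rely on the session's default begin-on-use behavior) instead of getting an implicit commit after each statement.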