15:00:12 <slaweq> #startmeeting neutron_ci
15:00:12 <opendevmeet> Meeting started Tue Jul 26 15:00:12 2022 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:12 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:12 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:00:14 <mlavalle> o/
15:00:16 <slaweq> o/
15:00:53 <lajoskatona> o/
15:01:10 <slaweq> let's wait for the other folks too
15:01:24 <slaweq> ralonsoh: ykarel bcafarel ping, neutron CI meeting
15:01:38 <ralonsoh> hi, sorry
15:02:31 <slaweq> ok, let's start
15:02:36 <slaweq> Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:02:37 <slaweq> Please open now :)
15:02:44 <slaweq> #topic Actions from previous meetings
15:02:50 <ykarel> o/
15:02:53 <slaweq> slaweq to fix functional/fullstack failures on centos 9 stream: https://bugs.launchpad.net/neutron/+bug/1976323
15:03:02 <slaweq> still no progress with that one, but I will get to it :)
15:03:07 <slaweq> #action slaweq to fix functional/fullstack failures on centos 9 stream: https://bugs.launchpad.net/neutron/+bug/1976323
15:03:14 <slaweq> next one
15:03:16 <slaweq> slaweq to move fedora periodic job to centos9 stream
15:03:33 <slaweq> no progress yet, but I just started working on it a few minutes ago
15:03:39 <slaweq> so I hope I will have something soon
15:03:43 <slaweq> #action slaweq to move fedora periodic job to centos9 stream
15:04:02 <slaweq> next one
15:04:05 <slaweq> lajoskatona to open LP bugs related to the stable branches timeouts
15:04:32 <lajoskatona> I think I have, just a sec
15:04:59 <ralonsoh> https://review.opendev.org/q/119b82f1b10df50a696e936e80672c7ddb00436f
15:05:00 <lajoskatona> https://bugs.launchpad.net/neutron/+bug/1982206
15:05:06 <lajoskatona> thanks ralonsoh
15:05:14 <slaweq> ++
15:05:16 <slaweq> thx
15:05:28 <slaweq> next one then
15:05:29 <slaweq> mlavalle to report issue with propose-translation-update job and investigate that issue
15:05:37 <mlavalle> I did investigate
15:05:53 <mlavalle> this job is failing massively for all the projects
15:06:11 <mlavalle> since this merged: https://review.opendev.org/c/zuul/zuul-jobs/+/846248
15:06:52 <mlavalle> so I proposed a revert. Not because I thought necessarily that was the best solution, but to start a discussion: https://review.opendev.org/c/zuul/zuul-jobs/+/850917
15:07:01 <lajoskatona> Revert of revert :-)
15:07:12 <ralonsoh> there is an alternative proposal: https://review.opendev.org/c/openstack/project-config/+/850962
15:07:37 <mlavalle> and indeed I think I got the conversation going: https://review.opendev.org/c/openstack/project-config/+/850962
15:07:48 <mlavalle> so we are getting there
15:08:05 <slaweq> thx mlavalle for taking care of it
15:08:24 <slaweq> with that we checked all action items from last week
15:08:25 <mlavalle> so most likely I will end up abandoning the revert of the revert and hopefully will find a solution with the latest patch from Ian
15:08:35 <slaweq> ++
15:08:41 <ralonsoh> +1
15:08:43 <ykarel> +1
15:08:52 <slaweq> #topic Stable branches
15:08:58 <slaweq> bcafarel: any updates here?
15:08:59 <mlavalle> as I said, the revert of the revert was more a provocation than anything else
15:09:21 <lajoskatona> :-)
15:09:24 <bcafarel> as long as the "revert of the revert of the revert..." line still fits in the screen :)
15:10:06 <lajoskatona> for stable branches please check this for train: https://review.opendev.org/c/openstack/requirements/+/850828
15:10:10 <bcafarel> for stable branches, we have these timeout issues - some in UT, fixed by lajoskatona and to be backported, and others still under investigation as they time out during installation of packages
15:10:32 <bcafarel> https://bugs.launchpad.net/neutron/+bug/1982206 and https://bugs.launchpad.net/neutron/+bug/1982720 (that latter one is breaking train grenade)
15:11:42 <lajoskatona> ahh ok, so we have a bug for the train issue, I forgot that
15:11:43 <slaweq> so grenade timeout happens only on train?
15:13:14 <bcafarel> at least from what I have seen yes
15:13:25 <bcafarel> ~2 hours to get past the "Processing triggers for libc-bin" step apparently :/
15:13:53 <slaweq> https://zuul.opendev.org/t/openstack/build/1a4f23e400a3491b88b161e50878753a/log/job-output.txt#2498
15:14:06 <slaweq> regarding this grenade job, is this really neutron issue?
15:14:57 <lajoskatona> good question
15:15:29 <slaweq> for me this doesn't seem like a neutron issue really
15:15:47 <slaweq> I can try to investigate it a bit more
15:16:01 <ralonsoh> if in Train we are still using Ubuntu 18.04, that means the default python binary is python3.6
15:16:04 <ralonsoh> if I'm not wrong
15:16:12 <ralonsoh> so that error will be persistent
15:16:27 <slaweq> ralonsoh: yes, it can be
15:16:34 <slaweq> I will investigate it
15:16:36 <slaweq> #action slaweq to check neutron-grenade failures on stable/train: https://bugs.launchpad.net/neutron/+bug/1982720
15:16:39 <ralonsoh> thanks
15:16:48 <lajoskatona> thanks
15:16:48 <ykarel> seems https://ff10fb94d3bc910c8640-4a6b1d9d45ac1c9671941502c3743ab2.ssl.cf1.rackcdn.com/850828/1/check/neutron-grenade/1a4f23e/logs/devstack-gate-setup-workspace-old.txt is the actual error?
15:17:25 <slaweq> aha, so is it maybe because stable/stein is EM already?
15:17:40 <ralonsoh> ah right, could be
15:18:12 <opendevreview> Merged openstack/ovn-octavia-provider stable/wallaby: Ensure members without subnet belong to VIP subnet or fail  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/850558
15:18:15 <slaweq> no, it's something different
15:18:44 <slaweq> anyway, let's not spend too much time on this today, I will check it and we will see
15:18:52 <ykarel> +1
15:18:54 <ralonsoh> IMO, we should not spend too much time on grenade on Train
15:19:09 <lajoskatona> +1
15:19:18 <slaweq> true
15:19:22 <ykarel> +1
15:19:23 <slaweq> ok, let's move on now
15:19:25 <slaweq> #topic Stadium projects
15:19:31 <slaweq> lajoskatona: anything new?
15:20:13 <lajoskatona> only bagpipe is failing as I remember from this morning, I have to check it in detail
15:20:40 <lajoskatona> it's possible that only the periodic job failed
15:21:36 <lajoskatona> that's it for the stadiums
15:21:52 <slaweq> ok, thx
15:21:59 <slaweq> #topic Grafana
15:22:06 <slaweq> #link https://grafana.opendev.org/d/f913631585/neutron-failure-rate
15:22:42 <slaweq> there is not much to say here, as in the check queue we have pretty high failure rates on some jobs
15:22:52 <slaweq> e.g. the multinode ovs jobs - an issue related to os-vif
15:23:15 <slaweq> last week we also had an issue with pyroute2 which blocked our gate
15:23:27 <ralonsoh> yeah, both libraries
15:24:21 <slaweq> with both of those issues, I don't have much to say about rechecks either
15:24:32 <slaweq> as we didn't merge too many patches recently
15:24:51 <slaweq> Regarding bare rechecks:
15:24:52 <slaweq> +---------+---------------+--------------+-------------------+... (full message at https://matrix.org/_matrix/media/r0/download/matrix.org/TKTpTbrxBQEuJgvvIDGanBWX)
15:24:57 <slaweq> here are the stats
15:25:10 <lajoskatona> is it lower than last week?
15:25:16 <slaweq> not much
15:25:22 <slaweq> but I need to update this script finally
15:25:33 <lajoskatona> ok, thanks for taking care of it
15:25:38 <slaweq> as it is counting all rechecks from patches updated in the last X days
15:25:48 <slaweq> even rechecks which are older
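(A minimal sketch of the intended fix, assuming the script works from a list of recheck-comment timestamps; the names and structure here are hypothetical and not the actual rechecks script:)

    from datetime import datetime, timedelta, timezone

    def count_recent_rechecks(recheck_timestamps, days=7):
        # Count only the recheck comments that themselves happened in the
        # last `days` days, instead of every recheck ever left on a patch
        # that was merely updated in that window.
        cutoff = datetime.now(timezone.utc) - timedelta(days=days)
        return sum(1 for ts in recheck_timestamps if ts >= cutoff)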
15:26:06 <slaweq> and that's all from me for the rechecks and grafana
15:26:09 <slaweq> any questions/comments?
15:26:20 <ralonsoh> no thanks
15:27:08 <slaweq> if no, I think we can move on
15:27:13 <slaweq> to the last topic for today
15:27:17 <slaweq> #topic Periodic
15:27:22 <slaweq> here there is one new issue
15:27:27 <slaweq> openstack-tox-py39-with-oslo-master has been failing every day since 21.07.2022
15:27:36 <slaweq> I reported bug https://bugs.launchpad.net/neutron/+bug/1982818
15:27:45 <slaweq> and failure example is https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9d6/periodic/opendev.org/openstack/neutron/master/openstack-tox-py39-with-oslo-master/9d672a3/testr_results.html
15:27:55 <slaweq> does anyone want to check it maybe?
15:27:59 <slaweq> if not I can take a look
15:28:01 <ralonsoh> yeah
15:28:07 <ralonsoh> maybe related to new DB
15:28:09 <ralonsoh> I can check it
15:28:14 <slaweq> thx ralonsoh
15:28:30 <slaweq> #action ralonsoh to check failing openstack-tox-py39-with-oslo-master periodic job
15:28:41 <lajoskatona> I added a comment to the bug,
15:28:59 <lajoskatona> it seems to be the last oslo.db release
15:29:13 <ralonsoh> ahhh yes
15:29:15 <slaweq> thx lajoskatona
15:30:01 <lajoskatona> The patch "a530cbf Remove the 'Session.autocommit' parameter" was the culprit in my env
15:30:19 <lajoskatona> from oslo.db, but I had no more time to see what the real issue is
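(For reference, a minimal sketch of the pattern that replaces the removed autocommit mode, assuming plain SQLAlchemy >= 1.4 with an in-memory SQLite engine; illustrative only, not the actual neutron/oslo.db code:)

    from sqlalchemy import create_engine, text
    from sqlalchemy.orm import sessionmaker

    engine = create_engine("sqlite://")
    Session = sessionmaker(bind=engine)  # note: no autocommit=True anymore

    with Session() as session:
        with session.begin():  # explicit transaction replaces the old autocommit mode
            session.execute(text("CREATE TABLE t (id INTEGER PRIMARY KEY)"))
            session.execute(text("INSERT INTO t (id) VALUES (1)"))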
15:32:18 <slaweq> anyone have any other topic related to neutron CI to discuss today?
15:32:30 <slaweq> if not, I will give You some time back today :)
15:33:05 <lajoskatona> nothing from me
15:33:14 <ralonsoh> fine for me
15:33:14 <mlavalle> nothing from me
15:33:25 <slaweq> ok, thx for attending the meeting then
15:33:28 <slaweq> and see You all online
15:33:30 <slaweq> o/
15:33:31 <ralonsoh> bye
15:33:32 <slaweq> #endmeeting