15:00:58 <slaweq> #startmeeting neutron_ci
15:00:58 <opendevmeet> Meeting started Tue Aug 23 15:00:58 2022 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:58 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:58 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:01:00 <mlavalle> o/
15:01:01 <lajoskatona> o/
15:01:11 <slaweq> o/
15:01:16 <slaweq> Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:01:39 <slaweq> ralonsoh: ykarel ping
15:01:43 <slaweq> bcafarel:
15:01:56 <ralonsoh> sorry, hi
15:02:02 <ykarel> hi
15:02:17 <bcafarel> o/ sorry
15:02:22 <slaweq> no problem :)
15:02:29 <slaweq> I think we can start
15:02:32 <slaweq> #topic Actions from previous meetings
15:02:41 <slaweq> slaweq to fix functional/fullstack failures on centos 9 stream: https://bugs.launchpad.net/neutron/+bug/1976323
15:02:48 <slaweq> I got back to it today
15:03:10 <slaweq> and I still have no idea why 2 functional tests related to the deletion of conntrack entries are failing
15:03:50 <slaweq> on centos 9 stream there is libnetfilter_conntrack-1.0.8-4.el9.x86_64 and with this version tests are failing
15:04:17 <slaweq> but when I downgraded it to libnetfilter_conntrack-1.0.6-5.el8.x86_64 tests are passing
15:04:47 <ralonsoh> any error in syslog?
15:04:53 <ralonsoh> or in neutron
15:05:12 <slaweq> from what I found, with version 1.0.8 we got -1 in https://github.com/openstack/neutron/blob/master/neutron/privileged/agent/linux/netlink_lib.py#L200
15:06:01 <slaweq> ralonsoh: no, no other errors
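For reference, neutron's netlink_lib.py drives libnetfilter_conntrack through ctypes, so a standalone sketch like the one below can show whether a basic conntrack query already returns -1 with the installed library version. This is only a diagnostic sketch, not Neutron's exact code path: the constants are copied from the libnetfilter_conntrack headers, the soname lookup is an assumption, and it needs root (or CAP_NET_ADMIN) to run.

    # Minimal check: open a conntrack handle and run a table dump query,
    # printing errno if the call returns -1 (the value seen in netlink_lib.py).
    import ctypes
    import ctypes.util
    import socket

    CONNTRACK = 1         # NFNL_SUBSYS_CTNETLINK
    NFCT_Q_DUMP = 5       # enum nf_conntrack_query
    NFCT_T_ALL = 7        # NFCT_T_NEW | NFCT_T_UPDATE | NFCT_T_DESTROY
    NFCT_CB_CONTINUE = 1

    libname = ctypes.util.find_library('netfilter_conntrack')
    if not libname:
        raise SystemExit('libnetfilter_conntrack not found')
    nfct = ctypes.CDLL(libname, use_errno=True)

    CALLBACK = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int,
                                ctypes.c_void_p, ctypes.c_void_p)
    nfct.nfct_open.restype = ctypes.c_void_p
    nfct.nfct_open.argtypes = [ctypes.c_uint8, ctypes.c_uint]
    nfct.nfct_close.argtypes = [ctypes.c_void_p]
    nfct.nfct_callback_register.argtypes = [ctypes.c_void_p, ctypes.c_int,
                                            CALLBACK, ctypes.c_void_p]
    nfct.nfct_query.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_void_p]

    # No-op callback: we only care about the return code of the query itself.
    noop_cb = CALLBACK(lambda msg_type, ct, data: NFCT_CB_CONTINUE)

    handle = nfct.nfct_open(CONNTRACK, 0)
    if not handle:
        raise SystemExit('nfct_open failed, errno=%d' % ctypes.get_errno())
    try:
        nfct.nfct_callback_register(handle, NFCT_T_ALL, noop_cb, None)
        family = ctypes.c_uint8(socket.AF_INET)
        ret = nfct.nfct_query(handle, NFCT_Q_DUMP, ctypes.byref(family))
        if ret == -1:
            print('nfct_query returned -1, errno=%d' % ctypes.get_errno())
        else:
            print('nfct_query succeeded with %s' % libname)
    finally:
        nfct.nfct_close(handle)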
15:06:19 <ralonsoh> slaweq, I'll ping you later
15:06:32 <slaweq> ralonsoh: thx, we can talk about it offline tomorrow morning
15:07:05 <slaweq> in the meantime I will assign the same action to myself for this week
15:07:10 <slaweq> #action slaweq to fix functional/fullstack failures on centos 9 stream: https://bugs.launchpad.net/neutron/+bug/1976323
15:07:14 <slaweq> ok, next one
15:07:27 <slaweq> ralonsoh to check route not found failure in functional job https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9a3/852723/2/check/neutron-functional-with-uwsgi/9a30d07/testr_results.html
15:07:37 <ralonsoh> yeah, still fighting with this one, sorry
15:07:44 <slaweq> no problem
15:07:55 <slaweq> do You want me to assign it to You for this week?
15:08:23 <ralonsoh> sure
15:08:36 <slaweq> #action ralonsoh to check route not found failure in functional job https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9a3/852723/2/check/neutron-functional-with-uwsgi/9a30d07/testr_results.html
15:08:38 <slaweq> thx
15:08:46 <slaweq> ok, next one
15:08:47 <slaweq> slaweq to check failing neutron-ovn-tempest-ovs-master-centos-9-stream periodic job
15:08:57 <slaweq> I also got to this today
15:09:19 <slaweq> for now I proposed devstack patch https://review.opendev.org/c/openstack/devstack/+/854190
15:09:40 <slaweq> and I'm waiting for results of that job with this change
15:09:55 <slaweq> I think it will move forward a bit, but other changes will probably be needed as well
15:10:04 <ralonsoh> when did that start to happen?
15:10:32 <slaweq> ralonsoh: I don't really know
15:10:49 <slaweq> I will also assign it to me for this week and will continue work on fixes
15:10:57 <slaweq> #action slaweq to check failing neutron-ovn-tempest-ovs-master-centos-9-stream periodic job
15:11:42 <slaweq> and that was last Action Item for today
15:11:44 <slaweq> #topic Stable branches
15:11:57 <slaweq> lajoskatona: bcafarel: anything new with stable branches?
15:12:17 <lajoskatona> From me only what I mentioned on the team meeting
15:12:17 <bcafarel> not a lot on my side, backports on >=train are merging fine
15:13:04 <lajoskatona> I checked the stein failures and as far as I can see we can skip grenade, as it is experimental from the QA team's perspective on these branches
15:13:49 <bcafarel> +1 and backports in these branches should not have upgrade impacts
15:13:58 <lajoskatona> and the linuxbridge tempest job is failing; I know what the problem is but can't figure out what to put in the zuul config to get the right result (disable q-log or disable neutron-log)
15:14:19 <slaweq> lajoskatona: You can try to disable both :)
15:14:22 <bcafarel> lajoskatona: use uppercase to yell at zuul? :)
15:14:31 <lajoskatona> :-)
15:14:56 <lajoskatona> ok, I will push a non-WIP patch for it
15:15:13 <slaweq> lajoskatona: please send me the link to the patch, I will take a look :)
15:15:48 <lajoskatona> slaweq: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/853794 & https://review.opendev.org/c/openstack/neutron/+/853608
15:16:28 <slaweq> ok, I will look at it tomorrow
15:17:34 <slaweq> I think we can move on
15:17:40 <lajoskatona> +1
15:17:51 <bcafarel> yep (and thanks lajoskatona for working on these)
15:18:32 <slaweq> #topic Stadium projects
15:18:44 <slaweq> any updates regarding stadium ?
15:18:44 <lajoskatona> I have one small thing for the weekly jobs
15:19:31 <lajoskatona> the bgpvpn job is failing, but as far as I can see only because the required-projects list is empty
15:19:50 <lajoskatona> no sorry it is bagpipe: https://review.opendev.org/c/openstack/networking-bagpipe/+/854003
15:21:07 <lajoskatona> otherwise I don't know of any issues for the stadium projects
15:22:51 <slaweq> thx for update lajoskatona
15:22:58 <slaweq> I just approved that patch
15:23:13 <slaweq> #topic Grafana
15:23:46 <slaweq> in grafana things look pretty good
15:24:01 <slaweq> except that today many jobs are hitting very high failure rates
15:24:33 <slaweq> but there are also very few runs of those jobs, so maybe it was just one or 2 failed runs and that's why the percentages are high
15:24:59 <mlavalle> yeah, I think so
15:25:08 <slaweq> because when I was checking recent patches, I didn't notice any major issue with our gates today
15:26:34 <slaweq> I think we can move on
15:26:38 <slaweq> to the next topic
15:26:42 <slaweq> #topic Rechecks
15:27:05 <slaweq> +---------+----------+... (full message at https://matrix.org/_matrix/media/r0/download/matrix.org/VBdFagQCTXlesqxdQgkdHVzg)
15:27:19 <slaweq> the average number of rechecks needed to get a patch merged is still pretty good
15:27:44 <slaweq> and regarding bare rechecks in last 7 days:
15:27:45 <slaweq> +---------+---------------+--------------+-------------------+... (full message at https://matrix.org/_matrix/media/r0/download/matrix.org/fYoqUJzwSUDwuRESBqExPdYb)
15:27:53 <slaweq> it is also much better
15:27:57 <slaweq> thank You all :)
15:28:16 <lajoskatona> \o/
15:28:52 <slaweq> anything You want to add there?
15:28:56 <mlavalle> Cool!
15:29:46 <slaweq> ok, lets move on
15:29:51 <slaweq> #topic Periodic
15:30:11 <slaweq> here I just wanted to say thanks to ralonsoh as openstack-tox-py39-with-oslo-master is fixed now :)
15:30:35 <ralonsoh> cool
15:31:05 <slaweq> other than that, the only broken job is the one on centos 9 stream, but we already talked about it earlier today
15:31:06 <lajoskatona> +1 thanks
15:31:08 <mlavalle> Thanks ralonsoh
15:31:26 <slaweq> and I have nothing more for today's meeting
15:31:44 <slaweq> so if You don't have any other topics related to CI, we can finish the meeting earlier
15:31:45 <bcafarel> I like the trend of "not a lot to say on CI" meetings
15:31:51 <slaweq> :)
15:32:17 <lajoskatona> :-)
15:32:39 <slaweq> ok, so let's have a few minutes back :)
15:32:46 <ykarel> :)
15:32:47 <slaweq> thx for attending the meeting
15:32:50 <mlavalle> \o/
15:32:51 <slaweq> and see You online
15:32:52 <slaweq> o/
15:32:52 <bcafarel> o/
15:32:53 <ralonsoh> bye!
15:32:55 <slaweq> #endmeeting