15:01:13 <ykarel> #startmeeting neutron_ci
15:01:13 <opendevmeet> Meeting started Tue Jul 16 15:01:13 2024 UTC and is due to finish in 60 minutes.  The chair is ykarel. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:13 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:13 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:01:19 <ykarel> Ping list: bcafarel, lajoskatona, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
15:01:35 <lajoskatona> o/
15:01:54 <mlavalle> \o
15:02:05 <slaweq> o/
15:03:12 <haleyb> o/
15:04:32 <ykarel> hello everyone, let's start with topics
15:04:38 <ykarel> #topic Actions from previous meetings
15:04:45 <ykarel> lajoskatona to check timeout increase for test_configurations_are_synced_towards_placement
15:05:03 <lajoskatona> I checked it quickly, but to tell the truth just quickly
15:06:00 <lajoskatona> I hadn't seen another occurrence (last week was when I checked), and the fake placement was the source of the issue (timeout in response)
15:06:25 <lajoskatona> so I would suggest just following the issue to see if it becomes more frequent
15:07:32 <ykarel> lajoskatona, ack. thx for checking
15:07:45 <ykarel> ykarel to check lp 2072567
15:08:00 <ykarel> rodolfo picked it up and pushed some test patches
15:08:19 <ykarel> once we have the ci issue cleared we can recheck this one
15:08:30 <ykarel> mlavalle to check failures in test_router_interface_status
15:08:37 <mlavalle> I did check it
15:09:15 <mlavalle> the test case creates a router and then adds an interface to it, which fails to become active. The port stays down and the test case times out
15:09:43 <mlavalle> I only saw one occurrence of this issue, but the job, neutron-tempest-plugin-openvswitch-enforce-scope-old-defaults, fails frequently
15:10:05 <mlavalle> I'm looking into whether there are any relationships with other failures in the same job
15:10:11 <mlavalle> I'll keep you posted
15:11:28 <ykarel> mlavalle, the current failures are due to the gate issue pointed out above
15:11:55 <mlavalle> ok, i'll still check the situation once the ci problem is fixed
15:11:58 <ykarel> test_router_interface_status happened last week, so I think the current ones should be different, but ok, we can check after the gate fixes
15:12:03 <ykarel> thx mlavalle
15:12:20 <ykarel> slaweq to report bug for designate failures
15:12:33 <ykarel> slaweq reported https://bugs.launchpad.net/designate/+bug/2072627
15:12:57 <johnsom> The revert to address that has merged
15:13:04 <slaweq> yes, and I think that the problematic oslo.log change is now reverted
15:13:06 <slaweq> or revert is in the gate maybe
15:13:06 <johnsom> https://review.opendev.org/c/openstack/oslo.log/+/923961
15:13:11 <ykarel> thx johnsom so we also had that released?
15:13:19 <slaweq> thx johnsom
15:13:27 <johnsom> I am not sure if it released yet
15:13:50 <johnsom> I don't think so
15:14:16 <ykarel> hmm no release patch yet https://review.opendev.org/q/file:deliverables/dalmatian/oslo.log.yaml
15:14:51 <ykarel> would be good to get that up too
15:15:45 <slaweq> once it is released we may make the designate job voting again
15:15:58 <johnsom> +1
15:16:00 <ykarel> +1 anyone pushing up that release patch?
15:16:35 <johnsom> https://review.opendev.org/c/openstack/releases/+/924235
15:16:43 <tkajinam> are you talking about https://review.opendev.org/c/openstack/releases/+/924235 ?
15:16:44 <tkajinam> yeah
15:16:59 <ykarel> yeah that was too quick :)
15:17:04 <ykarel> thanks tkajinam
15:17:12 <tkajinam> we also downgraded oslo.log in u-c so I guess jobs are all green if they follow u-c https://review.opendev.org/c/openstack/requirements/+/924053
15:17:50 <ykarel> k then jobs should be already green
15:18:08 <johnsom> tkajinam +1
15:18:15 <slaweq> I indeed saw today our designate job green on one or two patches but I thought it may just be luck
15:18:17 <slaweq> :)
15:18:38 <slaweq> thx tkajinam for lowering it in u-c
15:18:59 <tkajinam> ;-)
15:19:36 <ykarel> k let's move to next
15:19:38 <ykarel> #topic Stable branches
15:19:48 <ykarel> all good, except the periodic postgres job broken by a recent change https://bugs.launchpad.net/neutron/+bug/2072567
15:19:58 <ykarel> bcafarel, anything to add?
15:20:20 <bcafarel> thanks ykarel, I was behind on my checks - from what I checked all good
15:20:45 <ykarel> thx bcafarel
15:20:54 <ykarel> #topic Stadium projects
15:21:04 <ykarel> sfc still broken https://bugs.launchpad.net/neutron/+bug/2068727
15:21:09 <lajoskatona> all green except sfc, and I had no time to check that
15:22:14 <lajoskatona> perhaps next week or end of this week I will have some time to check again sfc, but not sure about it
15:22:25 <ykarel> lajoskatona, so https://review.opendev.org/c/openstack/networking-sfc/+/921514 attempt to fix it, right?
15:22:56 <lajoskatona> yes, but that is not enough
15:23:08 <lajoskatona> or perhaps I just jumped on a red herring
15:23:15 <ykarel> k thx for the updates
15:23:29 <ykarel> #topic Rechecks
15:23:45 <ykarel> we have couple of rechecks due to recent gate failures
15:23:58 <ykarel> bare recheck wise it was quite good 0/21 rechecks
15:24:11 <ykarel> so let's keep checking ci failures before any rechecks
15:24:22 <ykarel> #topic Tempest/Scenario
15:24:36 <ykarel> for this we have two patches up
15:24:38 <ykarel> - https://review.opendev.org/c/openstack/neutron/+/924213
15:24:38 <ykarel> - https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/924124
15:24:56 <ykarel> still in ci, let's see how that goes
15:25:34 <ykarel> if these are green, let's get these in
15:26:25 <ykarel> that's it for failures; periodics are also hitting these so they are fully red
15:26:36 <lajoskatona> 🤞
15:26:47 <ykarel> #topic Grafana
15:26:54 <ykarel> https://grafana.opendev.org/d/f913631585/neutron-failure-rate
15:27:01 <ykarel> let's have a quick look here too
15:28:09 <ykarel> looks quite slow for me
15:29:36 <lajoskatona> :-)
15:29:58 <slaweq> our neutron-tempest-plugin jobs seem to be very unstable but it was already mentioned above that there are proposed fixes for it
15:30:02 <lajoskatona> except tempest, which you mentioned, things look quite normal as I see it
15:30:06 <ykarel> k took some time, apart from the ovs failures we already checked above, looks good
15:30:34 <ykarel> #topic On Demand
15:30:43 <ykarel> anything you would like to raise here?
15:30:49 <mlavalle> not from me
15:30:57 <slaweq> nothing from me
15:31:38 <lajoskatona> nothing from me
15:32:08 <ykarel> k thx everyone for joining
15:32:13 <ykarel> #endmeeting