14:02:05 <ykarel> #startmeeting neutron_ci
14:02:05 <opendevmeet> Meeting started Mon Oct  7 14:02:05 2024 UTC and is due to finish in 60 minutes.  The chair is ykarel. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:05 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:05 <opendevmeet> The meeting name has been set to 'neutron_ci'
14:02:13 <ykarel> Ping list: bcafarel, lajoskatona, slawek, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
14:02:18 <ralonsoh> hello!
14:02:22 <ralonsoh> IRC today, right?
14:03:09 <ykarel> yeap
14:03:12 <slaweq> hi
14:06:17 <ykarel> k i think we can start, low on attendance today
14:06:27 <ykarel> #topic Actions from previous meetings
14:06:37 <ykarel> ralonsoh to check failure with fips job in 2023.1 and zed
14:06:41 <ralonsoh> #link https://review.opendev.org/q/topic:%22bug/2083482%22
14:07:20 <ykarel> thx ralonsoh
14:07:29 <ykarel> slaweq to check UT failure 2081868
14:07:37 <slaweq> I did patch https://review.opendev.org/c/openstack/neutron/+/931036
14:08:07 <slaweq> hopefully if the same issue happens next time, it will tell us more about the real error returned from the server
14:08:40 <ykarel> k merged already, thx slaweq
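[A minimal sketch of the debugging approach slaweq describes: make the test surface the real server error instead of a bare status mismatch. The names here (FakeResponse, assert_status) are illustrative only and are not taken from the actual patch 931036.]

```python
# Illustrative sketch: when an API call in a unit test fails with an
# unexpected status, include the response body in the assertion message
# so the next flaky failure reveals the real server-side error.
class FakeResponse:
    # Hypothetical stand-in for a client response object.
    def __init__(self, status_code, text=''):
        self.status_code = status_code
        self.text = text


def assert_status(resp, expected=200):
    if resp.status_code != expected:
        raise AssertionError(
            'expected HTTP %d, got %d; server said: %r'
            % (expected, resp.status_code, resp.text))
```

The point of the design is that the failure message carries the server's own error text, which is exactly the information that was missing from the earlier reports of bug 2081868.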
14:08:52 <ykarel> ykarel checking with #rdo for 2024.2 centos 9-stream failures
14:09:05 <ykarel> a WIP patch https://review.opendev.org/c/openstack/devstack/+/930913 is there
14:09:53 <ykarel> also RDO is about to be released in the coming days, that should also resolve this
14:09:54 <ykarel> #topic Stable branches
14:10:07 <ykarel> not much activity on stable this week
14:10:33 <ykarel> there were failures in periodic jobs but those are known ones
14:10:48 <ykarel> #topic Stadium projects
14:10:54 <ykarel> all green in periodics
14:11:23 <ykarel> lajoskatona seems not around; not sure if we have any other update for stadiums
14:11:54 <ykarel> #topic Rechecks
14:12:29 <ykarel> not bad, we had 6/38 bare rechecks
14:12:39 <ykarel> 2 of these were mine in a test patch :(
14:12:43 <ralonsoh> hehehe
14:13:03 <ykarel> #topic fullstack/functional
14:13:18 <ykarel> fullstack MismatchError: 2 != 1: More than one active HA routers https://bugs.launchpad.net/neutron/+bug/2083609
14:13:28 <ykarel> we have seen a couple of failures for this
14:13:54 <slaweq> I can try to take a look if I will have some time
14:14:01 <ralonsoh> yeah, I'll try to check it this week
14:14:06 <ykarel> ralonsoh do you recall this? i recall you mentioned you saw that in some of your test patches
14:14:18 <ralonsoh> yes, in the WSGI one
14:14:37 <ralonsoh> (hmmm maybe no..., I need to check that)
14:14:44 <ralonsoh> I'll debug it this week
14:14:54 <ykarel> ok so whom should i assign this, 2 volunteers :)
14:15:15 <ralonsoh> sorry, I didn't see slaweq comment!
14:15:34 <ralonsoh> my bad (replying in other window in another chat)
14:16:03 <slaweq> please assign it to me as ralonsoh already has a lot of things on his plate
14:16:30 <ykarel> k thx
14:16:32 <ykarel> #action slawek to check https://bugs.launchpad.net/neutron/+bug/2083609
14:16:41 <slaweq> thx
14:16:46 <ykarel> functional Too many open files https://bugs.launchpad.net/neutron/+bug/2080199
14:16:54 <ykarel> we are still hitting this
14:17:13 <slaweq> there was patch from ralonsoh I think to close ovsdb connections in tests, no?
14:17:15 <ykarel> ralonsoh, i commented on your patch, can you confirm that
14:17:27 <slaweq> shouldn't it help maybe for this?
14:17:46 <ralonsoh> yes but this is not enough
14:17:56 <ralonsoh> it reduces the number of files, but not all
14:18:02 <ralonsoh> we should increase the limit
14:18:06 <ykarel> k thx for confirming, as i saw multiple failures in test patches
14:18:54 <ykarel> ok so should we revise https://review.opendev.org/c/openstack/neutron/+/928759 and get that in?
14:19:05 <ykarel> i commented what worked for increasing ulimit persistently
14:20:05 <ralonsoh> cool
14:20:29 <ykarel> k then will update that patch
14:20:39 <ralonsoh> rebase on top of master
14:20:41 <ralonsoh> to have mine too
14:20:45 <ykarel> ++
14:21:01 <ykarel> #topic Tempest/Scenario
14:21:10 <ykarel> neutron api fail while creating resources devstack setup https://bugs.launchpad.net/neutron/+bug/2083570
14:21:18 <ykarel> tempest failure deleting router port https://bugs.launchpad.net/neutron/+bug/2083287
14:21:40 <ykarel> these are also seen multiple times, and the timing seems related to the wsgi switch patches
14:21:51 <ykarel> but would need to be confirmed
14:21:56 <ykarel> seen only in master as of now
14:22:51 <ralonsoh> we have migrated only the OVS jobs
14:23:18 <ralonsoh> the OVN patch is still in gerrit (and every time I recheck there is a new failure, not only in master)
14:23:19 <lajoskatona> o/
14:23:41 <ykarel> mmm but i have seen some ovn parent jobs modified
14:23:49 <ykarel> https://review.opendev.org/c/openstack/neutron/+/924317
14:24:21 <ralonsoh> right, in Neutron, but most of the jobs are in the n-t-p patch
14:24:29 <ralonsoh> but yes, we have some OVN jobs
14:24:37 <ykarel> and the above failures are seen only on those modified jobs
14:24:51 <ralonsoh> so that's a big problem...
14:25:04 <ralonsoh> we need wsgi API working fine
14:25:21 <ykarel> yes
14:26:53 <ykarel> k thx, it could be related then to the switch
14:27:16 <ykarel> so we could revert that for time being to unblock gate
14:28:15 <ykarel> ok? or do you see it could be fixed easily?
14:28:30 <ralonsoh> tomorrow morning I'll debug it and ping you
14:28:41 <ralonsoh> I'll take this as high priority
14:28:55 <ykarel> thx ralonsoh
14:29:08 <ykarel> #action ralonsoh to check for issues triggered by wsgi switch
14:29:26 <ykarel> #topic Grafana
14:29:34 <ykarel> https://grafana.opendev.org/d/f913631585/neutron-failure-rate
14:29:40 <ykarel> let's also have quick look here
14:31:55 <lajoskatona> the unit test graph seems quite empty, perhaps because it waits for py311, and now we have py312?
14:32:08 <ykarel> gate queue has few spikes and all those mentioned above, basically fullstack, functional and those api calls
14:33:27 <haleyb> do we need to update the dashboard? jobs have changed a bit recently (and some still in progress)
14:35:13 <ykarel> lajoskatona, haleyb yeap right, the dashboard needs update
14:35:24 <lajoskatona> As I see yes: https://opendev.org/openstack/project-config/src/branch/master/grafana/neutron.yaml
14:36:33 <ykarel> anyone would like to push the update to ^ ?
14:36:49 <haleyb> i can take that
14:36:53 <ykarel> thx haleyb
14:36:57 <lajoskatona> I can change that, but on the dev-list there was a mail to perhaps have more py jobs between py39 and py312
14:37:16 <lajoskatona> so perhaps it will change in the coming weeks :-)
14:37:19 <ykarel> #action haleyb to update grafana dashboard
14:37:46 <ykarel> #topic On Demand
14:37:58 <ykarel> anything you would like to raise now?
14:38:05 <ralonsoh> no thanks
14:38:16 <lajoskatona> nothing from me
14:38:27 <ykarel> k i have a one
14:38:47 <ykarel> it's regarding postgres testing
14:38:57 <ykarel> ironic team planning to drop it https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/MVHR5WZFEDOZA4ESBN5764EVP67GKOS5/
14:39:02 <ralonsoh> I saw it
14:39:19 <ralonsoh> And, to be honest, I support dropping it
14:39:21 <ykarel> so wanted to check if we have some case for this to keep it running?
14:39:42 <ralonsoh> we have many problems related to postgres and, to be honest, I don't know if that really works
14:39:53 <ralonsoh> because I know there are some projects that don't work with this DB
14:40:24 <slaweq> according to https://www.openstack.org/analytics/ it is used by around 5% of users
14:41:00 <slaweq> but IMO we can maybe deprecate/remove support for it too
14:41:08 <ralonsoh> I'll check it but I remember at least 4 issues related with this DB in the last 6 months
14:41:37 <slaweq> maybe this should be an email to the ML to ask other projects about the same, and in general what the community thinks about such a move
14:41:39 <ykarel> yeap multiple issues but all related to neutron changes
14:42:02 <ykarel> also we can have this topic in PTG
14:42:19 <ykarel> maybe for TC?
14:42:29 <ralonsoh> yes, most of the time because postgres is more strict in the sql parsing
14:42:38 <slaweq> but this should probably be a topic for some session with operators, to listen to their feedback about it, no?
14:44:01 <ykarel> yes that would be a better place, need to check where/when that session is
14:44:39 <ralonsoh> agree with creating an operators feedback session
14:44:45 <ralonsoh> haleyb, ^
14:45:03 <lajoskatona> +1
14:45:16 <ralonsoh> should that be created by TC? because this is a generic topic
14:45:20 <ralonsoh> not only Neutron related
14:45:33 <ralonsoh> just to open the audience target
14:46:06 <slaweq> I don't know if there is session like that already scheduled
14:46:18 <slaweq> but if there will be, we should add this topic there
14:46:27 <haleyb> adding to etherpad
14:46:30 <ralonsoh> perfect!
14:46:40 <ykarel> thx haleyb
14:47:06 <ykarel> i think that's it for today, thx everyone for joining
14:47:18 <ykarel> #endmeeting