15:01:31 <ykarel> #startmeeting neutron_ci
15:01:31 <opendevmeet> Meeting started Tue Jun  4 15:01:31 2024 UTC and is due to finish in 60 minutes.  The chair is ykarel. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:31 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:31 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:01:41 <ykarel> Ping list: bcafarel, lajoskatona, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
15:01:44 <mlavalle> o/
15:01:58 <haleyb> o/
15:02:08 <mlavalle> only irc today, right?
15:02:49 <bcafarel> o/
15:03:32 <ykarel> yeap
15:04:05 <lajoskatona> o/
15:04:52 <ykarel> hello everyone, let's start with the topics from the previous week
15:04:55 <ykarel> #topic Actions from previous meetings
15:05:04 <ykarel> lajoskatona to look at failures at sfc and bagpipe
15:06:06 <lajoskatona> I checked, but to tell the truth I have to check again since I totally forgot what I did with it :-(
15:06:12 <ykarel> lajoskatona reported https://bugs.launchpad.net/neutron/+bug/2067452 for bagpipe
15:06:19 <ykarel> at least i recall ^ :)
15:06:39 <lajoskatona> ykarel: you saved me
15:07:07 <ykarel> these are still failing but can discuss in next section
15:07:08 <lajoskatona> I reported an issue for exabgp, and they promised that it will be fixed in the next exabgp release
15:07:20 <ykarel> thx much
15:07:25 <ykarel> next one on me
15:07:27 <ykarel> ykarel to report lp for volume tests
15:07:45 <ykarel> i reported https://bugs.launchpad.net/openstacksdk/+bug/2067869, cinder team looking at that
15:08:22 <ykarel> #topic Stable branches
15:08:35 <ykarel> bcafarel, any update
15:08:57 <bcafarel> mostly good on the backports that went in
15:09:32 <bcafarel> haleyb: semi-related, I found https://review.opendev.org/c/openstack/neutron/+/919516 for 2023.2, which was cherry-picked from the (now merged) 2023.1 change, probably good to push
15:10:05 <haleyb> bcafarel: i'll remove my -W, i think that looks ok
15:10:22 <haleyb> but there is no grenade job
15:10:43 <haleyb> that's where the error was obvious
15:11:05 <ykarel> thx bcafarel for the update
15:11:34 <ykarel> #topic Stadium projects
15:11:35 <slaweq> why is there no grenade job at all there?
15:11:39 <slaweq> is that 'normal'?
15:12:36 <lajoskatona> perhaps it is not executed on non-SLURP? 2023.2 was non-SLURP, am I right?
15:12:56 <slaweq> but 'regular' grenade jobs should still be run there
15:13:07 <lajoskatona> that is true
15:13:14 <slaweq> SLURP means we support and test upgrades from the N-2 release
15:13:24 <slaweq> but a non-SLURP release can still be upgraded from N-1
15:13:45 <ykarel> seems we are just missing the ovn grenade job from check/gate
15:13:57 <ykarel> +1 to what slaweq said
15:14:03 <slaweq> ykarel you can assign this as an AI for me - I will check why this job is not run there
15:14:06 <haleyb> grenade is usually only in check queue
15:14:13 <slaweq> and will propose fix if needed
15:14:36 <ykarel> #action slaweq to include ovn grenade jobs in check queue
15:14:46 <haleyb> for 2023.2
15:14:57 <slaweq> thx
15:15:11 <ykarel> neutron-ovn-grenade-multinode
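For context, a minimal sketch of what slaweq's action item would likely amount to: adding the multinode OVN grenade job to the check queue in neutron's zuul.d/project.yaml on stable/2023.2. This is illustrative, not the actual patch; as haleyb noted above, grenade jobs usually run only in check.

```yaml
# Hypothetical sketch, not the merged change: restore the OVN grenade
# job to the check queue for the stable/2023.2 branch.
- project:
    check:
      jobs:
        - neutron-ovn-grenade-multinode
```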
15:15:57 <ykarel> getting back to stadium
15:16:01 <ykarel> sfc/bagpipe failures
15:16:10 <ykarel> https://zuul.openstack.org/builds?job_name=openstack-tox-py312&project=openstack/networking-bagpipe
15:16:24 <ykarel> for bagpipe we already have https://bugs.launchpad.net/neutron/+bug/2067452
15:16:30 <ykarel> https://zuul.openstack.org/builds?job_name=networking-sfc-tempest&project=openstack%2Fnetworking-sfc&branch=master&skip=0
15:16:39 <ykarel> lajoskatona, can you also check the sfc failures?
15:16:53 <lajoskatona> sure, I will
15:17:25 <ykarel> #action lajoskatona to check sfc failures
15:17:59 <ykarel> #topic Rechecks
15:18:39 <ykarel> there were comparatively more rechecks/bare rechecks last week due to various known issues
15:19:04 <ykarel> sqlalchemy-master, cover jobs, etc
15:19:21 <ykarel> #topic Unit tests
15:19:37 <ykarel> openstack-tox-py311-with-sqlalchemy-master got broken by neutron-lib changes
15:19:44 <ykarel> already fixed with https://review.opendev.org/c/openstack/neutron/+/920897
15:19:51 <ykarel> there is an ask to add a non-voting job in neutron-lib: https://bugs.launchpad.net/neutron/+bug/2067515
15:19:59 <ykarel> i will push that change
15:20:16 <ykarel> #action ykarel to include neutron unit-test job in neutron-lib to catch such issues
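A sketch of what bug 2067515 asks for: a non-voting job in neutron-lib's check queue that runs neutron's unit tests against the neutron-lib change under review, so breakage like the sqlalchemy-master one is caught early. The job name and exact layout below are assumptions for illustration; openstack-tox-py311, required-projects, and zuul_work_dir are standard Zuul/zuul-jobs facilities.

```yaml
# Hypothetical job: run neutron's py311 unit tests with the
# neutron-lib change under review checked out alongside.
- job:
    name: neutron-tox-py311-with-neutron-lib-change  # illustrative name
    parent: openstack-tox-py311
    required-projects:
      - openstack/neutron
    vars:
      # run tox in the neutron checkout instead of neutron-lib's
      zuul_work_dir: src/opendev.org/openstack/neutron

- project:
    check:
      jobs:
        - neutron-tox-py311-with-neutron-lib-change:
            voting: false
```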
15:20:51 <ykarel> For the cover job we already have a workaround applied; lajoskatona is looking for the actual fix
15:21:17 <ykarel> #topic fullstack/functional
15:21:25 <lajoskatona> that is haleyb, or is it me?
15:21:43 <haleyb> that's still me
15:21:48 <lajoskatona> :-)
15:21:55 <lajoskatona> just to be sure :-)
15:22:14 <ykarel> sorry it's haleyb :)
15:22:34 <haleyb> how easy is it to use a different machine type for a job?
15:22:44 <haleyb> or node type or whatever it's called
15:23:00 <haleyb> i.e. moar memory
15:23:20 <haleyb> i'll have to look
15:23:47 <lajoskatona> with a label you can select one, like: label: nested-virt-ubuntu-focal, as I recall
15:23:49 <ykarel> ci nodes are mostly 8GB nodes
15:24:11 <ykarel> some providers provide higher-spec nodes for specific use cases
15:24:55 <haleyb> ykarel: ack, 8gb is sometimes not enough, i'll look for other options
15:25:30 <ykarel> for some cases we enable swap if 8gb is not enough
15:25:40 <lajoskatona> ask on infra channel if that is possible
15:25:41 <haleyb> ykarel: ah, the change you proposed
15:25:59 <ykarel> if it needs real RAM then we need to look for labels with a higher configuration
15:26:12 <haleyb> i was even thinking of splitting the testing up, as long as it's combined in the end
15:27:13 <ykarel> if you are looking at that cover job, using some swap should be enough, as seen in that test patch
15:27:47 <haleyb> ykarel: ack, i'll look closer at your patch, might be the easiest option
15:27:56 <ykarel> ack
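To make the two options in this exchange concrete, a hedged sketch in Zuul YAML: either pin the job to a bigger flavor via a nodeset label (label names vary by provider; nested-virt-ubuntu-focal is just the one lajoskatona recalled), or keep a standard 8GB node and add swap in a pre-run step along the lines of ykarel's test patch. The job names, playbook path, and swap variable here are illustrative assumptions.

```yaml
# Option 1 (assumes such a label is available to the tenant):
# request a node with a specific label for the cover job.
- job:
    name: neutron-cover-big-node  # hypothetical name
    parent: openstack-tox-cover
    nodeset:
      nodes:
        - name: controller
          label: nested-virt-ubuntu-focal  # example label only

# Option 2: stay on the default 8GB node and enable swap before the
# tests run, via a pre-run playbook that sets up a swap file.
- job:
    name: neutron-cover-with-swap  # hypothetical name
    parent: openstack-tox-cover
    pre-run: playbooks/configure-swap.yaml  # assumed playbook path
    vars:
      configure_swap_size: 4096  # MB, illustrative value
```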
15:28:26 <ykarel> #topic fullstack/functional
15:28:35 <ykarel> - https://4329afd232bc8c0f7e08-a692672d2d4a5288ede7d6f3bc2193fe.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-sqlalchemy-master/5e7fbe9/testr_results.html
15:28:35 <ykarel> - https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c48/periodic/opendev.org/openstack/neutron/master/neutron-fullstack-with-uwsgi-fips/c48922c/testr_results.html
15:29:31 <ykarel> seen those once, need to dig further
15:29:40 <ykarel> any volunteer to check those?
15:29:51 <mlavalle> I'll check one
15:29:57 <mlavalle> whichever
15:30:13 <ykarel> k thx, i can check the other one then
15:30:26 <ykarel> #action mlavalle to check failures in neutron-functional
15:30:37 <ykarel> #action ykarel to check failures in neutron-fullstack
15:30:49 <mlavalle> btw, proposed fix for another failure: https://review.opendev.org/c/openstack/neutron/+/921296
15:31:03 <ykarel> thx mlavalle
15:31:06 <mlavalle> I'll address ihrachys' comment and it will be good to go
15:31:22 <ykarel> thx
15:31:34 <ykarel> #topic Periodic
15:31:42 <ykarel> centos stream 8 jobs need to be dropped from unmaintained/yoga as they have been failing since the 8-stream EOL https://blog.centos.org/2023/04/end-dates-are-coming-for-centos-stream-8-and-centos-linux-7/
15:31:47 <ykarel> - https://zuul.openstack.org/build/fe688f48eefd4c55b38456f1c479b56a
15:31:48 <ykarel> - https://zuul.openstack.org/build/d060a6a68352463ea8df596cc954172c
15:32:21 <ykarel> so the jobs need to be dropped from unmaintained/yoga
15:32:29 <ykarel> any volunteer to push that patch?
15:33:57 <ykarel> i can push it then
15:34:12 <ykarel> #action ykarel to push patch to drop centos 8-stream jobs
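Illustrative only: the shape of the cleanup ykarel volunteered for, removing the CentOS 8 Stream jobs from the periodic pipeline in the unmaintained/yoga Zuul config. The job names below follow the usual naming pattern but are not copied from the repo.

```yaml
# Before (hypothetical): periodic pipeline on unmaintained/yoga
- project:
    periodic:
      jobs:
        - neutron-functional
        - neutron-ovn-tempest-ovs-master-centos-8-stream  # failing, 8-stream EOL

# After: the 8-stream job entries are simply deleted
- project:
    periodic:
      jobs:
        - neutron-functional
```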
15:34:20 <ykarel> #topic Grafana
15:34:27 <ykarel> https://grafana.opendev.org/d/f913631585/neutron-failure-rate
15:34:35 <ykarel> let's have quick look at grafana too
15:35:49 <ykarel> there are some spikes in the gate queue, and those are the known issue with nova
15:36:34 <ykarel> in check some are known issues, and the others look patch specific
15:36:38 <ykarel> anything to add?
15:37:23 <mlavalle> not from me
15:37:31 <lajoskatona> nothing from me
15:38:05 <ykarel> ack
15:38:06 <ykarel> #topic On Demand
15:38:17 <ykarel> anything else you would like to raise here?
15:39:08 <mlavalle> nothing from me
15:39:28 <bcafarel> all good
15:40:36 <ykarel> thx everyone for joining, in that case let's close and give everyone 20 minutes back
15:40:38 <ykarel> #endmeeting