15:00:14 <slaweq> #startmeeting neutron_ci
15:00:14 <opendevmeet> Meeting started Tue Apr 11 15:00:14 2023 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:14 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:14 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:00:20 <ralonsoh> hello
15:00:20 <slaweq> ping bcafarel, lajoskatona, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
15:00:23 <mtomaska> o/
15:00:28 <slaweq> CI meeting is starting :)
15:00:35 <slaweq> o/
15:00:41 <slaweq> Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:00:41 <slaweq> Please open now :)
15:00:45 <mlavalle2> o/
15:01:29 <lajoskatona> o/
15:02:00 <ykarel> o/
15:02:05 <slaweq> ok, lets start
15:02:11 <slaweq> #topic Actions from previous meetings
15:02:20 <slaweq> ralonsoh to move neutron.tests.functional.plugins.ml2.drivers.macvtap.agent.test_macvtap_neutron_agent.MacvtapAgentTestCase.test_get_all_devices to be run in "serial" stage
15:02:48 <ralonsoh> and something else, I'll find the patches
15:03:00 <ralonsoh> please, continue, I'll post them here
15:03:22 <slaweq> ok
15:03:25 <slaweq> so next one
15:03:31 <slaweq> mtomaska to fix waiting for SB events in ovn metadata functional tests
15:03:45 <mtomaska> yes, please see linked patch
15:03:51 <mtomaska> https://review.opendev.org/c/openstack/neutron/+/878549
15:04:03 <slaweq> thx mtomaska I will review it soon
15:04:23 <slaweq> and last one:
15:04:24 <slaweq> slaweq to check dhcp issue in fullstack tests
15:04:31 <slaweq> I didn't have time for it TBH
15:05:04 <ralonsoh> slaweq, sorry
15:05:06 <ralonsoh> --> https://review.opendev.org/c/openstack/neutron/+/879029
15:05:12 <ralonsoh> (merged)
15:05:17 <slaweq> so I will assign it to myself for next week
15:05:17 <slaweq> #action slaweq to check dhcp issue in fullstack tests
15:05:19 <slaweq> next topic
15:05:20 <slaweq> #topic Stable branches
15:05:23 <slaweq> anything to discuss here? bcafarel is not available today but maybe someone else has some topics?
15:05:33 <slaweq> (thx ralonsoh for the link)
15:05:39 <ralonsoh> just the problems with grenade
15:05:43 <ralonsoh> reported in two bugs
15:05:50 <ralonsoh> (discussed before)
15:06:06 <mlavalle2> bcafarel is on pto
15:06:31 <slaweq> yeah, this issue with grenade is, I think, the most common issue currently in our gate
15:06:39 <ykarel> https://review.opendev.org/q/topic:drop-master-jobs
15:06:54 <ykarel> noticed master jobs were running in stable, pushed ^ to clean up
15:06:54 <ralonsoh> ^^ good ones
15:07:13 <lajoskatona> +1
15:07:26 <slaweq> thx ykarel
15:07:35 <slaweq> I just approved ovn-octavia-provider patch
15:07:41 <ykarel> thx
15:07:41 <slaweq> which was the only one not approved yet
15:08:16 <slaweq> so, next topic
15:08:19 <slaweq> #topic Stadium projects
15:08:43 <slaweq> the sfc and dynamic-routing jobs with sqlalchemy master are still failing
15:08:48 <lajoskatona> I have a few patches
15:08:53 <lajoskatona> https://review.opendev.org/q/topic:bug/2004265+status:open
15:08:55 <slaweq> but this is under control IIRC
15:09:11 <lajoskatona> https://review.opendev.org/q/topic:oslo_master_periodic+status:open
15:09:23 <ralonsoh> lajoskatona, thanks!
15:09:31 <lajoskatona> yes, currently bagpipe is the one which fails with sqlalchemy2
15:09:49 <lajoskatona> https://review.opendev.org/c/openstack/networking-bagpipe/+/879463
15:10:06 <slaweq> thx lajoskatona for those patches
15:10:11 <slaweq> I will review them
15:10:23 <lajoskatona> this patch is to remove subtransactions=True, but there are still some tests failing
15:10:23 <ralonsoh> btw, related to sqlalchemy
15:10:32 <ralonsoh> there is an important patch merged recently
15:10:38 <ralonsoh> for -master jobs
15:10:39 <ralonsoh> https://review.opendev.org/c/openstack/oslo.db/+/875986
15:10:52 <ralonsoh> ^^ we need to add oslo.db to these jobs too
15:11:26 <slaweq> don't we have oslo.db from master in those jobs already?
15:11:31 <ykarel> +1
15:11:41 <ralonsoh> I don't know if in networking-* projects
15:11:45 <ralonsoh> for sure in Neutron
15:11:49 <slaweq> ahh, ok
15:11:56 <slaweq> in stadium we may need to check it
15:11:58 <ralonsoh> I'll check it today and I'll push the patches
15:12:03 <slaweq> thx
15:12:29 <slaweq> ralonsoh to check if oslo.db from master branch is used in stadium "sqlalchemy-master" CI jobs
15:12:45 <lajoskatona> those patches are mostly to add it, together with neutron master and neutron-lib master
15:13:11 <ralonsoh> perfect
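For context on how these "-master" jobs consume in-development dependencies: they list the relevant source repositories in Zuul's required-projects, so the job installs them from source rather than from a release. A minimal sketch, assuming a hypothetical stadium job (the job name and exact project list below are illustrative, not ralonsoh's actual patch):

    # hypothetical stadium job definition; name and project list are illustrative
    - job:
        name: networking-example-functional-with-sqlalchemy-master
        parent: neutron-functional-with-sqlalchemy-master
        required-projects:
          - name: github.com/sqlalchemy/sqlalchemy
            override-checkout: main
          - openstack/oslo.db      # the addition discussed above
          - openstack/neutron
          - openstack/neutron-lib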
15:13:35 <lajoskatona> that's all for this topic from me
15:13:41 <slaweq> thx
15:13:47 <slaweq> I think we can move on
15:13:50 <slaweq> #topic Grafana
15:14:23 <slaweq> in grafana things look good IMO in the last few days
15:14:42 <slaweq> so I don't really have anything else to say here
15:14:49 <slaweq> do You have anything?
15:14:52 <mlavalle2> slow activity also
15:15:24 <mlavalle2> looks good, yes
15:16:19 <slaweq> so, next topic (we are fast today :))
15:16:20 <slaweq> #topic Rechecks
15:16:21 <slaweq> here numbers aren't that good:
15:16:22 <slaweq> +---------+----------+... (full message at <https://matrix.org/_matrix/media/v3/download/matrix.org/stXolUNBHkNjTjoQoOCtwMZH>)
15:16:24 <slaweq> we have a lot of rechecks on average
15:16:43 <slaweq> but in many cases these were older rechecks, from e.g. 2-3 weeks ago
15:17:00 <slaweq> or e.g. the grenade issue which we already discussed
15:17:22 <slaweq> I didn't get data about bare rechecks for this week
15:17:25 <ralonsoh> right, most of them are related to the FTs issues (solved) and grenade
15:17:36 <slaweq> so that's all from my side there
15:17:42 <slaweq> ralonsoh exactly
15:18:06 <slaweq> anything else to add here or can we move on?
15:19:02 <slaweq> ok, lets move on then
15:19:07 <slaweq> #topic fullstack/functional
15:19:21 <slaweq> actually here I have only one thing related to fullstack tests
15:19:40 <slaweq> ralonsoh added "to talk about fullstack timeouts, what I see in the logs is that in the failed jobs, the (passed) tests take much more time than in a healthy job. So we can just increase the timeout or reduce the number of tests." to the agenda
15:20:00 <ralonsoh> yeah, this is about the gate queue
15:20:06 <ralonsoh> where we execute more fullstack tests
15:20:35 <ralonsoh> where the tests don't fail but last longer than the timeout
15:20:38 <slaweq> https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack-with-uwsgi&branch=master&result=TIMED_OUT&skip=0
15:20:46 <slaweq> last timeout there is from 30.03
15:20:55 <slaweq> and last one in gate queue is 08.03
15:21:00 <slaweq> so is it still an issue?
15:21:16 <ralonsoh> TBH, I added this topic a couple of weeks ago
15:21:20 <slaweq> btw. we are executing the same number of tests in both check and gate queue in this job AFAIK
15:21:27 <ralonsoh> so +2 to this magic solver
15:21:31 <slaweq> or at least we should have the same tests running
15:21:48 <ralonsoh> hmmm I need to check that, I think we execute more tests
15:22:38 <slaweq> it's the same job in check: https://github.com/openstack/neutron/blob/master/zuul.d/project.yaml#L22 and gate https://github.com/openstack/neutron/blob/master/zuul.d/project.yaml#L72
15:22:40 <ralonsoh> dsvm-fullstack-gate is different to dsvm-fullstack
15:22:56 <slaweq> ahh, right but it's not used in ci gate
15:23:11 <ykarel> Pass 129 Skip 3 in both check and gate
15:23:14 <slaweq> it should be more like "dsvm-fullstack-ci" rather than "dsvm-fullstack-gate"
15:23:20 <slaweq> the name may be misleading
15:23:25 <ralonsoh> perfect
15:23:30 <ralonsoh> understood
15:23:35 <slaweq> but in the CI job it's always the same tox env that is used
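For reference, the "just increase the timeout" option raised above is a one-line Zuul job attribute; a minimal sketch, with the value purely illustrative and the job's actual current timeout left unverified:

    # illustrative only: Zuul job timeouts are expressed in seconds
    - job:
        name: neutron-fullstack-with-uwsgi
        parent: neutron-fullstack
        timeout: 10800  # e.g. 3 hours; an example value, not a concrete proposal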
15:24:26 <slaweq> I don't have any new functional tests issues this week
15:24:29 <slaweq> which is great news :)
15:24:34 <mlavalle2> +1
15:24:44 <slaweq> I had prepared one issue from the grenade job, but it was already discussed
15:25:13 <slaweq> and lajoskatona IIRC, You are going to check it, right?
15:25:44 <lajoskatona> exactly
15:25:57 <slaweq> thx
15:26:21 <slaweq> #action lajoskatona to check neutron-ovs-grenade-dvr-multinode job's failures
15:26:39 <slaweq> so lets now go to the last topic for today
15:26:41 <slaweq> #topic Periodic
15:26:51 <slaweq> here I have couple of issues noted
15:27:10 <slaweq> but most of them are already fixed by ykarel :)
15:27:14 <slaweq> first one:
15:27:21 <slaweq> neutron-ovn-grenade-multinode-skip-level failing
15:27:36 <slaweq> I reported https://bugs.launchpad.net/neutron/+bug/2015850 but it's dup of https://bugs.launchpad.net/bugs/2015364
15:27:51 <slaweq> and it's assigned to ykarel
15:28:19 <ykarel> yes i will check it
15:29:12 <slaweq> next are failures in neutron-ovn-tempest-ovs-master-centos-9-stream and neutron-ovn-tempest-ipv6-only-ovs-master
15:29:17 <slaweq> both caused by the same issue, and ykarel already proposed fixes https://review.opendev.org/q/topic:bug%252F2015728
15:29:20 <slaweq> and last one on my list is
15:29:21 <slaweq> neutron-functional-with-sqlalchemy-master
15:29:21 <slaweq> bug reported https://bugs.launchpad.net/neutron/+bug/2015847
15:29:47 <slaweq> and this one seems like some misconfiguration and problems with access to the db
15:29:53 <ralonsoh> ^^ fixed by new oslo.db release
15:29:57 <ralonsoh> I'll take it and check
15:30:04 <slaweq> ahh, great then
15:30:14 <ykarel> may be worth trying the alembic main version, like https://review.opendev.org/c/openstack/oslo.db/+/879546
15:30:20 <slaweq> so we should be good with those periodic jobs then
15:30:42 <ykarel> oslo.db master is used in those jobs, so current issues are not yet fixed
15:30:43 <ralonsoh> ykarel, exactly
15:30:52 <ralonsoh> I'll propose this patch too
15:31:05 <slaweq> ok
15:31:06 <ykarel> ack +1
15:31:22 <slaweq> and with that we got to the end of the agenda for today
15:31:35 <slaweq> any other topics related to ci You want to discuss today?
15:32:20 <slaweq> if not, I will give You back some time today
15:32:26 <slaweq> thx for attending the meeting
15:32:30 <slaweq> and have a great week :)
15:32:37 <slaweq> o/
15:32:39 <slaweq> #endmeeting