15:00:12 <slaweq> #startmeeting neutron_ci
15:00:12 <opendevmeet> Meeting started Tue Aug  1 15:00:12 2023 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:12 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:12 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:00:24 <slaweq> ping bcafarel, lajoskatona, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
15:00:28 <lajoskatona> o/
15:00:30 <mtomaska> o/
15:00:35 <slaweq> Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:00:35 <slaweq> Please open now :)
15:00:39 <bcafarel> o/
15:01:34 <slaweq> ralonsoh and mlavalle will not be there this week
15:01:47 <slaweq> I hope ykarel will join soon
15:01:54 <slaweq> but I think we can start
15:01:58 <slaweq> #topic Actions from previous meetings
15:02:26 <slaweq> the first one was on ralonsoh, and I will assign it to him for next week just so we don't forget
15:02:36 <slaweq> #action ralonsoh to check failed neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_keepalived_multiple_sighups_does_not_forfeit_primary test
15:02:43 <ykarel> o/
15:02:49 <slaweq> and the second one is:
15:02:54 <slaweq> mtomaska will check fullstack timeout while waiting for fake server to be alive
15:03:19 <mtomaska> yes, I did look into this, but honestly there was nothing in the logs that would help
15:04:02 <mtomaska> the test log shows retries of "get network", but none of those requests reach the neutron server based on the neutron logs. After 60 seconds everything times out
15:04:34 <slaweq> hmm, interesting
15:04:59 <lajoskatona> something comes to mind with similar logs, but that time it was with ports perhaps
15:05:04 <slaweq> did you check the neutron-server log in the directory of the specific test which failed?
15:05:23 <mtomaska> yes, downloaded logs from the "controller"
15:06:12 <mtomaska> the only logs that exist for the failing test are the neutron-server log and the test log itself
15:06:26 <slaweq> ok, let's see if that happens more often
15:06:48 <slaweq> but can you also give me a link to the logs which you checked? Maybe I will take a look at them too
15:06:54 <mtomaska> also checked system log but nothing interesting around that timestamp
15:06:59 <mtomaska> sure
15:07:12 <slaweq> thx for checking this mtomaska
15:07:23 <slaweq> next topic then
15:07:31 <slaweq> #topic Stable branches
15:07:38 <slaweq> bcafarel any updates?
15:07:52 <bcafarel> low activity this week on backports (probably because of the time of year!)
15:08:01 <bcafarel> all looked good
15:08:16 <slaweq> thx
15:08:54 <slaweq> #topic Stadium projects
15:09:05 <slaweq> networking-bagpipe still broken due to new sqlalchemy
15:09:08 <slaweq> everything else seems to be fine
15:09:11 <lajoskatona> bagpipe is failing with new sqlalchemy
15:09:17 <lajoskatona> yes, bagpipe
15:09:41 <lajoskatona> I checked yesterday, but to tell the truth I need more time, as the failure seems weird
15:09:55 <slaweq> ok
15:10:12 <lajoskatona> I guess it is some transaction guard, as the issue appears but not always in the same test(s)
15:10:13 <slaweq> probably ralonsoh will be able to help you when he comes back :)
15:10:21 <lajoskatona> yeah in worst case
15:10:38 <lajoskatona> otherwise I sent out the mail for bgpvpn/bagpipe victoria: https://lists.openstack.org/pipermail/openstack-discuss/2023-July/034527.html
15:11:23 <slaweq> ++ for EOLing it
15:11:36 <lajoskatona> elodilles highlighted for me that we still have the ussuri branch for them, but when ralonsoh is back we can discuss it with him too
15:11:40 <lajoskatona> agree
15:12:30 <lajoskatona> one more thing, but that is more general, so I think when more people are back from summer vacation I will bring it to drivers or to the team meeting
15:13:19 <lajoskatona> there are patches to add OVN support for some of the stadium projects (bgpvpn, vpnaas, and taas), and some common view would perhaps be good to help these efforts and the coming ones
15:13:53 <lajoskatona> I think some basic guidelines would be enough, but I am only just learning how to create a plugin for the OVN driver
15:14:04 <lajoskatona> that's it for stadiums
15:14:21 <slaweq> ok, thx a lot for all updates
15:15:03 <slaweq> #topic Grafana
15:15:40 <slaweq> #link https://grafana.opendev.org/d/f913631585/neutron-failure-rate
15:16:11 <slaweq> it looks ok'ish to me this week
15:16:55 <slaweq> anything else you want to add there?
15:17:00 <slaweq> or can we move on?
15:17:13 <bcafarel> looks good enough to me
15:17:20 <bcafarel> nothing too scary :)
15:17:25 <slaweq> next topic then
15:17:29 <slaweq> #topic Rechecks
15:18:11 <slaweq> rechecks don't look good in general, but it's not just a neutron problem
15:18:26 <slaweq> gate is very unstable generally
15:18:50 <lajoskatona> is there some common pattern or issue?
15:18:59 <slaweq> I don't think there is a lot to add there
15:19:13 <slaweq> lajoskatona: a lot of issues due to timeouts and slow nodes, basically
15:19:32 <slaweq> I think that this is the biggest issue currently
15:19:39 <lajoskatona> ok
15:20:42 <slaweq> I think we can move on to the specific issues
15:20:47 <slaweq> #topic Tempest/Scenario
15:20:55 <slaweq> https://bugs.launchpad.net/neutron/+bug/2008062 - this is hitting tempest-full jobs (again), so it impacts other projects too
15:21:30 <slaweq> and there seems to be a fix for it in tempest https://review.opendev.org/c/openstack/tempest/+/889713
15:21:36 <slaweq> thx ykarel
15:22:20 <ykarel> I checked opensearch and don't see the same failures since the fix merged
15:22:41 <ykarel> I noticed one failure in that test, but it was a different issue
15:23:06 <slaweq> great to hear that
15:24:23 <slaweq> that's all the issues from the check/gate queues which I have today
15:24:42 <slaweq> there are not many patches proposed recently TBH
15:24:51 <slaweq> probably because of the summer time
15:24:55 <slaweq> #topic Periodic
15:25:10 <slaweq> Centos jobs have been broken for around 3 days:
15:25:10 <slaweq> https://zuul.openstack.org/build/e5086a4f2cea43adad4f2487db7da53b
15:25:10 <slaweq> https://zuul.openstack.org/build/e89ae8e46b0b49bfbddb2039e8c88d26
15:25:26 <slaweq> any volunteer to report an LP bug and fix it?
15:25:48 <ykarel> i can check
15:25:58 <slaweq> thx ykarel
15:26:13 <slaweq> #action ykarel to check broken centos periodic jobs
15:27:00 <slaweq> and another issue which has been happening for a few days is in the neutron-functional-with-sqlalchemy-master job:
15:27:00 <slaweq> https://zuul.openstack.org/build/89f5e465495a4f0ab4674dced711fe15
15:27:16 <slaweq> does anyone want to report and check that one maybe?
15:27:23 <mtomaska> I can do it
15:27:41 <slaweq> thx mtomaska
15:28:29 <slaweq> #action mtomaska to check failing neutron-functional-with-sqlalchemy-master periodic job
15:28:42 <slaweq> that was the last thing I had for today
15:28:51 <slaweq> #topic on demand
15:29:01 <slaweq> anything else you want to discuss today?
15:29:17 <slaweq> if not, I will give you back about 30 minutes (again) :)
15:29:28 <lajoskatona> nothing from me
15:30:24 <slaweq> ok, so let's close the meeting for today
15:30:33 <slaweq> have a great week :)
15:30:37 <slaweq> #endmeeting