15:00:12 #startmeeting neutron_ci
15:00:12 Meeting started Tue Aug 1 15:00:12 2023 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:12 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:12 The meeting name has been set to 'neutron_ci'
15:00:24 ping bcafarel, lajoskatona, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
15:00:28 o/
15:00:30 o/
15:00:35 Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:00:35 Please open now :)
15:00:39 o/
15:01:34 ralonsoh and mlavalle will not be there this week
15:01:47 I hope ykarel will join soon
15:01:54 but I think we can start
15:01:58 #topic Actions from previous meetings
15:02:26 first one was on ralonsoh and I will assign it to him for next week just to not forget
15:02:36 #action ralonsoh to check failed neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_keepalived_multiple_sighups_does_not_forfeit_primary test
15:02:43 o/
15:02:49 and the second one is:
15:02:54 mtomaska will check fullstack timeout while waiting for fake server to be alive
15:03:19 yes. I did look into this but honestly, there was nothing in the logs... that would help
15:04:02 the test logs show retrying to "get network" but none of those requests are reaching the neutron server based on the neutron logs. After 60 seconds everything times out
15:04:34 hmm, interesting
15:04:59 Something comes to mind with similar logs, but that time it was with ports perhaps
15:05:04 did You check the neutron-server log in the directory of that specific test which failed?
15:05:23 yes, downloaded logs from the "controller"
15:06:12 the only logs that exist for the failing test are the neutron-server log and the test log itself
15:06:26 ok, let's see if that will happen more often
15:06:48 but can You also give me a link to the logs which You checked? I will maybe take a look at them too
15:06:54 also checked the system log but nothing interesting around that timestamp
15:06:59 sure
15:07:12 thx for checking this mtomaska
15:07:23 next topic then
15:07:31 #topic Stable branches
15:07:38 bcafarel any updates?
15:07:52 low activity this week on backports (probably because of the time of year!)
15:08:01 all looked good
15:08:16 thx
15:08:54 #topic Stadium projects
15:09:05 networking-bagpipe still broken due to new sqlalchemy
15:09:08 everything else seems to be fine
15:09:11 bagpipe is failing with new sqlalchemy
15:09:17 yes, bagpipe
15:09:41 I checked yesterday, but to tell the truth I need more time, as the failure seems weird
15:09:55 ok
15:10:12 I guess it is some transaction guard, as the issue appears but not always in the same test(s)
15:10:13 probably ralonsoh will be able to help You when he comes back :)
15:10:21 yeah, in the worst case
15:10:38 otherwise I sent out the mail for bgpvpn/bagpipe victoria: https://lists.openstack.org/pipermail/openstack-discuss/2023-July/034527.html
15:11:23 ++ for EOLing it
15:11:36 elodilles highlighted for me that we still have the ussuri branch for them, but when ralonsoh is back we can discuss it with him also
15:11:40 agree
15:12:30 one more thing, but that is more general, so I think when more people are back from summer vacation I will bring it to drivers or to the team meeting
15:13:19 there are patches to have OVN support for some of the stadiums (bgpvpn, vpnaas, and taas) and some common view would be good perhaps to help these efforts, and the coming ones
15:13:53 I think some basic guidelines would be enough, but I am just now learning how to create a plugin for the OVN driver
15:14:04 that's it for stadiums
15:14:21 ok, thx a lot for all updates
15:15:03 #topic Grafana
15:15:40 #link https://grafana.opendev.org/d/f913631585/neutron-failure-rate
15:16:11 it looks ok'ish to me this week
15:16:55 anything else You want to add there?
15:17:00 or can we move on?
15:17:13 looks good enough to me
15:17:20 nothing too scary :)
15:17:25 next topic then
15:17:29 #topic Rechecks
15:18:11 rechecks don't look good in general but it's not just a neutron problem
15:18:26 the gate is very unstable generally
15:18:50 is there some common pattern, issue?
15:18:59 I don't think there is a lot to add there
15:19:13 lajoskatona a lot of issues due to timeouts and slow nodes basically
15:19:32 I think that this is the biggest issue currently
15:19:39 ok
15:20:42 I think we can move on to the specific issues
15:20:47 #topic Tempest/Scenario
15:20:55 https://bugs.launchpad.net/neutron/+bug/2008062 - this is hitting (again) tempest-full jobs so it impacts other projects too
15:21:30 and there seems to be a fix for it in tempest https://review.opendev.org/c/openstack/tempest/+/889713
15:21:36 thx ykarel
15:22:20 i checked opensearch and don't see the same failures since the fix merged
15:22:41 noticed one failure in that test but it was a different issue
15:23:06 great to hear that
15:24:23 that's all the issues from the check/gate queues which I have today
15:24:42 there are not many patches proposed recently TBH
15:24:51 probably because of the summer time
15:24:55 #topic Periodic
15:25:10 Centos jobs broken for around 3 days:
15:25:10 https://zuul.openstack.org/build/e5086a4f2cea43adad4f2487db7da53b
15:25:10 https://zuul.openstack.org/build/e89ae8e46b0b49bfbddb2039e8c88d26
15:25:26 any volunteer to report an LP bug and fix it?
15:25:48 i can check
15:25:58 thx ykarel
15:26:13 #action ykarel to check broken centos periodic jobs
15:27:00 and another issue which has been happening for a few days is in the neutron-functional-with-sqlalchemy-master job:
15:27:00 https://zuul.openstack.org/build/89f5e465495a4f0ab4674dced711fe15
15:27:16 anyone want to report and check that one maybe?
15:27:23 I can do it
15:27:41 thx mtomaska
15:28:29 #action mtomaska to check failing neutron-functional-with-sqlalchemy-master periodic job
15:28:42 that was the last thing I had for today
15:28:51 #topic on demand
15:29:01 anything else You want to discuss today?
15:29:17 if not I will give You back about 30 minutes (again) :)
15:29:28 nothing from me
15:30:24 ok, so let's close the meeting for today
15:30:33 have a great week :)
15:30:37 #endmeeting