14:00:23 #startmeeting networking
14:00:23 Meeting started Tue Mar 18 14:00:23 2025 UTC and is due to finish in 60 minutes. The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:23 The meeting name has been set to 'networking'
14:00:26 Ping list: bcafarel, elvira, frickler, mlavalle, mtomaska, obondarev, slaweq, tobias-urdin, ykarel, lajoskatona, jlibosva, averdagu, haleyb, ralonsoh
14:00:28 o/
14:00:33 o/
14:00:34 o/
14:00:35 hello
14:00:35 o/
14:00:37 \o
14:01:01 o/
14:01:28 #announcements
14:01:32 o/
14:01:39 We are currently in week R-2 of Epoxy
14:01:59 RC1 was tagged and the stable/2025.1 branch created
14:02:26 so anything critical should be cherry-picked to get into RC2
14:02:44 i know there is the IPv6 metadata one
14:03:15 the dhcp eventlet removal
14:03:23 https://review.opendev.org/c/openstack/neutron/+/944826/2
14:03:28 Final RC deadline: March 28th, 2025 (R-1 week)
14:04:09 ralonsoh: oh, the revert
14:04:29 yes, both patches
14:04:34 must be in 2025.1 too
14:04:45 +1
14:05:07 glad we figured that out
14:05:40 after half of openstack collapsed, but in time anyway :-)
14:06:04 https://review.opendev.org/c/openstack/neutron/+/944634/2 is the parent change, got it
14:06:31 Final 2025.1 Epoxy release: April 2nd, 2025
14:07:20 So please ping people here if you need reviews on critical gate fixes, and/or get them on the priority board
14:08:00 Project Teams Gathering (virtual): April 7th-11th, 2025
14:08:06 #link https://ptg.openinfra.dev/ to sign up
14:08:23 #link https://etherpad.opendev.org/p/apr2025-ptg-neutron
14:08:49 #link https://ptg.opendev.org/ptg.html for room assignments
14:09:34 I added two slots on Tuesday, 4 on Wednesday and Thursday
14:09:41 did not want to conflict with eventlet
14:10:22 please add topics to the etherpad, i think we should just push all RFE discussions until then
14:11:16 that was all i had for announcements today, any
others?
14:11:29 thanks for PTG management :-)
14:12:02 that's what you pay me for, oh wait... :)
14:12:17 o/ late
14:12:29 o/
14:12:36 #topic bugs
14:12:45 mlavalle was deputy last week, report is at
14:12:49 \o
14:12:51 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/UA5Q7RUKIIWHZLYIEERYHB4T5NQANEPP/
14:14:02 so the high ones have been addressed
14:14:14 and the dhcp one is in flight
14:15:09 #link https://bugs.launchpad.net/neutron/+bug/2101871
14:15:10 Use of revision_number for router update on 2023.2 Bobcat cause error 500
14:15:29 that was an interesting one, still waiting for the commands to reproduce
14:15:33 we are waiting for a response from the submitter
14:16:36 and this one is incomplete as well
14:16:39 #link https://bugs.launchpad.net/neutron/+bug/2102044
14:16:46 Nova-API problem - cannot list instances anymore
14:16:58 might be config, i'm assuming it's not packages
14:18:01 but do we have the Neutron API exception?
14:18:12 ralonsoh: no he never pasted it
14:18:23 ok, we need it, for sure
14:18:56 yes. i'm watching as i want to make sure it's not a distro problem since it was ubuntu, but don't think it is
14:19:47 i did start seeing random batch_notifier unit test failures, one was in the metadata patch
14:19:50 #link https://bugs.launchpad.net/neutron/+bug/2100806
14:19:58 saw them again i meant to say
14:20:14 ralonsoh: can you see my comment in https://review.opendev.org/c/openstack/neutron/+/943212 ? maybe we can discuss after the meeting
14:20:24 sure
14:20:31 Candido Campos Rivas proposed x/whitebox-neutron-tempest-plugin master: Increase burst window size to avoid unstabilities in slow envs https://review.opendev.org/c/x/whitebox-neutron-tempest-plugin/+/944616
14:21:14 any other bugs to discuss?
14:21:28 yes, two
14:21:33 sure
14:21:50 https://bugs.launchpad.net/neutron/+bug/2102090
14:22:03 I initially set it as low and no RFE
14:22:15 because that could be a bug, not an RFE
14:22:44 the problem: with the eventlet module, all workers (API, maintenance, rpc, periodic) were executed in the same process
14:23:12 now this is not happening, but the OVN placement initial config is in the maintenance worker
14:23:32 but anyone would expect that, if you restart the Neutron API, that will be executed
14:23:44 so the question is: should we move this to the Neutron API code?
14:24:08 now, if we want to re-execute this initial config, we need to restart the maintenance process
14:25:18 ralonsoh: it seems it was almost an unintended change and should be fixed
14:25:48 perfect, I had the same idea
14:25:53 this is why I opened the bug
14:26:17 anyone else is free to comment on the LP bug, if needed
14:26:25 the second one:
14:26:31 this is the top patch: https://review.opendev.org/c/openstack/neutron/+/944053
14:26:37 it needs to be backported
14:26:53 but the first patch (the singleton one) could be intrusive
14:27:13 I implemented that for the tests, but I can re-implement the tests for stable branches without it
14:27:41 so please review these patches and I'll backport them (2nd and 3rd patch) without the 1st one
14:27:56 that's all
14:28:02 about https://review.opendev.org/c/openstack/neutron/+/944763 / https://launchpad.net/bugs/2103417
14:28:19 a) it is blocking kolla and we have no feasible workaround
14:28:35 b) I wonder how that issue could have been detected within neutron
14:29:29 we use metadata in the L3 agent
14:29:43 I don't know if we have any test using the dhcp one
14:30:29 well the issue in kolla isn't that metadata is broken, but that the dhcp agent is failing and thus instances don't get a dhcp response
14:30:33 the patch needs a respin, but looks fine.
Could be merged today and backported
14:30:43 I'm not sure why that doesn't happen in a devstack setup
14:31:25 we would need to check all our tempest tests
14:31:31 the changes to the unit test would trigger it from what i remember from an early PS
14:31:39 the dhcp failure is not related to the eventlet change?
14:31:45 or that is a parallel one?
14:31:48 not this one
14:31:52 ack
14:31:56 it's from my metadata ipv6 change
14:32:12 i fixed one bug but caused another
14:32:25 that's life :-)
14:32:55 haleyb, will you push the new PS?
14:33:01 the dragon has endless new heads, and we had them just before release this time it seems
14:33:10 ralonsoh: sure i can do that right after the meeting if i don't see an update
14:33:22 I'm not sure whether running a (n-v) kolla job on neutron changes would be an option, or do you think that would be too much?
14:34:16 you can have your own neutron-master CI job
14:34:27 with a daily execution
14:34:42 well we consume neutron from master anyway, so that would be just our normal jobs
14:35:34 the idea would be to recognize issues before they are merged into master
14:36:05 yes but this is expensive, just when we are removing jobs
14:36:18 when was the last time Neutron introduced an error in kolla this big?
14:36:46 also, as you said, you have your own jobs and we responded quickly to this problem
14:37:24 like we have master oslo or main pyroute2 jobs to have a quick alert
14:38:10 frickler: we had such cross-project non-voting jobs in the past
14:38:30 but in order to save some infra resources we moved those jobs to the periodic queue
14:38:53 so maybe the kolla-neutron job can be in the periodic queue too?
14:39:14 and we are checking those periodic jobs weekly
14:39:21 so IMO it is a good tradeoff
14:39:49 +1 for periodic weekly, and let's check them in the weekly CI meeting
14:41:10 I think we have those already, but querying zuul is a bit slow for me right now
14:41:51 anyway, fine for me for now
14:42:49 ok, we can discuss in the CI meeting to see if there is any work here to help
14:43:08 that was all the bugs i think
14:43:26 this week jlibosva is the bug deputy, but i don't see him
14:43:35 I'll ping him later
14:43:41 can someone ping him on the slack channel, thanks
14:43:59 next week is obondarev, is that good with you?
14:44:08 yep
14:44:15 thanks!
14:44:29 #topic community goals
14:45:25 lajoskatona: i don't see any neutronclient changes in-flight, so that's good :)
14:45:34 is it something to continue next cycle?
14:46:18 we can come back to that
14:46:27 yes, all open things merged, I have to go and find the next api topic to change to sdk
14:46:45 lajoskatona: ack, thanks
14:47:15 ralonsoh: eventlet removal is next, i'll let you give status
14:47:24 yes, in bullet points
14:47:28 SRIOV: merged
14:47:43 DHCP: reverted due to missing backend in oslo.service
14:47:58 MacVTap: I need to check, but same ^^ problem
14:48:13 L3: no activity (I had bug fixes to push)
14:48:16 OVS: https://review.opendev.org/q/topic:%22bug/2087939%22+status:open
14:48:20 that's all
14:48:30 yes we said macvtap also uses oslo.service, so we have to wait on it
14:48:44 yes
14:49:40 ralonsoh: in your opinion, should we merge any/all of the ovs changes and get them into 2025.1? i did have some comments out to sahid on one of the changes at least
14:50:07 #link https://review.opendev.org/c/openstack/neutron/+/940983 - i need to look again
14:50:23 no, we don't need these changes in stable
14:50:55 ack, that's what i thought, plus i don't want to cause breakage
14:51:01 right
14:51:42 ok, anything else for community topics?
14:52:00 Pierre Riteau proposed openstack/neutron master: Spawn IPv6 metadata proxy only when required https://review.opendev.org/c/openstack/neutron/+/944763
14:52:05 #topic on-demand
14:52:20 we already discussed the OVN placement bug
14:52:53 ah yes, I added it to this list
14:53:24 ralonsoh: regarding the DB bug you fixed (i can't find the patch) - is that something we need to backport?
14:53:41 which one?
14:53:51 https://review.opendev.org/c/openstack/neutron/+/944803
14:54:24 would that affect a db migration?
14:54:44 pffff to be honest, I don't know why this was not affecting the migration
14:55:09 I'll push stable-only patches for the affected ones
14:55:17 that should be 2025.1 and 2024.2
14:55:30 and 2023.1
14:55:41 ack, thanks
14:56:11 * haleyb was thinking no one uses openstack? :-p
14:56:34 s/newer openstack
14:56:51 but then i'm not a comedian
14:57:03 anyways, any other topics to discuss?
14:57:05 that should be affecting the migration in the CI
14:57:16 but as I said, I have no idea why it is not...
14:57:25 i don't either
14:58:58 ok, thanks for attending everyone, and ping here if there are critical things for RC2
14:59:01 #endmeeting