13:02:22 <haleyb> #startmeeting networking
13:02:22 <opendevmeet> Meeting started Tue May 27 13:02:22 2025 UTC and is due to finish in 60 minutes.  The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:02:22 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:02:22 <opendevmeet> The meeting name has been set to 'networking'
13:02:24 <haleyb> Ping list: bcafarel, elvira, frickler, mlavalle, mtomaska, obondarev, slaweq, tobias-urdin, ykarel, lajoskatona, jlibosva, averdagu, haleyb, ralonsoh
13:02:26 <mlavalle> \o
13:02:30 <elvira> o/
13:02:38 <ralonsoh> hello
13:02:47 <lajoskatona> o/
13:02:52 <sahid> o/
13:03:10 <haleyb> #announcements
13:03:15 <haleyb> We are currently in Week R-17 of Flamingo
13:03:27 <haleyb> Our next milestone in this development cycle will be Flamingo-2, on July 3rd
13:03:46 <haleyb> Final 2025.2 Flamingo release: October 3rd, 2025
13:03:52 <haleyb> #link https://releases.openstack.org/flamingo/schedule.html
13:04:01 <cbuggy> o/
13:04:01 <haleyb> Reminder: If you have a topic for the drivers meeting on Friday, please add it to the wiki @ https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
13:04:35 <haleyb> Plan is to have a meeting this friday to discuss a recent bgp rfe (and any others)
13:05:36 <haleyb> that was all i had for announcements, any others?
13:06:14 <haleyb> #topic bugs
13:06:30 <haleyb> elvira was the deputy last week, her report is at
13:06:32 <rubasov> o/
13:06:37 <haleyb> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/QOLR2TOQZ7PFTZ7ZR7HPKTB7JWNJ4TVB/
13:07:33 <haleyb> there were quite a few, will go through the unassigned ones
13:09:00 <haleyb> rodolfo has picked up a bunch already i see, but i found one
13:09:03 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2111572
13:09:14 <haleyb> [OVN] OVN ML2 mech driver forces port update to networking generic switch (NGS) on OVS restart
13:09:52 <ralonsoh> I have a question on this one
13:10:04 <ralonsoh> because as elvira mentioned, this is not affecting anything
13:11:34 <frickler> bbezak might be around to answer?
13:11:40 <haleyb> right, is this a bug in NGS?
13:12:03 <ralonsoh> I don't think this is a bug in NGS, as long as it is reacting to the port update
13:12:08 <ralonsoh> we can discuss if that is needed
13:12:24 <ralonsoh> but a vswitchd restart is not part of normal operation
13:12:38 <ralonsoh> (I would like to know what vswitchd service, in what host)
13:12:47 <ralonsoh> compute, most probably
13:13:45 <ralonsoh> anyway, we can ask in the LP bug, I'll do it today (why this restart is needed, if traffic is affected, etc)
13:15:06 <haleyb> ack, thanks
13:16:22 <haleyb> next one
13:16:34 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2111584
13:16:38 <haleyb> Failed get list of ports due to removed network
13:17:43 <haleyb> guess i would have expected some type of error when doing this operation, but we should be more graceful
13:19:08 <elvira> that is what I thought too, maybe it is low prio and not medium indeed
13:19:15 <haleyb> would agree with the low hanging fruit tag, so if someone wants to pick it up...
13:19:48 <ralonsoh> we have recently optimized this code
13:19:57 <ralonsoh> maybe with the optimizations we can skip it
13:20:28 <bbezak> actually restart of ovs_vswitchd on the controllers
13:20:50 <ralonsoh> --> https://review.opendev.org/q/owner:jimin3.shin@samsung.com+status:merged
13:20:58 <bbezak> invokes the port update
13:21:19 <bbezak> (sorry for the delay) - it is about bug 2111572
13:21:32 <ralonsoh> bbezak, one sec, let's finish the current bug
13:21:49 <ralonsoh> I can try creating 1000 ports and doing the same operation
13:22:05 <ralonsoh> I think we'll need some min qos rules (maybe)
13:22:13 <haleyb> ralonsoh: oh, it might not happen now
13:22:30 <ralonsoh> but with the optimizations we no longer retrieve anything if there are no min-bw rules
13:22:50 <ralonsoh> anyway, I'll add it for next friday morning, to deploy a test env
13:22:59 <haleyb> ack
13:23:10 <ralonsoh> bbezak, hi
13:23:11 <haleyb> we can loop back to other bug
13:23:29 <ralonsoh> yes, we know the restart triggers the update
13:23:33 <ralonsoh> why this restart?
13:24:22 <bbezak> well - during some maintenances - container upgrades, host restarts
13:25:05 <ralonsoh> and this update affects the traffic? I guess nothing in this host is running during the restart
13:26:02 <bbezak> it is not affecting traffic on the baremetal host, because NGS applies the same config - but still, at scale - for instance ~500 baremetal hosts - NGS would hammer all the switches, which can be problematic
13:27:07 <ralonsoh> bbezak, if you can identify what event is triggered during the vswitchd restart, maybe we can debug this problem
13:27:59 <ralonsoh> (Neutron event, you can enable DEBUG logs)
13:28:47 <bbezak> yeah, I'll reproduce it and update the bug
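For reference, the debug logging ralonsoh suggests for tracing the Neutron event is a one-line change using the standard oslo.log option (the file path assumes a typical deployment layout):

```ini
# /etc/neutron/neutron.conf
[DEFAULT]
debug = True
```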
13:28:47 * slaweq needs to drop for another meeting but I am aware of bug deputy this week :)
13:28:47 <bbezak> thx
13:28:56 <ralonsoh> thanks
13:29:10 <haleyb> slaweq: ack, thanks
13:30:00 <haleyb> ok, so we have some debugging to do on 2111572
13:30:09 <haleyb> moving along to next one
13:30:26 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2111496
13:30:27 <haleyb> Neutron port stuck in DOWN state after update router external connectivity
13:30:57 <ralonsoh> yes, that is a bug
13:31:09 <ralonsoh> we, most probably, are not detecting the port update
13:31:19 <ralonsoh> and we are not activating the port again
13:32:24 <ralonsoh> (and, just thinking out loud, steps 6 and 7 are not needed)
13:32:41 <ralonsoh> if this is a port in the internal network
13:33:29 <haleyb> so just down/up of a port? ok
13:33:37 <ralonsoh> maybe
13:34:01 <haleyb> i'll ask submitter to try just so we know
13:34:03 <ralonsoh> I don't have an OVS env right now, but I can deploy one
13:34:12 <ralonsoh> but not this week...
13:35:03 <lajoskatona> I can try, I usually have OVS env
13:35:11 <ralonsoh> cool!
13:35:47 <haleyb> lajoskatona: ack, thanks
13:36:09 <haleyb> last bug in list
13:36:21 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2111660
13:36:22 <haleyb> OVN_AGENT_NEUTRON_SB_CFG_KEY doesn't set when Neutron OVN Agent starts
13:36:55 <ralonsoh> there is a patch proposed
13:37:03 <ralonsoh> and I think a good one, that fixes the problem
13:37:16 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2111475
13:37:25 <haleyb> oops, that's the related bug
13:37:31 <ralonsoh> yes
13:37:41 <ralonsoh> ahhhhh sorry
13:37:43 <haleyb> #link https://review.opendev.org/c/openstack/neutron/+/950673
13:38:04 <ralonsoh> Ok, I asked Min to open a LP bug if he was going to propose a patch only for ovn metadata
13:38:17 <ralonsoh> so this is why we have https://bugs.launchpad.net/neutron/+bug/2111660
13:39:13 <ralonsoh> Ok, I'll take this one, the fix should be similar to https://review.opendev.org/c/openstack/neutron/+/950673
13:39:35 <haleyb> do we need a fix for ovn-agent as well? i'm reading between the lines
13:39:43 <ralonsoh> yes, the issue is the same
13:39:54 <haleyb> ack, thanks
13:40:24 <haleyb> so i think that was all the unassigned bugs
13:40:33 <haleyb> any other bugs to discuss?
13:41:09 <haleyb> slaweq is the deputy this week, i am next week
13:41:48 <haleyb> #topic community goals
13:42:26 <haleyb> lajoskatona: sorry, i haven't been able to review any of your neutronclient patches, can look today
13:42:36 <haleyb> have they made forward progress from other reviews?
13:42:39 <lajoskatona> here they are: https://review.opendev.org/q/topic:%22bug/1999774%22+status:open+project:openstack/python-openstackclient+or+status:open+owner:self+project:openstack/openstacksdk
13:43:40 <lajoskatona> I think these are the ones for the sec-groups is_shared field, suggested by ralonsoh and stephenfin (perhaps another Depends-On can be added to one of the commit messages, but let's wait)
13:44:02 <haleyb> #link https://review.opendev.org/q/topic:%22bug/1999774%22
13:44:03 <lajoskatona> that's all this topic from me, slow progress
13:44:10 <haleyb> that shows them all
13:44:43 <lajoskatona> yes, but the filtered list is the hot one this week :-)
13:45:01 <haleyb> it is sometimes hard to move a large boulder :)
13:46:09 <haleyb> ok, finally eventlet
13:46:15 <ralonsoh> hi
13:46:16 <haleyb> i know there were new bugs filed
13:46:25 <ralonsoh> yes, related to the test FWs
13:46:38 <ralonsoh> and this is going to be a nightmare... in particular fullstack
13:47:07 <ralonsoh> I've proposed https://review.opendev.org/c/openstack/neutron/+/938404 to remove eventlet from the l3 agent
13:47:21 <ralonsoh> and right now I need to skip many FT and fullstack tests
13:47:26 <ralonsoh> this is explained in the docs
13:47:54 <ralonsoh> we also have other patches: https://review.opendev.org/q/topic:%22eventlet-removal%22+project:openstack/neutron+status:open
13:48:07 <ralonsoh> in particular the ones related to unit and functional tests
13:48:19 <lajoskatona> I'm slowly progressing with the DHCP one in the background...
13:48:22 <ralonsoh> and lajoskatona is working on the DHCP agent: https://review.opendev.org/c/openstack/neutron/+/950498
13:48:24 <ralonsoh> yeah
13:48:51 <ralonsoh> the first step, same as in the l3 agent: to move the RPC initialization out of the init methods
13:49:08 <ralonsoh> this is because oslo.service is spawning several new processes and threads after that
13:49:09 <lajoskatona> yes, I am on it to make it final
13:49:20 <ralonsoh> and the RPC connections do not like that
13:49:35 <ralonsoh> --> please review this: https://review.opendev.org/c/openstack/neutron/+/949135
13:49:42 <ralonsoh> (is ready now)
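The init-ordering issue ralonsoh describes can be sketched like this (hypothetical class and method names, not the actual Neutron agent code): the point is that no RPC connection is opened in `__init__`, only after oslo.service has spawned its worker processes/threads, so connections are never shared across a fork.

```python
import threading


class AgentSketch:
    """Sketch: keep __init__ free of RPC connections and defer them
    to start(), which the service framework calls in each worker
    after oslo.service has spawned its processes/threads."""

    def __init__(self):
        # Anti-pattern would be opening the RPC connection here,
        # before the fork/spawn, so all workers would share it.
        self.conn = None

    def start(self):
        # Runs inside each worker, after spawning: safe to connect now.
        self.conn = self._create_rpc_connection()

    def _create_rpc_connection(self):
        # Stand-in for an oslo.messaging connection.
        return {"connected": True, "thread": threading.get_ident()}


agent = AgentSketch()
assert agent.conn is None        # nothing opened at init time
agent.start()
assert agent.conn["connected"]   # connection created post-spawn
```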
13:50:01 <ralonsoh> and last topic
13:50:12 <haleyb> ralonsoh: ack, can look later today
13:50:23 <ralonsoh> we have oslo.service 4.2.0, that implements "threading" backend
13:50:36 <ralonsoh> that means: the whole service FW is written without eventlet
13:50:41 <ralonsoh> that's all I have
13:52:07 <haleyb> right. so i'm remembering an older change we never pushed for os-ken that i think was related to threading backend, will have to find it
13:52:24 <haleyb> sahid: i think that was yours ^^ just wondering if we need to move it forward now
13:53:05 <lajoskatona> yeah, some patches are hanging for ovs-agent and os-ken
13:53:05 <haleyb> i will look after meeting
13:53:15 <sahid> yes i think that could make sense
13:53:53 <lajoskatona> +1
13:54:38 <haleyb> ok, if we can get those in ready-to-merge state that would be great (if not there already)
13:54:51 <sahid> i will double check
13:54:57 <haleyb> thanks
13:55:02 <sahid> sure
13:55:11 <haleyb> i will move on since i have a hard stop at top of hour
13:55:15 <haleyb> #topic on-demand
13:55:20 <haleyb> lajoskatona: you had an item
13:55:32 <lajoskatona> yes
13:55:36 <lajoskatona> regarding trunks
13:55:52 <lajoskatona> rubasov already sent out a mail to summarize: https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/W3CNUX2QPKLKWY6UV4ZGR3E65TVE4BLC/
13:56:35 <lajoskatona> and we have some patches to fix different aspects of the issue which happens when we have many subports for a trunk (actually that's just for easy reproduction)
13:57:11 <lajoskatona> the root cause is that the status of the trunk is set to ACTIVE when the parent port goes active, not when all the subports go active
13:57:27 <lajoskatona> and this was introduced by a fix for OVN: https://review.opendev.org/c/openstack/neutron/+/853779
13:58:38 <lajoskatona> so this breaks the OVS driver for sure, but OVN too; one fix from rubasov is to move this status change to the OVN-only part, and for OVS to fix this in os-vif, plus some extra code docs for the code archaeologists after us :-)
13:59:08 <lajoskatona> and the patch series: https://review.opendev.org/q/topic:%22bug/2095152%22
13:59:32 <lajoskatona> please check them if you have time, and read the mail I linked with the bug report :-)
13:59:42 <lajoskatona> That's it, thanks for the attention
14:00:21 <ralonsoh> qq, the os-vif patch is related to the OVN issue?
14:00:25 <haleyb> lajoskatona: thanks will take a look, and i see you added the right people to the review i think
14:00:33 <lajoskatona> no that is for OVS
14:00:36 <ralonsoh> thanks!
14:00:53 <lajoskatona> thanks
14:01:32 <haleyb> ok, that's all i had, any other topics
14:02:05 <haleyb> ok, thanks for attending everyone, have a good week
14:02:07 <haleyb> #endmeeting