13:00:16 <ralonsoh> #startmeeting networking
13:00:16 <opendevmeet> Meeting started Tue May 13 13:00:16 2025 UTC and is due to finish in 60 minutes.  The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:16 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:16 <opendevmeet> The meeting name has been set to 'networking'
13:00:24 <ralonsoh> Ping list: bcafarel, elvira, frickler, mlavalle, mtomaska, obondarev, slaweq, tobias-urdin, ykarel, lajoskatona, jlibosva, averdagu, haleyb, ralonsoh, sahid
13:00:30 <ralonsoh> hello all
13:00:36 <obondarev> o/
13:00:39 <mlavalle> \o
13:00:39 <mtomaska> o/
13:00:40 <cbuggy> o/
13:00:53 <rubasov> o/
13:01:12 <frickler> \o
13:01:27 <ralonsoh> let's start
13:01:40 <ralonsoh> #topic announcements
13:01:46 <ralonsoh> #link https://releases.openstack.org/flamingo/schedule.html
13:01:47 <bcafarel> o/
13:01:52 <ralonsoh> we are in R20
13:02:01 <ralonsoh> that means flamingo-1 milestone
13:02:37 <ralonsoh> a heads-up about the Neutron drivers meeting
13:02:42 <lajoskatona> o/
13:02:42 <ralonsoh> the agenda is here: https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
13:02:55 <ralonsoh> if you want to add any topic, please do it before this Friday
13:03:07 <elvira> o/
13:03:12 <slaweq> o/
13:03:23 <ralonsoh> Bobcat is now unmaintained: https://review.opendev.org/q/topic:%22bobcat-stable%22
13:03:35 <frickler> ralonsoh: nope, eoled
13:03:36 <ralonsoh> sorry, EOL
13:03:49 <ralonsoh> yes, I was writing that
13:04:23 <ralonsoh> and that's all I have, any other announcement?
13:05:07 <ralonsoh> ok, let's move on
13:05:09 <ralonsoh> #topic bugs
13:05:25 <ralonsoh> last report was done by mtomaska
13:05:29 <ralonsoh> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/MHNCBYOVZPJJS7IETORBQRTLDLJMBGKP/
13:05:47 <ralonsoh> there are some unassigned bugs
13:05:56 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2110094
13:06:01 <ralonsoh> OVN AgentCache get_agents method filters agents incorrectly if hostname overlaps
13:06:16 <ralonsoh> well, not this one, I've proposed a patch today
13:06:17 <ralonsoh> sorry
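A minimal, hypothetical illustration of the failure mode the bug title describes; the names and logic below are made up and are not the actual neutron AgentCache code. The point is just that substring matching on hostnames makes "compute-1" also match "compute-10", while an exact match does not.

    class Agent:
        def __init__(self, agent_id, host):
            self.id = agent_id
            self.host = host

    class AgentCacheSketch:
        """Hypothetical stand-in for the OVN AgentCache."""
        def __init__(self, agents):
            self._agents = agents

        def get_agents_buggy(self, host):
            # Substring match: overlapping hostnames leak into the result.
            return [a for a in self._agents if host in a.host]

        def get_agents_fixed(self, host):
            # Exact match returns only the requested host.
            return [a for a in self._agents if a.host == host]

    cache = AgentCacheSketch([Agent("a1", "compute-1"), Agent("a2", "compute-10")])
    assert len(cache.get_agents_buggy("compute-1")) == 2  # wrong, both hosts match
    assert len(cache.get_agents_fixed("compute-1")) == 1  # only compute-1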
13:06:29 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2110225
13:06:33 <ralonsoh> Floating IPs attached to Octavia LB VIPs are not reachable across routers using different external subnets (OVN + Octavia)
13:06:44 <ralonsoh> This one has been moved from Octavia to Neutron
13:07:00 <mtomaska> yep. I am not sure if that is a bug to be honest
13:07:14 <ralonsoh> I didn't have time to reproduce the scenario
13:07:36 <ralonsoh> and maybe it could be done without deploying octavia
13:07:58 <ralonsoh> if I'm not wrong, it is adding an extra subnet to a GW network, right?
13:08:51 <mtomaska> yes. the GW network (aka public) has two subnets and each subnet is connected to a router which is then connected to its own private network
13:10:08 <ralonsoh> ok, I need to spend some time on this one
13:10:34 <ralonsoh> I know for sure there are some patches related to the presence of two subnets in the GW network in an OVN router
13:10:38 <ralonsoh> but I need to find them
13:10:57 <ralonsoh> in any case, does anyone have time to check this bug?
13:11:05 <mtomaska> I'll do it
13:11:11 <ralonsoh> mtomaska, thanks!
13:11:19 <ralonsoh> I'll ping you later with the related patches
13:11:22 <mtomaska> I'll set up the network they described
13:11:26 <mtomaska> ack
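A rough openstacksdk sketch of the topology described above: two subnets on the existing external network, each one used as the gateway of its own router with its own private network. The cloud name, resource names and CIDRs are made up, and the first router/subnet pair is assumed to already exist and be built the same way.

    import openstack

    conn = openstack.connect(cloud="devstack")  # assumes a clouds.yaml entry

    public = conn.network.find_network("public")
    # Second subnet on the existing external (GW) network.
    pub2 = conn.network.create_subnet(
        network_id=public.id, name="public-subnet-2",
        ip_version=4, cidr="203.0.113.0/24")

    # Second private network/subnet and a router whose gateway port
    # lands on the new external subnet.
    priv = conn.network.create_network(name="private-2")
    priv_sub = conn.network.create_subnet(
        network_id=priv.id, name="private-subnet-2",
        ip_version=4, cidr="10.20.0.0/24")
    router = conn.network.create_router(
        name="router-2",
        external_gateway_info={
            "network_id": public.id,
            "external_fixed_ips": [{"subnet_id": pub2.id}],
        })
    conn.network.add_interface_to_router(router, subnet_id=priv_sub.id)

If I read the report right, pinging a floating IP allocated from one external subnet from a VM behind the router on the other external subnet should then show the problem, possibly without deploying Octavia at all.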
13:11:44 <ralonsoh> next bug
13:11:50 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2110087
13:11:57 <ralonsoh> OVN log plugin merges log records from different log objects across projects
13:12:31 <ralonsoh> so basically, as written at the end:
13:12:32 <mtomaska> I was not 100% sure what the reporter claims the bug is. But I think it is what I posted in my comment
13:12:37 <ralonsoh> "all log entries are being attributed to only one of the Neutron log objects, despite being from different security groups and different projects/domains"
13:12:56 <mtomaska> I.e. we group all logs under the same "neutron object" in the ovn-controller log.
13:13:53 <ralonsoh> so you tried pinging from vm-a to vm-b
13:14:00 <ralonsoh> both with different SGs, right?
13:14:03 <mtomaska> yes
13:14:11 <ralonsoh> and the logs are always associated to one SG
13:14:28 <mtomaska> they got associated to object "neutron-fd9caf20-08b6-40e1-bde8-c33aee1f675a"
13:14:40 <ralonsoh> that is SGa
13:14:40 <mtomaska> which is wrong... and I think that is the core of this bug
13:15:09 <ralonsoh> right, I think that deserves a question on the OVN mailing list
13:15:25 <mtomaska> yes in this case. What I saw is that it will always be the first uuid of the first neutron object name
13:16:08 <mtomaska> ^ that seems wrong
13:16:14 <ralonsoh> qq: when should we see logs from the other SG?
13:16:51 <mtomaska> IDK :) I don't know enough about this feature to tell what is wrong and what is right :)
13:16:53 <ralonsoh> because I see no logs related to the ping in the other direction
13:17:07 <ralonsoh> elvira, maybe you can check this one first?
13:17:08 <mtomaska> I'll talk to elvira
13:17:11 <ralonsoh> right
13:17:13 <mtomaska> right
13:17:13 <frickler> could this be an information leak? i.e. is this a security issue?
13:17:25 <ralonsoh> it doesn't look like a security issue
13:17:38 <ralonsoh> but a lack of logging
13:17:40 <elvira> sure, I can take a look
13:17:44 <mtomaska> frickler: I don't think so. After all, only admins should have access to ovn-controller.log, right?
13:18:11 <mtomaska> elvira: let's discuss offline. Maybe a quick video call or something
13:18:20 <frickler> mtomaska: ah, right
13:18:33 <ralonsoh> cool, thanks, let's talk about this one offline
13:18:51 <ralonsoh> the last one is
13:18:55 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2110306
13:18:56 <elvira> Note that dropped traffic will be logged for all sec groups since we have that traffic being monitored by the general drop acl
13:19:01 <elvira> will continue offline :)
13:19:14 <ralonsoh> [RFE][OVN] subnet onboarding is not supported
13:19:26 <ralonsoh> so this is a RFE
13:19:31 <mtomaska> seems like an RFE decision to me.
13:19:49 <ralonsoh> hmmm
13:20:00 <ralonsoh> maybe it's just a matter of adding the extension to OVN
13:20:19 <ralonsoh> because I think this is only API related
13:20:21 <ralonsoh> (I think)
13:20:40 <mtomaska> what is really subnet onboarding? :)
13:20:52 <ralonsoh> "The subnet onboard feature allows you to take existing subnets that have been created outside of a subnet pool and move them into an existing subnet pool. "
13:21:15 <frickler> it changes the address scope, so maybe OVN will have to do something?
13:21:16 <lajoskatona> https://docs.openstack.org/neutron/latest/admin/config-subnet-onboard.html
13:22:33 <ralonsoh> frickler, yeah, that might need some driver interaction, not just API actions
13:22:44 <ralonsoh> ok, I'll check it after the meeting
13:23:06 <ralonsoh> if this bug requires any kind of driver interaction, that will be an RFE for sure
13:23:25 <ralonsoh> otherwise, it will just be a one-liner patch
13:23:33 <mtomaska> ack
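For reference, subnet onboarding is an action on the subnetpool resource; a rough keystoneauth sketch of the call is below. The endpoint, credentials and IDs are made up, and the exact URL and request body are from memory of the API reference, so treat them as an assumption.

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url="http://controller/identity/v3",
                       username="admin", password="secret",
                       project_name="admin",
                       user_domain_id="default", project_domain_id="default")
    sess = session.Session(auth=auth)

    pool_id = "11111111-2222-3333-4444-555555555555"     # made-up subnetpool ID
    network_id = "66666666-7777-8888-9999-000000000000"  # made-up network ID
    resp = sess.put(
        "http://controller:9696/v2.0/subnetpools/%s/onboard_network_subnets"
        % pool_id,
        json={"network_id": network_id})
    print(resp.status_code)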
13:23:51 <ralonsoh> and that's all I have
13:23:57 <ralonsoh> any other bug you want to discuss?
13:24:37 <ralonsoh> and the "winner" this week is bcafarel, next week will be elvira
13:24:40 <ralonsoh> ack?
13:24:43 <elvira> ack
13:24:58 <bcafarel> what is the winning prize? :)
13:25:06 <bcafarel> (and also ack for this week)
13:25:15 <ralonsoh> spend wonderful time reviewing launchpad...
13:25:33 <mtomaska> bcafarel: 10+ bugs assigned
13:25:53 <ralonsoh> ok, let's move to the next topic
13:26:04 <ralonsoh> #topic community-goals
13:26:15 <ralonsoh> the first one is the Neutronclient deprecation
13:26:18 <ralonsoh> lajoskatona, any update?
13:26:36 <lajoskatona> nothing this week
13:26:47 <lajoskatona> I pinged the horizon team, that's all
13:26:53 <ralonsoh> link to any patch?
13:27:10 <lajoskatona> https://review.opendev.org/c/openstack/horizon/+/946269
13:27:18 <ralonsoh> cool thanks!
13:27:24 <lajoskatona> I had no time to go back to fullstack
13:27:41 <lajoskatona> FYI: https://review.opendev.org/c/openstack/neutron/+/947851
13:28:03 <ralonsoh> ah yes, complex one
13:28:47 <ralonsoh> the next goal is the eventlet removal
13:29:07 <ralonsoh> I've pushed a new patch this week, related to the L3 agent LP
13:29:10 <ralonsoh> #link https://review.opendev.org/c/openstack/neutron/+/949135
13:29:11 <lajoskatona> the biggest problem is that I have no idea how to do a no-auth connection with SDK, and it seems it is not easy and not known by anybody
13:29:53 <ralonsoh> lajoskatona, this?
13:29:53 <ralonsoh> https://review.opendev.org/c/openstack/neutron/+/947851/1/neutron/tests/fullstack/resources/process.py
13:30:08 <lajoskatona> yes, exactly
13:30:22 <lajoskatona> but sorry, go ahead with eventlet, that's it from me for SDK
13:30:49 <ralonsoh> lajoskatona, qq: did you bring this topic to any team meeting?
13:30:57 <ralonsoh> slaweq, ^ could be in TC?
13:31:21 <lajoskatona> ralonsoh: you mean the SDK vs no-auth question?
13:31:37 <ralonsoh> sorry, not TC, SDK
13:31:43 <slaweq> ralonsoh I am not in TC anymore and didn't attend meetings recently
13:31:51 <ralonsoh> sorry, not TC, my bad
13:31:57 <lajoskatona> I asked on SDK channel, but plan to ask more to see if it is possible at all
13:32:00 <slaweq> frickler should know better maybe
13:32:19 <frickler> I also only know that it seems to be difficult
13:32:41 <ralonsoh> but that was possible with neutronclient
13:33:03 <lajoskatona> we call that progress usually (sorry just sarcasm....)
13:33:12 <ralonsoh> hahahaha
13:33:49 <ralonsoh> maybe, just for testing, this would need to be implemented in SDK
13:34:02 <ralonsoh> I'll ping stephenfin after this meeting
13:34:07 <lajoskatona> I'll run another round with the SDK people on how we should progress with it
13:34:28 <lajoskatona> thanks, I had a chat with gtema last week or the week before about this
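For context, this is roughly what fullstack relies on today with python-neutronclient (from memory, argument names may differ slightly): a client pointed straight at the Neutron API URL with authentication disabled, no Keystone involved, which is the part that has no obvious openstacksdk equivalent yet.

    from neutronclient.v2_0 import client

    # No Keystone: talk directly to the API endpoint with auth disabled.
    neutron = client.Client(
        auth_strategy="noauth",
        endpoint_url="http://127.0.0.1:9696",
    )
    print(neutron.list_networks())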
13:34:52 <ralonsoh> ok, about the L3 patch (https://review.opendev.org/c/openstack/neutron/+/949135). Just a bit of context:
13:35:40 <ralonsoh> it is described in the commit message. With the oslo.service threading backend (patch under review), the service (l3 agent) is forked in the start method
13:35:59 <ralonsoh> any object created in the init method will be forked too
13:36:06 <ralonsoh> but RPC clients don't like this
13:36:25 <ralonsoh> I've moved all the RPC client initialization to the init_host method, that is called from Service.start()
13:36:47 <ralonsoh> that patch MUST work with both backends (in the CI it is working now with eventlet)
13:37:03 <ralonsoh> in the patch above it is working with the threading backend
13:37:06 <ralonsoh> that's all
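A minimal, hypothetical sketch of the pattern, not the actual patch: anything fork-sensitive, such as RPC clients, is created in init_host(), which the service runs after forking, rather than in __init__(), which runs in the parent before the fork.

    class FakeRpcClient:
        """Stand-in for a real RPC client; real ones hold connections and
        threads that do not survive being copied by a fork."""
        def call(self, method):
            return "called %s" % method

    class L3AgentLikeService:
        def __init__(self, host):
            # Only fork-safe state here (config values, plain data).
            self.host = host
            self.plugin_rpc = None

        def init_host(self):
            # Runs in the worker process, after the fork, so the client's
            # connections belong to the child.
            self.plugin_rpc = FakeRpcClient()

        def start(self):
            # A real launcher would fork here, then call init_host().
            self.init_host()

    svc = L3AgentLikeService("compute-1")
    svc.start()
    print(svc.plugin_rpc.call("sync_routers"))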
13:37:30 <ralonsoh> any other comment?
13:37:41 <lajoskatona> thanks ralonsoh, I'll check this one, I also played with the dhcp-agent and the oslo.service patch
13:37:52 <ralonsoh> yeah, that will need exactly the same
13:37:56 <lajoskatona> but just locally at the moment
13:38:05 <ralonsoh> to move all RPC clients to the init_host method
13:38:25 <lajoskatona> I was able to start the agent with it, with some exceptions, but will check yours of course
13:38:35 <ralonsoh> did you use the oslo.service patch?
13:38:41 <lajoskatona> yes, I did
13:38:47 <ralonsoh> cool!
13:39:03 <ralonsoh> it seems that we'll have it soon in a new oslo.service release
13:39:10 <ralonsoh> that will be easier for testing
13:39:29 <lajoskatona> +1
13:39:37 <ralonsoh> ok, let's move to the last topic
13:39:42 <ralonsoh> #topic on-demand
13:39:51 <ralonsoh> anything you want to add?
13:40:47 <ralonsoh> Ok, I'm closing the meeting for today. Thank you all for attending!
13:40:53 <ralonsoh> #endmeeting