14:00:42 <ralonsoh> #startmeeting networking
14:00:42 <opendevmeet> Meeting started Tue Jun 20 14:00:42 2023 UTC and is due to finish in 60 minutes.  The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:42 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:42 <opendevmeet> The meeting name has been set to 'networking'
14:00:44 <ralonsoh> hello all
14:00:48 <obondarev> hi
14:00:50 <lajoskatona> o/
14:01:14 <ralonsoh> slaweq is not going to attend today
14:01:46 <isabek> o/
14:02:05 <ralonsoh> ok, let's start
14:02:12 <ralonsoh> #topic announcements
14:02:24 <ralonsoh> #link https://releases.openstack.org/bobcat/schedule.html
14:02:26 <bcafarel> late o/
14:02:46 <ralonsoh> in 2 weeks, we have bobcat-2 milestone
14:03:08 <ralonsoh> we officially don't have a spec freeze
14:03:25 <ralonsoh> but we should do the same as Nova, using this week as spec freeze
14:03:31 <ralonsoh> but we'll talk later about this
14:03:39 <matfechner> o/
14:03:50 <ralonsoh> and the PTG, of course!
14:03:56 <ralonsoh> this is my summary: https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034161.html
14:04:07 <lajoskatona> thanks for the summary
14:04:11 <ralonsoh> many people attending the forum and talk sessions
14:04:40 <ralonsoh> and quite a few people joining slaweq and me (the only Neutron core reviewers attending) at the PTG table
14:05:20 <ralonsoh> that was very interesting. Of course, that was more of an issues/problems forum than a program PTG
14:05:28 <ralonsoh> but we had it 2 months ago
14:05:49 <ralonsoh> and that's all, anything else missing here?
14:06:26 <ralonsoh> ok, let's move to the next topic
14:06:29 <ralonsoh> #topic bugs
14:06:34 <ralonsoh> we have 2 reports
14:06:41 <ralonsoh> from bcafarel: https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034086.html
14:06:48 <ralonsoh> from lajoskatona: https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034147.html
14:06:49 <opendevreview> Lajos Katona proposed openstack/python-neutronclient master: OSC: Remove FWAAS V2 calls to neutronclient  https://review.opendev.org/c/openstack/python-neutronclient/+/880629
14:06:58 <bcafarel> for once I had a quiet week :)
14:07:11 <ralonsoh> yeah hehehe I noticed that
14:07:13 <lajoskatona> agree
14:07:29 <ralonsoh> we have 2 pending bugs to be discussed
14:07:35 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2024251
14:08:11 <ralonsoh> If no one takes it, I'll assign it to myself, but for next week
14:08:30 <ralonsoh> it seems that it is easy to reproduce
14:09:20 <ralonsoh> the most important bug we have this week is
14:09:22 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2024160
14:09:40 <ralonsoh> it seems that a new version of OVN is affecting the OVN ports
14:09:56 <ralonsoh> in particular the subports, that are never set as ACTIVE
14:10:18 <ralonsoh> but if I'm not wrong, this affects any subport, even before being migrated
14:10:21 <ralonsoh> lajoskatona, right?
14:11:10 <lajoskatona> yes, that's visible from the bug
14:11:38 <lajoskatona> another occurrence is this bug: https://bugs.launchpad.net/neutron/+bug/2023634
14:11:56 <ralonsoh> yeah, this is related to the OVN version
14:12:04 <lajoskatona> this appears only in our functional job, and it seems changing the test is enough
14:12:16 <ralonsoh> but this code https://review.opendev.org/c/openstack/neutron/+/883681 affects only the virtual ports
14:12:50 <ralonsoh> OVN has changed how the DB port binding registers are updated
14:13:05 <ralonsoh> I'll check the virtual port feature but seems to be not affected
14:13:07 <ralonsoh> only the test
14:13:18 <ralonsoh> but the issue with the subports is legit and is critical
14:13:21 <lajoskatona> yes, that's what it looks like from the functional tests: sometimes 2 delete events arrive for the same port
14:13:28 <ralonsoh> right ^
14:13:58 <lajoskatona> yes, and the trunk/subport issue affects other projects, so it gets more attention
14:14:01 <ralonsoh> I think that your patch could add this comment there, https://review.opendev.org/c/openstack/neutron/+/886167/1/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py
14:14:18 <ralonsoh> just to let anyone know that this assert changes depending on the OVN version
14:14:27 <ralonsoh> but this fix for the FT test is OK IMO
14:14:53 <ralonsoh> I'll start working today to figure out what is happening with the subports
14:15:14 <ralonsoh> ok, any other bug to be discussed here?
14:15:21 <frickler> for my bgp bug, it would be nice if someone with more DB insight could take a look at https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/885993
14:15:40 <ralonsoh> frickler, let me check the bug first
14:16:28 <frickler> that's https://launchpad.net/bugs/2023632 just ftr
14:17:22 <ralonsoh> but it seems to be a problem with the IP parsing
14:17:26 <ralonsoh> well, IP version
14:19:02 <frickler> ralonsoh: that may be either another issue in os-ken or maybe n-d-r, but the v4 prefix should not get announced at all in that scenario
14:19:33 <frickler> because it doesn't have the matching address-scope
14:19:48 <frickler> and afaict n-d-r also doesn't do MP-BGP at all yet
14:19:52 <ralonsoh> but this os-ken call is a callback?
14:19:59 <ralonsoh> or are you calling it from n-d-r?
14:20:22 <frickler> I would have to check in detail, but I think it is called from n-d-r
14:20:40 <ralonsoh> in that case, this is easier if you filter that from n-d-r
14:20:52 <ralonsoh> well, that's easy to say
14:21:02 <frickler> my patch filters it in the DB query
14:21:41 <frickler> which is already happening in the usual workflow. the bug only seems to happen if a router is updated with multiple interfaces
14:21:58 <frickler> I'll work on a tempest test to expose the bug, too
14:22:26 <ralonsoh> ^^ cool. Can you print the output of this query? as debug, temporarily
14:22:47 <frickler> anyway my patch fixes the issue in my deployment where the bug was discovered, I just need a bit of confirmation that it looks correct
14:22:55 <ralonsoh> perfect
14:23:26 <frickler> I can try to add in some debugging, too, yes
14:23:47 <ralonsoh> perfect, I'll wait too for this tempest test, if possible
14:23:49 <ralonsoh> thanks!
14:24:10 <ralonsoh> this week slaweq is the deputy, next week will be haleyb
14:24:18 <ralonsoh> (I'll ping slaweq tomorrow)
14:24:21 <ralonsoh> haleyb, ?
14:24:36 <haleyb> ack
14:24:39 <ralonsoh> thanks!
14:24:54 <ralonsoh> so let's jump to our next topic
14:24:59 <ralonsoh> #topic specs
14:25:12 <ralonsoh> as commented before, we should start closing the spec reviews
14:25:19 <ralonsoh> we have one pending right now
14:25:20 <ralonsoh> https://review.opendev.org/q/project:openstack%252Fneutron-specs+status:open
14:25:26 <ralonsoh> #link https://review.opendev.org/c/openstack/neutron-specs/+/885324
14:25:53 <ralonsoh> ok, if I'm not wrong, it is ready for review, mlavalle, right?
14:26:07 <mlavalle> it is
14:26:33 <ralonsoh> perfect then, I'll most probably check it this week on Friday morning
14:26:48 <mlavalle> thanks!
14:26:56 <ralonsoh> please, spend some time reviewing it in order to merge it, if possible, in the next 2 weeks
14:27:07 <lajoskatona> ack
14:27:13 <ralonsoh> thanks folks
14:27:29 <ralonsoh> next topic
14:27:38 <ralonsoh> #topic community_goals
14:27:46 <ralonsoh> 1) Consistent and Secure Default RBAC
14:28:06 <ralonsoh> as commented last meeting, slaweq is now working on the service-to-service role
14:28:36 <ralonsoh> because this is something new and still under study, I'll skip any update on this topic until further notice (from slaweq)
14:28:44 <ralonsoh> the next one is
14:28:48 <ralonsoh> 2) Neutron client deprecation
14:28:57 <ralonsoh> I see new/updated patches, right?
14:29:14 <lajoskatona> the usual etherpad: https://etherpad.opendev.org/p/python-neutronclient_deprecation
14:29:32 <lajoskatona> yes, 2 are open:
14:29:52 <lajoskatona> https://review.opendev.org/c/openstack/python-neutronclient/+/880629
14:30:02 <lajoskatona> https://review.opendev.org/c/openstack/python-neutronclient/+/884180
14:30:40 <ralonsoh> the first one is straightforward, I think
14:30:57 <ralonsoh> do we have the sdk version released?
14:31:11 <lajoskatona> yes, I hope it will be green, as all dependencies are merged now
14:31:15 <ralonsoh> perfect
14:31:47 <ralonsoh> if the CI is happy, I'll be too
14:31:52 <lajoskatona> I have to check the sdk release
14:32:06 <ralonsoh> yeah, maybe bump it in the requirements
14:32:29 <ralonsoh> sorry no, not in the neutron client reqs
14:33:48 <lajoskatona> that's it for this topic from me
14:33:54 <ralonsoh> thanks!
14:34:31 <ralonsoh> ok, and that's all I have
14:34:35 <ralonsoh> #topic on_demand
14:34:41 <ralonsoh> do you have any topic?
14:34:54 <frickler> I have a question about draining subnets
14:34:57 <ralonsoh> sure
14:35:25 <frickler> like when you increase your public pool from /24 to /23 by adding a new subnet and then gradually empty the old subnet
14:35:53 <frickler> so you want to stop allocating router gateway and FIPs from it, but leave the existing ones working for a while
14:36:28 <frickler> I found that this can be achieved by setting service-type for the old subnet to some weird value like network:dummy
14:36:45 <frickler> but I feel like I'm abusing that feature and wonder if there is a better solution
14:36:52 <frickler> or if it should be implemented
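[editor's note] The draining trick frickler describes relies on how subnet service types steer IP allocation. The sketch below is a rough illustration of that selection logic (not Neutron's actual IPAM code); the subnet IDs and the `network:dummy` owner are just the examples from the discussion:

```python
# Illustration (hypothetical, simplified) of service-type-based subnet
# selection: a subnet with service_types set is only eligible for ports
# whose device_owner matches one of them, while a subnet with no
# service_types serves any port. Tagging the old subnet with an unused
# owner like "network:dummy" therefore keeps new gateways and FIPs off
# it, while already-allocated IPs are untouched.

def candidate_subnets(subnets, device_owner):
    """Return the subnets eligible to allocate an IP for device_owner."""
    eligible = []
    for subnet in subnets:
        service_types = subnet.get("service_types") or []
        # An empty service_types list means the subnet serves any port.
        if not service_types or device_owner in service_types:
            eligible.append(subnet)
    return eligible

subnets = [
    {"id": "old-24", "service_types": ["network:dummy"]},  # being drained
    {"id": "new-23", "service_types": []},                 # open to all
]

# New router gateways only land on the new subnet.
print([s["id"] for s in candidate_subnets(subnets, "network:router_gateway")])
# -> ['new-23']
```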
14:36:56 <opendevreview> NickKush proposed openstack/neutron master: Handle fixed_ip delete in port with FIP  https://review.opendev.org/c/openstack/neutron/+/885999
14:38:07 <ralonsoh> to be honest, I don't know how to do this
14:38:19 <ralonsoh> I mean, how to force the IPAM module to use only one of the subnets
14:39:10 <frickler> well that's what the subnet service-type extension kind of does
14:39:32 <frickler> except I'm now telling it to not use one of the subnets
14:39:38 <obondarev> the workaround with service-type does not look so ugly imo, like setting it to network:no_use or like that
14:40:21 <lajoskatona> agree, perhaps the doc should be updated with the example or similar
14:40:31 <ralonsoh> I'm trying to find the service type for GW ports or FIPs or external networks
14:40:35 <ralonsoh> if any
14:41:30 <ralonsoh> do you have some reference?
14:41:35 <frickler> those are all nicely listed here https://docs.openstack.org/neutron/latest/admin/config-service-subnets.html
14:42:08 <frickler> there is a check in place that prevents some random device owner from being used, e.g. a bare "no-use" is denied
14:42:30 <frickler> but "neutron:no_use" is kind of cheating a bit and allowed
14:42:48 <frickler> because there is no detailed list of allowed owners
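[editor's note] A minimal sketch of the prefix-only validation frickler describes (hypothetical, not Neutron's actual validator; the set of allowed prefixes is assumed from the examples in the discussion). Because only the prefix is checked, "neutron:no_use" slips through while a bare "no-use" is rejected:

```python
# Hypothetical prefix-based device-owner check: only the "<namespace>:"
# prefix of a service type is validated, not the full owner string, so
# any suffix under a known prefix is accepted.

VALID_PREFIXES = ("network:", "neutron:", "compute:")  # assumed set

def validate_service_type(service_type):
    """Accept a service type iff it starts with a known owner prefix."""
    return service_type.startswith(VALID_PREFIXES)

print(validate_service_type("no-use"))          # -> False (denied)
print(validate_service_type("neutron:no_use"))  # -> True (allowed)
```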
14:43:04 <opendevreview> NickKush proposed openstack/neutron master: Handle fixed_ip delete in port with FIP  https://review.opendev.org/c/openstack/neutron/+/885999
14:43:25 <ralonsoh> maybe we can make it "official", having a neutron:no_use constant for this purpose
14:43:40 <ralonsoh> the subnet will be valid but won't be scheduled by the IPAM module
14:44:00 <frickler> so add that to that doc page or what else to make it official?
14:44:30 <ralonsoh> and make that official in the code: if the no_use service type is set, the subnet is not usable
14:44:39 <ralonsoh> _validate_segment will reject it
14:45:10 <frickler> well it should not reject existing FIPs, we'd have to be careful there
14:45:31 <ralonsoh> no no, just the IP allocation, that is what you want, right?
14:45:43 <ralonsoh> just prevent new IPs from being allocated in the old subnet
14:45:52 <frickler> yes, so that customers can migrate to new addresses at their own pace
14:45:53 <ralonsoh> but any existing IP will be valid
14:46:05 <ralonsoh> I think this is a valid use for the service types
14:46:41 <ralonsoh> let's open it as an RFE, discuss it in the neutron drivers meeting and most probably implement it without a spec
14:46:48 <ralonsoh> just the needed documentation
14:46:50 <ralonsoh> and the code
14:47:03 <haleyb> s/no_use/do_not_use? but we can talk in the RFE about the details
14:47:11 <frickler> o.k., I'll do an RFE, ack
14:47:30 <ralonsoh> exactly, we can define the constant in the review or the meeting
14:47:36 <ralonsoh> cool!
14:47:40 <frickler> thx
14:47:51 <ralonsoh> any other topic?
14:48:17 <ralonsoh> remember today we don't have CI meeting
14:48:29 <ralonsoh> thank you all for attending and see you online
14:48:39 <ralonsoh> #endmeeting