14:00:38 <ralonsoh> #startmeeting networking
14:00:38 <opendevmeet> Meeting started Tue Feb 21 14:00:38 2023 UTC and is due to finish in 60 minutes.  The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:38 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:38 <opendevmeet> The meeting name has been set to 'networking'
14:00:41 <lajoskatona> o/
14:00:42 <ralonsoh> hello all
14:00:44 <isabek> Hi
14:00:47 <obondarev> hi
14:00:56 <rubasov> o/
14:01:04 <elvira> o/
14:01:49 <ralonsoh> let's start
14:01:59 <ralonsoh> #topic announcements
14:02:02 <frickler> \o
14:02:08 <ralonsoh> Antelope / 2023.1 schedule: https://releases.openstack.org/antelope/schedule.html
14:02:19 <ralonsoh> we are in R-4
14:02:26 <ralonsoh> this week I'll push https://releases.openstack.org/antelope/schedule.html#a-cycle-highlights
14:02:42 <ralonsoh> once I have this patch, I'll ping you to review it and comment
14:02:52 <ralonsoh> and remember next week is the RC-1
14:03:21 <lajoskatona> +1, thanks
14:03:24 <ykarel> o/
14:03:28 <ralonsoh> same as last week, I'll mention the Bobcat etherpads here again
14:03:35 <ralonsoh> for the PTG: https://etherpad.opendev.org/p/neutron-bobcat-ptg
14:03:43 <ralonsoh> for Vancouver: https://etherpad.opendev.org/p/neutron-vancouver-2023
14:04:10 <ralonsoh> please, don't hesitate to add your topics (I'll send a mail today again)
14:04:59 <ralonsoh> and that's all I have in this topic. Good news is that the CI is working one week before RC-1 and nothing seems to be broken
14:05:02 <ralonsoh> fingers crossed
14:05:34 <ralonsoh> next topic
14:05:38 <ralonsoh> #topic bugs
14:05:46 <ralonsoh> ykarel had a very busy week
14:05:51 <ralonsoh> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032314.html
14:06:09 <ralonsoh> there are some bugs that need to be discussed here
14:06:18 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2007357
14:06:28 <ralonsoh> ykarel, do you have any update?
14:06:45 <ykarel> ralonsoh, for that I pushed a patch in grenade to capture the console log to debug further
14:06:52 <ykarel> https://review.opendev.org/c/openstack/grenade/+/874417
14:06:53 <ralonsoh> do you have the link?
14:06:55 <ralonsoh> perfect
14:07:01 <ykarel> seems it's not linked in the bug, will update
14:07:03 <ralonsoh> I'll add it to the bug
14:07:38 <ralonsoh> ok, with this patch we'll have more info
14:07:39 <ralonsoh> thanks!
14:08:24 <ralonsoh> next one
14:08:33 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2007353
14:08:45 <ralonsoh> I'll take this one this week
14:08:56 <ralonsoh> it's failing too often
14:09:09 <slaweq> ++ thx
14:09:17 <ralonsoh> next one
14:09:20 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/1934666
14:09:35 <ralonsoh> this is something slaweq is working on
14:09:46 <ralonsoh> patch: https://review.opendev.org/c/openstack/neutron/+/874536
14:10:06 <ralonsoh> that one uses a 3-node setup where only the controller is dvr_snat
14:10:14 <ralonsoh> ^^ please review this patch
14:10:17 <slaweq> I'm working only on the change for the CI job, not the bug itself
14:10:23 <ykarel> ok, let's link that to the bug too
14:10:24 <slaweq> just to be clear :)
14:10:47 <ralonsoh> yeah but this is the bug itself: that we can't use dvr_snat on computes
14:10:55 <opendevreview> Slawek Kaplonski proposed openstack/neutron master: Change neutron-ovs-tempest-dvr-ha-multinode-full job's config  https://review.opendev.org/c/openstack/neutron/+/874536
14:11:08 <ralonsoh> I'll add this link in the bug too
14:11:36 <ralonsoh> thanks!
14:11:40 <slaweq> ykarel done
14:11:52 <ykarel> thx
14:12:16 <ralonsoh> next one
14:12:19 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2007414
14:12:28 <ralonsoh> a low-hanging-fruit one
14:12:52 <ralonsoh> next one
14:12:54 <ralonsoh> #link https://bugs.launchpad.net/nova/+bug/2006467
14:12:57 <ralonsoh> ykarel, ^^
14:12:59 <ralonsoh> any update?
14:13:17 <ralonsoh> is the cirros 0.6.1 image affecting nova-next?
14:13:28 <ykarel> ralonsoh, no, cirros 0.6.1 is not related there
14:13:33 <ralonsoh> ok
14:13:42 <ykarel> I commented what I found, can you please check based on that comment?
14:14:17 <ralonsoh> ok, I'll check the last comment this afternoon (I see my name there)
14:14:24 <ykarel> yeap, Thanks
14:14:39 <ralonsoh> cool, next one
14:14:41 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2007674
14:14:50 <ralonsoh> IMO, this could be an RFE
14:15:01 <ralonsoh> good to have but not a bug itself
14:15:29 <ralonsoh> so if you agree, I'll add the RFE tag
14:16:09 <ykarel> I didn't dig much into why so many connections are opened by the openvswitch-agent,
14:16:16 <ykarel> so are those opened on purpose?
14:16:39 <ralonsoh> we create one per topic
14:16:47 <ralonsoh> ports, SGs, networks, etc.
14:16:49 <lajoskatona> We can make it an RFE, but the first step should be to understand the current situation
14:17:04 <lajoskatona> and what we could cause if we change it
14:17:25 <ralonsoh> agree. To be honest, I don't know if we can make this change with our current RPC implementation
14:17:33 <ralonsoh> but I didn't spend too much time on this
14:18:13 <lajoskatona> +1
14:18:18 <ralonsoh> so the first thing is to tag it as an RFE and then investigate whether that is possible
14:18:28 <ykarel> +1
14:18:32 <ralonsoh> I'll do it after this meeting
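For context on the connection count discussed in this exchange: the OVS agent subscribes to several notification topics, and each subscription gets its own RPC consumer. The sketch below is only an illustration of that pattern, using oslo.messaging directly with made-up topic names and a placeholder endpoint class (it is not the actual neutron agent code); consolidating these consumers is what the RFE would have to investigate.

```python
# Minimal sketch, not the actual neutron-ovs-agent code: the topic names and
# the endpoint class passed in here are illustrative assumptions.
import oslo_messaging
from oslo_config import cfg


class NoopEndpoint(object):
    """Placeholder endpoint; the real agent dispatches port/SG/network updates."""

    def port_update(self, context, **kwargs):
        pass


def start_consumers(topics, host):
    transport = oslo_messaging.get_rpc_transport(cfg.CONF)
    servers = []
    for topic in topics:
        # One Target, and therefore one consumer, per resource topic
        # (ports, security groups, networks, ...) -- this is where the
        # multiple connections observed in the bug come from.
        target = oslo_messaging.Target(topic=topic, server=host)
        server = oslo_messaging.get_rpc_server(
            transport, target, endpoints=[NoopEndpoint()],
            executor='threading')
        server.start()
        servers.append(server)
    return servers
```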
14:19:19 <ralonsoh> last one is
14:19:21 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2007938
14:19:31 <ralonsoh> rubasov, ^^ can you check that?
14:19:43 <ralonsoh> did you check this feature with multiple DHCP agents?
14:19:49 <rubasov> yep
14:20:04 <rubasov> probably not
14:20:06 <ralonsoh> because when I have 2, the second one can't spawn the metadata service
14:21:09 <ralonsoh> when using the metadata service + IPv6 from the DHCP agent, we'll probably need to check that and spawn only one (despite the DHCP agent redundancy)
14:21:23 <rubasov> I'll put this on my todo list
14:21:25 <ralonsoh> this is because we can only have one ipv6 metadata IP
14:21:29 <ralonsoh> rubasov, thanks a lot!
14:21:42 <frickler> how is that different with v4?
14:22:01 <rubasov> I have a vague memory that we already had a similar bug, I will check more thoroughly
14:22:14 <ralonsoh> I would need to check that, yes, I remember something related
14:22:29 <ralonsoh> I just reproduced this issue and reported the cause
14:23:14 <rubasov> thanks
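A rough illustration of the cause described in this exchange: the DHCP agent serves IPv6 metadata via a single well-known link-local address (fe80::a9fe:a9fe), so only one agent per network can serve it. The guard below is a hypothetical sketch of the "spawn only one" check mentioned above; the function name and the source of already_bound_addresses are assumptions, not existing Neutron code.

```python
# Hypothetical sketch of the "spawn only one" check discussed above; the
# helper name and the coordination source are assumptions, not Neutron code.
import ipaddress

# Well-known IPv6 metadata address served by the DHCP agent.
IPV6_METADATA_ADDR = ipaddress.ip_address('fe80::a9fe:a9fe')


def should_spawn_metadata_proxy(already_bound_addresses):
    """Spawn the IPv6 metadata proxy only if no other agent serves it yet.

    `already_bound_addresses` would have to come from some coordination
    mechanism (for example, inspecting existing DHCP ports on the network);
    how to obtain it is exactly what would need to be designed for this bug.
    """
    return IPV6_METADATA_ADDR not in already_bound_addresses
```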
14:23:38 <frickler> last week we talked about https://bugs.launchpad.net/neutron/+bug/2006145, I responded on the bug, but I'm still not sure whether this is really a bug or an RFE
14:24:29 <ralonsoh> I think the goal is, as in other agents, to keep the agent functionality without RPC communication
14:24:38 <ralonsoh> but I understand the current implementation
14:25:02 <ralonsoh> to be honest, not having the RPC link means a broken env, so the current behaviour is valid
14:25:08 <ralonsoh> I would consider that an RFE
14:25:49 <frickler> ok, I'd be fine with that. The question either way would be who is willing to work on it
14:26:39 <ralonsoh> I'll check if ltomasbo can (or maybe he can point me to someone else)
14:27:33 <frickler> if not, we could also just keep it as wishlist bug
14:27:36 <lajoskatona> can't we ask the reporter if they can work on this?
14:27:46 <ralonsoh> for sure, I'll do it too
14:27:59 <ralonsoh> new Neutron policy: you report, you fix!!
14:28:02 <lajoskatona> agree, if nobody works on it the RFE is still valid, it just stays open
14:28:14 <slaweq> :D
14:28:39 <frickler> note to self: stop reporting bugs at once ;)
14:28:58 <lajoskatona> we can be honest that currently nobody has free time for it; if you need it, try to fix it :-)
14:29:15 <ralonsoh> exactly
14:29:26 <ralonsoh> ok, any other bug to be discussed?
14:30:20 <ralonsoh> this week bcafarel is the bug deputy, next week it will be lajoskatona
14:30:44 <ralonsoh> let's move to the next topic
14:30:51 <ralonsoh> #topic community_goals
14:30:57 <ralonsoh> 1) Consistent and Secure Default RBAC
14:31:03 <lajoskatona> ralonsoh: ack
14:31:14 <ralonsoh> there are some backports related to RBAC
14:31:50 <ralonsoh> and a new one reported this morning: https://review.opendev.org/c/openstack/neutron/+/874398
14:32:02 <ralonsoh> slaweq, ^ do we need to backport it to previous releases?
14:32:51 <slaweq> that's good question
14:33:09 <slaweq> I didn't propose it to other branches basically because it changes default policies
14:33:19 <slaweq> I don't even know if we should backport it to Zed
14:33:35 <slaweq> especially since the new RBAC defaults are still documented as "experimental" in Neutron
14:34:29 <ralonsoh> so this patch should only be in master (actually in Bobcat, which is when we'll enable sRBAC by default)
14:34:33 <ralonsoh> right?
14:35:23 <slaweq> I proposed it to Zed as Zed may have all the other fixes too, so it should work there
14:35:24 <slaweq> but I'm also fine with abandoning that backport
14:35:35 <slaweq> I just want to hear others' opinions
14:35:58 <slaweq> I can also backport it to older branches if that's ok for everyone
14:36:11 <ralonsoh> if sRBAC is not available in Zed, that will affect the current policy behaviour
14:36:27 <ralonsoh> so we should not backport it
14:36:41 <frickler> I think mnaser had some interest in getting that to work on zed?
14:36:48 <lajoskatona> if I understand correctly, I agree
14:37:02 <slaweq> yes, I can try to change that backport to not delete the old default policy but maybe modify just the new defaults part
14:37:07 <lajoskatona> I mean, if that is not supported on older branches...
14:37:37 <ralonsoh> frickler, I'm thinking of other deployments that won't use sRBAC
14:37:44 <ralonsoh> because this is not supported in Zed
14:37:54 <slaweq> starting from 2023.1 we have a tempest job which tests the new defaults, but in Zed we didn't have it
14:38:04 <ralonsoh> that's the point
14:38:09 <slaweq> we could also propose that job to Zed and backport everything there
14:38:26 <slaweq> but I'm not sure if we should go to older branches too
14:38:41 <ralonsoh> this is a community effort
14:38:46 <frickler> maybe leaving zed backports to interested downstreams would be fine, too
14:38:47 <ralonsoh> sRBAC is not supported in Zed
14:39:47 <lajoskatona> frickler: +1
14:39:58 <ralonsoh> slaweq, I would check that with gmann, but he +1'd this patch
14:40:21 <slaweq> ok, I will discuss it with gmann
14:40:37 <ralonsoh> thanks, please ping us with the conclusion
14:41:06 <slaweq> sure
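The backport concern in this discussion comes down to changing registered policy defaults. As a point of reference, the usual oslo.policy pattern for the secure-RBAC migration is to register the new default together with a DeprecatedRule, so deployments keep the old behaviour until they opt in via enforce_new_defaults. The sketch below uses a placeholder rule name, check strings and release marker; it is not the actual patch under discussion.

```python
# Hedged sketch of the oslo.policy pattern referenced above; the rule name,
# check strings and release markers are placeholders, not the real patch.
from oslo_policy import policy

deprecated_get_example = policy.DeprecatedRule(
    name='get_example_resource',
    check_str='rule:admin_or_owner',
    deprecated_reason='Default changed as part of the secure RBAC work.',
    deprecated_since='2023.1',
)

rules = [
    policy.DocumentedRuleDefault(
        name='get_example_resource',
        # New secure-RBAC style default; the deprecated check above is still
        # honoured until [oslo_policy] enforce_new_defaults is enabled.
        check_str='role:reader and project_id:%(project_id)s',
        scope_types=['project'],
        description='Get an example resource.',
        operations=[{'method': 'GET', 'path': '/example_resources'}],
        deprecated_rule=deprecated_get_example,
    ),
]


def list_rules():
    return rules
```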
14:41:23 <ralonsoh> and this is all I have in this topic, so let's move to the next one
14:41:30 <ralonsoh> #topic on_demand
14:41:37 <ralonsoh> I have nothing in the agenda
14:41:46 <ralonsoh> please present your topics
14:41:52 <slaweq> just one thing/reminder
14:42:06 <slaweq> I saw an email that this week is the deadline for cycle highlights
14:42:19 <ralonsoh> yes, I'm doing it now
14:42:25 <slaweq> ahh, great
14:42:26 <slaweq> thx
14:42:30 <slaweq> so that's all from me
14:42:33 <slaweq> :)
14:42:38 <ralonsoh> I'll ping all of you with the patch to add any comment
14:42:45 <slaweq> ++
14:42:55 <lajoskatona> thanks
14:43:49 <ralonsoh> remember the CI meeting is in 15 mins (today on video)
14:43:55 <ralonsoh> thank you all for attending
14:44:03 <slaweq> yep
14:44:06 <ralonsoh> #endmeeting