14:01:04 <ralonsoh> #startmeeting networking
14:01:04 <opendevmeet> Meeting started Tue Apr 25 14:01:04 2023 UTC and is due to finish in 60 minutes.  The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:04 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:04 <opendevmeet> The meeting name has been set to 'networking'
14:01:05 <ralonsoh> hello
14:01:07 <sahid> o/
14:01:08 <ykarel> o/
14:01:10 <obondarev> hi
14:02:20 <ralonsoh> lajoskatona told me that, most probably, he won't attend
14:02:32 <ralonsoh> ok, let's start
14:02:38 <ralonsoh> #topic announcements
14:02:48 <ralonsoh> #link https://releases.openstack.org/bobcat/schedule.html
14:03:07 <rubasov> o/
14:03:19 <lajoskatona> o/
14:03:19 <ralonsoh> next week is the Bobcat Cycle-Trailing Release Deadline
14:03:22 <ralonsoh> https://releases.openstack.org/bobcat/schedule.html#b-cycle-trail
14:03:41 <ralonsoh> and the PTG is closer
14:03:44 <ralonsoh> #link https://etherpad.opendev.org/p/neutron-vancouver-2023
14:03:55 <ralonsoh> slaweq presented two forum sessions (still under review)
14:04:04 <ralonsoh> 1) OpenStack Neutron onboarding
14:04:09 <slaweq> o/
14:04:11 <ralonsoh> 2) Neutron meet and greet: Operators feedback session
14:04:20 <bcafarel> late o/
14:04:24 <ralonsoh> thanks slaweq for the document and presenting them
14:04:28 <lajoskatona> Thanks for proposing
14:04:49 <ralonsoh> and as usual, please check https://openinfra.dev/live/#all-episodes
14:05:20 <ralonsoh> any other announcement?
14:05:43 <ralonsoh> ok, let's move on
14:05:46 <ralonsoh> #topic bugs
14:05:57 <ralonsoh> last week's report is from lucasgomes
14:06:03 <ralonsoh> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-April/033448.html
14:06:22 <ralonsoh> there are some bugs not assigned yet
14:06:30 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2016413
14:06:41 <ralonsoh> a low hanging fruit one
14:07:28 <ralonsoh> ok, next one
14:07:34 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2016504
14:07:39 <ralonsoh> Support specify fixed_ip_address for DHCP or Metadata port
14:08:24 <ralonsoh> for ovn, when you create a network, the metadata port is created at the same time
14:08:36 <ralonsoh> and then, when the first subnet is created, the IP is assigned
14:08:53 <ralonsoh> the point is that you can then change the assigned IP
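[Editor's note] The ordering ralonsoh describes (metadata port created with the network, IP assigned on the first subnet, changeable afterwards) can be modelled with a toy sketch. This is not neutron code; all names here are hypothetical, the real behaviour lives in the OVN ML2 driver:

```python
# Toy model of the OVN metadata-port lifecycle described above.
# Hypothetical names; not the actual neutron/OVN implementation.

class Network:
    def __init__(self, name):
        self.name = name
        # the OVN driver creates the metadata port together with the
        # network; at this point it has no IP
        self.metadata_port = {"device_owner": "network:distributed",
                              "fixed_ips": []}
        self.subnets = []

    def create_subnet(self, cidr, metadata_ip):
        # when the first subnet is created, an IP from it is assigned
        # to the metadata port
        self.subnets.append(cidr)
        if len(self.subnets) == 1:
            self.metadata_port["fixed_ips"].append(metadata_ip)

    def update_metadata_port_ip(self, new_ip):
        # the point raised in the meeting: the assigned IP can then be
        # changed with a normal port update (what the RFE asks to make
        # explicit at creation time)
        self.metadata_port["fixed_ips"] = [new_ip]

net = Network("private")
assert net.metadata_port["fixed_ips"] == []   # created with the network, no IP yet
net.create_subnet("10.0.0.0/24", "10.0.0.2")  # first subnet assigns an IP
net.update_metadata_port_ip("10.0.0.10")      # RFE: pick a specific address
print(net.metadata_port["fixed_ips"])
```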
14:08:55 <lajoskatona> but this sounds more like an RFE, am I wrong?
14:08:57 <ralonsoh> yes
14:09:00 <ralonsoh> it is an RFE
14:09:06 <lajoskatona> ahh, ok
14:09:49 <ralonsoh> I'll comment in the bug
14:10:06 <ralonsoh> if Liu makes a good proposal, we can discuss it in the drivers meeting
14:10:31 <ralonsoh> feel free to comment on the bug
14:10:38 <ralonsoh> next one
14:10:40 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2016960
14:10:45 <ralonsoh> [RFE] neutron fwaas support l2 firewall for ovn driver
14:10:50 <ralonsoh> same as before, it is an RFE
14:11:12 <ralonsoh> if I'm not wrong, what it expects is to have white lists, which are not present in the in-tree driver
14:11:33 <ralonsoh> sounds good but to be approved needs an assignee and to be presented in the drivers meeting
14:11:39 <ralonsoh> I'll comment in the lp bug
14:11:51 <ralonsoh> next one
14:11:55 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2017023
14:12:00 <ralonsoh> lajoskatona, is ^^ still valid?
14:12:23 <lajoskatona> yes, I discussed it with kopecmartin and on #nova channel sean
14:12:52 <lajoskatona> I added the conclusion from those discussions so far as comment #2
14:12:56 <ralonsoh> but we need first to bump the min version of nova api, right?
14:12:59 <lajoskatona> but as I see gmann also commented
14:13:16 <lajoskatona> No, that is not necessary
14:13:20 <ralonsoh> ah ok, I see
14:13:46 <lajoskatona> what is lightweight and good for us is that we stop executing those tests for Neutron, and keep the test coverage only in Nova
14:14:15 <lajoskatona> the same is true for glance for example as there's a similar legacy API in nova for images and tempest tests also
14:14:33 <ralonsoh> so do we need to remove these tests from our jobs?
14:14:35 <lajoskatona> but that is a separate story and not covered in this bug, just as interesting story
14:15:17 <lajoskatona> yes, as I understand, but I have to double-check with QA, as they wanted to have it as a topic in their meeting
14:15:24 <lajoskatona> have to check with them again
14:15:53 <ralonsoh> perfect, can I assign the neutron bug to you? just to track it
14:16:00 <lajoskatona> sure
14:16:04 <ralonsoh> thanks a lot
14:16:21 <ralonsoh> ok, last one
14:16:26 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2017131
14:16:34 <ralonsoh> ykarel, fixed the issue with loki jobs
14:16:45 <ralonsoh> now both (ovs and ovn) are working again
14:16:54 <ralonsoh> now the problem is in the OVN one, when creating the router
14:16:55 <ykarel> Thanks ralonsoh
14:17:16 <ralonsoh> I'm testing several options: https://review.opendev.org/q/topic:test_loki
14:17:21 <ralonsoh> I'll assign this bug to myself
14:17:36 <ralonsoh> in any case, this is not critical
14:18:02 <ralonsoh> and now the CI "party" we had, a short report
14:18:21 <ralonsoh> after these 2 reverts
14:18:22 <ralonsoh> Revert "Move to python3.9 as minimal python version": https://review.opendev.org/c/openstack/neutron/+/881430
14:18:29 <ralonsoh> Revert "update constraint for tooz to new release 4.0.0": https://review.opendev.org/c/openstack/requirements/+/881329
14:18:39 <ralonsoh> we were able to execute py38 jobs again
14:18:55 <ralonsoh> this is because we are still running on Focal, not on Jammy
14:19:09 <ralonsoh> ykarel, is testing a Nova patch https://review.opendev.org/c/openstack/nova/+/868419
14:19:18 <ralonsoh> that could fix the nested virt nodes issue
14:19:35 <ralonsoh> because of this, during the last release we reverted the migration to Jammy nodes
14:19:50 <ralonsoh> and, of course, we have new problems in the CI
14:20:02 <ralonsoh> that are being addressed by
14:20:03 <ralonsoh> Revert pysaml2 to 7.3.1: https://review.opendev.org/c/openstack/requirements/+/881466
14:20:08 <ralonsoh> and
14:20:09 <ralonsoh> Revert "Drop check-uc jobs for py38": https://review.opendev.org/c/openstack/requirements/+/881433
14:20:41 <ykarel> yes will be discussing that nova patch in nova meeting today
14:21:04 <ralonsoh> right, we need to keep investigating this issue to be able to migrate to Jammy asap
14:21:09 <lajoskatona> thanks for the summary and for pushing this topic
14:21:33 <mlavalle> so hold re-checks until further notice?
14:21:39 <ralonsoh> yes, exactly
14:21:49 <mlavalle> ack
14:21:55 <ralonsoh> check upper patches in requirements
14:21:58 <mlavalle> thanks for the update
14:22:11 <ralonsoh> and that's all
14:22:12 <ralonsoh> ah
14:22:15 <ralonsoh> This week jlibosva is the deputy, next week will be obondarev
14:22:25 <obondarev> ++
14:22:39 <ralonsoh> I'll ping Jakub after this meeting
14:22:57 <ralonsoh> let's jump to the next topic
14:23:06 <ralonsoh> #topic community_goals
14:23:17 <ralonsoh> there are no new specs nor ryu patches
14:23:23 <ralonsoh> so no need to stop there
14:23:33 <ralonsoh> 1) Consistent and Secure Default RBAC
14:23:42 <ralonsoh> there is an active list of patches
14:23:50 <ralonsoh> https://review.opendev.org/c/openstack/neutron/+/879827
14:23:54 <ralonsoh> ^^ this one is the last one
14:24:03 <ralonsoh> to enable sRBAC in master
14:24:05 <slaweq> and the most important one
14:24:06 <ralonsoh> by default
14:24:13 <ralonsoh> exactly hehehe
14:24:20 <slaweq> it's very big but changes are mostly in tests
14:24:37 <ralonsoh> slaweq, if I'm not wrong, this is the summary now, right?
14:25:19 <slaweq> yes, I had to change many tests so they pass the proper context when doing requests to neutron
14:25:37 <slaweq> the logic of the tests was not changed IIRC
14:25:44 <slaweq> that's all from me about it
14:25:52 <ralonsoh> just a few tests...
14:26:05 <slaweq> ahh, and once this is merged we can say that we completed phase 1 of the S-RBAC goal
14:26:22 <slaweq> next one is to introduce service-to-service role and identify some APIs for such role
14:26:26 <slaweq> but that's next step
14:26:48 <ralonsoh> that second one is more obscure, we need to identify these calls
14:26:50 <lajoskatona> I forgot from last time, is that task for this cycle (bobcat)?
14:26:56 <ralonsoh> the first one
14:27:04 <lajoskatona> ack
14:27:17 <lajoskatona> so for now the goal is to identify only those calls
14:27:25 <ralonsoh> yeah
14:27:25 <slaweq> lajoskatona actually the initial goal was to finish phase 1 in the Antelope cycle
14:27:29 <slaweq> but we didn't make it
14:27:34 <slaweq> so we are catching up
14:27:36 <lajoskatona> ok
14:27:58 <lajoskatona> thanks for refreshing the history :-)
14:28:42 <ralonsoh> slaweq, thanks for working on this
14:28:58 <ralonsoh> ok, next one
14:29:01 <ralonsoh> 2) Neutron client deprecation
14:29:07 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/1999774
14:29:18 <lajoskatona> and the usual etherpad: https://etherpad.opendev.org/p/python-neutronclient_deprecation
14:29:38 <lajoskatona> I pushed a patch for fwaas: https://review.opendev.org/c/openstack/python-neutronclient/+/880629
14:29:55 <lajoskatona> but I have to test a few things on it, so it is WIP
14:30:08 <ralonsoh> I'll add it to the list in the agenda
14:30:43 <lajoskatona> thanks, the octavia patch is still open, but ready for review, from Tom Weninger: https://review.opendev.org/c/openstack/octavia/+/866327
14:31:17 <ralonsoh> ah yes
14:31:27 <slaweq> qq about neutronclient
14:31:35 <slaweq> I see that we finally merged https://review.opendev.org/c/openstack/python-neutronclient/+/871711
14:31:53 <slaweq> should we maybe propose new release of neutronclient with that removed? or it's not needed?
14:32:14 <ralonsoh> right, we should push a new release with a new major version, I think
14:32:20 <lajoskatona> +1
14:32:20 <ralonsoh> I'll push this patch today
14:32:23 <lajoskatona> good idea
14:32:26 <slaweq> ++
14:32:47 <ralonsoh> perfect, thanks lajoskatona for working on this
14:33:01 <ralonsoh> ok, let's move to the last topic
14:33:06 <ralonsoh> #topic on_demand
14:33:14 <ralonsoh> ltomasbo, froyo would be interested
14:33:20 <froyo> o/ sure
14:33:22 <ralonsoh> #link https://bugzilla.redhat.com/show_bug.cgi?id=2186758
14:33:41 <ralonsoh> in a nutshell: a non-admin port in a private network
14:34:04 <ralonsoh> the admin creates a FIP and associates the FIP (admin project) to the non-admin port
14:34:24 <ralonsoh> --> the user cannot delete the port because they cannot disassociate the FIP
14:34:32 <ralonsoh> (btw, I'll create a LP bug for this)
14:34:34 <ralonsoh> question
14:34:45 <ralonsoh> should we allow this disassociation in any case?
14:35:04 <ralonsoh> in order to  allow the non-admin user to delete his/her port in any case
14:36:38 <ralonsoh> (my opinion: we should allow the disassociation during the port deletion)
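[Editor's note] ralonsoh's suggestion (debated below, and ultimately not the option chosen) would amount to disassociating the FIP with an elevated context during port deletion. A rough sketch with hypothetical names, not neutron code; the dict-based `elevated` stands in for `context.elevated()`:

```python
# Toy model of "option 2": on port delete, disassociate any attached
# FIP using an elevated (admin) context, so the user-owned port can
# always be removed. Hypothetical names; not neutron code.

class Forbidden(Exception):
    """Stand-in for a 403-style policy rejection."""

def update_fip(context, fip, port_id):
    # FIP update is guarded by ownership unless the context is admin
    if not context["is_admin"] and fip["project_id"] != context["project_id"]:
        raise Forbidden("cannot update a FIP of another project")
    fip["port_id"] = port_id
    fip["status"] = "DOWN" if port_id is None else "ACTIVE"

def delete_port(context, port, fips):
    for fip in fips:
        if fip["port_id"] == port["id"]:
            # context.elevated() analogue: bypass the ownership check
            elevated = dict(context, is_admin=True)
            update_fip(elevated, fip, None)
    port["deleted"] = True

user_ctx = {"project_id": "tenant-a", "is_admin": False}
port = {"id": "p1", "project_id": "tenant-a", "deleted": False}
fip = {"project_id": "admin", "port_id": "p1", "status": "ACTIVE"}

# a direct FIP update by the user is rejected (the reported failure)
try:
    update_fip(user_ctx, fip, None)
    raised = False
except Forbidden:
    raised = True
assert raised

# but deleting the port disassociates the FIP via the elevated context
delete_port(user_ctx, port, [fip])
print(port["deleted"], fip["port_id"], fip["status"])
```

As slaweq notes later in the discussion, hardcoding the elevation lets a regular user indirectly modify another project's resource, which is why the meeting settles on a 4xx error plus assigning the FIP to the tenant project instead.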
14:36:43 <lajoskatona> if it is only to allow to delete the one port for the FIP, why not?
14:37:09 <ralonsoh> because that was an admin action, the association
14:37:17 <ralonsoh> but, IMO, this port belongs to the user
14:37:45 <lajoskatona> ack
14:38:31 <froyo> yeah, the main options are:
14:38:33 <froyo> 1) it was an admin action, so it's an admin issue...
14:38:33 <froyo> 2) the port belongs to the user, so the user can delete it (elevating the context to disassociate the FIP)
14:39:35 <ralonsoh> ok, I think froyo you can propose 2) in a patch and wait for feedback. IMO, the association is not covered by the policy rules
14:39:45 <ralonsoh> so it should be undone during the deletion
14:39:55 <ralonsoh> slaweq, any feedback?
14:40:20 <slaweq> isn't the association done as a FIP update?
14:40:23 <slaweq> from policy PoV
14:40:41 <ralonsoh> hmmm let me check
14:41:21 <ralonsoh> yes, the association is done during the FIP update
14:41:25 <slaweq> I don't think there is any special API endpoint for that in https://docs.openstack.org/api-ref/network/v2/index.html
14:41:33 <ralonsoh> no, there isn't
14:42:03 <slaweq> ok, so disassociation of FIP from port is actually removing port_id from fip's attributes, right?
14:42:11 <ralonsoh> yes
14:42:13 <froyo> yes
14:42:24 <slaweq> so who is the owner of the FIP? is it the admin in your case?
14:42:28 <ralonsoh> yes
14:42:33 <froyo> yes
14:42:54 <slaweq> so by elevating the context we would allow a regular user to update a FIP which belongs to another project
14:43:08 <slaweq> I know it's kind of "special" case but I'm not sure we should do that
14:43:19 <frickler> imo 1) is a better option then, just make a proper 403 error instead of 500
14:43:22 <froyo> just on delete_port (to remove port_id and put FIP in DOWN state)
14:44:00 <slaweq> froyo but still, admin user probably did it for some reason
14:44:09 <froyo> slaweq, sure!
14:46:38 <ralonsoh> ok, let's change the scope and the output of the server then
14:46:38 <ralonsoh> but the port deletion can't be done
14:46:38 <frickler> for the use case, is it mandatory that the FIP belongs to the admin? they could also create the FIP in the tenant project?
14:46:38 <slaweq> maybe better option would then be to introduce some api rule like "PORT_OWNER" and allow update FIPs for port owners
14:46:38 <ralonsoh> this is a particular case
14:46:38 <ralonsoh> that was proposed yes
14:46:38 <slaweq> similar what we have for some port related apis where it can be done by "network owner"
14:46:38 <ralonsoh> the FIP can be assigned to the project
14:46:38 <ralonsoh> that is easier than creating a new particular rule type
14:46:58 <ralonsoh> ok, let's propose the error change to 4xx
14:47:15 <ralonsoh> and you'll probably need to reconsider your deployment and how the FIP is created
14:47:20 <froyo> when the FIP is assigned to the project, will the tenant user be able to manage it completely?
14:47:25 <ralonsoh> yes
14:47:40 <froyo> good, so move the responsibility to the admin ... good for me!
14:47:53 <ralonsoh> cool, thanks!
14:48:02 <ralonsoh> please, open a lp bug for this
14:48:03 <slaweq> I would just try to avoid doing hardcoded context.elevated() as much as possible
14:48:04 <froyo> thanks for comments!
14:48:11 <slaweq> and use API rules as much as we can
14:48:21 <slaweq> it's IMO better for S-RBAC policies
14:48:25 <ralonsoh> right
14:48:37 <ralonsoh> ok, let's move on
14:48:41 <ralonsoh> #link https://review.opendev.org/c/openstack/neutron/+/875809
14:48:52 <ralonsoh> qq: should we allow to backport this one?
14:48:59 <ralonsoh> this is implicitly an API change
14:49:16 <ralonsoh> but on the other hand, this limitation is needed
14:49:37 <ralonsoh> because this is a bug fix, I would accept it, but waiting for opinions
14:50:11 <slaweq> I think we would need opinion from stable-maint team on this
14:50:19 <lajoskatona> +1
14:50:20 <slaweq> maybe elodilles can take a look
14:50:32 <ralonsoh> I'll ping him
14:50:34 <lajoskatona> quite a corner case as ralonsoh said
14:50:34 <mlavalle> yeah, we've sought their opinion before
14:50:44 <frickler> is there an associated api-ref change?
14:50:49 <ralonsoh> no
14:50:56 <ralonsoh> this is an implicit API change
14:51:03 <ralonsoh> because that introduces a limitation
14:51:17 <ralonsoh> that is implicit by the standard, btw
14:51:28 <frickler> but api-ref should specify conditions when api calls fail?
14:51:36 <frickler> or at least it could
14:52:17 <ralonsoh> I don't know if we report that correctly in the API
14:52:28 <ralonsoh> the neutron server logs that correctly
14:52:35 <frickler> that would IMO improve the feasibility of a backport
14:52:48 <ralonsoh> ok, I'll check that first
14:53:10 <ralonsoh> and that's all I have in the agenda
14:53:13 <ralonsoh> anything else??
14:53:42 <ralonsoh> so thank you very much and see you in 6 mins in the CI meeting (IRC type this week)
14:53:45 <ralonsoh> #endmeeting