14:00:04 <slaweq> #startmeeting networking
14:00:04 <opendevmeet> Meeting started Tue Aug 24 14:00:04 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:04 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:04 <opendevmeet> The meeting name has been set to 'networking'
14:00:13 <mlavalle> o/
14:00:20 <ralonsoh> hi
14:00:22 <slaweq> hi
14:00:23 <ganso> hi
14:00:32 <lajoskatona> Hi
14:00:51 <bcafarel> o/
14:01:10 <obondarev> hi
14:01:10 <amotoki> o/
14:01:10 <slaweq> let's wait 2 more minutes for others
14:01:13 <slaweq> and we will start
14:02:15 <slaweq> ok, let's go
14:02:19 <slaweq> #topic Announcements
14:02:43 <slaweq> first of all Xena cycle calendar https://releases.openstack.org/xena/schedule.html
14:02:58 <slaweq> last week we released final releases for non-client libraries
14:03:15 <slaweq> but I think I made some mistake when I was checking calendar
14:03:16 <thomasb06> hi
14:03:21 <manubk> hi
14:03:47 <slaweq> and the deadline for non-client libs is actually this week :)
14:04:04 <slaweq> I really don't know how I made that mistake
14:04:07 <lajoskatona> there was  a mail about it: http://lists.openstack.org/pipermail/openstack-discuss/2021-August/024291.html
14:04:23 <lajoskatona> so it was an openstack global miscommunication :-)
14:04:41 <slaweq> lajoskatona thx
14:04:49 <slaweq> I missed that email as on Friday I was off
14:04:59 <slaweq> so it's not that bad with me at least :)
14:05:27 <mlavalle> lol
14:05:30 <slaweq> so, if You have any last minute thing which You would like to include in Xena, e.g. in neutron-lib, please ping me - we can make another release if needed
14:05:52 <slaweq> otherwise we are good with those libs already :)
14:06:21 <slaweq> it's better to make a mistake that way than the other way around :D
14:06:31 <lajoskatona> +1
14:07:09 <slaweq> next week we will have Xena-3 milestone and final release for client libraries
14:07:29 <slaweq> so we will need to cut python-neutronclient but I don't think there is anything urgent waiting for review there
14:07:47 <slaweq> if there is something important for You, please ping me about it on irc or by email
14:08:08 <slaweq> ok, moving on to the next announcement
14:08:19 <slaweq> TC & PTL Nominations ends today: https://governance.openstack.org/election/
14:08:47 <slaweq> we have great candidate already - thx lajoskatona for stepping in :)
14:09:00 <ralonsoh> +1
14:09:17 <mlavalle> +1
14:09:21 <obondarev> +1
14:09:23 <slaweq> so probably next week we will have a new PTL already :)
14:09:29 <lajoskatona> hope I can't destroy years of work in half a year....
14:09:52 <ralonsoh> I'll help you to do it (to destroy, I mean)
14:09:56 <slaweq> lajoskatona I'm sure You will do great :)
14:10:18 <slaweq> ralonsoh: LOL
14:10:18 <bcafarel> I'm not worried either :)
14:10:23 <obondarev> lajoskatona: didn't see that in your goals! :D
14:10:33 <amotoki> let's break our bad things and improve them :)
14:11:46 <lajoskatona> +1
14:11:51 <slaweq> next one
14:12:01 <slaweq> October PTG
14:12:07 <slaweq> etherpad https://etherpad.opendev.org/p/neutron-yoga-ptg
14:12:21 <slaweq> Please add Your topics there
14:12:43 <slaweq> Operator pain points etherpad https://etherpad.opendev.org/p/pain-point-elimination
14:12:55 <slaweq> please add Yours if You have any
14:13:35 <slaweq> and those are all the announcements/reminders which I had for You
14:13:48 <slaweq> anything else You want to add in that section?
14:15:04 <slaweq> if not, let's move on to the next topic
14:15:05 <slaweq> #topic Blueprints
14:15:25 <slaweq> for Xena-3 we still have those BPs https://bugs.launchpad.net/neutron/+milestone/xena-3
14:15:39 <slaweq> I don't think we will be able to complete any of them in this cycle really
14:16:08 <slaweq> for BGP related stuff I updated status to "good progress" today as all of them have at least api-ref merged already
14:16:24 <slaweq> but there's no neutron implementation for any of them now
14:16:37 <slaweq> next week I will probably move them to neutron-next
14:16:55 <slaweq> as I don't think we need to schedule any of them for the Xena RC milestone
14:17:38 <slaweq> are You ok with that?
14:18:04 <lajoskatona> for me ok
14:18:25 <ralonsoh> +1
14:19:09 <mlavalle> +1
14:19:50 <slaweq> thx
14:20:01 <slaweq> so, I think we can move on
14:20:06 <slaweq> to the next topic
14:20:13 <slaweq> #topic Bugs
14:20:24 <slaweq> I was bug deputy last week. Report http://lists.openstack.org/pipermail/openstack-discuss/2021-August/024308.html
14:20:39 <slaweq> there are a couple of bugs which I wanted to raise today
14:20:54 <slaweq> https://bugs.launchpad.net/neutron/+bug/1940071 - that is vpnaas issue
14:21:02 <slaweq> seems like some memory leak in vpnaas
14:21:34 <slaweq> maybe someone who is more familiar with vpnaas could take a look
14:22:14 <ralonsoh> I wish this could help: https://review.opendev.org/c/openstack/neutron/+/803034
14:22:41 <slaweq> ralonsoh yes, hopefully it will :)
14:23:57 <slaweq> next one
14:24:13 <slaweq> https://bugs.launchpad.net/neutron/+bug/1940425 - that is a gate failure, not very frequent but I think it's worth taking a look
14:25:00 <ralonsoh> could be related to a bug we have in RH
14:25:07 <ralonsoh> this is related to the RPC timeout
14:25:10 <ralonsoh> I'll check the logs
14:25:25 <slaweq> ralonsoh thx
14:25:36 <ralonsoh> (in a nutshell, the parent port activation waits until all subports are bound)
14:26:10 <slaweq> next we have 2 low-hanging-fruits (IMO):
14:26:12 <slaweq> https://bugs.launchpad.net/neutron/+bug/1940074
14:26:16 <slaweq> https://bugs.launchpad.net/neutron/+bug/1940073
14:26:45 <slaweq> for https://bugs.launchpad.net/neutron/+bug/1940074 there is actually patch already
14:27:06 <slaweq> so only https://bugs.launchpad.net/neutron/+bug/1940073 is still free to take :)
14:27:56 <slaweq> and the last one: https://bugs.launchpad.net/neutron/+bug/1940086 - that is api-ref thing
14:28:09 <slaweq> maybe someone wants to propose update there
14:28:31 <slaweq> all other bugs from last week are already assigned to someone
14:29:24 <slaweq> any other bugs You want to discuss today?
14:30:51 <slaweq> ok, I guess that this means "no"
14:31:39 <slaweq> this week bug deputy is hongbin
14:31:47 <slaweq> and I already asked him last week about it
14:31:54 <slaweq> he confirmed to me that he will do it
14:32:05 <slaweq> next week will be haleyb
14:32:21 <slaweq> I will ask him if he will be able to do it
14:32:42 <slaweq> and that's basically all that I have for today
14:32:52 <slaweq> #topic On Demand Agenda
14:33:02 <slaweq> do You have any other topics to discuss today?
14:33:14 <ganso> I do
14:33:27 <ganso> sorry I updated the agenda after the meeting had started
14:33:28 <slaweq> ganso go on then :)
14:33:32 <ganso> thanks
14:33:38 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: Replace "tenant_id" with "project_id" in Quota engine  https://review.opendev.org/c/openstack/neutron/+/805849
14:34:03 <ganso> I'm interested in implementing pagination across some pages in horizon, main network list
14:34:18 <ganso> but also routers, SGs, FIPs
14:34:47 <ganso> one thing that drew my attention about network list is that the horizon network list is different from the CLI list
14:34:50 <ganso> because of this: https://github.com/openstack/horizon/blob/1800750804502adf9ff31366daa987aeb9acba31/openstack_dashboard/api/neutron.py#L1075
14:35:11 <ganso> basically CLI list lists everything, even across tenants, if the user has those privileges
14:35:26 <ganso> the horizon one doesn't, it filters by tenants, and by doing that it causes 2 things
14:35:56 <ganso> 1) the initial query filters out shared and external (especially when those are not created by the tenant listing the networks)
14:36:39 <ganso> 2) It makes it very complicated (perhaps impossible) to paginate this consistently, because what it does today is perform separate queries to integrate the shared and external networks into the list
14:37:07 <ganso> since we don't know how many external / shared ones there are beforehand, we would need to do very hacky things to paginate
14:37:33 <ganso> so, I would like to ask: why do we need horizon to be different than the CLI? can't we make them behave similarly and get rid of this tenant filter?
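[Editor's note: the pagination problem ganso describes can be sketched as follows. This is an illustration with made-up data and function names, not Horizon's or Neutron's actual code: a marker/limit-paginated, tenant-filtered list merged with a separately queried set of shared networks no longer respects the page limit, and the marker for the next page no longer matches what the user saw.]

```python
# Toy model of the Horizon-style flow: paginate over tenant-owned
# networks, then merge in shared networks from a separate query.
# All identifiers and data below are invented for this example.

def list_owned_networks(networks, tenant_id, marker=None, limit=2):
    """Marker-based pagination over tenant-owned networks only,
    mimicking a tenant-filtered API list call."""
    owned = [n for n in networks if n["tenant_id"] == tenant_id]
    start = 0
    if marker is not None:
        start = next(i for i, n in enumerate(owned)
                     if n["id"] == marker) + 1
    return owned[start:start + limit]

networks = [
    {"id": "n1", "tenant_id": "t1", "shared": False},
    {"id": "n2", "tenant_id": "t2", "shared": True},   # shared, other tenant
    {"id": "n3", "tenant_id": "t1", "shared": False},
    {"id": "n4", "tenant_id": "t2", "shared": False},
]

page = list_owned_networks(networks, "t1", limit=2)  # page of owned: n1, n3
shared = [n for n in networks if n["shared"]]        # separate query: n2
merged = page + shared
# merged now holds 3 items even though limit=2, and the marker for the
# next page ("n3") no longer corresponds to the last item the user saw.
```

Because the number of shared/external networks is unknown before the merge, every page can overflow its limit differently, which is the "very hacky things" ganso refers to.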
14:38:00 <amotoki> it is due to the nature of horizon "project" panel. The project panel focuses on operations owned by a target tenant.
14:38:28 <amotoki> "admin" users can list all networks (or resources) from all projects, which is confusing in the "project" panel.
14:38:45 <amotoki> this is the reason I added such hacky thing you explained.
14:38:51 <slaweq> personally from Neutron pov I don't see any reason why it shouldn't be the same
14:39:01 <slaweq> it's more question to the Horizon team IMO
14:39:14 <slaweq> so amotoki is the best one to answer :)
14:39:25 <ganso> amotoki: looking at the other side of this, the neutron CLI is the only one that does this, when compared to nova and cinder. The Horizon way of doing is actually more consistent with how nova and cinder do
14:40:07 <ganso> nova and cinder's CLI does not list resources of other tenants, despite the user being admin, unless you specify "--all-projects"
14:40:17 <amotoki> "CLI" can be reworded as "API" here, as the neutron CLI does not filter anything.
14:40:29 <amotoki> compared to the nova/cinder API, the neutron API behaves differently.
14:40:46 <ganso> yes
14:41:20 <amotoki> in case of nova/cinder, even though an API consumer has "admin"-ness, the nova/cinder API returns resources owned by a target project.
14:41:30 <amotoki> this is the different point from neutron.
14:41:32 <ganso> I see the neutron API behavior as the official one, desired by its community, so I wouldn't say we have to change how the API behaves. Horizon, on the other hand, could do the same as the API instead of doing something different
14:42:48 <opendevreview> Bernard Cafarelli proposed openstack/neutron-tempest-plugin master: Check if advanced image flavor already exists  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/805864
14:42:56 <amotoki> when the nova/cinder/neutron APIs were designed there was no consensus on how an API should work when called with "admin"-ness. this leads to the different behaviors between the nova/cinder and neutron APIs.
14:43:41 <ganso> so, how do we move forward from here? I understand Horizon's premise of being different, but do we all agree with this? is this better than not having pagination? My main motivation is because customers are having timeouts listing networks while loading the network list page
14:43:51 <amotoki> the horizon implementation is just there to handle these differences. Basically the horizon "project" panel assumes the nova/cinder API behaviors, and the logic for the neutron API is there to handle the difference.
14:44:38 <amotoki> ganso: I don't have a clear answer right now.
14:45:00 <amotoki> at least changing the existing API behaviors in nova/cinder/neutron is really confusing.
14:46:35 <obondarev> can we add a parameter to the API that horizon can use to get behavior similar to nova/cinder?
14:46:37 <slaweq> just an idea: maybe we could add some flag in the neutron API, something like "include_shared", so a user could do a call like "network list --tenant-id <tenant> --include-shared True" to get all shared and own networks in one request
14:47:27 <amotoki> it is one possible idea but on the other hand it is not a good thing to change the API behavior based on configuration.
14:47:40 <obondarev> not config
14:47:49 <amotoki> ah, I misunderstood.
14:47:51 <slaweq> it's API parameter
14:47:53 <slaweq> not config
14:47:56 <amotoki> it is about "API parameter"
14:48:01 <slaweq> so horizon could use it
14:48:46 <ganso> alternatively add an "--all-projects" that defaults to True (defaulting to True is weird, but it maintains the current behavior)
14:48:55 <ganso> then horizon would say "--all-projects" false
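[Editor's note: a minimal sketch of the filter semantics being proposed. The parameter name and function are hypothetical, not real Neutron code: defaulting to "all projects" preserves today's admin behavior, while Horizon could opt out to get a nova/cinder-like project-scoped view in a single request.]

```python
# Hypothetical "all_projects" parameter semantics (illustration only).

def list_networks(networks, project_id, all_projects=True):
    if all_projects:
        # current Neutron behavior for admins: everything is returned
        return networks
    # project-scoped view: own networks plus shared/external ones,
    # i.e. what the Horizon "project" panel reconstructs today
    # with several separate queries
    return [n for n in networks
            if n["project_id"] == project_id
            or n.get("shared")
            or n.get("router:external")]

networks = [
    {"id": "n1", "project_id": "t1"},
    {"id": "n2", "project_id": "t2", "shared": True},
    {"id": "n3", "project_id": "t2"},
]

everything = list_networks(networks, "t1")                  # n1, n2, n3
scoped = list_networks(networks, "t1", all_projects=False)  # n1, n2
```

With the filtering done server-side in one call, ordinary marker/limit pagination would apply to the scoped result directly.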
14:49:30 <slaweq> that's also some possibility
14:49:53 <slaweq> for sure we will need RFE for that
14:50:04 <slaweq> and we will need to discuss it carefully in drivers meeting :)
14:50:21 <slaweq> ganso would that be ok to propose such RFE for Neutron?
14:51:29 <slaweq> amotoki and would it be ok for You to work with ganso on that RFE? You are the best guy in our team to help with that :)
14:51:43 <amotoki> slaweq: yes
14:51:52 <ganso> Let me ask something on the other extreme first, as I assume we would all like to avoid making API changes for this. If I am able to implement something hacky that accomplishes pagination despite being inefficient, would that be acceptable?
14:52:21 <slaweq> in Neutron or Horizon You mean?
14:52:24 <ganso> Horizon
14:52:33 <ganso> only touching Horizon
14:52:40 <slaweq> that's question to Horizon team I guess :)
14:53:08 <amotoki> horizon is also studying the support of system-scope and the situation may change, as API requests with a regular user will no longer have admin-ness (after switching to a system-scope token)
14:53:35 <ganso> oh ok. And the question of whether that would be backportable (given that pagination is perhaps a new feature, but addressing the lack of pagination could be seen as a bug) is also up to the Horizon team?
14:54:12 <slaweq> ganso all changes in Horizon are up to Horizon team
14:54:19 <ganso> ok
14:54:37 <slaweq> from us here only amotoki is part of Horizon team also
14:54:38 <amotoki> ganso: yes. it is up to the horizon team but generally speaking it depends on how big the change is. the stable policy is to not break a released deliverable.
14:54:58 <ganso> I will spend some more time on that and see what I can come up with, now knowing that the alternative route is through adding parameters to Neutron API
14:55:30 <slaweq> ganso that's a possibility which we may explore more for sure
14:55:52 <amotoki> ganso: thanks for raising this. I am glad to help
14:56:09 <ganso> ok, thanks! that's all I had, I will keep in touch with amotoki and slaweq. =)
14:56:26 <slaweq> thx ganso
14:56:41 <slaweq> any other last minute topics?
14:57:02 <amotoki> for the long term, it may be a good thing to achieve consistency between the nova/cinder/neutron API behaviors.
14:57:18 <amotoki> it would not happen in the short term, but it is worth considering as a long term goal
14:57:34 <amotoki> no more from me :p
14:57:52 <slaweq> amotoki thx a lot
14:58:07 <slaweq> so I think we are good to go today
14:58:32 <slaweq> thx for attending the meeting and see You next week (hopefully with our new great PTL already :))
14:58:35 <slaweq> o/
14:58:39 <slaweq> #endmeeting