opendevreview | Przemyslaw Szczerbik proposed openstack/neutron-lib master: Use os-resource-classes lib to avoid duplication https://review.opendev.org/c/openstack/neutron-lib/+/799034 | 05:16 |
opendevreview | liuyulong proposed openstack/neutron master: Add devstack local.conf sample for ML2 OVS https://review.opendev.org/c/openstack/neutron/+/799159 | 05:22 |
*** gthiemon1e is now known as gthiemonge | 06:32 |
opendevreview | liuyulong proposed openstack/neutron master: Add devstack local.conf sample for ML2 OVS https://review.opendev.org/c/openstack/neutron/+/799159 | 06:34 |
opendevreview | Rabi Mishra proposed openstack/neutron master: Add fake_project_id middleware for noauth https://review.opendev.org/c/openstack/neutron/+/799162 | 06:42 |
opendevreview | Rabi Mishra proposed openstack/neutron master: Add fake_project_id middleware for noauth https://review.opendev.org/c/openstack/neutron/+/799162 | 06:51 |
opendevreview | Rabi Mishra proposed openstack/neutron master: Add fake_project_id middleware for noauth https://review.opendev.org/c/openstack/neutron/+/799162 | 07:09 |
opendevreview | liuyulong proposed openstack/neutron master: Add devstack local.conf sample for ML2 OVS https://review.opendev.org/c/openstack/neutron/+/799159 | 07:34 |
opendevreview | yangjianfeng proposed openstack/neutron stable/victoria: Keepalived version check https://review.opendev.org/c/openstack/neutron/+/799164 | 07:35 |
opendevreview | yangjianfeng proposed openstack/neutron stable/ussuri: Keepalived version check https://review.opendev.org/c/openstack/neutron/+/798335 | 07:43 |
opendevreview | yangjianfeng proposed openstack/neutron stable/ussuri: HA-non-DVR router don't need manually add static route https://review.opendev.org/c/openstack/neutron/+/792876 | 07:44 |
opendevreview | Akihiro Motoki proposed openstack/networking-odl stable/pike: Fix networking-l2gw location https://review.opendev.org/c/openstack/networking-odl/+/799007 | 07:59 |
hemanth_n | jlibosva: lucasagomes: can you take a look on this patch when you get sometime https://review.opendev.org/c/openstack/neutron/+/796613, thanks | 08:25 |
opendevreview | Akihiro Motoki proposed openstack/networking-midonet stable/rocky: Fix networking-l2gw location https://review.opendev.org/c/openstack/networking-midonet/+/798993 | 08:25 |
jlibosva | hemanth_n: looking | 08:30 |
opendevreview | Akihiro Motoki proposed openstack/networking-midonet stable/rocky: Fix networking-l2gw location https://review.opendev.org/c/openstack/networking-midonet/+/798993 | 08:35 |
lucasagomes | hemanth_n, will do | 08:42 |
hemanth_n | thank you both | 08:42 |
opendevreview | liuyulong proposed openstack/neutron master: Add devstack local.conf sample for ML2 OVS https://review.opendev.org/c/openstack/neutron/+/799159 | 08:49 |
opendevreview | Slawek Kaplonski proposed openstack/neutron stable/wallaby: Use "multiprocessing.Queue" for "TestNeutronServer" related tests https://review.opendev.org/c/openstack/neutron/+/799149 | 09:47 |
opendevreview | XiaoYu Zhu proposed openstack/neutron master: L3 router support ECMP https://review.opendev.org/c/openstack/neutron/+/743661 | 09:49 |
opendevreview | Slawek Kaplonski proposed openstack/neutron stable/victoria: Use "multiprocessing.Queue" for "TestNeutronServer" related tests https://review.opendev.org/c/openstack/neutron/+/799150 | 10:00 |
opendevreview | Slawek Kaplonski proposed openstack/neutron stable/ussuri: Use "multiprocessing.Queue" for "TestNeutronServer" related tests https://review.opendev.org/c/openstack/neutron/+/799151 | 10:00 |
opendevreview | sean mooney proposed openstack/os-vif master: update os-vif ci to account for devstack default changes https://review.opendev.org/c/openstack/os-vif/+/798038 | 10:03 |
opendevreview | sean mooney proposed openstack/os-vif master: add configurable per port bridges https://review.opendev.org/c/openstack/os-vif/+/798055 | 10:03 |
opendevreview | Slawek Kaplonski proposed openstack/neutron stable/train: Use "multiprocessing.Queue" for "TestNeutronServer" related tests https://review.opendev.org/c/openstack/neutron/+/799191 | 10:05 |
opendevreview | sean mooney proposed openstack/os-vif master: update os-vif ci to account for devstack default changes https://review.opendev.org/c/openstack/os-vif/+/798038 | 10:06 |
opendevreview | sean mooney proposed openstack/os-vif master: add configurable per port bridges https://review.opendev.org/c/openstack/os-vif/+/798055 | 10:06 |
opendevreview | Slawek Kaplonski proposed openstack/neutron stable/queens: Call install_ingress_direct_goto_flows() when ovs restarts https://review.opendev.org/c/openstack/neutron/+/783543 | 10:23 |
opendevreview | Rabi Mishra proposed openstack/neutron master: Add fake_project_id middleware for noauth https://review.opendev.org/c/openstack/neutron/+/799162 | 10:46 |
opendevreview | Hemanth N proposed openstack/neutron master: Update arp entry of snat port on qrouter ns https://review.opendev.org/c/openstack/neutron/+/799197 | 11:34 |
opendevreview | Hemanth N proposed openstack/neutron master: Update arp entry of snat port on qrouter ns https://review.opendev.org/c/openstack/neutron/+/799197 | 11:37 |
opendevreview | Rodolfo Alonso proposed openstack/neutron-specs master: [WIP]Create intermediate OVS bridge to improve live-migration in OVN https://review.opendev.org/c/openstack/neutron-specs/+/799198 | 11:37 |
hemanth_n | slaweq: I didn't see that you had assigned yourself to bug 1933092 and I worked on a patch. Sorry for not looking at it before working on the patch. Please check the patch when you get time and see if the change makes sense. | 11:52 |
slaweq | hemanth_n: I wanted to check it as the next thing but I didn't yet | 11:54 |
slaweq | so thx for the patch | 11:54 |
slaweq | I will take a look at it today | 11:54 |
hemanth_n | ack and thanks | 11:54 |
slaweq | please assign Yourself to that bug | 11:54 |
slaweq | I'm now working on https://bugs.launchpad.net/neutron/+bug/1933273 btw :) | 11:54 |
slaweq | so also dvr related thing | 11:55 |
hemanth_n | hack | 11:56 |
hemanth_n | ack* | 11:56 |
opendevreview | Rodolfo Alonso proposed openstack/neutron stable/wallaby: [OVN] Do not fail when processing SG rule deletion https://review.opendev.org/c/openstack/neutron/+/799209 | 13:57 |
slaweq | #startmeeting neutron_drivers | 14:00 |
opendevmeet | Meeting started Fri Jul 2 14:00:59 2021 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:00 |
opendevmeet | The meeting name has been set to 'neutron_drivers' | 14:00 |
mlavalle | o/ | 14:01 |
opendevreview | Rodolfo Alonso proposed openstack/neutron stable/victoria: [OVN] Do not fail when processing SG rule deletion https://review.opendev.org/c/openstack/neutron/+/799210 | 14:01 |
slaweq | hi | 14:01 |
ralonsoh | hi | 14:01 |
obondarev | hi | 14:01 |
slaweq | let's wait few more minutes for people to join | 14:01 |
slaweq | I know that haleyb and njohnston are on PTO today | 14:01 |
seba | hi | 14:01 |
slaweq | but amotoki and yamamoto will maybe join | 14:02 |
amotoki | hi | 14:02 |
opendevreview | Pedro Henrique Pereira Martins proposed openstack/neutron master: Extend database to support portforwardings with port range https://review.opendev.org/c/openstack/neutron/+/798961 | 14:02 |
manub | hi | 14:02 |
slaweq | ok, let's start | 14:03 |
slaweq | agenda for today is at https://wiki.openstack.org/wiki/Meetings/NeutronDrivers | 14:04 |
mlavalle | do we have quorum? | 14:04 |
slaweq | mlavalle: I think so | 14:04 |
slaweq | there is You, ralonsoh amotoki and me | 14:04 |
mlavalle | ok | 14:04 |
slaweq | so minimum but quorum, right? | 14:04 |
amotoki | yeah, I think so | 14:04 |
slaweq | #topic RFEs | 14:04 |
ralonsoh | actually I'm presenting an RFE, so I should not vote | 14:04 |
slaweq | ralonsoh: sure, so with Your rfe we can wait for next meeting | 14:05 |
ralonsoh | perfect | 14:05 |
slaweq | we have then one rfe for today :) | 14:05 |
slaweq | https://bugs.launchpad.net/neutron/+bug/1930866 | 14:05 |
ralonsoh | who is presenting it? | 14:07 |
mlavalle | doesn't matter | 14:07 |
mlavalle | we can discuss it | 14:07 |
ralonsoh | ok, perfect | 14:07 |
mlavalle | that's the usual approach | 14:07 |
slaweq | yeah, personally I think it is totally valid issue | 14:07 |
slaweq | I didn't know that nova have something like "lock server" | 14:08 |
mlavalle | it is. we should worry about the complete end user experience across all projects, not only Neutron | 14:08 |
slaweq | mlavalle: yes, exactly :) | 14:08 |
mlavalle | end users don't use Neutron. They use OpenStack | 14:09 |
ralonsoh | because this is part of the Nova API, can we ask them to modify the VM ports, as obondarev suggested? | 14:09 |
ralonsoh | or should we be responsible for checking this state? | 14:10 |
slaweq | ralonsoh: yes, but looking from the neutron PoV only, we should provide some way to "lock" port in such case | 14:10 |
amotoki | the bug is reported about locked instances. what I am not sure is whether we need to handle ports used by locked instances specially. | 14:10 |
mlavalle | similar to what we do with the dns_name attribute when Nova creates a port for an instance | 14:10 |
slaweq | then nova could "lock" port as part of server lock | 14:10 |
amotoki | potentially end users can hit similar issues even for non-locked instances. | 14:11 |
mlavalle | yeap | 14:11 |
ralonsoh | I don't see in what scenario, sorry | 14:12 |
slaweq | amotoki: IMHO we should. I'm not sure if forbidding deletion of any port attached to an instance would be a good idea, as that would be a pretty big change in the API | 14:12 |
mlavalle | but in the case of locked instances we really deliver an awful end user experience, because OpenStack made a promise that gets broken | 14:12 |
jkulik | looking from Cinder perspective, if a Volume is attached, you cannot just delete it. would the same make sense for a port? | 14:12 |
obondarev | neutron already forbids deleting port of certain types, right? | 14:12 |
slaweq | if we change neutron so that it forbids deleting all ports attached to VMs, I think we will break nova | 14:13 |
slaweq | and nova will need to adjust its own code to first detach the port and then delete it | 14:13 |
amotoki | yes, we already forbid deleting ports used by router interfaces (and others maybe) | 14:13 |
slaweq | and that will make problems during e.g. upgrade | 14:13 |
slaweq | or am I missing something? | 14:14 |
ralonsoh | slaweq, right we need to mark those ports somehow | 14:14 |
mlavalle | no | 14:14 |
amotoki | slaweq: I haven't checked the whole procedure in server delete. It may affect nova procedures in deleting ports attached to instances. | 14:14 |
mlavalle | no, you are not missing anything | 14:15 |
mlavalle | maybe we want to discuss this with Nova folks | 14:15 |
mlavalle | is gibi around? | 14:15 |
gibi | mlavalle: hi | 14:15 |
slaweq | hi gibi :) | 14:15 |
jkulik | tbh, I always found it confusing that there are 2 interfaces to attach/detach a port/network to/from an instance - Nova and Neutron directly | 14:16 |
amotoki | jkulik: precisely speaking there are not two ways to attach ports. neutron port deletion is not visible to nova, so it confuses users. | 14:17 |
sean-k-mooney | for what it's worth we have suggested blocking deletion of in-use ports in the past | 14:17 |
sean-k-mooney | amotoki: actually it is | 14:17 |
sean-k-mooney | neutron sends a network-vif-delete event to nova when the neutron port is deleted | 14:17 |
amotoki | sean-k-mooney: ah, good point. i totally forgot it. | 14:18 |
sean-k-mooney | amotoki: from a nova point of view we have never really supported this use case though; we would really prefer if you detached it first and then deleted it if you needed to | 14:18 |
ralonsoh | sorry but I don't think we should go this way, making this change in Neutron/Nova | 14:19 |
gibi | I agree with sean-k-mooney, while deleting a bound port is possible today and there is some level of support for it in nova, this is something that complicates things | 14:19 |
sean-k-mooney | regarding https://bugs.launchpad.net/neutron/+bug/1930866 is there an objection to just blocking port delete while it has the device-owner and device-id set? | 14:19 |
slaweq | gibi: sean-k-mooney: but today nova, when e.g. vm is deleted will just call neutron once to delete port, right? | 14:20 |
slaweq | or will it first detach port and then delete it? | 14:20 |
sean-k-mooney | slaweq: that is a good question | 14:20 |
gibi | slaweq: nova will unbind the port during VM delete, and if the port was actually created by nova during the boot with a network, then nova will delete the port too | 14:21 |
sean-k-mooney | we probably don't do a port update and then a delete, but we could | 14:21 |
sean-k-mooney | gibi: oh we do unbind, i was just going to check that | 14:21 |
seba | disallowing port delete for ports with device_owner/device_id set would mean that a user could not remove dangling ports anymore without having write access to those fields | 14:21 |
sean-k-mooney | seba: those fields i believe are writable by the user. and they still could, by doing a nova server delete or port detach via nova | 14:22 |
amotoki | seba: no, what we discuss is just about port deletion. | 14:22 |
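The check floated above (block port deletion while device-owner and device-id are set) might look roughly like the following. This is a hypothetical sketch, not actual Neutron code; `PortInUse` and `check_port_deletable` are illustrative names only.

```python
# Hypothetical sketch (not actual Neutron code) of the guard discussed above:
# reject deletion of a port that is still attached, i.e. while both
# device_id and device_owner are set.

class PortInUse(Exception):
    """Raised when a still-attached port is about to be deleted."""

def check_port_deletable(port: dict) -> None:
    if port.get("device_id") and port.get("device_owner"):
        raise PortInUse(
            f"Port {port['id']} is in use by {port['device_owner']} "
            f"device {port['device_id']}; detach it first."
        )

# A bound Nova port would be rejected, a dangling/free port would pass:
bound = {"id": "p1", "device_id": "vm-1", "device_owner": "compute:nova"}
free = {"id": "p2", "device_id": "", "device_owner": ""}
try:
    check_port_deletable(bound)
    bound_rejected = False
except PortInUse:
    bound_rejected = True
check_port_deletable(free)  # no exception: empty fields mean not attached
```

Note this also shows seba's concern: a user who cannot write device_owner/device_id would have no way to make a dangling port pass the check.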
gibi | the above RFE talks about locked instances specifically. Nova could reject the network-vif-delete event if the instance is locked. It could be a solution | 14:23 |
slaweq | device_owner is writable by NET_OWNER and ADMINs https://github.com/openstack/neutron/blob/master/neutron/conf/policies/port.py#L380 | 14:23 |
ralonsoh | gibi but at this point the port is already gone | 14:23 |
sean-k-mooney | gibi: i think neutron sends that async from the deletion of the port | 14:23 |
slaweq | so in typical use case, user will be able to clean it | 14:23 |
mlavalle | I agree with gibi ... let's reduce the scope of this to locked instances | 14:23 |
gibi | ralonsoh: ahh, then never mind, we cannot do that | 14:23 |
amotoki | gibi: but neutron still can delete a port though? | 14:23 |
sean-k-mooney | mlavalle: well neutron really should not care or know if an instance is locked | 14:24 |
amotoki | ralonsoh commented the same thing already :) | 14:24 |
gibi | my suggestion can only be implemented if nova can prevent the port deletion by rejecting the network-vif-delete event | 14:24 |
sean-k-mooney | gibi: that would require the neutron server to send that event first and check the return before proceeding with the db deletion | 14:25 |
gibi | sean-k-mooney: yeah, I realize that | 14:25 |
sean-k-mooney | i don't think neutron currently checks the status code of those notifications | 14:25 |
slaweq | personally I like the idea of not allowing port deletion if it is attached to vm but to avoid problems during e.g. upgrades we could add temporary config knob to allow old behaviour | 14:25 |
slaweq | if we would forbid deletion of such ports it would be more consistent with what cinder does | 14:26 |
gibi | slaweq: if we go that way then I think Octavia folks should be involved, I think they also depend on deleting a bound port | 14:26 |
slaweq | so IMHO more consistent UX in general :) | 14:26 |
ralonsoh | gibi: does Octavia use the Nova or the Neutron API? | 14:26 |
slaweq | gibi: ouch, I didn't know that | 14:26 |
slaweq | so I wonder who else we may break :) | 14:27 |
gibi | I have a faint recollection from a PTG where they approached us with this port delete case as nova had some issue with it | 14:27 |
jkulik | https://github.com/sapcc/nova/blob/cd084aeeb8a2110759912c1b529917a9d3aac555/nova/network/neutron.py#L1683-L1686 looks like nova unbinds pre-existing ports, but directly deletes those it created without unbind. looks like an easy change though. | 14:27 |
gibi | I have to dig if I found a recording of it | 14:27 |
slaweq | jkulik: that's what I thought :) | 14:27 |
gibi | jkulik: good reference, and I agree we can change that sequence | 14:27 |
slaweq | so nova would need changes too | 14:27 |
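The unbind-then-delete ordering discussed above (per jkulik's nova reference) can be sketched roughly as follows. `NeutronClient` here is a hypothetical stand-in that only records calls so the ordering is visible, not the real python-neutronclient API.

```python
# Rough sketch of the "unbind first, then delete" sequence discussed above.
# NeutronClient is a hypothetical recording stub, not a real OpenStack client.

class NeutronClient:
    def __init__(self):
        self.calls = []

    def update_port(self, port_id, body):
        self.calls.append(("update", port_id, body))

    def delete_port(self, port_id):
        self.calls.append(("delete", port_id))

def delete_nova_created_port(client, port_id):
    # Clear binding and ownership first, so a Neutron that forbids deleting
    # bound ports would still accept the subsequent delete.
    client.update_port(port_id, {"port": {"binding:host_id": None,
                                          "device_id": "",
                                          "device_owner": ""}})
    client.delete_port(port_id)

client = NeutronClient()
delete_nova_created_port(client, "p1")
# client.calls now holds the update followed by the delete
```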
sean-k-mooney | yes it looks like it would; we could backport that however | 14:29 |
sean-k-mooney | maybe you could control this temporarily on the neutron side with a workaround config option | 14:29 |
sean-k-mooney | i hate config-driven api behavior, but since neutron does not use microversions | 14:29 |
sean-k-mooney | the only other way to do this would be with a new extension but that is not backportable | 14:30 |
ralonsoh | but we have extensions | 14:30 |
jkulik | if it's not a bugfix, but a change to not allow deletion of ports with device-owner and -id, can this even be supported by a new API version? or would all old API versions also behave differently, then? | 14:30 |
slaweq | ralonsoh yes, but imagine upgrade case: | 14:30 |
slaweq | older nova, new neutron | 14:30 |
obondarev_ | so what are downsides of: "nova sets 'locked' flag in port_binding dict for locked instances, neutron checks that flag on port delete"? | 14:30 |
slaweq | old nova wants to delete bound port and it fails | 14:31 |
slaweq | and old nova don't know about extension at all :) | 14:31 |
sean-k-mooney | slaweq: so the extension would have to be configurable | 14:31 |
ralonsoh | slaweq, right. So let's make this configurable for the next release | 14:31 |
slaweq | sean-k-mooney: yes, that's what I wrote few minutes ago also :) I think that we could add temporary config option for that | 14:31 |
sean-k-mooney | slaweq: yep, for Xena, and then make it mandatory for Y | 14:32 |
slaweq | I know we did that already with some other things between nova and neutron | 14:32 |
sean-k-mooney | that would allow nova to always unbind before deleting | 14:32 |
slaweq | sean-k-mooney++ for me would be ok | 14:32 |
jkulik | iirc, locked instances in Nova can still be changed by an admin | 14:32 |
jkulik | would Nova then be able to delete the locked port, too? | 14:32 |
sean-k-mooney | jkulik: well instances are locked not ports right | 14:32 |
slaweq | exactly | 14:33 |
jkulik | sean-k-mooney: if we would go for "nova locks the port in neutron" | 14:33 |
sean-k-mooney | ok so new neutron extension for locked ports | 14:33 |
sean-k-mooney | if nova detects it, we lock the ports automatically when you lock the vm? | 14:33 |
mlavalle | that seems reasonable | 14:34 |
mlavalle | nova already detects other neutron extensions, like extended port binding | 14:34 |
sean-k-mooney | and then neutron just prevents updating locked ports | 14:34 |
sean-k-mooney | mlavalle: yep, that should not be hard to add on the nova side | 14:34 |
gibi | sean-k-mooney: does it mean nova needs an upgrade step where it updates the ports of already locked instances? | 14:34 |
gibi | like syncing this state for existing instances | 14:35 |
sean-k-mooney | good question | 14:35 |
sean-k-mooney | we could do that in init host i guess | 14:35 |
sean-k-mooney | or we could just not, and document that you will need to lock them again | 14:36 |
gibi | I think locking / unlocking happens in the API so it would be strange to do the state sync on the compute side | 14:36 |
gibi | anyhow, this needs a nova spec | 14:36 |
gibi | I don't want to solve all the open questions on a Friday afternoon :D | 14:36 |
mlavalle | and I suggest a Neutron spec as well | 14:36 |
sean-k-mooney | well i was going to say technically it's not an api change on the nova side, so it could be a specless blueprint, but yeah, for the upgrade question a spec is needed | 14:37 |
slaweq | so do we want to add "lock port" extension to neutron or forbid deletion of ports in-use? | 14:37 |
slaweq | IIUC we have such 2 alternatives now, right? | 14:37 |
mlavalle | add lock port extension | 14:37 |
sean-k-mooney | yes | 14:37 |
amotoki | yes | 14:37 |
sean-k-mooney | either seems valid but lock-port is probably a better mapping to the rfe | 14:38 |
jkulik | if I'm admin, I can delete a locked instance. Neutron needs to take this into account for the "locked" port | 14:38 |
sean-k-mooney | i would still personally like to forbid deleting in-use ports | 14:38 |
sean-k-mooney | jkulik: well no nova can unlock them in that case | 14:39 |
amotoki | the bug was reported for locked instances, but I personally prefer to "block deletion of ports used by nova". | 14:39 |
slaweq | but what "locked port" would mean - it can't be deleted only? can't be updated at all? | 14:39 |
jkulik | sean-k-mooney: yeah, makes sense | 14:39 |
sean-k-mooney | slaweq: i would assume it can't be updated at all, but we could detail that in the spec | 14:39 |
slaweq | IMHO blocking deletion of ports in use is the more straightforward solution, but on the other hand it may break more people so it's more risky :) | 14:40 |
slaweq | sean-k-mooney: yeah, we could try to align with nova's behaviour for locked instances | 14:40 |
amotoki | neutron already blocks direct port deletion of router interfaces. in this case we disallow deleting ports used as router interfaces, but we still allow updating device_owner/id of such ports. if users would like to delete such ports explicitly, they first need to clear device_owner/id and then they can delete these ports. | 14:40 |
slaweq | and that can be clarified in the spec | 14:40 |
sean-k-mooney | i know that doing a delete this way used to leak some resources on the nova side in the past, like sriov resources | 14:41 |
sean-k-mooney | slaweq: unfortunately i don't know offhand what the nova behavior actually is | 14:41 |
sean-k-mooney | but yes, it would be nice to keep them consistent | 14:41 |
slaweq | sean-k-mooney: np, it can be discussed in the spec as You said :) | 14:41 |
slaweq | so, do we want to vote for the preferred option? | 14:44 |
mlavalle | if we have to vote I lean towards the lock port extension | 14:45 |
obondarev | if we use the existing port's binding_profile dict - do we need a neutron API extension at all? | 14:45 |
slaweq | obondarev: I would say yes, to make the new behaviour in neutron discoverable | 14:45 |
mlavalle | another key - value pair there? | 14:45 |
slaweq | it's still API change | 14:45 |
obondarev | ok, makes sense | 14:46 |
amotoki | slaweq: +1 | 14:46 |
slaweq | so You need to somehow tell users that neutron supports that | 14:46 |
ralonsoh | IMO this conversation has been a bit chaotic: we started with the "lock port" idea, then we moved to blocking a bound port deletion, and now we are voting for a "lock port" extension | 14:46 |
ralonsoh | I really don't understand what happened in the middle | 14:46 |
slaweq | ralonsoh: :) | 14:46 |
mlavalle | then let's not vote and decide the entire thing in the spec | 14:46 |
ralonsoh | we were going to implement this RFE by blocking the deletion of a bound port | 14:47 |
amotoki | yeah, we discussed two approaches | 14:47 |
ralonsoh | I know | 14:47 |
ralonsoh | but mixing both | 14:47 |
ralonsoh | so the point is to provide a transition knob from neutron to Nova | 14:47 |
ralonsoh | or extension | 14:47 |
ralonsoh | to know if this is actually supported in Neutron | 14:48 |
ralonsoh | and then implement the port deletion block | 14:48 |
ralonsoh | (that will also comply with the RFE) | 14:48 |
sean-k-mooney | obondarev: the content of the binding_profile is owned by nova and is one-way | 14:48 |
slaweq | so I propose that: | 14:48 |
sean-k-mooney | it provides info from nova to the neutron backend | 14:48 |
slaweq | 1. we will approve the rfe and continue work on it in a spec - I think we all agree that this is a valid rfe | 14:49 |
slaweq | 2. I will summarize this discussion in the LP's comment and will describe both potential solutions | 14:49 |
mlavalle | +1, yes it's a valid rfe | 14:50 |
ralonsoh | +1, the RFE is legit | 14:50 |
slaweq | if there is anyone who wants to work on it and propose spec for it, that's great but if not, I will propose something | 14:50 |
slaweq | *by something I mean RFE :) | 14:50 |
slaweq | sorry, spec :) | 14:50 |
mlavalle | and I can work on it | 14:50 |
slaweq | mlavalle: great, thx | 14:50 |
ralonsoh | thanks | 14:50 |
amotoki | totally agree with what is proposed. | 14:51 |
slaweq | thx, so I think we have agreement about next steps with that rfe :) | 14:51 |
slaweq | regarding the second rfe, from ralonsoh, we will discuss it at the next meeting | 14:52 |
ralonsoh | thanks | 14:52 |
ralonsoh | I'll update the spec | 14:52 |
slaweq | #topic On Demand agenda | 14:52 |
slaweq | seba: You wanted to discuss about https://review.opendev.org/c/openstack/neutron/+/788714 | 14:52 |
seba | yes! | 14:53 |
slaweq | so You have few minutes now :) | 14:53 |
seba | okay, so just so you understand where I come from: I maintain a neutron driver using hierarchical portbinding (HPB), which allocates second-level VLAN segments. If I end up with a (network, physnet) combination existing with different segmentation_ids, my network breaks. | 14:53 |
seba | This can happen when using allocate_dynamic_segment(), so my goal would be to either make allocate_dynamic_segment() safe or find another way to do safe segment allocation in neutron. | 14:53 |
seba | We discussed https://bugs.launchpad.net/neutron/+bug/1791233 at another drivers meeting and the idea to solve this was to employ a constraint on the network segments table to make (network_type, network, physical_network) unique. | 14:54 |
ralonsoh | that could be a solution, but it doesn't work for tunneled networks | 14:55 |
sean-k-mooney | well in that case physical network would be None | 14:55 |
ralonsoh | you'll have (vxlan, net_1, None) | 14:55 |
ralonsoh | yes and repeated several times | 14:56 |
seba | ralonsoh, that should not be a problem, two "None"s are never the same | 14:56 |
sean-k-mooney | we don't currently support having 2 vxlan segments for one neutron network | 14:56 |
seba | ralonsoh, jkulik wrote something about the NULL values in the bugreport and how they're not the same with most if not all major databases | 14:57 |
jkulik | so that use-case would not be hindered by the UniqueConstraint | 14:57 |
sean-k-mooney | there is no valid configuration where we can have 2 (vxlan, net_1, None) segments with different vxlan vids, right? | 14:57 |
seba | sean-k-mooney, I have one top-level vxlan segment and then a vlan segment below it for handoff to the next driver. I don't see though what would stop me from having a second level vxlan segment | 14:58 |
sean-k-mooney | well i was thinking about what the segments extension allows | 14:58 |
sean-k-mooney | when doing hierarchical port binding that is slightly different | 14:58 |
seba | ah, so you're thinking about multiple vxlan segments without specifying a physnet? | 14:59 |
sean-k-mooney | seba: yes sicne tunnels do not have a physnet | 14:59 |
slaweq | we need to finish meeting now, but please continue discussion in the channel :) I have to leave now becuase I have another meeting. Have a great weekend! | 14:59 |
slaweq | #endmeeting | 14:59 |
opendevmeet | Meeting ended Fri Jul 2 14:59:52 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 14:59 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-07-02-14.00.html | 14:59 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-07-02-14.00.txt | 14:59 |
opendevmeet | Log: https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-07-02-14.00.log.html | 14:59 |
mlavalle | o/ | 15:00 |
seba | bye slaweq | 15:00 |
seba | so, tunnels don't have a physnet, what implications would that have on the constraint? | 15:00 |
amotoki | (this is a good point to have our meeting in #-neutron, we can continue a discussion in a same channel :)) | 15:00 |
sean-k-mooney | seba: so in your hierarchical domain, are the vlan/vxlan ids shared between both levels? | 15:00 |
sean-k-mooney | that is where the conflict is, yes | 15:01 |
sean-k-mooney | seba: the tuple for a network with segmentation id 10 and segmentation type vxlan in your scheme would be (vxlan, net_1, None) | 15:02 |
seba | yes | 15:02 |
sean-k-mooney | it would also be the same tuple for segmentation id 42 and segmentation type vxlan | 15:02 |
jkulik | sean-k-mooney: and you want to prohibit this from happening? | 15:03 |
sean-k-mooney | no i think that was the edgecasse that ralonsoh was raising | 15:03 |
jkulik | ah. but that should work as mentioned in the bug | 15:03 |
ralonsoh | exactly | 15:03 |
ralonsoh | and I've explained that several times | 15:03 |
sean-k-mooney | by including net_1 in the tuple i think it already is | 15:03 |
jkulik | https://stackoverflow.com/questions/3712222/does-mysql-ignore-null-values-on-unique-constraints/3712251#3712251 looks like it should work at least | 15:03 |
ralonsoh | we cannot add this constraint to the DB | 15:03 |
sean-k-mooney | since a single neutron network cannot have two segmentation ids | 15:04 |
sean-k-mooney | unless we are talking about l3 routed networks | 15:04 |
jkulik | as I understood it, the NULL in the tuple makes the tuple unique every time | 15:04 |
sean-k-mooney | in which case the segments can have their own segmentation type and id | 15:04 |
sean-k-mooney | jkulik: i dont think it would | 15:05 |
sean-k-mooney | although that might be db dependent | 15:05 |
ralonsoh | this is possible | 15:05 |
ralonsoh | http://paste.openstack.org/show/807144/ | 15:05 |
ralonsoh | and that will be prevented with this patch | 15:05 |
ralonsoh | that's for routed networks | 15:05 |
sean-k-mooney | ralonsoh: is that valid today? | 15:05 |
ralonsoh | what? | 15:06 |
sean-k-mooney | what you posted | 15:06 |
seba | ralonsoh, what's the physical_network column saying for these two networksegments? | 15:06 |
ralonsoh | copy/paste from my env | 15:06 |
sean-k-mooney | my understanding was that the only way neutron had to map segments to hosts was the physnet | 15:06 |
sean-k-mooney | so how do you associate those segments with different hosts in that case? | 15:06 |
jkulik | > Null values are not considered equal. (https://www.postgresql.org/docs/9.0/indexes-unique.html) | 15:07 |
jkulik | > A UNIQUE index permits multiple NULL values for columns that can contain NULL. (https://dev.mysql.com/doc/refman/8.0/en/create-index.html) | 15:07 |
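The NULL semantics quoted above can be checked quickly with SQLite, which (like PostgreSQL and MySQL) treats NULLs as distinct in UNIQUE constraints. Note this is only a sketch: the table and column names merely mimic Neutron's networksegments schema, and SQLite is standing in for the real database.

```python
# Demo: a UNIQUE (network_type, network_id, physical_network) constraint
# permits multiple rows with NULL physical_network (tunneled segments) but
# rejects duplicate non-NULL tuples (two VLAN segments on the same physnet).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE networksegments (
        network_type TEXT,
        network_id TEXT,
        physical_network TEXT,
        segmentation_id INTEGER,
        UNIQUE (network_type, network_id, physical_network)
    )
""")
# Two tunneled segments (physical_network NULL) do not collide:
conn.execute("INSERT INTO networksegments VALUES ('vxlan', 'net_1', NULL, 10)")
conn.execute("INSERT INTO networksegments VALUES ('vxlan', 'net_1', NULL, 42)")
# A second VLAN segment on the same (network, physnet) is rejected:
conn.execute("INSERT INTO networksegments VALUES ('vlan', 'net_1', 'physnet_X', 23)")
try:
    conn.execute("INSERT INTO networksegments VALUES ('vlan', 'net_1', 'physnet_X', 42)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```

So the constraint blocks exactly the duplicate (network, physnet) case seba wants to prevent, while leaving multiple tunneled segments possible.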
sean-k-mooney | so it would block rodolfo's case http://paste.openstack.org/show/807144/ but i'm not sure how useful that is | 15:08 |
ralonsoh | sean-k-mooney, no, that's right, you need physical nets | 15:08 |
sean-k-mooney | ralonsoh: so implicitly then for tunnels we assume host-level connectivity across the entire cloud | 15:09 |
ralonsoh | yes | 15:09 |
sean-k-mooney | there is no way to map those segments to your underlying hardware | 15:09 |
ralonsoh | yes | 15:09 |
ralonsoh | (well, no, there is no way) | 15:09 |
sean-k-mooney | so does it add any useful benefit to have 2 tunneled segments on one network? | 15:09 |
ralonsoh | let me check | 15:10 |
sean-k-mooney | it potentially partitions the vxlan mesh into two i guess | 15:10 |
sean-k-mooney | and reduces the broadcast domain slightly | 15:10 |
sean-k-mooney | seba: would including the segmentation id break your fix? | 15:11 |
seba | yes | 15:11 |
sean-k-mooney | ok so (vxlan, 42, net_1, None) | 15:11 |
sean-k-mooney | is not something you can support | 15:11 |
seba | we can support that, no problem | 15:11 |
jkulik | (vxlan, net_1, None) and (vxlan, net_1, None) are not equal. | 15:12 |
sean-k-mooney | that would allow ralonsoh's case to be supported | 15:12 |
seba | (vxlan, 42, net_1, None), (vxlan, 23, net_1, None) does not break the constraint, as physical_network is None and two None/NULL values are not regarded as being equal | 15:12 |
ralonsoh | right | 15:12 |
jkulik | so this contraint should not break anything you're currently relying on, right? | 15:13 |
ralonsoh | exactly | 15:13 |
jkulik | then I don't get the point. you want to extend the fix to preventing something else? | 15:13 |
ralonsoh | (why I didn't think about it before?) | 15:13 |
sean-k-mooney | so (segmentation_type, segmentation_id, network_name, physnet) | 15:14 |
ralonsoh | that will prevent what you are describing in the bug | 15:14 |
seba | sean-k-mooney, that would be the format of the above mock-db-rows, yes | 15:14 |
seba | but segmentation_id must not be part of the unique constraint | 15:14 |
sean-k-mooney | why | 15:15 |
ralonsoh | no, that's incorrect, we can't include the tag id | 15:15 |
sean-k-mooney | (vlan, 10, net_1, datacenter) and (vlan, 20, net_1, datacenter) | 15:15 |
seba | because if we add segmentation_id to the constraint then (vlan, 23, net_1, physnet_X), (vlan, 42, net_1, physnet_X) would be possible, which I want to prevent | 15:15 |
ralonsoh | exactly | 15:15 |
sean-k-mooney | would be valid when using segmented networks | 15:16 |
ralonsoh | sean-k-mooney, I need to check if we can have two tunneled segments per network | 15:16 |
sean-k-mooney | ok but that is allows for l3 routed networks | 15:16 |
ralonsoh | as you said, those are vlan nets | 15:16 |
ralonsoh | that's in the spec | 15:16 |
ralonsoh | for host association | 15:16 |
seba | sean-k-mooney, how do l3 routed networks play into networksegments? | 15:17 |
ralonsoh | but as I said, we need to confirm that it is not possible to have two tunneled segments (same type) in one net | 15:17 |
ralonsoh | seba, segments are the base for routed networks | 15:17 |
ralonsoh | this is how traffic is segregated | 15:17 |
ralonsoh | sorry, I'm closing now, I have an appointment. I'll review the patch next week | 15:19 |
jkulik | thank you | 15:19 |
seba | yeah, thanks ralonsoh | 15:19 |
ralonsoh | sean-k-mooney, I'll check your comments on the live migration spec | 15:19 |
ralonsoh | sean-k-mooney++ | 15:19 |
seba | I'm also available for further discussion in irc in normal™ CEST workhours | 15:19 |
ralonsoh | me too | 15:20 |
sean-k-mooney | seba: this is the routed network spec by the way https://specs.openstack.org/openstack/neutron-specs/specs/newton/routed-networks.html | 15:22 |
seba | tnx | 15:23 |
opendevreview | Slawek Kaplonski proposed openstack/neutron master: [DVR] Fix update of the MTU in the SNAT namespace https://review.opendev.org/c/openstack/neutron/+/799226 | 16:01 |
slaweq | mlavalle: if You will have few minutes, please check my last comment in https://bugs.launchpad.net/neutron/+bug/1930866 if I didn't miss something | 16:14 |
slaweq | thx in advance :) | 16:14 |
slaweq | and have a great weekend :) | 16:14 |
opendevreview | Merged openstack/neutron stable/wallaby: Copy existing IPv6 leases to generated lease file https://review.opendev.org/c/openstack/neutron/+/799067 | 17:57 |
opendevreview | Merged openstack/neutron stable/victoria: Copy existing IPv6 leases to generated lease file https://review.opendev.org/c/openstack/neutron/+/799068 | 18:29 |
simondodsley | Hi - newbie question. I have an instance that can ping the compute host it lives on, but I cannot ping beyond that host. What would cause that? This is a devstack environment running tempest. | 20:32 |
opendevreview | Merged openstack/neutron master: [OVN] neutron-ovn-metadat-agent add retry logic for sb_idl https://review.opendev.org/c/openstack/neutron/+/796613 | 20:52 |
opendevreview | Merged openstack/neutron stable/ussuri: Copy existing IPv6 leases to generated lease file https://review.opendev.org/c/openstack/neutron/+/799069 | 21:45 |