14:00:17 #startmeeting neutron_drivers
14:00:17 Meeting started Fri Dec 16 14:00:17 2022 UTC and is due to finish in 60 minutes. The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:17 The meeting name has been set to 'neutron_drivers'
14:00:18 o/
14:00:34 hello all
14:01:04 o/
14:01:53 o/
14:02:04 o/
14:02:14 Brian cannot attend today
14:02:24 So we are 5, I think we have quorum
14:02:30 yeap
14:02:33 we have two topics today
14:02:42 first one
14:02:47 #link https://bugs.launchpad.net/neutron/+bug/1998609
14:02:57 racosta, please, you are welcome to present it
14:03:28 ok, thanks. This RFE intends to implement distributed routing support for IPv6-only or dual-stack scenarios.
14:04:04 The proposal is to introduce a new NAT rule for the IPv6 GUA addresses that are allocated to VMs.
14:04:54 is it only for OVN?
14:04:55 The ovn-controller running on the chassis needs this rule to start responding to Neighbor Advertisements for IPv6
14:05:14 Yes, only for the OVN driver
14:05:55 thanks
14:06:09 why is core OVN not implementing this?
14:06:40 * mlavalle likes the pretty drawings
14:06:56 On the OVN side, the NAT rule is the same for IPv4 or IPv6
14:07:51 It would be more of a CMS task to program how OVN handles flows on the chassis
14:08:15 yeah but not the SB database
14:08:31 we should be modifying only the NB registers
14:08:42 yeap
14:08:44 no no, just in the NB database.
14:09:29 The NAT rule is associated with the router at the northbound database level.
14:10:01 ok, is this doing any IPv6 NATing?
14:10:58 there is no NAT for IPv6, so this special rule that translates the GUA address to itself is only used to create the flows in the chassis where the VM resides
14:11:46 Without this rule, the compute node does not know how to respond to that IPv6 GUA and centralizes the communication through the router's GW port.
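[Editor's illustration] The "GUA translated to itself" entry discussed above can be pictured as a dnat_and_snat row in the OVN NB NAT table whose external_ip and logical_ip are the same IPv6 GUA; external_mac plus logical_port are what make such a row distributed, so ovn-controller on the VM's chassis answers Neighbor Advertisements locally. The helper below is a hypothetical sketch, not Neutron code; the field names follow the NB schema's NAT table, and the row would be referenced from the router's nat column.

```python
# Hypothetical sketch: build the columns of a distributed NAT row for an
# IPv6 GUA, as described in the RFE discussion. Not actual Neutron code.
import ipaddress


def build_ipv6_dvr_nat_row(gua: str, port_mac: str, logical_port: str) -> dict:
    """Return the NB NAT-table columns for a GUA-to-itself entry."""
    if ipaddress.ip_address(gua).version != 6:
        raise ValueError(f"{gua} is not an IPv6 address")
    return {
        "type": "dnat_and_snat",
        # Translating the address to itself: no real NAT happens, but
        # ovn-controller on the VM's chassis installs the flows that
        # answer Neighbor Advertisements for the GUA locally instead of
        # centralizing traffic through the router's gateway port.
        "external_ip": gua,
        "logical_ip": gua,
        # external_mac + logical_port are what make the rule distributed.
        "external_mac": port_mac,
        "logical_port": logical_port,
    }
```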
14:12:00 ok, is it compatible with IPv4 FIP DVR?
14:12:47 yes, absolutely. It does not change anything for IPv4 FIP DVR.
14:13:26 one question (for drivers): should this be configurable?
14:13:33 IIUC from the LP description, it requires NAT like IPv6 GUA to the same IPv6 GUA address, will it require any changes in the Neutron API, or do You want to make such a NAT entry for each IPv6 address automatically?
14:14:00 or enabled/disabled by config option for all ports/IPs
14:14:01 ?
14:14:10 right, similar question
14:14:45 I believe it must be configurable, because the user may be using n-d-r, for example.
14:15:02 so a config knob?
14:15:54 but no API change or extension for it?
14:16:29 this is backend specific, actually apart from n-d-r, other traffic should work the same in both scenarios
14:16:36 The proposal is to enable it with an enable_distributed_ipv6 flag in the ml2_conf.ini file.
14:16:38 (that's what I think)
14:17:40 Yes, other backends may need the current centralized behavior.
14:17:58 ok, thx for the explanation
14:18:20 racosta: I assume this comes from a use case well known to you. How much of a gain / benefit does this improvement represent?
14:18:34 have you tested it?
14:19:34 Yes, I already tested it in a deployment with OpenStack Yoga.
14:20:33 and what was gained out of it?
14:20:43 The benefits of implementing DVR for IPv6 are the same as for IPv4 FIP DVR; it is a more generalized case for dual stack.
14:20:58 Fernando Royo proposed openstack/ovn-octavia-provider master: Fix listener provisioning_status after HM created/deleted https://review.opendev.org/c/openstack/ovn-octavia-provider/+/867974
14:21:58 Performance gain by distributing north/south traffic per chassis, and configuration gain by reducing dynamic routing complexity in the fabric.
14:22:37 no side effects so far? for how long?
14:24:34 No effects for now, I'm testing dataplane performance with the automated shaker tool.
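[Editor's illustration] The config knob discussed above might look like this in ml2_conf.ini. The option name is taken from the discussion; its placement under the [ovn] section and the default are assumptions, not confirmed in the meeting.

```ini
[ovn]
# Assumed option from the RFE discussion: when enabled, add the
# GUA-to-itself dnat_and_snat entries so north/south IPv6 traffic is
# distributed per chassis instead of centralized through the router's
# gateway port. Deployments using neutron-dynamic-routing (n-d-r) may
# want to keep this disabled to preserve the centralized behavior.
enable_distributed_ipv6 = true
```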
14:25:31 sounds great
14:25:48 no questions from me, looks reasonable and rather safe
14:25:53 do you think we can vote now?
14:25:55 I'm ok with this.
14:25:56 It would be good to finally have some tests for it in the upstream CI as well
14:26:04 +1 from me
14:26:09 +1
14:26:11 +1
14:26:20 racosta: if you can, please share your shaker results with us
14:26:36 +1
14:26:45 * mlavalle formalizes his vote
14:26:50 +1
14:26:57 thanks! approved then
14:26:58 about the spec, do you think we need a spec for this RFE?
14:27:12 no, it's pretty localized
14:27:19 I think it should be documented, that's all
14:27:26 yeap
14:27:39 I think there is one already https://launchpadlibrarian.net/639474950/rfe_ovn_ipv6_dvr.rst
14:27:41 and hopefully include the testing results in that document
14:28:13 racosta, why don't you present it?
14:28:20 obondarev, whose spec is this?
14:28:33 it's linked in the bug
14:28:46 I believe it's from racosta
14:28:56 yeap
14:28:58 in the RFE*
14:29:10 ahh I didn't see c#3
14:29:41 racosta, ok, please, propose this spec formally (now that you have it written)
14:29:51 I'll write in the bug how to do it
14:30:12 and if you could add some testing results to it, that would be great
14:30:31 racosta: thanks for this proposal. Nice!
14:31:06 Ok, thank you very much. I will add the test results ;)
14:31:21 ok, thanks for your proposal. I'll update the bug
14:31:33 the next topic is
14:31:37 #link https://bugs.launchpad.net/neutron/+bug/1998608
14:31:45 quick summary
14:31:46 racosta: to be clear, it is not a precondition. I just think it would be nice to share it with the community (and I'm curious)
14:32:32 we are "playing" with hardware offload cards with ML2/OVN. The problem we found is that QoS is still not working
14:32:57 HWOL drivers do not translate the OVS QoS rules to the interface
14:33:18 this is why I'm trying to create an OVN monitor
14:33:24 that will run on the compute node
14:33:38 this monitor will be generic (new features could be added)
14:33:49 by default, there will be NO need to spawn it
14:33:51 so You want to have yet another neutron-ovn-agent on each compute node
14:34:00 to do things like that on nodes, right?
14:34:03 yes
14:34:12 neutron-ovn-monitor-agent
14:34:22 and a new RPC interface, correct?
14:34:25 no
14:34:35 no RPC, this is something we won't do
14:34:35 ok, dummy question - what about the neutron-ovn-metadata-agent then? Can it be combined with this new "generic" agent maybe?
14:34:50 obondarev, anything we need should be stored in the OVN/OVS database
14:34:53 same as metadata
14:35:05 reacting to ovsdb events
14:35:31 ok, so the new agent will talk only to the local ovsdb?
14:35:36 slaweq, no, metadata is specific to metadata only. Mixing features in this agent (which is also mandatory) is not a good idea
14:35:48 obondarev, not the local ovsdb only, also the remote OVN database
14:35:53 same as the metadata agent
14:35:57 ah, got it
14:36:15 I have one small concern about the scaling of this
14:36:27 yes, I have this concern too
14:36:28 we know that many connections to the OVN DBs can cause issues
14:36:38 I think it's a big concern
14:36:44 and this will be at least yet another connection from each compute node
14:36:52 yes, for sure. This is why this agent won't be necessary in all deployments
14:37:11 this monitor is necessary only, for now, for this specific feature
14:37:16 QoS with HWOL
14:37:23 but this is only necessary on hosts where hw offload things are located, no?
14:37:27 yes
14:37:35 and if you want qos
14:37:46 for now, but who knows what else we will implement there :p
14:37:58 yes, of course
14:37:59 good comment
14:38:10 POC: (3 patches) https://review.opendev.org/c/openstack/neutron/+/866480
14:38:12 I'm not against it but I just wanted to raise the concern which I have :)
14:38:44 as long as the trade-offs are well documented so deployers can understand the potential cost, I think it would be ok
14:38:53 +1
14:38:54 yes but overloading a necessary agent (metadata) that also has a specific task is not a good idea
14:39:37 and, btw, this is a kind of workaround until driver manufacturers fix the drivers
14:40:21 +1 from me but with proper documentation as mlavalle mentioned :)
14:40:40 so: 1) it is optional 2) the performance trade-offs are well documented. with this I'm +1
14:40:58 of course, it will be documented
14:41:59 any other questions?
14:42:01 agree with the above limitations
14:42:46 none from me
14:43:18 I'm good
14:43:19 so just to make it explicit, can you vote?
14:43:24 +1
14:43:26 +1
14:43:49 +1
14:43:56 +1
14:44:01 (nothing from me, I proposed it)
14:44:08 so approved
14:44:20 do I need a spec?
14:45:07 good question
14:45:24 A new agent seems like a pretty big change, I believe some sort of design doc would be useful
14:45:30 I agree
14:45:37 as it introduces a new agent-like thing it would be good to write it
14:45:48 I'll push a spec next week
14:45:52 thanks
14:45:57 thx
14:46:00 I'd say given the potential performance concerns at scale, let's give ourselves and the community some time to think about it, and the spec process gives us that
14:46:17 thank you folks! I have nothing else on the agenda
14:46:23 so have a nice weekend!
14:46:29 thx, You too
14:46:31 Bye
14:46:32 #endmeeting
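[Editor's illustration] The proposed neutron-ovn-monitor-agent, as described in the second topic, would use no RPC and instead react to OVSDB row events (the real agent would hold IDL connections to the local OVS DB and the remote OVN databases, e.g. via ovsdbapp). The dispatcher below is a schematic sketch of that event-driven shape, with rows modeled as plain dicts; every name in it is illustrative, not Neutron code.

```python
# Schematic sketch of the no-RPC, event-driven monitor loop discussed in
# the meeting. Rows are modeled as plain dicts; a real agent would get
# them from OVSDB IDL notifications. Illustrative only, not Neutron code.
from collections import defaultdict
from typing import Callable


class OvsdbEventDispatcher:
    """Map (table, event) pairs to handler callbacks."""

    def __init__(self) -> None:
        self._handlers: dict = defaultdict(list)

    def register(self, table: str, event: str,
                 handler: Callable[[dict], None]) -> None:
        # New features can be added by registering more handlers,
        # which is what makes the agent "generic".
        self._handlers[(table, event)].append(handler)

    def dispatch(self, table: str, event: str, row: dict) -> None:
        for handler in self._handlers[(table, event)]:
            handler(row)


applied = []


def apply_hwol_qos(row: dict) -> None:
    # Placeholder for the QoS-with-HWOL workaround: translate the OVS QoS
    # rule onto the offloaded interface, which HWOL drivers do not do yet.
    applied.append(row["name"])


dispatcher = OvsdbEventDispatcher()
dispatcher.register("QoS", "create", apply_hwol_qos)
dispatcher.dispatch("QoS", "create", {"name": "qos-port-1"})
```

Since the agent would only be spawned on hosts that actually need it (for now, HWOL nodes that want QoS), each registered handler is one extra per-node feature rather than a mandatory service.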