14:01:19 #startmeeting neutron_drivers
14:01:19 Meeting started Fri Sep 30 14:01:19 2022 UTC and is due to finish in 60 minutes. The chair is lajoskatona. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:19 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:19 The meeting name has been set to 'neutron_drivers'
14:01:21 o/
14:01:22 o/
14:01:25 o/
14:01:34 o/
14:01:36 hi
14:02:01 hi
14:02:02 hi
14:03:03 I think we can start
14:03:10 [RFE] Expose Open vSwitch other_config column in the API (#link https://bugs.launchpad.net/neutron/+bug/1990842 )
14:04:10 rubasov: perhaps you have some background for this RFE?
14:04:19 sure
14:04:45 this is basically another telco performance tuning idea
14:05:15 the biggest drawback is clearly that it would expose backend-specific info in the API
14:05:29 but I could not come up with an idea to make this backend agnostic
14:05:43 would it be exposed for everyone or admin only?
14:05:47 so instead I tried to go the very explicit way
14:06:17 I believe binding_profile is accessible to every user
14:06:23 binding-profile is admin only, isn't it?
14:07:28 yes, it is admin only by the default policy.
14:07:37 admin only, yes
14:07:40 I just checked it
14:07:48 sorry for the misinformation then :-)
14:07:55 so IMO it's not that big a deal if we expose it to admin users
14:08:00 we should avoid writing to the binding profile, this should be populated only by Nova
14:08:45 maybe we can add another field to the port?
14:08:48 if we want to add a specific port config, we can create a new extension and pass this in another parameter, not the binding profile
14:08:52 like "extra_config" or something like that
14:08:55 or "tunables"
14:09:03 slaweq: thanks, I did the same to be sure :-)
14:09:20 and then each backend may use it for something else if needed
14:09:30 I am okay with that, though it's new to me that binding_profile is exclusive to Nova
14:09:46 sounds reasonable, as binding_profile sometimes seems overloaded
14:09:46 yeah, that makes it a bit more agnostic
14:10:08 rubasov, we have been using it incorrectly, but this field should be populated only by Nova
14:10:29 okay, that explains my vague memories
14:10:58 and also okay, I can come up with a new port attribute for this
14:11:29 In this case we should have a spec with the details
14:12:18 do we want to implement a generic field for this, that could be used in the future for other features?
14:13:02 ralonsoh: you mean to write any dict to other-config or similar?
14:13:20 e.g.: port[extra_config] = {'ovs_other_config': {}, 'future_other_config': {}}
14:13:32 and I guess if the new field is also admin only, then we can expose the whole other_config, not just tx_steering
14:13:51 right
14:13:54 or port[backend_tuning] = {ovs: ..., ...}
14:14:14 that's a good proposal too
14:14:27 but yes, we need a spec for this, I think
14:14:55 yes, it should be admin only
14:14:57 sure, spec is the next step if drivers agree with the basic idea
14:15:26 and if we are going to add a new field then the question about an API extension is already answered - yes, we need an extension :)
14:15:35 okay :-)
14:15:48 Yeah, let's vote to have the formalities then
14:16:04 +1 from me for a new extension and port field
14:16:08 and spec :-)
14:16:13 +1 for adding a new extension for a port field.
14:16:33 +1
14:16:39 +1
14:16:41 +1
14:17:26 +1
14:17:49 Thanks, I will update the RFE, and thanks rubasov for proposing this topic
14:18:09 I have one more (I hope) small thing which was not in the agenda originally
14:18:15 it seems we all love our telco users
14:18:26 Add new vif type linkvirt_ovs https://review.opendev.org/c/openstack/neutron-lib/+/859573
14:18:32 mlavalle: yeah, we have to :)
14:18:35 nova-spec: https://review.opendev.org/c/openstack/nova-specs/+/859290
14:18:37 if anyone has opinions on the discussion topics left in the launchpad ticket, please share them there even before I upload the spec
14:19:06 and thanks everyone for your attention
14:19:13 slaweq: come on, it's not that we have to. We genuinely love them ;-)
14:19:24 😀
14:19:30 they pay the bills :)
14:19:44 haleyb: exactly ;)
14:20:43 ok, so the small topic is "Add new vif type linkvirt_ovs", which is for a Napatech smartnic
14:21:01 They proposed a neutron-lib change: https://review.opendev.org/c/openstack/neutron-lib/+/859573
14:21:12 and they have a nova spec: https://review.opendev.org/c/openstack/nova-specs/+/859290
14:21:37 As I checked, for similar changes we had no spec or RFE in the past (at least not that I found)
14:22:07 the question is: is it enough that the nova spec describes this change, or shall we ask for some formal writing for neutron too?
14:22:23 IMHO it's enough
14:22:44 I found the Netronome smartnic as a similar project: https://review.opendev.org/q/topic:netronome-smartnic-enablement
14:22:50 slaweq: ok, thanks
14:23:27 if each manufacturer is going to create its own plugin method, we'll have a problem in Nova-Neutron
14:23:36 then, let's continue with it on the n-lib change review
14:23:48 ralonsoh: why?
14:23:59 because we'll need to create a plug method for each one
14:24:04 at least in os-vif
14:24:15 and probably in the ML2 plugin, like with smartnics
14:24:44 and, of course, we can't test it in the CI
14:25:04 anyway, this is a rant, nothing else
14:25:22 ralonsoh: thanks
14:25:59 ralonsoh: good to keep in mind that we could have problems with the integration of other hardware like this
14:26:41 that's it from me, thanks for the discussion of this topic
14:26:49 #topic On Demand Agenda
14:26:56 Do you have anything to discuss?
14:26:58 Hi there
14:27:06 liuyulong: Hi
14:27:08 welcome
14:27:15 I have a technical question.
14:27:21 ralonsoh: ok, I see now
14:27:43 Something about smart NIC offloading and the neutron tunnel bridge.
14:28:14 Since neutron can support vlan and vxlan tunnels on one and the same host
14:29:03 did you guys play with smartnics for vlan and vxlan offloading on the same host?
14:29:25 We have tested the neutron ovs bridge topology.
14:29:33 * slaweq needs to drop for another meeting, sorry
14:29:40 have a great weekend!
14:29:41 o/
14:29:59 slaweq: Bye
14:30:00 But the vxlan ports are set in br-tun, while some NICs may need the tunnel to be set in br-int.
14:30:03 o/
14:30:10 I have a colleague who wanted to, but had problems because nova unconditionally chooses the vlan segment, while he wanted to offload on the vxlan segment
14:30:22 So I wonder if we can move the tunnels to br-int instead.
14:30:51 sorry, br-tun is the bridge for tunneled networks?
14:31:40 yes, we tested it, but the vxlan UDP 4789 packets cannot be transmitted to the vxlan tunnel in br-tun.
14:31:41 (but his problem was only present when using multi-segment networks)
14:32:48 but why can't you transmit those packets to the tunnel bridge?
14:32:56 #link https://github.com/openvswitch/ovs/blob/master/Documentation/howto/userspace-tunneling.rst
14:33:08 these are the vtep IP and bridge settings.
14:33:28 For vxlan offloading, we tested it with the same configuration.
14:33:59 Not sure why it does not work, maybe it's an ovs/ovs-dpdk bug.
14:34:17 #link https://docs.nvidia.com/networking/pages/viewpage.action?pageId=80592948#OVSOffloadUsingASAP%C2%B2Direct-OffloadingVXLANEncapsulation/DecapsulationActions
14:34:52 ok, so the problem is that you are using HW offload with DPDK
14:34:53 This is the NIC vendor's setting, it is basically the same as the OVS guides.
14:35:12 in this case you need to manually use the trick used in DPDK
14:35:37 sending the tunneled traffic through the physical bridge
14:36:06 because you can't send the tunneled traffic to the kernel (you'll lose the advantage of DPDK)
14:36:39 ovs-dpdk with rte_flow to offload to the NIC eswitch.
14:37:12 The topology that can work is: PF -> br-phy -> br-int (with vxlan ports)
14:37:47 The topology that cannot work: PF -> br-phy -> br-int -> br-tun (with vxlan ports)
14:38:34 this is not the architecture (but I would need to check that)
14:38:45 PF -> br-phy -> br-tun -> br-int
14:38:56 PF -> br-phy -> br-int (with vxlan ports): in such a case, packets from both vlan tenant networks and vxlan networks can get offloaded.
14:39:03 you need to send the traffic to br-tun for the VXLAN-VLAN conversion
14:40:14 ralonsoh: yes, I want to send packets to br-tun in the 2nd topology. But it does not work.
14:41:34 I can't find the document now, but please check how it was described in networking-ovs-dpdk
14:42:00 So my question would be: is it acceptable to move the vxlan tunnel ports to br-int with some configuration if users want to use offloading features?
14:42:26 why? you don't need that
14:42:27 ralonsoh: you can see the vendor's page.
14:43:16 Because "the topology that cannot work: PF -> br-phy -> br-int -> br-tun (with vxlan ports)"
14:43:26 PF -> br-phy -> br-tun -> br-int
14:43:36 check how it is done now with DPDK and tunneled networks
14:44:05 Then with "PF -> br-phy -> br-tun -> br-int", how do vlan networks go?
14:44:25 via br-phy
14:44:32 To where?
14:44:42 to br-int from br-phy, no?
14:44:55 Then a loop is created.
14:45:19 you can use a second phy bridge for this tunneled traffic
14:45:39 But offloading packets can only go to ONE single NIC...
14:45:58 ok, so you can have one network
14:46:05 choose one: vlan or tunneled
14:46:59 But why? Neutron natively supports vlan and vxlan on the same node at the same time.
14:47:11 but not via the same interface
14:48:26 ovs-dpdk cannot offload flows to multiple NICs.
14:49:27 maybe you can use NIC partitioning
14:49:33 if that is supported in that HW
14:49:39 lajoskatona: maybe time to close the meeting? This conversation can continue in the channel anyway
14:49:44 right
14:49:48 OK
14:50:14 mlavalle: yeah, thanks
14:50:38 ralonsoh, liuyulong: thanks for the discussion, but let's close the meeting to respect others' time
14:50:38 I have no idea about NIC partitioning. I will google it. But I don't think it can work in the offloading case.
14:50:43 Sure
14:50:59 Thanks for the time.
14:51:03 Thanks for attending the meeting
14:51:07 #endmeeting
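
A minimal sketch of the port payload shape floated during the RFE discussion above. The attribute names (extra_config / backend_tuning), the "ovs" key and the tx-steering value are only illustrations taken from the meeting; the real attribute, nesting and policy will be defined by the future spec and API extension, and the field is expected to be admin-only by default.

# Hypothetical only: nothing below exists in the Neutron API today.
import json

# Example PUT body for /v2.0/ports/{port_id}, carrying backend-specific
# tuning grouped under a per-backend key so the API stays backend agnostic;
# the "ovs" values would end up in the Interface's other_config column.
port_update = {
    "port": {
        "backend_tuning": {
            "ovs": {
                "other_config": {"tx-steering": "hash"},
            },
        },
    },
}

print(json.dumps(port_update, indent=2))

Grouping the tuning under a per-backend key (rather than exposing other_config directly) is what makes it possible for other backends to reuse the same field later, as suggested in the discussion.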
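
For the offloading discussion, the bridge wiring from the linked OVS userspace-tunneling howto can be sketched roughly as below. This is illustrative only, not Neutron agent code: it assumes OVS-DPDK, bridge names br-phy/br-int, a DPDK NIC at PCI address 0000:01:00.0 and placeholder VTEP addresses, all of which must be adapted to the actual deployment.

import subprocess

def run(cmd):
    # Print and execute one command; requires root and a running OVS with DPDK.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# The physical bridge owns the DPDK uplink and carries the local VTEP IP,
# so the encapsulated VXLAN/UDP 4789 packets leave via the userspace datapath.
run(["ovs-vsctl", "add-br", "br-phy", "--",
     "set", "bridge", "br-phy", "datapath_type=netdev"])
run(["ovs-vsctl", "add-port", "br-phy", "dpdk-p0", "--",
     "set", "Interface", "dpdk-p0", "type=dpdk",
     "options:dpdk-devargs=0000:01:00.0"])
run(["ip", "addr", "add", "172.16.1.1/24", "dev", "br-phy"])
run(["ip", "link", "set", "br-phy", "up"])

# The howto attaches the vxlan port directly to br-int (no separate br-tun),
# which is essentially the topology liuyulong asked about in the meeting.
run(["ovs-vsctl", "add-port", "br-int", "vxlan0", "--",
     "set", "interface", "vxlan0", "type=vxlan",
     "options:remote_ip=172.16.1.2"])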