14:01:19 <lajoskatona> #startmeeting neutron_drivers
14:01:19 <opendevmeet> Meeting started Fri Sep 30 14:01:19 2022 UTC and is due to finish in 60 minutes.  The chair is lajoskatona. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:19 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:19 <opendevmeet> The meeting name has been set to 'neutron_drivers'
14:01:21 <lajoskatona> o/
14:01:22 <mlavalle> o/
14:01:25 <rubasov> o/
14:01:34 <amotoki> o/
14:01:36 <ralonsoh> hi
14:02:01 <haleyb> hi
14:02:02 <slaweq> hi
14:03:03 <lajoskatona> I think we can start
14:03:10 <lajoskatona> [RFE] Expose Open vSwitch other_config column in the API (#link https://bugs.launchpad.net/neutron/+bug/1990842 )
14:04:10 <lajoskatona> rubasov: perhaps you have some background for this RFE ?
14:04:19 <rubasov> sure
14:04:45 <rubasov> this is basically another telco performance tuning idea
14:05:15 <rubasov> the biggest drawback is clearly that it would expose backend specific info in the API
14:05:29 <rubasov> but I could not come up with an idea to make this backend agnostic
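(For readers without the RFE context: the knob rubasov refers to lives in the other_config column of the OVS Interface table, and today it has to be applied by hand on each hypervisor. A minimal sketch of that manual step, assuming OVS 3.0+ for the tx-steering option and a hypothetical tap device name:

    import subprocess

    # Hypothetical interface name; in practice this would be the tap or
    # representor device backing the Neutron port.
    iface = "tap0123abcd-ef"

    # tx-steering is a per-interface other_config option in OVS 3.0+; settings
    # of this kind are what the RFE wants to drive through the Neutron API.
    subprocess.run(
        ["ovs-vsctl", "set", "Interface", iface, "other_config:tx-steering=hash"],
        check=True,
    )
)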
14:05:43 <slaweq> would it be exposed for everyone or admin only?
14:05:47 <rubasov> so instead tried to go the very explicit way
14:06:17 <rubasov> I believe binding_profile is accessible for every user
14:06:23 <lajoskatona> binding-profile is admin only, isn't it?
14:07:28 <amotoki> yes, it is only for admin by the default policy.
14:07:37 <slaweq> admin only, yes
14:07:40 <slaweq> I just checked it
14:07:48 <rubasov> sorry for the misinformation then :-)
14:07:55 <slaweq> so IMO it's not that big a deal if we expose it for admin users
14:08:00 <ralonsoh> we should avoid writing in the binding profile, this should be populated only by Nova
14:08:45 <slaweq> maybe we can add another field to the port?
14:08:48 <ralonsoh> if we want to add a specific port config, we can create a new extension and pass this in other parameter, not the binding profile
14:08:52 <slaweq> like "extra_config" or something like that
14:08:55 <slaweq> or "tunables"
14:09:03 <lajoskatona> slaweq: thanks, I did the same to be sure :-)
14:09:20 <slaweq> and then each backend may use it for something else if needed
14:09:30 <rubasov> I am okay with that, though it's new to me that binding_profile is exclusive to nova
14:09:46 <lajoskatona> sounds reasonable, as binding profile seems sometimes overloaded
14:09:46 <mlavalle> yeah, that makes it a bit more agnostic
14:10:08 <ralonsoh> rubasov, we have been using it incorrectly, but this field should be populated only by Nova
14:10:29 <rubasov> okay, that explains my vague memories
14:10:58 <rubasov> and also okay, I can come up with a new port attribute for this
14:11:29 <lajoskatona> In this case we should have a spec with the details
14:12:18 <ralonsoh> do we want to implement a generic field for this, that could be used in the future for other features?
14:13:02 <lajoskatona> ralonsoh: you mean to write any dict to other-config or similar?
14:13:20 <ralonsoh> e.g.: port[extra_config] = {ovs_other_config: {}, 'future_other_config': {} }
14:13:32 <rubasov> and I guess if the new field is also admin only, then we can expose the whole other_config, not just tx_steering
14:13:51 <ralonsoh> right
14:13:54 <rubasov> or port[backend_tuning] = {ovs: ..., ...}
14:14:14 <ralonsoh> that's a good proposal too
14:14:27 <ralonsoh> but yes, we need a spec for this, I think
14:14:55 <slaweq> yes, it should be admin only
14:14:57 <rubasov> sure, spec is the next step if drivers agree with the basic idea
14:15:26 <slaweq> and if we are going to add new field then question about api extension is already answered - yes, we need extension :)
14:15:35 <rubasov> okay :-)
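(To make the two candidate shapes above concrete, here is a sketch of the request body an admin could eventually send; both attribute names are placeholders from the discussion, not existing Neutron API fields, and the final shape is left to the spec:

    # Sent as PUT /v2.0/ports/{port_id} once the new (admin-only) extension exists.
    candidate_bodies = [
        # ralonsoh's shape: one generic field with backend-specific sub-keys
        {"port": {"extra_config": {"ovs_other_config": {"tx-steering": "hash"}}}},
        # rubasov's shape: keyed by backend name
        {"port": {"backend_tuning": {"ovs": {"other_config": {"tx-steering": "hash"}}}}},
    ]
)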
14:15:48 <lajoskatona> Yeah, let's vote to have the formalities then
14:16:04 <lajoskatona> +1 from me for new extension and port field
14:16:08 <lajoskatona> and spec :-)
14:16:13 <amotoki> +1 for adding a new extension for a port field.
14:16:33 <mlavalle> +1
14:16:39 <haleyb> +1
14:16:41 <ralonsoh> +1
14:17:26 <slaweq> +1
14:17:49 <lajoskatona> Thanks, I will update the RFE, and thanks rubasov for proposing this topic
14:18:09 <lajoskatona> I have one more (I hope) small thing which was not in the agenda originally
14:18:15 <mlavalle> it seems we all love our telco users
14:18:26 <lajoskatona> Add new vif type linkvirt_ovs https://review.opendev.org/c/openstack/neutron-lib/+/859573
14:18:32 <slaweq> mlavalle yeah, we have to :)
14:18:35 <lajoskatona> nova-spec: https://review.opendev.org/c/openstack/nova-specs/+/859290
14:18:37 <rubasov> if anyone has opinions on the discussion topics left in the launchpad ticket, please share there even before I upload the spec
14:19:06 <rubasov> and thanks everyone for your attention
14:19:13 <mlavalle> slaweq: come on, it's not that we have to. We genuinely love them ;-)
14:19:24 <slaweq> 😀
14:19:30 <haleyb> they pay the bills :)
14:19:44 <slaweq> haleyb exactly ;)
14:20:43 <lajoskatona> ok, so the small topic is "Add new vif type linkvirt_ovs", which is for the Napatech SmartNIC
14:21:01 <lajoskatona> They proposed a neutron-lib change: https://review.opendev.org/c/openstack/neutron-lib/+/859573
14:21:12 <lajoskatona> and they have a nova spec:  https://review.opendev.org/c/openstack/nova-specs/+/859290
14:21:37 <lajoskatona> As I checked, for similar changes we had no spec or RFE in the past (at least from what I found)
14:22:07 <lajoskatona> the question is: is it enough that the nova-spec describes this change, or shall we ask for some formal writing for neutron too?
14:22:23 <slaweq> IMHO it's enough
14:22:44 <lajoskatona> I found the Netronome SmartNIC as a similar project: https://review.opendev.org/q/topic:netronome-smartnic-enablement
14:22:50 <lajoskatona> slaweq: ok, thanks
14:23:27 <ralonsoh> if each manufacturer is going to create its own plugin method, we'll have a problem in Nova-Neutron
14:23:36 <lajoskatona> then, let's continue with it on the n-lib change review
14:23:48 <slaweq> ralonsoh why?
14:23:59 <ralonsoh> because we'll need to create a plug method for each one
14:24:04 <ralonsoh> at least in os-vif
14:24:15 <ralonsoh> and probably in the ML2 plugin, like with smartnics
14:24:44 <ralonsoh> and, of course, we can't test it in the CI
14:25:04 <ralonsoh> anyway, this is rant, nothing else
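(For context on ralonsoh's point: every new vif_type generally needs a matching plug/unplug implementation on the Nova side. A purely illustrative os-vif plugin skeleton, with a hypothetical class name and not the proposed linkvirt_ovs code:

    from os_vif import plugin


    class HypotheticalLinkvirtPlugin(plugin.PluginBase):
        """Illustrative only: a vendor-specific vif_type typically implies a
        plugin like this, registered in the 'os_vif' entry point namespace."""

        def describe(self):
            # A real plugin returns a HostPluginInfo describing the VIF object
            # types and versions it supports; omitted in this sketch.
            raise NotImplementedError()

        def plug(self, vif, instance_info):
            # Attach the VIF to the backend, e.g. add the NIC representor
            # port to the integration bridge.
            raise NotImplementedError()

        def unplug(self, vif, instance_info):
            # Undo whatever plug() did.
            raise NotImplementedError()
)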
14:25:22 <lajoskatona> ralonsoh: thanks
14:25:59 <lajoskatona> ralonsoh: good to keep in mind that we could have problems with the integration of other hardware like this
14:26:41 <lajoskatona> that's it from me, thanks for the discussion of this topic
14:26:49 <lajoskatona> #topic On Demand Agenda
14:26:56 <lajoskatona> Do you have anything to discuss?
14:26:58 <liuyulong> Hi, there
14:27:06 <lajoskatona> liuyulong: Hi
14:27:08 <lajoskatona> welcome
14:27:15 <liuyulong> I have a technical question.
14:27:21 <slaweq> ralonsoh ok, I see now
14:27:43 <liuyulong> Something about smart NIC offloading and the neutron tunnel bridge.
14:28:14 <liuyulong> Since neutron can support vlan and vxlan tunnels on one and the same host
14:29:03 <liuyulong> have you guys played with smartnics for vlan and vxlan offloading on the same host?
14:29:25 <liuyulong> We have tested the neutron ovs bridge topology.
14:29:33 * slaweq needs to drop for another meeting, sorry
14:29:40 <slaweq> have a great weekend!
14:29:41 <slaweq> o/
14:29:59 <lajoskatona> slaweq: Bye
14:30:00 <liuyulong> But the vxlan ports are set in br-tunning, while some NICs may need the tunnel to be set in br-int.
14:30:03 <mlavalle> o/
14:30:10 <rubasov> I have a colleague who wanted to, but had the problem that nova unconditionally chooses the vlan segment, while he wanted offload on the vxlan segment
14:30:22 <liuyulong> So I wonder if we can move the tunnels to br-int instead.
14:30:51 <ralonsoh> sorry, br-tunning is the bridge for tunneled networks?
14:31:40 <liuyulong> yes, we tested it, but the vxlan UDP 4789 packets cannot be transmitted to the vxlan tunnel in br-tunning.
14:31:41 <rubasov> (but his problem was only present when using multi-segment networks)
14:32:48 <ralonsoh> but why can't you transmit those packets to the tunnel bridge?
14:32:56 <liuyulong> #link https://github.com/openvswitch/ovs/blob/master/Documentation/howto/userspace-tunneling.rst
14:33:08 <liuyulong> these are the vtep IP and bridge settings.
14:33:28 <liuyulong> For vxlan offloading, we tested it with the same configuration.
14:33:59 <liuyulong> Not sure why it does not work, maybe it's an ovs/ovs-dpdk bug.
14:34:17 <liuyulong> #link https://docs.nvidia.com/networking/pages/viewpage.action?pageId=80592948#OVSOffloadUsingASAP%C2%B2Direct-OffloadingVXLANEncapsulation/DecapsulationActions
14:34:52 <ralonsoh> ok, so the problem is that you are using HW offload with DPDK
14:34:53 <liuyulong> This is the NIC vendor's setting, it is basically the same as the OVS guides.
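(For reference, the OVS userspace-tunneling howto linked above does attach the vxlan port directly to br-int rather than to a separate tunnel bridge, which matches the topology liuyulong later reports as working. A condensed sketch of its setup steps, with a hypothetical PCI address, VTEP addresses and device names, assuming br-int already exists as a netdev bridge:

    import subprocess

    def sh(*cmd):
        # Small helper for the sketch: run a command and fail loudly.
        subprocess.run(cmd, check=True)

    # Physical bridge carrying the VTEP IP, with the DPDK uplink attached.
    sh("ovs-vsctl", "add-br", "br-phy", "--", "set", "Bridge", "br-phy", "datapath_type=netdev")
    sh("ovs-vsctl", "add-port", "br-phy", "p0", "--",
       "set", "Interface", "p0", "type=dpdk", "options:dpdk-devargs=0000:03:00.0")
    sh("ip", "addr", "add", "172.16.1.1/24", "dev", "br-phy")
    sh("ip", "link", "set", "br-phy", "up")

    # Note: the howto adds the vxlan port to br-int, not to a separate br-tun.
    sh("ovs-vsctl", "add-port", "br-int", "vxlan0", "--",
       "set", "interface", "vxlan0", "type=vxlan", "options:remote_ip=172.16.1.2")
)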
14:35:12 <ralonsoh> in this case you need to manually use the trick used in DPDK
14:35:37 <ralonsoh> sending the tunneled traffic through the physical bridge
14:36:06 <ralonsoh> because you can't send the tunneled traffic to the kernel (you'll lose the advantage of DPDK)
14:36:39 <liuyulong> ovs-dpdk with rte_flow to offload to the NIC eswitch.
14:37:12 <liuyulong> The topology that works is: PF -> br-physical -> br-int (with vxlan ports)
14:37:47 <liuyulong> The topology that does not work: PF -> br-physical -> br-int -> br-tun (with vxlan ports)
14:38:34 <ralonsoh> this is not the architecture (but I would need to check that)
14:38:45 <ralonsoh> PF -> br-phy -> br-tun -> br-int
14:38:56 <liuyulong> PF -> br-physical -> br-int (with vxlan ports): in that case, packets from both vlan tenant networks and vxlan networks can get offloaded.
14:39:03 <ralonsoh> you need to send the traffic to br-tun for the VXLAN-VLAN conversion
14:40:14 <liuyulong> ralonsoh, yes, I want to send packets to br-tun in the 2nd topology. But it does not work.
14:41:34 <ralonsoh> I can't find the document now, but please check how it was described in networking-ovs-dpdk
14:42:00 <liuyulong> So, my question is whether it is acceptable to move the vxlan tunnel ports to br-int with some configuration if users want to use offloading features.
14:42:26 <ralonsoh> why? you don't need that
14:42:27 <liuyulong> ralonsoh, you can see the vendor's page.
14:43:16 <liuyulong> Because "the topology that does not work: PF -> br-physical -> br-int -> br-tun (with vxlan ports)"
14:43:26 <ralonsoh> PF -> br-phy -> br-tun -> br-int
14:43:36 <ralonsoh> check how is done now with DPDK and tunneled networks
14:44:05 <liuyulong> Then with "PF -> br-phy -> br-tun -> br-int", how do the vlan networks go?
14:44:25 <ralonsoh> via br-phy
14:44:32 <liuyulong> To where?
14:44:42 <lajoskatona> to br-int from br-phy, no?
14:44:55 <liuyulong> Then a loop is created.
14:45:19 <ralonsoh> you can use a second phy bridge for this tunneled traffic
14:45:39 <liuyulong> But packets can only be offloaded to ONE single NIC...
14:45:58 <ralonsoh> ok, so you can have one network
14:46:05 <ralonsoh> choose one: vlan or tunneled
14:46:59 <liuyulong> But why? Neutron natively supports vlan and vxlan on the same node at the same time.
14:47:11 <ralonsoh> but not via the same interface
14:48:26 <liuyulong> ovs-dpdk cannot offload flows to multiple NICs.
14:49:27 <ralonsoh> maybe you can use nic partitioning
14:49:33 <ralonsoh> if that is supported in that HW
14:49:39 <mlavalle> lajoskatona: maybe time to close the meeting? This conversation can continue in the channel anyway
14:49:44 <ralonsoh> right
14:49:48 <liuyulong> OK
14:50:14 <lajoskatona> mlavalle: yeah, thanks
14:50:38 <lajoskatona> ralonsoh, liuyulong: thanks for the discussion, but let's close the meeting to respect others' time
14:50:38 <liuyulong> I have no idea about nic partitioning. I will google it. But I don't think it can work in the offloading case.
14:50:43 <liuyulong> Sure
14:50:59 <liuyulong> Thanks for the time.
14:51:03 <lajoskatona> Thanks for attending the meeting
14:51:07 <lajoskatona> #endmeeting