17:00:13 #startmeeting service_chaining
17:00:14 Meeting started Thu Jul 7 17:00:13 2016 UTC and is due to finish in 60 minutes. The chair is cathy_. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:18 The meeting name has been set to 'service_chaining'
17:00:19 hi
17:00:22 hi
17:00:26 scsnow: hi
17:00:30 fsunaval: hi
17:00:34 hi everyone
17:00:34 hi
17:00:39 hi
17:00:57 hi
17:01:06 hi
17:01:27 Let's start
17:02:03 I will start with the second topic on the agenda since the first topic needs Mohan's feedback and he has not joined yet
17:02:19 #topic OVN driver for networking-sfc spec and implementation
17:02:46 LouisF: doonhammer how is this going?
17:03:29 Sorry, been down with the flu and way behind on this and my day job :-(
17:03:30 It seems to be going well and just needs some final touches to be merged, right?
17:03:35 cathy_: spec patch under review
17:03:40 https://review.openstack.org/#/c/333172/
17:04:16 doonhammer: Oh, hope you feel well now.
17:04:37 cathy: thanks, recovered
17:05:18 there have been some comments on it but no major issues
17:05:38 LouisF: OK, thanks.
17:05:40 please review
17:05:41 I need to talk some more with regXboi and resolve the issues on the ovs/ovn side
17:06:17 I believe Juno has been doing work on this also
17:06:27 doonhammer: Ok, thanks for the update. I see people are working actively on this on the OVN side
17:06:38 LouisF: yes, he is on PTO this week
17:07:11 #topic Add "NSH" support----'correlation-nsh' in the chain-parameters
17:07:12 i will ping him to review the latest spec patch (3)
17:07:15 It is working on the ovs/ovn side, just need to get the arch cleaned up
17:07:55 igordcard: I know you are working on this. Maybe you would like to give the team a quick update on the progress?
17:08:20 cathy_: yeah - so I've been working on refactoring the non-ovs-specific code towards the plugin itself
17:08:41 i.e.
mainly move the SFP generation/management code and models up to the plugin
17:08:50 igordcard: great
17:09:11 that is kind of a pre-requisite to the NSH enablement due to the close relationship with SFPs, so we should get that right first before actually enabling the NSH datapath
17:09:18 I have a question re NSH. There is an option to enable nsh on the agent side:
17:09:19 agent_opts = [
17:09:19     cfg.StrOpt('sfc_encap_mode', default='mpls',
17:09:19                help=_("The encapsulation mode of sfc.")),
17:09:19 ]
17:09:54 What is the relationship between this option and the correlation in the chain params?
17:10:19 it's taking long indeed since a lot of changes must be made (some models from the ovs driver had to be split in half, with one half moved to the plugin) - as soon as this work is ready for a first review I'll submit it to gerrit, and then actually start on the sfc_encap_mode work
17:10:30 scsnow: this option maps to the correlation method of the chain params
17:10:54 Currently we only support MPLS; we are adding another option, "nsh"
17:11:13 So what if sfc_encap_mode=nsh and correlation=mpls?
17:11:52 sfc_encap_mode should match the correlation
17:11:57 scsnow: cathy_ perhaps the option can be deprecated in the near future in favor of the driver appropriately configuring the agent?
17:12:38 igordcard: agree
17:12:53 igordcard, exactly. I think that this option should be obsolete.
17:14:08 scsnow: LouisF igordcard agree.
17:14:23 scsnow: definitely obsolete
17:14:24 We should have only one place to specify this.
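The mismatch scsnow raises (sfc_encap_mode=nsh on the agent while a chain asks for correlation=mpls) can be sketched as a simple consistency check. This is an illustrative sketch only, not the actual networking-sfc code: the function name `check_encap` and its signature are hypothetical, and the real agent reads `sfc_encap_mode` via oslo.config as in the `agent_opts` paste above.

```python
# Hypothetical sketch of the agent-vs-chain encap consistency problem
# discussed above; names are illustrative, not the networking-sfc API.

SUPPORTED_ENCAPS = {'mpls', 'nsh'}

def check_encap(agent_encap_mode, chain_parameters):
    """Reject a chain whose 'correlation' chain-parameter does not match
    the agent-wide sfc_encap_mode (e.g. sfc_encap_mode=nsh with
    correlation=mpls, the case scsnow asks about)."""
    correlation = chain_parameters.get('correlation', 'mpls')
    if correlation not in SUPPORTED_ENCAPS:
        raise ValueError('unknown correlation: %s' % correlation)
    if correlation != agent_encap_mode:
        raise ValueError(
            'chain correlation %s does not match agent sfc_encap_mode %s'
            % (correlation, agent_encap_mode))
    return correlation
```

Deprecating the agent option, as igordcard suggests, would remove the need for this check entirely: the driver would configure the agent per chain instead of requiring a global match.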
17:14:29 but I'm wondering whether it's feasible to support both mpls and nsh chains simultaneously in agent code
17:15:03 I mean some chains may use nsh encap, while others use mpls
17:15:12 in terms of the sfp generation work more specifically, port-chains are being created with the correct DB data at the moment; I'm just correcting the communication of that data to the agent now, so create-port-chain is almost working again after the refactoring (then I still need to check the other CRUD methods and the unit tests)
17:15:18 scsnow: the encap method is per chain
17:15:36 scsnow: so the user can specify mpls for one chain and NSH for another chain
17:15:54 scsnow: should be doable as soon as OVS has NSH support
17:15:57 scsnow: would there be a use case for that?
17:16:20 I will create a bug to remove the agent_opts for encap which comes from the config.
17:16:41 LouisF, I have no idea who is going to use such a case
17:16:54 #action cathy create a bug to remove the agent_opts for encap which comes from the config
17:17:23 LouisF, but this is about how things should work if the encap type is per chain
17:17:47 scsnow: yes, that is how it works today. encap type per chain
17:19:18 yes, but why would both types be needed simultaneously?
17:19:58 LouisF: as in 1 chain with 2 encaps?
17:20:04 igordcard: For backward compatibility we should keep the mpls encap there. Also some users might select to use it. No harm in keeping it.
17:20:21 no, different encaps for different chains
17:20:22 igordcard: not 1 chain with 2 encaps. one encap per chain
17:20:48 cathy, LouisF: are you considering different transport types between the switches (VxLAN-GPE+NSH) and between the switch and the VM (Ethernet+NSH)?
(even if we are using the same SFC encapsulation)
17:20:58 cathy_: yeah, I will keep mpls encap as is
17:21:16 juanmafg: yes
17:22:00 juanmafg: we should allow different external tunnel transports although currently we only support VXLAN
17:22:15 LouisF, yes, it's a bit crazy to have both mpls and nsh chains configured
17:22:21 cathy_: juanmafg that is an interesting discussion to have... we also need to look at how that fits with existing neutron ml2 vxlan segments
17:22:49 igordcard: yes, that is what we need to think about.
17:22:50 cathy_: well, this is also an interesting topic when it comes to OVS
17:23:12 juanmafg: depends on the nsh patches
17:23:19 juanmafg: yes
17:24:04 cathy_, LouisF: are you using Yi Yang's patch for your tests?
17:24:15 juanmafg: in OVS, a patch of "ethernet+NSH" is being proposed
17:24:17 scsnow: my feeling is to support both nsh and mpls, but simultaneously in different chains
17:24:26 but not
17:24:43 igordcard, did you try the VxLAN-GPE+NSH case for a multi-node setup?
17:25:05 scsnow: no, I will start simple with Ethernet+NSH though
17:25:06 juanmafg: hopefully that patch can get approved and merged soon, then we can have the NSH encap enabled on our API. As to external transport, we will first use VXLAN.
17:25:16 scsnow: but I'll keep that in mind
17:25:23 igordcard, ok
17:25:49 igordcard, I'm asking because I did try that and it does not work for me.
17:26:52 scsnow: oh, so you set up a multi-node env only made of OVS switches (without openstack) and attempted VXLAN-GPE+NSH communication?
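The "one encap per chain" model LouisF and cathy_ describe can be sketched as follows. This is a hypothetical in-memory model, not the real networking-sfc client or plugin code; `make_port_chain` and the dict shapes are illustrative. In the actual API the correlation is carried in a port-chain's chain-parameters, defaulting to mpls.

```python
# Hypothetical model of "encap type per chain": each port-chain carries
# its own 'correlation' chain-parameter, so mpls and nsh chains can
# coexist in one deployment (as discussed above).

def make_port_chain(name, port_pair_groups, correlation='mpls'):
    if correlation not in ('mpls', 'nsh'):
        raise ValueError('unsupported correlation: %s' % correlation)
    return {'name': name,
            'port_pair_groups': port_pair_groups,
            'chain_parameters': {'correlation': correlation}}

# Two chains with different encaps, the scenario scsnow asks about:
chain_a = make_port_chain('chain-a', ['ppg1'], correlation='mpls')
chain_b = make_port_chain('chain-b', ['ppg2'], correlation='nsh')
```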
17:27:00 scsnow: better try the "ethernet+NSH"
17:27:08 cathy_: I'm more involved in ODL and they have a similar situation with OVS
17:27:16 scsnow: since that is what is being targeted into OVS
17:27:28 cathy_, ethernet+nsh is suitable only for the single-node case as far as I understand
17:27:50 igordcard, correct
17:27:58 scsnow: not really, through an external VXLAN tunnel it can apply to multi-node
17:28:16 scsnow: vxlan+ethernet+NSH
17:28:54 scsnow: :/ shoot me an email with the details, I'll be interested in that, but for n-sfc specifically I will start only with ethernet+nsh
17:29:01 cathy_, correct. but ovs with yi's patches does not support such a case
17:29:14 juanmafg: another interesting point is the mechanism between OVS and the SF which does not support the SFC encap (eg. NSH). Any discussion on this in ODL?
17:29:43 igordcard, will do
17:29:48 cathy_: do you mean SFC proxy?
17:29:54 scsnow: I know Yi or someone else in that team has completed a new patch of vxlan+ethernet+NSH
17:30:18 scsnow: They sent me an email saying they will post it on OVS for review. You may want to check that
17:30:20 juanmafg: yes
17:30:22 cathy_: vxlan-gpe+ethernet_nsh?
17:30:41 cathy_, Ok, I'll check the ovs mailing list for these patches
17:30:58 LouisF, cathy_: ODL is trying to secure the NSH support first in OVS
17:31:25 juanmafg: yes, I am wondering what the mechanism is if the SF proxy does not want to do reclassification.
17:32:02 igordcard: no, last time I talked with them, it is vxlan+ethernet+NSH
17:32:11 LouisF, cathy_: I know fd.io guys were looking into the NSH proxy support
17:32:24 sorry, forgot about the meeting
17:32:31 igordcard: But I have not got a chance to check the code patch yet. vxlan-gpe will not be accepted by OVS at this stage due to multiple reasons
17:32:44 juanmafg: fd.io is adding nsh support
17:32:59 LouisF: yes
17:33:19 juanmafg: does fd.io have a meeting or something where we can join the discussion?
17:33:55 cathy_: https://wiki.fd.io/view/VPP/Meeting
17:34:07 cathy_: yes they have https://wiki.fd.io/view/NSH_SFC/Meeting
17:34:28 LouisF: that one is for NSH
17:34:38 https://wiki.fd.io/view/NSH_SFC/Meeting
17:34:47 juanmafg: yes
17:35:08 I am interested in the mechanism they will use for an SF proxy that does not do re-classification, since reclassification is "expensive". A correlation mechanism between the packet and the chain ID is needed
17:35:22 LouisF: juanmafg thanks
17:35:52 cathy_: i think the fd.io proxy does reclassification on the N-tuple
17:36:18 cathy_: I know fd.io guys were looking into a stateful NSH proxy, but do not know the details about the implementation
17:36:30 cathy_: perhaps quickly hash a few fields and keep state
17:36:41 igordcard: when we add NSH, we need to implement an SF Proxy on OVS too. Better to come up with a mechanism that does not require reclassification
17:37:13 igordcard: We may also need to consider if the SF changes some of those fields
17:37:34 cathy_: maybe, for a first version it can look similar to how the mpls proxy is done today
17:37:50 igordcard: Ok, that is the minimum :-)
17:38:10 igordcard: the mpls proxy reclassifies on the n-tuple
17:38:44 juanmafg: stateful, not sure what that comes down to. Need to dig into the details on that
17:38:57 LouisF: yes
17:39:16 LouisF: so fd.io also does reclassification
17:39:30 cathy_: if the SF changes fields, and depending on the changes, the sfc proxy will start losing its value
17:39:46 cathy_: I mean not doing reclassification, but I'm not completely sure
17:39:52 Let's think about this more and come up with a good idea for discussion.
17:40:04 cathy_: their work is very recent - still discussing support for md-type-2
17:40:22 juanmafg: that is what I guess. we need to know the details
17:40:52 LouisF: ok, thanks. We are more advanced I believe :-)
17:41:11 Ok, I guess enough on this topic.
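igordcard's "hash a few fields and keep state" idea for a stateful SFC proxy can be sketched as a toy model. This is not OVS or fd.io code; the class, packet dict shape, and field names are all illustrative. The proxy caches the NSH context keyed on the 5-tuple before handing a bare packet to a non-NSH-aware SF, then restores it afterwards without a full reclassification. Note cathy_'s caveat: if the SF rewrites any of the keyed fields, the lookup breaks.

```python
class SfcProxyState:
    """Toy stateful SFC proxy: cache (5-tuple -> NSH service path header)
    when decapsulating towards a non-NSH-aware SF, and restore it on the
    way back instead of reclassifying against the full policy."""

    def __init__(self):
        self._flows = {}

    @staticmethod
    def _key(pkt):
        # Hash a few fields, per igordcard's suggestion. If the SF
        # rewrites any of these (cathy_'s concern), correlation is lost.
        return (pkt['src_ip'], pkt['dst_ip'], pkt['proto'],
                pkt['src_port'], pkt['dst_port'])

    def decap(self, pkt):
        """Before the SF: remember the NSH context, hand over a bare packet."""
        nsh = pkt.pop('nsh')          # e.g. {'spi': 42, 'si': 254}
        self._flows[self._key(pkt)] = nsh
        return pkt

    def reencap(self, pkt):
        """After the SF: restore NSH from cached state, decrementing the
        service index as a service hop normally would."""
        nsh = dict(self._flows[self._key(pkt)])
        nsh['si'] -= 1
        pkt['nsh'] = nsh
        return pkt
```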
Let's move on
17:41:33 cathy_: afaik fd.io does not yet support nsh but work is under way
17:41:53 LouisF: OK, thanks.
17:42:00 #topic murano-plugin-networking-sfc repository
17:42:34 I see a new backend plugin in the networking-sfc repo
17:43:11 cathy_: i see https://review.openstack.org/#/c/332025/ is abandoned
17:43:18 Since the ODL and ONOS backend plugins are all in their own repos, I am wondering if we should ask for that code to be in its own repo to be consistent
17:43:46 LouisF: Oh, then never mind
17:44:13 It was abandoned on July 4.
17:44:34 #topic ODL integration with networking-sfc
17:44:55 yamahata: do you know how the work is going?
17:45:09 He's starting to write the driver
17:45:11 The link is at
17:45:30 https://review.openstack.org/#/c/337948/
17:45:36 yamahata: Thanks
17:45:37 It will need several respins
17:45:46 yamahata: sure :-)
17:46:08 Folks, could you help review that patch?
17:46:17 #link https://review.openstack.org/#/c/337948/
17:46:29 yep
17:47:00 #topic Using upstream/base Neutron Open vSwitch Agent
17:47:08 Here is the link to this
17:47:21 #link https://review.openstack.org/334398
17:48:19 The problem is that if we rename, then the OpenStack package building process will not include networking-sfc and we have to add networking-sfc separately
17:48:46 cathy: I've had a quick look. Basically, it is for RPM packaging. We call our agent neutron-openvswitch-agent and so does the original neutron agent.
17:49:22 I think we should change the name of our agent to neutron-sfc-openvswitch-agent....
17:49:33 or similar ..
17:49:42 cathy_: when is the refactoring to an OVS agent extension planned?
17:49:59 If we do not rename, then there is no such issue, but for Neutron the agent will use networking-sfc's version, which will not cause any problem since networking-sfc's agent does not change anything with regard to what Neutron needs
17:50:11 since networking-sfc needs to override some behavior of the standard ovs agent, we use our binary to replace neutron's
17:51:26 fsunaval: but if we change, then if the user wants to use networking-sfc, they have to specifically build the networking-sfc package, right?
17:52:03 cathy: yes, and that is fine.
17:52:13 Of course, when the new OVS agent design and implementation is completed, which allows different features to install flow tables, then we are good forever
17:52:30 Or the user could create a symlink for the networking-sfc openvswitch agent in /usr/local/bin/
17:52:44 csun: yes, that too.
17:53:04 cathy_: cool
17:53:54 csun: fsunaval does that still require the user to do it manually? Can we make it automatically incorporated into the package build for the user?
17:53:59 but the problem is, if we use a specific package name, when the user starts the ovs agent in general, it will start neutron's
17:54:16 and the two binaries cannot start together
17:54:37 georgewang: that means the networking-sfc agent functionality will not be included and SFC does not work, right?
17:54:49 yes
17:55:17 Yes, it needs some manual setup.
17:56:04 csun: Is there a way to incorporate it into the package building process?
17:56:40 Yes, I think we could create a symlink in setup.
17:56:50 cathy: csun and myself will work it out...
17:57:03 csun: fsunaval Thanks!
17:58:20 I guess we are running out of time and we don't have enough time for the next topic. It was a good discussion. Thanks everyone!
17:58:26 bye for now
17:58:29 bye
17:58:33 thank you, bye
17:58:34 bye
17:58:36 bye
17:58:37 bye
17:58:42 bye
17:59:05 #endmeeting
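csun's "symlink in setup" idea from the packaging discussion can be sketched as a small post-install step. This is hypothetical, not the actual networking-sfc setup code; `link_sfc_agent` and `bindir` are illustrative names, while the two agent names come from the discussion above.

```python
import os

def link_sfc_agent(bindir, sfc_agent='neutron-sfc-openvswitch-agent',
                   neutron_agent='neutron-openvswitch-agent'):
    """Hypothetical post-install step: point the standard agent name at
    the networking-sfc agent binary, so starting the usual
    neutron-openvswitch-agent actually runs the SFC-capable one."""
    src = os.path.join(bindir, sfc_agent)
    dst = os.path.join(bindir, neutron_agent)
    if os.path.lexists(dst):
        os.remove(dst)            # replace neutron's binary or stale link
    os.symlink(src, dst)
    return dst
```

This sidesteps the renaming problem georgewang raises (two binaries with the same name cannot both be started), at the cost of the manual-setup concern cathy_ notes unless it is wired into the package build.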