17:00:13 <cathy_> #startmeeting service_chaining
17:00:14 <openstack> Meeting started Thu Jul  7 17:00:13 2016 UTC and is due to finish in 60 minutes.  The chair is cathy_. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:15 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:18 <openstack> The meeting name has been set to 'service_chaining'
17:00:19 <scsnow> hi
17:00:22 <fsunaval> hi
17:00:26 <cathy_> scsnow: hi
17:00:30 <cathy_> fsunaval: hi
17:00:34 <cathy_> hi everyone
17:00:34 <igordcard> hi
17:00:39 <doonhammer> hi
17:00:57 <LouisF> hi
17:01:06 <yamahata> hi
17:01:27 <cathy_> Let's start
17:02:03 <cathy_> I will start with the second topic in the agenda since the first topic needs Mohan's feedback and he has not joined yet
17:02:19 <cathy_> #topic OVN driver for networking-sfc spec and implementation
17:02:46 <cathy_> LouisF: doonhammer how is this going?
17:03:29 <doonhammer> Sorry been down with the flu and way behind on this and my day job :-(
17:03:30 <cathy_> It seems to be going well and just needs some final touches to be merged, right?
17:03:35 <LouisF> cathy_: spec patch under review
17:03:40 <LouisF> https://review.openstack.org/#/c/333172/
17:04:16 <cathy_> doonhammer: Oh, hope you feel well now.
17:04:37 <doonhammer> cathy: thanks recovered
17:05:18 <LouisF> there have been some comments on it but no major issues
17:05:38 <cathy_> LouisF: OK, thanks.
17:05:40 <LouisF> please review
17:05:41 <doonhammer> I need to talk some more with regXboi and resolve the issues on the ovs/ovn side
17:06:17 <LouisF> I believe Juno has been doing work on this also
17:06:27 <cathy_> doonhammer: Ok, thanks for the update. I see people are working actively on this on the OVN side
17:06:38 <doonhammer> LouisF: yes he is on PTO this week
17:07:11 <cathy_> #topic Add "NSH" support----'correlation-nsh' in the chain-parameters
17:07:12 <LouisF> i will ping him to review the latest spec patch (3)
17:07:15 <doonhammer> It is working on the ovs/ovn side; I just need to get the arch cleaned up
17:07:55 <cathy_> igordcard: I know you are working on this. Maybe you would like to give the team a quick update on the progress?
17:08:20 <igordcard> cathy_: yeah - so I've been working on refactoring the non-ovs-specific code towards the plugin itself
17:08:41 <igordcard> i.e. mainly move the SFP generation/management code and models up to the plugin
17:08:50 <LouisF> igordcard: great
17:09:11 <igordcard> that is kind of a pre-requisite to the NSH enablement due to the close relationship with SFPs, so we should get that right first before actually enabling the NSH datapath
17:09:18 <scsnow> I have a question regarding NSH. There is an option to enable nsh on the agent side:
17:09:19 <scsnow> agent_opts = [
17:09:19 <scsnow>     cfg.StrOpt('sfc_encap_mode', default='mpls',
17:09:19 <scsnow>                help=_("The encapsulation mode of sfc.")),
17:09:19 <scsnow> ]
17:09:54 <scsnow> What is the relationship between this option and the correlation in the chain params?
17:10:19 <igordcard> it's taking long indeed since a lot of changes must be made (some models from the ovs driver had to be split in half, with one half moved to the plugin) - as soon as this work is ready for a first review I'll submit it to gerrit, and then actually start on the sfc_encap_mode work
17:10:30 <cathy_> scsnow: this option maps to the correlation method of the chain param
17:10:54 <cathy_> Currently we only support MPLS; we are adding another option, "nsh"
17:11:13 <scsnow> So what if sfc_encap_mode=nsh and correlation=mpls?
17:11:52 <cathy_> sfc_encap_mode should match the correlation
17:11:57 <igordcard> scsnow: cathy_ perhaps the option can be deprecated in the near future in favor of the driver appropriately configuring the agent?
17:12:38 <LouisF> igordcard: agree
17:12:53 <scsnow> igordcard, exactly. I think that this option should be obsolete.
17:14:08 <cathy_> scsnow: LouisF igordcard agree.
17:14:23 <LouisF> scsnow: definitely obsolete
17:14:24 <cathy_> We should have only one place to specify this.
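(Illustrative sketch, not networking-sfc code: how the agent-wide sfc_encap_mode option relates to the per-chain 'correlation' value in chain-parameters, as discussed above. The config group name and the matching check are assumptions for illustration only.)

    from oslo_config import cfg

    # The agent-side option quoted earlier (default 'mpls'); the 'AGENT' group
    # name is an assumption for this sketch.
    agent_opts = [
        cfg.StrOpt('sfc_encap_mode', default='mpls',
                   help="The encapsulation mode of sfc."),
    ]
    cfg.CONF.register_opts(agent_opts, group='AGENT')

    # Each port chain carries its own choice in chain-parameters.
    port_chain = {'name': 'PC1', 'chain_parameters': {'correlation': 'mpls'}}

    def encap_matches(chain):
        # Hypothetical check: today the agent option is expected to match the
        # chain's correlation; the plan above is to drop the option and let the
        # driver configure the agent instead.
        return cfg.CONF.AGENT.sfc_encap_mode == chain['chain_parameters']['correlation']

    print(encap_matches(port_chain))  # True with the defaults above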
17:14:29 <scsnow> but I'm wondering whether it's feasible to support both mpls and nsh chains simultaneously in agent code
17:15:03 <scsnow> I mean some chains may use nsh encap, while others - mpls
17:15:12 <igordcard> in terms of the sfp generation work more specifically, port-chains are being created with the correct DB data at the moment, I'm just correcting the communication of that data to the agent now, so create-port-chain is almost working again after the refactoring (then I still need to check the other CRUD methods and the unit tests)
17:15:18 <cathy_> scsnow: the encap method is per chain
17:15:36 <cathy_> scsnow: so the user can specify mpls for one chain and NSH for another chain
17:15:54 <igordcard> scsnow: should be doable as soon as OVS has NSH support
17:15:57 <LouisF> scsnow: would there be a use case for that?
17:16:20 <cathy_> I will create a bug to remove the agent_opts for encap which comes from the config.
17:16:41 <scsnow> LouisF, I have no idea who is going to use such case
17:16:54 <cathy_> #action  cathy create a bug to remove the agent_opts for encap which comes from the config
17:17:23 <scsnow> LouisF, but this is about how things should work if encap type is per chain
17:17:47 <cathy_> scsnow: yes, it is how it works today. encap type per chain
17:19:18 <LouisF> yes but why would both types be needed simultaneously?
17:19:58 <igordcard> LouisF: as in 1 chain with 2 encaps?
17:20:04 <cathy_> igordcard: For backward compatibility we should keep the mpls encap there. Also some users might choose to use it. No harm in keeping it.
17:20:21 <LouisF> no, different encaps for different chains
17:20:22 <cathy_> igordcard: not 1 chain with 2 encaps. one encap per chain
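(Illustrative sketch, not networking-sfc code: "one encap per chain" as described above. A hypothetical agent-side dispatch keyed on each chain's correlation shows how mpls and nsh chains could coexist on the same node; the handler names are made up, and the nsh path still depends on OVS NSH support.)

    def install_mpls_flows(chain):
        print("programming MPLS correlation flows for %s" % chain['name'])

    def install_nsh_flows(chain):
        print("programming NSH correlation flows for %s" % chain['name'])

    def program_chain_flows(chain):
        # Dispatch on the chain's own correlation rather than a global encap
        # setting, so chains using different encapsulations can coexist.
        handlers = {'mpls': install_mpls_flows, 'nsh': install_nsh_flows}
        handlers[chain['chain_parameters']['correlation']](chain)

    program_chain_flows({'name': 'PC1', 'chain_parameters': {'correlation': 'mpls'}})
    program_chain_flows({'name': 'PC2', 'chain_parameters': {'correlation': 'nsh'}})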
17:20:48 <juanmafg> cathy, LouisF: are you considering different transport types between the switches (VxLAN-GPE+NSH) and between the switch and the VM (Ethernet+NSH)? (even if we are using the same SFC encapsulation)
17:20:58 <igordcard> cathy_: yeah, I will keep mpls encap as is
17:21:16 <cathy_> juanmafg: yes
17:22:00 <cathy_> juanmafg: we should allow different external tunnel transport although currently we only support VXLAN
17:22:15 <scsnow> LouisF, yes, it's a bit crazy to have both mpls and nsh chains configured
17:22:21 <igordcard> cathy_: juanmafg that is an interesting discussion to have... we also need to look at how that fits with existing neutron ml2 vxlan segments
17:22:49 <cathy_> igordcard: yes, that is what we need to think about.
17:22:50 <juanmafg> cathy_: well, this is also an interesting topic when it comes to OVS
17:23:12 <LouisF> juanmafg: depends on the nsh patches
17:23:19 <cathy_> juanmafg: yes
17:24:04 <juanmafg> cathy_, LouisF: are you using Yi Yang's patch for your tests?
17:24:15 <cathy_> juanmafg: in OVS, a patch of "ethernet+NSH" is being proposed
17:24:17 <LouisF> scsnow: my feeling is to support both nsh and mpls, but not simultaneously in different chains
17:24:43 <scsnow> igordcard, did you try VxLAN-GPE+NSH case for multi-node setup?
17:25:05 <igordcard> scsnow: no, I will start simple with Ethernet+NSH though
17:25:06 <cathy_> juanmafg: hopefully that patch can get approved and merged soon, then we can have the NSH encap enabled on our API. As to external transport, we will first use VXLAN.
17:25:16 <igordcard> scsnow: but I'll keep that in mind
17:25:23 <scsnow> igordcard, ok
17:25:49 <scsnow> igordcard, I'm asking because I did try that and it does not work for me.
17:26:52 <igordcard> scsnow: oh so you set up a multi-node env only made of OVS switches (without openstack) and attempted VXLAN-GPE+NSH communication?
17:27:00 <cathy_> scsnow: better try the "ethernet+NSH"
17:27:08 <juanmafg> cathy_: I'm more involved in ODL and they have similar situation with OVS
17:27:16 <cathy_> scsnow: since that is what is being targeted for OVS
17:27:28 <scsnow> cathy_, ethernet+nsh is suitable only for single-node case as far as I understand
17:27:50 <scsnow> igordcard, correct
17:27:58 <cathy_> scsnow: not really; through an external VXLAN tunnel, it can apply to multi-node
17:28:16 <cathy_> scsnow: vxlan+ethernet+NSH
17:28:54 <igordcard> scsnow: :/ shoot me an email with the details, I'll be interested in that, but for n-sfc specifically I will start only with ethernet+nsh
17:29:01 <scsnow> cathy_, correct. but ovs with Yi's patches does not support such a case
17:29:14 <cathy_> juanmafg: another interesting point is the mechanism between OVS and an SF which does not support the SFC encap (e.g. NSH). Any discussion on this in ODL?
17:29:43 <scsnow> igordcard, will do
17:29:48 <juanmafg> cathy_: do you mean SFC proxy?
17:29:54 <cathy_> scsnow: I know Yi or someone else in that team has completed a new patch of vxlan+ethernet+NSH
17:30:18 <cathy_> scsnow: They sent me email saying they will post it on OVS for review. You may want to check that
17:30:20 <LouisF> juanmafg: yes
17:30:22 <igordcard> cathy_: vxlan-gpe+ethernet+nsh?
17:30:41 <scsnow> cathy_, Ok, I'll check ovs mailing list for these patches
17:30:58 <juanmafg> LouisF, cathy_: ODL is trying to secure the NSH support first in OVS
17:31:25 <cathy_> juanmafg: yes, I am wondering what the mechanism is if the SF proxy does not want to do reclassification.
17:32:02 <cathy_> igordcard: no, last time I talked with them, it was vxlan+ethernet+NSH
17:32:11 <juanmafg> LouisF, cathy_: I know fd.io guys were looking into the NSH proxy support
17:32:24 <s3wong> sorry, forgot about the meeting
17:32:31 <cathy_> igordcard: But I have not got a chance to check the code patch yet. vxlan-gpe will not be accepted by OVS at this stage for multiple reasons
17:32:44 <LouisF> juanmafg: fd.io is adding nsh support
17:32:59 <juanmafg> LouisF: yes
17:33:19 <cathy_> juanmafg: does fd.io have a meeting or something where we can join the discussion?
17:33:55 <LouisF> cathy_: https://wiki.fd.io/view/VPP/Meeting
17:34:07 <juanmafg> cathy_: yes they have https://wiki.fd.io/view/NSH_SFC/Meeting
17:34:28 <juanmafg> LouisF: that one is for NSH
17:34:38 <LouisF> https://wiki.fd.io/view/NSH_SFC/Meeting
17:34:47 <LouisF> juanmafg: yes
17:35:08 <cathy_> I am interested in the approach they will use for an SF proxy that does not do reclassification, since reclassification is "expensive". A correlation mechanism between the packet and the chain ID is needed
17:35:22 <cathy_> LouisF: juanmafg thanks
17:35:52 <LouisF> cathy_: i think the fd.io proxy does reclassification on the N-tuple
17:36:18 <juanmafg> cathy_: I know fd.io guys were looking into a stateful NSH proxy, but do not know the details about the implementation
17:36:30 <igordcard> cathy_: perhaps quickly hash a few fields and keep state
17:36:41 <cathy_> igordcard: when we add NSH, we need to implement SF Proxy on OVS too. Better to come up with a mechanism that does not require reclassification
17:37:13 <cathy_> igordcard: We may also need to consider if the SF changes some of those fields
17:37:34 <igordcard> cathy_: maybe, for a first version it can look similar to how the mpls proxy is done today
17:37:50 <cathy_> igordcard: Ok, that is the minimum:-)
17:38:10 <LouisF> igordcard: the mpls proxy reclassifies on the n-tuple
17:38:44 <cathy_> juanmafg: stateful, not sure what that comes down to. Need to dig into the details on that
17:38:57 <cathy_> LouisF: yes
17:39:16 <cathy_> LouisF: so fd.io also does reclassification
17:39:30 <igordcard> cathy_: if the SF changes fields, and depending on the changes, the sfc proxy will start losing its value
17:39:46 <juanmafg> cathy_: I mean not doing reclassification, but I'm not completely sure
17:39:52 <cathy_> Let's think about this more and come up with a good idea for discussion.
17:40:04 <LouisF> cathy_: their work is very recent - still discussing support for md-type-2
17:40:22 <cathy_> juanmafg: that is what I guess. we need to know the detail
17:40:52 <cathy_> LouisF: ok, thanks. We are more advanced I believe:-)
17:41:11 <cathy_> Ok, I guess enough on this topic. Let's move on
17:41:33 <LouisF> cathy_: afaik fd.io does not yet support nsh but work is under way
17:41:53 <cathy_> LouisF: OK, thanks.
17:42:00 <cathy_> #topic murano-plugin-networking-sfc repository
17:42:34 <cathy_> I see a new backend plugin in networking-sfc repo
17:43:11 <LouisF> cathy_: i see https://review.openstack.org/#/c/332025/ is abandoned
17:43:18 <cathy_> Since the ODL and ONOS backend plugins are all in their own repos, I am wondering if we should ask for that code to be in its own repo to be consistent
17:43:46 <cathy_> LouisF: Oh, then never mind
17:44:13 <cathy_> It was abandoned on July 4.
17:44:34 <cathy_> #topic ODL integration with networking-sfc
17:44:55 <cathy_> yamahata: do you know how the work is going?
17:45:09 <yamahata> He's starting to write the driver
17:45:11 <yamahata> The link is at
17:45:30 <yamahata> https://review.openstack.org/#/c/337948/
17:45:36 <cathy_> yamahata: Thanks
17:45:37 <yamahata> It would need several respins
17:45:46 <cathy_> yamahata: sure:-)
17:46:08 <cathy_> Folks, could you help review that patch ?
17:46:17 <cathy_> #link https://review.openstack.org/#/c/337948/
17:46:29 <igordcard> yep
17:47:00 <cathy_> #topic Using upstream/base Neutron Open vSwitch Agent
17:47:08 <cathy_> Here is the link to this
17:47:21 <cathy_> #link https://review.openstack.org/334398
17:48:19 <cathy_> The problem is that if we rename, then the OpenStack package building process will not include networking-sfc and we have to add networking-sfc separately
17:48:46 <fsunaval> cathy: I've had a quick look. Basically, it is for RPM packaging. We call our agent neutron-openvswitch-agent and so does the original neutron agent.
17:49:22 <fsunaval> I think we should change the name of our agent to neutron-sfc-openvswitch-agent....
17:49:33 <fsunaval> or similar ..
17:49:42 <igordcard> cathy_: when is the refactoring to an OVS agent extension planned?
17:49:59 <cathy_> If we do not rename, then there is no such issue, but Neutron will use networking-sfc's version of the agent, which will not cause any problems since networking-sfc's agent does not change anything with regard to what is needed for Neutron
17:50:11 <georgewang> since networking-sfc needs to override some behavior of the standard ovs agent, we use our binary to replace neutron's
17:51:26 <cathy_> fsunaval: but if we change it, then if the user wants to use networking-sfc, they have to specifically build the networking-sfc package, right?
17:52:03 <fsunaval> cathy:  yes, and that is fine.
17:52:13 <cathy_> Of course, when the new OVS agent design and implementation that allows different features to install flow tables is completed, then we are good forever
17:52:30 <csun> Or the user could create a symlink for the networking-sfc openvswitch agent in /usr/local/bin/
17:52:44 <fsunaval> csun: yes, that too.
17:53:04 <igordcard> cathy_: cool
17:53:54 <cathy_> csun: fsunaval does that still require the user to do it manually? Can we make it automatically incorporated into the package build for the user?
17:53:59 <georgewang> but the problem is, if we use a specific package name, when the user starts the ovs agent in general, it will start neutron's
17:54:16 <georgewang> and the two binaries cannot run together
17:54:37 <cathy_> georgewang: that means the networking-sfc agent functionality will not be included and SFC will not work, right?
17:54:49 <georgewang> yes
17:55:17 <csun> Yes, it needs some manual setup.
17:56:04 <cathy_> csun: Is there a way to incorporate it into the package building process?
17:56:40 <csun> Yes, I think we could create a symlink in setup.
17:56:50 <fsunaval> cathy: csun and myself will work it out...
17:57:03 <cathy_> csun: fsunaval Thanks!
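(Illustrative sketch, not the agreed solution: one possible reading of the symlink idea above. The renamed agent binary, the path, and the helper are assumptions; csun and fsunaval will work out the actual packaging approach.)

    import os

    def link_sfc_ovs_agent(bindir='/usr/local/bin',
                           sfc_agent='neutron-sfc-openvswitch-agent',
                           generic_name='neutron-openvswitch-agent'):
        # Point the generic agent name at the SFC-aware binary so that starting
        # the ovs agent on an SFC node runs the networking-sfc version, while
        # the renamed binary no longer collides with neutron's own package.
        src = os.path.join(bindir, sfc_agent)
        dst = os.path.join(bindir, generic_name)
        if os.path.lexists(dst):
            os.remove(dst)
        os.symlink(src, dst)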
17:58:20 <cathy_> I guess we are running out of time and don't have enough time for the next topic. It was a good discussion. Thanks everyone!
17:58:26 <cathy_> bye for now
17:58:29 <scsnow> bye
17:58:33 <igordcard> thank you, bye
17:58:34 <LouisF> bye
17:58:36 <fsunaval> bye
17:58:37 <juanmafg> bye
17:58:42 <csun> bye
17:59:05 <cathy_> #endmeeting