17:01:17 #startmeeting service_chaining
17:01:18 Meeting started Thu Aug 6 17:01:17 2015 UTC and is due to finish in 60 minutes. The chair is cathy_. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:22 The meeting name has been set to 'service_chaining'
17:01:32 hello everyone
17:01:33 o/
17:01:38 Hi
17:01:38 hi
17:01:48 hi, this is Abhinav.
17:01:58 from AT&T Research.
17:02:13 abhisriatt: welcome to the meeting
17:02:23 abhisriatt: hi, welcome to the meeting
17:02:23 Thanks for joining us, Abhinav
17:02:29 I have posted an update to https://review.openstack.org/#/c/204695 (API doc)
17:02:39 Thanks, guys.
17:02:52 cathy_: Abhinav is one of the people I mentioned who has worked on the flow steering work internally at AT&T
17:03:04 pcarver: great
17:03:10 hi
17:03:28 hi all
17:03:44 so maybe today we can have abhisriatt give an introduction to the flow steering work at AT&T?
17:03:55 Brian___: xiaodongwang Hi
17:04:27 and also https://review.openstack.org/#/c/207251 (Port Chain API and DB)
17:04:36 hi xiaodongwang
17:04:58 hi from the NetCracker research group
17:05:10 hi MaxKlyus
17:05:12 Louis has posted an update to the API doc. I saw some new comments. Could everyone please review the latest one and give all your comments so that we can get them addressed and merged?
17:05:27 MaxKlyus: Hi, welcome!
17:05:36 MaxKlyus: welcome to the meeting!
17:06:09 cathy_: yes, vikram had some more comments; I will post a new patch later today to address them
17:06:29 LouisF: I have a few more :-)
17:06:31 LouisF: I saw that Jenkins gave a -1 on https://review.openstack.org/#/c/207251.
17:06:49 LouisF: Thanks!
17:06:58 cathy_: yes, still some pep8 issues
17:07:25 cathy_: will work through them today
17:07:26 vikram_: could you please post them today so that Louis can address them all in one shot?
17:07:39 LouisF: Thanks!
17:09:10 While Louis fixes the pep8 issues, could everyone please start reviewing the "Port Chain API and DB" code.
17:09:32 cathy_: sure
17:10:11 vikram_: thanks! BTW, I remember you previously signed up for the Horizon code support for this project. How is that work going?
17:10:14 cathy_: yes
17:10:34 s3wong: thanks.
17:11:46 cathy_: I think we have to finalize the APIs, and the server code is needed for testing.
17:12:00 cathy_: The framework is done
17:12:02 Also, could everyone get on the OVS Driver and Agent spec and do a detailed review of it? https://review.openstack.org/#/c/208663/
17:13:21 cathy_: sure
17:13:33 cathy_: sure
17:13:34 vikram_: Agree with you. But I do not expect much change on the API side, so to speed up the coding work we can do this in parallel. I expect the API will be merged soon. What do you think?
17:14:02 s3wong: abhisriatt: thanks.
17:14:45 cathy_: +1, we are doing it. My only concern is getting the APIs finalized soon; it's impacting both Horizon and the CLI
17:15:01 vikram_: Sure, agree with you.
17:15:12 vikram_: Thanks for starting the coding work.
17:16:22 vikram_: so you will post all your comments today, and Louis will address all comments and post a new version for final review. Let's get the API finalized as soon as possible so that it does not impact other pieces of work
17:16:42 cathy_: Sure. Will do!
17:16:56 cathy_: will do
17:17:26 vikram_: LouisF: Thanks, folks!
17:17:43 Any other topics you have in mind?
17:17:46 cathy_: if you want, I can give you a brief overview of our work done at AT&T on flow steering.
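For readers following the API review, a minimal sketch of what a port-chain create call against the API under discussion (https://review.openstack.org/#/c/207251) might look like. The endpoint path, field names, and client approach are illustrative assumptions, not the finalized interface:

    # Hypothetical example: create a port chain through the Neutron API.
    # Endpoint path and body fields are assumptions for illustration only.
    import json
    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"   # assumed Neutron endpoint
    TOKEN = "<keystone-token>"                    # obtained out of band

    body = {
        "port_chain": {
            "name": "chain1",
            # Ordered groups of service-function (middlebox) ports the
            # traffic should traverse (hypothetical field name).
            "port_groups": ["<uuid-of-firewall-group>", "<uuid-of-ids-group>"],
            # Classifier selecting which flows enter the chain
            # (hypothetical field name).
            "flow_classifiers": ["<uuid-of-classifier>"],
        }
    }

    resp = requests.post(
        NEUTRON_URL + "/sfc/port_chains",   # assumed URL; see the API doc review
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        data=json.dumps(body),
    )
    print(resp.status_code, resp.text)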
17:18:02 abhisriatt: sure, please go ahead
17:19:26 The flow steering project that we started in Research was about deploying tenant-defined middleboxes, to give more control to cloud tenants.
17:19:57 Sorry, to give tenants control to deploy middleboxes of their choice.
17:21:17 abhisriatt: middleboxes being service functions?
17:21:25 The idea is that tenants will request some services, especially security services such as firewall, IDS, IPS, etc., that will run inside MBs.
17:21:52 LouisF: Yes, like firewall, IDS, IPS, etc.
17:22:15 abhisriatt: ok
17:23:01 The cloud provider's job is to accept the request from tenants and set up the networking pipes in a way that packets flow through these MBs.
17:23:08 abhisriatt: so it is possible to steer traffic to these services?
17:23:56 LouisF: Yes.
17:24:05 sorry for joining late
17:24:15 Mohankumar__: it is OK
17:24:53 abhisriatt: This requirement is what the service chain project can provide
17:25:13 If I can give a little more detail, the AT&T work originally started out doing QoS (DSCP bit manipulation) in OvS using an external service that integrated with OpenStack APIs.
17:25:45 Abhinav's work was to then extend that framework to do flow steering, which is basically the same intent as service chaining
17:25:52 Our APIs or CLIs are simple and in the form of: Source (VM or external traffic), destination VM (or any local VM), MB1, MB2, … MBn
17:26:33 abhisriatt: Your APIs are very similar to the API of the service chain project.
17:26:40 i.e. any traffic from source to destination should flow through this set of MBs.
17:27:11 cathy_: Yes, that's why pcarver asked me to join this work.
17:27:18 abhisriatt: looks like close alignment with the port-chain APIs
17:27:28 We've worked through the prototype stage and have some experience with the OvS flowmods required
17:27:44 pcarver: great.
17:28:09 pcarver: excellent, please jump in with suggestions on the OVS driver and agent
17:28:13 The implementation wasn't conceived as a part of OpenStack, but rather as sitting outside and interacting with Nova, Neutron, and Keystone, as well as interacting directly with OvS
17:28:14 LouisF: Yes, and we are actually designing a new set of APIs that look very similar to what you guys have on the wiki.
17:28:50 The networking-sfc work differs mainly in being a part of OpenStack rather than sitting slightly outside of it
17:29:27 But with both the QoS and flow steering work we've had some experience with how to manipulate OvS flowmods without disrupting Neutron's use of OvS
17:30:00 As Paul mentioned, we extensively used OVS flowmods to route packets from one MB to another without disrupting any existing flows or traffic.
17:30:32 abhisriatt: that is exactly what we need to do for port-chains
17:30:41 pcarver: do "OVS flowmods" refer to the interface between the OVS agent and OVS on the same compute node, or between the OVS driver on the Neutron server and the OVS agent on the compute node?
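To make the "flowmods" discussion concrete, here is a minimal sketch of the kind of OpenFlow rules used to steer traffic from a source VM into a middlebox without disturbing the flows Neutron already programs. The bridge name, port numbers, dedicated table, and use of ovs-ofctl are assumptions for illustration; this is neither the AT&T nor the networking-sfc implementation:

    # Hypothetical flow-steering rules; ofport numbers and table ids are
    # assumptions for illustration only.
    import subprocess

    BRIDGE = "br-int"      # assumed integration bridge
    SRC_PORT = 5           # OVS ofport of the source VM (assumed)
    MB_IN_PORT = 7         # OVS ofport of the middlebox ingress (assumed)
    STEER_TABLE = 10       # assumed dedicated table, so Neutron's rules stay untouched

    flows = [
        # Divert traffic arriving from the source VM into the steering table
        # instead of the normal L2 pipeline.
        "table=0,priority=20,in_port={src},actions=resubmit(,{tbl})".format(
            src=SRC_PORT, tbl=STEER_TABLE),
        # In the steering table, forward the flow to the middlebox ingress port.
        "table={tbl},priority=10,in_port={src},actions=output:{mb}".format(
            tbl=STEER_TABLE, src=SRC_PORT, mb=MB_IN_PORT),
    ]

    for flow in flows:
        subprocess.check_call(["ovs-ofctl", "add-flow", BRIDGE, flow])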
17:31:01 My thinking is that we should model the networking-sfc interactions with OvS after the networking-qos interactions, but that we can leverage some of Abhinav's and the other AT&T folks' experience
17:31:29 abhisriatt: do you support any form of load distribution across a set of MBs in a port-group?
17:31:35 cathy_: yes, flowmod meaning the "magic incantations" that we need to put into OvS to make things happen
17:31:53 abhisriatt: Yes, our design on the data path flow steering is similar to yours: route packets from one MB to another without disrupting any existing flows or traffic.
17:32:18 cathy_: flowmods are nothing but OpenFlow rules that are inserted into OVS.
17:32:55 LouisF: We are currently working on load balancing across MBs, and we call it scalable flow steering.
17:33:33 In our implementation the thing that puts the flowmods into OvS is an independent server process outside of Neutron, and it doesn't use any Neutron agents
17:34:10 Here again, we are using OpenFlow features such as multipath and learning rules to load balance across many MBs.
17:34:38 But I think it should be adaptable, at least the underlying building blocks
17:34:54 pcarver: Yes.
17:34:55 abhisriatt: https://review.openstack.org/#/c/208663 describes OF group-mod for doing load balancing across a group
17:35:30 abhisriatt: suggestions welcome
17:35:40 pcarver: agree with you, I think this project can leverage your OVS flow table part
17:35:53 LouisF: sure, I will take a look at it.
17:37:02 Instead of using an external server process to program OVS, in our project the path will be the OVS driver on the Neutron server talking with the OVS agent, and the OVS agent programming OVS via OpenFlow commands
17:37:47 cathy_: Ideally, that should be the design.
17:38:18 abhisriatt: Louis has posted the link which describes the design. Could you get on that and give your input and comments?
17:38:22 However, we started this project with QoS, and the OVS agent cannot create queues in OVS. That's why we had to use an external process to achieve that.
17:39:20 abhisriatt: could you clarify what you mean by "cannot create queues in OVS"? What are the queues used for?
17:40:05 cathy_: to rate limit the flows, a functionality needed by the bandwidth reservation project.
17:40:49 cathy_: That's some of the history I referenced briefly. The AT&T flow steering work was built on top of a QoS project. That piece of it isn't especially relevant to the networking-sfc project, but was just the framework that we started with
17:40:51 abhisriatt: so that is related to your QoS functionality, right?
17:41:13 cathy_: yes, not related to networking-sfc.
17:41:28 pcarver: Yes, that is what I think. This part is not relevant to the service chaining feature itself
17:41:40 Since the OpenStack networking-qos work is proceeding independently, we don't really need to focus on that part
17:41:59 pcarver: cool, we are in sync
17:42:40 Essentially we had an existing server framework that handled Neutron, Nova, and Keystone interaction, and Abhinav worked on adding flow steering via OvS flowmods into that existing QoS framework
17:42:45 abhisriatt: pcarver: when do you think you can carve out the OVS part of the code for use in the networking-sfc project?
17:43:06 BTW, we made our controller open source.
17:43:21 https://github.com/att/tegu
17:43:44 cathy_: I think the first step is to get Abhinav aligned on which existing reviews are touching the OvS agent and which pieces are currently just stubbed out or don't exist at all
17:43:45 abhisriatt: OK, thanks.
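To illustrate the group-mod idea referenced from https://review.openstack.org/#/c/208663, here is a rough sketch of an OpenFlow "select" group distributing flows across the middleboxes in a port-group. The bridge name, group id, ofports, and match fields are assumptions for illustration; the actual encoding is whatever the OVS driver/agent spec defines:

    # Hypothetical load-distribution setup using an OpenFlow select group.
    import subprocess

    BRIDGE = "br-int"          # assumed integration bridge
    GROUP_ID = 1
    MB_PORTS = [7, 8, 9]       # ofports of the middlebox instances (assumed)

    # A select-type group hashes each flow onto one bucket, giving per-flow
    # load distribution across the middleboxes.
    buckets = ",".join("bucket=output:{}".format(p) for p in MB_PORTS)
    subprocess.check_call([
        "ovs-ofctl", "-O", "OpenFlow13", "add-group", BRIDGE,
        "group_id={},type=select,{}".format(GROUP_ID, buckets),
    ])

    # Classified traffic is then pointed at the group instead of a single port.
    subprocess.check_call([
        "ovs-ofctl", "-O", "OpenFlow13", "add-flow", BRIDGE,
        "table=10,priority=10,ip,nw_dst=10.0.0.100,actions=group:{}".format(GROUP_ID),
    ])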
17:43:53 abhisriatt: Go code!
17:44:07 s3wong: :-)
17:44:10 :)
17:44:30 Actually, it is written in "Go", too.
17:44:37 pcarver: OK, let's do that first. Thanks for bringing in abhisriatt!
17:45:03 He needs to get oriented on the structure of the networking-sfc code base, and then he can start bringing in his experience from Tegu (that's the name of our server)
17:45:20 abhisriatt: that's what I meant: code in Go :-)
17:45:25 okay :)
17:46:08 pcarver: sure.
17:46:41 I haven't been through all the reviews yet. Has anyone started touching OvS, or is it all still stubs at this time?
17:47:39 pcarver: I started to look at it, but in the Gerrit review (not merged yet) the dummy driver (stub) is all there is
17:47:46 pcarver: still stubs. But we are working on the code now
17:48:10 pcarver: driver manager and dummy driver only
17:48:17 cathy_: OK, we need to get Abhinav synced up so that he doesn't do duplicate work
17:48:25 Let's get the design reviewed and agreed first. Then we can start posting code and reviewing it.
17:48:53 +1
17:48:54 pcarver: yes, I am thinking we can divide the coding work among us.
17:49:32 s3wong has also signed on to the OVS part of the code development. Actually, the design posted is co-authored with s3wong
17:49:53 yes
17:50:14 s3wong: Thanks for your insight and contribution!
17:51:53 pcarver, abhisriatt: please review the OVS driver spec and we will iterate on the design from there
17:52:10 s3wong: will do
17:52:13 s3wong: will do
17:52:27 As for the coding part, I am thinking someone should put together basic OVS framework code, and then we can have a meeting to review the framework and divide the detailed code development work among us?
17:53:04 cathy_: framework? as in the OVS driver on the Neutron server?
17:53:13 OK with this, or any other suggestion for how we avoid the duplicate work that vikram_ pointed out?
17:54:14 s3wong: by framework I mean the OVS driver on the Neutron server, the OVS agent on the compute node, and OVS itself on the compute node
17:54:30 cathy_: that's everything :-)
17:55:04 I think we need a consistent framework without much coding detail, so that when each of us starts coding we do not have a big mismatch
17:55:27 Sorry, I have an open question for everybody: what do you think about OVS MPLS-based traffic chaining?
17:55:33 s3wong: no code detail, just the framework :-)
17:56:10 MaxKlyus: various drivers can be used
17:56:27 MaxKlyus: raw MPLS, or MPLS over GRE or UDP?
17:56:51 raw MPLS, multiple label stack
17:57:00 We're definitely thinking about MPLS, but it's pretty much orthogonal to service chaining
17:57:11 MaxKlyus: we are running out of time. Maybe we can discuss this in the next meeting. I think our framework should be able to support multiple transports and encaps in the data path
17:57:31 or move the discussion to openstack-neutron
17:58:33 that would be great
17:58:34 MaxKlyus: good question. We can discuss it in the next meeting or on openstack-neutron. Actually, this question was touched on before in the original API review.
17:59:54 One last question: how are you guys thinking of steering flows from one MB to another?
18:00:13 OK, folks. Thanks for joining the meeting and all the discussions. I think we are making good progress! Let's continue the discussion in the next meeting.
18:00:19 bye now
18:00:22 bye
18:00:26 abhisriatt: a chain of two Neutron ports?
18:00:34 yes
18:00:34 OK. Thanks, guys!
18:00:40 bye
18:00:40 abhisriatt: we are running out of time, let's address that in the next meeting.
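A minimal sketch of the driver shape discussed above (a driver manager calling backend drivers, with only a dummy driver stub merged so far and the OVS driver to follow). The base class and method names are assumptions for illustration; the real interface is whatever the driver manager in the reviews defines:

    # Hypothetical driver skeleton; class and method names are assumptions.
    class SfcDriverBase(object):
        """Assumed abstract interface invoked by the driver manager."""

        def create_port_chain(self, context):
            raise NotImplementedError

        def delete_port_chain(self, context):
            raise NotImplementedError


    class OVSSfcDriver(SfcDriverBase):
        """Skeleton of the OVS driver discussed above: it would translate
        port-chain operations into messages to the OVS agents, which then
        program OVS with flowmods/group-mods like those sketched earlier."""

        def create_port_chain(self, context):
            # Placeholder: compute the per-hop flow entries and notify the
            # OVS agents (details belong to the spec under review).
            pass

        def delete_port_chain(self, context):
            # Placeholder: remove the flows installed for this chain.
            pass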
18:00:46 cathy_: okay
18:00:54 bye
18:00:58 ok
18:01:01 bye
18:01:03 thanks a lot
18:01:09 bye
18:01:14 #stopmeeting
18:01:21 endmeeting?
18:01:25 #endmeeting