22:00:57 #startmeeting Networking Advance Services
22:00:58 Meeting started Mon Oct 14 22:00:57 2013 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:59 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:01:01 The meeting name has been set to 'networking_advance_services'
22:01:10 first up, apologies to those for whom this time is inconvenient
22:01:23 we will try to do it at a different time for follow up
22:01:41 just thought it was easier to herd everyone together while we are here anyway
22:01:50 #topic service insertion and chaining
22:02:07 this is a follow up from the last VPNaaS meeting
22:02:28 since this topic cuts across *aaS we thought it better to have this common discussion
22:02:36 #link https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering
22:02:50 there has been some discussion on the above blueprint
22:03:22 the most updated version was posted only recently so not everyone might have had a chance to take a look
22:03:37 for those who did, any thoughts?
22:04:00 good progress
22:04:00 there were some comments from enikanorov which have been addressed
22:04:16 gregr_3456: thanks
22:04:25 nati_ueno: thoughts?
22:05:01 SumitNaiksatam: +1
22:05:21 the idea is to be able to support both single service insertion and also chains
22:05:22 SumitNaiksatam: I'll comment in the doc also
22:05:28 nati_ueno: thanks
22:06:01 crickets? :-)
22:06:18 hi guys, beginner here :) where is the blueprint for the data path?
22:06:21 SumitNaiksatam: going over your comments. am I understanding correctly that there could be a service that is not inserted?
22:06:34 It was a very productive face 2 face discussion on Thu - it is really important that we are able to get to a common model that we can implement in Icehouse. This is imperative for us to make any meaningful progress with services.
22:06:48 SridarK: thanks
22:06:58 Snigs: the data plane model is not prescriptive
22:07:09 Snigs: it is driven by the plugin/driver
22:07:21 Snigs: this model is to capture the intent of the user
22:07:37 enikanorov: thinking about your question
22:07:46 ok - so the plugin defines the data path and the encapsulation
22:07:52 Snigs: yes
22:08:07 enikanorov: i think every service is inserted
22:08:10 and this blueprint is focused on specifying the parameters for the chain?
22:08:17 enikanorov: it may or may not be explicit
22:08:29 for the data path we can consider something along the lines of NSH but we are a bit far from that at this point
22:08:33 Snigs: at a high level you can look at it that way
22:08:40 SumitNaiksatam: i remember that is controlled by the service insertion context
22:08:49 Well, explicit or not represents a pretty fundamental difference for the end user
22:08:53 which is a separate object
22:09:05 geoffarnold: agree
22:09:07 just thinking what level of control the user will have
22:09:13 will this BP change the current LBaaS or FWaaS implementations much? I wonder how chaining will be implemented
22:09:13 geoffarnold: i meant to say there could be defaults
22:09:22 Let me post a multiline question....
22:09:32 Who's doing the insertion and chaining? It seems to me that there are two distinct (but overlapping) use cases in this area. One is where service instances are visible to applications: where we are adding API(s) to allow an application to explicitly manage the service instance.
The other is where the service instance is invisible to the application: the application uses the Neutron APIs to interact with the logical
22:09:42 enikanorov: yes, the service insertion context object
22:10:18 .... elects to use service instances (rather than, say, physical resources) to realize these logical resources. Many similarities, but also great differences.
22:10:28 iwamoto: the implementation may change, the degree to which it may change may vary, the attempt is to be minimally disruptive
22:10:40 geoffarnold: still reading :-)
22:11:27 geoffarnold: if i grasped that correctly, we are addressing the former case
22:11:45 geoffarnold: all neutron abstractions are currently targeting the former case
22:11:46 That's what I thought.
22:11:52 geoffarnold: ok
22:11:53 But we need the second case too
22:11:58 geoffarnold: sure
22:12:13 these seem to be complementary
22:12:20 geoffarnold: right?
22:12:30 It would be a shame to do two parallel service instance implementations and then have to refactor to belatedly catch up with the overlap
22:12:47 geoffarnold: hmmm…should not be the case
22:13:07 geoffarnold: Both of your use cases are covered by the proposal.
22:13:15 We've built an example of the second model in DNRM which we'll demo in Hong Kong
22:13:26 as i have not read the doc yet, is the proposal for the logical model only? what is the expectation with regards to the data plane?
22:13:32 The invisible case is the same as "Bump In The Wire" mode, where the application is not aware of the service.
22:13:37 But Mark McClain and others are urging us to look at both
22:14:07 geoffarnold: on re-reading your question, i agree with Kanzhe, we are addressing the logical resource case
22:14:44 The second [DNRM] case involves no new APIs. If you have a new API, it's a new use case IMHO
22:14:55 geoffarnold: in general neutron is handling logical models
22:14:56 samuelbercovici: data plane implementation details are in the plugin/driver
22:15:03 Inserting an LB will be the second use case, where the application is aware of the service insertion.
22:15:14 samuelbercovici: anything specific that you would like to be incorporated here?
22:15:32 geoffarnold: to give you an example - let's take FWaaS and a firewall
22:15:40 the user creates a firewall resource
22:15:43 this is a logical resource
22:15:44 Agreed. But does the end user know anything about the logical resource beyond what is in the current FW/LB/L3 APIs?
22:15:53 geoffarnold: no
22:16:14 So the resource could be HW or SW
22:16:22 yes
22:16:41 SumitNaiksatam: so ideally, the insertion "logical" model is specified by an admin, and then the different drivers should get enough "TBD" information so they can adhere to this?
22:16:46 per the blueprint the service instance is a logical resource as the user sees it
22:17:14 samuelbercovici: yes
22:17:33 samuelbercovici: ideally it would be great to be able to flesh that interface out
22:17:43 at least from an insertion perspective
22:18:01 SumitNaiksatam: so let's take some use cases..
22:18:08 samuelbercovici: yes
22:18:27 is it possible to decompose the insertion problem into a classification, redirect and return (next hop)?
22:18:40 btw, folks, we also have a couple of other items on the agenda for today
22:18:49 so we need to give time for those as well
22:18:52 but go ahead
22:18:59 if i create a new VIP, currently i select the driver implementation; how would i specify the insertion model and the service model?
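(Aside: a minimal, hypothetical Python sketch of the "single service insertion and also chains" idea from 22:05 above — a chain is just an ordered list of logical service instances applied to traffic between two logical endpoints. All names and fields below are illustrative assumptions, not taken from the linked blueprint.)

    # Hypothetical illustration only -- field names are not from the blueprint.
    # A "chain" is modeled as an ordered list of logical service instances that
    # traffic between a source and a destination should traverse.
    service_chain = {
        "name": "web-tier-chain",
        "source": {"network_id": "net-frontend"},        # made-up identifiers
        "destination": {"network_id": "net-backend"},
        "services": [                                     # list order == traversal order
            {"type": "FIREWALL", "instance_id": "fw-1"},
            {"type": "LOADBALANCER", "instance_id": "lb-1"},
        ],
    }

    def traversal_order(chain):
        """Return the service instance ids in the order traffic would hit them."""
        return [s["instance_id"] for s in chain["services"]]

    print(traversal_order(service_chain))  # ['fw-1', 'lb-1']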
22:19:22 Snigs: that i consider to be an implementation detail, however your point on classification is very valid
22:19:39 Snigs: currently there is no defined classification mechanism in neutron
22:19:59 ok - let me listen a little more :)
22:20:02 Snigs: i consider it to be complementary but not within the scope of this bp
22:20:03 two portions at least are relevant here; a: how the VIP handling is expected to work (examples: proxy, default gateway..)
22:20:05 samuelbercovici: I think it is still up to the plugin driver to decide the insertion mode. But the user can give a hint in case the plugin driver supports several types of insertion
22:20:17 but that's IMO
22:20:25 also, if default gateway, then the expectation is to be inserted instead of the gw
22:20:31 enikanorov: yes
22:20:45 samuelbercovici: does that make sense?
22:20:57 and b: the chaining order
22:21:41 samuelbercovici: you mean replace the L3 gateway with the LB?
22:21:49 LB -> VIP
22:22:35 samuelbercovici: Do you mean the LB provides both the L3 gateway function and the LB function?
22:23:01 SumitNaiksatam: yes. if the vip "specifies" that it is going to function in a model that needs the lb service to act as the default GW for the members, then it should replace the L3 gw
22:23:09 at least for those members
22:23:16 samuelbercovici: agree
22:23:29 achieving this by the driver alone is currently not possible
22:23:46 i agree with samuel on this
22:23:55 +1
22:23:56 samuelbercovici: yes, so in the proposed model we have service_insertion_context
22:24:31 for exactly that reason
22:25:05 driver and insertion context are two different things
22:25:06 so what would a service_insertion_context define?
22:25:06 one could write a driver that can infer the insertion context (as it does today) but that is not flexible
22:25:06 and how does the context play out in a service vm?
22:25:06 samuelbercovici: it's in the spec :-)
22:25:18 i will copy-paste here -
22:25:46 it mainly has an insertion_type/mode
22:26:00 which will indicate L3 or L2 or BITW or Tap
22:26:19 and then additional attributes pertaining to whichever type/mode is chosen
22:26:44 so for e.g., if L3 is chosen, then a router_id would be a part of the insertion_context
22:26:53 got it sumit, but how do we associate the interfaces from the service vm to the service instances?
22:27:29 rudrarugge: nice segue to the next topic
22:27:35 rudrarugge: that is too forward-looking a question IMO :)
22:27:50 sure, will wait
22:28:00 The spec uses plurals; e.g. routers. What does that mean?
22:28:12 geoffarnold, samuelbercovici, Snigs: do you want to follow up later on this particular topic
22:28:15 ?
22:28:21 SumitNaiksatam: so if the lbaas is a two-leg solution that can bypass the l3 gw, how can it get inserted?
22:28:29 can we transition to the next topic?
22:28:31 Yes
22:28:49 geoffarnold: thanks
22:28:52 ok. will read the document and comment/ask questions on it
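(Aside: a hypothetical sketch of the service_insertion_context described at 22:25-22:26 above — an insertion type/mode of L3, L2, BITW, or Tap plus mode-specific attributes such as a router_id when L3 is chosen. The helper and attribute names are assumptions for illustration; the authoritative definition is in the linked spec.)

    # Hypothetical sketch only -- the real attribute names live in the linked spec.
    VALID_INSERTION_MODES = ("L3", "L2", "BITW", "TAP")

    def make_insertion_context(mode, **attrs):
        """Build a dict describing where/how a logical service is inserted."""
        if mode not in VALID_INSERTION_MODES:
            raise ValueError("unknown insertion mode: %s" % mode)
        if mode == "L3" and "router_id" not in attrs:
            raise ValueError("L3 insertion needs a router_id")
        return dict(insertion_mode=mode, **attrs)

    # e.g. a firewall inserted on an existing router (ids are made up):
    fw_context = make_insertion_context("L3", router_id="router-42")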
22:29:03 samuelbercovici: good question, hold that thought, we can take that offline
22:29:11 i think we can handle that insertion
22:29:30 Yes
22:29:58 i had the second topic listed as "common agent model" for the services, but i will skip that
22:30:12 since the question was asked about service VMs, let's get to that
22:30:26 #topic service VM library
22:30:55 #link https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
22:31:01 note that there are other blueprints as well in the same zip code
22:31:02 most notably geoffarnold's blueprint
22:31:29 this particular topic for today's discussion is specifically on the service VM management framework/library
22:31:47 rudrarugge: i believe your question is in this context
22:31:51 let's address that
22:31:54 yes
22:31:56 thanks
22:32:11 gregr_3456: there?
22:32:15 hi
22:32:26 first we can talk about service VM interfaces
22:32:27 you want to take rudrarugge's question?
22:32:32 ok
22:32:48 so we have potentially two types of service interfaces
22:32:55 the thought is that there are two types of interfaces
22:32:59 management and data plane for any service VM
22:33:05 the first type is 'management interfaces'
22:33:12 gregr_3456: yes, go ahead
22:33:17 the second is data plane interfaces
22:33:23 (sorry, typed over you)
22:33:31 yup
22:33:44 gregr_3456: np, please keep going
22:34:00 the idea is to allow the consumer of the service VM framework to allocate one or more of each type
22:34:16 based on the requirements of that particular implementation
22:34:28 so far ok?
22:34:32 yes
22:34:51 consumer as in the provider of the virtual appliance, or the consumer of the VM services?
22:35:21 the consumer is the consumer of this interface, i.e. the service implementation or plugin
22:35:53 gregr_3456: What is the reason for the serviceVM library to differentiate management vs. data-plane interfaces?
22:36:32 IMHO, the two are the same for the library as long as the network reference is provided.
22:36:34 Could it be in-band or out-of-band management based on the service implementation?
22:36:43 how do we handle accessing the UI of the service through, let's say, a management plane for the service instance?
22:36:47 it may be that the mgmt interface needs to be isolated from the data plane for availability purposes
22:37:19 the plugin should be able to access the mgmt interfaces
22:37:23 is the mgmt plane of the service instance different from the mgmt plane of the service vm?
22:37:29 ChristianM_: yes, both are possible
22:37:30 can the management interface be a non-NIC interface? for example serial.
22:37:42 yamahata: good point
22:38:00 gregr_3456: i think we should capture yamahata's point
22:38:05 access to the mgmt interface is done through its IP address, the interface is not relevant here.
22:38:21 Which of its IP addresses? It may have multiple
22:38:29 i am a bit lost here. what is added here on top of plain vanilla spinning up of VMs?
22:38:33 to yamahata's point, the management interface may not necessarily be a network interface
22:39:08 The management interface has to be accessible to a plugin/driver
22:39:09 one sec guys, i think we have people typing over each other
22:39:12 If the mgmt interface is not on a network, neutron shouldn't care.
22:39:12 gregr_3456, right.
22:39:39 Disagree
22:40:02 one sec
22:40:08 let me respond to samuelbercovici first
22:40:10 Kanzhe: but from the VM management view the mgmt interface needs to be modeled.
22:40:20 that's a bigger question
22:40:33 Either Neutron is responsible for the mgmt network, or it affects availability (HA) of the {plugin+VM} subsystem
22:41:13 samuelbercovici: this framework/library tries to achieve spinning up the VM, that's correct, but in a way that it can be reused across services and components
22:41:47 right.
22:41:50 which will work for some components and not others
22:41:56 yamahata, amotoki: agree, the serial interface may not be for a neutron network
22:42:05 heck, we can't even prescribe the guest OS
22:42:21 but probably the framework still needs to manage it
22:42:35 geoffarnold, gregr_3456: over to you :-)
22:42:52 let's have one conversation
22:43:06 So I've taken a pessimistic approach
22:43:16 SumitNaiksatam: ok, I agree. the serviceVM library should still capture the serial interface config.
22:43:24 Kanzhe: thanks
22:43:33 geoffarnold, gregr_3456: carry on
22:43:50 it needs to work with as many kinds of virtual/physical appliances, new and existing, as possible
22:43:50 should be able to specify a management interface from the library's pov, whether network or serial or whatever...
22:43:51 ic. i am asking because we implement lbaas for radware using "service vms" that are not visible to the tenant. we consumed standard openstack capabilities for this. i would not assume that the way we did it is generic, but it works for our solution.
22:44:18 samuelbercovici: i am sure it does
22:44:31 Samuel: we should sit down and compare what we did for Vyatta in DNRM with the Radware LBaaS
22:45:00 samuelbercovici, geoffarnold: yes, we can see what the commonalities are and bring them in
22:45:02 In our case we have to create a router to access the VM
22:45:03 And ideally add a third virtual appliance to break ties ;-)
22:45:06 we also have a service vm implemented as a firewall
22:45:17 rudrarugge: welcome to the party :-)
22:45:18 there were many small nits that we needed to have. many of those were around supporting our ha model
22:45:24 :)
22:45:40 samuelbercovici: aha, that's the point
22:45:48 I looked at the device inventory BP for LB, and it generalizes nicely to what we did in DNRM
22:45:59 geoffarnold: great
22:46:07 it would be helpful to gather these 'nits' and try and capture as many as possible for a common framework
22:46:35 gregr_3456: +1, that is why we are proposing a "common" framework
22:46:40 My feeling about frameworks is that we need a minimal mandatory piece (so we can all get along) together with "best practices"
22:46:49 Most likely geoffarnold's DNRM is the first consumer of the serviceVM framework.
22:47:04 Well, Radware too.
22:47:04 geoffarnold: what is DNRM? may be a silly question.
22:47:07 some nits for us: using dhcp-assigned addresses for L3 mode to be used by service instances
22:47:10 geoffarnold: agree, so that's a good thing, right?
22:47:16 Yes
22:47:33 yes +1
22:47:43 so here is a general statement - there will be common requirements and there will be specifics
22:47:44 What's DNRM? This: https://blueprints.launchpad.net/neutron/+spec/dynamic-network-resource-mgmt
22:47:53 so maybe it would be best to list all current implementations for service vms and gather the nits to establish "best practices" before we try to drive commonality
22:48:08 I'm working to carve it up into multiple blueprints, but this captures the end-to-end use cases
22:48:12 geoffarnold: thanks
22:48:14 we need to at least capture the common requirements
22:48:42 do we have time for use cases?
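(Aside: a hypothetical sketch of the interface-allocation idea discussed from 22:32 onward — the consumer of the service VM framework (a service plugin/driver) requests some number of management and data-plane interfaces, and a management interface need not be a NIC; it could be, say, serial. Nothing below is the proposed library API; all names and identifiers are illustrative.)

    # Hypothetical sketch only -- not the proposed serviceVM library API.
    def management_interface(kind="nic", network_id=None):
        return {"role": "management", "kind": kind, "network_id": network_id}

    def data_interface(network_id):
        return {"role": "data", "kind": "nic", "network_id": network_id}

    # e.g. one out-of-band serial console plus two data-plane NICs
    # (image name and network ids are made up for illustration):
    vm_request = {
        "image": "vendor-fw-image",
        "interfaces": [
            management_interface(kind="serial"),
            data_interface("net-a"),
            data_interface("net-b"),
        ],
    }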
22:48:49 samuelbercovici: being common is not being prescriptive
22:49:07 gregr_3456: yes please, in case you want to bring up any
22:49:09 Most likely the common framework will satisfy a targeted use case at the beginning, then expand to support more.
22:49:10 I've restricted myself to service VMs that are completely invisible to the user; they're all owned by the "DNRM" project ;-)
22:49:41 ok, on to use cases
22:49:49 geoffarnold: sure. worst case, we can do it in HK :-)
22:50:09 What's on the agenda for the Thursday meeting at BigSwitch?
22:50:10 the most common case is likely a private service owned by a tenant
22:50:41 but other use cases include shared service VMs that are owned by the admin/operator
22:51:06 and for scale-out, a logical service may be hosted across multiple service VMs
22:51:19 any disagreement with this?
22:51:27 gregr_3456: agree, i think those capture most of the cases
22:51:36 I am guessing shared service VMs are not a common use case
22:51:36 we are similar to geoffarnold in that we have service VMs invisible to tenants
22:51:44 What's the mapping between "service" and "logical resource instance"?
22:51:53 (The latter in the Neutron API sense)
22:51:58 terminology has been a problem
22:52:12 geoffarnold: we will try to make progress on the same discussion (but we will have a chance to pull each other's hair out in person :-))
22:52:19 In an implementation at Cisco the service VMs are also invisible to tenants
22:52:20 Excellent
22:52:40 i have been using "logical service" to describe the one specified by the Neutron API
22:52:44 Private service owned by a tenant but provided by the operator?
22:52:45 Sumit: Do we have any hair left?
22:52:56 I have plenty (beard too)
22:53:01 Snigs: :-)
22:53:22 samuelbercovici: the service VM is almost always invisible to the tenant
22:53:27 I'm concerned about the interaction with placement and HA issues
22:53:37 i am an IRC newbie, need to learn to respond to individuals. :)
22:53:57 +1 geoff
22:53:58 And logical-to-physical mappings for things like multi-tenant big-iron LBs and routers
22:54:08 ha is probably the biggie here
22:54:20 samuelbercovici, bobmel: there is no suggestion in this proposal to make it explicitly visible to the tenant
22:54:42 SumitNaiksatam: k
22:54:47 Well, the BP isn't exactly clean on that
22:54:55 rudrarugge: a service owned by the provider
22:54:58 Implies Heat might know more about mappings
22:55:03 geoffarnold: ok
22:55:23 SumitNaiksatam: +1. The framework is for other modules to consume, not meant for tenants.
22:55:30 geoffarnold: heat will never interact with the library/framework proposed here
22:55:40 In DNRM we lump all that complexity into a couple of policy engine black boxes, contents TBD
22:55:53 yes, a service owned by the provider and pinned to a tenant is another variant
22:56:02 planning to update the spec after this meeting
22:56:09 geoffarnold: great - policy is separate from mechanism
22:56:18 geoffarnold: this blueprint is only for the mechanism
22:56:27 gregr_3456: thanks
22:56:40 When we're considering use cases, let's include the software delivery use case - new virtual appliance installation in a running cloud
22:56:43 thanks for the feedback
22:56:44 nice discussion everyone - enough to keep gregr_3456 busy i think :-)
22:56:57 Thanks all
22:56:59 geoffarnold: good topic for the F2F meeting
22:57:00 thanks
22:57:03 ok, one more topic
22:57:05 thank u all.
22:57:07 not done :-)
22:57:09 ?
22:57:14 good night
22:57:16 VM creation is triggered by some neutron cli, right?
22:57:24 NO NO NO!!!!
22:57:31 still here...
22:57:33 garyduan: No
22:57:34 :-)
22:57:41 nati_ueno: there?
22:57:47 Sorry, I'm late to jump in
22:57:51 Really need to decouple that - see my BP
22:57:57 we wanted to discuss the common agent model
22:58:02 SumitNaiksatam: yep
22:58:10 well, I mean in the use case
22:58:13 #topic common L3 agent framework
22:58:16 plugin, driver, now agent. Hmmmm
22:58:16 SridarK: there?
22:58:23 yes, here
22:58:25 good
22:58:30 i don't think we have enough time
22:58:32 certainly implementation-wise, it's some internal APIs
22:58:36 but just want to put it out there
22:58:37 that's my understanding
22:59:02 we currently have a situation where we have three different flavors of the L3 agent in the reference implementation
22:59:14 L3, FWaaS, VPNaaS
22:59:17 However since my first DNRM implementation spins up discrete L3 routers....
22:59:38 Didn't Cisco have a refactoring proposal for that?
22:59:42 as suggested by nati_ueno (and also SridarK) earlier, we need to have a better approach towards these
22:59:57 Is mastery there?
22:59:59 nati_ueno, SridarK: thoughts?
23:00:07 mestery
23:00:10 yes. we should have a generic service-agent
23:00:26 geoffarnold: bobmel was driving that i believe
23:00:35 the refactoring is complete
23:00:37 That's right - I forgot
23:00:38 yes, and plugin service or vendor-specific agents
23:00:40 but that is for the plugin
23:00:50 nati_ueno, SridarK: go ahead
23:00:51 We need a whiteboard
23:01:01 https://docs.google.com/presentation/d/1e85n2IE38XoYwlsqNvqhKFLox6O01SbguZXq7SnSSGo/edit#slide=id.p
23:01:10 This was the slide I sent to the mailing list
23:01:24 nati_ueno: thanks
23:01:38 geoffarnold: we can potentially have this discussion as well at the F2F
23:01:44 In that mailing thread, option 2-2 was the conclusion
23:01:47 but this is the best we have in terms of opening it to the community
23:01:55 nati_ueno: thanks
23:02:05 so we have a service-agent, and it will provide some hooks (event-driven model)
23:02:18 OK, I have to drop off. Thanks guys. See (some of) you Thursday
23:02:23 then each service agent driver will hook into that
23:02:34 nati_ueno: +1
23:02:45 geoffarnold: thanks
23:02:59 Any thoughts?
23:03:15 so in Icehouse, I wanna leave the current l3-agent as-is for backward compatibility
23:03:17 nati_ueno: i guess we don't need to have a conclusion now, we are past the meeting time
23:03:23 sure
23:03:33 nati_ueno: in agreement - let's talk more
23:03:35 but i think it will be good for everyone to know this so that we can keep discussing
23:03:37 i am +1
23:03:43 SridarK: agree
23:03:56 #topic Open discussion
23:04:02 we can have a follow-up meeting
23:04:16 maybe at a convenient time for other folks
23:04:25 which topics did we leave out?
23:05:03 or what would you like to be discussed in the context of "advance services" and common requirements?
23:05:33 we use the mailing list as well
23:05:38 any parting thoughts?
23:06:00 nice discussion.
23:06:03 SumitNaiksatam: How about having an etherpad page?
23:06:05 Since we are discussing different types of insertions and implementations, I think it would be good to discuss "retrieving" the physical network mapping
23:06:12 SumitNaiksatam: it will help the summit discussion also
23:06:17 which is not easily doable as is
23:06:32 nati_ueno: sure, i will add one, and link this meeting
23:06:38 SumitNaiksatam: Thanks!
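(Aside: a hypothetical sketch of the generic service-agent with hooks discussed around 23:00-23:02 above — one agent owns the event plumbing and each service (FWaaS, VPNaaS, ...) registers a driver against its hooks instead of forking its own copy of the L3 agent. The class, method, and event names are assumptions; "option 2-2" in the linked slides may look different.)

    # Hypothetical sketch only -- event and method names are made up.
    class ServiceAgent(object):
        def __init__(self):
            self._hooks = {}  # event name -> list of driver callbacks

        def register(self, event, callback):
            self._hooks.setdefault(event, []).append(callback)

        def notify(self, event, **kwargs):
            for callback in self._hooks.get(event, []):
                callback(**kwargs)

    agent = ServiceAgent()
    # a firewall driver might subscribe to router lifecycle events:
    agent.register("router_added", lambda router_id: print("apply fw rules on %s" % router_id))
    agent.notify("router_added", router_id="router-42")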
23:06:42 ivar-lazzaro: good point
23:06:50 Also we should have a weekly meeting
23:06:50 let's do that as a follow-up
23:07:01 because this one needs a lot of time to discuss
23:07:10 sometimes it looks like a forever discussion :P
23:07:11 +1, we can set this up as a weekly meeting, maybe at a different time
23:07:24 awesome
23:07:25 +1 weekly until the summit
23:07:25 nati_ueno: agree
23:07:27 +1
23:07:33 +1
23:07:36 Agree
23:07:40 ok, i can set one up, will check for a convenient time
23:07:49 Good
23:07:53 i know enikanorov is half asleep :-)
23:08:02 or maybe more than half
23:08:13 alrighty, thanks everyone for your time
23:08:18 good
23:08:21 looking forward to the next one
23:08:36 #endmeeting