14:00:40 #startmeeting neutron lbaas
14:00:41 Meeting started Thu Feb 27 14:00:40 2014 UTC and is due to finish in 60 minutes. The chair is enikanorov. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:45 The meeting name has been set to 'neutron_lbaas'
14:00:48 Hello!
14:01:04 hi!
14:01:10 hello.
14:01:24 hi
14:01:29 Hello
14:01:34 hi
14:01:42 hi
14:01:48 i guess we're missing some folks who have actively participated in the discussion on the ML
14:02:15 hi
14:02:35 anyway, we're going to continue the discussion here today and clarify some details on our proposal
14:03:12 so far I've seen quite a tough discussion on proposal #3 as per this page: https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion
14:03:42 Picture 4.1 is also a good representation of the object model, except for cluster and loadbalancer, which we would like to discuss separately
14:04:29 I wonder what the outcome is from the discussion with Jay Pipes on the ML.
14:04:56 iwamoto: i think we're not going in that direction
14:05:02 Oh! He and I had a google hangout earlier this week--
14:05:16 His plan was to flesh out his idea more so we could discuss it.
14:05:40 I think he's planning on making a blueprint and wants to discuss specific deployment scenarios.
14:05:47 the problem is that his proposal is a complete redefinition of the whole user experience
14:05:52 But I think it's going to take him a week or two to get that far.
14:06:15 I liked what he was saying, in a very general sense… I would like to see what he proposes more officially before any kind of decision is made there.
14:06:17 and i'm not sure it fits the neutron ideology
14:06:24 Yes-- and in fact we discussed the point you brought up during that hangout.
14:07:01 my concern is that our discussion takes us quite far from incremental model improvement
14:07:04 hi all
14:07:09 hi sam
14:07:36 I suspect in the long run he's going to need a more complicated model to support some of the more complicated deployment scenarios-- though it's not my idea, so I'm not entirely sure. I'd like to give him a chance to flesh out his idea more.
14:07:54 sbalukoff: exactly, and i think it's obvious
14:07:57 enikanorov: I agree.
14:08:30 so let's get back to identifying the drawbacks of the proposed model with vips/listeners/pools
14:08:34 before we start
14:08:56 sbalukoff, +1 on letting him flesh out his idea more.
14:08:56 i'd like to remind everyone of the notion of 'root object' that we have right now (which is pool)
14:09:31 enikanorov: can you define what a root object is?
14:09:39 the sole purpose of the root object is to be able to specify a 'configuration unit'. that has nothing to do with backend or provider
14:09:49 but it has something to do with the flavor
14:10:23 so you create *some* root object with the flavor, which defines capabilities
14:10:41 and then you expect that everything connected to that object complies with those capabilities
14:11:06 so once again, it's not any kind of implementation detail, it's a user expectation
14:11:40 if you created a service with a 'High-end' flavor, you expect every single bit of it to be 'high-end'
14:11:45 enikanorov: but flavors are an implementation detail
14:12:05 nope, it's an essential part of the API
14:12:28 sam: how is it an impl detail?
14:12:31 also, i have not seen any details about flavors
14:12:32 how is that a detail?
14:12:50 I'd also like to see a better definition of what an 'implementation detail' is, eh. :)
14:12:55 samuelbercovici: I've sent out an email to the ML with a draft design
14:13:14 That's true-- flavors are still pretty nebulous at this point.
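(Editor's note: the flavor idea debated above can be sketched roughly as follows. This is only an illustrative sketch of "flavor as a named set of capabilities that later drives provider selection"; all names, capability strings, and providers here are assumptions, not the draft design posted to the ML.)

```python
# Hypothetical sketch: a flavor is a user-facing bundle of capabilities;
# the provider is an implementation detail chosen later by a scheduler.
from dataclasses import dataclass


@dataclass(frozen=True)
class Flavor:
    name: str
    capabilities: frozenset  # e.g. {"l4", "http", "ssl", "l7"}


# Illustrative providers and what each one can do (assumed, not real data).
PROVIDERS = {
    "haproxy": frozenset({"l4", "http"}),
    "vendor_x": frozenset({"l4", "http", "ssl", "l7"}),
}


def schedule(flavor, providers=PROVIDERS):
    """Pick any provider whose capabilities cover the flavor's."""
    for name, caps in providers.items():
        if flavor.capabilities <= caps:
            return name
    raise LookupError("no provider satisfies flavor %r" % flavor.name)


high_end = Flavor("High-end", frozenset({"l4", "http", "ssl", "l7"}))
```

The point of the sketch is that the user only ever names the flavor; everything attached to the resulting root object is expected to satisfy the same capability set, while the provider mapping stays hidden.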
14:13:23 anyway, whatever name you may give to the flavor, it's a representation of the user's expectation of the service
14:13:41 enikanorov: well, flavor in the context that was discussed was mainly around scheduling based on required features
14:13:44 right now we have a simplistic 'flavor', which is provider
14:14:26 when we move to flavors, provider stays and is assigned during scheduling based on flavor
14:14:38 so in this sense, flavors/providers should be discussed after we complete a logical model
14:15:11 i disagree, it's an important part, because they define resource grouping
14:15:27 enikanorov: +1
14:15:37 service capabilities are a part of the API, in one way or another
14:15:52 flavor is just a simple representation of a group of parameters
14:16:02 enikanorov: as far as I see, both in radware's case as well as the citrix driver, this is not true
14:16:14 I think "resource grouping" is an example of an implementation detail
14:16:18 what exactly is not true?
14:16:40 iwamoto: i'd say it's an example of user expectation
14:16:53 pool is not the 'root object' if I understand your definition, and it does not define the grouping model
14:17:25 currently it is
14:17:26 samuelbercovici: right now it is, we're trying to get rid of that because of inconsistency with L7 requirements
14:17:41 ok, anyway
14:17:55 i'd like to see how it will work if we don't have the notion of a root object
14:18:03 do you have an idea?
14:18:37 yes, but we should agree on what the logical model is first
14:19:23 folks, we have certain constraints, which are bw compatibility and implementation complexity
14:19:27 i have added to the wiki a slight modification to the current model that cleans the model from considerations of scheduling and grouping.
14:19:37 so before agreeing on a logical model, i'd like to see how it can be implemented
14:20:03 samuelbercovici: are you referring to a google doc file?
14:20:34 i would like you all to look at this and tell me if this model can represent the use cases at hand. this logical model is the same as currently exists, with removal of a few constraints
14:20:41 enikanorov: yes
14:21:03 i've read the doc, to me it seems that it doesn't address the concerns I have
14:21:06 is there a url for the google doc?
14:21:17 #link https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#
14:21:18 if we agree on this model, we can discuss how to schedule and if/which constraints should be added
14:21:37 samuelbercovici: I'm just seeing this for the first time (Sorry! Haven't had time to catch up on discussion of the last 24 or so hours)
14:22:08 enikanorov: Can you specify a specific concern or two that is not addressed?
14:22:23 er.. sorry, can you mention a specific concern?
14:22:28 (Sorry, it's early here. XD)
14:22:32 samuelbercovici: ip address reuse, for instance
14:22:50 samuelbercovici: in your proposed model you removed statistics and statuses from pools and members, do you think users don't need to know all of this?
14:22:59 from the workflows specified, i can't see how that is achieved
14:23:28 and yes, I am considering implementation here. once i know how that can be implemented - i can agree or disagree
14:24:03 obondarev: They are needed. they might need addressing/modification when we start talking about how this actually gets scheduled
14:24:26 obondarev: I am trying to separate concerns.
14:24:36 if they are needed then your proposed logical model is incomplete
14:24:50 ok, maybe we can go one step at a time
14:24:59 when i do the following:
14:25:07 1) create-vip vip_parameters
14:25:18 what is happening as a result?
14:25:21 samuelbercovici: ?
14:25:38 obondarev: correct. as stated, it addresses the concern of configuring the service. scheduling, and getting data back from the device, should be discussed after
14:25:43 (i'm just trying to piece together your idea)
14:26:31 enikanorov: i am not sure i understand your question
14:27:03 i'm trying to understand what happens in the background when a user issues lbaas commands
14:27:17 for instance, right now we need to start with pool-create
14:27:40 at that point the pool is stored in the db, it has a provider assigned and is scheduled to an agent (if it's haproxy)
14:27:55 and when we create a vip for the pool, the configuration is actually deployed
14:28:10 so i'm trying to understand how that will work with the API you are proposing
14:29:42 enikanorov: I understand that the current haproxy implementation starts to immediately schedule on the pool creation. after reviewing all use cases, we might decide that scheduling this way might not work well for haproxy.
14:30:18 samuelbercovici: that's correct, but i'm asking how you see the process, and not how it will work with the haproxy provider
14:30:28 enikanorov: for example, another strategy is to wait till all the related information (currently vip + pool) is available before scheduling
14:30:46 yes, that is an option
14:31:16 so you've got vip+pool and the driver has scheduled that to some backend
14:31:30 now you want to add an ssl vip or listener to your pool
14:31:40 another port, same ip, different protocol
14:31:43 enikanorov: There are alternatives, but I want to first ground the logical object graph and see that it can be used to address all types of configuration before we discuss scheduling.
14:31:45 how will you do that?
14:33:54 we can assume we agreed on the logical model and start discussing further to see what challenges we may face with that model
14:34:24 obondarev: on which one? we have two models to discuss
14:34:26 obondarev: well, if you look at the different discussions on the ML, this is still not the case.
14:34:54 * IgorYozhikov is now away: went away...
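(Editor's note: the two scheduling strategies contrasted above — the current haproxy behavior of scheduling as soon as the pool is created, versus Sam's suggestion to wait until all related information (vip + pool) exists — can be sketched roughly like this. The class and function names are illustrative assumptions, not the actual Neutron LBaaS code.)

```python
# A minimal sketch of "deferred" scheduling: objects are only recorded in
# the DB on create, and nothing is deployed until both a pool and at least
# one vip referencing it exist.


class Database:
    """Stand-in for the Neutron DB: just two dicts of records."""

    def __init__(self):
        self.pools = {}
        self.vips = {}


def pool_create(db, pool_id):
    # Current haproxy flow would also pick a provider/agent here;
    # the deferred strategy only records the object.
    db.pools[pool_id] = {"status": "PENDING_CREATE"}


def vip_create(db, vip_id, pool_id):
    db.vips[vip_id] = {"pool": pool_id, "status": "PENDING_CREATE"}


def deferred_schedule(db, pool_id):
    """Deploy only once the pool and at least one of its vips exist."""
    vips = [v for v in db.vips.values() if v["pool"] == pool_id]
    if pool_id in db.pools and vips:
        db.pools[pool_id]["status"] = "ACTIVE"
        for v in vips:
            v["status"] = "ACTIVE"
        return True
    return False
```

Under this sketch, `pool-create` alone changes nothing on any backend; the first `vip-create` supplies enough information for the driver to place the whole configuration at once.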
14:35:14 let's start from whichever of these two models, let it be sam's proposal
14:35:51 Hi Jay!
14:36:07 obondarev: i think we are discussing it already
14:36:08 :)
14:36:14 sbalukoff: hello! :)
14:36:20 hi Jay
14:36:20 enikanorov: morning!
14:36:35 * jaypipes was off yesterday...
14:36:54 enikanorov: yeah, so I just want to continue discussing use cases
14:36:57 and i'm trying to understand how that model addresses the scenario i'm talking about
14:37:15 +1 enikanorov
14:37:54 enikanorov: do you mean scheduling?
14:38:08 could someone be so kind as to quickly summarize anything I missed? :)
14:38:13 obondarev: +1
14:38:17 no, i mean what i need to do to have another vip on the same IP for an already existing backend
14:38:28 in fact, as a user i don't know anything about the backend
14:38:38 i just want another VIP on the same IP and a different port
14:39:11 jaypipes: I think there are a lot of people here who would like to see you flesh out your proposal as you and I discussed on the hangout. Otherwise, we've mostly been talking about Sam's proposal that he sent to the mailing list about a day ago.
14:39:13 jaypipes: you've missed all and nothing :) we're in the middle
14:39:22 jaypipes: currently we are discussing sam's proposal on the new object model: https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#
14:40:09 enikanorov: in https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit?pli=1#heading=h.3rvy5drl5b5r, see under the use cases, use case 2
14:40:20 hehe
14:40:31 OK, let me go read Sam's proposal.
14:40:44 samuelbercovici: right, but that just doesn't give any details
14:40:51 * jaypipes goes and reads all and nothing ;)
14:41:22 you create a brand new vip with some ip address, and what does lbaas need to do?
14:41:39 find an existing vip with the same ip and attach it to the same port? why?
14:41:57 what will be the result of call #2?
14:42:05 #b i mean
14:42:07 enikanorov: vip1 and vip2 have the same ip
14:42:12 right?
14:42:19 yes, they should
14:42:35 the difference is that they have different tcp ports
14:42:39 right
14:43:30 enikanorov: so a driver that needs to have both vips on the same backend can schedule them that way
14:43:51 well, here come our sweet impl details
14:43:54 which driver?
14:44:03 you haven't provided a driver to the call
14:44:13 how would lbaas know your intentions?
14:44:43 you are creating two vips with some parameters where the ips happen to be the same
14:45:09 Is there any driver that can put the same IP on two different back-ends and have that actually work? (I'm not aware of any actual implementation that can do this.)
14:45:35 sbalukoff: no one is interested :) everyone is interested in the logical model :P
14:45:58 anyway, the problem is fundamental
14:46:16 when you create a vip, at least you expect that a DB object will be created
14:46:25 in haproxy's case can't you spin up two haproxy instances, both on the same ip but listening on different ports?
14:46:28 sbalukoff: for example using SDN, you can route to different back ends based on the ip+port
14:46:59 blogan: Yes, but that's different than having the same IP on two different back-ends.
14:47:01 samuelbercovici: which entity will do the routing?
14:47:19 ah i see, two different backends
14:47:35 ok, going back to two vips
14:47:53 when you just issue two create-vip cmds, you get two entries in the DB, no deployment yet
14:47:54 enikanorov: but obviously this might not be the common case
14:48:17 sharing an IP address between the vips?
14:48:27 That's a very common case.
14:48:31 it is a very common use case i'd say
14:48:50 HTTP + HTTPS on a single IP is very common, and unavoidable because of the way DNS works. :)
14:49:02 sbalukoff: i mean having the same vip with different ports deployed on different backends
14:49:12 ah, that's for sure
14:49:17 sbalukoff: ++
14:49:19 samalba: Yes, indeed.
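(Editor's note: the question raised above — how a driver could co-locate two vips that happen to share an IP, e.g. the common HTTP + HTTPS on one address — can be sketched as a simple grouping step before placement. The record shapes and field names are illustrative assumptions, not the actual Neutron schema.)

```python
# A minimal sketch: before scheduling, group pending vip records by their
# IP address so same-IP vips can be placed on the same backend together.
from collections import defaultdict


def group_by_ip(vips):
    """Return {address: [vip records]} so co-location is a single lookup."""
    groups = defaultdict(list)
    for vip in vips:
        groups[vip["address"]].append(vip)
    return dict(groups)


vips = [
    {"id": "vip1", "address": "10.0.0.5", "port": 80},    # HTTP
    {"id": "vip2", "address": "10.0.0.5", "port": 443},   # HTTPS, same IP
    {"id": "vip3", "address": "10.0.0.9", "port": 80},
]
```

This is exactly the intent-inference problem enikanorov points out: the grouping only works if lbaas decides that a shared IP implies co-location, rather than the user expressing that intent explicitly via a root object or provider.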
14:49:30 samuelbercovici: let's go back to the two vips case
14:49:38 Er? sorry, meant samuel (stupid auto-complete!)
14:49:48 let's say you've created both and get two entries in the DB
14:49:59 what should happen next?
14:51:42 enikanorov: so as an example, if we use existing semantics
14:54:07 with the way the tenant specifies the "provider", both vips get sent to the provider. we will probably not allow using different providers. the provider will probably use some logic to schedule both vips on the same backend
14:54:12 alternatively
14:54:43 well, you see, you have to use that 'impl detail' such as provider
14:54:51 and with flavors that will not be possible
14:55:01 because flavor doesn't control provider directly
14:55:02 So, one of the problems I see with a purely logical discussion is that sometimes it might dictate an implementation that is not physically possible using any existing implementation technology. So in proposing different logical models, I would like to see specific use cases fleshed out with at least one suggestion that drivers might use to do actual implementation (this is so that we keep our logical models within the realm of physical possibility for foreseeable use cases).
14:55:11 (I mention this as we're running out of time in this meeting.)
14:55:28 enikanorov: as i said, this will be addressed. i want to see that even before we solve it, the model can support all the use cases people wanted
14:55:33 (And again, I'd love to see these idea proposals fleshed out. Both Samuel's and Jay's.)
14:56:06 samuelbercovici: well, as far as I see the API, there is a case that people want, but it can't be implemented
14:56:13 that's the fundamental problem
14:56:54 enikanorov: ok. the next step for me is to propose how the scheduling happens
14:57:09 +1 would like to see jaypipes' proposal
14:57:18 samuelbercovici: exactly!
14:57:40 enikanorov: will do this next and will send next week
14:57:41 samuelbercovici: my guess is that you'll end up with something very similar to #2 or #3
14:57:49 sure, thanks
14:57:49 Why would implementation drive the logical model? Just because it's possible to express something that can't currently be implemented doesn't mean that the model is bad...
14:58:22 edhall: I think we need to operate within the realm of the physically possible.
14:58:28 meanwhile, I would appreciate it if people would address the proposed logical model and comment on the document if something is missing
14:58:29 edhall: what's the point of the model if it can't be implemented?
14:58:33 That doesn't mean we can't invent new technology here.
14:58:43 samuelbercovici: one thing I'd like to see in that Google doc is a more user-focused definition of the use cases. For example, "Single Vip with L4 load balancing", in my mind, is not a user-focused use case. It's an implementor-focused use case. I'd love to see that expanded to be a description of the way a user thinks about things (i.e. Alice has a monolithic web application running on 1 node using X web server/platform. She wants to add multiple identical instances to increase scalability. She will need to redirect her single floating IP address to a load balancer, which should spread load across all her new instances in a balanced fashion, etc.)
14:59:21 There is no reason that an API can't respond "sorry, I can't do that."
14:59:27 But any new technology also needs to be physically possible.
14:59:29 edhall: ++
14:59:36 edhall: right, that's a subtle limitation
14:59:44 jaypipes: i would like to take it offline with you to understand this
14:59:50 i'm not sure it's better than a clean API
14:59:55 samuelbercovici: absolutely!
15:00:28 samuelbercovici: we can do so in #openstack-neutron after the meeting.
15:00:29 ok, thanks everyone for the discussion
15:00:42 #endmeeting