14:00:27 #startmeeting neutron lbaas
14:00:28 Meeting started Thu May 29 14:00:27 2014 UTC and is due to finish in 60 minutes. The chair is enikanorov_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:32 The meeting name has been set to 'neutron_lbaas'
14:00:32 morning
14:00:49 #link https://wiki.openstack.org/wiki/Network/LBaaS#Agenda
14:01:03 morning
14:01:03 this is the agenda for the meeting
14:01:24 blogan: do you want to give an update on the blueprint?
14:01:35 sure
14:02:01 i updated the BP to reflect the change to 1:M LoadBalancer to Listener
14:02:26 is there an agreement on N:M vs 1:M?
14:02:27 Unless there are objections to this, I assume that's the way we want to go for now at least
14:02:54 +1 for 1:M
14:02:59 can you put a link to the updated blueprint (#link)?
14:03:00 +1 for 1:M
14:03:02 agreed
14:03:05 +1 1:M
14:03:12 Youcef: https://review.openstack.org/#/c/89903/8/specs/juno/lbaas-api-and-objmodel-improvement.rst
14:03:16 sbalukoff did wonder if we should put IPv4 and IPv6 addresses on the load balancer object
14:03:25 I'd like to get everyone's thoughts on that
14:04:14 anyone have thoughts?
14:04:18 I'd be for it. :)
14:04:24 lol
14:04:27 hmm
14:04:30 two addresses per node or mixed addresses or both?
14:04:32 #link https://review.openstack.org/#/c/89903/8/specs/juno/lbaas-api-and-objmodel-improvement.rst
14:04:43 Mixed addresses.
14:04:50 i thought that was the initial idea, but then we switched to a single address per LB
14:04:52 If you wanted many ip addresses then it now seems weird if you do that
14:04:59 So, up to one IPv4 and one IPv6
14:05:03 I like single
14:05:07 depends on how much code it would take; i'd rather take the smallest bite that we can, and nail it.
14:05:07 me, too
14:05:09 simpler
14:05:13 just wanted to clarify
14:05:15 jorgem: agree, me too
14:05:35 just create another lb if you want IPv6
14:05:39 I agree
14:05:47 +1
14:06:08 but then N:M may make sense
14:06:13 Is there a type of ipaddress to know whether you are reading an IPv4 or an IPv6?
14:06:13 So what would we miss out on if we didn't have the IPv6 on the load balancer?
14:06:26 (i mean the LB:Listener relation)
14:06:41 enikanorov_: yeah that was the main reason for that relationship
14:07:28 hello!
14:07:29 blogan: It's more work for the user to maintain two identical listeners + sub objects.
14:08:05 sbalukoff: yes that is one big reason
14:08:29 It's also extremely common for back-ends that can do IPv6 to be able to do both IPv4 and IPv6.
14:08:35 I think that starting with 1:N with the aim of moving to M:N is not good
14:08:50 samuelbercovici: agreed.
14:08:58 samuelbercovici: what do you think about an ipv4 address and ipv6 address on the load balancer object?
14:09:00 Having IPv4 and IPv6 on the same lb makes sense for one use case. However, when considering the myriad of use cases (i.e. combinations of single to multiple ip addresses) then you have an inconsistent way of doing things. For example, you could technically have 2 ways of load balancing: 1) One lb with IPv4 and IPv6, or 2) One lb with IPv4 (IPv6 nulled out) and one with IPv6 (IPv4 nulled out). I think this becomes confusing.
14:09:53 i think N:M should be ok. If a user wants an advanced use case (ipv4+ipv6) it's fine to let them deal with some API complexity around it (like attaching listeners to another LB)
14:09:54 jorgem: I don't think it's that confusing. you've actually just listed all the possible permutations.
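Youcef's question at 14:06:13 about telling IPv4 from IPv6 apart has a standard-library answer in Python (the project's language): `ipaddress.ip_address()` reports the family via `.version`. The sketch below is one hypothetical reading of the "up to one IPv4 and one IPv6 per load balancer" idea being discussed; the field names `vip_ipv4` and `vip_ipv6` are illustrative only and are not taken from the blueprint.

```python
import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoadBalancer:
    """Sketch of a load balancer carrying up to one VIP per address family.

    Field names are illustrative; the actual blueprint fields may differ.
    """
    name: str
    vip_ipv4: Optional[str] = None
    vip_ipv6: Optional[str] = None

    def __post_init__(self):
        # Validate that each address, if present, matches its declared family.
        if self.vip_ipv4 is not None:
            assert ipaddress.ip_address(self.vip_ipv4).version == 4
        if self.vip_ipv6 is not None:
            assert ipaddress.ip_address(self.vip_ipv6).version == 6

# Dual-stack LB using documentation address ranges.
lb = LoadBalancer(name="web", vip_ipv4="203.0.113.10", vip_ipv6="2001:db8::10")
```

Either VIP can be left as `None`, which covers jorgem's "IPv6 nulled out" permutation without a second object type.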
14:10:00 if we take out implementation effort, I think that the M:N relationship is the correct one
14:10:37 sbalukoff: What if I wanted more than 2 ip addresses (some of our customers do this and I don't know why really)
14:10:49 samuelbercovici: are you advocating we start with the M:N relationship from the beginning?
14:11:01 blogan: yes
14:11:05 I'm just having a hard time seeing many actual practical cases where N:M will actually be used other than the one IPv4 + one IPv6 case.
14:11:16 me, too
14:11:20 sbalukoff: +1
14:11:23 +1
14:11:25 and there is a workaround with two lbs
14:11:34 Sorry for asking, but what would be the use case for M:N, in short words?
14:11:50 so why then do we want to go eventually to M:N? what is the use case that cannot be accomplished?
14:12:20 aburaschi: it allows a user to reuse a listener that's already assigned to a load balancer, so they don't have to redo their configuration for a load balancer with an IPv6 vip
14:12:54 if it's an edge case then I'm fine. Just wanted to see if anyone else had a strong position on it. Now that I think about it I'm not really adamant about it.
14:12:59 Youcef: If we do the IPv6 and IPv4 attributes I'd suggest we don't go to the M:N
14:13:09 aburaschi: in addition to ipv4+ipv6, a user wants to expose his application to the internet on one network using a virtual ip and then expose the same application on a different network facing only internal users
14:13:15 My point is that I don't see a reason for N:M *except* for the IPv4 + IPv6 use case being discussed. So, I'm a fan of 1:N only, with the caveat that "1" in this case is an entity that can have up to one IPv4 and one IPv6 address.
14:13:28 samuelbercovici: good point
14:13:47 blogan: but even if we don't, we can still accomplish the ipv4+ipv6 with 2 lbs, right?
14:13:55 samuelbercovici and blogan, thanks, that makes sense.
14:13:56 if i've got two network configs, is it reasonable to expect to share a listener?
14:14:17 @Youcef, not unless we implement N:M
14:14:21 dougwig: well, it's exactly the same application
14:14:28 samuelbercovici: In that use case, which seems a little contrived to me, I would say the work-around of having two different (non-shared) listeners is acceptable.
14:14:49 right, but if i'm defining policies on two different subnets, i usually expect to define two network configs, even if i'm duping them.
14:15:07 sbalukoff: it is as acceptable as using two different listeners for ipv4 and ipv6
14:15:08 sbalukoff: agree, having 2 listeners is not a big deal.
14:15:45 samuelbercovici: Except that I'm saying the IPv4 + IPv6 for the same "service" or "listener" is a far more common scenario. :P
14:16:02 sbalukoff: not in the use cases i have encountered
14:16:04 vivek-ebay: We can have ipv4+ipv6 with 2 different listeners (duplicate the listener), right?
14:16:08 what it really comes down to is making it easier for an end user
14:16:16 Youcef: right
14:16:32 blogan: Exactly.
14:16:41 easier could also mean more consistent, which could mean two listeners
14:16:49 so if it is the same application (listener and below) it should be defined once
14:16:52 Having two different non-shared listeners is not a show stopper. It's just less convenient for the user.
14:16:59 I see the point of sharing a listener. Nevertheless, what if we only have a 2:N relationship, being only an IPv4/6 issue?
14:17:31 I don't like creating special cases in an API --
14:17:32 aburaschi: what about the use case I have described? it can be more than 2...
14:17:42 xgerman_: +1
14:17:50 it's also a question of where it's easier. a UI can hide this. but the api can require separate building blocks, as per xgerman_'s point.
14:18:15 is keeping the user from having to do duplicate work and maintaining more than one listener/pool important enough to keep M:N?
14:18:25 how much implementation complexity do we see if we go with the N:M route?
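For readers following along, the 1:M and M:N alternatives being debated differ only in where the load-balancer-to-listener link lives. This is a minimal plain-Python sketch of the two shapes, not the actual Neutron schema; all identifiers are illustrative.

```python
# 1:M -- each listener row carries a plain foreign key to exactly one
# load balancer, so a listener cannot be reused across load balancers.
listeners_1m = {
    "listener-a": {"port": 80, "loadbalancer_id": "lb-1"},
    "listener-b": {"port": 443, "loadbalancer_id": "lb-1"},
}

# M:N -- the link moves into an association table, so the same listener
# can be attached to many load balancers (e.g. an IPv4 LB and an IPv6 LB
# fronting the same application).
listeners_mn = {"listener-a": {"port": 80}}
lb_listener_assoc = [
    ("lb-v4", "listener-a"),
    ("lb-v6", "listener-a"),
]

def lbs_for(listener_id, assoc):
    """Return every load balancer a listener is attached to (M:N only)."""
    return [lb for lb, listener in assoc if listener == listener_id]
```

The "two-lb workaround" under 1:M amounts to duplicating the `listener-a` row with a different `loadbalancer_id`, which is exactly the maintenance burden sbalukoff raises.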
14:18:37 samuelbercovici, sure :) Not trying to ignore the use case. Just trying to understand if a dual IPv4/6 interface is an option instead of an M:N relationship.
14:18:42 folks, i think the complexity of ipv4+ipv6 implemented with two LBs is solvable by a proper UI
14:18:46 It's quite a bit more complex to implement N:M
14:18:54 and then we may want to go for the more flexible N:M
14:18:57 testing is a big hassle
14:18:57 vivek-ebay: there was a thread on the mailing list a few days ago that outlined some warts it'd bring to the drivers.
14:19:16 dougwig brought up some very good points about inconsistencies that can creep up with N:M, especially if N is a large number. :)
14:19:18 vivek-ebay: This should be just API. Implementation can be the same either way I would think, right?
14:19:30 enikanorov_: the key challenge is with provisioning status and operation status on child objects
14:19:43 we are already going to face this with pools
14:20:09 samuelbercovici: that's a whole other big story... I wouldn't even try to address that within this bp
14:20:16 In any case, I'm willing to go with 1:M with no plans to go with N:M in the future... any user asking for IPv6 is likely to be slightly more advanced anyway. :)
14:20:33 +1 and might have automation
14:20:54 sbalukoff: I like this approach since it will keep it simpler
14:20:57 sbalukoff: +1
14:21:10 sbalukoff: I just don't want to go with 1:M and add the IPv4 and IPv6 attributes and then go to N:M and make the IPv6 attribute unnecessary
14:21:25 blogan: Good point.
14:21:32 +1 to blogan's point
14:21:50 I don't think N:M is good from an implementation perspective (*any* implementation) for reasons that dougwig pointed out on the mailing list.
14:21:55 can anyone specify why M:N is an issue from an end user perspective?
14:22:14 samuelbercovici: i think with a proper UI it's not an issue
14:22:18 samuelbercovici: i think it's good from an end user standpoint
14:22:26 M:N should make it easier for the end-user... i think it's more of an implementation complexity issue
14:22:39 agree
14:22:45 Implementation complexity versus user convenience.
14:22:48 I see either GUI users or API users (which hopefully means programmatic interaction). If GUI then the issue can be abstracted. If a programmatic user it means writing slightly more code. I think 1:N is easier to understand and implement, then, with these assumptions.
14:22:57 enikanorov_: that is true as well but I don't see how a UI will be able to maintain two different listeners that should be the same
14:22:59 vivek-ebay: I think M:N is confusing for the user, but this is a matter of opinion :)
14:23:01 In this case, people are saying the implementation complexity is too great for not much user convenience.
14:23:28 blogan: not sure i understand. N:M means sharing listeners between LBs
14:23:32 sbalukoff: +1
14:23:53 enikanorov_: oh i thought you were saying 1:N would be made simpler to do through the UI
14:23:55 so are we in agreement that M:N is the better approach from a user perspective?
14:23:58 can someone briefly explain the implementation complexity with M:N? briefly
14:24:30 vivek-ebay: dougwig sent an email about some of them on the ML
14:24:41 vivek-ebay: on Tuesday I think
14:24:48 briefly: "I don't have a strong objection, just an implementation shudder. Of the two backends that I'm familiar with, they support 1:N, not N:N. So, we fake it by duping listeners on the fly. But, consider the extreme, say 1000 LB's and 1 shared listener. How long does it take to create 1000 listeners? What happens when it fails on 998? Ok, we rollback. What happens when the rollback fails? Inconsistent state. Drivers can't async. Drivers can't run cleanup routines later.
14:24:49 What about when half the LB's have lit listeners and the other half don't; does the db say that N:N link is there yet or not? Shrink the allowed number of listeners and the window of pain gets smaller, but at operator scale, even a small window will get hit."
14:25:33 so that supports N:M
14:25:50 dougwig: we do it per object. what does 'create 1000 listeners' mean?
14:25:55 vivek-ebay: Actually that says N:M leads to the above scenario, which is not good.
14:25:58 Yes... I thought the whole point of LB was the other way round...
14:26:03 you create a listener and associate it to an LB
14:26:18 it either fails or succeeds for each association
14:26:21 N:M would mean a shared listener, which means we will have 1 instead of 1000
14:26:27 enikanorov_: if you have 1000 LB's that share one listener, and a backend that doesn't support M:N, then you have to create 1000 listeners as an "atomic" operation.
14:26:35 Sure, but what if you've got a listener that is already shared to 1000 LBs, and make a change to the listener?
14:26:42 Or one of the sub-objects.
14:26:52 Same scenario applies that dougwig is describing.
14:26:53 vivek-ebay: but on the backend there will most likely be 1000, depending on how the backend implements it
14:26:53 oh ok... you meant from the driver perspective.
14:27:16 vivek-ebay: Yes, I think so.
14:27:34 guys, we have a very similar issue with pools
14:27:39 i'd like to see a user that creates 1000 lbs that share the same listeners :)
14:27:48 me too
14:27:51 samuelbercovici: yes we do :(
14:28:01 so this needs to be solved anyway
14:28:03 he probably pays big money not to create listeners for each of his lbs :)
14:28:37 btw. it is also the case with the current implementation of healthmonitors
14:28:38 can we not just start with 1:M and add the N:M later if it is really needed?
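dougwig's mailing-list concern quoted above can be made concrete: a driver that fakes M:N on a backend supporting only 1:N must fan one logical create out into many physical creates, and when a rollback of a partial failure itself fails, the database and backend disagree. The sketch below is illustrative pseudo-driver code, not real Neutron driver API; function and exception names are assumptions.

```python
class BackendError(Exception):
    """Stand-in for whatever a real backend raises on failure."""

def attach_shared_listener(backend_create, backend_delete, lb_ids, listener):
    """Fake M:N on a 1:N backend by duplicating the listener per LB.

    If a create fails partway, undo the ones already made; if an undo
    also fails, we silently fall into the inconsistent state dougwig
    describes (drivers here have no async cleanup hook to recover).
    """
    created = []
    try:
        for lb_id in lb_ids:
            backend_create(lb_id, listener)  # may fail on, say, the 998th call
            created.append(lb_id)
    except BackendError:
        for lb_id in created:
            try:
                backend_delete(lb_id, listener)
            except BackendError:
                # Rollback failed: DB and backend now disagree.
                pass
        raise
```

Shrinking `lb_ids` from 1000 to 2 narrows the failure window, as dougwig notes, but does not close it.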
14:28:45 anyway, N:M is definitely more code than 1:N
14:28:57 blogan: +1
14:28:58 enikanorov_: +1
14:29:01 blogan: that would be a significant API change
14:29:09 the 1000 is just to make the problem obvious. it's still there with 2, it'll just be hit less often.
14:29:14 enikanorov_: correct
14:29:35 we are already going to do a significant change, so we might as well try to do the critical changes at once
14:29:51 +1 samuelbercovici
14:29:56 enikanorov_: i know and that's the problem with it
14:29:58 not having strong feelings either way, my concern would be "is N:M feasible for Juno", more than anything else...
14:30:08 this is why we either do 1:N "forever" or do M:N
14:30:16 It seems to me that sharing config is a nice concept but the real reason people want to "share" is because they can't copy configs. So copying seems to be the goal, not necessarily sharing. Sharing across 1000 lbs sounds nice but what happens when you want to start making changes across these lbs and realize you didn't want to share after all?
14:30:48 It's the same concept as side effects in coding
14:30:53 jorgem: you disassociate and create different listeners
14:31:02 correct, which is a pain
14:31:11 jorgem: +1
14:31:12 just like refactoring code with side effects
14:31:19 But you would have to do it anyway with 1:N
14:31:36 aburaschi: How so? It wouldn't be shared?
14:31:51 i don't see why it is a pain, that would depend on the UI
14:32:02 I mean, you would have to create all the relations from scratch
14:32:03 doing everything manually via CLI is a pain anyway
14:32:05 but if it is really the same application, then managing M different replicas, specifically in TLS and L7, becomes very inconvenient
14:32:06 if we use a UI we can also use 1:N
14:32:17 samuelbercovici: agree
14:32:27 enikanorov: My point is how much extra work is it really for a GUI or programmatic API user to "copy" configs?
14:32:34 samuelbercovici: another good point for having M:N
14:32:50 jorgem: copy only addresses the initial deploy
14:33:10 the key issue is the on-going operational aspects of it
14:33:25 samuelbercovici: That's actually a really good point.
14:33:40 so dougwig has voiced his problems with M:N, are there any others?
14:33:52 testing becomes difficult
14:33:57 now, the minute that we share pools, we have already made a dive into managing such cases
14:34:50 the l7 point is a good one. i'm not so fond of the "we're already in pain elsewhere" reasoning. :)
14:34:51 samuelbercovici: which is why I'd say we go with M:N or 1:N on both
14:35:05 1:N all the way :-)
14:35:33 blogan: 1:N on listeners:pools basically breaks L7 entirely, doesn't it?
14:35:41 Or are you suggesting creating duplicate pools?
14:35:55 sbalukoff: yep
14:36:02 for example, in M:N if you need to update the default TLS certificate for the application, you do it once; if you replicate, you do it M times
14:36:26 sbalukoff: why?
14:36:44 So, it's really about shifting the complexity of "interesting deployment failure scenarios" back to the user to deal with
14:36:48 oh, you mean not sharing pools?
14:37:11 samuelbercovici: Yes, that's what I meant.
14:37:31 So you're advocating for sharing of pools?
14:37:32 sbalukoff: well, this becomes even more problematic
14:37:40 because if we share pools, i don't know why we wouldn't share listeners
14:37:40 We are already 1:N on listener:pool? technically? because it's listener:l7policy:pool for the others
14:37:53 blogan: Good point.
14:38:31 rm_work: We're N:M because a given pool can be the target of many L7policies.
14:38:32 rm_work: i think it's N:M, because different listeners may use the same pool
14:38:53 And yes, because different listeners can use the same default_pool_id.
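The pool-sharing point just made can be sketched: a pool is referenced either directly as a listener's default_pool_id or indirectly as the target of an L7 policy, and either path makes listener:pool effectively N:M. This is a hypothetical illustration loosely following the blueprint's terminology; attribute names like `redirect_pool_id` are assumptions, not the spec's fields.

```python
pools = {"pool-1": {"protocol": "HTTP"}}

# Two listeners referencing the same pool by different paths.
listeners = {
    "listener-a": {"default_pool_id": "pool-1", "l7policies": []},
    "listener-b": {
        "default_pool_id": None,
        "l7policies": [
            {"rule": "path begins_with /api", "redirect_pool_id": "pool-1"},
        ],
    },
}

def listeners_using_pool(pool_id):
    """All listeners that reference a pool, directly or via an L7 policy."""
    out = []
    for name, listener in listeners.items():
        direct = listener["default_pool_id"] == pool_id
        via_l7 = any(
            policy["redirect_pool_id"] == pool_id
            for policy in listener["l7policies"]
        )
        if direct or via_l7:
            out.append(name)
    return sorted(out)
```

Any status-propagation scheme for shared pools has to walk both reference paths, which is the "key complexity" raised next.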
14:39:01 so if we need to handle shared pool provisioning status and operational status, a similar approach should be used for listeners
14:40:33 and as far as I understand this is the key complexity in the implementation
14:40:48 N:M makes sense for listener to pool
14:40:54 I honestly think that unintended side effects from sharing configs are more important to a lb user due to the mission critical nature of a lb. I actually advocate not sharing anything. Again, a GUI and a script can help me out in terms of setup and scale. I care that my lbs work and are isolated from human error.
14:40:54 So, not being able to share pools seems to me like a much bigger burden on the user than not being able to share listeners. But again, from the code side we're having to deal with the same kind of complexity around failure scenarios in any case.
14:41:05 samuelbercovici: i'd say this implementation item should be postponed
14:41:06 vivek-ebay: agree.
14:41:16 at least it should not be a part of the first patch
14:41:22 jorgem: +1
14:41:26 enikanorov_: agreed
14:41:56 I see the lb:listener relationship differently than the listener:pool relationship. the latter is much more commonly N:M.
14:42:15 I care less about steps and more about what the deliverable for Juno will be
14:42:21 yeah but i guess I don't count listener:l7policy:pool as M:N listener:pool, since l7policies aren't shared... so it's M:1 listener:pool, and 1:N:1 listener:l7policy:pool, no?
14:42:29 also we said a pool is a logical construct and gets provisioned when a listener or lb gets created
14:42:45 rm_work: why would l7 policies not be shared?
14:42:49 samuelbercovici: +1
14:42:55 are they? i just thought they weren't
14:43:02 maybe I am remembering the model wrong
14:43:11 rm_work: it is one of the options
14:43:25 rm_work: but let's not add this in for now
14:43:28 kk
14:43:33 i think if we start sharing everything, complexity and unexpected side effects increase exponentially
14:43:41 samuel's L7 policy model allows for sharing of L7 policies. Mine doesn't.
14:43:42 +1
14:43:47 jorgem: N:M is a superset of 1:M, so if a cloud operator wants, it can then forbid sharing
14:43:49 ah, that'd explain it sbalukoff
14:44:09 meant +1 on the Juno delivery as mentioned by samuelbercovici
14:44:42 The blueprint should describe the delivery and then the steps to get there
14:44:51 I'd also +1 that (as i said earlier, Juno delivery is what matters most if we don't have strong opinions in either direction)
14:45:19 we've been talking for 45 minutes. not usually a sign of not having strong opinions.
14:45:23 :)
14:45:25 heh
14:45:32 we can vote?
14:45:42 dougwig: Not in this group. XD
14:45:42 if delivery is what matters then doing 1:M at first with the intention of doing N:M later would make more sense, since there is no consensus
14:45:46 enikanorov: I'm trying to force it because even as a developer I still have bugs when dealing with "pass by reference" objects and such, even though I am aware of these things. Again, lbs are mission critical; I want to help keep humanity from shooting itself in the foot :)
14:46:31 If lbs weren't mission critical I would be indifferent
14:46:37 jorgem: agree, but that's philosophy. In C++ you can use const&
14:46:38 yeah, and I can see the support calls from users changing something and not realizing they broke all their lbs
14:46:50 would we be voting on B:L:P or 1:L:P or 1:L:1? or just two of those?
14:46:50 jorgem: if someone doesn't want to share then he might as well not share
14:46:51 +1 jorgem, at this point I don't want to see breaking one loadbalancer break all the others.
14:46:54 so you may shoot or you may be safe from that
14:46:58 that's flexibility
14:46:59 deciding on 1:N makes sharing impossible
14:47:18 Vote = good. Before the meeting ends = even better.
14:47:19 xgerman_ / dougwig: I'm afraid just voting might alienate people with valid opinions :( but i guess at some point it might be necessary?
14:47:28 enikanorov: I suppose. Your point makes sense, it is just a preference then.
14:47:40 enikanorov_: The point with const& is to avoid shoving the whole object on the call stack while still preserving the const.
14:47:58 aren't we coding in Python?
14:48:09 enikanorov: I guess I can tell our GUI programmers to not share configs, so I guess I'm fine then.
14:48:12 alright we're getting into the weeds
14:48:13 xgerman_: when someone says 'shooting in the leg'...
14:48:20 so snake-biting in our foot :-)
14:48:30 xgerman_: I was using it as a sort of metaphor lol
14:48:31 i think const& was a simile/metaphor/allegory
14:48:33 so should there be a vote, and is that vote the final decision?
14:48:56 would the vote need to be a supermajority? >_>
14:49:01 we need some mechanism of coming to a decision
14:49:23 i see more reasons for N:M
14:49:24 xgerman_: +1
14:49:28 flexibility is fine. We can control side effects via our GUI at Rackspace and tell API users to be careful.
14:49:37 jorgem: yep
14:49:39 blogan: so taking off the implementation complexity, you think that M:N should not be implemented?
14:50:00 we can "straw poll" if that's less contentious maybe?
14:50:16 well, if we want to hide it from users, it's hard to implement, ...
14:50:21 samuelbercovici: I'm leaning towards N:M but not strongly against either one. I think the flexibility is most important, but I am afraid of the complexity caused by sharing everything
14:50:49 I think we need to evaluate the implementation complexity off-line
14:50:54 I totally agree with blogan...
14:51:18 and with you too, samuelbercovici.
14:51:19 my biggest concern is not making things late or brittle for Juno, frankly.
14:51:22 are we in agreement that from a user perspective M:N is the correct approach?
14:51:43 I agree we need to make this available in Juno
14:51:49 I think the copy approach is easier to understand
14:52:00 blogan / samuelbercovici / dougwig / sballe_ +1
14:52:08 samuelbercovici: It's certainly the simpler approach for the users and provides the most flexibility.
14:52:20 N:M is fine ~+1 :)
14:52:27 I think we agree that M:N is the most flexible approach, and that there is some concern about getting that to work for Juno
14:52:29 jorgem: LOL!
14:52:42 i think we need to do what is right, even if just N:M is not the whole completed refactoring
14:52:44 so let's give it a shot, to pinpoint the key challenges in doing it and evaluate the complexity, so we have a better, measured way to decide
14:53:11 N:M for listener-to-pool +1
14:53:16 okay so back to N:M then?
14:53:26 For me, even if there isn't a "final decision", a sense of how many people want what, and how strongly they feel, would be nice before this ends in 7 min. How can we quickly make a kinda vote thing happen?
14:53:28 sounds like it
14:53:32 blogan: More or less.
14:53:41 We're all sort of lukewarm about it.
14:53:44 it seems.
14:53:49 N:M for listener to pool +1, N:M for lb:listener -1 :)
14:53:56 i'm still 1:N, but not enough to really register a negative vote about it.
14:53:57 I still think going with 1:M to start would be better, even though it is a major change later
14:54:12 I like 1:M more
14:54:20 me too +1
14:54:22 +1 for both
14:54:29 I like M:N but worry about the Juno release
14:54:34 samuelbercovici: Not fair ;-)
14:54:43 i like sam's idea of figuring out the implementation issues offline
14:54:53 blogan: +1
14:55:04 blogan / samuelbercovici +1 on scoping
14:55:18 yeah, we don't have time in the next 5 minutes for a code review :P
14:55:20 dare we try to move on to another topic?
14:55:30 sure, we have 5
14:55:34 so, decision? give it till next week to quantify the effort and then get a final decision?
14:55:41 blogan: i think we covered most of them
14:55:42 yep
14:55:45 samuelbercovici: +1
14:55:46 samuelbercovici: sounds like it to me
14:55:47 except for the status of the pool
14:55:50 where is the offline meeting? who is taking point on that?
14:55:55 What about Epic/Umbrella?
14:55:57 enikanorov: health monitor as well
14:56:04 that's on the agenda
14:56:13 any chance of getting an object model bp for l7/ssl, so we can evaluate the N:M in that context better?
14:56:21 blogan: IMO this is also better postponed
14:56:26 I need to drop off. will read the scrollback later. Bye.
14:56:53 enikanorov: that's fine, I think this is a good stopping point anyway
14:57:10 action items?
14:57:26 research scope of implementation issues with N:M
14:57:36 #action research scope of implementation issues with N:M
14:57:39 sam to send to the ML initial bullets of things to address in M:N
14:57:51 space to do that? ML?
14:57:58 ML
14:58:01 aburaschi: yes
14:58:05 ok, that was a late Enter...
14:58:08 we can discuss some more in #neutron-lbaas as well
14:58:08 :)
14:58:17 #action sam to send to the ML initial bullets of things to address in M:N
14:58:17 #action research scope of implementation issues with N:M for both lb:listener and listener:pool relationships
14:58:18 Yep.
14:58:31 blogan: true, but it would be nice if discussions get paraphrased on the ML as well
14:58:31 I'd like to see a bot get added to #neutron-lbaas
14:58:44 So that people who can't be online all the time have a place to look at conversations.
14:58:51 sbalukoff: yup
14:58:56 +1
14:58:58 +1
14:58:58 can we add neutron-lbaas to eavesdrop? who's the person to contact about that?
14:59:00 +1
14:59:00 +1
14:59:18 i don't know, maybe mestery knows
14:59:18 blogan: guess we're short on time to discuss the service implementation, so we can go back to the ML for that
14:59:21 +1
14:59:22 +1
14:59:29 +1
14:59:34 VijayB_: join #neutron-lbaas
14:59:38 i++
14:59:48 blogan: ok
14:59:49 Haha
14:59:50 +1
14:59:51 ok, we're out of time
14:59:54 Ok, thanks y'all!
14:59:58 thanks
14:59:58 bye
14:59:59 thanks everyone for joining
14:59:59 Thank you all!
15:00:00 thanks!
15:00:05 #endmeeting