sbalukoff | Maybe have a pros and cons list for each alternative? | 00:00 |
---|---|---|
sbalukoff | I would *hope* that quells too much unnecessary back-and-forth by people who won't be 100% satisfied with this solution, but would choose it as the lesser of the evils. | 00:00 |
sbalukoff | (Because there certainly isn't a panacea for this problem.) | 00:01 |
*** vivek-ebay has quit IRC | 00:01 | |
dougwig | pros/cons would get repetitive, let me try some wordy words and see what comes out. | 00:02 |
sbalukoff | Sounds good. | 00:03 |
sbalukoff | I just try to be pragmatic about this kind of thing: Yes, it isn't a perfect solution, but STFU unless you can come up with something better. | 00:03 |
dougwig | i have yet to see the overall openstack community adopt that attitude. :) | 00:03 |
sbalukoff | Heh! True enough. Maybe we need a few more assholes who are willing to ram-rod stuff through when we need practical solutions. | 00:05 |
sbalukoff | (ie. tyrants) | 00:06 |
blogan | sbalukoff's new alias RamRod | 00:07 |
dougwig | uhh.. not touching that. | 00:09 |
sbalukoff | Haha! | 00:14 |
*** openstackgerrit has quit IRC | 00:18 | |
*** openstackgerrit has joined #openstack-lbaas | 00:19 | |
sbalukoff | Speaking of ramrodding something through: dougwig or xgerman, have time to take a look at this? https://review.openstack.org/#/c/121233/ | 00:20 |
xgerman | I need to do some more research | 00:22 |
xgerman | how microversions and the other projects do it | 00:22 |
xgerman | I have seen both decimal and non-decimal in openstack, but the plan was to do the "right" thing | 00:22 |
xgerman | reading now: https://wiki.openstack.org/wiki/Nova/ProposalForAPIMicroVersions | 00:23 |
*** crc32 has quit IRC | 00:31 | |
*** mlavalle has quit IRC | 00:35 | |
*** mlavalle has joined #openstack-lbaas | 00:35 | |
dougwig | sbalukoff: how about this? | 00:41 |
dougwig | https://www.irccloud.com/pastebin/OIZccGJu | 00:41 |
sbalukoff | are not being addressed by the OpenStack dev community yet | 00:42 |
sbalukoff | Otherwise, the wording looks good. | 00:42 |
*** mlavalle has quit IRC | 00:45 | |
dougwig | sbalukoff: new rev pushed | 00:47 |
sbalukoff | Ok, looking. | 00:48 |
xgerman | already -1'd - and dare you to rebase while I comment | 00:59 |
*** xgerman has quit IRC | 01:10 | |
*** fnaval has quit IRC | 01:58 | |
*** fnaval has joined #openstack-lbaas | 02:14 | |
*** xgerman has joined #openstack-lbaas | 03:28 | |
*** xgerman has quit IRC | 03:30 | |
*** xgerman_ has joined #openstack-lbaas | 03:30 | |
*** rm_you has joined #openstack-lbaas | 03:33 | |
*** rm_you has joined #openstack-lbaas | 03:33 | |
*** ptoohill_ has joined #openstack-lbaas | 03:35 | |
*** xgerman_ has quit IRC | 03:53 | |
rm_you | sbalukoff: replied again | 04:30 |
sbalukoff | Uh-oh. | 04:31 |
*** vivek-ebay has joined #openstack-lbaas | 04:33 | |
sbalukoff | Yo dawg, I hear you like comments. So I put a comment on your comment and commented on it. | 04:38 |
rm_you | i sometimes wonder if LBaaS is the most sarcastic/memetastic OS group :P | 04:41 |
*** vivek-ebay has quit IRC | 04:49 | |
*** fnaval has quit IRC | 05:18 | |
*** fnaval has joined #openstack-lbaas | 05:58 | |
*** fnaval has quit IRC | 06:21 | |
rm_you | sbalukoff: did you read that ML email about FLIPs? | 06:24 |
*** ptoohill_ has quit IRC | 06:37 | |
*** ptoohill_ has joined #openstack-lbaas | 06:39 | |
*** ptoohill_ has quit IRC | 06:44 | |
*** openstackgerrit has quit IRC | 06:49 | |
*** openstackgerrit has joined #openstack-lbaas | 06:49 | |
*** amotoki is now known as amotoki__away | 06:55 | |
*** enikanorov_ has joined #openstack-lbaas | 07:04 | |
*** enikanorov__ has quit IRC | 07:07 | |
*** cipcosma has joined #openstack-lbaas | 07:35 | |
*** vivek-ebay has joined #openstack-lbaas | 07:49 | |
*** vivek-ebay has quit IRC | 07:54 | |
rm_you | blogan: BOOM. Comments. | 08:00 |
rm_you | oh actually I guess that's on johnsom's review <_< | 08:00 |
*** woodster_ has quit IRC | 08:30 | |
*** Despruk has joined #openstack-lbaas | 09:15 | |
Despruk | Hello. I have a bug in the lbaas haproxy driver when using session_persistence type HTTP_COOKIE. In the haproxy conf, the line for pool members is generated as "server 62d96c09-0316-43ae-a831-8dd2b22aa983 10.0.0.10:12345 weight 1 cookie 0", adding "cookie <number>" for each pool member. However, when adding a new server to the members pool, the new haproxy conf entry is added as | 09:30 |
Despruk | "cookie 0" and the previous "cookie 0" server is now "cookie 1". This causes old HTTP sessions with cookie 0 to go to the new pool member, so sessions are broken. | 09:30 |
Despruk | is this a known bug? | 09:32 |
Despruk | it appears to be coming from "neutron/services/loadbalancer/drivers/haproxy/cfg.py" line 145. | 09:35 |
Despruk | server += ' cookie %d' % config['members'].index(member) | 09:35 |
Despruk | is there some reason why a pool index is used here instead of something unique, like member id? | 09:36 |
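For context, a minimal sketch of the fix Despruk is hinting at — keying the haproxy `cookie` value on the member's UUID instead of its position in the pool — might look like the following. The helper name and member field names are assumptions based on the snippet quoted above, not the actual cfg.py code.

```python
# Hypothetical sketch of the suggested change in
# neutron/services/loadbalancer/drivers/haproxy/cfg.py: base the haproxy
# cookie value on something stable (the member id) rather than the member's
# index in the pool, so existing sticky sessions keep routing to the same
# member when members are added or removed.

def _build_server_line(member, config, session_persistence_type):
    # Field names (id, address, protocol_port, weight) are assumptions.
    server = 'server %s %s:%s weight %s' % (
        member['id'], member['address'],
        member['protocol_port'], member['weight'])
    if session_persistence_type == 'HTTP_COOKIE':
        # Current behavior (broken): the cookie value shifts whenever pool
        # membership changes, breaking existing sticky sessions.
        #   server += ' cookie %d' % config['members'].index(member)
        # Proposed behavior: use the member's immutable UUID, so the cookie
        # value never changes for a given member.
        server += ' cookie %s' % member['id']
    return server
```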
*** amotoki__away is now known as amotoki | 10:22 | |
*** SumitNaiksatam has quit IRC | 10:25 | |
*** f13o has joined #openstack-lbaas | 10:31 | |
*** SumitNaiksatam has joined #openstack-lbaas | 10:34 | |
*** amotoki is now known as amotoki__away | 11:54 | |
*** mikedillion has joined #openstack-lbaas | 12:55 | |
*** woodster_ has joined #openstack-lbaas | 13:00 | |
*** mikedillion has quit IRC | 13:19 | |
*** kbyrne has quit IRC | 14:19 | |
*** kbyrne has joined #openstack-lbaas | 14:21 | |
*** vivek-ebay has joined #openstack-lbaas | 14:40 | |
*** mikedillion has joined #openstack-lbaas | 14:45 | |
*** fnaval has joined #openstack-lbaas | 14:45 | |
*** TrevorV_ has joined #openstack-lbaas | 15:04 | |
*** mestery has quit IRC | 15:42 | |
*** ptoohill_ has joined #openstack-lbaas | 15:54 | |
*** ptoohill_ has quit IRC | 15:56 | |
*** fnaval has quit IRC | 16:18 | |
*** xgerman has joined #openstack-lbaas | 16:21 | |
*** mestery has joined #openstack-lbaas | 16:21 | |
*** mestery has quit IRC | 16:28 | |
*** mestery has joined #openstack-lbaas | 16:28 | |
dougwig | Despruk: can you file that bug here https://bugs.launchpad.net/neutron/+filebug, and then send me a link to it? | 16:29 |
*** fnaval has joined #openstack-lbaas | 16:45 | |
blogan | dougwig: does the neutron meetup start on monday or tuesday? | 16:54 |
dougwig | monday - wednesday. | 16:54 |
*** barclaac has joined #openstack-lbaas | 16:55 | |
*** mlavalle has joined #openstack-lbaas | 16:59 | |
*** sbfox has joined #openstack-lbaas | 17:10 | |
*** sbfox has quit IRC | 17:12 | |
*** sbfox has joined #openstack-lbaas | 17:12 | |
*** sbfox has quit IRC | 17:16 | |
*** sbfox has joined #openstack-lbaas | 17:17 | |
*** vivek-ebay has quit IRC | 17:19 | |
*** ajmiller has quit IRC | 17:19 | |
*** jschwarz has joined #openstack-lbaas | 17:22 | |
*** mikedillion has quit IRC | 17:24 | |
*** mikedillion has joined #openstack-lbaas | 17:25 | |
*** jschwarz has quit IRC | 17:28 | |
*** TrevorV_ has quit IRC | 17:30 | |
*** mestery has quit IRC | 17:35 | |
*** vivek-ebay has joined #openstack-lbaas | 17:38 | |
*** mikedillion has quit IRC | 17:40 | |
*** TrevorV_ has joined #openstack-lbaas | 17:41 | |
*** ptoohill_ has joined #openstack-lbaas | 17:41 | |
*** ptoohill_ has quit IRC | 17:41 | |
*** ptoohill_ has joined #openstack-lbaas | 17:42 | |
dougwig | sbalukoff, blogan, xgerman: does haproxy support M:N as suggested in that thread? | 18:02 |
xgerman | haproxy can share pools among listeners (if you install it all in one process - not like sbalukoff suggested) | 18:04 |
xgerman | we just can't share listeners outside the same LB | 18:05 |
sbalukoff | Except that nobody does that between listeners in reality. ;) | 18:05 |
xgerman | well, the legitimate use case is to cut down on health monitoring traffic | 18:05 |
xgerman | but we are in the cloud; they can just buy more boxes | 18:06 |
sbalukoff | Sure, but doing that sharing buys you essentially nothing because nobody does that "legitimate use case" | 18:06 |
sbalukoff | And it means that you don't have fault separation between listeners | 18:06 |
sbalukoff | Anyway, we've already debated this, and these aren't new arguments. :P | 18:07 |
dougwig | might be interesting to see if *any* of the backends actually support sam's model, since the "lots of events" problem gets even worse as "lots of events AND lots of dup" if the model isn't native. | 18:07 |
sbalukoff | dougwig: What does your question relate to? | 18:07 |
dougwig | my original objection was around implementation complexity, which i'm not sure has gone away, even moving operational status into the root. | 18:08 |
sbalukoff | Well, one of the feature requests we have going for the haproxy guys (and I'll be poking Baptiste about at the hack-a-thon) is the ability to take health-check information from an outside source. | 18:08 |
xgerman | dougwig, that's my worry, too | 18:08 |
sbalukoff | This is going to be important once we're doing ACTIVE-ACTIVE | 18:08 |
dougwig | a10 supports more of this model than haproxy, and i still don't relish the failure cases. | 18:08 |
xgerman | yeah, it doesn't strike me as something we should support | 18:09 |
sbalukoff | For what it's worth, I think we still have complexity there, as you've suspected... not sharing means that more of that complexity is exposed to the user. | 18:09 |
sbalukoff | Having said this... | 18:09 |
*** mestery has joined #openstack-lbaas | 18:09 | |
sbalukoff | I think the case where entities are shared is actually the exception | 18:09 |
xgerman | +1 | 18:09 |
sbalukoff | Except listeners sharing a single load balancer | 18:09 |
sbalukoff | Oh-- and pools being shared within a single listener (via L7 rules) is also fairly common. | 18:10 |
sbalukoff | But, again, I don't think we're really saying we can't do that... :/ | 18:10 |
sbalukoff | But pools being shared outside a single listener-- nobody does that. | 18:11 |
sbalukoff | Members being shared between pools-- nobody does that | 18:11 |
sbalukoff | Listeners being shared between loadbalancers-- again, nobody does that. | 18:11 |
xgerman | And to be honest we'd like to have this thing in production sooner rather than later, so i am inclined to veto anything which causes more work/delays | 18:12 |
sbalukoff | xgerman: +1 | 18:12 |
dougwig | vip -> vport at 1:M is obviously a minimum (LB -> listener, in our parlance.) | 18:13 |
sbalukoff | (And by "nobody" I mean "almost nobody-- less than 1% of deployments") | 18:13 |
sbalukoff | dougwig: Agreed. | 18:13 |
sbalukoff | And again, when L7 rules actually happen, then having a pool used multiple times within a single listener is also "perfectly legitimate" | 18:14 |
sbalukoff | And will reduce health monitoring traffic. | 18:14 |
dougwig | looks like a10 is 1:M on pool -> listener, and M:N on l7 <-> listener (attached separately), so i'll be dup'ing pools in that scenario. | 18:18 |
xgerman | yeah, that feels more and more like a whiteboarding exercise | 18:19 |
xgerman | dougwig, something completely different: where do we file bugs against LBaaS V2 | 18:19 |
xgerman | ? | 18:19 |
sbalukoff | "Premature optimization"-- solving problems that don't actually come up enough to worry about them in the real world. | 18:19 |
dougwig | xgerman: neutron | 18:20 |
xgerman | link? | 18:20 |
dougwig | https://bugs.launchpad.net/neutron/+filebug | 18:20 |
dougwig | direct assign them to someone on our team, or they might languish. | 18:20 |
xgerman | thanks -- | 18:20 |
dougwig | (feel free to assign them to me,) | 18:20 |
dougwig | ((for triage)) | 18:21 |
xgerman | ok, we (=HP) can also fix it | 18:21 |
xgerman | but will assign to you | 18:21 |
xgerman | https://bugs.launchpad.net/neutron/+bug/1399749 | 18:26 |
xgerman | dougwig - I can't assign to you (the system doesn't let me) | 18:26 |
dougwig | We are but peons | 18:35 |
*** vivek-ebay has quit IRC | 18:39 | |
*** SumitNaiksatam has quit IRC | 18:39 | |
xgerman | ok, they let me assign to myself which I did | 18:41 |
xgerman | and subscribe you... | 18:41 |
dougwig | Ok | 18:41 |
dougwig | Ty | 18:41 |
xgerman | Doug Wigley Jr --> that's you? | 18:42 |
xgerman | they had another doug wigley Jr -- which made it confusing | 18:42 |
*** vivek-ebay has joined #openstack-lbaas | 18:56 | |
dougwig | really? wow. no, i'm "Doug Wiegley" (or Douglas) | 19:03 |
*** SumitNaiksatam has joined #openstack-lbaas | 19:09 | |
*** mwang2 has quit IRC | 19:13 | |
*** mikedillion has joined #openstack-lbaas | 19:17 | |
*** sbfox has quit IRC | 19:27 | |
*** vivek-ebay has quit IRC | 19:29 | |
*** Youcef has quit IRC | 19:57 | |
openstackgerrit | Trevor Vardeman proposed stackforge/octavia: Creation of Octavia API Documentation https://review.openstack.org/136499 | 19:59 |
*** xgerman has quit IRC | 20:09 | |
blogan | so just read through the backlog here | 20:13 |
blogan | is the suggestion that we support sharing of pools within a single listener context vs no sharing at all? | 20:14 |
dougwig | i think the suggestion is still "do what we have now, let's ship before we shoot ourselves in the head again". | 20:14 |
sbalukoff | dougwig: +1 | 20:15 |
blogan | well the only problem i can foresee with that is the status changes from what they are now to what they might be would be a backwards incompatible change | 20:15 |
sbalukoff | blogan: I agree that sharing pools within the context of a single listener makes sense when L7 rules are introduced. But I think even more strongly that we need to get something out. | 20:16 |
sballe | dougwig +1 | 20:16 |
sballe | sbalukoff: +1 | 20:16 |
blogan | i dont dispute that, but my concern again is releasing this API and then immediately having to introduce another version that supports the status changes | 20:16 |
blogan | which may or may not include sharing, bc the status changes are something that will need to be addressed with or without sharing | 20:17 |
sbalukoff | blogan: Does the API we're introducing include L7? | 20:17 |
sbalukoff | Er... that we're releasing, I mean. | 20:17 |
blogan | it will | 20:17 |
blogan | not immediately it is a separate blueprint | 20:18 |
sbalukoff | Ok, then we should figure this out. | 20:18 |
blogan | but im pretty sure we would like it to be in kilo | 20:18 |
sbalukoff | When L7 is introduced that will be a backward incompatible change in many ways, IMO... | 20:18 |
sbalukoff | I agree. | 20:18 |
sbalukoff | And I'm saying, I guess, that when L7 is introduced we should allow sharing of pools in the context of a single listener. | 20:19 |
sbalukoff | *shrug* | 20:19 |
*** ajmiller has joined #openstack-lbaas | 20:19 | |
blogan | well we can punt on sharing of pools even then, and add it in as another feature when it's ready; that wouldn't be backwards incompatible | 20:19 |
sbalukoff | But I'd like to see LBaaSv2 out with or without L7. | 20:19 |
blogan | i just think the status changes will be backwards incompatible | 20:19 |
sbalukoff | I'm not sure that's true. | 20:19 |
blogan | especially if loadbalancer and listener have two status fields | 20:20 |
sbalukoff | The examples you gave on the ML discussion didn't include L7 | 20:20 |
sbalukoff | That is to say.... | 20:20 |
sbalukoff | With L7, we will need to support 1:N for listeners to pools. | 20:20 |
*** jorgem has joined #openstack-lbaas | 20:21 | |
sbalukoff | The question is does that "N" include pools which are shared by multiple L7 policies or not. | 20:21 |
sbalukoff | I don't see a reason not to allow them to be shared in this context. | 20:21 |
dougwig | Duke Nukem Forever, LBaaS pack | 20:21 |
sbalukoff | The status information will essentially relay the same information. | 20:21 |
sbalukoff | dougwig: +1 | 20:21 |
sbalukoff | Yeah, I'd rather see L7 delayed if we can't resolve this than LBaaSv2 delayed for any reason at all. | 20:22 |
dougwig | i think the status tree is additive, not incompatible, unless i'm missing something. | 20:22 |
sbalukoff | dougwig: That's how I'm seeing it, too. | 20:23 |
blogan | well every entity currently has a status, if we did the status tree we'd have to remove that | 20:28 |
blogan | bc now /pools are logical | 20:28 |
blogan | status directly on the pool entity wouldn't make sense | 20:29 |
dougwig | is there a reason those entities couldn't continue to have a status? i mean, would a member be up for one pool and down for another? | 20:29 |
sbalukoff | dougwig: That would be strange. It's not beyond the realm of possibility, but it would be strange. | 20:30 |
sbalukoff | dougwig: That could happen if the pools have different health checks and the check succeeds for one and not the other. | 20:30 |
blogan | well the reason would be the status of a pool shared by listeners could be different between the listeners, thus the one /pools/{pool_id} entity having a status looks like a global status | 20:30 |
sbalukoff | blogan: I'm not advocating sharing pools between listeners. | 20:31 |
sbalukoff | Partially for the reason you've just elucidated. | 20:31 |
dougwig | we have to deal with possible breaking enhancements while moving forward. and is the status tree big enough that it'd force a protocol rev, or do we do it non-breaking and say, hey, use these status trees now, the objects will still have a status, but it may not be the *most* correct status it could be in the context of *your* LB. i can see that being | 20:32 |
dougwig | technically feasible, and not too horrible. which, to me, means that it's not worth revving the protocol for. which means it's not worth holding for. | 20:32 |
dougwig | that was just short of a ramble, but i think it conveyed my thinking. | 20:33 |
sbalukoff | dougwig: I'm all for not doing anything to create more delays. | 20:33 |
dougwig | i'm just trying to get at whether there's a feasible backwards compatible answer that lets blogan sleep at night. | 20:35 |
dougwig | which is a pretty high bar, actually. | 20:35 |
sbalukoff | Haha | 20:35 |
sbalukoff | I personally don't understand how blogan sleeps. | 20:36 |
dougwig | i'm not sure that he does. | 20:36 |
sbalukoff | I suspect he doesn't, given how late I've seen him online here. | 20:36 |
blogan | i don't know what yall are talking about, i see you guys on here later than me | 20:37 |
blogan | and dougwig gets on earlier! | 20:37 |
blogan | anyway, if we do sharing of pools, and one backend doesn't support sharing of pools, then it's creating a different pool each time it is shared, and thus each will have a different status, which can't be represented by the /pools/{pool_id} | 20:38 |
blogan | i fear i am not making sense, so i am sorry | 20:38 |
sbalukoff | It sounds like you're worried about implementation details slipping in there. | 20:39 |
sbalukoff | As much as I despise that phrase. | 20:39 |
sbalukoff | In that case, the driver writer needs to do what she thinks is best in this case: | 20:39 |
sbalukoff | If it were me writing it, I would probably just return the status of the first one of these groups / pools that is "shared" with the assumption that the rest of them probably have the same status. | 20:40 |
sbalukoff | Which is almost always going to be correct. | 20:40 |
blogan | okay so you'd be fine with keeping the status on the root pool object? | 20:40 |
sbalukoff | (Because they'd have the same connectivity and same health checks.) | 20:40 |
sbalukoff | blogan: Yes. | 20:40 |
sbalukoff | We're going to have to make that choice as well when we do active-active on Octavia. | 20:41 |
sbalukoff | *shrug* | 20:41 |
sbalukoff | If it turns out to cause problems to make that kind of assumption, then we will have to revisit the design. | 20:41 |
sbalukoff | But I don't think it'll cause problems at this point. | 20:41 |
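A minimal sketch of the driver-side compromise sbalukoff describes above — reporting the first backend copy's status as the status of the logical shared pool — assuming the driver holds a simple list of per-backend pool objects. Names here are illustrative only, not actual driver code.

```python
# Sketch: a backend that can't share a pool natively creates one copy per
# place the pool is referenced. The driver reports the first copy's status
# as the status of the logical /pools/{pool_id} resource, on the assumption
# that all copies have the same connectivity and health checks.
def logical_pool_status(backend_pools):
    if not backend_pools:
        return 'OFFLINE'
    # All copies are configured identically, so the first one's status is
    # taken as representative of the shared pool.
    return backend_pools[0].status
```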
dougwig | fyi, neutron-lbaas split is scheduled for 10am MDT, monday morning. expect the repo to be in a funky state for a day or two. | 20:42 |
sbalukoff | In fact-- if we can get haproxy to take health check information from an external source (to reduce the number of health checks hitting back-end servers in big active-active deployments)... | 20:42 |
sbalukoff | I'm kind of counting on that working. | 20:42 |
sbalukoff | dougwig: Nice! | 20:42 |
sbalukoff | Please keep us posted! | 20:42 |
blogan | sbalukoff: I'm not objecting to moving forward with what we have, im just voicing concerns I have about possible issues down the immediate road | 20:43 |
sbalukoff | That's fine. | 20:43 |
sbalukoff | And I think it's worthwhile to have this kind of discussion. | 20:43 |
blogan | so I think if we plan on just allowing pools to be shared within the scope of a single listener, that will alleviate many of my concerns | 20:44 |
*** crc32 has joined #openstack-lbaas | 20:44 | |
blogan | though we do still plan on having separate statuses, provisioning and operating | 20:44 |
sbalukoff | And I think it'll alleviate many of Samuel's concerns, too. | 20:44 |
sbalukoff | Yep, that's a good idea. | 20:44 |
blogan | so I assume the pool's status field will be its operational status | 20:47 |
sbalukoff | Does provisional status make sense for a pool? | 20:48 |
sbalukoff | (Unless it's connected, ultimately, to a loadbalancer?) | 20:48 |
blogan | in some backends yes | 20:48 |
blogan | but implementation detail! | 20:49 |
sbalukoff | Haha! | 20:49 |
sbalukoff | I don't think it'll break anything to have different back-ends report different things for provisional status depending on what they have to do. | 20:49 |
blogan | i think we can say pools only have operational status, and the status field on the pool is the operational status | 20:49 |
sbalukoff | For all practical purposes, we only care when 1) it's provisioned 2) whether it's operationally available or not. | 20:50 |
sbalukoff | Yeah, in my mind provisional status has less meaning for a pool. | 20:50 |
sbalukoff | But then, I am coming from an haproxy-centric perspective. | 20:50 |
blogan | well i would prefer if the provisioning status was reflected in the listener's provisioning status | 20:50 |
blogan | so if i change a pool, its parent listener's provisioning status changes to PENDING_UPDATE | 20:51 |
sbalukoff | If other drivers will make meaningful use of the provisional status on a pool, I don't think that's something that hurts anything else. | 20:51 |
blogan | or if i add a pool as well | 20:51 |
sbalukoff | My thought is that provisional status should always cascade from parent objects to child objects. | 20:51 |
sbalukoff | But again, that is something that apparently will vary from implementation to implementation. | 20:52 |
blogan | so in my scenario the listener's status remains unchanged but the pool's status goes into PENDING_UPDATE (and if members had a provisional status, those would go into PENDING_UPDATE too?) | 20:53 |
*** vivek-ebay has joined #openstack-lbaas | 20:53 | |
sbalukoff | (What's frustrating about this is that that PENDING_UPDATE state should be about the shortest lived state of all.) | 20:53 |
sbalukoff | blogan: What if we leave it up to the driver to modify provisional status as they see fit? | 20:54 |
sbalukoff | I mean... what's the point of provisional status anyway? To block other updates from happening while a change is underway, right? | 20:55 |
blogan | that leads to inconsistent behavior | 20:55 |
sbalukoff | Is there any other use for this field? | 20:55 |
blogan | other than user feedback, no | 20:55 |
sbalukoff | So... when a change happens, drivers should change any entities which can't tolerate parallel changes to 'PENDING_UPDATE' until the update is complete. | 20:56 |
sbalukoff | And yes, different back-ends will have their quirks. | 20:57 |
sbalukoff | I don't know that there's a good way around that. | 20:57 |
sbalukoff | But again... the hope is that most of these kinds of changes happen quickly, so unless you're firing update requests out *really fast* you probably won't notice the differences. | 20:57 |
*** jorgem has quit IRC | 20:57 | |
sbalukoff | And if you are firing out updates that quickly, then it's a *good thing* to block where it's appropriate so that we don't mess up the back-end. | 20:58 |
blogan | yes and honestly update requests are rare with load balancers (relatively speaking) | 20:58 |
blogan | at least from our experience | 20:59 |
sbalukoff | (It's good to have this discussion of handling status, by the way-- we've avoided it for months, and we need to have it.) | 20:59 |
sbalukoff | Our experience too. | 20:59 |
blogan | so my preference is if any child of a load balancer is updated, it should put the load balancer in PENDING_UPDATE | 20:59 |
sbalukoff | Probably less than 10% of deployments ever see a change after they're deployed. And those that do, it's usually something really minor (adding and removing members) | 20:59 |
blogan | which is the lowest common denominator | 20:59 |
sbalukoff | I'm OK with that. | 21:00 |
blogan | omg | 21:00 |
sbalukoff | I'd probably target the listener, but again, that's haproxy-specific thinking there. | 21:00 |
sbalukoff | HAHA! | 21:00 |
blogan | well id target both, but that would just be for user feedback | 21:00 |
blogan | and i know why you'd do it on the listener, but doing it on the load balancer should work for all backends | 21:01 |
sbalukoff | We probably need to write up all these musings into a formal recommendation, as far as how to handle the various statuses. | 21:01 |
sbalukoff | And get feedback from the larger community on it. | 21:01 |
blogan | umm this is in that reproposed lbaas v2 spec, which you +1'ed | 21:02 |
blogan | lol | 21:02 |
blogan | not any of the explanations of course though | 21:03 |
dougwig | weren't we splitting the status field? | 21:03 |
*** mikedillion has quit IRC | 21:03 | |
blogan | well thats another thing to discuss | 21:03 |
blogan | if load balancer and listener should have provisioning status and operational status, how do we do that | 21:06 |
blogan | just add the fields | 21:06 |
blogan | ? | 21:06 |
sbalukoff | Sure. | 21:06 |
sbalukoff | Any reason not to? | 21:06 |
blogan | adding immediately into lbaas v2, i dont see a reason | 21:07 |
blogan | if we waited until later, i would | 21:07 |
sbalukoff | Do eet. | 21:07 |
blogan | i guess it could be a more complex migration from v1 to v2 | 21:07 |
dougwig | let's just spec it as two and charge ahead. | 21:07 |
blogan | but that would have to happen anyway if we waited | 21:07 |
blogan | sounds good to me | 21:08 |
blogan | i'm going to add to the spec | 21:08 |
blogan | yall look at it closely | 21:08 |
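As a rough illustration of the split being specced here, a load balancer and listener carrying both fields might be represented as below. The exact field names (provisioning_status, operating_status) and values are assumptions pending the updated LBaaS v2 spec.

```python
# Illustrative only: one possible API representation with the two status
# fields discussed above. The spec blogan is updating is the source of truth.
load_balancer = {
    "id": "lb-uuid",
    "provisioning_status": "PENDING_UPDATE",  # deploy/config-push state
    "operating_status": "ONLINE",             # health / traffic-serving state
    "listeners": [
        {
            "id": "listener-uuid",
            "provisioning_status": "ACTIVE",
            "operating_status": "ONLINE",
        },
    ],
}
```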
*** jorgem has joined #openstack-lbaas | 21:08 | |
sbalukoff | Ok. | 21:08 |
blogan | sbalukoff: since you're here, would you agree with my comments that how the vip is created and whether the user owns the vip or octavia does is up to the network driver implementation? | 21:09 |
blogan | and by that i mean, implementation detail? | 21:09 |
sbalukoff | Well... I think German's concern that if a user loses a specific VIP that causes them unnecessary grief is completely valid. | 21:10 |
blogan | oh i dont dispute that either | 21:10 |
sbalukoff | So, whatever we do, and whoever owns the VIP, we need to make sure the user doesn't lose control of it. | 21:10 |
sbalukoff | It's easy to "guarantee" that if the user owns the VIP | 21:10 |
blogan | but if we have to create our own custom network driver and we don't want to allow that, our driver won't allow that | 21:10 |
sbalukoff | But that makes implementing things on Neutron a lot harder, I seem to recall. | 21:11 |
sbalukoff | Right. | 21:11 |
sbalukoff | Ok, if you put it that way (and explain it that way), then yes-- who owns the VIP is an implementation detail. | 21:11 |
sbalukoff | Again, I would love it if we had an actual IPAM where users could "own" public IPs in a more meaningful way. | 21:12 |
blogan | ok good, and that has a caveat that the code outside the network driver can't make assumptions on who owns the vip | 21:12 |
sbalukoff | Sure. | 21:12 |
blogan | though it sounds like in German's case, octavia will still own the VIP, but they will allow users to point a floating ip to that vip, or octavia can do that for them giving them ownership of that flip | 21:13 |
sbalukoff | I think you're correct. | 21:13 |
blogan | though im not sure if users can point a flip they own to a port they do not own | 21:13 |
sbalukoff | Users probably can't. But a service account ought to be able to. | 21:14 |
rm_work | Yeah, that was what I explained in my comment | 21:14 |
blogan | yeah so it would then fall to octavia to do that | 21:14 |
sbalukoff | (I think it's essentially got the same as admin privileges there.) | 21:14 |
blogan | in which case if the user ever disassociated that flip from the port, they wouldn't be able to reassign it to that same port again | 21:14 |
sbalukoff | rm_work: You're assuming I read what you write. ;) | 21:14 |
rm_work | whatever the "scaling IP" is in your deployment, needs to be handled by octavia | 21:14 |
sbalukoff | Actually, I do, but I am a little behind. | 21:14 |
blogan | no one should ever read what he writes | 21:15 |
sbalukoff | And I was rather brain-fried after all the reviewing yesterday. | 21:15 |
rm_work | in our case, it just happens that the scaling IP *is* the FLIP | 21:15 |
rm_work | but that's an implementation detail | 21:15 |
sbalukoff | rm_work: Are the scaling IP and the public IP one and the same in all deployments? | 21:16 |
rm_work | sballe: no | 21:16 |
rm_work | err | 21:16 |
rm_work | sbalukoff: no | 21:16 |
rm_work | T_T | 21:16 |
sbalukoff | That's what I thought. | 21:16 |
sbalukoff | And, it's sounding like Octavia needs to own both of them in any case to do what it'll have to do. | 21:16 |
rm_work | but it doesn't matter, since WHATEVER the scaling IP is, it needs to be done by Octavia | 21:16 |
sbalukoff | Right. | 21:17 |
rm_work | Octavia doesn't need to own the public IP | 21:17 |
rm_work | just the scaling IP | 21:17 |
blogan | damnit rm_work sbalukoff and i were in agreement and then you come in | 21:17 |
sbalukoff | How will the user point / route the public IP at the scaling IP if he doesn't own the scaling IP? | 21:17 |
blogan | go back to your hobbit hole | 21:17 |
sbalukoff | Haha | 21:17 |
rm_work | lol | 21:17 |
sbalukoff | blogan: We were agreeing about status information. | 21:17 |
blogan | and that creating the vip and ownership is an implementation detail | 21:18 |
sbalukoff | rm_work decided to regurgitate discussion about network topology. | 21:18 |
sbalukoff | Aah... true. | 21:18 |
rm_work | copy/pasting from my comment: | 21:18 |
rm_work | RS Impl: Octavia assigns a FLIP to the Amphorae, returns IP. User uses that IP as the LB's public IP. | 21:18 |
rm_work | HP Impl: Octavia assigns whatever their scaling backend IP thing is, returns the IP. User assigns their own FLIP to that IP. User uses their FLIP's IP as the LB's public IP. | 21:18 |
rm_work | Either way, Octavia has to do something similar -- attach some sort of scaling IP to all of the Amphorae. It's just that in our case, that IS the public IP, and in HP's case, it's an internal only IP that needs a FLIP created and routed to it. | 21:18 |
blogan | im pretty sure rm_work and I are saying the same thing | 21:18 |
rm_work | yes, since we agreed in the comments there :P | 21:19 |
sbalukoff | Can a user point a FLIP at an IP they don't own? | 21:19 |
rm_work | ... at least I think we did | 21:19 |
sbalukoff | (And isn't on a network they own?) | 21:19 |
rm_work | sbalukoff: we should make it so they can, I think | 21:19 |
sbalukoff | (I'm speaking of the HP impl view) | 21:19 |
rm_work | it may require their implementation to do some permission granting | 21:20 |
rm_work | not sure | 21:20 |
sbalukoff | So there's a serious security problem there... | 21:20 |
rm_work | that's kinda an implementation detail | 21:20 |
rm_work | <_< | 21:20 |
rm_work | I mean, not *any* IP they don't own | 21:20 |
rm_work | but the one that was created for them, yes | 21:20 |
sbalukoff | The security problem being that if I can point a FLIP at any IP on any network I want, I can now expose other tenants' "private" servers if I know anything about their internal networks. | 21:20 |
rm_work | ^^ | 21:21 |
sbalukoff | Ok, we're going to have to be more explicit in describing the restrictions there. | 21:21 |
rm_work | would require them to grant permission on specific objects to specific users, which might be hard | 21:21 |
rm_work | we're getting there with Barbican, but I don't think Neutron has plans for that kind of granular access control | 21:21 |
sbalukoff | But... correct me if I'm wrong, I think you're providing that option because HP has a hard and fast need to have the user own the FLIP? | 21:21 |
rm_work | so they might need an "attach" command where the user passes in a FLIP ID and Octavia/Neutron-LBaaS does the attachment of the FLIP as an admin | 21:22 |
sbalukoff | Aah... that would work. | 21:22 |
blogan | so why dont we just say the neutron driver HP uses will have a config option to turn on automatic creation of the FLIP to point to the scaling VIP, on behalf of the user | 21:22 |
rm_work | yeah because HP is weird :P | 21:22 |
sbalukoff | So Octavia, using its service-account-level privilege, would be the thing altering the FLIP. | 21:22 |
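A hedged sketch of that flow: Octavia, authenticated with service-account credentials, associates a user-supplied floating IP with the VIP port it owns. This assumes the python-neutronclient v2.0 API of the era; the credentials, IDs, and function name are placeholders, not real configuration.

```python
# Sketch of the "attach" command discussed above: the user hands over a
# floating IP ID they own, and Octavia (running with service-account
# privileges) points it at the VIP port it manages.
from neutronclient.v2_0 import client as neutron_client


def attach_user_flip(flip_id, vip_port_id):
    # Placeholder service-account credentials; a real deployment would pull
    # these from configuration.
    neutron = neutron_client.Client(
        username='octavia-service',
        password='secret',
        tenant_name='service',
        auth_url='http://keystone:5000/v2.0')
    # Associate the user's floating IP with the Octavia-owned VIP port.
    neutron.update_floatingip(
        flip_id, {'floatingip': {'port_id': vip_port_id}})
```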
rm_work | bah where is xgerman I wanted to prod him with that comment T_T | 21:22 |
sbalukoff | It's Friday. Nobody at HP works on Fridays. ;) | 21:23 |
blogan | keep the prodding to a minimum, you dont want to anger him | 21:23 |
rm_work | sbalukoff: could be, if Neutron doesn't support fine-grained access control that we'd need to have to allow the user to point a FLIP at "specific" IPs | 21:23 |
sbalukoff | (I am, of course, joking.) | 21:23 |
rm_work | damn, i was just sending my resume off to HP | 21:23 |
* rm_work puts resume back in folder | 21:23 | |
sbalukoff | rm_work: It's something to look into. | 21:24 |
sbalukoff | Haha! | 21:24 |
rm_work | my old job let us off at 3:00 on fridays :P | 21:24 |
rm_work | of course, they also enforced 8am start times the rest of the week >_> so bleh | 21:24 |
sbalukoff | All you had to do was show up at 6:00am? | 21:24 |
sbalukoff | ... | 21:25 |
sbalukoff | HAHA | 21:25 |
sbalukoff | Yeah... I like where I'm at. I'm literally still in my pyjamas, working from the bedroom. | 21:25 |
rm_work | sbalukoff: oh man, pajamas? | 21:25 |
rm_work | that's a step up from what I am usually wearing <_< | 21:25 |
rm_work | >_> | 21:25 |
rm_work | <_< | 21:25 |
sbalukoff | And thus we enter the TMI territory | 21:26 |
rm_work | so anyway, THOSE FLIPS, am i right? :P | 21:26 |
sbalukoff | Er... sure/ | 21:26 |
sbalukoff | ? | 21:26 |
* rm_work is trying to change the subject | 21:27 | |
sbalukoff | I need to respond to your comments in any case. And the ML discussion started about FLIPS. | 21:27 |
rm_work | YES | 21:28 |
rm_work | respond to that ML post | 21:28 |
rm_work | I hope your response is something like "Great idea, we'll start working with you on that immediately, and plan to go forward with that plan for our own network infrastructure ASAP." | 21:28 |
rm_work | Would make this Octavia discussion much simpler :P | 21:29 |
openstackgerrit | Trevor Vardeman proposed stackforge/octavia: Nova driver implementation https://review.openstack.org/133108 | 21:30 |
sbalukoff | Haha! | 21:30 |
sbalukoff | Oh shit! Trevor's actually getting useful work done today. | 21:30 |
rm_work | wat | 21:30 |
rm_work | nah | 21:30 |
sbalukoff | I better get back at it. Makin' me look bad. | 21:30 |
blogan | you do that well enough yourself | 21:31 |
blogan | BOOM! | 21:31 |
* blogan drops the mic | 21:32 | |
sbalukoff | HAha! | 21:32 |
*** jorgem has quit IRC | 21:52 | |
*** jorgem has joined #openstack-lbaas | 21:55 | |
*** jorgem has quit IRC | 21:56 | |
openstackgerrit | Trevor Vardeman proposed stackforge/octavia: Creation of Octavia API Documentation https://review.openstack.org/136499 | 21:57 |
TrevorV_ | Alright guys! I'm done for the day and will hear from/see you guys Monday! Have a good weekend! | 21:58 |
*** TrevorV_ has quit IRC | 21:58 | |
*** xgerman has joined #openstack-lbaas | 22:19 | |
*** jorgem has joined #openstack-lbaas | 22:24 | |
rm_work | ptoohill: when are you partying tonight? | 22:26 |
ptoohill | I may have rsvp'd to the wrong person, so possibly not at all :/ | 22:26 |
rm_work | T_T | 22:26 |
rm_work | I would assume you RSVP'd and just go | 22:27 |
rm_work | I doubt they'll be like "NO, GO HOME" | 22:27 |
rm_work | :P | 22:27 |
rm_work | you're the one that convinced me to go T_T | 22:27 |
ptoohill | hope not, and was planning on that, so im heading that way sometime after 6 | 22:27 |
ptoohill | :P | 22:27 |
rm_work | when does it officially start? | 22:27 |
ptoohill | 6:30 | 22:27 |
xgerman | 6:30 - is that a party you can bring kids to? | 22:29 |
ptoohill | heh | 22:29 |
xgerman | ;-) | 22:29 |
ptoohill | its more of a dinner thing | 22:29 |
rm_work | gotta start drinking early to get my money's worth :P | 22:29 |
xgerman | better head there at 5 then :-) | 22:29 |
ptoohill | 'Jingle and Mingle' | 22:30 |
xgerman | anyhow, rm_work sadly we work full days Friday at HP | 22:30 |
rm_work | heh | 22:30 |
rm_work | so you did see our conversation above :P | 22:30 |
rm_work | didn't think you were here | 22:30 |
xgerman | yeah, I was at the shop and got a new battery for my car | 22:31 |
xgerman | (those things only last 11 years!!) | 22:31 |
rm_work | holy shit you got 11 years? O_o | 22:31 |
rm_work | down here it's more like 4-5 | 22:31 |
rm_work | heat related, I think | 22:31 |
xgerman | well, the shop asked me to replace it like 3 years ago, but after I couldn't start it the other day I caved in | 22:32 |
rm_work | heh | 22:32 |
johnsom | sbalukoff: I saw that comment about not working on Friday.... | 22:32 |
rm_work | ah, johnsom tattled | 22:32 |
sbalukoff | Hehe! | 22:32 |
sbalukoff | Ain't I a stinker? | 22:33 |
xgerman | rm_work we don't have a hard and fast rule who owns the VIP -- I just was on the receiving end of customers complaining too often when we misplaced their VIP so I am reluctant to own it any longer... | 22:33 |
rm_work | yeah, I get it | 22:33 |
xgerman | cool, yeah and I am not sure if the special RAX FLIPs will gain traction in the community. We might just get a set of IPs and be told "good luck" | 22:34 |
xgerman | from our Neutron people | 22:35 |
xgerman | so hard to state requirements -- so I am stating preferences | 22:35 |
xgerman | hope that makes sense | 22:35 |
*** vivek-eb_ has joined #openstack-lbaas | 22:37 | |
*** vivek-ebay has quit IRC | 22:37 | |
*** vivek-ebay has joined #openstack-lbaas | 22:38 | |
*** vivek-eb_ has quit IRC | 22:38 | |
openstackgerrit | Michael Johnson proposed stackforge/octavia: Add Amphora base image creation scripts for Octavia https://review.openstack.org/132904 | 22:41 |
*** BhargavBhikkaji has joined #openstack-lbaas | 22:41 | |
johnsom | sbalukoff: I dropped the comment about the template file. You are correct about the config rendering. | 22:42 |
BhargavBhikkaji | Hello. | 22:42 |
sbalukoff | Sounds good. I'll check it out later today. | 22:42 |
sbalukoff | Thanks! | 22:42 |
sbalukoff | Hello! | 22:42 |
johnsom | No, thank you for reviewing! | 22:42 |
BhargavBhikkaji | I am trying to integrate HAProxy LB to my openstack network. Is there any link that walks me thro' the procedure ? | 22:43 |
rm_work | https://review.openstack.org/#/c/132904/9/elements/haproxy-octavia/os-refresh-config/configure.d/20-haproxy-tune-kernel | 22:43 |
rm_work | I thought sbalukoff was saying it was better to disable conntrack altogether? | 22:44 |
rm_work | though I heard that secondhand so maybe that is not right? | 22:44 |
johnsom | Yeah, but he sent settings for it anyway... Grin | 22:44 |
sbalukoff | That's correct. | 22:44 |
rm_work | <_< | 22:44 |
rm_work | >_> | 22:44 |
rm_work | <_< | 22:44 |
blogan | BhargavBhikkaji: are you using the neutron lbaas extension with the haproxy driver? | 22:44 |
sbalukoff | And yes, the settings are there for tuning it anyway. | 22:44 |
sbalukoff | :) | 22:44 |
rm_work | k, I guess that's in case it isn't off, and has no effect if it is? | 22:45 |
sbalukoff | Well, I'm not sure we're going to be able to get away from having conntrack loaded, running as we are on a compute node, and given the machinations that need to happen there for VM connectivity. | 22:45 |
sbalukoff | It will still have an effect if it's loaded-- a negative one. | 22:46 |
sbalukoff | But, those settings in that file are FAR better than the default for a load balancer. | 22:46 |
dougwig | BhargavBhikkaji: http://docs.openstack.org/admin-guide-cloud/content/install_neutron-lbaas-agent.html | 22:46 |
rm_work | sbalukoff: i mean, if it ISN'T loaded, then setting that conntrack_max will do nothing | 22:47 |
rm_work | right? | 22:47 |
johnsom | right | 22:47 |
rm_work | so either it fixes the broken default settings, or it does nothing | 22:47 |
sbalukoff | Your are correct | 22:48 |
rm_work | k | 22:48 |
*** cipcosma has quit IRC | 22:48 | |
sbalukoff | You are, even. | 22:48 |
rm_work | heh | 22:48 |
rm_work | I refrained from asking "my what are correct?" :P | 22:48 |
rm_work | I really was going to let that slide. | 22:48 |
BhargavBhikkaji | blogan: I installed mirantis and am trying to integrate LB now. I am not sure if i am using the neutron lbaas extension with the haproxy driver | 22:49 |
BhargavBhikkaji | <blogan>: Can you explain more ? | 22:49 |
blogan | BhargavBhikkaji: I think the link dougwig posted is a good starting point | 22:49 |
BhargavBhikkaji | dougwig: Thanks. I am going thro' the link | 22:50 |
*** barclaac has quit IRC | 23:01 | |
*** barclaac has joined #openstack-lbaas | 23:09 | |
johnsom | rm_work blogan: Looking at the comments on the controller spec. Just to come up to speed, the current plan is the api process will have access to the Octavia DB (tm) directly? | 23:10 |
blogan | johnsom: yes, but i wouldn't consider it part of the controller | 23:11 |
blogan | but i guess its a gray area | 23:12 |
johnsom | No, that is fine that it is *not* part of the controller (actually better). I just thought we were limiting the components that had direct access to the DB. | 23:12 |
blogan | well the API needs to have access to the DB | 23:12 |
rm_work | yes | 23:13 |
BhargavBhikkaji | dougwig: does "apt-get install neutron-lbaas-agent" download haproxy ? | 23:13 |
johnsom | Ok. I thought someone was pushing to have that access proxy through something. I will update the spec with this in mind soon. | 23:13 |
blogan | well i think the original plan was to just have things make requests to a queue, and the controller pulls off that queue | 23:14 |
johnsom | Probably back when the API code was just the neutron plug in. | 23:14 |
johnsom | Right | 23:14 |
johnsom | Ok, thanks for the catch and review. | 23:15 |
blogan | but having an API in front lets the controller only worry about creates, updates and reads, and the API process handles the higher frequency reads | 23:15 |
blogan | and we immediately store the user's request, instead of having it travel through the queue to the controller, and then the controller has to store that | 23:16 |
blogan | and also the api process is the gateway for whether a load balancer can be updated or not, based on its provisioning status | 23:16 |
blogan | so we don't get a bunch of updates to the same load balancer and they get processed in an odd order | 23:17 |
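A minimal sketch of the gate blogan describes, assuming the API layer tracks a provisioning_status per load balancer; the constant list and exception name are illustrative, not Octavia's actual code.

```python
# Sketch: the API layer rejects writes to a load balancer that is already
# mid-provisioning, so the controller only ever sees one in-flight change
# per LB and updates can't be processed in an odd order.
MUTABLE_STATUSES = ('ACTIVE', 'ERROR')


class ImmutableObjectError(Exception):
    """Raised when a write targets an LB that is still being provisioned."""


def check_mutable(load_balancer):
    if load_balancer.provisioning_status not in MUTABLE_STATUSES:
        # Surfaces as an error to the caller; the request never reaches the
        # queue or the controller.
        raise ImmutableObjectError(load_balancer.id)
```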
johnsom | I think the controller will need to manage that as well. | 23:17 |
johnsom | Health events and such | 23:18 |
blogan | yeah definitely the health events | 23:19 |
blogan | but that shouldn't change the provisioning status, just the operating status | 23:19 |
johnsom | I think we need to go into a "pending update" like state during fail over due to health event situations | 23:20 |
johnsom | we don't want to try to push out a config change during deploy | 23:21 |
*** vivek-ebay has quit IRC | 23:22 | |
dougwig | BhargavBhikkaji: i'm not sure. check your package list (dpkg) | 23:26 |
*** mlavalle has quit IRC | 23:27 | |
openstackgerrit | Jorge Miramontes proposed stackforge/octavia: Added versioning and migration mandates https://review.openstack.org/139755 | 23:37 |
jorgem | yeah buddy :) | 23:38 |
blogan | updated the lbaas v2 spec https://review.openstack.org/#/c/138205 | 23:42 |
*** jorgem has quit IRC | 23:52 | |
dougwig | sbalukoff: the reason for entry_point instead of driver was that flavors can also be applied to plugins | 23:57 |