*** sbfox has joined #openstack-lbaas | 00:37 | |
*** sbfox has quit IRC | 00:39 | |
*** vjay3 has joined #openstack-lbaas | 00:42 | |
vjay3 | hi blogan | 01:52 |
vjay3 | i am trying to rebase the netscaler driver with the latest v2 patches. i get the following error | 01:54 |
vjay3 | ! [remote rejected] HEAD -> refs/publish/master/bp/netscaler-lbass-v2-driver (squash commits first) | 01:54 |
*** woodster_ has joined #openstack-lbaas | 01:57 | |
*** vivek-ebay has joined #openstack-lbaas | 03:13 | |
*** vjay3 has quit IRC | 03:25 | |
blogan | its been over 24 hours since I logged into irc! | 03:27 |
rm_you | oh god | 03:27 |
rm_you | vjay3 is about to clobber you, i feel | 03:27 |
blogan | he's not in channel | 03:27 |
rm_you | i know | 03:27 |
rm_you | but | 03:27 |
rm_you | <vjay3> i am trying to rebase the netscaler driver with the latest v2 patches. i get the following error | 03:27 |
rm_you | <vjay3> ! [remote rejected] HEAD -> refs/publish/master/bp/netscaler-lbass-v2-driver (squash commits first) | 03:27 |
blogan | i got it | 03:27 |
rm_you | yeah | 03:27 |
blogan | he sent email too on ML | 03:27 |
rm_you | heh | 03:28 |
dougwig | blogan: jinx | 03:28 |
blogan | im just going to have to update the GerritWorkflow page with explicit details on how to properly do rebases | 03:28 |
blogan | and ive been lazy about it because i hate wiki markup | 03:28 |
dougwig | i sent a safe example to the ML (using a new clone.) | 03:29 |
blogan | dougwig: why'd you do git review -d first? | 03:30 |
blogan | oh nvm | 03:30 |
dougwig | new clone, you have to re-setup the dependencies. | 03:30 |
blogan | sorry i thought the git checkout after was pulling from gerrit | 03:30 |
blogan | that is the best way to do it | 03:31 |
blogan | it's just not obvious because gerrit provides no help | 03:31 |
dougwig | did my email come through with a bunch of weird characters? i told outlook to use plain text. | 03:32 |
blogan | Œrebase¹ | 03:32 |
dougwig | fuck. | 03:32 |
dougwig | i hate outlook. i hate exchange. | 03:32 |
blogan | lol yeah they are sprinkled throughout | 03:32 |
dougwig | fuck, i'll probably have to use a different client for ML emails. that's just crappy. | 03:33 |
blogan | what have you been using before? | 03:33 |
dougwig | it's always been outlook. but i've noticed a few of those characters before, it's just less obvious when it's text. | 03:34 |
blogan | i was just about to use outlook myself but then realized i can't figure out how to do in-line comments correctly with outlook | 03:38 |
rm_you | haha yes | 03:38 |
rm_you | comments in red | 03:39 |
rm_you | that's how | 03:39 |
rm_you | YOLO | 03:39 |
dougwig | you set it to compose in text, and then tell it to use '>'. but it clearly is encoding more than ascii, or mine wouldn't be getting corrupted. | 03:39 |
*** vjay3 has joined #openstack-lbaas | 03:39 | |
rm_you | dougwig: is that on windows or OSX? | 03:39 |
dougwig | os x | 03:39 |
rm_you | hmm | 03:39 |
rm_you | I looked and could not find that option in OSX for the life of me | 03:39 |
rm_you | also, does practically every dev these days use OSX? <_<< | 03:40 |
rm_you | seems like 90% of people I talk to are on OSX | 03:40 |
dougwig | outlook -> preferences -> compose, unclick html by default, then in the settings pane click 'text'. | 03:40 |
rm_you | hmm k | 03:40 |
dougwig | i just died a little inside, being able to type that from memory. | 03:40 |
rm_you | i'll try not to forget that before tomorrow afternoon | 03:40 |
dougwig | this channel is archived. | 03:40 |
blogan | don't worry dougwig can die a little more inside tomorrow | 03:40 |
blogan | ill be sure to ask him agian tomorrow as well. at some point dougwig will just be an empty husk | 03:41 |
dougwig | that's alright, i've been sitting with a -2 all weekend, which is killing much more of my soul. | 03:41 |
dougwig | blogan: thanks for the +1 re-up, btw. | 03:42 |
blogan | np, you've given me a gajillion re-ups | 03:42 |
blogan | and evgeny too | 03:43 |
blogan | and avishay | 03:43 |
blogan | and ptoohill | 03:43 |
blogan | and yourself | 03:43 |
blogan | and soon to be vjay3 | 03:43 |
blogan | oh i have a list | 03:43 |
dougwig | the blogan naughty/nice list. | 03:44 |
blogan | all naughty | 03:44 |
blogan | hey you never added your thoughts on the object model discussion | 03:44 |
dougwig | sorry, i've worked over 40 hours since friday morning, trying to get the new CI processes in place, since they morphed again (and look to be in flux AGAIN on monday.) after fpf/juno, i expect to refocus. | 03:45 |
blogan | no dougwig, you need to put in 60 hour weekends | 03:46 |
blogan | somehow squeeze out 6 extra hours on both days | 03:47 |
dougwig | friday was a 17 hour day, so in comparison, i took it easy on the weekend. | 03:47 |
dougwig | here, jump through this hoop. now 40 more. you're on 39? oh wait, we painted them all red. have you been through the red ones yet? | 03:48 |
blogan | those are just flames | 03:48 |
dougwig | my family went camping for the week this morning. so for the first time in years, i watched a gory violent war movie on full volume in the middle of the day. i guess i could've been working then. | 03:50 |
rm_you | heh | 03:50 |
rm_you | like | 03:50 |
blogan | no you gotta take advantage of that | 03:50 |
rm_you | oldschool War Movie | 03:50 |
blogan | what movie? | 03:50 |
rm_you | or Inglorious Bastards? :P | 03:50 |
blogan | rm_you: what do you consider oldschool? | 03:50 |
rm_you | blogan: anything serious about war where the moral is "war is bloody and everyone dies" | 03:51 |
blogan | please don't say "saving private ryan" please don't say "saving private ryan" | 03:51 |
dougwig | it was lone survivor | 03:51 |
blogan | oh ive been meaing to watch that | 03:51 |
rm_you | hmm | 03:51 |
rm_you | alternatively, Enemy at the Gates was good | 03:51 |
*** amotoki_ has joined #openstack-lbaas | 03:51 | |
rm_you | not really "oldschool" either :P | 03:51 |
blogan | though it will be hard not to do mark wahlberg impressions while watching it | 03:51 |
*** amotoki_ has quit IRC | 03:52 | |
blogan | yeah i like enema at the gates | 03:52 |
dougwig | i never said old school. | 03:52 |
dougwig | not that i'm knocking old school war movies. | 03:52 |
*** Krast has joined #openstack-lbaas | 03:56 | |
blogan | alright im going to play a few games and going to sleep | 04:09 |
blogan | yall have a good night | 04:09 |
dougwig | Night. I'm going to go find another violent movie. | 04:15 |
rm_you | night | 04:22 |
*** vivek-eb_ has joined #openstack-lbaas | 04:25 | |
*** vivek-ebay has quit IRC | 04:28 | |
*** vivek-ebay has joined #openstack-lbaas | 04:50 | |
*** vivek-eb_ has quit IRC | 04:53 | |
*** vivek-ebay has quit IRC | 04:54 | |
*** amotoki_ has joined #openstack-lbaas | 05:12 | |
*** enikanorov has quit IRC | 05:32 | |
*** enikanorov has joined #openstack-lbaas | 05:32 | |
*** vjay3 has quit IRC | 05:34 | |
*** sbalukoff has joined #openstack-lbaas | 05:41 | |
*** vjay3 has joined #openstack-lbaas | 05:43 | |
*** vjay3 has quit IRC | 05:54 | |
*** orion_ has joined #openstack-lbaas | 05:59 | |
*** orion_ has quit IRC | 06:04 | |
*** jschwarz has joined #openstack-lbaas | 07:00 | |
*** mestery_afk has quit IRC | 07:05 | |
*** mestery_afk has joined #openstack-lbaas | 07:07 | |
*** sbfox has joined #openstack-lbaas | 07:50 | |
*** sbfox has quit IRC | 08:03 | |
*** vjay3 has joined #openstack-lbaas | 09:06 | |
*** vjay3 has quit IRC | 09:44 | |
*** vjay3 has joined #openstack-lbaas | 09:46 | |
*** vivek-ebay has joined #openstack-lbaas | 10:26 | |
*** vivek-eb_ has joined #openstack-lbaas | 10:30 | |
*** vivek-ebay has quit IRC | 10:31 | |
*** vjay3 has quit IRC | 11:25 | |
*** vjay3 has joined #openstack-lbaas | 11:26 | |
*** vivek-ebay has joined #openstack-lbaas | 11:58 | |
*** vivek-eb_ has quit IRC | 12:01 | |
*** vivek-ebay has quit IRC | 12:03 | |
*** jschwarz has quit IRC | 12:25 | |
*** amotoki_ has quit IRC | 12:31 | |
*** vjay3 has quit IRC | 12:40 | |
*** rohara has joined #openstack-lbaas | 13:02 | |
*** enikanorov__ has joined #openstack-lbaas | 13:08 | |
*** enikanorov_ has quit IRC | 13:11 | |
*** jschwarz has joined #openstack-lbaas | 13:46 | |
*** orion__ has joined #openstack-lbaas | 14:01 | |
*** enikanorov_ has joined #openstack-lbaas | 14:03 | |
*** Krast_ has joined #openstack-lbaas | 14:04 | |
*** enikanorov__ has quit IRC | 14:11 | |
*** Krast has quit IRC | 14:11 | |
*** sbfox has joined #openstack-lbaas | 14:38 | |
*** jschwarz has quit IRC | 14:41 | |
*** markmcclain has joined #openstack-lbaas | 14:49 | |
*** fnaval has joined #openstack-lbaas | 14:49 | |
*** fnaval_ has joined #openstack-lbaas | 14:51 | |
*** sbfox has quit IRC | 14:53 | |
*** fnaval has quit IRC | 14:54 | |
*** xgerman has joined #openstack-lbaas | 15:10 | |
*** sbalukoff has quit IRC | 15:12 | |
*** TrevorV_ has joined #openstack-lbaas | 15:16 | |
*** TrevorV_ has quit IRC | 15:17 | |
*** TrevorV_ has joined #openstack-lbaas | 15:18 | |
*** markmcclain has quit IRC | 15:35 | |
blogan | good morning! | 15:53 |
dougwig | morning! | 15:59 |
*** sbfox has joined #openstack-lbaas | 16:00 | |
*** sbfox has quit IRC | 16:07 | |
*** sbfox has joined #openstack-lbaas | 16:07 | |
blogan | dougwig i think you're a zombie | 16:09 |
dougwig | blogan: most days i feel like one. | 16:09 |
*** vivek-ebay has joined #openstack-lbaas | 16:33 | |
*** VijayB has joined #openstack-lbaas | 16:41 | |
*** crc32 has joined #openstack-lbaas | 16:55 | |
*** sbfox1 has joined #openstack-lbaas | 16:57 | |
*** sbfox1 has quit IRC | 16:57 | |
*** sbfox1 has joined #openstack-lbaas | 16:58 | |
*** sbfox has quit IRC | 16:59 | |
*** sbfox has joined #openstack-lbaas | 17:01 | |
*** sbfox1 has quit IRC | 17:02 | |
*** vivek-ebay has quit IRC | 17:07 | |
*** vivek-ebay has joined #openstack-lbaas | 17:08 | |
*** VijayB has quit IRC | 17:08 | |
*** vivek-ebay has quit IRC | 17:10 | |
*** markmcclain has joined #openstack-lbaas | 17:18 | |
*** markmcclain has quit IRC | 17:18 | |
*** markmcclain has joined #openstack-lbaas | 17:19 | |
*** VijayB_ has joined #openstack-lbaas | 17:27 | |
*** fnaval_ has quit IRC | 17:30 | |
*** jschwarz has joined #openstack-lbaas | 17:37 | |
*** TrevorV_ has quit IRC | 17:38 | |
*** sbfox1 has joined #openstack-lbaas | 17:44 | |
*** sbfox has quit IRC | 17:47 | |
*** VijayB_ has quit IRC | 17:56 | |
*** sbalukoff has joined #openstack-lbaas | 17:57 | |
*** magesh has joined #openstack-lbaas | 17:59 | |
*** vjay3 has joined #openstack-lbaas | 18:01 | |
*** VijayB_ has joined #openstack-lbaas | 18:21 | |
*** VijayB_ has quit IRC | 18:25 | |
*** sbfox1 has quit IRC | 18:29 | |
*** sbfox has joined #openstack-lbaas | 18:29 | |
*** vjay3 has quit IRC | 18:31 | |
*** TrevorV_ has joined #openstack-lbaas | 18:35 | |
*** jorgem has joined #openstack-lbaas | 18:35 | |
*** VijayB has joined #openstack-lbaas | 18:54 | |
*** VijayB has quit IRC | 18:59 | |
blogan | no more -2 dougwig | 19:21 |
dougwig | woohoo, now i need to work on some + symbols. | 19:23 |
*** TrevorV_ has quit IRC | 19:26 | |
*** TrevorV_ has joined #openstack-lbaas | 19:27 | |
*** VijayB_ has joined #openstack-lbaas | 19:32 | |
*** VijayB_ has quit IRC | 19:32 | |
*** fnaval has joined #openstack-lbaas | 19:35 | |
*** fnaval has quit IRC | 19:39 | |
*** jschwarz has quit IRC | 19:45 | |
blogan | https://wiki.openstack.org/wiki/Gerrit_Workflow | 19:51 |
blogan | edited with dependency chain maintenance at the bottom | 19:51 |
*** VijayB_ has joined #openstack-lbaas | 19:52 | |
dougwig | looks good, but i think each scenario needs a concrete example. | 19:54 |
*** VijayB_ has quit IRC | 19:58 | |
*** rohara has quit IRC | 19:58 | |
*** rohara has joined #openstack-lbaas | 19:59 | |
*** VijayB has joined #openstack-lbaas | 20:10 | |
blogan | that might require images | 20:10 |
blogan | and im too lazy to do that | 20:10 |
blogan | i guess you mean just fake commit hashes and Change-Ids | 20:11 |
*** sbfox has quit IRC | 20:14 | |
*** VijayB has quit IRC | 20:15 | |
*** enikanorov has quit IRC | 20:15 | |
*** enikanorov has joined #openstack-lbaas | 20:15 | |
*** VijayB has joined #openstack-lbaas | 20:29 | |
sbalukoff | Heh! | 20:31 |
*** VijayB has quit IRC | 20:33 | |
*** TrevorV_ has quit IRC | 20:38 | |
blogan | sbalukoff ping | 20:43 |
dougwig | no route to host | 20:49 |
sbalukoff | Haha! | 20:52 |
sbalukoff | I'm here. | 20:52 |
sbalukoff | More or less. | 20:52 |
sbalukoff | (I'm planning on at least lurking in the Neutron meeting coming up in 7 minutes.) | 20:52 |
sbalukoff | blogan: pong | 20:53 |
*** VijayB_ has joined #openstack-lbaas | 20:53 | |
blogan | sbalukoff: question about single listener, wouldn't the https redirect scenario require two listeners in an haproxy config? | 20:55 |
blogan | or is it just redirecting to the machine listening on 443 | 20:56 |
blogan | and if haproxy is listening on that its fine | 20:56 |
*** sbfox has joined #openstack-lbaas | 20:56 | |
blogan | im probably not making sense | 20:57 |
sbalukoff | blogan: No. It requires two listeners in separate configs and processes. | 20:58 |
sbalukoff | So you can redirect from one listening service to any other listening service on any other machine or port. | 20:58 |
blogan | ok makes sense | 20:58 |
sbalukoff | This is just a specialized case of that. | 20:58 |
sbalukoff | :) | 20:58 |
sbalukoff | Ok, cool. | 20:58 |
*** sbfox1 has joined #openstack-lbaas | 21:00 | |
*** sbfox has quit IRC | 21:00 | |
*** orion_ has joined #openstack-lbaas | 21:29 | |
*** sbfox1 has quit IRC | 21:30 | |
*** sbfox has joined #openstack-lbaas | 21:30 | |
*** orion__ has quit IRC | 21:32 | |
*** orion_ has quit IRC | 21:33 | |
*** fnaval has joined #openstack-lbaas | 21:34 | |
*** crc32 has quit IRC | 21:46 | |
johnsom_ | I think we will need one vip per instance, many listeners per instance. | 21:49 |
johnsom_ | Running multiple HAProxy instances per Octavia VM gets messy with log file locations, start/stop scripts, api ports, etc. | 21:50 |
sbalukoff | Actually, it's reeeeally simple. | 21:50 |
sbalukoff | Anything that needs a custom path, use the listener's UUID. | 21:51 |
xgerman | ok, so that we are on the same page: loadbalancer = VM | 21:51 |
xgerman | listener = haproxy on that VM? | 21:51 |
sbalukoff | Er... anything that needs a unique path. | 21:51 |
sbalukoff | No | 21:51 |
sbalukoff | Listener should be a listening TCP port. | 21:51 |
johnsom_ | So, instead of having canned Octavia VM images, you propose "building" HAProxy directories/instances? | 21:52 |
sbalukoff | loadbalancer is loadbalancer as defined in Neutron LBaaS (ie. basically a virtual IP address with a couple extra attributes.) | 21:52 |
sbalukoff | johnsom_: Yes. | 21:52 |
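A minimal sketch of the per-listener layout sbalukoff describes above — one haproxy process per listener, with every path derived from the listener's UUID. The base directory, file names, and helper functions are illustrative assumptions, not actual Octavia code:

```python
import os
import subprocess

# Hypothetical base directory; the real layout would be defined by Octavia.
BASE_DIR = "/var/lib/octavia"


def listener_paths(listener_id):
    """Derive every per-listener path from the listener's UUID alone."""
    root = os.path.join(BASE_DIR, listener_id)
    return {
        "config": os.path.join(root, "haproxy.cfg"),
        "pid": os.path.join(root, "haproxy.pid"),
        "stats_socket": os.path.join(root, "stats.sock"),
        "log": os.path.join(root, "haproxy.log"),
    }


def start_listener(listener_id):
    """Start a dedicated haproxy process for a single listener."""
    paths = listener_paths(listener_id)
    os.makedirs(os.path.dirname(paths["config"]), exist_ok=True)
    # Rendering the one-listener config template is out of scope for this sketch.
    subprocess.check_call(["haproxy", "-f", paths["config"], "-p", paths["pid"]])
```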
blogan | it'll have colocation/apolocation hints as well | 21:52 |
xgerman | understood, I was more talking about what real objects (aka nova vms) map to what | 21:52 |
sbalukoff | It's a specialized appliance. why not? | 21:52 |
blogan | or will that be on the listener? | 21:52 |
sbalukoff | blogan: That will be on the loadbalancer. | 21:53 |
xgerman | so Neutron LBaaS object maps to "load balancing appliance" | 21:53 |
sbalukoff | xgerman: Nova VMs map to an Octavia VM | 21:53 |
sbalukoff | An Octavia VM may have many load balancers | 21:53 |
sbalukoff | A load balancer may have many listeners | 21:53 |
blogan | okay so if a loadbalancer has two listeners, can those two listeners be on separate nova VMs? | 21:54 |
sbalukoff | A listener may have many pools | 21:54 |
sbalukoff | A Pool may have many members. | 21:54 |
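The containment hierarchy sbalukoff lays out above, captured as a minimal Python sketch; class and attribute names are illustrative, not the real Neutron LBaaS or Octavia models:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Member:
    address: str
    port: int


@dataclass
class Pool:
    members: List[Member] = field(default_factory=list)


@dataclass
class Listener:
    protocol_port: int                      # a listener is a listening TCP port
    pools: List[Pool] = field(default_factory=list)


@dataclass
class LoadBalancer:
    vip_address: str                        # basically a VIP with a few extra attributes
    listeners: List[Listener] = field(default_factory=list)


@dataclass
class OctaviaVM:
    load_balancers: List[LoadBalancer] = field(default_factory=list)
```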
johnsom_ | That seems like it makes the "shared fate" decisions a lot uglier as we won't be able to let nova manage it | 21:54 |
sbalukoff | blogan: No. | 21:54 |
dougwig | might be a good time to revisit ipv4+ipv6 on a single LB (we can map it to two in neutron lbaas) | 21:54 |
sbalukoff | dougwig: I'm indifferent on that. | 21:55 |
sbalukoff | dougwig: On the one hand, the most common use case by far for IPv6 + IPv4 is to expose the same service over different IP protocols. | 21:55 |
sbalukoff | But, allowing them to be separate allows them to scale separately. | 21:55 |
sbalukoff | So, I could see arguments for either way. | 21:55 |
dougwig | hmm, that only matters if you want more ipv6 than 4. which should be true in a few decades. | 21:56 |
dougwig | :) | 21:56 |
sbalukoff | dougwig: It might not actually matter at all. | 21:56 |
sbalukoff | dougwig: The other argument that holds weight here is that it's a simpler model to have a loadbalancer have just one vip_address. | 21:57 |
xgerman | yeah, I like that | 21:57 |
sbalukoff | (And not care whether that address is IPv4 or IPv6) | 21:57 |
xgerman | simple = good | 21:57 |
xgerman | (for me at least) | 21:57 |
blogan | or we could make it totally awesome | 21:57 |
blogan | and redefine loadbalancer to have many VIPs and each VIP have many Listeners! | 21:58 |
johnsom_ | sbalukoff: With your terminology, does load balancer == HAProxy instance in the Octavia case? | 21:58 |
xgerman | I love awesome - you know the Hulu show The Awesomes? | 21:58 |
sbalukoff | johnsom_: No. Listener = HAproxy instance. | 21:59 |
xgerman | ok, that makes sense now | 21:59 |
blogan | many loadbalancers can be on a VM though right? | 21:59 |
xgerman | yep, he said that earlier | 21:59 |
sbalukoff | blogan: Yes. | 21:59 |
xgerman | though I would love to restrict 1 LB per VM | 21:59 |
blogan | i think it should be configurable | 21:59 |
sbalukoff | xgerman: I don't see a reason we couldn't make that configurable. | 22:00 |
sbalukoff | 1 LB per VM is just a specialized case of many LBs per VM. :) | 22:00 |
xgerman | true. | 22:00 |
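If the one-loadbalancer-per-VM restriction becomes a deploy-time knob as discussed here, it could be a single operator-facing option. A hypothetical oslo.config sketch — the option name, default, and group are invented for illustration:

```python
from oslo_config import cfg

# Hypothetical option; the real name and group would be decided in the spec.
octavia_opts = [
    cfg.IntOpt('max_load_balancers_per_vm',
               default=1,
               help='Maximum number of load balancers (VIPs) scheduled onto '
                    'a single Octavia VM. Operators who want dense packing '
                    'can raise this; 1 preserves a per-LB performance floor.'),
]

cfg.CONF.register_opts(octavia_opts, group='octavia')
```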
blogan | so is it the plan to have an API in front of Octavia for 0.5? i can't remember | 22:01 |
johnsom_ | I would like to see more of load balancer == HAProxy instance, where HAProxy instance can have one or more listeners. | 22:01 |
sbalukoff | blogan: I think so. We'll want an Operator API in any case. | 22:01 |
sbalukoff | johnsom_: Why? | 22:01 |
sbalukoff | (Seriously, I'm just looking to understand your reasoning for wanting that. A pros and cons list would be great.) | 22:03 |
blogan | so is everyone okay with the object model and API only having the loadbalancer as the root object? | 22:03 |
johnsom_ | It lets the customer manage their shared fate better. It saves memory and simplifies SSL termination. It is a common use case to have many ports on the same vip. I want to use additional listeners for proof-of-life tests. | 22:03 |
sbalukoff | blogan: I certainly am. | 22:03 |
xgerman | blogan: +1 | 22:04 |
sbalukoff | johnsom_: Can you elaborate on what you mean by 'shared fate'? I don't think I understand that. | 22:04 |
sbalukoff | The memory saved is negligible. I don't agree that SSL termination is simplified. Common use case is satisfied with many haproxy processes on the same vip as well. | 22:05 |
sbalukoff | Case for keeping separate haproxy instance per listener: | 22:05 |
sbalukoff | 1. Simpler template | 22:05 |
sbalukoff | 2. Good separation of resources / accounting at the haproxy level. | 22:05 |
sbalukoff | 3. If you mess up listener on port A, you don't also take out listener on port B. | 22:06 |
johnsom_ | Customers see the load balancers in horizon as a "unit" and every listener that is a member of it has a shared fate with the others on the same load balancer unit. | 22:06 |
sbalukoff | 4. Restarting HTTP listener doesn't cause you to lose your SSL session cache for the HTTPS listener. | 22:06 |
sbalukoff | 5. Troubleshooting is simpler if you can watch just the listener having problems. | 22:06 |
sbalukoff | johnsom_: Is that not the case with separate haproxy instances? They would all be listening on the same Octavia VM. | 22:07 |
blogan | johnsom_: do you mean if you "restart" a loadbaalncer then it should restart all of the listeners | 22:08 |
blogan | ? | 22:08 |
sbalukoff | 6. Restarting an haproxy instance is probably the "most risky" part of any kind of regular maintenance. It's better to leave the instances listening on different ports alone. | 22:08 |
johnsom_ | I don't think it makes the template much more complicated really. The memory does add up. #3 is a feature. #5 is harder as you have more logs to download and look through. | 22:10 |
johnsom_ | blogan: yes | 22:10 |
sbalukoff | I disagree entirely on #5. When you're troubleshooting the listener on port 443, you don't want to have to filter out log lines for the listener on port 80. | 22:11 |
johnsom_ | Customers tend to align "services" to load balancers. I.e. they have an "acme application" service and its load balancer. | 22:11 |
sbalukoff | #3 is a *really good* feature. | 22:11 |
blogan | i think #5 is harder or simpler based on the case | 22:11 |
sbalukoff | blogan: You're too nice. ;) | 22:12 |
blogan | no I'm just saying if you don't know which port it's on, it would be more difficult to have to pull down all the logs rather than looking through one big log | 22:12 |
johnsom_ | #3 is a feature in that they are tied together: if one is wrong, restarted, or shut down, they all should be | 22:12 |
sbalukoff | Oh-- I would consider that an anti-feature. | 22:13 |
xgerman | blogan +1, also if you are debugging redirects or mixed pages (e.g. images on 80 and secure stuff on 443) | 22:13 |
sbalukoff | Not quite a bug, but not desirable behavior. | 22:13 |
dougwig | 1 LB per VM + dpdk == lots of wasted vm potential. | 22:13 |
dougwig | (i'm way behind on this chat.) | 22:13 |
johnsom_ | Typically you are debugging the service vs. the port. I.e. you need to see the hand offs or associated communication flows. | 22:13 |
sbalukoff | Interestingly enough: you don't *have* to use separate log files if you're using separate haproxy instances. | 22:14 |
sbalukoff | But if you are using a single haproxy instance, you can only use a unified log file, IIRC. | 22:14 |
sbalukoff | Separate instances therefore gives you more flexibility. | 22:14 |
johnsom_ | dougwig: doesn't dpdk work at the hypervisor level? I'm all for stacking up the VMs on the hardware | 22:15 |
sbalukoff | Also, if something like syslog-ng is used (as we use it here), then the whole "what gets logged where" question mostly becomes moot-- as you can do whatever you want with a utility like that. | 22:15 |
dougwig | the notion of userland networking is not specific to the hypervisor. | 22:16 |
xgerman | anyhow, the majority of our customers run one listener per ip | 22:16 |
sbalukoff | xgerman: By design, or is that because "that's how we wrote it originally" | 22:16 |
sbalukoff | ? | 22:16 |
xgerman | I think both | 22:17 |
sbalukoff | I'm getting the impression that you're advocating this behavior not because it's actually better, but because it's what you're used to. | 22:17 |
johnsom_ | How can you isolate load when stacking instances on one VM? | 22:18 |
xgerman | in our implementation the listeners on port 80 and port 443 are presented as two separate load balancers | 22:18 |
xgerman | (internally they are the same haproxy) | 22:18 |
johnsom_ | sbalukoff: I think what he meant is what we see our customers doing is one listener per ip. | 22:18 |
xgerman | so either way would work with our customers | 22:18 |
blogan | xgerman: thats probably because you went off of the Atlas API which only allowed one port per load balancer | 22:19 |
xgerman | yep, we did... | 22:19 |
blogan | do you have the concept of shared vips in libra? | 22:19 |
sbalukoff | johnsom_: Are your customers configuring the haproxy service(s) directly, manually? If so then I'm not surprised: this is what the haproxy docs would have you do. But it doesn't make for the best design for a service controlled through an API. | 22:19 |
xgerman | blogan: yes, we can share vips | 22:20 |
blogan | xgerman: so in that case don't you have the ability to have one vip with many listeners? | 22:20 |
xgerman | sbalukoff: the main argument is that we need to predict load to pick a specific vm size | 22:20 |
johnsom_ | sbalukoff: No, they can't access the haproxy config. It is either through horizon or the API, typically automated with tools like puppet | 22:20 |
sbalukoff | xgerman: Yes, that will always be a problem... at least until we have some good auto-scaling capability (ie. Octavia v2) | 22:21 |
xgerman | yep, so having restrictions/configurations allows us to kind of rein that in | 22:21 |
xgerman | so we can run on say xsmall with minimum memory... | 22:21 |
sbalukoff | Also interestingly, if you have to re-balance load, it helps to have separate haproxy instances because it becomes much more obvious which ones are the ones consuming the most resources. | 22:22 |
sbalukoff | (Again, this goes back to the "good separation of resources" argument above) | 22:22 |
johnsom_ | Sorry to ask again, but how do you separate load with multiple instances on one vm? | 22:22 |
sbalukoff | Oh, you were asking me? I thought you were talking to dougwig. | 22:23 |
johnsom_ | With instance per vm nova and neutron provision/schedule for a performance floor. | 22:23 |
xgerman | blogan: yes we have one vip and many listeners... just somehow convoluted through Atlas :-) | 22:23 |
sbalukoff | So, you can use the standard linux process measures to separate load (eg. things like resident memory, % CPU usage, etc.) But to counter your question: How do you separate load if it's all one haproxy instance? | 22:24 |
blogan | xgerman: yes you can blame Jorge | 22:24 |
sbalukoff | johnsom_: Can you be more explicit in what you mean by 'instance' in this case? | 22:25 |
johnsom_ | See my above statement. Linux process tools can't see outside the VM, so they have no concept of the hardware load beyond the "stolen cpu time" | 22:25 |
sbalukoff | johnsom_: I'm still not quite following you: How is this any different if we're using multiple haproxy instances? | 22:26 |
blogan | you mean a single haproxy instance? | 22:27 |
johnsom_ | With one HAProxy instance running per VM nova and neutron provision/schedule the VM to have a performance floor. I.e. minimum guaranteed level of performance. | 22:27 |
sbalukoff | blogan: Who is "you"? | 22:27 |
blogan | lol you | 22:27 |
xgerman | + we know the memory footprint in advance | 22:27 |
sbalukoff | johnsom_: Right... again... so, that one haproxy instance is servicing, say, requests on port 80 and port 443 for a single VIP, right? | 22:28 |
xgerman | in our current world, yes | 22:28 |
sbalukoff | johnsom_: So how would this be different if a separate haproxy instance were used for port 80 and one for port 443? | 22:28 |
sbalukoff | blogan: I think I used the words I intended there. | 22:29 |
johnsom_ | With multiple instances per VM you can't guarantee that the load balancer the customer allocated will get any level of service if instance B eats the performance allocated to the VM. | 22:29 |
blogan | sbalukoff: yeah im pretty sure you did now too | 22:29 |
johnsom_ | Take the customer perspective of what they are purchasing | 22:30 |
sbalukoff | johnsom_: I think you're arguing against having more than one loadbalancer (ie. VIP) per VM. And that's fine. It will be a configurable option that you can use in your environment. | 22:30 |
sbalukoff | johnsom_: In your scenario, you're still selling the customer a single service that is delivering service to port 80 and port 443 clients. | 22:30 |
sbalukoff | What I'm proposing doesn't change that. It just does it in two separate processes instead of one for various reasons I've previously alluded to. | 22:31 |
johnsom_ | That is cool and good to know. Now, can I have multiple listeners per load balancer? That is the flip side to that statement. If that is configurable I think we will have the flexibility for all of the topologies. | 22:31 |
sbalukoff | So, again, I think you're arguing against multiple loadbalancers (VIPs) per VM, and not really making a case for multiple listeners per HAproxy instance. | 22:32 |
sbalukoff | johnsom_: Yes, I mentioned that above. | 22:32 |
sbalukoff | A VM *may* have multiple loadbalancers (VIPS) (but doesn't have to) | 22:32 |
sbalukoff | A loadbalancer *may* have multiple listeners (but doesn't have to) | 22:33 |
*** vjay3 has joined #openstack-lbaas | 22:33 | |
sbalukoff | A listener may have multiple pools (but doesn't have to) | 22:33 |
sbalukoff | A pool may have multiple members (but doesn't have to) | 22:33 |
blogan | i think he means multiple listeners in the same haproxy instance | 22:33 |
blogan | but i've been wrong before, i should probably just shutup | 22:33 |
sbalukoff | Argh... | 22:33 |
johnsom_ | blogan: +1 | 22:33 |
xgerman | yeah, the last thing worries me. Right now we have special nova flavors (aka the cheapest vm settings to run haproxy) and this makes it unpredictable | 22:33 |
sbalukoff | Dammit, please be very specific in your terminology! | 22:33 |
sbalukoff | xgerman: How? | 22:34 |
sbalukoff | How does it make any lick of a difference for performance if all the listeners on a single loadbalancer (VIP) are serviced by one process versus several processes? | 22:34 |
xgerman | if one of my users goes crazy and makes like 100 haproxy instances on the same VM to listen on each port | 22:35 |
sbalukoff | xgerman: Right, he'll have the same problem if it's one haproxy instance. | 22:35 |
sbalukoff | Again, NO DIFFERENCE. | 22:35 |
xgerman | We said earlier there was a memory footprint difference between the two models | 22:36 |
johnsom_ | sbalukoff: context switching load, memory overhead, and it breaks the customer's "model" of what they are purchasing. | 22:36 |
sbalukoff | xgerman: Not significantly. | 22:36 |
sbalukoff | Seriously, a single haproxy instance doesn't take that much RAM. | 22:36 |
xgerman | well, we have very small machines :-) | 22:37 |
sbalukoff | You can't spare an extra 5-10MB per listener? | 22:37 |
xgerman | we can chip in a few but not unlimited | 22:38 |
xgerman | so at least there should be a configurable limit | 22:38 |
blogan | so would making this a configurable option be something that makes doing either one drastically harder? | 22:39 |
sbalukoff | xgerman: What kind of limit are you talking about? Number of listeners per loadbalancer? | 22:39 |
johnsom_ | What would the horizon UI look like. One load balancer per VIP:PORT pair? | 22:39 |
xgerman | sbalukoff, I just want to make sure we don't run out of RAM | 22:39 |
sbalukoff | blogan: It's a lot simpler to gather usage data, etc. if you're dealing with an haproxy config that only knows about one listener and its associated backends. | 22:40 |
sbalukoff | So practically speaking, it's actually quite a bit more complicated to have multiple listeners per haproxy instance. | 22:40 |
sbalukoff | And makes a lot of other tasks more complicated. | 22:40 |
sbalukoff | xgerman: So... | 22:40 |
sbalukoff | On the RAM thing: | 22:40 |
sbalukoff | In our experience, the higher-volume instances end up chewing up more RAM anyway. | 22:41 |
sbalukoff | The amount lost to having separate processes is negligible compared to that. | 22:41 |
sbalukoff | Also, I don't think we're losing much in terms of context switching either... a single haproxy instance needs to be doing this internally to service 10's of thousands of connections per second anyway. | 22:41 |
sbalukoff | So again, the overhead in terms of resources for doing it this way is small. | 22:42 |
xgerman | agreed. But I don't like to give the customers the opportunity to go crazy, make a 100 listeners, and tank the VM | 22:42 |
johnsom_ | sbalukoff: is the billing model per VIP:PORT pair or per load balancer? Maybe that is the disconnect. | 22:42 |
sbalukoff | And is totally worth it for the gains we get with more simple configuration, separation, and troubleshooting. | 22:42 |
sbalukoff | johnsom_: Actually, we hadn't discussed the billing model yet. | 22:42 |
sbalukoff | But! | 22:42 |
blogan | johnsom_: i think thats up to the operator, octavia just needs to provide the stats | 22:43 |
blogan | my opinion | 22:43 |
sbalukoff | It's actually quite common to bill more for terminated TLS connections. | 22:43 |
sbalukoff | Because that alone requires considerably more resources. | 22:43 |
johnsom_ | blogan: +1 | 22:43 |
sbalukoff | But yes, it is up to the operator. | 22:43 |
sbalukoff | And stats are easier to come by with simpler configs. :) | 22:43 |
blogan | sbalukoff: we currently bill extra only for the uptime of a loadbalancer with termination | 22:43 |
johnsom_ | I think by not allowing multiple listener per HAProxy instance we are limiting the operator's ability to use alternate billing models. | 22:44 |
xgerman | we don't offer SSL termination... | 22:44 |
blogan | but that doesn't mean it can't change | 22:44 |
sbalukoff | johnsom_: I disagree | 22:44 |
sbalukoff | I think that multiple listeners per HAProxy instance doesn't actually affect the billing problem significantly. | 22:44 |
blogan | as long as Octavia provides the stats broken down by loadbalancer, and each listener, then I don't think it would change much with billing either | 22:45 |
blogan | and whatever else people need since the stats requirements hasn't been discussed at length | 22:46 |
dougwig | good lord, i was typing up one email, and now there's a novel waiting for me here. | 22:46 |
blogan | dougwig: enjoy | 22:47 |
sbalukoff | Again, I'm going to say that it's actually easier to have more flexibility in billing with multiple instances because it's easier to get the stats necessary. :/ | 22:47 |
sbalukoff | dougwig: Sorry! | 22:47 |
johnsom_ | If I bill for bandwidth, per load balancer exposed in horizon, and those load balancers have multiple ports, the operator would have to aggregate that data across all of the instances to report billing in the same model the customer sees in horizon. Where if the operator has the option to have multiple listeners per instance there is no aggregation required. | 22:47 |
sbalukoff | johnsom_: The operator is going to have to do something similar anyway when an active-active topology is used. | 22:48 |
sbalukoff | Also, when haproxy restarts, it loses its stats. So it's probably best to rely on something like iptables for accounting for raw bandwidth numbers. | 22:48 |
blogan | shouldn't it be Octavia's job to do the aggregation of stats in a consumable format? | 22:48 |
sbalukoff | And well... it's simple to aggregate. But how are you going to provide a breakdown, say, if you charge a different bandwidth rate for terminated SSL traffic versus non-terminated SSL traffic? | 22:49 |
sbalukoff | Combining this data actually removes options the operator might like for billing. | 22:49 |
sbalukoff | blogan: Yes, I think so, too. | 22:49 |
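A sketch of the aggregation blogan is proposing Octavia own: roll per-listener counters up to the load balancer while keeping the per-listener breakdown available (so HTTP vs HTTPS can still be billed or graphed separately). Field names are assumptions, not an existing Octavia schema:

```python
from collections import defaultdict


def aggregate_stats(listener_stats):
    """Roll per-listener byte counters up to their load balancer.

    listener_stats: iterable of dicts like
        {'loadbalancer_id': ..., 'listener_id': ..., 'bytes_in': ..., 'bytes_out': ...}
    Returns {loadbalancer_id: {'bytes_in': total, 'bytes_out': total,
                               'listeners': {listener_id: per-listener dict}}}
    """
    totals = defaultdict(lambda: {'bytes_in': 0, 'bytes_out': 0, 'listeners': {}})
    for stat in listener_stats:
        lb = totals[stat['loadbalancer_id']]
        lb['bytes_in'] += stat['bytes_in']
        lb['bytes_out'] += stat['bytes_out']
        lb['listeners'][stat['listener_id']] = stat
    return dict(totals)
```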
sbalukoff | Also, our customers like seeing graphs which show their HTTP versus HTTPS bandwidth usage. | 22:50 |
blogan | we currently track them separately as well | 22:51 |
blogan | and we also currently have to deal with the stats being reset when an "instance" is restarted | 22:52 |
blogan | and its a pain in the ass | 22:52 |
sbalukoff | blogan: That problem is reduced if only the instance servicing a single listener gets restarted, rather than all the listeners for a given loadbalancer (VIP). | 22:53 |
blogan | reduced maybe, but if we're still using haproxy to get the stats, it's still something that has to be accounted for | 22:53 |
xgerman | well, if you send the stats every minute you are just losing a minute or so | 22:53 |
sbalukoff | blogan: True. | 22:53 |
blogan | which is why I'm all for staying away from that | 22:53 |
xgerman | so I am not convinced that this is a deciding factor | 22:54 |
sbalukoff | Well, for raw bandwidth, we use iptables. It's handy, doesn't get reset when instances get reset, and is as reliable as the machine itself. | 22:54 |
blogan | xgerman: if they're counters of bandwidth that continue to increase and you have to take the delta of the counter at different time points to determine the bandwidth that was done in that time, thats when resets cause issues | 22:54 |
xgerman | my proposal is that the Octavia VM always sends deltas... | 22:55 |
sbalukoff | And frequent resets (which will certainly happen when automated auto-scaling is in use) mean that the data loss there is not insignificant. | 22:55 |
johnsom_ | Next question as I work through this... Would the Octavia VM be limited to one customer, or would multiple customers' haproxy instances be stacked on one Octavia VM? | 22:55 |
dougwig | and if you miss a delta? counters that reset to zero are the NORM in management circles. | 22:55 |
blogan | xgerman: i guess the point is that using haproxy to get the stats will require some code to deal with the resets | 22:55 |
xgerman | yep | 22:56 |
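The "code to deal with the resets" blogan mentions is small but easy to get wrong. A minimal sketch of SNMP-style counter handling, where a counter that goes backwards is treated as a restart; whatever was counted between the last sample and the restart is simply lost, which is the data-loss concern raised above (and one reason sbalukoff prefers iptables counters for raw bandwidth):

```python
def counter_delta(previous, current):
    """Return the bytes transferred between two counter samples.

    haproxy's stats counters increase monotonically but reset to zero
    whenever the process restarts. If the counter went backwards we
    assume a restart, so only the bytes counted since the restart are
    recoverable; anything between the previous sample and the restart
    is lost.
    """
    if previous is None:
        return 0                 # first sample, nothing to compare against
    if current >= previous:
        return current - previous
    return current               # reset detected: count only post-restart bytes
```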
sbalukoff | johnsom_: Initially, a VM would be used for just one tenant. | 22:56 |
sbalukoff | johnsom_: That's the simplest way to start. | 22:56 |
blogan | sbalukoff: not only that, race conditions have caused some serious overbilling which is the worst problem to have | 22:56 |
sbalukoff | But it doesn't mean we can't revisit that later on. | 22:56 |
johnsom_ | I think we would have "security" push back with multiple tenants per VM | 22:56 |
xgerman | +1 | 22:56 |
sbalukoff | johnsom_: That's why we're saying one tenant per VM for now. | 22:56 |
sbalukoff | It's in theory a solvable problem using network namespaces. | 22:57 |
* dougwig wonders how many topics are being inter-leaved right now. | 22:57 | |
sbalukoff | But... since most of us can just pass the cost for inefficiency on to the customer anyway... it's not something we feel highly motivated to solve. | 22:57 |
*** orion_ has joined #openstack-lbaas | 22:57 | |
dougwig | sbalukoff: the namespaces have been a performance issue for neutron, haven't they? | 22:57 |
sbalukoff | dougwig: I think that's mostly OVS being a performance problem. | 22:57 |
sbalukoff | Namespaces haven't been bad, from our perspective. | 22:57 |
sbalukoff | But OVS is a piece of crap. | 22:58 |
sbalukoff | ;) | 22:58 |
johnsom_ | We get namespace management from neutron. I wouldn't want to duplicate that complexity. | 22:58 |
sbalukoff | johnsom_: Agreed! | 22:58 |
sbalukoff | So yes, at least until someone feels inclined for a *lot* of pain, we're going to go with "a single Octavia VM can be used with a single tenant" | 22:58 |
dougwig | the other thing that sucks about deltas is when your collector is down, btw. | 22:59 |
sbalukoff | And even if that changes, it would be a configurable option. | 22:59 |
sbalukoff | I don't really see that changing, though. | 22:59 |
sbalukoff | dougwig: +1 | 22:59 |
blogan | dougwig: yes indeed | 22:59 |
johnsom_ | sbalukoff: we agree on one tenant per Octavia VM | 22:59 |
dougwig | i spent far too many years doing network management crap. the resetting counters might be one form of pain, but it's less than deltas. | 22:59 |
dougwig | and i'd expect ceilometer or the like to deal with counters natively and hide all that. if not, i'll be shocked. | 23:00 |
xgerman | yeah, we probably need to explore that more | 23:00 |
sbalukoff | dougwig: It's been a part of MRTG and rrdtool for over 15 years... so.. if they don't we can always tell them to please get with the program and join the 1990's. | 23:00 |
dougwig | it's in those because it's been a part of every SNMP MIB since that protocol was invented. | 23:01 |
sbalukoff | johnsom_: Yes, we agree on that. | 23:01 |
blogan | so is the only disagreement still the one listener per instance? | 23:01 |
sbalukoff | blogan: I think so. | 23:01 |
sbalukoff | And I'm being damned stubborn because I'm not really hearing good technical reasons not to go with this. :) | 23:02 |
johnsom_ | Yeah, I think we need to think about that one more. | 23:02 |
dougwig | so if i have a web server that listens on port 80 and 8080, and via ipv4 and ipv6, i'd have to have FOUR load balancers? | 23:02 |
dougwig | sorry, FOUR instances? | 23:02 |
sbalukoff | dougwig: Again, please be careful with your terminology here. | 23:02 |
sbalukoff | Haha | 23:02 |
sbalukoff | four haproxy instances. Yes. | 23:03 |
blogan | sbalukoff: i semi-disagree about it being more complex to make it configurable because with the right design it wouldn't matter, though implementing will almost always dig up issues not thought of | 23:03 |
sbalukoff | All of which consume not much RAM. | 23:03 |
johnsom_ | sbalukoff: stubborn is ok, that is part of coming to a good answer, but I disagree that there haven't been a few technical reasons in the negative column. | 23:03 |
dougwig | holy shit, what a colossal waste. | 23:03 |
sbalukoff | dougwig: Really? | 23:03 |
sbalukoff | Why? | 23:03 |
dougwig | it's probably an extra 1-2GB of ram for that example, right? | 23:03 |
dougwig | wait. are these VM instances, or haproxy instances? | 23:03 |
sbalukoff | You mean, 5-10MB? | 23:03 |
xgerman | haproxy instances | 23:03 |
sbalukoff | dougwig: haproxy processes. | 23:03 |
dougwig | hahahahaha. | 23:03 |
dougwig | fuck me. | 23:03 |
dougwig | i could care less how many processes we run. | 23:04 |
xgerman | yeah, I am already running on a tight RAM budget | 23:04 |
sbalukoff | I could! | 23:04 |
sbalukoff | Because multiple processes actually simplify a lot of aspects of writing the code, maintenance and troubleshooting. They also make individual listeners more resilient. | 23:04 |
sbalukoff | All of that for a negligible amount of extra RAM. :/ | 23:05 |
dougwig | it also adds context switching waste, which will matter for high-throughput loads. but not enough to matter for 0.5, 1.0, or even 2.0. | 23:05 |
sbalukoff | dougwig: Not much waste there... a single haproxy instance is essentially doing all that heavy lifting internally anyway, servicing tens of thousands of simultaneous connections at once. | 23:06 |
sbalukoff | The OS context switching is, again, negligible compared to that. | 23:06 |
johnsom_ | dougwig: it depends on how many haproxy instances we stack on one Octavia VM | 23:06 |
dougwig | not if it's using epoll correctly; run enough processes, and switching those around might well take up more time than watching sockets. | 23:06 |
sbalukoff | johnsom_: Which is going to be a configurable option. ;) | 23:06 |
dougwig | kernel time, not user time. | 23:06 |
johnsom_ | dougwig: +1 | 23:07 |
xgerman | yeah, unless we throw cpus at the problem (and we are tight on those, too) | 23:07 |
*** dlundquist has joined #openstack-lbaas | 23:07 | |
xgerman | we usually only give out one nova cpu | 23:07 |
dougwig | which should be plenty for a non-ssl byte pump. | 23:08 |
xgerman | yep | 23:08 |
dougwig | if one listener per haproxy simplifies things, let's do it for 0.5, and then get some data on if it matters? | 23:08 |
sbalukoff | dougwig: +1 | 23:09 |
dougwig | we're in stackforge, we can try stuff that isn't the end perfect solution. | 23:09 |
xgerman | yeah, we need to do some research. So far we have been speculating | 23:09 |
dlundquist | It might make a significant difference on some embedded platforms like older ARMs, but I don't think it would be noticeable on a modern x86 | 23:09 |
dougwig | raspberry pi octavia demo! | 23:10 |
*** sbfox has quit IRC | 23:10 | |
*** sbfox has joined #openstack-lbaas | 23:10 | |
sbalukoff | xgerman: Our current load balancer product uses this topology. For our part, it isn't just speculation going into this. ;) | 23:10 |
sbalukoff | dougwig: Haha! | 23:10 |
*** orion_ has quit IRC | 23:10 | |
sbalukoff | dougwig: You make nova capable of spinning up instances on a raspberry pi, and I'll run that demo for you. ;) | 23:11 |
johnsom_ | So in horizon the customer would see one load balancer per VIP:PORT pair? | 23:11 |
xgerman | well, we are thinking of using Atom chips | 23:11 |
dougwig | qemu can already emulate it, i'm sure. | 23:11 |
sbalukoff | johnsom_: Again, please be careful with terminology here. | 23:11 |
*** orion_ has joined #openstack-lbaas | 23:11 | |
sbalukoff | What do you mean by 'load balancer' in that statement? | 23:11 |
dlundquist | Using one process would allow us to share pools... | 23:11 |
xgerman | sbalukoff, what RAM/CPU are you running? | 23:11 |
sbalukoff | dlundquist: Except I think sharing pools is off the table. | 23:11 |
johnsom_ | sbalukoff: I was being very specific. "load balancer" as is listed in horizon. | 23:12 |
dougwig | atoms, really? they excel at low-power applications. i'd think LB would benefit greatly in a cost/performance ratio from hyper-threading? | 23:12 |
xgerman | well, horizon for v2 isn't developed yet | 23:12 |
johnsom_ | Right | 23:12 |
sbalukoff | xgerman: Our typical load balancer "appliances" run with 16GB of RAM, and 8 cores. But we'll put dozens of VIPs on these. | 23:12 |
xgerman | we run 1 GB/1 VCPU | 23:12 |
xgerman | as a nova instance | 23:13 |
xgerman | dougwig: http://h17007.www1.hp.com/us/en/enterprise/servers/products/moonshot/index.aspx | 23:13 |
rm_work | lol moonshot | 23:14 |
*** vjay3 has quit IRC | 23:14 | |
xgerman | yeah, everything needs to run on moonshot - we don't want people not to buy those servers because they can't run LBaaS | 23:14 |
dougwig | xgerman: how many novas do you put on one of those blades? | 23:14 |
dougwig | xgerman: in that case, i think you need to send us all some sample units. ;-) | 23:15 |
xgerman | well, moonshot will be in our future data centers | 23:15 |
sbalukoff | Heh! | 23:15 |
xgerman | right now we have different hardware | 23:15 |
*** orion_ has quit IRC | 23:15 | |
sbalukoff | xgerman: Again, I remain unconvinced that very slightly less RAM usage and very slightly less CPU usage is going to make or break this product. | 23:16 |
sbalukoff | And I'm certainly unconvinced that it's worth sacrificing a simpler design for such considerations. :/ | 23:17 |
rm_work | I just like the naming | 23:17 |
rm_work | reminds me of Moonraker | 23:17 |
xgerman | sbalukoff, those are operator concerns: Can we run it the cheapest way possible | 23:18 |
rm_work | we're hoping to be able to run on an LXC based solution, I think | 23:18 |
rm_work | but no idea how far out that is | 23:18 |
xgerman | Yeah, without load tests we won't know | 23:18 |
sbalukoff | xgerman: I would argue that quicker to market, simpler troubleshooting, faster feature development, etc. make the simpler design the "cheaper" solution. | 23:19 |
xgerman | for sure what the hardware/software does | 23:19 |
dlundquist | I don't see any reason we couldn't make this a configuration option if there is demand for both. | 23:19 |
xgerman | dlundquist +1 : my problem is I don't really know the performance implications without trying it out | 23:20 |
*** sbfox has quit IRC | 23:20 | |
dlundquist | xgerman: we could easily benchmark that outside of octavia | 23:21 |
blogan | alright im out of here | 23:21 |
sbalukoff | I think making that particular design feature a configurable option just gets us the worst of both worlds: We don't actually get simpler code or design. | 23:21 |
xgerman | yep, so maybe that should be the action item for this week :-) | 23:21 |
xgerman | benchmark and report back | 23:21 |
*** crc32 has joined #openstack-lbaas | 23:21 | |
blogan | sbalukoff: i think if designed for both the code would be simpler and the design may not be that much more complex | 23:22 |
xgerman | +1 | 23:22 |
blogan | but I really don't know until its tried | 23:22 |
xgerman | I don't see that many code complications | 23:22 |
xgerman | yeah, the devil is in the details | 23:22 |
sbalukoff | blogan: Really? You'd have to make the code handle both cases. You don't get the advantage of being able to assume that a single haproxy instance is in charge of one listener (which has repercussions throughout the rest of the tasks that need to be accomplished, from stats gathering to status information...) | 23:23 |
blogan | well handling two different possible locations and configurations could definitely complicate it | 23:23 |
blogan | sbalukoff: if we're just mapping loadbalancer listener combinations to some end value, then it would work for both ways | 23:23 |
sbalukoff | blogan: No, you can make assumptions about paths pretty easily. | 23:24 |
sbalukoff | Er... | 23:24 |
sbalukoff | Sorry, yes. | 23:24 |
blogan | but like I said i know there will be issues that pop up that I'm not thinking about that would complicate it | 23:24 |
blogan | but the same goes with the other way | 23:25 |
sbalukoff | Yes, I'm sorry I'm apparently not articulating that very well. | 23:25 |
blogan | so I still say we just do the single listener per haproxy instance for 0.5 and see how that goes | 23:26 |
blogan | if it becomes a major problem, then spend the time necessary to allow it to be configurable | 23:26 |
sbalukoff | Right. | 23:26 |
dlundquist | If we are going seriously consider this, we should get a comparison doc together with pros and cons of each and benchmarks. | 23:26 |
sbalukoff | xgerman: I think doing the benchmarks would be worthwhile. | 23:26 |
sbalukoff | Though I'm reluctant to sign anyone in particular up for that work. :P | 23:27 |
xgerman | yeah, agreed. | 23:27 |
blogan | dlundquist: i say we seriously consider it after 0.5 | 23:27 |
sbalukoff | blogan: So part of the point of 0.5 is to get the back-end Octavia API mostly baked and in place, with few changes (hopefully) | 23:28 |
dlundquist | blogan: I agree, but think we should be methodical about it if we consider running only a single haproxy process per instance | 23:28 |
sbalukoff | This discussion actually affects that. | 23:28 |
xgerman | dlundquist +1 | 23:28 |
xgerman | we should write that up + our reasoning for the ML | 23:28 |
sbalukoff | Does anyone here want to actually run some benchmarks? | 23:29 |
*** VijayB_ has quit IRC | 23:29 | |
blogan | sbalukoff: thats a good goal for sure, but changes are going to happen regardless | 23:29 |
sbalukoff | blogan: Yeah. This just seems... more fundamental to me. :/ | 23:29 |
dlundquist | I'm a bit concerned about the lack of fault isolation, and since generally epoll loops are single threaded one could achieve better utilization across multiple thread units. | 23:29 |
blogan | you're a fundamentalist! | 23:29 |
sbalukoff | Haha! | 23:30 |
blogan | im leaving for reals now | 23:30 |
sbalukoff | Have a good one! | 23:30 |
blogan | oh yeah | 23:30 |
blogan | people should raise concerns about the object model discussion on the ML now | 23:30 |
sbalukoff | Yes. | 23:30 |
sbalukoff | Yes, indeed! | 23:30 |
*** sbfox has joined #openstack-lbaas | 23:32 | |
*** VijayB has joined #openstack-lbaas | 23:35 | |
dougwig | if +cpu and +ram will kill this product, it shouldn't be in python. :) | 23:36 |
dougwig | keep it simple. iterate. | 23:36 |
xgerman | haproxy is not in python | 23:36 |
xgerman | we have more cpu/RAM budget for control servers :-) | 23:36 |
*** vjay3 has joined #openstack-lbaas | 23:43 | |
*** vjay3 has quit IRC | 23:48 | |
*** vjay4 has joined #openstack-lbaas | 23:52 |