rm_work | sbalukoff: responding to … patchset two of your haproxy amp api spec | 00:03 |
rm_work | <_< | 00:03 |
sbalukoff | :) | 00:03 |
sbalukoff | As in... I uploaded a new patchset while you were still making comments on the old one? | 00:03 |
sbalukoff | Sorry! | 00:04 |
rm_work | well, i was responding to people's comments, not directly the spec | 00:04 |
sbalukoff | I tried to time adding my responses to patchset 2 so they'd be quickly followed by the upload of patchset 3. | 00:04 |
rm_work | so hard to do that on the current one | 00:04 |
rm_work | yeah, mostly responding to your responses :P | 00:04 |
sbalukoff | Yeah, I hate how gerrit doesn't carry these things forward effectively. | 00:04 |
rm_work | well, to be fair, 99% of the time it wouldn't make sense | 00:05 |
rm_work | it's only when we're discussing concepts/reasoning, and not the actual lines | 00:05 |
rm_work | sbalukoff: I think I disagreed with you a startling amount on that one :P | 00:07 |
sbalukoff | Sweet! | 00:07 |
rm_work | oh, though actually I need to figure out exactly how HAProxy works | 00:07 |
sbalukoff | Can I call you a doody head and fling mud at you? | 00:07 |
rm_work | I think someone led me to believe that you can pass config into a running HAProxy via a socket so you never have to actually write out a config? | 00:07 |
rm_work | and I don't know that I've ever managed to verify that this isn't a complete fabrication :P | 00:07 |
sbalukoff | Um... I've never seen that. | 00:07 |
rm_work | right | 00:08 |
rm_work | I couldn't find it in the docs | 00:08 |
johnsom | Wow, must be Friday.... | 00:08 |
rm_work | so I have to track down who led me to believe that was possible, and either get them to show me wtf they're talking about, or… maybe get the HAProxy guys to actually make it a thing :P | 00:08 |
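For reference, HAProxy's stats/admin socket (the feature being discussed here) accepts runtime commands such as "show info", "show stat" or "set server ... state", but it has no command for loading a whole new configuration; a full reload still needs a config file. A minimal sketch of talking to that socket, assuming it is bound at /var/run/haproxy.sock:

```python
# Minimal sketch: send one runtime command to HAProxy's stats/admin socket.
# The socket path is an assumption (it is set with "stats socket" in haproxy.cfg).
# Commands like "show info" or "show stat" are runtime queries; there is no
# socket command here for pushing a whole new configuration.
import socket


def haproxy_cmd(command, sock_path="/var/run/haproxy.sock"):
    """Send a single command and return HAProxy's text response."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(sock_path)
    sock.sendall((command + "\n").encode())
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:  # HAProxy closes the connection after answering
            break
        chunks.append(data)
    sock.close()
    return b"".join(chunks).decode()


if __name__ == "__main__":
    print(haproxy_cmd("show info"))
```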
sbalukoff | rm_work: Why is this important? | 00:09 |
sbalukoff | To avoid writing stuff to disk? | 00:09 |
rm_work | sbalukoff: it relates to a bunch of stuff | 00:09 |
rm_work | with regard to configuration handling | 00:09 |
rm_work | and TLS Data Security | 00:09 |
rm_work | all the stuff about a memory disk... | 00:09 |
rm_work | and about encryption | 00:09 |
rm_work | and blah | 00:09 |
rm_work | and a few of the config POST/DELETE endpoints on the amp API | 00:10 |
sbalukoff | Hmmm... | 00:10 |
rm_work | if we never write any of it to disk, a lot of that stuff just goes *poof* | 00:10 |
johnsom | HAProxy has a really neat way of switching configs in, so I don't think most people would need the config via socket | 00:10 |
sbalukoff | Yeah, but then process management becomes more difficult. | 00:10 |
rm_work | which I think is ok, because if HAProxy needs to restart, we don't want to restart it, we want to shoot the amphora and make a new one :P | 00:10 |
sbalukoff | johnsom: Yep, the way you restart haproxy is pretty cool, actually. | 00:11 |
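The restart trick being referred to is HAProxy's soft reload: start a new process against the new config and pass the old PID(s) via -sf, so the old process stops listening, drains its existing connections, and exits. A rough sketch of what an amphora agent might run, with example paths rather than Octavia's actual layout:

```python
# Rough sketch of HAProxy's soft reload: start a new process on the new config
# and hand it the old PID(s) with -sf, so the old process stops accepting new
# connections, drains the ones it has, and exits. Binary, config and pidfile
# paths are examples.
import subprocess

HAPROXY_BIN = "/usr/sbin/haproxy"
CONFIG = "/etc/haproxy/haproxy.cfg"
PIDFILE = "/var/run/haproxy.pid"


def reload_haproxy():
    try:
        with open(PIDFILE) as f:
            old_pids = f.read().split()
    except IOError:
        old_pids = []  # first start: nothing to hand over
    cmd = [HAPROXY_BIN, "-f", CONFIG, "-p", PIDFILE]
    if old_pids:
        cmd += ["-sf"] + old_pids  # soft-stop the old process(es)
    subprocess.check_call(cmd)


if __name__ == "__main__":
    reload_haproxy()
```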
rm_work | so we shouldn't care if the config is persistent | 00:11 |
sbalukoff | rm_work: I'm not sure I agree with you there. | 00:11 |
rm_work | yeah that is a thing that should be discussed | 00:11 |
sbalukoff | rm_work: Keep in mind that occasionally an operator may need to log into one of these to troubleshoot shit. | 00:11 |
rm_work | let me see if i can find that etherpad for "discussions to have" | 00:11 |
rm_work | sbalukoff: errrrr | 00:11 |
rm_work | sbalukoff: i hope not | 00:12 |
rm_work | I think we were planning to not even allow SSH access | 00:12 |
sbalukoff | rm_work: Me too, but I live in a real world where sometimes that needs to happen. | 00:12 |
rm_work | T_T | 00:12 |
sbalukoff | rm_work: In general yes, that's the case. | 00:12 |
rm_work | but I *like* candyland | 00:12 |
*** SumitNaiksatam has joined #openstack-lbaas | 00:13 | |
sbalukoff | But I can guarantee you I'm going to get requests to have a 'debug' image that operators can log into to troubleshoot stuff. Because sometimes it's simply the only way to figure out what the heck is going on. | 00:13 |
sbalukoff | Haha | 00:13 |
rm_work | actually, our ops guy specifically said "I don't ever want to have to log into one of these Amphorae to troubleshoot anything" | 00:13 |
rm_work | that is nearly word for word | 00:13 |
sbalukoff | Right. Until his boss is on his ass to fix a problem with a big, important client and you tell him you can't help him unless he gets you more data. | 00:14 |
rm_work | I wish we could get some time from him at the meetup but he's stuck in SA | 00:14 |
sbalukoff | Then he's going to want to log in. | 00:14 |
johnsom | Just because I don't ever want to log into one, I likely will have to | 00:14 |
rm_work | T_T | 00:14 |
sbalukoff | So... having configs on disk and whatnot is not a bad thing. | 00:14 |
sbalukoff | Just keep all that shit on a memory filesystem and encrypt your swap. | 00:14 |
sbalukoff | Boom! Secure (ish)! | 00:15 |
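One way to read "keep it on a memory filesystem": have the image mount a tmpfs for rendered configs and TLS material, and refuse to write them anywhere else. A small illustrative check, where the /mnt/haproxy-ram mount point is an assumption about how such an image would be built:

```python
# Sketch of "configs live in RAM, not on persistent disk": refuse to write the
# rendered haproxy config unless the destination sits on a tmpfs mount. The
# /mnt/haproxy-ram mount point is an assumption about how the image is built.
import os


def is_tmpfs(path):
    """True if the filesystem backing `path` is tmpfs, per /proc/mounts."""
    path = os.path.realpath(path)
    best_len, best_fs = -1, None
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _, mountpoint, fstype = line.split()[:3]
            if path == mountpoint or path.startswith(mountpoint.rstrip("/") + "/"):
                if len(mountpoint) > best_len:
                    best_len, best_fs = len(mountpoint), fstype
    return best_fs == "tmpfs"


def write_config(rendered, dest="/mnt/haproxy-ram/haproxy.cfg"):
    if not is_tmpfs(os.path.dirname(dest)):
        raise RuntimeError("refusing to write the config to persistent storage")
    fd = os.open(dest, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(rendered)
```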
johnsom | Yes, I think the configs should be on disk | 00:15 |
rm_work | yeah, so, my theory is that if something goes wrong on an Amphora -- the system should shoot it and make a new one. if something is wrong with an amphora 6 times in a row -- ok, let's look at it, but by this point we can use some sort of debug image | 00:15 |
sbalukoff | rm_work: That seems reasonable. | 00:15 |
sbalukoff | Logging or even running a debug image should be the exception. | 00:15 |
sbalukoff | As in, we only do this as a very last resort and only when there's lots of money on the line. | 00:16 |
sbalukoff | :) | 00:16 |
johnsom | And beer | 00:16 |
sbalukoff | Oh, yes, beer, too. | 00:16 |
rm_work | I just DO NOT want amphorae to be stateful -- meaning we should build in such a way that we never NEED "an API resource to DELETE the TLS data"… because any update would automatically wipe every part of the old config anyway | 00:16 |
rm_work | re: my comments on patchset 2 | 00:16 |
sbalukoff | rm_work: That's... pretty much how it works when it comes to configuration. | 00:17 |
rm_work | I'm around for like the next … 30m? I think | 00:17 |
rm_work | sbalukoff: well, it was counter to most of the current opinions that i responded to :P | 00:17 |
sbalukoff | Huh. | 00:17 |
sbalukoff | I'll have a look. | 00:17 |
rm_work | yeah | 00:17 |
rm_work | just read my comments and let me know which ones you vehemently disagree with or if there are any that I did a bad job explaining :P | 00:18 |
rm_work | i am having to concentrate really hard to type coherent sentences this late on a Friday, so it's possible I made some logic jumps somewhere without explaining either the beginning, middle, end, or 2-3 of those | 00:18 |
sbalukoff | Ok, so... | 00:18 |
sbalukoff | recycling an amphora is expensive. | 00:18 |
rm_work | it shouldn't be | 00:18 |
rm_work | but I guess that might be a deployment difference | 00:19 |
sbalukoff | And doing *any* configuration change to the amphora (eg. adding a member) means a new config. | 00:19 |
rm_work | yeah... | 00:19 |
sbalukoff | It's expensive for you too, especially if some idiot starts sending updates out at a rate of 10 per second. | 00:19 |
rm_work | so I guess that does depend a lot on the deployer's implementation | 00:19 |
sbalukoff | You're going to burn through your spares pool much quicker than you can replenish it like that. | 00:19 |
rm_work | we are fairly certain amphora recycling is going to be trivially cheap for us | 00:19 |
rm_work | IE, we were tempted to not run with a spares pool at all | 00:20 |
rm_work | 1-2 second boot times for an amphora | 00:20 |
sbalukoff | If you're recycling an amphora with *every* configuration change, I'll bet I can burn through your pool. :) | 00:20 |
rm_work | WITH a spares pool it should not be an issue whatsoever | 00:20 |
rm_work | but I guess that is a deployer specific thing | 00:20 |
johnsom | I also like the idea of respawning haproxy should it fail for some strange reason | 00:20 |
sbalukoff | johnsom: Agreed. | 00:21 |
rm_work | johnsom: if it failed for some strange reason, I don't want to respawn it, i want it dead and gone and replaced T_T | 00:21 |
sbalukoff | Software has bugs. And sometimes a simple respawn means you have fewer of them which cause annoying problems in production. | 00:21 |
johnsom | sbalukoff +1 | 00:22 |
rm_work | if all we're doing is giving it a config, and it has a bug that causes an error, chances are that respawning it with the same config will lead to the same error | 00:22 |
sbalukoff | rm_work: Suppose there's a bug in our haproxy template which keeps haproxy from running for some reason (i.e. passes a syntax check, but results in the process immediately exiting)? | 00:22 |
sbalukoff | rm_work: That's a bad day if it suddenly starts eating a crap-ton of amphorae. | 00:22 |
rm_work | then we're pretty fucked either way :P | 00:22 |
sbalukoff | Not so! | 00:23 |
sbalukoff | You have a chance to catch it and troubleshoot it if you don't immediately recycle the amphora. | 00:23 |
rm_work | well, again, if we notice something like that happening, we would have to switch to debug anyway | 00:23 |
rm_work | and if it didn't happen consistently on each amphora, then recycling was probably the right choice anyway | 00:24 |
johnsom | I'm thinking some edge case where some code issue in HAProxy causes a crash, respawn is a quick fix without a major recycle. If it fails more than x times it really dies and fails over | 00:24 |
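A sketch of that respawn-with-a-limit idea: restart haproxy when it dies, but if it crashes too often in a short window, stop and report the amphora as failed so the controller can recycle it. The limits and the haproxy command line below are illustrative, not the real agent's values:

```python
# Sketch of "respawn on crash, give up after a few failures": restart haproxy
# when it dies, but if it crashes too often in a short window, stop and signal
# failure so the controller can fail over / recycle the amphora.
import subprocess
import time

MAX_FAILURES = 3        # give up after this many crashes...
FAILURE_WINDOW = 60.0   # ...within this many seconds


def supervise(cmd):
    failures = []
    while True:
        subprocess.call(cmd)  # blocks until the haproxy process exits
        now = time.time()
        failures = [t for t in failures if now - t < FAILURE_WINDOW] + [now]
        if len(failures) >= MAX_FAILURES:
            # A real amphora agent would report itself unhealthy here instead.
            raise RuntimeError("haproxy died %d times in %.0fs, giving up"
                               % (len(failures), FAILURE_WINDOW))
        time.sleep(1)  # brief backoff before respawning


if __name__ == "__main__":
    # -db keeps haproxy in the foreground so the call above actually blocks.
    supervise(["/usr/sbin/haproxy", "-f", "/etc/haproxy/haproxy.cfg", "-db"])
```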
sbalukoff | Also: If I can replace an haproxy config and restart in 10ms, why would I want to spend 1-2 seconds (of down time) replacing the amphora? | 00:24 |
rm_work | sbalukoff: well, there shouldn't be any downtime for a single amphora failing | 00:25 |
sbalukoff | How do you figure? | 00:25 |
rm_work | well, in our architecture there isn't :P | 00:25 |
sbalukoff | So, you spin up the new one, re-point a FLIP, and off you go to the races? | 00:25 |
rm_work | ACTIVE-ACTIVE only with a minimum of <n> amphora where n >= 2 | 00:26 |
sbalukoff | Wait, wouldn't a config change then recycle *all* the amphora in the ACTIVE-ACTIVE load balancer setup? | 00:26 |
sbalukoff | (At as close to the same time as possible?) | 00:26 |
*** SumitNaiksatam has quit IRC | 00:27 | |
rm_work | … that is an interesting point that I don't know if we considered or not :P | 00:27 |
sbalukoff | Here's another to consider: | 00:27 |
rm_work | I think we'd probably (on a config change) add each replacement to the pool before deleting the amp it replaced | 00:28 |
sbalukoff | The way haproxy does restarts, it doesn't kill currently open connections. (So your customers can still make changes without interrupting end users who are in the middle of long downloads.) This isn't possible if you're recycling the amphora. | 00:28 |
rm_work | s/pool/HA Set/ | 00:28 |
rm_work | ok, fair | 00:28 |
sbalukoff | And again, it's *really* cheap to restart haproxy. | 00:28 |
rm_work | I'll talk to our ops guy on Monday | 00:28 |
sbalukoff | Much cheaper than recycling an amphora, even with 1-2 second boot times. | 00:29 |
sbalukoff | (Keep in mind that starting an amphora has a bootstrapping process which hits the controller and, I think, barbican.) | 00:29 |
rm_work | Alright, so even if we don't just always go with the "recycle" option, most of my comments are still relevant (I put a lot of "but even if we didn't…" sections) | 00:29 |
sbalukoff | Ok, I'll respond to those. | 00:30 |
xgerman | well, you know there are DDoS attacks for haproxy which make it crash - hence our script has restart in it | 00:30 |
rm_work | sbalukoff: yeah, the hitting Barbican part takes us to 2-4 I think | 00:30 |
rm_work | which is why we opted to have a pool | 00:30 |
sbalukoff | I don't expect the controller to be instantaneous in what it does, either. ;) | 00:30 |
sbalukoff | Especially not the first revision of the code we'll write there. | 00:31 |
xgerman | so let me tell you, if somebody is DDoSing you the last thing you want to do is bring up new amphorae :-) | 00:32 |
sbalukoff | I like the idea of amphorae being *that much* like cattle though: We should try to make them as recyclable as possible, eh. | 00:32 |
sbalukoff | xgerman: Agreed! | 00:32 |
sbalukoff | Especially if the amphora is being used to amplify the attack in some way. | 00:32 |
rm_work | well, regardless, at least my first 5 comments are completely separate from this recycling discussion | 00:33 |
sbalukoff | Got it. | 00:33 |
rm_work | and most of the rest are A/B options | 00:33 |
sbalukoff | Wait... but no -1? | 00:33 |
sbalukoff | I'm hurt, man. | 00:34 |
xgerman | wait until I -2 :-) | 00:34 |
rm_work | i can't -1 when reviewing an old patchset T_T | 00:34 |
rm_work | I tried | 00:34 |
rm_work | T_T | 00:34 |
rm_work | apparently you have to -1 on the CURRENT code…. pfft | 00:35 |
rm_work | arbitrary restrictions :P | 00:35 |
sbalukoff | Heh! | 00:38 |
rm_work | I am honestly wondering if we can in good faith merge ANY of the currently open Spec stories, until we have our meetup T_T | 00:39 |
rm_work | it doesn't stall us completely, as we can still have some discussion, but... | 00:40 |
rm_work | it is a bit annoying/unavoidable I think | 00:40 |
openstackgerrit | Michael Johnson proposed stackforge/octavia: Octavia Controller specification https://review.openstack.org/136540 | 00:44 |
sbalukoff | rm_work: I want people to not be obstructed in getting good work done. And the meetup is about a month away... | 00:45 |
sbalukoff | And I'm also of the opinion that specs can always be revised based on later discussion / things discovered while coding. | 00:46 |
sbalukoff | Especially if part of that spec is becoming part of our "permanent" documentation (as the haproxy amphora API is.) | 00:46 |
sbalukoff | So, I'd rather unblock people from being able to get good work done rather than make them wait for weeks ironing out each and every small detail in a spec. | 00:47 |
johnsom | Very early, needs a bunch of work, but wanted to get something out there before I take a few days off to be invaded by family types. | 00:47 |
sbalukoff | Thanks, johnsom! | 00:47 |
xgerman | the haproxy Amphora API should be internal | 00:47 |
sbalukoff | That's great! I'll have a look, probably early next week. | 00:48 |
johnsom | No rush... grin | 00:48 |
xgerman | I think we decided that if I'd like to make an nginx amphora I would make a new driver, etc. and not integrate at the REST API level | 00:48 |
sbalukoff | xgerman: Yes and no. Users don't see it, but anyone who wants to know how Octavia is supposed to work shouldn't have to try and glean this information directly from the code. | 00:48 |
sbalukoff | xgerman: That's correct. | 00:49 |
xgerman | well, as long as that is clear :-) | 00:49 |
sbalukoff | And if we want to document how the nginx amphora API works in a similar manner, that's perfectly fine by me. | 00:49 |
sbalukoff | I rarely see "too much specific documentation" as being a real problem. :) | 00:49 |
sbalukoff | (So long as that documentation is maintained along with the code.) | 00:50 |
xgerman | well, I just don't want to create an implicit integration point we need to support | 00:50 |
rm_work | yeah, I feel like it might be good to have a second driver/amphora type | 01:00 |
rm_work | just so it's clearer how it is supposed to be done | 01:00 |
sbalukoff | rm_work: Totally just responded to your comments. (in ps 3) | 01:00 |
rm_work | and I think we might learn interesting things about our assumptions for our "generic" interfaces, in the process of actually making an nginx one work | 01:00 |
rm_work | sbalukoff: that's some mental fortitude there | 01:01 |
rm_work | i can never keep everything straight from patchset to patchset | 01:01 |
sbalukoff | Open them up side-by-side | 01:01 |
sbalukoff | Helps a lot. :) | 01:01 |
sbalukoff | That's actually how I usually do incremental reviews. | 01:01 |
sbalukoff | (Just look at the stuff that's changed, along with my comments last time to remember what I said about it then.) | 01:02 |
*** SumitNaiksatam has joined #openstack-lbaas | 01:08 | |
*** mwang2 has quit IRC | 01:10 | |
*** xgerman has quit IRC | 01:11 | |
rm_work | There, I got to -1 this time :P | 01:15 |
rm_work | agree on some, still disagree on others | 01:16 |
rm_work | anywho, gotta go to dinner, people are staring daggers at me for still working | 01:16 |
rm_work | sbalukoff: toodles :P | 01:20 |
sbalukoff | Haha | 01:20 |
sbalukoff | Have a good one! | 01:20 |