Saturday, 2014-11-22

00:03 <rm_work> sbalukoff: responding to … patchset two of your haproxy amp api spec
00:03 <sbalukoff> As in... I uploaded a new patchset while you were still making comments on the old one?
00:04 <rm_work> well, i was responding to people's comments, not directly the spec
00:04 <sbalukoff> I tried to schedule adding my responses to patchset 2 to be quickly followed by the upload of patchset 3.
00:04 <rm_work> so hard to do that on the current one
00:04 <rm_work> yeah, mostly responding to your responses :P
00:04 <sbalukoff> Yeah, I hate how gerrit doesn't carry these things forward effectively.
00:05 <rm_work> well, to be fair, 99% of the time it wouldn't make sense
00:05 <rm_work> it's only when we're discussing concepts/reasoning, and not the actual lines
00:07 <rm_work> sbalukoff: I think I disagreed with you a startling amount on that one :P
00:07 <rm_work> oh, though actually I need to figure out exactly how HAProxy works
00:07 <sbalukoff> Can I call you a doody head and fling mud at you?
00:07 <rm_work> I think someone led me to believe that you can pass config into a running HAProxy via a socket so you never have to actually write out a config?
00:07 <rm_work> and I don't know that I've ever managed to verify that this isn't a complete fabrication :P
00:07 <sbalukoff> Um... I've never seen that.
00:08 <rm_work> I couldn't find it in the docs
00:08 <johnsom> Wow, must be Friday....
00:08 <rm_work> so I have to track down who led me to believe that was possible, and either get them to show me wtf they're talking about, or… maybe get the HAProxy guys to actually make it a thing :P
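[Editor's note: for context on the socket question above — HAProxy's stats socket (the `stats socket` directive) accepts runtime *commands* such as `show info`, `show stat`, and `disable server`, but in the HAProxy 1.5-era documentation there is no command for loading a whole new configuration through it, which matches rm_work not finding it in the docs. A minimal sketch of talking to that socket from Python; the socket path is an assumption, not anything from this discussion:]

```python
import socket

def haproxy_socket_cmd(sock_path: str, command: str) -> str:
    """Send one runtime command to HAProxy's stats socket and return the reply.

    HAProxy answers a single command per connection and then closes it,
    so we read until EOF.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(sock_path)
        s.sendall(command.encode("ascii") + b"\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        return b"".join(chunks).decode("ascii", errors="replace")
    finally:
        s.close()

# Example (requires a running HAProxy configured with
# "stats socket /var/run/haproxy.sock"):
# print(haproxy_socket_cmd("/var/run/haproxy.sock", "show info"))
```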
00:09 <sbalukoff> rm_work: Why is this important?
00:09 <sbalukoff> To avoid writing stuff to disk?
00:09 <rm_work> sbalukoff: it relates to a bunch of stuff
00:09 <rm_work> with regard to configuration handling
00:09 <rm_work> and TLS Data Security
00:09 <rm_work> all the stuff about a memory disk...
00:09 <rm_work> and about encryption
00:09 <rm_work> and blah
00:10 <rm_work> and a few of the config POST/DELETE endpoints on the amp API
00:10 <rm_work> if we never write any of it to disk, a lot of that stuff just goes *poof*
00:10 <johnsom> HAProxy has a really neat way of switching configs in, so I don't think most people would need the config via socket
00:10 <sbalukoff> Yeah, but then process management becomes more difficult.
00:10 <rm_work> which I think is ok, because if HAProxy needs to restart, we don't want to restart it, we want to shoot the amphora and make a new one :P
00:11 <sbalukoff> johnsom: Yep, the way you restart haproxy is pretty cool, actually.
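[Editor's note: the "really neat" config switch referred to here is HAProxy's soft reload — you start a new process with `-sf <old pids>`; once it has bound the listeners, the old processes stop accepting connections but finish serving the ones already open. A sketch of building that command line; the paths are hypothetical:]

```python
def haproxy_reload_argv(cfg_path: str, pid_file: str) -> list:
    """Build argv for an HAProxy soft reload.

    The new process binds the listeners, then tells the PIDs passed via
    -sf to stop accepting new connections and exit once their existing
    connections drain (-st would terminate them immediately instead).
    """
    with open(pid_file) as f:
        old_pids = f.read().split()
    return ["haproxy", "-f", cfg_path, "-p", pid_file, "-sf", *old_pids]

# e.g. haproxy_reload_argv("/etc/haproxy/haproxy.cfg", "/var/run/haproxy.pid")
```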
00:11 <rm_work> so we shouldn't care if the config is persistent
00:11 <sbalukoff> rm_work: I'm not sure I agree with you there.
00:11 <rm_work> yeah that is a thing that should be discussed
00:11 <sbalukoff> rm_work: Keep in mind that occasionally an operator may need to log into one of these to troubleshoot shit.
00:11 <rm_work> let me see if i can find that etherpad for "discussions to have"
00:11 <rm_work> sbalukoff: errrrr
00:12 <rm_work> sbalukoff: i hope not
00:12 <rm_work> I think we were planning to not even allow SSH access
00:12 <sbalukoff> rm_work: Me too, but I live in a real world where sometimes that needs to happen.
00:12 <sbalukoff> rm_work: In general yes, that's the case.
00:12 <rm_work> but I *like* candyland
00:13 *** SumitNaiksatam has joined #openstack-lbaas
00:13 <sbalukoff> But I can guarantee you I'm going to get requests to have a 'debug' image that operators can log into to troubleshoot stuff. Because sometimes it's simply the only way to figure out what the heck is going on.
00:13 <rm_work> actually, our ops guy specifically said "I don't ever want to have to log into one of these Amphorae to troubleshoot anything"
00:13 <rm_work> that is nearly word for word
00:14 <sbalukoff> Right. Until his boss is on his ass to fix a problem with a big, important client and you tell him you can't help him unless he gets you more data.
00:14 <rm_work> I wish we could get some time from him at the meetup but he's stuck in SA
00:14 <sbalukoff> Then he's going to want to log in.
00:14 <johnsom> Just because I don't ever want to log into one, I likely will have to
00:14 <sbalukoff> So... having configs on disk and whatnot is not a bad thing.
00:14 <sbalukoff> Just keep all that shit on a memory filesystem and encrypt your swap.
00:15 <sbalukoff> Boom! Secure (ish)!
00:15 <johnsom> Yes, I think the configs should be on disk
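[Editor's note: sbalukoff's "memory filesystem" suggestion could look something like this fstab fragment — the mount point and size here are illustrative assumptions, not anything decided in this discussion:]

```
# RAM-backed mount for rendered haproxy configs / TLS material;
# contents never touch persistent disk and vanish on reboot
tmpfs  /etc/haproxy  tmpfs  size=16m,mode=0700  0  0
```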
00:15 <rm_work> yeah, so, my theory is that if something goes wrong on an Amphora -- the system should shoot it and make a new one. if something is wrong with an amphora 6 times in a row -- ok, let's look at it, but by this point we can use some sort of debug image
00:15 <sbalukoff> rm_work: That seems reasonable.
00:15 <sbalukoff> Logging in or even running a debug image should be the exception.
00:16 <sbalukoff> As in, we only do this as a very last resort and only when there's lots of money on the line.
00:16 <johnsom> And beer
00:16 <sbalukoff> Oh, yes, beer, too.
00:16 <rm_work> I just DO NOT want amphorae to be stateful -- meaning we should build in such a way that we never NEED "an API resource to DELETE the TLS data"… because any update would automatically wipe every part of the old config anyway
00:16 <rm_work> re: my comments on patchset 2
00:17 <sbalukoff> rm_work: That's... pretty much how it works when it comes to configuration.
00:17 <rm_work> I'm around for like the next … 30m? I think
00:17 <rm_work> sbalukoff: well, it was counter to most of the current opinions that i responded to :P
00:17 <sbalukoff> I'll have a look.
00:18 <rm_work> just read my comments and let me know which ones you vehemently disagree with or if there are any that I did a bad job explaining :P
00:18 <rm_work> i am having to concentrate really hard to type coherent sentences this late on a Friday, so it's possible I made some logic jumps somewhere without explaining either the beginning, middle, end, or 2-3 of those
00:18 <sbalukoff> Ok, so...
00:18 <sbalukoff> recycling an amphora is expensive.
00:18 <rm_work> it shouldn't be
00:19 <rm_work> but I guess that might be a deployment difference
00:19 <sbalukoff> And doing *any* configuration change to the amphora (eg. adding a member) means a new config.
00:19 <sbalukoff> It's expensive for you too, especially if some idiot starts sending updates out at a rate of 10 per second.
00:19 <rm_work> so I guess that does depend a lot on the deployer's implementation
00:19 <sbalukoff> You're going to burn through your spares pool much quicker than you can replenish it like that.
00:19 <rm_work> we are fairly certain amphora recycling is going to be trivially cheap for us
00:20 <rm_work> IE, we were tempted to not run with a spares pool at all
00:20 <rm_work> 1-2 second boot times for an amphora
00:20 <sbalukoff> If you're recycling an amphora with *every* configuration change, I'll bet I can burn through your pool. :)
00:20 <rm_work> WITH a spares pool it should not be an issue whatsoever
00:20 <rm_work> but I guess that is a deployer specific thing
00:20 <johnsom> I also like the idea of respawning haproxy should it fail for some strange reason
00:21 <sbalukoff> johnsom: Agreed.
00:21 <rm_work> johnsom: if it failed for some strange reason, I don't want to respawn it, i want it dead and gone and replaced T_T
00:21 <sbalukoff> Software has bugs. And sometimes a simple respawn means you have fewer of them which cause annoying problems in production.
00:22 <johnsom> sbalukoff +1
00:22 <rm_work> if all we're doing is giving it a config, and it has a bug that causes an error, chances are that respawning it with the same config will lead to the same error
00:22 <sbalukoff> rm_work: Suppose there's a bug in our haproxy template which keeps haproxy from running for some reason (ie. passes a syntax check, but results in the process immediately exiting)?
00:22 <sbalukoff> rm_work: That's a bad day if it suddenly starts eating a crap-ton of amphorae.
00:22 <rm_work> then we're pretty fucked either way :P
00:23 <sbalukoff> Not so!
00:23 <sbalukoff> You have a chance to catch it and troubleshoot it if you don't immediately recycle the amphora.
00:23 <rm_work> well, again, if we notice something like that happening, we would have to switch to debug anyway
00:24 <rm_work> and if it didn't happen consistently on each amphora, then recycling was probably the right choice anyway
00:24 <johnsom> I'm thinking some edge case where some code issue in HAProxy causes a crash, respawn is a quick fix without major recycle. If it fails more than x times it really dies and fails over
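[Editor's note: johnsom's "respawn up to x times, then really die and fail over" idea can be sketched as a small supervisor loop — names and thresholds here are made up for illustration, not from any Octavia spec:]

```python
import subprocess
import time

def supervise(argv, max_restarts=3, backoff=1.0):
    """Respawn a crashed process a bounded number of times.

    A crashed haproxy gets a quick local respawn; only after repeated
    failures is the amphora reported dead so the controller can fail
    it over and recycle it.
    """
    restarts = 0
    while True:
        result = subprocess.run(argv)
        if result.returncode == 0:
            return "exited cleanly"
        restarts += 1
        if restarts > max_restarts:
            return "giving up: fail over the amphora"
        time.sleep(backoff)
```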
00:24 <sbalukoff> Also: If I can replace an haproxy config and restart in 10ms, why would I want to spend 1-2 seconds (of down time) replacing the amphora?
00:25 <rm_work> sbalukoff: well, there shouldn't be any downtime for a single amphora failing
00:25 <sbalukoff> How do you figure?
00:25 <rm_work> well, in our architecture there isn't :P
00:25 <sbalukoff> So, you spin up the new one, re-point a FLIP, and off you go to the races?
00:26 <rm_work> ACTIVE-ACTIVE only with a minimum of <n> amphora where n >= 2
00:26 <sbalukoff> Wait, wouldn't a config change then recycle *all* the amphora in the ACTIVE-ACTIVE load balancer setup?
00:26 <sbalukoff> (At as close to the same time as possible?)
00:27 *** SumitNaiksatam has quit IRC
00:27 <rm_work> … that is an interesting point that I don't know if we considered or not :P
00:27 <sbalukoff> Here's another to consider:
00:28 <rm_work> I think we'd probably (on a config change) add each replacement to the pool before deleting the amp it replaced
00:28 <sbalukoff> The way haproxy does restarts, it doesn't kill currently open connections. (So your customers doing long downloads can still make changes without interrupting their end users.) This isn't possible if you're recycling the amphora.
00:28 <rm_work> s/pool/HA Set/
00:28 <rm_work> ok, fair
00:28 <sbalukoff> And again, it's *really* cheap to restart haproxy.
00:28 <rm_work> I'll talk to our ops guy on Monday
00:29 <sbalukoff> Much cheaper than recycling an amphora, even with 1-2 second boot times.
00:29 <sbalukoff> (Keep in mind that starting an amphora has a bootstrapping process which hits the controller and, I think, barbican.)
00:29 <rm_work> Alright, so even if we don't just always go with the "recycle" option, most of my comments are still relevant (I put a lot of "but even if we didn't…" sections)
00:30 <sbalukoff> Ok, I'll respond to those.
00:30 <xgerman> well, you know there are DDoS attacks for haproxy which make it crash - hence our script has restart in it
00:30 <rm_work> sbalukoff: yeah, the hitting Barbican part takes us to 2-4 I think
00:30 <rm_work> which is why we opted to have a pool
00:30 <sbalukoff> I don't expect the controller to be instantaneous in what it does, either. ;)
00:31 <sbalukoff> Especially not the first revision of the code we'll write there.
00:32 <xgerman> so let me tell you, if somebody is DDoSing you the last thing you want to do is bring up new amphora :-)
00:32 <sbalukoff> I like the idea of amphorae being *that much* like cattle though: We should try to make them as recyclable as possible, eh.
00:32 <sbalukoff> xgerman: Agreed!
00:32 <sbalukoff> Especially if the amphora is being used to amplify the attack in some way.
00:33 <rm_work> well, regardless, at least my first 5 comments are completely separate from this recycling discussion
00:33 <sbalukoff> Got it.
00:33 <rm_work> and most of the rest are A/B options
00:33 <sbalukoff> Wait... but no -1?
00:34 <sbalukoff> I'm hurt, man.
00:34 <xgerman> wait until I -2 :-)
00:34 <rm_work> i can't -1 when reviewing an old patchset T_T
00:34 <rm_work> I tried
00:35 <rm_work> apparently you have to -1 on the CURRENT code…. pfft
00:35 <rm_work> arbitrary restrictions :P
00:39 <rm_work> I am honestly wondering if we can in good faith merge ANY of the currently open Spec stories, until we have our meetup T_T
00:40 <rm_work> it doesn't stall us completely, as we can still have some discussion, but...
00:40 <rm_work> it is a bit annoying/unavoidable I think
<openstackgerrit> Michael Johnson proposed stackforge/octavia: Octavia Controller specification
00:45 <sbalukoff> rm_work: I want people to not be obstructed in getting good work done. And the meetup is about a month away...
00:46 <sbalukoff> And I'm also of the opinion that specs can always be revised based on later discussion / things discovered while coding.
00:46 <sbalukoff> Especially if part of that spec is becoming part of our "permanent" documentation (as the haproxy amphora API is.)
00:47 <sbalukoff> So, I'd rather unblock people from being able to get good work done rather than make them wait for weeks ironing out each and every small detail in a spec.
00:47 <johnsom> Very early, needs a bunch of work, but wanted to get something out there before I take a few days off to be invaded by family types.
00:47 <sbalukoff> Thanks, johnsom!
00:47 <xgerman> the haproxy Amphora API should be internal
00:48 <sbalukoff> That's great! I'll have a look, probably early next week.
00:48 <johnsom> No rush... grin
00:48 <xgerman> I think we decided if I like to make an nginx amphora I would make a new driver, etc. and not integrate at the REST API level
00:48 <sbalukoff> xgerman: Yes and no. Users don't see it, but anyone who wants to know how Octavia is supposed to work shouldn't have to try and glean this information directly from the code.
00:49 <sbalukoff> xgerman: That's correct.
00:49 <xgerman> well, as long as that is clear :-)
00:49 <sbalukoff> And if we want to document how the nginx amphora API works in a similar manner, that's perfectly fine by me.
00:49 <sbalukoff> I rarely see "too much specific documentation" as being a real problem. :)
00:50 <sbalukoff> (So long as that documentation is maintained along with the code.)
00:50 <xgerman> well, I just don't want to create an implicit integration point we need to support
01:00 <rm_work> yeah, I feel like it might be good to have a second driver/amphora type
01:00 <rm_work> just so it's clearer how it is supposed to be done
01:00 <sbalukoff> rm_work: Totally just responded to your comments. (in ps 3)
01:00 <rm_work> and I think we might learn interesting things about our assumptions for our "generic" interfaces, in the process of actually making an nginx one work
01:01 <rm_work> sbalukoff: that's some mental fortitude there
01:01 <rm_work> i can never keep everything straight from patchset to patchset
01:01 <sbalukoff> Open them up side-by-side
01:01 <sbalukoff> Helps a lot. :)
01:01 <sbalukoff> That's actually how I usually do incremental reviews.
01:02 <sbalukoff> (Just look at the stuff that's changed, along with my comments last time to remember what I said about it then.)
01:08 *** SumitNaiksatam has joined #openstack-lbaas
01:10 *** mwang2 has quit IRC
01:11 *** xgerman has quit IRC
01:15 <rm_work> There, I got to -1 this time :P
01:16 <rm_work> agree on some, still disagree on others
01:16 <rm_work> anywho, gotta go to dinner, people are staring daggers at me for still working
01:20 <rm_work> sbalukoff: toodles :P
01:20 <sbalukoff> Have a good one!

Generated by irclog2html 2.14.0 by Marius Gedminas