20:00:06 <xgerman> #startmeeting Octavia
20:00:07 <openstack> Meeting started Wed Aug  5 20:00:06 2015 UTC and is due to finish in 60 minutes.  The chair is xgerman. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:11 <openstack> The meeting name has been set to 'octavia'
20:00:13 <sbalukoff> Howdy folks!
20:00:17 <xgerman> #chair blogan
20:00:18 <openstack> Current chairs: blogan xgerman
20:00:23 <blogan> hi
20:00:26 <minwang2> hi
20:00:29 <xgerman> #topic Announcements
20:00:36 <ajmiller> hello
20:00:40 <xgerman> hi
20:00:46 <johnsom> o/
20:00:46 <madhu_ak> hi
20:01:16 <sbalukoff> Brief announcement: I'm on vacation from tomorrow through Tuesday. So if someone wants to take a stab at Octavia L7 functionality before I get back, I won't be offended. Otherwise, I'll tackle this when I get back.
20:01:33 <xgerman> have a great trip :-)
20:01:39 <sbalukoff> Thanks!
20:02:05 <xgerman> GSLB is now called Kosmos
20:02:20 <sbalukoff> Y'all aren't worried about trademark issues?
20:02:24 <xgerman> johnsom/dougwig anything else from GSLB?
20:02:35 <bana_k> hi
20:02:50 <xgerman> I think we are :-)
20:03:10 <johnsom> The initial API spec is out for review
20:03:15 <sbalukoff> Hah! Ok. Well...  In my brief searching there were a whole bunch of computer-related projects called "cosmos"
20:03:17 <johnsom> Soon to get put in the repo
20:03:29 <xgerman> Kosmos...
20:03:57 <sbalukoff> https://www.kosmoscentral.com/kosmos-esync
20:04:18 <xgerman> mmh
20:04:25 <johnsom> #link https://github.com/gslb/gslb-specs/pull/1/files
20:04:26 <sbalukoff> http://www.kosmos.co.uk/
20:04:50 <sbalukoff> johnsom: Thanks!
20:05:16 <blogan> kosmos kramer
20:05:26 <sbalukoff> Haha
20:05:37 <rm_work> o/ lost my proxy for a sec, but i'm here
20:05:56 <sbalukoff> Welcome, Adam!
20:06:05 <dougwig> oh man, not another naming mess.
20:06:16 <sbalukoff> dougwig: Muahahaha!
20:06:18 <fnaval> hi
20:06:27 <xgerman> well, it’s the GSLB folks...
20:06:40 <ajmiller> Should just name projects with the git hash of the first commit....
20:06:56 <johnsom> +1 for git hash
20:06:59 <dougwig> ajmiller: +1
20:07:05 <sbalukoff> FWIW, I can't find anything software related called "Lichtenberg" and Lichtenberg figures are cool. ;)
20:07:22 <xgerman> :-)
20:07:26 <sbalukoff> And yes--  It's totally not up to me. And I'm sure y'all are thankful for that. XD
20:07:38 <xgerman> moving on  #link https://review.openstack.org/#/c/206757/
20:08:02 <xgerman> this is the first commit in the fancy new LBaaS V2 Horizon panel repo
20:08:03 <sbalukoff> Nice!
20:08:55 <xgerman> I think dougwig wanted us to review
20:08:57 <dougwig> i'll have that updated today.
20:09:03 <xgerman> cool
20:09:26 <xgerman> #topic Brief progress reports
20:09:28 <sbalukoff> Eeexcellent.
20:09:41 <sbalukoff> This could use some love: https://review.openstack.org/#/c/206258/
20:10:00 <xgerman> #link https://review.openstack.org/#/c/206258/
20:10:03 <sbalukoff> Also, progress on updating design and component docs, but nothing to share just yet. :P
20:10:25 <xgerman> yeah, you have been busy. I guess IBM is a good influence :-)
20:10:27 <sbalukoff> This has been updated:
20:10:30 <sbalukoff> #link https://wiki.openstack.org/wiki/Neutron/LBaaS/l7
20:10:48 <johnsom> I have the vip plug update for the REST driver out:
20:10:50 <johnsom> #link https://review.openstack.org/#/c/208793/
20:11:00 <dougwig> progress on neutron driver - i caught a bunch of fish, and had two brown bears pop out right close to us during. coding-wise, the river was not conducive to much.
20:11:16 <sbalukoff> And unless I hear strong objections, I'm going to delete this from the repo, and move it into a wiki page (where it's more easily updated, and more easily found by people getting started with the project):
20:11:19 <johnsom> dougwig Nice!
20:11:20 <sbalukoff> #link https://review.openstack.org/#/c/208763/
20:11:31 <xgerman> dougwig +1
20:11:31 <sbalukoff> dougwig: Awesome!
20:12:11 <johnsom> Just commented on 208763
20:12:31 <blogan> https://review.openstack.org/#/c/209210/
20:12:32 <sbalukoff> johnsom: Yep! I saw. Thanks, eh!
20:12:43 <sbalukoff> Still going to delete it from the repo unless people really want it to stay there. XD
20:13:00 <blogan> ^ does some code consolidation to keep code from being duplicated in other neutron based network drivers
20:13:02 <xgerman> I am doing some work on https://review.openstack.org/#/c/208640/
20:13:06 <xgerman> #link https://review.openstack.org/#/c/208640/
20:13:07 <dougwig> sbalukoff: can we abandon the 2.0 spec that is *ancient* ?
20:13:13 <xgerman> +1
20:13:19 <sbalukoff> dougwig: Aaw, man!
20:13:28 <madhu_ak> progress on scenario tests: #link https://review.openstack.org/#/c/207196/, which still needs some attention. do we need two servers or one server with different ports for checking the loadbalancing?
20:13:28 <sbalukoff> Give me a chance to update it first?
20:14:06 <dougwig> sbalukoff: sure.  it's just my oldest patch on my dashboard, and would've been auto-abandoned in neutron LONG ago.  :)
20:14:19 <sbalukoff> dougwig: Yeah, I now.
20:14:20 <sbalukoff> know.
20:14:40 <sbalukoff> Note that it won't get updated this week, as again, I'm on vacation.
20:14:47 <xgerman> so we should start auto abandoning ?
20:14:56 <sbalukoff> But... I think this is useful for me attempting to recruit more IBM talent to the project.
20:15:12 <johnsom> Brandon and I took a pass over the 1.0 spec, but didn't get to the 2.0 spec
20:16:16 <minwang2> the following 2 patches need some love as well: #link https://review.openstack.org/#/c/207674/11
20:15:17 <minwang2> #link https://review.openstack.org/#/c/160061/
20:15:18 <xgerman> it not being so old… so we look more desperate?
20:15:52 <sbalukoff> HAha!
20:16:00 <blogan> dougwig: https://review.openstack.org/#/c/197772/
20:16:15 <dougwig> i'd be in favor of auto-abandon (i think neutron is 2 months with no activity, plus a -1 somewhere.)
20:16:17 <sbalukoff> No, it's mostly about knowing what we actually are trying to build.
20:16:27 <blogan> ^ you and mestery, please?
20:16:36 <dougwig> blogan: ok
20:17:16 <xgerman> yeah, now as we are part of the Neutron stadium I was expecting mystery to manage all our auto-abandoning :-)
20:17:24 <blogan> pothole, ajmiller: look at that one too, but dougwig and mestery should definitely approve of it before being merged
20:17:27 <sbalukoff> Heh!
20:17:41 <blogan> xgerman: that's a mestery mystery
20:17:44 <mestery> lol
20:17:48 <dougwig> xgerman: i can add it to his script list, that'd be easy.
20:17:57 <xgerman> yeah, let’s do that
20:18:02 <dougwig> or he can, since he's lurking.  :)
20:18:13 <xgerman> :-)
20:18:14 <dougwig> he likes juicing his stats with it.
20:18:16 <blogan> is mestery a script himself?
20:18:16 <sbalukoff> If y'all want to auto-abandon, I'm OK with that. I'd just like to have a chance to get those two docs updated first... partially because I think it would be funny for a patchset over a year old to actually get merged.
20:18:17 <dougwig> jk
20:18:31 <xgerman> mystery probably is half the time a bot
20:18:33 <dougwig> sbalukoff: you can always hit 'restore'.  they don't vanish.  :)
20:18:49 <xgerman> +1
20:18:49 <sbalukoff> dougwig: I almost would rather they did vanish.
20:18:53 <sbalukoff> ;)
20:19:18 <mestery> rofl
20:19:26 <mestery> blogan: A mighty fine looking script indeed
20:20:10 <xgerman> #topic Octavia reference implementation in Liberty
20:20:16 <xgerman> let’s review our list
20:20:26 <xgerman> #link https://etherpad.openstack.org/p/YVR-neutron-octavia
20:20:39 <dougwig> before we start, i just want to point out that we have to make a decision on go/no-go on this no later than the end of august.
20:20:44 <blogan> mestery: like the da vinci virus in the movie hackers
20:21:02 <dougwig> i get the feeling that if we had to decide *today*, it'd be a no-go for Liberty.
20:21:25 <sbalukoff> dougwig: I agree.
20:21:32 <TrevorV> I'm going to be working on the octavia driver in neutron lbaas over the next two weeks guys, so...
20:21:34 <blogan> well right now its not all complete
20:21:37 * TrevorV is late, sorry about that
20:22:02 <blogan> I am not so certain on L7 getting into Octavia, still hurdles on that even getting in neutron-lbaas
20:22:06 <sbalukoff> TrevorV: See, now we know you're late. If you'd been silent, we all would have assumed you were being wisely contemplative.
20:22:17 <TrevorV> Oooh ouch
20:22:21 <dougwig> sbalukoff: it's TrevorV, would we really?
20:22:26 <sbalukoff> HAHA
20:22:29 <xgerman> lol
20:22:37 <xgerman> so being optimistic...
20:22:59 <blogan> so everything in the list except L7 should get done by 8/31
20:23:07 <sbalukoff> blogan: Agreed. I think the turn around time between Evgeny uploading a patch and me reacting to that and updating the reference driver might kill us on our ability to get L7 into Liberty.
20:23:09 <blogan> IMO
20:23:27 <sbalukoff> Since it seems to take about a week for this cycle.
20:23:53 <blogan> sbalukoff: what piece is taking the longest?
20:23:54 <dougwig> even if we had horizon and octavia, that'd be good progress for liberty.  if none of it lands, then M will have a large feature set addition indeed.
20:24:00 <sbalukoff> I also think Octavia as a reference driver is way more important than L7
20:24:07 <dougwig> sbalukoff: agree
20:24:14 <xgerman> +1
20:24:47 <sbalukoff> blogan: Just waiting on the new patchset after I make recommendations. Also, as I write the reference driver I'm discovering things that should really go into Evgeny's patch, which makes things take longer.
20:24:49 <xgerman> yeah, also especially for L7 we might have more time
20:25:14 <blogan> sbalukoff: okay, so do you think you could go faster if you could push up your own patchsets? or is there a lot of domain knowledge evgeny has?
20:25:53 <xgerman> the Liberty release is 10/15 — so for L7 I wouldn’t pull the trigger 8/31
20:26:19 <blogan> yeah but thats reserved for special cases, the time between Liberty-3 and the actual release, no?
20:26:23 <sbalukoff> blogan: Honestly, I'm not sure I could go much faster. It's not a domain knowledge thing, per se-- it's the fact that I haven't done much with Neutron LBaaS code specifically, and it's kinda spaghetti. So, I spend more time figuring out *where* I need to make a change rather than knowing what the change I need to make is.
20:26:52 <dougwig> sbalukoff: can you do a dependent patch, and just make those additions directly?
20:27:03 <sbalukoff> dougwig: that's sort of what I have been doing.
20:27:03 <dougwig> 8/31 is feature freeze.
20:27:10 <sbalukoff> dougwig: I'll keep at it in any case.
20:27:15 <blogan> dougwig: that could complicate it even more
20:27:20 <dougwig> ok
20:28:02 <xgerman> dougwig: I think L7 in particular might be worthy of a brief extension
20:28:18 <xgerman> but let’s cross that bridge in a month :-)
20:28:40 <sbalukoff> I'll do my best on this front. If someone thinks they can do it faster (especially with me going on vacation) they should be feel free to go for it. I won't be offended.
20:28:58 <dougwig> xgerman: if it's tight and contained and close, sure. i think octavia is too big to slip in as an extension.
20:29:03 <sbalukoff> Having said this, I think we should plan on Octavia as a reference driver without L7 in Neutron LBaaS is what we should be shooting for by end of month.
20:29:15 <dougwig> i agree with that goal.
20:29:21 <xgerman> dougwig agreed - I was talking L7 only
20:29:39 <xgerman> sbalukoff +1
20:30:12 <sbalukoff> I'd rather be pleasantly surprised if we get L7 in... rather than disappointed that we didn't get Octavia in.
20:30:24 <xgerman> yep, same here
20:31:19 <johnsom> +1
20:31:21 <xgerman> moved L7 below the line in the etherpad
20:31:21 <sbalukoff> So! With that in mind, would y'all rather I work on something more likely to get Octavia in?
20:31:31 <sbalukoff> I guess I could just claim something...
20:31:35 <blogan> lets just scrap octavia and keep namespace driver
20:31:36 <sbalukoff> When I get back from vacation.
20:31:53 <xgerman> blogan heresy
20:32:08 <blogan> sbalukoff: there is the octavia driver that still needs some love
20:32:20 <sbalukoff> blogan: Ok!
20:32:23 <blogan> unless dougwig has grand plans for it
20:32:28 <xgerman> well, didn’t I hear TrevorV volunteer for that
20:32:34 <sbalukoff> I shall prepare my whips and chains when I get back next week.
20:32:35 <xgerman> earlier?
20:32:38 <blogan> he did but he's still on the failover flow thing
20:32:41 <dougwig> blogan: i think we need to commit the first version of it so other can iterate in parallel.
20:32:52 <sbalukoff> Go for it, TrevorV!
20:32:58 <TrevorV> Yeah, thanks.
20:33:06 <blogan> whoever has cycles now should be able to pick that stuff up, unless TrevorV thinks he can get on it soon
20:33:22 <TrevorV> I think the failover bit is down now, but it needs a little more love in testing
20:33:35 <TrevorV> But, I'll know after blogan sits with me for a minute shortly
20:33:40 <blogan> dougwig: yeah we can do that i suppose
20:33:46 <TrevorV> sbalukoff, if you have the cycles now, I'd say jump on the driver
20:34:06 <xgerman> I am hoping one of the HP people have some cycles, too
20:34:12 <sbalukoff> TrevorV: I won't have cycles until next Wednesday or Thusday at the earliest.
20:34:15 <xgerman> but you know: internal stuff...
20:34:21 <blogan> sbalukoff: oh well TrevorV will beat you to it then
20:34:40 <blogan> sbalukoff: you're deemed unnecessary now
20:34:49 <bana_k> I too can work on that
20:34:55 <xgerman> cool!!
20:34:55 <sbalukoff> blogan: Oh, so nothing's changed then?
20:35:02 <blogan> sbalukoff: lol
20:35:05 <sbalukoff> bana_K: Yay!
20:35:18 <crc32> Speaking of which. I'm able to get stats from the stats socket but I need a way to test it on the amp side. IE "how do I get the amphora to deploy with my current octavia code I'm working on"? Also I'd like a more concrete description of how to determine if listeners are up or down.
20:35:20 <xgerman> #action bana_k, TrevorV work on octavia driver
20:35:35 <blogan> who was pegged for the documentation piece?
20:36:02 <xgerman> nobody — since we felt we could do that after 8/31
20:36:10 <sbalukoff> crc32: Um.... locally? Look at the output of 'netstat -lptn'?
20:36:11 <blogan> xgerman: ah okay
20:36:35 <blogan> sbalukoff: he's talking about the image the amphora is built with, and how to get his code into it
20:36:40 <sbalukoff> Oh!
20:36:52 <xgerman> oh, scp?
20:37:06 <xgerman> or you can just rebuild the image
20:37:17 <blogan> xgerman: it should be built with the image through disk-image-create, i think it's just the plugin pulling down octavia's master instead of using his local code
20:37:19 <blogan> not sure though
20:37:43 <blogan> bigger question is what determines a listener being down? stats socket not responding?
20:37:59 <blogan> bc if it responds, that listener is really up right?
20:38:09 <johnsom> You could edit the element and have it build an image with our ref in it
20:38:11 <blogan> responds with the appropriately structured payload
20:38:20 <xgerman> yep, that’s my understanding
20:38:25 <sbalukoff> blogan: Are we doing separate stats sockets per listener? I thought we had multiple listeners per haproxy process.
20:38:41 <crc32> sbalukoff: Yes I'm querying a UNIX socket file but so far I've been testing by uploading my latest octavia code up to the amphora via SSH but that's not ideal since I'm sure a bunch of PIPed packages are out of sync. I need diskimage-create to build with my local /opt/stack/octavia code when building the image.
20:38:42 <blogan> sbalukoff: each listener is a haproxy process
20:38:53 <blogan> sbalukoff: so stats sock for each
20:38:56 <crc32> sbalukoff yes each listener has its own sock file.
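(For context on the per-listener sockets crc32 describes: talking to one of these admin sockets from Python might look like the sketch below. The `show stat` command and read-until-EOF behavior follow haproxy's documented admin-socket interface, but the helper name and socket path are made up, not Octavia code.)

```python
import socket

def get_listener_stats(sock_path, command=b"show stat\n"):
    """Send `show stat` to a haproxy admin socket and return the raw
    CSV reply. haproxy closes the connection after answering, so we
    simply read until EOF."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(sock_path)
        s.sendall(command)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    finally:
        s.close()
    return b"".join(chunks).decode("utf-8")
```

With one haproxy process (and thus one socket) per listener, the amphora agent would call this once per listener's socket path.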
20:39:10 <johnsom> But easiest is probably tar/scp or rsync over ssh
20:39:11 <sbalukoff> blogan: Oh! Shit, really? Ok then that's different than the haproxy ref driver I've been working on. Yay!
20:39:14 <xgerman> sbalukoff you voted for that and now you do a 360
20:39:22 <blogan> sbalukoff: yes entirely, you were the proponent of this in the beginning!
20:39:30 <sbalukoff> xgerman: No, I really want it to work that way.
20:39:38 <xgerman> actually only having ONE harpy process might be easier for ACTIVE-PASSIVE
20:39:47 <dougwig> TrevorV, bana_k: let's pow-wow briefly post-meeting so we don't step on each other
20:39:52 <sbalukoff> I thought I had been overridden in the intervening months when I looked at the neutron lbaas ref impl.
20:39:53 <johnsom> sbalukoff Have you come to your senses and we can ditch the 1 process per listener!?????
20:39:54 <blogan> harpy should change their name to haproxy
20:40:18 <xgerman> lol
20:40:19 <TrevorV> Sure thing dougshelley66
20:40:20 <bana_k> dougwig sure thing
20:40:21 <sbalukoff> johnsom: I'm more amenable to the idea of multiple listeners per haproxy.
20:40:22 <TrevorV> dougwig, ***
20:40:44 <crc32> johnsom: yea that's what I'm doing now but I test with python3 and I don't know if the image's packages are up to date with my code. I tried to flat out copy a virtualenv over as well since pip won't work in the amphora and that doesn't work either. :(
20:40:46 <blogan> that would be a lot of code changes right now to switch to that
20:40:59 <sbalukoff> johnsom: But I still think I prefer them separate. I'm having trouble recalling the technical reasons for having them separate at this point. It's been like a year since we had this discussion.
20:41:09 <sbalukoff> blogan: Then let's not switch to that.
20:41:12 <blogan> sbalukoff: there's a wiki article
20:41:12 <sbalukoff> Oh!
20:41:20 <sbalukoff> blogan: Aah! Thank you.
20:41:26 <sbalukoff> I shall revisit that decision.
20:41:29 <crc32> sorry I mean if I run python3 it has no idea where the octavia module is. So the environment is only working for python2 I guess.
20:41:37 <sbalukoff> And hopefully be less disruptive with my hare-brained ideas here.
20:41:37 <johnsom> crc32 once you have it over, just do a python setup.py install.  If setup.cfg changes, make sure you remove the old egg directory so it picks up the right one
20:41:40 <xgerman> crc32 the amphora isn’t connected to the Interwebs so no pip
20:41:59 <blogan> xgerman: what makes it more difficult to do multiple haproxy processes with active standby?
20:42:13 <xgerman> well, you are going to set them up as peers
20:42:21 <blogan> ahh
20:42:30 <johnsom> sbalukoff The problem I have argued is the failover is at the IP level, so it's pretty much all or none anyway.
20:42:34 <xgerman> and it’s easier two peer two haproxy than N
20:42:49 <crc32> xgerman: exactly. Pip won't work. virtualenv won't work. Is there no script I can run to build the new image based on /opt/stack/octavia on my host machine?
20:43:02 <johnsom> Plus monitoring n-processes gets ugly for failover
20:43:27 <xgerman> crc32 I run ./diskimage_build or so
20:43:37 <xgerman> but you will need your code somewhere in git
20:43:54 <johnsom> crc32 you would have to change the agent element to pick up yours instead of pulling down head
20:44:14 <crc32> johnsom: yea how would I do that?
20:44:29 <sbalukoff> I don't think we should be revisiting that decision (multiple haproxy vs. 1 haproxy) right now, given the end of month deadline for other priorities.
20:44:47 <sbalukoff> Might be worth revisiting after that, though.
20:44:53 <blogan> xgerman: hmm, i see the problem i think.  peering is not something we need for 8/31 right?
20:44:58 <johnsom> crc32 Change this file: octavia/elements/amphora-agent/source-repository-amphora-agent
20:45:01 <xgerman> vi elements/amphora-agent/source-repository-amphora-agent
20:45:13 <xgerman> blogan: correct
20:45:36 <blogan> so we could just go forward with what we have and then move to single process if it is determined that peering overcomplicates it
20:45:47 <xgerman> agreed
20:45:57 <xgerman> yeah, what I said was just informational
20:46:04 <blogan> okay revisit after then, like sbalukoff suggested
20:46:27 <xgerman> and abdelwas might still figure out how to do it with N
20:46:29 <blogan> xgerman: it's good to know, bunch of little things we didn't think about at the beginning of all of this
20:46:39 <xgerman> yep
20:46:47 <sbalukoff> xgerman: I have a couple ideas on that front.
20:47:05 <xgerman> #topic Open Discussion
20:47:16 <xgerman> sbalukoff I am listening :-)
20:47:21 <sbalukoff> Haha! Been in this topic for a while yet.
20:47:23 <crc32> cool so this should work? "amphora-agent git /opt/amphora-agent https://review.openstack.org/openstack/octavia refs/changes/82/201882/20"
20:47:33 <xgerman> crc32 ++
20:47:36 <xgerman> yes
20:47:54 <sbalukoff> xgerman: Could you first explain to me what you mean by peering? Are you talking about haproxy processes on multiple machines communicating some control protocol together?
20:48:26 <blogan> sbalukoff: sharing state data so that session persistence is maintained on failover
20:48:30 <sbalukoff> Or are we talking something lower level, like at the keepalived level?
20:48:31 <crc32> xgerman: in /etc/octavia/octavia what does this option do? Do I need to change it? --> amp_image_id = 9c38127d-e904-4a2d-a665-e8e55943dedb
20:48:40 <sbalukoff> blogan: Ok, right.
20:48:54 <xgerman> blogan +1
20:49:00 <blogan> sbalukoff: one example of it, shares more than just that but that's the easy one to explain for my feeble brain
20:49:09 <johnsom> crc32 diskimage-builder/elements/source-repositories/README.rst describes the file format.  They support a tar source.
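(The source-repositories format johnsom points at is one whitespace-separated entry per line, `<name> <type> <destination> <location> [<ref>]`. The git line below matches the one crc32 pastes; the tar line is an illustrative, untested alternative for building from a local tarball rather than a git ref.)

```
# <name> <type> <destination> <location> [<ref>]
amphora-agent git /opt/amphora-agent https://review.openstack.org/openstack/octavia refs/changes/82/201882/20
# or, roughly, a local tarball instead of a git checkout:
amphora-agent tar /opt/amphora-agent file:///tmp/octavia.tar.gz
```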
20:49:13 <sbalukoff> Ok, so my idea I had doesn't apply so much. XD Will have to chew on this a bit more.
20:49:52 <sbalukoff> blogan: I wasn't aware haproxy had that capability already.
20:50:08 <xgerman> yeah, it’s amazing that way :-)
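(For reference, the peering blogan describes is configured in haproxy with a `peers` section plus a stick-table that replicates to the peer. The sketch below uses made-up names and addresses and shows the two-amphora case xgerman says is the easy one; it is not Octavia's actual rendered config.)

```
# haproxy.cfg sketch: two amphorae sharing session-persistence state
peers octavia_peers
    peer amphora-1 10.0.0.11:1024
    peer amphora-2 10.0.0.12:1024

listen web
    bind 10.0.0.10:80
    # persistence entries are pushed to the peer on every update,
    # so sessions survive an active-passive failover
    stick-table type ip size 200k expire 30m peers octavia_peers
    stick on src
    server member1 192.168.1.10:80 check
    server member2 192.168.1.11:80 check
```

This is also why xgerman notes peering two processes is easier than N: every haproxy process per listener would need its own peers wiring.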
20:50:10 <johnsom> crc32 amp_image_id is the amphora os image loaded into glance to boot the amphora with
20:50:58 <xgerman> also I would remove the qcow2 file prior to building a new image. I run out of disk space quickly
20:50:59 <blogan> sbalukoff: apparently haproxy has all the capabilities, except udp load balancing
20:51:07 <sbalukoff> xgerman: Ok, when we revisit this discussion after the end of the month, I'll try to get specific knowledge of the haproxy capabilities on this front as context then.
20:51:16 <xgerman> yep
20:51:20 <sbalukoff> blogan: Yeah, well...  yeah.
20:51:32 <sbalukoff> Because UDP load balancing is stupid.
20:51:36 <sbalukoff> ;)
20:51:50 <xgerman> nah, we just had somebody ask two days ago
20:51:52 <dougwig> sbalukoff: hey now
20:51:56 <sbalukoff> We should totally do that UDP load balancing session at the bar in Tokyo.
20:51:56 <johnsom> sbalukoff Also think about the shared fate all of the processes have around that IP.  If one process dies, all have to fail over.
20:52:15 <sbalukoff> xgerman: Just because someone asks for it, doesn't mean we should actually give it to them.
20:52:34 <xgerman> brogan said we should write our own UDP load balancer in python
20:52:38 <xgerman> blogan
20:52:39 <blogan> we should
20:52:40 <dougwig> udp is an underrated protocol, because crappy hacks don't know how to use it.
20:52:40 <sbalukoff> johnsom: Oh, I know.
20:53:00 <dougwig> :)
20:53:12 <sbalukoff> dougwig: Hence the reason we shouldn't be too hasty to give those crappy hacks a tool to shoot themselves in the foot.
20:53:13 <xgerman> dougwig I was expecting you to suggest ruby...
20:53:19 <xgerman> but oh well :-)
20:53:23 <sbalukoff> :)
20:53:31 <johnsom> UDP is making a come back.  SPUD research proto and all
20:53:38 <dougwig> hey, i can write the open-source ref in whatever language i want.
20:53:44 <sbalukoff> HAHA
20:54:07 <xgerman> #action dougwig write UDP LB in ruby
20:54:10 <sbalukoff> Ok, folks... um... anything else or move to adjourn?
20:54:12 <madhu_ak> when writing scenario tests, in order to check for loadbalancing requests, can someone recommend to setup two servers and setup webservers on it? or single server with different ports?
20:54:30 <xgerman> I think tempest can start vms?
20:54:36 <madhu_ak> yeah
20:54:39 <xgerman> and then you can use the ajmiller script
20:54:42 <crc32> so did we decide on what criteria from a stats socket determines if the listener is up or down?
20:55:10 <xgerman> we decided if the stats socket answers and spits out well formed stats we consider it alive
20:55:12 <sbalukoff> crc32: Seems like a good solution for today's design.
20:56:06 <johnsom> crc32 The status field doesn't meet the need?
20:56:07 <crc32> ok so thats an alive dead description at the listener level.
20:56:16 <sbalukoff> Yes.
20:57:17 <xgerman> Yeah, just make sure we can reuse it to trigger the failover for active-passive the same way
20:57:33 <xgerman> would head to mark something as bad and not fail-over and vice versa
20:57:40 <xgerman> head=hate
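(A rough sketch of the criterion settled on above, that the stats socket must answer with well-formed stats for the listener to count as alive. The function name and the exact well-formedness rules are illustrative assumptions, not Octavia's health-check code; it assumes haproxy's `show stat` CSV, whose header line starts with '# ' and includes a `status` column.)

```python
import csv
import io

def listener_is_up(stats_csv):
    """Decide listener liveness from raw `show stat` output.

    Alive only if the reply is parseable CSV (header line prefixed
    with '# ') and every row carries a non-empty status field; an
    empty reply, garbage, or a missing column counts as down.
    """
    if not stats_csv or not stats_csv.startswith("# "):
        return False
    rows = list(csv.DictReader(io.StringIO(stats_csv[2:])))
    return bool(rows) and all(row.get("status") for row in rows)
```

The same predicate could then double as the trigger xgerman mentions for active-passive failover.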
20:58:08 <xgerman> anything else?
20:58:26 <sbalukoff> Thanks, folks!
20:58:30 <xgerman> #endmeeting