*** xgerman has quit IRC | 00:09 | |
sbalukoff | xgerman: Yes, we can discuss it there. | 00:18 |
sbalukoff | crc32: Technically it was a tie. But we're going to give the IRC thing a shot for a couple weeks and re-evaluate. Again, my prediction is that some of the people voting for IRC probably don't actually intend to attend. However, it's possible they might. Also, it's possible some people who voted for voice might change their opinions. | 00:20 |
*** ptoohill-oo has quit IRC | 00:31 | |
*** sbfox has joined #openstack-lbaas | 00:48 | |
*** crc32 has quit IRC | 00:52 | |
*** sbfox has quit IRC | 00:56 | |
*** sbfox has joined #openstack-lbaas | 01:28 | |
*** amotoki has joined #openstack-lbaas | 01:50 | |
*** woodster_ has quit IRC | 01:55 | |
*** fnaval has joined #openstack-lbaas | 02:20 | |
*** fnaval has quit IRC | 03:29 | |
*** fnaval has joined #openstack-lbaas | 03:30 | |
*** fnaval has quit IRC | 04:17 | |
*** dkehnx has quit IRC | 06:43 | |
*** dkehn has joined #openstack-lbaas | 06:44 | |
*** dkehn has quit IRC | 07:32 | |
*** jschwarz has joined #openstack-lbaas | 07:33 | |
*** rm_you| has joined #openstack-lbaas | 07:38 | |
*** rm_you has quit IRC | 07:40 | |
*** johnsom__ has joined #openstack-lbaas | 08:02 | |
*** electrichead has joined #openstack-lbaas | 08:02 | |
*** sbfox has quit IRC | 08:10 | |
*** redrobot has quit IRC | 08:10 | |
*** johnsom_ has quit IRC | 08:10 | |
*** dkehn has joined #openstack-lbaas | 08:42 | |
*** rm_you has joined #openstack-lbaas | 09:12 | |
*** rm_you has quit IRC | 09:12 | |
*** rm_you has joined #openstack-lbaas | 09:12 | |
*** rm_you| has quit IRC | 09:12 | |
*** rm_you| has joined #openstack-lbaas | 09:21 | |
*** rm_you has quit IRC | 09:24 | |
ciko | Hi, is there any documentation on how graceful server shutdown works with Neutron? I found something in https://wiki.openstack.org/wiki/Neutron/LBaaS/API but it basically says that connections might or might not be terminated and session persistence will be lost. | 09:25 |
ciko | Isn't there a way to just stop accepting new connections and let all existing sessions end before shutting down a server? | 09:26 |
*** rm_you|wtf has joined #openstack-lbaas | 09:28 | |
*** rm_you| has quit IRC | 09:30 | |
*** rm_you|wtf has quit IRC | 09:35 | |
*** woodster_ has joined #openstack-lbaas | 12:37 | |
*** sballe has joined #openstack-lbaas | 12:47 | |
*** balles has quit IRC | 13:25 | |
jschwarz | hey guys | 13:34 |
*** amotoki has quit IRC | 13:52 | |
sballe | jschwarz, morning :) | 14:16 |
*** markmcclain has joined #openstack-lbaas | 14:17 | |
jschwarz | :P | 14:17 |
jschwarz | markmcclain, Do you have a minute to talk about one of your old changesets? https://review.openstack.org/#/c/22794/ | 14:19 |
markmcclain | jschwarz: sure what's up? | 14:19 |
jschwarz | markmcclain, specifically, https://review.openstack.org/#/c/22794/4..15/quantum/plugins/services/agent_loadbalancer/drivers/haproxy/cfg.py L56 | 14:20 |
jschwarz | markmcclain, the one with the logging configuration on the right side | 14:21 |
jschwarz | markmcclain, the configuration as is, with the default rsyslog configuration on some redhat based systems, causes some broadcast logs to spam in certain cases | 14:21 |
jschwarz | markmcclain, I found a fix for it which involves changing the log configuration, but thought I'd ask you because you wrote this | 14:22 |
markmcclain | I tested on 12.04 so feel free to change it for the better | 14:22 |
jschwarz | markmcclain, do you remember why there are 2 different facilities (local0 and local1)? | 14:23 |
jschwarz | as far as I understand, only one is needed to produce logs... | 14:24 |
markmcclain | jschwarz: honestly no :) | 14:27 |
markmcclain | it's likely a product of the fact I wrote it in < a week | 14:27 |
jschwarz | lol | 14:27 |
jschwarz | ^^ | 14:27 |
jschwarz | markmcclain, i'll write the changeset and add you as a reviewer if you could look at it when you get the chance | 14:27 |
markmcclain | sounds good | 14:28 |
jschwarz | if someone has any objections he can take it up with the mighty Gerrit God | 14:28 |
markmcclain | my guess is I probably lifted that bit from a conf file I pulled from an internal system at the time | 14:28 |
markmcclain | jschwarz: sounds good | 14:28 |
jschwarz | must be it | 14:28 |
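For context, the section of cfg.py under discussion rendered the haproxy `global` section; a sketch from memory of that changeset (not a verbatim copy), with the duplicated `log` directives jschwarz is asking about:

```python
def build_global_section(user_group='nogroup'):
    """Render the haproxy 'global' section roughly the way the old
    agent_loadbalancer cfg.py did (a sketch, not a verbatim copy)."""
    opts = [
        'daemon',
        'user nobody',
        'group %s' % user_group,
        # Two syslog facilities are configured here; only one is
        # actually needed to produce logs, which is the redundancy
        # discussed in the conversation above.
        'log /dev/log local0',
        'log /dev/log local1 notice',
    ]
    return 'global\n' + '\n'.join('\t' + o for o in opts)

print(build_global_section())
```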
jschwarz | markmcclain, me again :) I can probably make it work so that haproxy sends its logs to the lbaas agent process (instead of syslog->/var/log/messages) | 14:51 |
*** HenryG has joined #openstack-lbaas | 14:51 | |
*** mlavalle has joined #openstack-lbaas | 14:51 | |
jschwarz | markmcclain, though it would require me to create a listening socket in the ns driver | 14:51 |
jschwarz | markmcclain, what do you reckon? | 14:52 |
markmcclain | jschwarz: why do we want to intercept the logs? | 14:53 |
jschwarz | markmcclain, tenant operators could notice the error and then realize their backend members are down, for example | 14:54 |
*** openstackgerrit has joined #openstack-lbaas | 14:55 | |
jschwarz | i.e. they'll probably be of use to tenant operators (but on second thought they don't usually get access to the lbaas process logs, right?) | 14:55 |
markmcclain | jschwarz: hmmm.. that would be an interesting thing to add | 14:57 |
markmcclain | vs waiting for the monitor timer to fire to determine a member is gone | 14:57 |
jschwarz | markmcclain, actually the logs are triggered when the monitor discovers that all the members are gone | 14:58 |
jschwarz | markmcclain, on the other hand, the difference between lbaas agent's logs and /var/log/messages is not so big (and a tenant operator is unlikely to look at both imo) | 14:59 |
markmcclain | right | 14:59 |
jschwarz | so /var/log/messages it is :) | 15:00 |
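The listening socket jschwarz floats at 14:51 would amount to a small UDP syslog receiver in the ns driver; a minimal sketch of the receiving side (function names and the port are hypothetical, not from any actual change):

```python
import re
import socket

# <PRI>message, where PRI = facility * 8 + severity (RFC 3164/5424)
SYSLOG_RE = re.compile(r'^<(?P<pri>\d{1,3})>(?P<msg>.*)$')

def parse_syslog(datagram):
    """Split a raw syslog datagram into (facility, severity, message),
    or None if the datagram has no PRI header."""
    m = SYSLOG_RE.match(datagram.decode('utf-8', 'replace'))
    if not m:
        return None
    pri = int(m.group('pri'))
    return pri // 8, pri % 8, m.group('msg')

def serve_once(port, handler):
    """Receive a single datagram on a UDP socket (the kind of listener
    the ns driver would need) and hand the parsed result to handler."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('127.0.0.1', port))
    data, _addr = sock.recvfrom(4096)
    handler(parse_syslog(data))
    sock.close()
```

haproxy would then be pointed at that socket instead of `/dev/log`; PRI 134 decodes to facility 16 (local0), severity 6 (info).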
*** Zebra has quit IRC | 15:06 | |
*** dkehn has quit IRC | 15:16 | |
*** xgerman has joined #openstack-lbaas | 15:18 | |
*** busterswt has joined #openstack-lbaas | 15:18 | |
*** dkehn__ has joined #openstack-lbaas | 15:19 | |
*** johnsom__ has quit IRC | 15:22 | |
*** dkehn__ is now known as dkehnx | 15:22 | |
*** johnsom__ has joined #openstack-lbaas | 15:22 | |
*** markmcclain has quit IRC | 15:24 | |
sballe | \o? | 15:25 |
*** electrichead is now known as redrobot | 15:26 | |
*** openstack has joined #openstack-lbaas | 16:18 | |
*** sbfox has joined #openstack-lbaas | 16:31 | |
*** barclaac has joined #openstack-lbaas | 16:34 | |
*** barclaac|2 has quit IRC | 16:37 | |
*** mageshgv has joined #openstack-lbaas | 16:51 | |
*** markmcclain has joined #openstack-lbaas | 16:59 | |
*** johnsom__ has quit IRC | 17:10 | |
*** TrevorV_ has joined #openstack-lbaas | 17:14 | |
*** jorgem has joined #openstack-lbaas | 17:14 | |
*** ajmiller has joined #openstack-lbaas | 17:33 | |
*** sbfox has quit IRC | 17:36 | |
*** sbfox has joined #openstack-lbaas | 17:42 | |
*** barclaac|2 has joined #openstack-lbaas | 17:54 | |
*** barclaac has quit IRC | 17:55 | |
*** sbfox has quit IRC | 17:55 | |
*** rohara has joined #openstack-lbaas | 17:57 | |
rohara | sbalukoff: probably going to miss today's meeting | 17:58 |
*** TrevorV_ has quit IRC | 18:00 | |
*** TrevorV_ has joined #openstack-lbaas | 18:00 | |
sbalukoff | rohara: Well, there'll be a transcript for this one. :) In any case, hope you can make it next week. | 18:07 |
rohara | sbalukoff: cool beans. and yeah, i will be there next week for sure | 18:08 |
*** sbfox has joined #openstack-lbaas | 18:10 | |
TrevorV_ | sbalukoff: correct me if I'm wrong, but I won't have to spend time writing up meeting minutes tonight right? | 18:36 |
sbalukoff | TrevorV: Correct. | 18:40 |
sbalukoff | Well, the minutes we get out of the automated system won't be nearly as nice as the ones you've put together over the last few meetings. So if you really want to do that / annotate them or something, that's fine, eh. | 18:40 |
sbalukoff | I'm not really expecting to be able to cover as much material as we have in previous weeks. | 18:41 |
TrevorV_ | So the meeting minutes don't capture the communications? | 18:41 |
TrevorV_ | in IRC I mean? | 18:41 |
sbalukoff | No, they only capture items we specifically point out (in conversation) to capture. Like action items, topic changes, etc. | 18:41 |
sbalukoff | Can't really expect an automated system to be able to comprehend what is actually being said to automatically capture points of interest that fall outside those categories. ;/ | 18:42 |
TrevorV_ | Nah I thought it kept the log of input/output from users too, guess not | 18:45 |
TrevorV_ | I can write stuff up still if that really does help you guys :) | 18:45 |
TrevorV_ | It will also be easier since I can just do a copy-paste of the text being sent around and pull out some useful information :D | 18:49 |
blogan_ | ping sbalukoff | 18:50 |
sbalukoff | blogan_: Pong | 19:03 |
sbalukoff | TrevorV_: Oh, it does keep a line-by-line transcript of everything. But that's the "full log" not the minutes. | 19:03 |
TrevorV_ | So should I write stuff up or no, sbalukoff ? | 19:04 |
* TrevorV_ is confused :D | 19:04 | |
blogan_ | sbalukoff: i'm responding to your comments | 19:04 |
sbalukoff | TrevorV_: For the full "IRC meeting experience" let's not write anything up for the next couple meetings? | 19:04 |
sbalukoff | (At least) | 19:04 |
sbalukoff | blogan_: Oh, good! | 19:05 |
TrevorV_ | sbalukoff: I'm down with that :) | 19:05 |
blogan_ | sbalukoff: real quick though, the host_id on the load_balancer is meant to store something such as the id of the VM that the loadbalancer exists on | 19:05 |
sbalukoff | blogan_: Ok, so the problem there is that in an active/standby or active/active topology it won't be just one host. | 19:05 |
sbalukoff | So either we make that a list, or we come up with some other entity that describes "the thing hosting the loadbalancer" and the specific Nova VM ids just become a list attached to that. | 19:06 |
blogan_ | sbalukoff: yeah and I thought about that and didn't think it mattered, but now i can't remember why I thought that | 19:06 |
sbalukoff | blogan_: If it doesn't matter, why track the information at all? ;) | 19:06 |
blogan_ | but i dont know why I would think that because it does matter | 19:06 |
sbalukoff | Right. | 19:07 |
blogan_ | no i mean it didn't matter to keep track of any but one | 19:07 |
blogan_ | for some reason | 19:07 |
sbalukoff | Because Octavia's controller needs to actually know the Nova VMs hosting loadbalancers, eh. | 19:07 |
sbalukoff | So the question is, are there any additional attributes that would get assigned to that "thing hosting the loadbalancer"? | 19:07 |
sbalukoff | (The answer to that tells me whether we should just list the host ids, or whether we need that separate object.) | 19:08 |
sbalukoff | Perhaps a flavor? | 19:08 |
blogan_ | well i would assume the flavor would just be on the loadbalancer | 19:08 |
sbalukoff | Right, that works. | 19:08 |
blogan_ | since that can determine how many vms are being used | 19:08 |
sbalukoff | Yep. | 19:08 |
blogan_ | well i think we need another table | 19:09 |
sbalukoff | So right now, it's sounding like host_id should just be a list. | 19:09 |
blogan_ | a comma delimited list? | 19:09 |
sbalukoff | No-- use another table. | 19:09 |
blogan_ | ah ok i was gonna say | 19:09 |
sbalukoff | Having to parse a field in the database defeats a lot of the reason to use relational databases in the first place. | 19:09 |
blogan_ | lol i know | 19:09 |
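The separate table sbalukoff is describing is a plain one-to-many association; a sketch using sqlite3 (table and column names are hypothetical, this predates the actual Octavia schema):

```python
import sqlite3

# Hypothetical schema: one load balancer maps to many host (VM) ids
# via a join table, instead of a comma-delimited host_id column that
# would have to be string-parsed.
SCHEMA = """
CREATE TABLE load_balancer (id TEXT PRIMARY KEY);
CREATE TABLE load_balancer_host (
    load_balancer_id TEXT NOT NULL REFERENCES load_balancer(id),
    host_id          TEXT NOT NULL,
    PRIMARY KEY (load_balancer_id, host_id)
);
"""

def hosts_for(conn, lb_id):
    """All VM hosts backing one load balancer, via a relational query."""
    rows = conn.execute(
        "SELECT host_id FROM load_balancer_host "
        "WHERE load_balancer_id = ? ORDER BY host_id", (lb_id,))
    return [r[0] for r in rows]

conn = sqlite3.connect(':memory:')
conn.executescript(SCHEMA)
conn.execute("INSERT INTO load_balancer VALUES ('lb-1')")
conn.executemany("INSERT INTO load_balancer_host VALUES ('lb-1', ?)",
                 [('vm-a',), ('vm-b',)])  # e.g. an active/standby pair
print(hosts_for(conn, 'lb-1'))  # → ['vm-a', 'vm-b']
```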
blogan_ | this actually brings up another issue then, colocation and apolocation | 19:10 |
sbalukoff | Yep! | 19:10 |
*** sbfox has quit IRC | 19:10 | |
blogan_ | if someone says they want their loadbalancer colocated with another does that mean all the haproxy's are supposed to be on the same vms | 19:10 |
sbalukoff | Those will also effectively be lists. | 19:10 |
blogan_ | and what if they specify a different flavor | 19:11 |
sbalukoff | blogan_: Yes. | 19:11 |
sbalukoff | Then we have to return an error. | 19:11 |
blogan_ | if you use colocate you have to have the same flavor | 19:11 |
sbalukoff | All colocated loadbalancers must be of the same flavor. | 19:11 |
blogan_ | okay | 19:11 |
blogan_ | were you planning on apolocating allowing a list of loadbalancer ids not to be on? or just one loadbalancer id/ | 19:12 |
blogan_ | ? | 19:12 |
sbalukoff | blogan_: A list. | 19:12 |
sbalukoff | Really, what we're doing here is inventing group logic (again?) | 19:12 |
blogan_ | that will get complicated | 19:12 |
xgerman | all colocated once on the same nova flavor? | 19:12 |
blogan_ | doable | 19:12 |
xgerman | that sounds wrong | 19:12 |
blogan_ | xgerman: no octavia flavor | 19:12 |
xgerman | ok | 19:12 |
blogan_ | but actually that would end up causing the same nova flavor | 19:12 |
xgerman | so I can have SSL and non SSL (using different nova flavors colocated) | 19:13 |
sbalukoff | In an indirect way, yes. | 19:13 |
xgerman | well, my advice is to allow different flavors since for SSL we would like to use beefier machines.. | 19:13 |
blogan_ | is SSL supposed to be a feature defined by flavor? | 19:13 |
blogan_ | oh i see | 19:13 |
blogan_ | what you mean | 19:13 |
sbalukoff | Right... | 19:13 |
*** sbfox has joined #openstack-lbaas | 19:13 | |
blogan_ | so with this solution, the user would need to use a beefier flavor for the non-SSL | 19:14 |
sbalukoff | xgerman: Keep in mind that all listeners on a single loadbalancer end up on the same VM. So, there's implied colocation between listeners when they're on the same loadbalancer. | 19:14 |
sbalukoff | What we're talking about is colocation of loadbalancers | 19:14 |
sbalukoff | (ie. different vips) on the same machine. | 19:14 |
xgerman | yep, and I still might want to colocate SSL (more CPU) with non-SSL... | 19:15 |
sbalukoff | It's a relatively infrequent requirement, but we've seen it come up with users who want to make sure, for example, all their development environments use the same physical hardware to save costs and separate them from production. | 19:15 |
xgerman | well, since we don't offer that I really have no dog in that... | 19:16 |
sbalukoff | The apolocation stuff is the way to guarantee separation from production. | 19:16 |
sbalukoff | It comes up in private clouds, mostly, where the customer is trying to minimize their hardware costs, yet still have a functionally equivalent dev environment. | 19:17 |
sbalukoff | Funny thing is this requirement comes out of how things get billed. If you don't bill the same way we do, then it probably won't come up for you at all. | 19:18 |
xgerman | yeah, I just think restricting it to be inside one flavor is limiting... | 19:18 |
sbalukoff | xgerman: I'm not sure I understand why. | 19:18 |
sbalukoff | Are you saying that one flavor could supersede another flavor? That flavor A is a fully-contained subset of flavor B, or something? | 19:18 |
xgerman | well, if I have flavor A and flavor B of a software load balancer I might want it to be on the same hardware | 19:19 |
sbalukoff | I'm not sure we intend for flavors to work that way (that sounds really complicated anyway). | 19:19 |
sbalukoff | xgerman: And by 'hardware' we mean 'Octavia VM', right? | 19:19 |
xgerman | maybe that's my confusion I thought you meant physical box where the VMs run | 19:19 |
sbalukoff | Since the flavor actually defines several key characteristics of the octavia VM (eg. RAM, CPU, HA topology), I don't see how that's possible unless they're the same flavor. | 19:20 |
sbalukoff | xgerman: Oh! No, I don't think so. Though it's probably worth considering whether apolocation requirements should take that into account. | 19:20 |
sbalukoff | Hmmmm... | 19:20 |
sbalukoff | (You know, so that apolocated loadbalancers don't share the same fate.) | 19:20 |
xgerman | yep, I thought you were talking that | 19:21 |
sbalukoff | Yeah, sorry-- I've been somewhat ambiguous as to whether I was talking about physical hardware or virtual hardware. | 19:21 |
*** sbfox has quit IRC | 19:22 | |
xgerman | no worries - I am also easily confused anyway... | 19:22 |
xgerman | I will grab lunch so I am not hungry during our meeting ;-) | 19:23 |
sbalukoff | That sounds like a really good idea. | 19:23 |
*** TrevorV_ has quit IRC | 19:23 | |
blogan_ | sbalukoff: vip_port_id | 19:27 |
blogan_ | sbalukoff: your comments on that are valid, but also brought up another possible issue | 19:28 |
blogan_ | using vip_subnet_id would not be valid if we were using nova-networks, so we can't assume subnets exist for anything the network driver abstracts | 19:28 |
*** barclaac has joined #openstack-lbaas | 19:29 | |
*** barclaac|2 has quit IRC | 19:32 | |
sbalukoff | Hmmm... | 19:47 |
sbalukoff | nova-networks doesn't know what a subnet is? | 19:47 |
sbalukoff | How the heck does it function at all? O.o | 19:47 |
blogan_ | from my quick look at it, nova-networks just has a network entity, but you define a cidr on it | 19:48 |
sbalukoff | Aah. | 19:48 |
blogan_ | its really just a naming issue | 19:48 |
sbalukoff | Well, that's... mostly what we're after, I guess. Do we know how nova-networks deals with overlapping ranges? | 19:48 |
blogan_ | and we can keep it vip_subnet_id but the network driver will have to know that means network in nova-network terms | 19:48 |
blogan_ | no idea on that one | 19:48 |
sbalukoff | We can invent our own term, or, I dunno... use something more intuitive and "industry standard" | 19:49 |
blogan_ | though, from a very fuzzy memory i think it does validate overlapping cidrs | 19:49 |
sbalukoff | Huh... does it disallow them? | 19:49 |
blogan_ | yes | 19:49 |
*** crc32 has joined #openstack-lbaas | 19:49 | |
blogan_ | by validate i mean it won't allow them | 19:49 |
sbalukoff | As in, no two tenants can use the same back-end ip range (even if it's RFC1918 range)? | 19:49 |
blogan_ | no i mean a tenant cannot have two networks with overlapping cidr blocks | 19:50 |
sbalukoff | Aah! | 19:50 |
blogan_ | i dont see why it would limit it across tenants | 19:50 |
blogan_ | but again, I don't know much about nova-networks | 19:50 |
sbalukoff | Yeah. | 19:50 |
sbalukoff | Neither do I. :/ | 19:50 |
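The overlap check itself is cheap to express; a sketch of what a network driver could do with the stdlib ipaddress module (the open question above is the validation policy, per tenant vs across tenants, not the mechanics):

```python
import ipaddress

def cidrs_overlap(cidr_a, cidr_b):
    """True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(cidrs_overlap('10.0.0.0/24', '10.0.0.128/25'))  # True: nested range
print(cidrs_overlap('10.0.0.0/24', '10.0.1.0/24'))    # False: disjoint
```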
blogan_ | back to the host/apolocation talk | 19:51 |
sbalukoff | Well, it sounds like there probably is a way to abstract it out that makes sense... | 19:51 |
blogan_ | there is | 19:51 |
*** dlundquist has joined #openstack-lbaas | 19:51 | |
blogan_ | and it will just be a naming issue | 19:51 |
sbalukoff | Welcome Dustin! | 19:51 |
sbalukoff | blogan_: Yep. | 19:51 |
dlundquist | Hi all | 19:51 |
blogan_ | wouldn't it be easier to define clusters/groups of vms, and a loadbalancer is assigned to that cluster? | 19:52 |
sbalukoff | blogan_: Yes. Yes it would. | 19:52 |
*** TrevorV_ has joined #openstack-lbaas | 19:52 | |
sbalukoff | Wasn't that part of the original proposal, like months ago? | 19:52 |
sbalukoff | I seem to recall having a conversation about this months ago. | 19:52 |
TrevorV_ | Hey guys, how are we doing the meeting? A different channel or this one? | 19:52 |
sbalukoff | At the ATL summit, right? | 19:52 |
sbalukoff | TrevorV: This channel. | 19:52 |
TrevorV_ | kk sweet | 19:53 |
sbalukoff | Starting in 7 minutes. | 19:53 |
blogan_ | I think so, but wasn't ever written down | 19:53 |
blogan_ | but a problem with that is that it implies that an haproxy instance is installed on all the VMs on a cluster for every loadbalancer on that cluster | 19:53 |
blogan_ | i feel like that could cause a problem with ha topologies | 19:54 |
blogan_ | the differing ones | 19:54 |
sbalukoff | Hmmm... | 19:54 |
sbalukoff | So, I'm not sure I follow, mostly because I'm not sure exactly what the model looks like that you're referring to. We should probably define that first, and extrapolate implications from that. | 19:55 |
blogan_ | but I think if the operator can define clusters and how many vms are in them and whether they are active or standby | 19:55 |
sbalukoff | That's an interesting idea. | 19:55 |
blogan_ | sbalukoff: yes a definition would be great, because I may be wrong on this and more thought needs to be put into it | 19:55 |
sbalukoff | Yeah. | 19:56 |
sbalukoff | Perhaps something we can approach after the meeting... unless this is a blocker for getting the DB model stuff sorted? | 19:56 |
blogan_ | well the DB model can change easily | 19:56 |
sbalukoff | If we have to do colocation / apolocation stuff a little later that won't be the end of the world. | 19:56 |
blogan_ | so even if it gets merged, it can be changed quite easily | 19:56 |
sbalukoff | It's a fairly advanced feature, anyway. | 19:56 |
TrevorV_ | sbalukoff (he'll make me do it o_0) | 19:57 |
*** jamiem has joined #openstack-lbaas | 19:57 | |
sbalukoff | Haha | 19:57 |
*** johnsom has joined #openstack-lbaas | 19:57 | |
blogan_ | yeah and I was thinking we would just do the basic features first, and once things got more stable and a workflow is in place, then feature iteration would happen much easier | 19:57 |
sbalukoff | *nod* | 19:58 |
blogan_ | but keeping all these features in mind that we want, because we don't want to paint ourselves into a corner | 19:58 |
xgerman | +1 | 19:58 |
sballe | +1 | 19:59 |
*** tmc3inphilly has joined #openstack-lbaas | 19:59 | |
tmc3inphilly | good day/night all | 19:59 |
sbalukoff | Ok, I think it's about time to start... | 19:59 |
crc32 | 30 seconds | 20:00 |
sballe | blogan_, Can you point me to the bp/review for the data model? | 20:00 |
sbalukoff | #startmeeting Octavia | 20:00 |
crc32 | never mind I'm lagged by 30 seconds | 20:00 |
openstack | Meeting started Wed Aug 27 20:00:12 2014 UTC and is due to finish in 60 minutes. The chair is sbalukoff. Information about MeetBot at http://wiki.debian.org/MeetBot. | 20:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 20:00 |
blogan_ | its our inaugural Octavia IRC meeting | 20:00 |
openstack | The meeting name has been set to 'octavia' | 20:00 |
xgerman | o/ | 20:00 |
sbalukoff | Howdy folks! | 20:00 |
dougwig | o/ | 20:00 |
blogan_ | hello | 20:00 |
sballe | \o/ | 20:00 |
*** min has joined #openstack-lbaas | 20:00 | |
sbalukoff | This is the agenda we're going to be using: | 20:00 |
sbalukoff | #link https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Meeting_2014-08-27 | 20:00 |
tmc3inphilly | +1 | 20:00 |
*** min is now known as Guest62412 | 20:01 | |
sbalukoff | So, let's get started, eh! | 20:01 |
sbalukoff | #topic Review action items from last week | 20:01 |
blogan_ | sballe: sbalukoff will probably link them in 3..2..1.. | 20:01 |
sbalukoff | Haha! | 20:01 |
sballe | blogan_, ok | 20:01 |
*** juliancash has joined #openstack-lbaas | 20:01 | |
rm_work | I'm here, just distracted by a production issue at the moment :) ping me if you need anything specific | 20:02 |
sbalukoff | Well the first item on the list here is to go over the benchmarks put together by German | 20:02 |
xgerman | #link https://etherpad.openstack.org/p/Octavia_LBaaS_Benchmarks | 20:02 |
*** ptoohill-oo has joined #openstack-lbaas | 20:02 | |
sbalukoff | xgerman: Can you go ahead and speak to that? | 20:02 |
blogan_ | rm_work: great reason to have this on IRC | 20:03 |
xgerman | I compared two listeners in one haproxy / and two haproxy processes with each one listener | 20:03 |
xgerman | the results are fairly similar so either one seems viable | 20:03 |
xgerman | throughput was a tad higher for two listeners/1 haproxy but I think tuning can fix that | 20:03 |
xgerman | haproxy - 2 listeners: 545.64 RPS, 430.57 RPS | 20:04 |
xgerman | haproxy - 2 processes: 437.51 RPS, 418.93 RPS | 20:04 |
xgerman | the two values in each line are for the two ports | 20:04 |
sbalukoff | xgerman: How many times did you re-run this benchmark? | 20:04 |
xgerman | each three times | 20:04 |
xgerman | + glided in to the right concurrent requests | 20:04 |
sbalukoff | (Given it was being done on cloud instances, where performance can sometimes be variable. ;) ) | 20:04 |
tmc3inphilly | which version of HAProxy was used for testing? | 20:05 |
xgerman | 1.5 | 20:05 |
dougwig | that kind of variation can just be context switching two processes instead of one, too. | 20:05 |
sbalukoff | Yeah, given the only distinction between the two ports' configurations was the ports themselves, it seems like performance for both in one test ought to be really close. | 20:05 |
xgerman | agreed | 20:05 |
sbalukoff | So, this being one test: haproxy - 2 listeners: 545.64 RPS, 430.57 RPS | 20:06 |
*** barclaac|2 has joined #openstack-lbaas | 20:06 | |
blogan_ | so the difference in the first listeners is not concerning? | 20:06 |
blogan_ | 545 vs 437? | 20:06 |
tmc3inphilly | would it be possible to share your haproxy.conf files? i am curious of you assigned listeners to cores | 20:06 |
sbalukoff | My point is that is wider than I would have expected. | 20:06 |
xgerman | I can share them | 20:06 |
*** jwarendt has joined #openstack-lbaas | 20:06 | |
tmc3inphilly | agreed sbalukoff | 20:06 |
tmc3inphilly | were the tests run once or several times and averaged | 20:07 |
dougwig | not unless it reproduces a lot. at that rate, that's not a lot of variance, given that it's cloud instances. | 20:07 |
blogan_ | i'd like to see benchmarks on much higher RPS | 20:07 |
sbalukoff | dougwig: Right, but that variance is larger than the variance between running 1 process versus 2. | 20:07 |
xgerman | I ran them several times to see if there are big variations but they clocked in together | 20:07 |
*** ptoohill-oo has quit IRC | 20:07 | |
*** ptoohill-oo has joined #openstack-lbaas | 20:08 | |
sbalukoff | that tells me the biggest factor for variable performance here is the fact that it's being done on cloud instances. | 20:08 |
xgerman | I was worried about the RPS, too -- but I used standard hardware and a fairly small box | 20:08 |
dougwig | what is fast enough? the quest for fastest can be endless. does starting simpler mean we can't add in the other? | 20:08 |
blogan_ | sbalukoff: didn't you mention something about getting 10k RPS per instance? | 20:08 |
blogan_ | or did I misunderstand that | 20:08 |
sbalukoff | blogan_: Yes, but that's on bare metal. | 20:08 |
blogan_ | ah okay | 20:09 |
*** barclaac has quit IRC | 20:09 | |
xgerman | also the exercise was to compare the two approaches | 20:09 |
sbalukoff | blogan_: To get that kind of performance... well, apache bench starts to be the bottleneck. | 20:09 |
xgerman | and since they are fairly close I was calling it a tie | 20:09 |
sbalukoff | So you have to hit the proxy / load balancer with several load generators, and the proxy / load balancer needs to be pointed at a bunch of back-ends. | 20:09 |
blogan_ | xgerman: I know, I'm just concerned that the gap could widen with higher loads | 20:09 |
dougwig | was the generator at 100% cpu? ab should scale better than that; it's nowhere near line speed. | 20:10 |
sbalukoff | blogan_: I have that concern too, but probably less so: | 20:10 |
sbalukoff | The reason being that if there were a major difference in performance, it would probably show up at these levels. | 20:10 |
tmc3inphilly | xgerman, is HP using Xen or KVM? | 20:10 |
*** ptoohill-oo has quit IRC | 20:10 | |
xgerman | KVM | 20:10 |
tmc3inphilly | danke | 20:10 |
*** ptoohill-oo has joined #openstack-lbaas | 20:10 | |
sbalukoff | And, let's be honest, the vast majority of sites aren't pushing that kind of traffic. Those that are, are already going to want to be on the beefiest hardware possible. | 20:10 |
*** barclaac has joined #openstack-lbaas | 20:11 | |
sballe | sbalukoff, 100% agree | 20:11 |
xgerman | I also deliberately put the lb on the smallest machine we have to magnify context switching/memory problems | 20:11 |
xgerman | (if any) | 20:11 |
*** ptoohill-oo has quit IRC | 20:11 | |
blogan_ | xgerman: what size were the requests? | 20:11 |
xgerman | 177 Bytes | 20:11 |
dougwig | agree, my point above is that both look fast enough to get started, and not spend too much time here. | 20:11 |
sbalukoff | dougwig: +1 | 20:12 |
*** ptoohill-oo has joined #openstack-lbaas | 20:12 | |
blogan_ | fine by me | 20:12 |
sballe | dougwig, +1 BUT it would be nice if RAX did the benchmark too to get two samples | 20:12 |
tmc3inphilly | did you directly load test the back end servers to get a baseline? | 20:12 |
xgerman | yes, I did | 20:12 |
tmc3inphilly | what were you seeing direct? | 20:13 |
*** barclaac|2 has quit IRC | 20:13 | |
sbalukoff | On that note, is everyone up to speed on the other points of discussion here (that happened on the mailing list, between me and Michael)? | 20:13 |
ptoohill | xgerman, do you have the configs/files for your tests so it could be easily duplicated | 20:13 |
xgerman | Requests per second: 990.45 [#/sec] (mean) | 20:13 |
xgerman | yes, I will upload the haproxy files | 20:13 |
blogan_ | lets talk more about the benchmarks after the meeting | 20:13 |
sbalukoff | Let me rephrase that: Has anyone not read that e-mail exchange, where we discussed pros and cons of each approach? | 20:13 |
sballe | sbalukoff, I haven't but I am stil cathing up... Will do rigth after the meeting | 20:14 |
dougwig | i have not, but nor do i feel strongly either way. | 20:14 |
blogan_ | I've read it and multiple processes is fine, but was waiting on benchmarks too | 20:14 |
dougwig | because last i read, this wasn't a corner painting event. | 20:14 |
sbalukoff | dougwig: Mostly it affects a couple workflow issues having to do with communication with the Octavia VM. So no, if we have to change it won't be *that* hard. | 20:15 |
blogan_ | dougwig: i think it is | 20:15 |
sbalukoff | blogan_: Oh? | 20:15 |
dougwig | please expand? because if it is, i think we've got a crap interface to the VMs. | 20:15 |
blogan_ | you mean the process per listener vs process per loadbalancer? | 20:16 |
sbalukoff | blogan_: That is what we're discussing, I think. | 20:16 |
dougwig | correct. | 20:16 |
*** TrevorV_ has quit IRC | 20:17 | |
blogan_ | well stats gathering would be different for each approach, provisioning, updating all of that would be affected | 20:17 |
*** samuelbercovici has joined #openstack-lbaas | 20:17 | |
sbalukoff | blogan_: That is true. | 20:18 |
xgerman | +1 | 20:18 |
blogan_ | but that would fall under sbalukoff's "not that hard" | 20:18 |
tmc3inphilly | xgerman, could you please rerun your tests with the keepalive flag (-k) enabled? This will greatly increase your performance | 20:18 |
barclaac | dougwig, why do you think we have a "crap interface to the VMs" | 20:18 |
sbalukoff | Haha! | 20:18 |
sbalukoff | You're right. I'm probably trivializing it too much. | 20:18 |
xgerman | and that leads me to wondering if we should abstract the vm <-> controller interface and not just ship haproxy files back and forth | 20:18 |
dougwig | barclaac: if it's something other than "not that hard", it implies we're not abstracting the vm implementation enough. | 20:18 |
sballe | xgerman, +1 I agree | 20:19 |
xgerman | dougwig +1 | 20:19 |
barclaac | Shouldn't the decision on process/listener or process/lb be an implementation detail within the VM? It seems we're making a low level decision and then allowing that to percolate through the rest of Octavia | 20:19 |
dougwig | barclaac: yes, that's another way of saying exactly what i'm trying to communicate. | 20:19 |
sbalukoff | barclaac: Octavia is an implementation. | 20:19 |
sbalukoff | So implementation details are rather important. | 20:19 |
barclaac | But within Octavia we want to try to have as loose a coupling as possible between components. | 20:20 |
dougwig | (although it's vm implementation + vm driver implementation, to be precise.) | 20:20 |
sballe | sbalukoff, But also a framework so yes implementaiton is important but we need the right level of abstraction | 20:20 |
barclaac | If the control plane "knows" that HAProxy is in use we're leaking architectural concerns | 20:20 |
sballe | barclaac, +1 | 20:20 |
xgerman | +1 | 20:20 |
sbalukoff | Um... | 20:20 |
sbalukoff | Octavia knows that haproxy is in use. | 20:20 |
sballe | sbalukoff, We need to keep the interface clean | 20:20 |
dougwig | disagree, but if something outside the controller/vm driver knows about haproxy, that's too much leak. | 20:21 |
barclaac | I think that's the statement that I don't agree with. | 20:21 |
sbalukoff | It's sort of a central design component. | 20:21 |
blogan_ | sbalukoff: do we need to store anything haproxy specific in the database? | 20:21 |
barclaac | I'm not sure haproxy is a central design component. Herding SW load balancers is the central abstraction. HAProxy is an implementation detail. | 20:22 |
sballe | sbalukoff, I always looked at ha-proxy as the first implementation. We could switch ha-proxy out for something else in the future | 20:22 |
barclaac | If I want to create a VM with nginx would that be possible? | 20:22 |
sbalukoff | barclaac: You have no idea how much I hate the phrase "it's an implementation detail" especially when we're discussing an implementation. :P | 20:22 |
sballe | barclaac, +1 | 20:22 |
dougwig | the skeleton that blogan_ and i briefly discussed had the notion of "vm drivers", which would encapsulate the haproxy or nginx or other implementation details that lived outside the VMs. | 20:22 |
barclaac | sbalukoff +2 :-) | 20:22 |
blogan_ | from my understanding, we were not going to do anything beside haproxy at first, but abstract everything enough to allow easy pluggability | 20:23 |
tmc3inphilly | wouldn't we need to support flavors if we want to have different backends? | 20:23 |
xgerman | dougwig +1 that would also allow a future UDP load balancing solution to use Octavia | 20:23 |
sbalukoff | Well, the haproxy implementation will be the reference. | 20:23 |
dougwig | sballe +1, blogan_ +1 | 20:23 |
sbalukoff | And implementations have quirks. | 20:23 |
dougwig | xgerman: bring on the udp. | 20:23 |
xgerman | :-) | 20:24 |
sbalukoff | It's going to be the job of any other implementation to work around those quirks or otherwise have similarly predictable behavior. | 20:24 |
xgerman | I want to avoid that each implementation has to talk in haproxy config files... | 20:24 |
barclaac | xgerman, you've just hit the nail on the head! | 20:24 |
sballe | sbalukoff, I thought dougwig might want to switch ha-proxy out with his a10 back-end in the future | 20:25 |
barclaac | We should abstract those files | 20:25 |
dougwig | pull up, pull up. do we agree that we're going to have an haproxy vm, and a file or module that deals with that vm, and that module is not going to be "octavia", but rather something like "octavia.vm.driver.haproxy", right ? in which case, i'm not sure this detail of how many processes really matters. | 20:25 |
sbalukoff | xgerman: Right. It depends on which level you're talking about, and we are sure as hell not being specific enough about that in this discussion. | 20:25 |
sbalukoff | sballe: Yes, and he should be able to. | 20:25 |
sbalukoff | dougwig: Thank you. Yes. | 20:25 |
sballe | sbalukoff, ok so we need to make sure we have enough abstraction to allow him to do that and not add too much haproxy stuff into the main framework | 20:26 |
* dougwig suspects that we are all violently agreeing. | 20:27 | |
sbalukoff | So it matters for the reference implementation (which is what the point of the discussion was, I thought). | 20:27 |
sbalukoff | dougwig: We are. | 20:27 |
blogan_ | I think the only thing that will know about haproxy is a driver on the controller, and then also if the VM is running Octavia code it would need an haproxy driver | 20:27 |
rm_work | AGREEMENT. | 20:27 |
* rm_work is caught up | 20:27 | |
tmc3inphilly | +1 blogan_ | 20:27 |
xgerman | +1 | 20:28 |
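The "vm driver" boundary everyone just agreed on could be sketched roughly like this (a minimal illustration, not actual Octavia code; all class and method names here are hypothetical):

```python
from abc import ABC, abstractmethod


class VMDriver(ABC):
    """Hypothetical controller-side driver interface.

    The controller talks only to this interface, so nothing outside
    the driver needs to know whether the VM runs haproxy, nginx, or
    something else.
    """

    @abstractmethod
    def deploy_listener(self, listener):
        """Render backend-specific config and push it to the VM."""

    @abstractmethod
    def get_stats(self, listener_id):
        """Collect stats in a backend-neutral format."""


class HAProxyDriver(VMDriver):
    """Reference driver: only this class knows about haproxy."""

    def __init__(self):
        self.deployed = {}

    def deploy_listener(self, listener):
        # A real driver would render an haproxy config template and
        # ship it to the VM; here we just record the call.
        self.deployed[listener["id"]] = listener

    def get_stats(self, listener_id):
        # A real driver would query the haproxy stats socket.
        return {"listener_id": listener_id, "active_connections": 0}
```

The point of this shape is that only `HAProxyDriver` mentions haproxy; a hypothetical `NginxDriver` could implement the same interface without any controller changes, which is the leak barclaac and dougwig want to avoid.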
dougwig | alright, circling back to 1 vs 2 processes, does it matter? | 20:28 |
blogan_ | it matters in the haproxy driver right? | 20:28 |
sbalukoff | dougwig: It does for the reference implementation. | 20:28 |
blogan_ | did i just invoke "implementation detail"? | 20:28 |
sbalukoff | So, would anyone object to making a simple vote on this and moving on? | 20:29 |
tmc3inphilly | couldn't it be a configurable option of the haproxy driver? | 20:29 |
blogan_ | tmc3inphilly: two drivers | 20:29 |
dougwig | right, let whoever writes that driver/VM decide whether to add the complexity now or in a future commit. | 20:29 |
sbalukoff | tmc3inphilly: I don't see a point in that. | 20:29 |
blogan_ | sbalukoff: really it could just be two haproxy drivers | 20:29 |
sbalukoff | dougwig: The reason we are discussing this is because people from HP have (had?) a strong opinion about it. | 20:29 |
dougwig | and right now we have 0 drivers, so cart, meet the horse. | 20:30 |
sbalukoff | This conflicted with my usual strong opinions about everything. | 20:30 |
dougwig | so HP can write the driver. | 20:30 |
sbalukoff | Hence the discussion. | 20:30 |
dougwig | :) | 20:30 |
sballe | blogan_, we would have to maintain extra code | 20:30 |
xgerman | we did our benchmark and it looks like a tie - so we bow to whatever driver is written for now | 20:30 |
blogan_ | sballe: totally agree, but that's the point: what is the way WE want to go with this? and if we have two different views then two drivers would need to be created? | 20:30 |
blogan_ | so it sounds to me like we can make a consensus decision on what this haproxy driver will do | 20:32 |
sbalukoff | Ok, I want to move on. | 20:32 |
dougwig | anyone strongly opposed to 1 lb:1 listener to start? | 20:32 |
dougwig | if not, let's move on. | 20:32 |
sbalukoff | But I don't want to do so without a simple decision on this... | 20:32 |
sbalukoff | So. let's vote on it. | 20:32 |
dougwig | abstain | 20:32 |
xgerman | abstain | 20:32 |
sbalukoff | #startvote Should the (initial) reference implementation be done using one haproxy process per listener? | 20:32 |
openstack | Begin voting on: Should the (initial) reference implementation be done using one haproxy process per listener? Valid vote options are Yes, No. | 20:32 |
openstack | Vote using '#vote OPTION'. Only your last vote counts. | 20:32 |
blogan_ | +1 for multiple processes | 20:32 |
blogan_ | oh | 20:32 |
sbalukoff | #vote Yes | 20:32 |
blogan_ | #vote Yes | 20:33 |
sbalukoff | No one else want to vote? | 20:33 |
sbalukoff | I'll give you one more minute... | 20:34 |
xgerman | you got a runaway victory :-) | 20:34 |
sbalukoff | Haha! | 20:34 |
sballe | lol | 20:34 |
dlundquist | #vote Yes | 20:34 |
dlundquist | On account of failure isolation | 20:34 |
sbalukoff | Ok, voting is about to end... | 20:35 |
sbalukoff | #endvote | 20:35 |
openstack | Voted on "Should the (initial) reference implementation be done using one haproxy process per listener?" Results are | 20:35 |
jorgem | ... | 20:35 |
blogan_ | no results! | 20:35 |
juliancash | German gets my vote. :-) | 20:35 |
blogan_ | no one wins! | 20:35 |
tmc3inphilly | #drumroll | 20:35 |
sbalukoff | A mystery apparently. | 20:35 |
sbalukoff | HAHA | 20:35 |
barclaac | I'm fine with the single/multiple HAProxy question - my issue is the control plane knowing about HAProxy instead of talking in terms of IPs, backend nodes etc. | 20:35 |
dougwig | barclaac: we all agree on that point. | 20:36 |
sbalukoff | barclaac: Have you been following the rest of the Octavia discussion at this point? | 20:36 |
sbalukoff | Because i think you're arguing something we've agreed upon for... probably months. | 20:36 |
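The model just voted in — one haproxy process per listener, chosen partly for the failure isolation dlundquist mentioned — can be sketched as below. This is only an illustration: `sleep` stands in for the haproxy binary so the example runs anywhere, and all names are made up.

```python
import subprocess


class PerListenerProcessManager:
    """Sketch of the one-process-per-listener model.

    Each listener gets its own dedicated child process, so a crash in
    one listener's process cannot take down the others.
    """

    def __init__(self, command=("sleep", "60")):
        # `command` is a stand-in for launching haproxy with a
        # per-listener config file.
        self.command = command
        self.processes = {}

    def start_listener(self, listener_id):
        # One dedicated process per listener.
        self.processes[listener_id] = subprocess.Popen(self.command)

    def stop_listener(self, listener_id):
        # Stopping one listener leaves all the others running.
        proc = self.processes.pop(listener_id)
        proc.terminate()
        proc.wait()
```

With a single shared process instead, reconfiguring or crashing one listener would affect every listener on the VM, which is the trade-off the vote was about.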
sbalukoff | Anyway, moving on... | 20:36 |
sbalukoff | #topic Ready to accept v0.5 component design? | 20:36 |
blogan_ | sbalukoff: I think he's getting at making sure we're abstracting enough, which seemed to be in question a few minutes ago | 20:37 |
blogan_ | but i think we all agree on that now | 20:37 |
xgerman | blogan_ +1 - | 20:37 |
xgerman | next topic | 20:37 |
sbalukoff | Heh! Indeed. | 20:37 |
blogan_ | I'm ready to accept it | 20:37 |
barclaac | Of course. It was blogan's comment above about having an haproxy driver in the controller (sorry for delay, had to read back up the stack) | 20:37 |
dougwig | sbalukoff: i'm not today, but i will commit to having a + or - by friday. | 20:38 |
sbalukoff | Ok, so I'm not going to +2 the design since I wrote it, but we'll need a couple of the other octavia cores to do so. :) | 20:38 |
blogan_ | I haven't given a +2 because I think this is important enough to have everyone onboard | 20:38 |
xgerman | we just discovered that we can't have Neutron Floating IPs in private networks | 20:39 |
sbalukoff | On that note then: Does anyone have any major issues with the design that are worth discussing here? | 20:39 |
dougwig | if i have any, i'll put them on gerrit well before friday, and ping you directly. | 20:39 |
sballe | sbalukoff, I am fine with the spec assuming that we are flexible with changing things as we start implementing it. | 20:39 |
sbalukoff | xgerman: I think if we are working with an abstract networking interface anyway, that shouldn't be a problem. | 20:39 |
sbalukoff | sballe: Yes, of course. This spec is to determine initial direction. | 20:40 |
xgerman | yep. agreed | 20:40 |
sbalukoff | Basically, I look at it as the 10,000 mile overview map of how the components fit together. | 20:40 |
sbalukoff | It's definitely open to change as we do the actual implementation and discover where our initial assumptions might have been wrong. | 20:41 |
blogan_ | I totally expect that to happen | 20:41 |
sbalukoff | (Not that I'm ever wrong about anything, of course. I thought I was wrong once, but I was mistaken.) | 20:41 |
blogan_ | I'd be amazed if everything went according to plan | 20:41 |
sballe | blogan_, sbalukoff me too | 20:41 |
xgerman | <put in appropriate A-Team reference> | 20:42 |
tmc3inphilly | I love it when a plan comes together | 20:42 |
xgerman | :-) | 20:42 |
sbalukoff | Ok, so: Doug, German, and/or Brandon: Can I get a commitment to do a (final-ish) review of the component design this week and work toward a merge? | 20:42 |
dougwig | yes | 20:43 |
blogan_ | I can +2 it now and let doug be the final +2 | 20:43 |
blogan_ | or -1 | 20:43 |
blogan_ | depending on what he decides | 20:43 |
sbalukoff | #action v0.5 component design to be reviewed / moved to merge this week. | 20:43 |
xgerman | yeah, I can +2, too | 20:43 |
sbalukoff | Well, if there's a -1, we can probably fix that this week, too. | 20:43 |
xgerman | dougwig just ping us and we will +2 | 20:43 |
sbalukoff | Ok, let's move on to the next topic | 20:44 |
sbalukoff | #topic Ready to accept Brandon's initial database migrations? | 20:44 |
blogan_ | NO | 20:44 |
blogan_ | lol | 20:44 |
dougwig | lol | 20:44 |
blogan_ | per your comments, some need to be changed | 20:44 |
sbalukoff | Haha! Indeed. | 20:44 |
xgerman | well, if the creator has no confidence | 20:44 |
blogan_ | actually brings up another topic we can discuss | 20:44 |
sbalukoff | blogan_: Ok, so, do you have enough info there to make the fixes, or are there details that will need to be discussed in the group? | 20:45 |
sbalukoff | blogan_: Oh, what's that? | 20:45 |
blogan_ | I think we should try to start with basic load balancing features first, and then iterate on that after we have a more stable workflow and codebase | 20:45 |
xgerman | +1 | 20:45 |
blogan_ | so should colocation, apolocation, health monitor settings, etc be saved for that | 20:46 |
xgerman | yeah, we can put them in with the understanding they might need to be refactored/refined | 20:46 |
sbalukoff | blogan_: I'm game for that, so long as we don't do anything to paint ourselves into a corner with regard to later planned features. | 20:46 |
sbalukoff | blogan_: Want to call that v0.1 or something? ;) | 20:46 |
sballe | I might be naive but I am not sure how we can have a LB without health monitoring | 20:46 |
xgerman | 0.5 Beta | 20:46 |
sbalukoff | (I don't think we need a component design document for it, per se.) | 20:46 |
blogan_ | sbalukoff: v0.25 | 20:46 |
sbalukoff | Heh! Ok. | 20:46 |
blogan_ | sballe: i mean extra health monitor settings | 20:47 |
sbalukoff | sballe: I don't think health monitoring is one of the non-basic features. | 20:47 |
sbalukoff | Yes. | 20:47 |
sballe | oh ok | 20:47 |
sbalukoff | blogan_ Agreed. | 20:47 |
blogan_ | basically I'm kind of following what neutron lbaas has exposed | 20:47 |
sbalukoff | v2? | 20:47 |
blogan_ | yes | 20:47 |
sbalukoff | Ok, that works. | 20:48 |
sbalukoff | Non-shitty core object model and all. :) | 20:48 |
sbalukoff | Any objections? | 20:48 |
dougwig | should be easy enough to do a v1 driver on that first pass, and get a stable cli/ui to test with. | 20:48 |
tmc3inphilly | #salty | 20:48 |
ptoohill | #teamsalty | 20:48 |
sbalukoff | dougwig: +1 | 20:48 |
sbalukoff | Ok then! | 20:48 |
blogan_ | I'll get those changes up today, so please give them some love and attention | 20:49 |
sbalukoff | #agreed We will start with a basic (Neutron LBaaS v2) feature set first and iterate more advanced features on that. We shall call it v0.25 | 20:49 |
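blogan_'s plan to follow what Neutron LBaaS v2 exposes would translate into an object model along these lines (an illustrative sqlite sketch of the v2 hierarchy — load balancer, listeners, pools, members — not the actual migrations under review; column names are assumptions):

```python
import sqlite3

# Minimal sketch of the Neutron-LBaaS-v2-style object model:
# one load balancer has many listeners, each listener has a pool,
# and each pool has many backend members.
schema = """
CREATE TABLE load_balancer (
    id TEXT PRIMARY KEY,
    name TEXT,
    vip_address TEXT
);
CREATE TABLE listener (
    id TEXT PRIMARY KEY,
    load_balancer_id TEXT REFERENCES load_balancer(id),
    protocol TEXT,
    protocol_port INTEGER
);
CREATE TABLE pool (
    id TEXT PRIMARY KEY,
    listener_id TEXT REFERENCES listener(id),
    lb_algorithm TEXT
);
CREATE TABLE member (
    id TEXT PRIMARY KEY,
    pool_id TEXT REFERENCES pool(id),
    address TEXT,
    protocol_port INTEGER
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
```

Features like colocation/apolocation and extra health-monitor settings would then be layered onto this base in the later iterations discussed above.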
xgerman | dougwig - no v1 driver, we might end up with users | 20:49 |
sbalukoff | #action Review blogan's changes regarding the v0.25 iteration. | 20:49 |
blogan_ | next topic! | 20:50 |
sbalukoff | I want to skip ahead a bit on this since we're almost out of time. | 20:50 |
sbalukoff | #topic Discuss ideas for increasing project velocity | 20:50 |
sbalukoff | So! Right now, it feels like there ought to be more room for many people to be doing things to bring this project forward. | 20:51 |
dougwig | blueprints, milestones, track deliverables to dates. | 20:51 |
sballe | sbalukoff, +1 | 20:51 |
blogan_ | should we put all these tasks as blueprints in launchpad and allow people to just take them as they want? | 20:51 |
sbalukoff | blogan_: I'm thinking that's probably a good idea. | 20:52 |
sballe | blogan_, sounds like a good idea. | 20:52 |
xgerman | +1 | 20:52 |
johnsom | Some have volunteered for work already | 20:52 |
sballe | blogan_, We can always coordinate among ourselves | 20:52 |
xgerman | johnsom +1 | 20:52 |
sbalukoff | johnsom: Yep! So, we'll want to make sure that's reflected in the blueprints. | 20:52 |
xgerman | sbalukoff, you wanted to start a standup etherpad | 20:53 |
blogan_ | okay I'll put as many blueprints as I can | 20:53 |
sbalukoff | xgerman: Thanks-- yes. I forgot about that. | 20:53 |
johnsom | Agreed. The blueprint process is on my list to learn in the near term | 20:53 |
sbalukoff | #action sbalukoff to start a standup etherpad for Octavia | 20:53 |
sbalukoff | johnsom: I think it's on many of our lists. :) | 20:53 |
sballe | same here | 20:54 |
blogan_ | I'll add as many blueprints as I can, but they're probably not going to be very detailed | 20:54 |
xgerman | no problem we can detail | 20:54 |
sbalukoff | I also wanted to ask publicly (and y'all can feel free to respond publicly or private as you see fit), what else can I be doing to both make sure you and your individual teams know what they can be doing to be useful, and what else can I do to help you convince your management it's worthwhile to spend time on Octavia? | 20:54 |
sballe | blogan_, how are we going to do design review before the code shows up in gerrit? It takes forever for us to understand the code. | 20:54 |
sbalukoff | blogan_: I'll also work on adding blueprints and filling in detail. | 20:55 |
blogan_ | sballe: good question, and that should be the spec process, but I really think the spec process will totally slow down velocity in the beginning for trivial tasks | 20:55 |
sbalukoff | blogan_: +1 | 20:55 |
sballe | blogan_, sbalukoff I would like to see details in the bp so we know how it will be implemented with adjustments later if needed | 20:56 |
sbalukoff | For major features (eg. TLS or L7) we'll definitely want them specced (probably just copied from work on neutron LBaaS), but yes, trivial things shouldn't require a spec up front right now. | 20:56 |
sballe | maybe I am talking about specs and not bps | 20:56 |
blogan_ | sballe: I think the blueprints can be created initially to give out a task list, but when someone grabs a blueprint to work on, they should be responsible for detailing it out | 20:56 |
dougwig | sballe: let's take one example, which is the directory skeleton for octavia. the best spec for that is going to be the code submission. and in the early days, that's going to be true for a fair number of commits. | 20:56 |
sballe | blogan_, we are totally in agreement. | 20:56 |
sbalukoff | sballe: Cool. After the meeting, can you point me to a few specific areas where you'd like to see more detail? | 20:57 |
blogan_ | i have a lot | 20:57 |
sbalukoff | We've got three minutes left. Anyone have anything dire they want to bring up? | 20:57 |
sballe | dougwig, with the old LBaaS team we ran into an issue where they didn't document anything and we were told the code was the documentation and design document. | 20:57 |
xgerman | yeah, let's avoid that | 20:58 |
xgerman | also we need to bring in QA early | 20:58 |
sbalukoff | sballe: Yeah, I hate that, too. | 20:58 |
samuelbercovici | has anyone spoken with or heard from mestery? | 20:58 |
sbalukoff | I loves me some good docs. | 20:58 |
blogan_ | sbalukoff: dont we know it | 20:58 |
blogan_ | samuelbercovici: nope we have not | 20:58 |
sbalukoff | samuelbercovici: I have not. | 20:58 |
dougwig | samuelbercovici: he was at the neutron meeting monday. | 20:58 |
sbalukoff | And, still no incubator update, IIRC. | 20:59 |
blogan_ | some discussions going on the ML about it | 20:59 |
dougwig | i asked about that at the meeting, and was told to expect it yesterday. | 20:59 |
blogan_ | but nothing official | 20:59 |
blogan_ | I was told to expect it 2 or 3 weeks ago | 20:59 |
barclaac | I chatted with him briefly yesterday. I'd expect by the end of the week. | 20:59 |
sbalukoff | Ok, folks! Meeting time is up. Yay IRC. :P | 20:59 |
samuelbercovici | i was not able to get from him a reason why lbaas should go to incubation | 20:59 |
blogan_ | I now expect it at the same time Half-Life 3 is released | 20:59 |
sbalukoff | #endmeeting | 21:00 |
openstack | Meeting ended Wed Aug 27 21:00:03 2014 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 21:00 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/octavia/2014/octavia.2014-08-27-20.00.html | 21:00 |
samuelbercovici | LOL | 21:00 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/octavia/2014/octavia.2014-08-27-20.00.txt | 21:00 |
openstack | Log: http://eavesdrop.openstack.org/meetings/octavia/2014/octavia.2014-08-27-20.00.log.html | 21:00 |
sbalukoff | Not that we have to stop discussing things, eh. | 21:00 |
dougwig | blogan_: as long as it's not a duke nukem forever quality release. | 21:00 |
samuelbercovici | let's hope it will not be a new Duke Nukem | 21:00 |
rm_work | alright now time for me to go to the airport :P | 21:00 |
dougwig | hahaha. | 21:00 |
dougwig | i now have to go back into the sales training conference. | 21:00 |
*** jwarendt has quit IRC | 21:00 | |
samuelbercovici | dougwig: like the way you are thinking... | 21:01 |
sbalukoff | dougwig: You poor soul. | 21:01 |
sballe | dougwig, have fun | 21:01 |
sballe | rm_work, safe travel | 21:01 |
samuelbercovici | bye | 21:01 |
blogan_ | I'm taking a break now | 21:01 |
*** samuelbercovici has quit IRC | 21:01 | |
rm_work | :) | 21:02 |
*** rm_work is now known as rm_work|away | 21:03 | |
*** tmc3inph_ has joined #openstack-lbaas | 21:04 | |
*** tmc3inphilly has quit IRC | 21:05 | |
*** jamiem has quit IRC | 21:08 | |
*** markmcclain has quit IRC | 21:12 | |
*** ptoohill-oo has quit IRC | 21:15 | |
*** tmc3inph_ has quit IRC | 21:16 | |
*** markmcclain has joined #openstack-lbaas | 21:18 | |
*** sbfox has joined #openstack-lbaas | 21:27 | |
blogan_ | so how did everyone like the irc part? | 21:34 |
xgerman | I didn't have to shave - so this is a + | 21:34 |
blogan_ | ol | 21:34 |
blogan_ | and lol | 21:34 |
*** ajmiller has quit IRC | 21:47 | |
sbalukoff | Heh! I feel like we accomplished about 1/3rd as much during this meeting as what took us 40 minutes last time. | 21:48 |
sbalukoff | Also, I see crc32's point about it being difficult to keep track of what is being discussed when there isn't the exclusivity of one person talking at a time. | 21:48 |
sbalukoff | (ie. the "text orgy" thing happened several times in that meeting.) | 21:49 |
sbalukoff | Also, the automated meeting minutes are not as good as the ones Trevor prepared. | 21:49 |
sbalukoff | Because our conversation was pretty free-form at times (today), it's sometimes difficult to know when we've reached a solid point that should be noted in the meeting minutes, without someone going back and reviewing. | 21:50 |
sbalukoff | Also, xgerman: I don't shave for the video meetings. You're lucky I'm wearing clothing at all. XD | 21:51 |
xgerman | in that case to allow for your office nudity we should stay in irc ;-) | 21:52 |
*** rm_work|away is now known as rm_work | 21:57 | |
sbalukoff | HAHA | 21:58 |
rm_work | well, I liked it, because i had to be distracted for a bit due to a production issue, but was able to catch back up easily | 21:58 |
rm_work | if i'd popped back into webex 30 min through, I would have been like "lolwut" | 21:59 |
sbalukoff | rm_work: Yes, it would have taken more time to get up to speed on the first half of the meeting. | 21:59 |
rm_work | like, until after the meeting when notes were posted <_< | 22:00 |
rm_work | so, not really possible | 22:00 |
*** Guest62412 has quit IRC | 22:05 | |
*** sbfox has quit IRC | 22:19 | |
*** sbfox has joined #openstack-lbaas | 22:32 | |
*** barclaac|2 has joined #openstack-lbaas | 22:33 | |
*** sballe_ has joined #openstack-lbaas | 22:35 | |
*** barclaac has quit IRC | 22:36 | |
*** sballe has quit IRC | 22:37 | |
*** markmcclain has quit IRC | 22:38 | |
dougwig | sbalukoff: i like the text orgy, but i read and type quickly. i think chairing an IRC meeting, keeping stuff out of the weeds, kicking things to the ML, etc., is something that happens after chairing a few times, and that meeting let a few topics wander and go on too long, which exacerbated the "not getting as much done" issue. | 22:45 |
* rm_work agrees with dougwig | 22:46 | |
*** jorgem has quit IRC | 22:59 | |
blogan_ | dougwig: +1 | 23:02 |
blogan_ | plus I think we got a lot more people able to voice their opinions than if it were in webex | 23:03 |
rm_work | it's hard to actually physically interrupt someone else speaking when it's on IRC, even if you're talking in parallel :P | 23:04 |
rm_work | which kept happening in webex >_< | 23:04 |
rm_work | I often didn't say things I was thinking just because it wasn't easy to get in the flow of conversation | 23:04 |
rm_work | then again, some people would consider that a positive :P | 23:04 |
*** crc32 has quit IRC | 23:05 | |
rm_work | brb boarding | 23:05 |
blogan_ | rm_work: yes it has the potential to be a positive | 23:05 |
blogan_ | but I think its a negative more times than not | 23:05 |
rm_work | yes, because hopefully people DO want to hear my thoughts :) | 23:05 |
rm_work | err, right, boarding | 23:05 |
blogan_ | have fun | 23:06 |
*** rm_work is now known as rm_work|away | 23:08 | |
*** sbfox has quit IRC | 23:23 | |
*** dlundquist has left #openstack-lbaas | 23:25 |