*** chlong has joined #openstack-lbaas | 00:55 | |
*** chlong has quit IRC | 01:49 | |
*** mlavalle has quit IRC | 02:11 | |
*** bana_k has joined #openstack-lbaas | 02:44 | |
*** clev-away has quit IRC | 03:26 | |
*** clev has joined #openstack-lbaas | 03:31 | |
*** I has joined #openstack-lbaas | 03:35 | |
*** I is now known as Guest25059 | 03:36 | |
*** Guest25059 has quit IRC | 03:49 | |
*** haigang has joined #openstack-lbaas | 03:49 | |
*** bharathm has joined #openstack-lbaas | 04:12 | |
crc32 | anyone around? I can't rebase anything from CR 202336 on up. | 04:21
*** bana_k has quit IRC | 04:37 | |
openstackgerrit | Carlos Garza proposed openstack/octavia: Adding amphora failover flows https://review.openstack.org/202336 | 04:45 |
openstackgerrit | Carlos Garza proposed openstack/octavia: health manager service https://review.openstack.org/160061 | 04:47 |
*** ig0r__ has joined #openstack-lbaas | 04:59 | |
*** ig0r_ has quit IRC | 04:59 | |
openstackgerrit | Carlos Garza proposed openstack/octavia: Implement UDP heartbeat sender and receiver https://review.openstack.org/201882 | 05:00 |
*** haigang has quit IRC | 05:23 | |
*** numans has joined #openstack-lbaas | 05:39 | |
*** haigang has joined #openstack-lbaas | 05:47 | |
openstackgerrit | Carlos Garza proposed openstack/octavia: Implementing EventStreamer https://review.openstack.org/218735 | 06:01 |
openstackgerrit | Stephen Balukoff proposed openstack/neutron-lbaas: Make pools independent of listeners https://review.openstack.org/218560 | 06:09 |
*** Alex_Stef has joined #openstack-lbaas | 06:23 | |
*** haigang has quit IRC | 06:27 | |
*** haigang has joined #openstack-lbaas | 06:28 | |
*** haigang has quit IRC | 06:29 | |
*** haigang has joined #openstack-lbaas | 06:36 | |
*** apuimedo_ has joined #openstack-lbaas | 07:27 | |
*** jschwarz has joined #openstack-lbaas | 07:32 | |
openstackgerrit | Brandon Logan proposed openstack/octavia: Do not remove egress sec group rules on plug vip https://review.openstack.org/218754 | 07:53 |
*** apuimedo_ has quit IRC | 08:05 | |
*** Jianlee has joined #openstack-lbaas | 08:11 | |
*** kiran-r has joined #openstack-lbaas | 08:15 | |
openstackgerrit | Stephen Balukoff proposed openstack/neutron-lbaas: Make pools independent of listeners https://review.openstack.org/218560 | 08:20 |
*** mixos has quit IRC | 08:28 | |
*** Jianlee is now known as Jian0612 | 08:31 | |
*** nmagnezi has joined #openstack-lbaas | 08:34 | |
*** amotoki has joined #openstack-lbaas | 08:41 | |
*** haigang has quit IRC | 08:44 | |
*** haigang has joined #openstack-lbaas | 08:44 | |
*** haigang has quit IRC | 08:50 | |
*** haigang has joined #openstack-lbaas | 08:52 | |
*** crc32 has quit IRC | 08:55 | |
*** bharathm has quit IRC | 09:12 | |
*** haigang has quit IRC | 09:20 | |
*** haigang has joined #openstack-lbaas | 09:33 | |
*** numans has quit IRC | 09:36 | |
*** ig0r_ has joined #openstack-lbaas | 09:37 | |
*** ig0r__ has quit IRC | 09:39 | |
*** bharathm has joined #openstack-lbaas | 10:01 | |
*** numans has joined #openstack-lbaas | 10:05 | |
*** bharathm has quit IRC | 10:06 | |
*** haigang has quit IRC | 10:14 | |
*** haigang has joined #openstack-lbaas | 10:15 | |
*** Jian0612 has quit IRC | 10:30 | |
*** nmagnezi has quit IRC | 10:53 | |
*** bharathm has joined #openstack-lbaas | 11:02 | |
*** bharathm has quit IRC | 11:07 | |
*** nmagnezi has joined #openstack-lbaas | 11:22 | |
*** nmagnezi has quit IRC | 11:24 | |
*** nmagnezi has joined #openstack-lbaas | 11:29 | |
*** amotoki has quit IRC | 11:39 | |
*** amotoki has joined #openstack-lbaas | 11:39 | |
*** openstackgerrit has quit IRC | 11:46 | |
*** openstackgerrit has joined #openstack-lbaas | 11:47 | |
*** bharathm has joined #openstack-lbaas | 12:51 | |
*** bharathm has quit IRC | 12:56 | |
*** kbyrne has quit IRC | 13:19 | |
*** kbyrne has joined #openstack-lbaas | 13:32 | |
*** kiran-r has quit IRC | 13:48 | |
*** numans has quit IRC | 14:36 | |
*** bharathm has joined #openstack-lbaas | 14:40 | |
*** apuimedo has joined #openstack-lbaas | 14:42 | |
*** mixos has joined #openstack-lbaas | 14:42 | |
*** bharathm has quit IRC | 14:45 | |
*** mixos has quit IRC | 14:46 | |
*** Alex_Stef has quit IRC | 14:56 | |
*** apuimedo is now known as apuimedo|away | 15:13 | |
*** woodster_ has joined #openstack-lbaas | 15:18 | |
*** sbalukoff has quit IRC | 15:29 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/octavia: Updated from global requirements https://review.openstack.org/218911 | 15:35 |
xgerman | blogan, johnsom, rm_work: Please don’t approve any patches until the smoke clears: http://lists.openstack.org/pipermail/openstack-dev/2015-August/073270.html | 15:41
johnsom | Joy | 15:42 |
johnsom | Actually that global update above seems to address this issue | 15:43 |
*** nmagnezi has quit IRC | 15:48 | |
xgerman | oh | 15:48 |
xgerman | confusing times — gave it a +A | 15:49 |
*** amotoki has quit IRC | 15:50 | |
*** mlavalle has joined #openstack-lbaas | 15:54 | |
*** apuimedo|away has quit IRC | 16:05 | |
*** jschwarz has quit IRC | 16:06 | |
openstackgerrit | Michael Johnson proposed openstack/neutron-lbaas: Set up LBaas V2 tempest test gate against Octavia. https://review.openstack.org/209675 | 16:06 |
*** apuimedo|away has joined #openstack-lbaas | 16:09 | |
*** bharathm has joined #openstack-lbaas | 16:12 | |
*** kiran-r has joined #openstack-lbaas | 16:13 | |
*** bharathm has quit IRC | 16:16 | |
*** clev is now known as clev-away | 16:34 | |
*** bana_k has joined #openstack-lbaas | 16:44 | |
*** kiran-r has quit IRC | 17:10 | |
*** madhu_ak has joined #openstack-lbaas | 17:20 | |
*** TrevorV has joined #openstack-lbaas | 17:28 | |
TrevorV | johnsom, xgerman hey guys you around? | 17:34 |
johnsom | Hi | 17:34 |
xgerman | Hi | 17:34 |
TrevorV | I didn't actually test the failover with REST stuffs. Have either of you? | 17:35 |
johnsom | Not yet. I was going to give it a go today. I'm addressing comments on the UDP patch at the moment. After that I will give it a whirl | 17:36 |
*** bharathm has joined #openstack-lbaas | 17:41 | |
xgerman | I had a horrible time getting my devstack to work… but will try today | 17:41 |
*** clev-away is now known as clev | 17:41 | |
TrevorV | I'm in no real hurry, was only curious. I'm working on some diagrams for internal stuff, but since it's not "merged" it popped into my head that I hadn't looked at that side. | 17:43
*** ig0r__ has joined #openstack-lbaas | 17:43 | |
blogan | looks like gate is clear | 17:45 |
blogan | we can +A things again i believe | 17:45 |
TrevorV | No... no I can't :( | 17:46 |
*** ig0r_ has quit IRC | 17:46 | |
johnsom | It's still very slow though. I have a job in zuul that has been waiting 1:40 | 17:47 |
openstackgerrit | Merged openstack/octavia: Do not remove egress sec group rules on plug vip https://review.openstack.org/218754 | 17:54 |
blogan | that merged fast! | 17:54 |
openstackgerrit | Merged openstack/octavia: Updated from global requirements https://review.openstack.org/218911 | 18:00 |
*** abdelwas has joined #openstack-lbaas | 18:05 | |
blogan | btw the docs merged for the tls and sni attributes | 18:10 |
blogan | http://developer.openstack.org/api-ref-networking-v2-ext.html#createListener | 18:11 |
johnsom | Excellent | 18:11 |
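For context on the docs blogan links above: TLS termination and SNI are configured by passing barbican container references when creating the listener. A hedged sketch from memory; the attribute names (`default_tls_container_ref`, `sni_container_refs`) and values here should be checked against the linked reference, which is authoritative:

```json
POST /v2.0/lbaas/listeners
{
    "listener": {
        "loadbalancer_id": "<loadbalancer uuid>",
        "protocol": "TERMINATED_HTTPS",
        "protocol_port": 443,
        "default_tls_container_ref": "<barbican container URL>",
        "sni_container_refs": ["<barbican container URL>"]
    }
}
```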
*** sbalukoff has joined #openstack-lbaas | 18:14 | |
johnsom | TrevorV Is it ok if I do the rebase on the failover flow? | 18:31 |
sbalukoff | Aah, it's been a while since I've argued technical design with other people. It's good to get back into it, eh. | 18:34 |
sbalukoff | FWIW, I'd love to see others' opinions on this, too: https://review.openstack.org/#/c/218560/ | 18:34 |
openstackgerrit | Trevor Vardeman proposed openstack/octavia: Adding amphora failover flows https://review.openstack.org/202336 | 18:36 |
johnsom | TrevorV - I guess you were ahead of me on that rebase.... | 18:36 |
TrevorV | Just finished it johnsom | 18:36 |
TrevorV | :D | 18:36 |
TrevorV | Was on laptop, chat on desktop, didn't see it until just now :D | 18:36 |
johnsom | Thanks | 18:36 |
openstackgerrit | Michael Johnson proposed openstack/octavia: health manager service https://review.openstack.org/160061 | 18:37 |
openstackgerrit | Michael Johnson proposed openstack/octavia: Implement UDP heartbeat sender and receiver https://review.openstack.org/201882 | 18:38 |
openstackgerrit | Sherif Abdelwahab proposed openstack/octavia: Amphora Flows and Service Drivers for Active Standby https://review.openstack.org/206252 | 18:45 |
*** ig0r__ has quit IRC | 18:48 | |
openstackgerrit | Michael Johnson proposed openstack/octavia: Implement UDP heartbeat sender and receiver https://review.openstack.org/201882 | 19:07 |
rm_work | I'm ... "off" today, but looking at a couple reviews really quick to see if we can get stuff moving ahead | 19:17 |
rm_work | need failover flows to merge before we can merge HMService | 19:17 |
rm_work | are we going to try to merge those today or can I wait to do a full review until tomorrow? | 19:18 |
TrevorV | rm_work, probably wait, since failover hasn't actually been tested for the REST impl | 19:22 |
TrevorV | just SSH | 19:22 |
rm_work | kk | 19:28 |
*** TrevorV has quit IRC | 19:52 | |
openstackgerrit | Bertrand Lallau proposed openstack/octavia: Fix doctrings typo in delete_member https://review.openstack.org/219008 | 19:56 |
xgerman | TrevorV found an issue with failover | 20:12 |
xgerman | will do some more tests | 20:13 |
xgerman | ok, second test failed as well | 20:18 |
*** diogogmt has joined #openstack-lbaas | 20:36 | |
blogan | fyi don't +A anything again | 21:06
rm_work | lol | 21:09 |
rm_work | gotta love the gate right before code freeze :P | 21:09 |
rm_work | everyone wants to submit stuff and something external always borks everything T_T | 21:09 |
blogan | indeed | 21:10 |
blogan | sbalukoff: when you get a chance to talk about the independent pool stuff, I'd like to talk about it here | 21:12
sbalukoff | blogan: Okeedokee! | 21:47 |
sbalukoff | blogan: I'm actually free right now. Do you have time to chat now? | 21:48 |
blogan | sbalukoff: in 30 mins? | 21:52 |
sbalukoff | Works for me! | 21:52 |
rm_work | dougwig: can you review https://review.openstack.org/#/c/217330/ ? I think it is good, and the project-side change that relies on it has a Depends-On pointing at it | 22:07
rm_work | dougwig: https://review.openstack.org/#/c/167885/ <-- passes with experimental queue job (the one that I made to test the change above) | 22:07 |
*** mlavalle has quit IRC | 22:12 | |
dougwig | rm_work: commented | 22:13 |
rm_work | dougwig: well, it's *replacing* the existing gate | 22:14 |
rm_work | nothing can pass both the existing gate and this one | 22:14 |
dougwig | rm_work: sure, but you could keep it separate and make it non-voting in check for a few days, and *then* cut it over. | 22:14 |
dougwig | that's what i'd do. but if you're confident enough to just switch, hey, go nuts. :) | 22:14 |
rm_work | err, check what? that it fails on everything but a single CR? | 22:14 |
johnsom | dougwig While you are here... Min and I are running into some interesting issues booting the amp vm in the gate. | 22:14 |
rm_work | dougwig: want to make sure i'm not missing something here | 22:15 |
dougwig | johnsom: shoot | 22:15 |
rm_work | it would just fail on everything but the single CR that changes the gate code on the project side (https://review.openstack.org/#/c/167885/) | 22:15 |
johnsom | The nova scheduler is rejecting the amp build claiming there is only 1024 of storage available. (we had a similar issue with RAM, but I created a smaller flavor) | 22:15 |
rm_work | and that review couldn't merge until the gate is replaced | 22:15 |
johnsom | Any thoughts about how to get devstack to use more of the 30GB of free disk for nova hosts? | 22:16 |
blogan | sbalukoff: ya there? | 22:17 |
sbalukoff | I am indeed! | 22:17 |
dougwig | rm_work: ok, so you have two jobs. the passing one, and a failing one that doesn't vote. then you merge the barbican change, and the non-voting one starts to pass. then you get a few days data that it's stable, so you make it voting (and kill the old one/rename it). | 22:17 |
dougwig | johnsom: it's an evil thought, but you could set the disk overcommit in nova.conf via local.conf. | 22:17 |
rm_work | dougwig: if we left it in that state for a few days, nothing else could merge, because it would all be failing the old gate check? | 22:17 |
johnsom | dougwig Hahaha, I like evil... | 22:18 |
rm_work | dougwig: the important part being, this is not backwards compatible with the old gate job (which will fail everything) | 22:18 |
johnsom | Interesting thought. | 22:18 |
dougwig | rm_work: is the old devstack service disappearing or something? aren't those separate code paths? i must be missing why you need/want a big bang in the middle of the process. | 22:19 |
rm_work | dougwig: the old process is phased out with the new one | 22:19 |
dougwig | rm_work: but you're in control of that, right? you don't have to do that at the same time. devstack won't melt. :) | 22:20 |
blogan | sbalukoff: so you decided to leave loadbalancer_id off of the pool? | 22:20 |
rm_work | I guess it might be possible to do it in multiple steps <_< | 22:20 |
sbalukoff | blogan: No, actually, I decided to keep it, and leave listener_id off of it. | 22:20 |
rm_work | though we are a bit short on time because of code freeze being in two days | 22:20 |
blogan | sbalukoff: but loadbalancer_id is required now? | 22:20 |
rm_work | so this is a little bit accelerated | 22:20 |
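dougwig's suggested transition maps to a small project-config change: add the new job as non-voting in the check pipeline alongside the old voting job, then swap them once it has proven stable for a few days. A rough sketch in the zuul v2 layout.yaml style, with made-up job names for illustration:

```yaml
jobs:
  # New job runs on every change but cannot block merges yet.
  - name: gate-neutron-lbaasv2-dsvm-new
    voting: false

projects:
  - name: openstack/neutron-lbaas
    check:
      - gate-neutron-lbaasv2-dsvm        # existing job, voting
      - gate-neutron-lbaasv2-dsvm-new    # candidate replacement, non-voting
```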
sbalukoff | blogan: In a roundabout way, yes. When using the API or the CLI you can specify just the listener_id... but all the code does is look up which loadbalancer the listener is using and associate the pool with that (on creation only-- can't switch with an update, just like listeners) | 22:21
sbalukoff | I've also updated the CLI and API such that when creating a listener, you can just specify the pool it should be associated with, and it'll automatically get assigned to the loadbalancer. | 22:22 |
sbalukoff | (I figured, if we allow that pattern for pool creation, why not allow it for listener creation?) | 22:22 |
blogan | but what your code is doing is exposing a loadbalancer_id attribute off the pool, which seems odd to me | 22:23 |
johnsom | dougwig Just as we were typing my latest job finished. This worked: VOLUME_BACKING_FILE_SIZE=24G export VOLUME_BACKING_FILE_SIZE | 22:23 |
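Both fixes from this exchange can be expressed in devstack's local.conf: johnsom's larger volume backing file, plus dougwig's "evil" alternative of letting nova overcommit disk. A sketch, assuming a standard local.conf; the 2.0 ratio is an arbitrary example value:

```ini
[[local|localrc]]
# Grow the volume backing file so the nova scheduler sees more usable disk.
VOLUME_BACKING_FILE_SIZE=24G

[[post-config|$NOVA_CONF]]
[DEFAULT]
# The "evil" alternative: let nova claim more disk than is physically there.
disk_allocation_ratio = 2.0
```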
xgerman | blogan +1 | 22:23 |
sbalukoff | blogan: Why? | 22:23 |
xgerman | can’t we have a many-to-many on listener-id? | 22:23
sbalukoff | Pools exist in the context of a loadbalancer. | 22:23 |
xgerman | do they? | 22:23 |
blogan | sbalukoff: bc pools are directly (or indirectly via L7) tied to listeners, which are then tied to load balancers | 22:23 |
sbalukoff | xgerman: You mean pool <-> listener relationship? Yes. | 22:23 |
xgerman | yep, that’s all I think we need | 22:24 |
blogan | sbalukoff: tying to an LB should be just as simple as tying to a Listener right now, it's just an indirect coupling; I'm sure I'm missing something though | 22:24
xgerman | well, we had people lobby for sharing pools between LBs | 22:24 |
sbalukoff | blogan: It is that simple, from the user's perspective. | 22:24 |
xgerman | but I am pretty sure we decided against that | 22:25 |
sbalukoff | xgerman: We did. I used to be a supporter of it. I would still be if we limit the sharing to providers.... | 22:25 |
sbalukoff | ie... you can share pools and listeners between loadbalancers so long as those loadbalancers are all using the same driver. | 22:25
sbalukoff | That's not what I've written right now. | 22:25 |
sbalukoff | But I would be in favor of that. | 22:25 |
blogan | sharing pools across LBs can be something we solve for if needed after doing it within an LB | 22:25 |
sbalukoff | I am not in favor of sharing listeners / pools across drivers / vendors. | 22:25 |
xgerman | agreed. But hanging the pool on the LB would make that unnatural | 22:26
sbalukoff | The one use case where it makes sense is having the same pool for an IPv4 and IPv6 loadbalancer. | 22:26 |
sbalukoff | xgerman: It does. It's the same thing for listeners, actually. | 22:26
sbalukoff | So... | 22:26 |
sbalukoff | When we introduce support for IPv6 load balancers, then it might be time to revisit whether we want to decouple listeners and pools from loadbalancers. | 22:27 |
sbalukoff | I don't really see much of a case for doing that before then. | 22:27 |
dougwig | johnsom: sweet. | 22:27 |
blogan | me either | 22:27 |
sbalukoff | IMO, anyway. | 22:27 |
xgerman | I thought we support ipv6 | 22:27
blogan | digression completed! | 22:27 |
sbalukoff | That change will involve a massive API change... | 22:27 |
blogan | xgerman: i think he means ipv4 and ipv6 | 22:27 |
sbalukoff | xgerman: I thought support was removed with a recent commit? | 22:27 |
xgerman | we have a ton of test cases for ipv4 and ipv6 | 22:28 |
blogan | sbalukoff: it should just be a matter of choosing an ipv6 subnet | 22:28 |
sbalukoff | blogan: Ok, I've not done that on my side yet. But if y'all say it works, then I'll believe you. | 22:28 |
sbalukoff | Do you think it's worthwhile decoupling both listeners and pools from loadbalancers at this time? | 22:29
blogan | sbalukoff: well I honestly haven't tested it out yet, so I can't say that it works | 22:29
blogan | other than what people have said | 22:29
blogan | no I don't | 22:29
blogan | it's not | 22:29
sbalukoff | Ok. | 22:29 |
blogan | I want to solve having pools be shared within the context of an LB | 22:29
blogan | but also solve it in a way that feels natural | 22:29 |
xgerman | agreed | 22:29 |
blogan | and loadbalancer_id off pool doesn't seem natural | 22:29
sbalukoff | Ok, my code essentially does that. I just need to finish getting the last few broken tests fixed. | 22:29 |
sbalukoff | I'm still not sure I understand what you mean when you say it doesn't seem natural. | 22:30 |
blogan | sbalukoff: with your code can I create a pool without it being tied to a listener or load balancer? | 22:30
sbalukoff | blogan: It will be tied to a loadbalancer, ultimately. | 22:30 |
blogan | sbalukoff: natural in the model relationship hierarchy | 22:30 |
sbalukoff | It's not tied to a listener. | 22:30 |
sbalukoff | Not really. | 22:30 |
blogan | sbalukoff: lb -> listener -> pool or lb -> listener -> l7 stuff -> pool | 22:30 |
blogan | sbalukoff: but it's tied to an LB, no? | 22:31
sbalukoff | blogan: Yes, it is. | 22:31 |
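The two shapes being debated can be sketched as models. A minimal, hypothetical SQLAlchemy sketch, not the real neutron-lbaas models: in blogan's hierarchy a pool is reachable only through a listener, while the patch under review gives the pool its own loadbalancer_id and has listeners reference it via default_pool_id:

```python
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class LoadBalancer(Base):
    __tablename__ = 'loadbalancers'
    id = sa.Column(sa.String(36), primary_key=True)

class Pool(Base):
    __tablename__ = 'pools'
    id = sa.Column(sa.String(36), primary_key=True)
    # The patch under review: pools hang directly off a loadbalancer...
    loadbalancer_id = sa.Column(sa.String(36), sa.ForeignKey('loadbalancers.id'))

class Listener(Base):
    __tablename__ = 'listeners'
    id = sa.Column(sa.String(36), primary_key=True)
    loadbalancer_id = sa.Column(sa.String(36), sa.ForeignKey('loadbalancers.id'))
    # ...and listeners point at a (possibly shared) pool instead of owning it.
    default_pool_id = sa.Column(sa.String(36), sa.ForeignKey('pools.id'))
```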
sbalukoff | Ultimately, I know that for haproxy a 'pool' without a listener doesn't make any sense. A pool just ends up becoming a backend config in a haproxy config file.... | 22:31 |
sbalukoff | But other vendors do need to create "pool" entities on their devices. | 22:32 |
blogan | sbalukoff: so what if a user can either 1) create a pool and have it tied to a listener, 2) create a pool and have it tied to an L7 policy/rule (can't remember which one has the pool) or 3) create a pool not tied to either | 22:32
sbalukoff | And these exist on their devices independently from the listener... until they're bound to the listener in some way. | 22:32 |
blogan | sbalukoff: yeah if a pool doesn't have a parent, then it wouldn't go to the driver at all, neutron-lbaas would store it in the db | 22:32 |
blogan | well scratch that, it can go to the driver | 22:33 |
blogan | and it should go to the driver | 22:33 |
blogan | but in an haproxy case, nothing would happen | 22:33 |
sbalukoff | Right. | 22:33 |
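A hedged sketch of that driver behavior, with illustrative names rather than the real driver interface: the driver still receives every pool, and an haproxy-style driver simply has nothing to render until the pool is attached to something:

```python
# Illustrative only; not the actual neutron-lbaas/Octavia driver API.
class HaproxyDriverSketch(object):
    def create_pool(self, pool):
        if pool.listener is None and not pool.l7policies:
            # Parentless pool: it lives in the DB, but haproxy has no
            # 'backend' section to emit until something routes to it.
            return
        self._rebuild_config(pool.loadbalancer)

    def _rebuild_config(self, loadbalancer):
        # Re-render and push the haproxy config for this loadbalancer.
        pass
```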
sbalukoff | So creating a pool tied to an l7 policy is like tying it to the listener, since l7 policies cannot exist independent of listeners. | 22:33 |
sbalukoff | Or, shouldn't be able to exist. | 22:34 |
blogan | yeah | 22:34 |
blogan | i agree on that | 22:34 |
sbalukoff | I'll need to check Evgeny's code there again. | 22:34 |
sbalukoff | So... if you think I ought to decouple the pool from the loadbalancer... that means the pool could potentially be shared between load balancers. | 22:35 |
blogan | so it might be easiest to just allow the pool to be created either without a parent or attached to a listener immediately, and leave the L7 immediate attachment to another patch | 22:35
sbalukoff | The problem with treating it like just an entry in the database is that it's not just that for some vendors... and pools have statuses. | 22:35 |
blogan | sbalukoff: no, bc once it's associated with a listener, we can set up a check to see whether it's being associated with an entity on another lb | 22:36
sbalukoff | blogan: Again, if you're asking me to decouple the pool from the loadbalancer, then we might as well do this for the listener too, at the same time. It'll be about as much work (which is to say, a metric crap-ton.) | 22:36 |
blogan | sbalukoff: which is why I corrected myself and said we would send it to the driver, and the driver decides whether it's useful or not | 22:36
sbalukoff | After all, a listener (ie. a tcp port) independent of a loadbalancer makes about as much sense as a pool independent of a loadbalancer. | 22:37
blogan | sbalukoff: I think we have a misunderstanding; I'm not saying decouple from the LB, just allow it to be parentless until it has been given a parent, and then it's tightly coupled to the LB indirectly | 22:37
sbalukoff | If you want to go down that road, then we need to come up with a different model for metrics and status. Because the current one isn't going to cut it when we're spanning providers. | 22:38 |
sbalukoff | blogan: I think it's a better idea to have it associated with a listener or loadbalancer on creation. After all, people don't usually create pools without intending them to be used somewhere. | 22:38 |
sbalukoff | So, I don't think it breaks any workflows to require a listener or loadbalancer to be specified when you create the pool. | 22:39 |
blogan | sbalukoff: okay that'd be fine, but we'd have to add an L7_policy/rule attribute to the pool then | 22:39 |
sbalukoff | (Certainly not with current users of Neutron LBaaS v2-- since right now a pool is tightly coupled to a listener.) | 22:39 |
sbalukoff | blogan: No, we don't. | 22:39 |
sbalukoff | That attribute is an attribute of the L7 policy. | 22:40 |
blogan | okay, so if there wasn't a loadbalancer_id on the pool, just listener_id, then how would a user put a pool on the L7 policy? | 22:41
sbalukoff | pools should not care whether they're being accessed as a 'default_pool' (attribute of the listener) or as the result of an L7 policy (attribute of the policy) | 22:41 |
blogan | okay, okay, so duh, yeah, I see the code; you're allowing default_pool_id to be updateable now | 22:41
sbalukoff | Ok, so I think you missed it earlier: When you create a pool you must specify either loadbalancer_id or listener_id. What really happens in the code is that if you specify the listener_id, we look up which loadbalancer_id it is using and use that for the pool. | 22:42 |
blogan | i got that | 22:42 |
sbalukoff | That workflow is allowed so we don't break people's existing workflows, eh. | 22:42 |
blogan | yeah | 22:42 |
sbalukoff | So a created pool ALWAYS has a loadbalancer_id. | 22:42 |
sbalukoff | And it never has a listener_id. | 22:42 |
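A minimal sketch of the flow sbalukoff just summarized, with hypothetical helper names rather than the actual plugin code: either ID is accepted on input, but only loadbalancer_id is ever stored, and passing listener_id keeps the old workflow by binding the new pool as that listener's default_pool:

```python
def create_pool(context, pool_data, db):
    """Sketch of the proposed create-pool flow; 'db' is a hypothetical DB API."""
    lb_id = pool_data.get('loadbalancer_id')
    listener_id = pool_data.pop('listener_id', None)
    if lb_id is None:
        if listener_id is None:
            raise ValueError('loadbalancer_id or listener_id is required')
        # Old-style workflow: resolve the listener's parent loadbalancer.
        lb_id = db.get_listener(context, listener_id).loadbalancer_id
        pool_data['loadbalancer_id'] = lb_id
    pool = db.create_pool(context, pool_data)
    if listener_id is not None:
        # Preserve existing workflows: the new pool becomes the
        # listener's default_pool on creation.
        db.update_listener(context, listener_id, default_pool_id=pool.id)
    return pool
```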
blogan | can you do this over hangouts? | 22:42 |
sbalukoff | (It never did-- default_pool_id was always an attribute of the listener.) | 22:42 |
sbalukoff | blogan: Potentially. | 22:43 |
blogan | it was, but not updateable; I made that decision early so we could make it updateable later and it'd still be backwards compatible | 22:43
sbalukoff | Right. | 22:43 |
blogan | sbalukoff: so hmmm | 22:43 |
sbalukoff | Would you rather talk about this on a hangout? | 22:44 |
blogan | sbalukoff: if it's not too much trouble | 22:44
blogan | you won't see my pretty face | 22:44
sbalukoff | Okeedokee! | 22:44 |
blogan | no webcam support | 22:44 |
sbalukoff | Well, then I'm in! | 22:44 |
sbalukoff | ;) | 22:44 |
blogan | but voice is all i need | 22:44 |
blogan | lol | 22:44 |
blogan | I just need to hear your soothing voice | 22:44
sbalukoff | Ok, anyone else want to be part of this hangout? | 22:44 |
blogan | I mean loud, piercing voice | 22:44
sbalukoff | HAHA! | 22:45 |
xgerman | I am always game for hangouts | 22:45 |
sbalukoff | So shrew-like. | 22:45 |
johnsom | Sure | 22:45 |
blogan | can we use the one we do for testing? | 22:45 |
sbalukoff | Ok, let me remember how to do this.... (Gotta use hangouts while I can, eh... IBM will eventually move us off gmail...) | 22:45 |
sbalukoff | You have a hangout you use for testing? | 22:45 |
johnsom | Yes | 22:45 |
sbalukoff | Ok, how do I access it? | 22:45 |
xgerman | http://bit.ly/LBaaS_HP_Hangout | 22:45 |
xgerman | follow the link | 22:46 |
*** apuimedo|away has quit IRC | 23:07 |