Monday, 2016-01-04

<openstackgerrit> Bo Chi proposed openstack/octavia: Assign load_balancer in _port_to_vip()
<openstackgerrit> tianqing proposed openstack/neutron-lbaas: Remove invalid fileds of healthmonitor when its type is TCP/PING
07:19 <ljxiash> using devstack, after stacking there are two q-lbaas services running. can anyone explain why there are two?
07:27 <ljxiash> Hi, can someone kindly help me with: [-] Stats socket not found for pool a5ed91d2-fe1e-4555-90b8-deb7cee61bd5
07:27 <ljxiash> It is a WARNING
<openstackgerrit> tianqing proposed openstack/neutron-lbaas: Remove invalid fileds of healthmonitor when its type is TCP/PING
15:49 <liamji> About, is the bytes-in in the response an aggregate value? thx
16:04 <xgerman> aggregate liamji
16:34 <amitry> Good morning, happy New Year. Are any of the public OpenStack clouds running OpenStack LBaaS? We are working on an app that leverages the OpenStack LBaaS API and have verified against DevStack, but would like to test against a public cloud as well.
16:35 <xgerman> I know Rackspace has plans to run LBaaS… Helion OpenStack runs LBaaS V1 and V2
16:35 <xgerman> but Helion is not a public cloud
16:37 <amitry> xgerman: thanks, I'll double check on Rackspace. I think you are right that they have plans, but what they offer now is not OpenStack LBaaS
16:37 <blogan> amitry: i'm from rackspace and yes, we do have plans, but our current offering does not use openstack lbaas (or any openstack product)
16:38 <amitry> blogan: thanks, is there a beta we could test against?
16:38 <liamji> xgerman: thanks
16:39 <blogan> amitry: nope, not right now
16:39 <amitry> blogan: ok, thanks
16:39 <blogan> amitry: np
19:53 <nmagnezi> xgerman, ping. question about lb-network
20:42 <nmagnezi> xgerman, hi
20:42 <nmagnezi> xgerman, still around?
20:42 <nmagnezi> xgerman, great :) so re: lb-network
20:43 <nmagnezi> xgerman, will all amphora vms be members of that network? even if they were created from within different tenants?
20:43 <xgerman> yep, they all will be in that network so we can control them
20:44 <nmagnezi> xgerman, so what if I have a large number of amphorae? won't it be an issue?
20:45 <xgerman> maybe… we thought about having several networks to shard that… but it never got implemented
20:46 <nmagnezi> xgerman, did you test with a high number of amphorae? say I have 250 loadbalancers.. will heartbeat work well? will failover (which is new) be affected?
20:46 <xgerman> no, we did not
20:47 <xgerman> maybe blogan has more tests
20:47 <nmagnezi> xgerman, also, if you don't mind, a small question about active/standby. how is the scheduling done? does it make sure that the active and standby vms are not on the same compute node?
20:47 <nmagnezi> blogan, ^^ when you see it, please let me know :)
20:48 <xgerman> it's not doing any scheduling right now
20:48 <nmagnezi> xgerman, so who decides where the vm is spawned?
20:48 <xgerman> but we wanted to tap into nova's anti-affinity.. also M is still under development ;-)
20:48 <nmagnezi> xgerman, care to share the patch url? :-)
20:50 <xgerman> well, I think nova has that feature
20:50 <xgerman> so we need to code that for active-passive
20:54 <nmagnezi> xgerman, IMHO, this is very important. two amphora vms spawned on the same compute node.. is.. well.. it's just bad practice :)
20:54 <xgerman> yeah, I know...
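[editor's note] For context on the anti-affinity discussion above, a minimal sketch of the guarantee nova's anti-affinity server groups provide: any candidate host already running a member of the group is filtered out, so the active and standby amphorae land on different compute nodes. All names below are illustrative, not nova or Octavia code.

```python
# Illustrative anti-affinity host filter (hypothetical names, not nova code):
# reject any candidate host that already hosts a member of the server group.
def filter_hosts(candidates, group_members, placements):
    """Return hosts not already used by the group.

    placements maps instance name -> host name.
    """
    used = {m: placements[m] for m in group_members if m in placements}
    taken = set(used.values())
    return [host for host in candidates if host not in taken]

hosts = ["compute-1", "compute-2", "compute-3"]
placements = {"amphora-active": "compute-1"}
# The standby amphora may only be scheduled where the active one is not:
print(filter_hosts(hosts, ["amphora-active"], placements))
# → ['compute-2', 'compute-3']
```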
20:55 <nmagnezi> xgerman, ack. also today I've tested the image-builder script. worked fine with ubuntu. failed for both centos and fedora :(
20:55 <xgerman> yeah, we only test with ubuntu
20:56 <johnsom> Yeah, sadly the Redhat side has not had much testing.
20:56 <nmagnezi> xgerman, I will try to help with that
20:56 <nmagnezi> johnsom, ^
20:56 <johnsom> Please feel free to put up some patches
20:56 <nmagnezi> will do
20:56 <johnsom> Cool, ping me if you have questions about the image-builder scripts
20:57 <nmagnezi> johnsom, sure will!
20:58 <nmagnezi> johnsom, could you tell me, in short, aside from haproxy and keepalived, what else is running on the amphora? (not sure what responds to the health monitor calls)
20:59 <xgerman> we run our amphora-agent
20:59 <johnsom> Other than those, and the standard cloud-init stuff, we install octavia in the image and run the amphora-agent (python code)
<nmagnezi> xgerman, btw, the docs about housekeeping need some elaboration (what are amphora spare pools?):
21:00 <xgerman> oh, the idea is that spinning up a vm takes time, so you can spin them up beforehand
21:00 <xgerman> and the spare pool is how many you keep around
21:01 <xgerman> that should make failover (if you don't use active-standby) and provisioning faster
21:01 <nmagnezi> xgerman, is that configurable? (amount of amphorae etc)
<nmagnezi> johnsom, this code? (somewhere here):
21:02 <nmagnezi> xgerman, octavia.conf i presume
21:03 <johnsom> Yes, that is the agent code
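[editor's note] The spare pool xgerman describes is driven by the house-keeping service and is indeed configurable in octavia.conf. A sketch of the relevant section — option names as commonly found under [house_keeping], so verify against your release:

```ini
[house_keeping]
# Number of idle, pre-booted amphorae to keep waiting in the spare pool.
spare_amphora_pool_size = 2
# Seconds between house-keeping checks that refill the pool.
spare_check_interval = 30
```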
21:03 <blogan> nmagnezi: i have no additional tests for this yet, and we're not going to be using active-passive internally either, but for our lb-network we will be using a provider network at first, and then later, to improve scalability, doing something else
21:04 <blogan> nmagnezi: but we'll be doing an active-active thing, and yes, we will need the anti-affinity as well
21:04 <johnsom> Just to track it
21:05 <blogan> johnsom: excellent
21:05 <nmagnezi> blogan, hi :) i'm not 100% sure I follow that provider network thing. also, in active-passive, what replaces lb-network?
21:05 <nmagnezi> johnsom, thanks!!
21:06 <blogan> nmagnezi: oh well, i guess i mentioned the provider network because if we used an isolated network we (internally) have a limit on the number of ports that can be on that network, so we're using a provider network
21:06 <blogan> nmagnezi: i dont follow your second question
21:07 <nmagnezi> blogan, this is exactly my concern. if we have more than 256 amphorae, a class C network won't be enough
21:08 <nmagnezi> blogan, as for my second question: "we're not going to be using active-passive internally" — did you mean you are not going to use the lb-network the same way you are using it today for active-passive?
21:08 <blogan> nmagnezi: yep, we have the same limit
21:08 <nmagnezi> blogan, so, is there a plan to solve it?
21:08 <blogan> nmagnezi: yeah, we won't be using it for the heartbeating bc we won't be doing active-passive, but we will be using it for the communication and configuration of the amphorae
21:09 <nmagnezi> blogan, so how will heartbeat communication work?
21:09 <blogan> nmagnezi: we do have a plan internally to solve it that we were going to discuss with the community once we have it fleshed out more, but we're not there yet
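[editor's note] A quick sanity check of the /24 concern raised above. The usable address count on a class C lb-network is below 256 once the network, broadcast, and the (assumed) gateway and DHCP ports are reserved — actual reservations vary by deployment:

```python
import ipaddress

# A /24 holds 256 addresses; network and broadcast are never usable,
# and a neutron subnet typically also reserves a gateway address and a
# DHCP port (an assumption, as noted above).
net = ipaddress.ip_network("192.0.2.0/24")
usable = net.num_addresses - 2   # drop network + broadcast
usable -= 2                      # drop assumed gateway + DHCP port
print(usable)  # → 252
```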
21:09 <nmagnezi> blogan, ack
21:10 <blogan> nmagnezi: sorry, i meant more the vrrp heartbeat; the heartbeat that octavia uses will be using the lb-network
21:10 <blogan> and we will be using that
21:10 <blogan> i'm sure i have been more confusing than helpful :)
21:10 <nmagnezi> btw I would love to join your weekly meetings, any chance to make them more Europe friendly? (timezone wise)
21:11 <nmagnezi> blogan, no you are not :) so how will vrrp heartbeat work between amphorae?
21:11 <blogan> 10-11 pm isn't europe friendly? :)
21:11 <nmagnezi> sure it is
21:11 <johnsom> Right now the heartbeat is on the tenant network
21:12 <nmagnezi> johnsom, is that intentional? why is it like that?
21:13 <johnsom> It was discussed and just ended up that way. It probably should be changed in the future. I argued for its own network.
21:13 <nmagnezi> btw guys, one final question for you today (and thanks a lot for your answers). how is a setup admin expected to update his amphorae? say tomorrow we want to use a newer version of keepalived
21:13 <nmagnezi> johnsom, got it
21:13 <johnsom> Update the image, update the conf to use the new image, failover the amphora
21:14 <nmagnezi> johnsom, no solution without killing amphorae
21:14 <nmagnezi> btw how do you manually trigger a failover? (that's a sub-question :) )
21:15 <johnsom> Not currently. I think the ubuntu image has security updates enabled, but that might not work in all environments. If the operator has some other infrastructure, they can always update the image to use it.
21:16 <johnsom> Right now, you can stop the amphora-agent or turn off the network port. In the future that should be part of the operator API.
21:25 <blogan> nmagnezi: we won't be using vrrp for our amphorae, we'll be doing a "special" active/active topology that won't require vrrp (though we still will do a fast failure detection)
21:25 <blogan> johnsom: do you know if taskflow has something akin to a ForEachFlow (my new name for this)? for example: i want a set of tasks to run for each item in an iterable
21:26 <johnsom> Yes, they do
21:26 <blogan> johnsom: link?
21:27 <blogan> johnsom: can't seem to find it in the docs
<johnsom> I think it is this:
21:28 <johnsom> But still looking for a better example
21:29 <johnsom> No, that isn't it. hang on
21:37 <johnsom> I may be getting this confused with the retry foreach. Let's ping the expert. harlowja, do you have an answer for blogan?
21:39 <johnsom> You can set them up as parallel sub-flows at flow definition time. That's how we build multiple amphorae in parallel for the active/standby.
21:40 <blogan> johnsom: yeah but flow definition time is not in time to know how many items are going to be in that list
21:40 <blogan> i mean its too early
21:41 <blogan> it has to be known during run time
21:56 <nmagnezi> blogan, so when active/active gets merged, does it deprecate active/standby? or can users (admins) choose and configure?
21:58 <blogan> nmagnezi: pretty sure it'll be configurable. seems like it'd be a good fit for flavors so users can have different types of load balancers, but that's yet to be determined
22:04 <harlowja> johnsom blogan yo
22:04 <nmagnezi> blogan, I agree. it's kind-of like a service level for an amphora
22:05 <nmagnezi> blogan, thanks for the answer
22:05 <johnsom> harlowja: blogan had a question about iterating tasks
22:06 <harlowja> blogan johnsom so taskflow currently doesn't have that; that's basically a dynamic flow. all the current stuff statically compiles down to a DAG that is then traversed over
22:06 <harlowja> what u are thinking seems more akin to a DAG that changes
22:06 <harlowja> *which isn't impossible, just doesn't exist
22:06 <harlowja> all current patterns get smashed together when is called
22:07 <harlowja> and a DAG is formed (with other nodes that are used for internals as needed) which is then used during running
22:07 <blogan> harlowja: ah okay, so i was thinking maybe a ForEachFlow could be created that takes the name of an iterable in the __init__ of that flow, and that name should be provided by some previous atom/task in that flow (or store)
22:08 <harlowja> right, then it expands the DAG when it's activated?
22:08 <blogan> harlowja: oh i see, since its all smashed together at compile, and this iterable size would only be known during run, it can't be done right now
22:08 <harlowja> right blogan, not impossible, just would require some work
22:08 <harlowja> just hasn't been done, but could be
22:09 <harlowja> if u dare to try, def could be made possible
22:09 <harlowja> if u accept this mission
22:09 <harlowja> this message will self-destruct
22:09 <blogan> harlowja: yeah, i think having something like that would really increase the reusability of tasks and flows, just from my limited experience
22:10 <blogan> bc right now i think i'm going to have to recreate all the tasks in a flow i want to use that does the looping
22:10 <blogan> DoOneA, DoOneB -> DoManyA, DoManyB
22:11 <johnsom> Well, you could just "run" n flows, passing in the unique data
22:11 <blogan> but doing it that way has a hacky feel because i want to reuse the code that's in DoOneA.execute, so I'm tempted to instantiate DoOneA and call execute
<harlowja> so a way that could be done, is basically u need to figure out how to add a 'delayed' node into the DAG at
22:11 <blogan> inside a loop
22:11 <harlowja> and then at runtime, figure out what to do when that delayed node is encountered
22:11 <harlowja> which would involve some work around (and subsequent code)
22:12 <harlowja> depends on how much work u want to do/try/explore ;)
22:12 <harlowja> *if u want to do it
22:13 <harlowja> it should be something that shouldn't be super-hard, but will require some exploration
22:13 <blogan> i'd actually like to, but then I may do a hacky way first and depending on how dirty i feel about it, attempt to do that
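[editor's note] The "hacky way" blogan describes above can be sketched as wrapping a per-item task's execute() inside a looping task, since a TaskFlow DAG's size must be known when the flow is compiled. Task names are taken from the conversation; this is a pure-Python illustration, not actual TaskFlow or Octavia code:

```python
# Pure-Python illustration of reusing a single-item task's logic from a
# "many" task that loops over a runtime-sized iterable -- the part a
# statically compiled TaskFlow DAG cannot express (hypothetical names).
class DoOneA:
    def execute(self, item):
        # stand-in for the real per-item work
        return item.upper()

class DoManyA:
    """Instantiates DoOneA and calls execute once per element, so the
    iterable's size only needs to be known at run time."""
    def execute(self, items):
        one = DoOneA()
        return [one.execute(item) for item in items]

print(DoManyA().execute(["a", "b", "c"]))  # → ['A', 'B', 'C']
```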
22:13 <harlowja> thats fine
22:13 <blogan> harlowja: well there's definitely a deep dive into taskflow's guts that is required
22:13 <harlowja> i offer free consultations/deep dives
22:13 <harlowja> for a limited time only
22:13 <harlowja> free of charge
22:13 <blogan> and then you require payment eh?
22:14 <harlowja> credit card number, that i will charge monthly after 30 days
22:14 <blogan> sounds like a drug dealer
22:14 <blogan> a sophisticated drug dealer!
22:14 <harlowja> gotta keep up with the times u know
22:14 <blogan> android pay/apple pay?
22:14 <harlowja> not that sophisticated
22:15 <blogan> alright thanks for the info
22:16 <blogan> if i/someone did get this into taskflow i'd feel better about taskflow
22:17 <harlowja> taskflow-self-improvement program ftw
22:17 <harlowja> TFSI for short
22:17 <blogan> take a SIP of TaskFlow
<harlowja> is how another similarish library/framework does it, perhaps we can do similar stuff
22:18 <harlowja> although its slightly different, but something to look at/think about
22:19 <harlowja> depends on how TFSIP would work out
22:19 <harlowja> *in this case
22:21 <harlowja> ^ something like that would probably not work exactly, but we can work through something that would
22:21 <harlowja> *mainly due to out of process execution being possible in taskflow
22:21 <harlowja> and yielding from something out of process back to its 'engine' process is ummm, gonna be odd
22:22 <harlowja> odd/not work, lol
22:22 <harlowja> *although we could mark such 'delayed' things as not being able to run out of process
23:16 <nmagnezi> blogan, hi, just noticed an snat namespace when using a regular (single amphora) setup, why is this needed?
23:25 <blogan> nmagnezi: where do you see the snat namespace?
23:26 <nmagnezi> blogan, i'm running an all-in-one devstack node
23:26 <blogan> nmagnezi: oh that might be for the L3 agent
23:27 <blogan> nmagnezi: bc that doesn't sound like something needed for octavia
23:27 <blogan> i could be wrong though
23:27 <nmagnezi> blogan, ack :)
23:28 <nmagnezi> blogan, just read about active/standby - keepalived, if I understand correctly, is running inside the amphorae - right?
23:28 <blogan> nmagnezi: correct
23:28 <nmagnezi> blogan, so both instances of keepalived communicate from within their amphorae, via.. lb-network?
23:32 <blogan> nmagnezi: if my memory serves me correctly, and it probably doesn't, it's doing it over the vip network, which i believe has its own problems, but we know about them
23:33 <nmagnezi> blogan, by vip network you mean the floating ip network (external)? or the tenant network?
23:33 <nmagnezi> and what problems? are there any bugs available to read?
23:34 <blogan> the network the user specifies they want their VIP to be allocated on, most likely a tenant network, yes
23:35 <blogan> but the problem with that is the vrrp communication needs a port to listen on, and that can conflict with a port a user wants their load balancer to listen on
23:35 <nmagnezi> blogan, yes, this is why in ha routers they create a dedicated network
23:39 <blogan> nmagnezi: yep, and that's probably the solution we'll go with to fix this
23:39 <nmagnezi> blogan, ack, i will try to follow up
23:39 <nmagnezi> blogan, leaving now, thanks a lot for the answers :)
23:40 <blogan> nmagnezi: np, anytime
23:45 <rm_work> anyone know what time evgeny usually gets on? :)

Generated by 2.14.0 by Marius Gedminas - find it at!