Monday, 2018-08-13

<openstackgerrit> Jacky Hu proposed openstack/neutron-lbaas-dashboard master: Replace noop tests with registration test
00:38 *** longkb has joined #openstack-lbaas
<openstackgerrit> Jacky Hu proposed openstack/neutron-lbaas-dashboard master: Replace noop tests with registration test
<openstackgerrit> Jacky Hu proposed openstack/neutron-lbaas-dashboard master: Removes testr and switches cover to karma-coverage
01:29 *** colby_home has quit IRC
02:19 *** hongbin has joined #openstack-lbaas
03:53 *** ramishra has joined #openstack-lbaas
03:54 *** ramishra has quit IRC
03:54 *** ramishra has joined #openstack-lbaas
04:54 *** rcernin has quit IRC
04:54 *** rcernin has joined #openstack-lbaas
05:18 *** openstackgerrit has quit IRC
05:20 *** hongbin has quit IRC
05:59 *** pcaruana has joined #openstack-lbaas
06:05 *** pcaruana has quit IRC
06:12 *** openstackgerrit has joined #openstack-lbaas
<openstackgerrit> OpenStack Proposal Bot proposed openstack/neutron-lbaas master: Imported Translations from Zanata
06:19 *** pcaruana has joined #openstack-lbaas
06:52 *** ispp has joined #openstack-lbaas
07:01 *** nmagnezi has quit IRC
07:03 *** rcernin has quit IRC
07:20 *** nmagnezi has joined #openstack-lbaas
07:26 *** nmagnezi has quit IRC
07:32 *** nmagnezi has joined #openstack-lbaas
07:43 *** rpittau has joined #openstack-lbaas
07:43 *** pcaruana has quit IRC
07:57 *** pcaruana has joined #openstack-lbaas
08:11 *** ispp has quit IRC
<openstackgerrit> Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Update links in README.rst
08:37 *** ktibi has joined #openstack-lbaas
09:00 *** salmankhan has joined #openstack-lbaas
<openstackgerrit> Nir Magnezi proposed openstack/octavia master: Leave VIP NIC plugging for keepalived
09:30 *** ktibi has quit IRC
09:31 *** ktibi has joined #openstack-lbaas
09:36 *** rpittau has quit IRC
09:36 *** rpittau has joined #openstack-lbaas
10:18 *** celebdor has joined #openstack-lbaas
10:37 *** numans has quit IRC
10:41 *** numans has joined #openstack-lbaas
10:46 *** dmellado has quit IRC
11:16 *** sapd1 has joined #openstack-lbaas
11:34 *** longkb has quit IRC
12:01 *** amuller has joined #openstack-lbaas
12:10 *** rpittau is now known as rpittau|afk
12:10 *** rpittau|afk is now known as rpittau
12:17 *** celebdor has quit IRC
12:50 *** dmellado has joined #openstack-lbaas
12:58 *** celebdor has joined #openstack-lbaas
13:04 *** ramishra has quit IRC
13:48 *** ramishra has joined #openstack-lbaas
13:51 *** savvas has joined #openstack-lbaas
14:00 *** fnaval has joined #openstack-lbaas
14:45 *** devfaz has joined #openstack-lbaas
14:46 <devfaz> Hi, are there any open bugs regarding heat-tempest-lbaasv2-tests against octavia?
15:23 *** savvas has quit IRC
15:37 *** pcaruana has quit IRC
16:00 *** devfaz has left #openstack-lbaas
16:00 *** devfaz has joined #openstack-lbaas
16:40 *** pcaruana has joined #openstack-lbaas
17:19 *** openstackgerrit has quit IRC
17:23 *** celebdor has quit IRC
17:31 *** jiteka has quit IRC
17:32 *** eandersson has quit IRC
17:34 *** jiteka has joined #openstack-lbaas
17:34 *** ktibi has quit IRC
17:38 *** ianychoi_ has joined #openstack-lbaas
17:41 *** ianychoi has quit IRC
17:59 *** pcaruana has quit IRC
18:15 *** salmankhan has quit IRC
19:06 <mnaser> are Guru Meditation reports not available in octavia-health-manager?
19:13 <johnsom> mnaser: I am still out on vacation until tomorrow. I thought we had it in all of them.
19:14 <mnaser> johnsom: no worries, enjoy it.  i might be hitting
19:14 <mnaser> i have like 500 threads per octavia-hm process
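For context on the Guru Meditation Report question above: oslo.reports-enabled services dump their state when they receive a signal (commonly SIGUSR2). The snippet below is a rough stdlib stand-in, not Octavia's actual wiring: it registers a SIGUSR2 handler that dumps every thread's stack, which is the part of a GMR most useful for debugging a thread explosion like this one.

```python
# Hedged stand-in (assumption: not Octavia's actual code) for what a
# Guru Meditation Report gives you: dump all thread stacks when the
# process receives SIGUSR2, the way an operator would trigger it with
# "kill -USR2 <pid>".
import faulthandler
import os
import signal
import tempfile

out = tempfile.NamedTemporaryFile(mode="w", delete=False)
faulthandler.register(signal.SIGUSR2, file=out)

# Ask our own process for a stack dump.
os.kill(os.getpid(), signal.SIGUSR2)

with open(out.name) as f:
    print(f.read().splitlines()[0])
```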
19:17 *** lxkong_ has joined #openstack-lbaas
19:25 *** lxkong has quit IRC
19:25 *** ptoohill- has quit IRC
19:25 *** lxkong_ is now known as lxkong
19:27 *** amotoki has quit IRC
19:30 *** amotoki has joined #openstack-lbaas
19:32 <colby_> When addressing octavia directly via the CLI I specified the subnet id, but the network id and IP address do not match the subnet (it fails with "port not found" when trying to build the LB)
19:33 <colby_> one thing I was wondering: does the management network need to belong to the service project (where the octavia user is) or the admin project? In the docs it said admin.
19:34 <colby_> sorry, I'm trying to spin up an LB with "openstack loadbalancer create"
19:39 *** salmankhan has joined #openstack-lbaas
19:44 *** salmankhan has quit IRC
19:56 *** amuller has quit IRC
20:32 *** salmankhan has joined #openstack-lbaas
21:12 <rm_work> mnaser: which version are you on?
21:12 <rm_work> i think you said you run master?
21:12 <mnaser> rm_work: queens
21:12 <mnaser> But up to date from the stable branch
21:13 <rm_work> yeah i thought we got backports in for all the applicable HM issues I had found...
21:13 <mnaser> This keeps happening.. do you think it could be related to the machine having a high core count?
21:13 <rm_work> you shouldn't have that many threads though :P
21:13 <rm_work> unless by threads you mean actual processes
21:13 <rm_work> because i switched it from threading to multiprocess
21:13 <mnaser> It has 40 cores.  I end up with a ton of health manager processes with like 500 or so threads inside each
21:14 <rm_work> (specifically because threads were broken)
21:14 <mnaser> I would do "ps -T -p <pid>" and it would read some 500 entries
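The thread count mnaser describes can be reproduced with the same `ps -T` invocation; a small sketch (using `$$`, the current shell, as a stand-in for the octavia-health-manager pid):

```shell
# Count the threads inside one process, as mnaser describes.
# $$ (this shell's own pid) stands in for the health manager's pid.
PID=$$
THREADS=$(ps -T -p "$PID" | tail -n +2 | wc -l)
echo "$THREADS"
```

`ps -T` prints one row per thread plus a header line, hence the `tail -n +2`.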
21:14 <rm_work> let me check something
21:14 <mnaser> strace with -fF shows it detach from a ton of things when I ctrl+c
21:15 <mnaser> Also, during all this, I get the warning that health check processing took too long
21:16 <rm_work> yeah wtf hold on
21:22 <rm_work> no, confirmed it should be using processes not threads on queens
21:23 <rm_work> there's no way you should have that many threads
<rm_work> can you spot-check the code to make sure it looks like this?
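The threading-to-multiprocess change rm_work refers to can be sketched roughly as below. This is a simplified illustration, not the actual Octavia health manager code: work fans out to worker processes, so each worker shows up in `ps` as its own pid with only a handful of threads inside, instead of one process accumulating hundreds of threads.

```python
# Simplified sketch (not the real Octavia health manager): handle
# heartbeat payloads in a pool of worker *processes* rather than
# threads. process_heartbeat is a hypothetical placeholder for the
# real health-message decoding / DB-update logic.
import multiprocessing

def process_heartbeat(payload: bytes) -> str:
    # Placeholder for real health-message processing.
    return payload.decode().upper()

if __name__ == "__main__":
    heartbeats = [b"amp-1 ok", b"amp-2 ok", b"amp-3 ok"]
    with multiprocessing.Pool(processes=2) as pool:
        # pool.map preserves input order across the worker processes
        results = pool.map(process_heartbeat, heartbeats)
    print(results)  # ['AMP-1 OK', 'AMP-2 OK', 'AMP-3 OK']
```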
21:29 *** abaindur has joined #openstack-lbaas
21:32 <colby_> will octavia create the ports automatically for the vip in neutron?
22:01 *** rcernin has joined #openstack-lbaas
22:04 *** abaindur has quit IRC
22:04 *** abaindur has joined #openstack-lbaas
22:12 <lxkong> colby_: i think so, you could specify a subnet id and octavia will create the port itself.
22:13 <colby_> hmm ok, it does not seem to be working correctly. I'm trying to narrow it down. I'm getting a Port Not Found error.
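lxkong's suggestion in CLI form (requires a live cloud to run; the ID below is a placeholder). Passing only the subnet and letting Octavia allocate the VIP port avoids the mismatched network/address combination colby_ described:

```
# Simplest form: Octavia allocates the VIP port in this subnet itself.
# Supplying --vip-network-id or --vip-address values that do not match
# the subnet is a common way to hit "port not found".
openstack loadbalancer create --name test-lb \
    --vip-subnet-id <subnet-uuid>
```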
22:35 *** rtjure has quit IRC
22:44 <rm_work> johnsom: so when you're back tomorrow -- having issues with the SOURCE_IP algorithm
22:45 <rm_work> it seems to always just direct all traffic to one member regardless of anything
22:45 <johnsom> rm_work: stuck in traffic and not driving, so can chat
22:45 <rm_work> 3 members, all seem up and good, but only one receiving ALL traffic (from many many different source IPs)
22:45 <rm_work> THAT stuck? T_T
22:46 <johnsom> Not pushing 5mph
22:46 <rm_work> well, i can paste you stuff later, but
22:46 <rm_work> i'm at a loss... did verification that everything is good in config and health is good for all members via the socket stats
22:46 <rm_work> everything looks like it SHOULD be balancing
22:46 <rm_work> but it just isn't <_<
22:47 <rm_work> I don't know how to get further insight into why it's making the routing decisions it is
22:47 <johnsom> So, wait, source_ip is session persistence, so yeah, should be stuck
22:47 <rm_work> source_ip *balancing alg*
22:47 <rm_work> on the pool
22:47 <rm_work> it's HTTPS with no termination
22:48 <rm_work> no session persistence on the pool
22:48 <rm_work> but yes, it should still "do session persistence" kinda
22:48 <rm_work> except, LITERALLY ALL TRAFFIC from hundreds of different source IPs, all going to member 1
22:48 <rm_work> nothing to any other member
22:48 <rm_work> that isn't "balancing" lol
22:50 <johnsom> Hmm, so source_IP is hash based.  How many members and what are their weights?
22:50 <rm_work> 3 members, all weight 1
22:50 <rm_work> removing the member that is receiving 100% of the traffic appears to cause all traffic to redirect to one of the remaining two <_<
22:51 <johnsom> So divide the ip address range into three buckets, that is how it will balance if my memory is good
22:51 <rm_work> so what it IS doing, is this:
22:51 <rm_work> traffic comes in -> goes to bucket 1
22:51 <johnsom> Right, you would have a 50-50 chance
22:52 <rm_work> so now imagine we have hundreds of unique IPs hitting it
22:52 <rm_work> and ALL of them are going to bucket 1
22:52 <johnsom> By hash though
22:52 <rm_work> so let's see what the probability is
22:52 <rm_work> that all 300 or so unique IPs that have hit the LB
22:52 <rm_work> all hash to the same member of those 3
22:52 <rm_work> that's 33%...
22:53 <johnsom> Could happen, but would be odd.
22:53 <rm_work> sooooo PROBABLY NOT lol
22:53 <johnsom> It is skewed though, as large blocks of possible IPs are reserved, etc
22:53 <rm_work> which brings me back to: something is up with source_ip
22:54 <rm_work> so even within a block
22:54 <rm_work> imagine 64 sequential IPs
22:54 <rm_work> i can't imagine any hashing algorithm that isn't absolute *trash* that would hash all 64 the same
22:55 <rm_work> "oh, that IP starts with 172.157, guess it hashes to 3"
22:59 <rm_work> we're so far beyond "maybe randomly it just worked out that way" that i can't even begin to describe how ridiculous that suggestion is, lol
22:59 <rm_work> maybe if it was like 4 source IPs *maybe*
22:59 <rm_work> but even that is statistically pretty unlikely
22:59 <rm_work> like, ~1%
23:00 <rm_work> this is hundreds of source IPs, and not a single byte of traffic to any other member
23:01 <johnsom> I am not saying it randomly worked out that way, I am saying the hash for source IP by default is pretty dumb.
23:02 <johnsom> Check the haproxy docs for the balance and hash-type keywords
23:02 <johnsom> It is intended, by default, to be a poor man's session persistence
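The two haproxy keywords johnsom points at sit together in the backend section. A hand-written illustration (not a rendered Octavia template; names and addresses are placeholders):

```
# Hedged illustration of the "balance" and "hash-type" keywords being
# discussed. The default is "hash-type map-based"; "consistent"
# changes how clients remap when servers come and go.
backend pool_example
    balance source
    hash-type consistent
    server member_1 192.0.2.11:443 weight 1
    server member_2 192.0.2.12:443 weight 1
    server member_3 192.0.2.13:443 weight 1
```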
23:03 <rm_work> when we have no way to TLS terminate
23:03 <johnsom> I would expect that most of the common IPs fit in one bucket with three servers
23:03 <rm_work> and someone has a requirement for session persistence :P
23:04 <rm_work> you would actually expect their hashing algorithm to be so bad that 100% of IPs hash the same?
23:04 <johnsom> You can use round robin with source ip SP
23:04 <johnsom> On TCP flows
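johnsom's alternative, expressed as a CLI call (requires a live cloud; the listener ID is a placeholder): keep the pool on ROUND_ROBIN and add SOURCE_IP *session persistence* instead of using the SOURCE_IP *algorithm*:

```
# Round-robin balancing with source-IP session persistence on the pool,
# rather than the SOURCE_IP balancing algorithm.
openstack loadbalancer pool create \
    --lb-algorithm ROUND_ROBIN \
    --session-persistence type=SOURCE_IP \
    --protocol HTTPS \
    --listener <listener-id>
```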
23:05 <rm_work> I will have them try that
23:05 <rm_work> but like
23:05 <rm_work> should we *disable* source_ip as an algorithm?
23:05 <rm_work> if it is this broken?
23:05 <rm_work> this is unusable
23:06 <johnsom> Well, I can't see your IP list, but a bucket is a third of the IPs, with one bucket probably reserved addresses
23:06 <rm_work> not just "bad"
23:06 <rm_work> one sec
23:06 <johnsom> It isn't broken, I bet it is doing exactly how it is configured
23:08 <rm_work> i got an IP list
23:08 <rm_work> I was wrong, it's not hundreds
23:08 <johnsom> Which is not what you want or expect
23:08 <rm_work> it's 2855
23:08 <rm_work> 2855 different source IPs
23:08 <rm_work> and every single connection, 100%, went to a single member
23:08 <johnsom> Yeah, that is a pretty small spread
23:08 <rm_work> I can give you the list if you want
23:08 <rm_work> are you fucking kidding me right now? lol
23:08 <rm_work> almost 3000 IPs
23:09 <johnsom> Well, I am on mobile so no use
23:09 <rm_work> and "ah yeah, they all hash the same, that makes sense"???
23:09 <rm_work> that is a completely broken hashing algorithm
23:09 <rm_work> that's like a random number generator where the code is "return 5  # because I like the number 5"
23:12 <johnsom> 3k clustered IPs, out of 4.29 billion addresses.  Draw two horizontal lines on a whiteboard, divide the ip ranges evenly, starting with in the first section, and ending with in the bottom section.
23:13 *** fnaval has quit IRC
23:13 <johnsom> That is how you have it configured right now
23:13 <rm_work> that's not a hashing algorithm at all
23:14 <rm_work> so you're saying the SOURCE_IP algorithm isn't hash based
23:16 <rm_work> ok no, what you are saying does not line up with what is in the HAProxy configuration manual
23:17 <rm_work> "The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request."
23:17 <johnsom> Ok, well maybe I am wrong. I can't exactly research right now
23:17 <johnsom> Yeah, that quote is what I was saying
23:18 <rm_work> but it's *hashed*
23:18 <rm_work> "the hash table is a static array containing all alive servers. The hashes will be very smooth, will consider weights, but will be static in that weight changes while a server is up will be ignored. This means that there will be no slow start."
23:19 <johnsom> That does not mean it has an even distribution
23:19 <rm_work> "sdbm: this function was created initially for sdbm (a public-domain reimplementation of ndbm) database library. It was found to do well in scrambling bits, causing better distribution of the keys and fewer splits. It also happens to be a good general hashing function with good distribution"
23:20 <johnsom> Making an int from an IP and dividing by the buckets is a hash
23:20 <johnsom> Is sdbm the default?
23:20 <rm_work> "The default hash type is "map-based" and is recommended for most usages. The default function is "sdbm", the selection of a function should be based on the range of the values being hashed."
23:25 *** salmankhan has quit IRC
23:31 *** hvhaugwitz has quit IRC
23:31 *** hvhaugwitz has joined #openstack-lbaas
23:41 <abaindur> johnsom: We made some progress too on setting up octavia from the issues last week
23:41 <abaindur> but now i had some questions regarding tenancy
23:43 <abaindur> we have to give the octavia user some credentials in the service_auth section, with a tenant specified. Our octavia user is in the services tenant
23:43 <abaindur> the issue is, the LBs come up in this tenant. is there a way to have them come up in the tenant of the user who created the LBs?
23:44 <abaindur> Otherwise they are hidden to this tenant
23:44 <abaindur> (the amphora VMs)
23:45 <johnsom> No, it is intended that these are not visible to the tenant and do not impact their service quotas
23:45 <johnsom> This is by design
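The `[service_auth]` section abaindur mentions is the Keystone credential block in octavia.conf. A hedged sketch; every value below is a placeholder for this deployment's own settings:

```
# Sketch of the octavia.conf [service_auth] section under discussion;
# all values are placeholders.
[service_auth]
auth_type = password
auth_url = http://<keystone-host>:5000/v3
username = octavia
password = <secret>
project_name = service
user_domain_name = Default
project_domain_name = Default
```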
23:45 <abaindur> second, does the mgmt LB network need to be shared, or owned by whatever tenant the octavia user is in? I am assuming if the "services" tenant doesn't have access to this network, the amphora can't be booted up on this network
23:46 <johnsom> Amphora are not a concept users should be aware of
23:46 <abaindur> ah... ok
23:47 <abaindur> So i'm guessing we should create a special LB tenant that the octavia user is part of when we specify the creds (since we don't want it in the services tenant), and we should create the LB network as owned by this tenant
23:47 <johnsom> Lb-mgmt is purely for internal Octavia use, so it only needs to be available to the octavia service account
23:48 <johnsom> Yeah, I think OSA uses the services project and octavia username. But projects are cheap...  grin
23:49 <abaindur> How does the amphora attach to the backend networks we want to load balance to btw?
23:49 <abaindur> here i am assuming they will not be shared, but isolated tenant networks
23:50 <johnsom> However if you do not use the service project you may need to do some RBAC config in the other services that is usually handled already for the service project
23:50 <abaindur> Now that our amphora and LB mgmt net are owned by services (or whatever new tenant we use), can it still attach to non-shared networks used for backend servers?
23:51 <abaindur> which might be in "tenantA" or B etc...
23:51 <johnsom> We have RBAC rights to attach those tenant networks
23:51 <abaindur> i don't believe we are using rbac... using the old admin_or_owner policy.json that was supplied
23:51 <johnsom> That is RBAC, just a simple rule set
23:52 <johnsom> These RBAC settings are not in Octavia, but in the other services
23:53 <johnsom> Nova, neutron, glance, barbican, etc.
23:53 <johnsom> The service project usually has these rules already in those services
23:55 <abaindur> ah ok thanks. sorry, my knowledge of RBAC is limited :)
23:57 <lxkong> johnsom, rm_work: hi, could you please take a look at!/story/2003413? I'd like to hear your suggestions
23:58 *** irenab has quit IRC
23:58 *** abaindur has quit IRC
23:58 *** abaindur has joined #openstack-lbaas
23:59 <johnsom> I can look tomorrow. Still on vacation and disappearing again now
23:59 <lxkong> kk, have fun :-)

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!