Monday, 2019-03-11

*** luksky has quit IRC01:27
*** hongbin has joined #openstack-lbaas02:02
*** hongbin has quit IRC03:58
*** yamamoto has joined #openstack-lbaas04:32
*** ramishra has joined #openstack-lbaas04:45
*** fnaval has quit IRC05:35
*** sapd1 has joined #openstack-lbaas06:08
*** ivve has joined #openstack-lbaas06:28
*** ianychoi has quit IRC06:32
*** ianychoi has joined #openstack-lbaas06:32
*** ianychoi has quit IRC06:35
*** ianychoi has joined #openstack-lbaas06:36
*** ccamposr has joined #openstack-lbaas06:51
*** pcaruana has joined #openstack-lbaas07:00
*** rcernin has quit IRC07:03
*** ltomasbo has joined #openstack-lbaas07:07
*** gthiemonge has joined #openstack-lbaas07:25
*** gcheresh_ has joined #openstack-lbaas07:40
*** gthiemonge has quit IRC07:51
*** gthiemonge has joined #openstack-lbaas07:51
*** rpittau|afk is now known as rpittau08:09
*** sapd1 has quit IRC08:15
*** dmellado has quit IRC08:20
*** luksky has joined #openstack-lbaas08:38
*** gthiemonge has quit IRC09:15
*** gthiemonge has joined #openstack-lbaas09:48
*** yamamoto has quit IRC10:17
*** yamamoto has joined #openstack-lbaas10:46
*** yamamoto has quit IRC10:52
*** yamamoto has joined #openstack-lbaas10:56
*** dmellado_ has joined #openstack-lbaas11:04
*** dmellado_ is now known as dmellado11:05
*** ivve has quit IRC11:31
<kmadac2> Hi folks, when I create a loadbalancer in the horizon dashboard, I can see 4 amphora VMs created when I do openstack server list. spare_amphora_pool_size is set to 1. What could be the reason for so many amphorae per LB?  11:38
<cgoncalves> kmadac2, how many LBs have you created and what is their topology (active-standby/standalone)?  11:39
<kmadac2> @cgoncalves: there is only 1 LB and loadbalancer_topology = SINGLE  11:41
<cgoncalves> kmadac2, so 2 out of 4 amphora VMs are accounted for. the other 2 might have been from previous LB creations that for some reason didn't get deleted...?  11:43
<cgoncalves> in case those 2 extra amphora VMs are up and sending heartbeats, they should have been detected by the health manager service (as "zombie amphorae") and deleted  11:45
<kmadac2> cgoncalves: I had an issue deleting previous LBs, but then I used the --cascade argument and it looked like the LB had been deleted  11:45
<cgoncalves> clarification to my previous comment: the health manager only deletes them if there's a record of them in the DB; otherwise it skips touching them  11:47
<kmadac2> cgoncalves: why should there be 2 amphorae when only one LB with SINGLE topology is created?  11:47
<cgoncalves> kmadac2, because you have 1 LB in SINGLE topology, so that is 1 amphora, plus another amphora in the spare pool  11:47
<kmadac2> cgoncalves: i see  11:48
<kmadac2> cgoncalves: so let's get back to the main issue. After deletion of all old LBs, 3 amphorae still existed, so I deleted the VMs manually in horizon. Then I created one new LB and 4 amphorae were started. What is the best way to ensure that everything is really cleaned up?  11:52
<kmadac2> cgoncalves: I just deleted that one single LB and I still have 2 amphorae running  11:55
<kmadac2> cgoncalves: does it take a while until a spare amphora is deleted?  11:56
<cgoncalves> kmadac2, spare amphorae will not be deleted even if there are no LBs created  12:06
<cgoncalves> given you configured spare_amphora_pool_size=1, you should have 1 amphora in the spare pool at (almost) all times  12:08
<cgoncalves> I'm not sure how you ended up with 4 amphorae created with 1 LB in single topology and spare_amphora_pool_size=1. it should have been 2 amphorae total  12:09
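[Editor's aside: the pool-size option discussed above lives in octavia.conf. A minimal sketch of the relevant fragment, assuming the stock [house_keeping] section name and defaults; not a copy of kmadac2's actual config:]

```ini
# octavia.conf (sketch) -- spare-pool sizing handled by the
# housekeeping service
[house_keeping]
# Number of spare amphorae housekeeping tries to keep booted at all
# times; 0 disables the spare pool. With 1 LB in SINGLE topology this
# setting should yield 2 amphora VMs total: 1 serving + 1 spare.
spare_amphora_pool_size = 1
```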
<cgoncalves> please check the octavia logs  12:09
<cgoncalves> if possible, share the output of these two commands: "openstack loadbalancer list" and "openstack loadbalancer amphora list"  12:11
<cgoncalves> I'll be back in 1 hour (lunch)  12:11
*** luksky has quit IRC12:13
<dmellado> cgoncalves: o/  12:13
*** ivve has joined #openstack-lbaas12:14
<cgoncalves> dmellado, o/ !!  12:17
*** trown|back11mar is now known as trown12:46
*** luksky has joined #openstack-lbaas12:53
*** gcheresh has joined #openstack-lbaas13:05
*** gcheresh_ has quit IRC13:06
*** gcheresh has quit IRC13:11
*** gcheresh has joined #openstack-lbaas13:11
*** yamamoto has quit IRC13:24
*** ianychoi has quit IRC13:28
*** ianychoi has joined #openstack-lbaas13:29
*** sapd1 has joined #openstack-lbaas13:31
*** henriqueof has quit IRC13:41
*** lxkong has quit IRC13:42
*** yamamoto has joined #openstack-lbaas13:43
<kmadac2> @cgoncalves: here is the output of those commands: I had some misconfigurations in the dashboard, so that could have caused such inconsistency between amphorae and load balancers.  13:43
<kmadac2> cgoncalves: I'm going to delete all amphora records in the DB and will try to create LBs again.  13:44
<cgoncalves> kmadac2, looks like you have 3 amphorae in the spare pool. not sure how that happened. the other two in PENDING_CREATE might have been from previous failed attempts...?  13:46
<kmadac2> @cgoncalves: how can I know that those are from the spare pool? is it because they have nothing other than the mgmt IP?  13:58
<cgoncalves> kmadac2, that plus status READY  13:59
<cgoncalves> I think I once stumbled on something very similar where I ended up having more spare amps than set. I should have opened a bug...  14:00
<kmadac2> @cgoncalves: ok, thanks. So I suppose the only way to solve it is to delete them from the DB and delete the VMs.  14:00
*** henriqueof has joined #openstack-lbaas14:05
<cgoncalves> ah, the extra amp count I experienced was when I set the pool size to X and later decreased it. I was expecting the housekeeping to delete the extra ones  14:05
<cgoncalves> kmadac2, maybe that is happening to you too?  14:06
*** KeithMnemonic has joined #openstack-lbaas14:11
<johnsom> kmadac2: It is normal for the spares pool to over-create a few on initial startup. When it is enabled, it is a minimum value. Typically the few extras get consumed by LB builds pretty quickly.  14:12
<KeithMnemonic> johnsom: Hi Michael, how are you doing? What is the use case for having multiple amp_boot_network_list networks?  14:12
<kmadac2> @cgoncalves: I haven't changed the pool size, so that won't be my issue here. What is interesting is that even if I remove all records from the DB (tables amphora and amphora_health), amphora VMs are built again and again  14:13
<johnsom> We just haven't spent the time to make sure it doesn't over-create.  14:13
<johnsom> KeithMnemonic: zero  14:13
<johnsom> KeithMnemonic: IMO it was a botched attempt to allow one to be picked from the list.  14:14
<johnsom> KeithMnemonic: instead it got implemented so that all of them get plugged.  14:15
*** mkuf has quit IRC14:31
<openstackgerrit> Gregory Thiemonge proposed openstack/octavia master: Fix typo and remove useless lines in user guide
*** sapd1 has quit IRC15:05
<cgoncalves> johnsom, how come the housekeeping service over-creates spare amps?  15:11
<johnsom> We never put the locking in for it.  15:11
<johnsom> Everyone was "do we care?" and evidently the answer was no  15:12
<johnsom> So, it is sloppy  15:12
<johnsom> Making sure there is a minimum was the higher priority. Plus, with act/stdby and the nova limitations, spares became less important.  15:13
<cgoncalves> so N HK instances starting at the same time can create N*spare_amphora_pool_size amps?  15:15
<cgoncalves> I guess the same holds even afterwards, if they run the periodic check at approx. the same time  15:16
<cgoncalves> over-SLAing :)  15:18
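[Editor's aside: the over-creation described above can be modeled in a few lines. If several housekeeping instances each snapshot the pool size before any of them finishes booting a spare, each sees the same deficit and each boots the full amount. A toy sketch, not Octavia code; all names are made up:]

```python
# Toy model of the unlocked spare-pool check discussed above: each
# housekeeping instance reads the pool size, then boots amphorae to
# cover the deficit. With no locking, N instances that all read
# before any of them creates will each see the full deficit.
SPARE_AMPHORA_POOL_SIZE = 1
spare_pool = []  # stands in for the spare-amphora records in the DB

def housekeeping_check(observed_size):
    """Boot spares based on a (possibly stale) observed pool size."""
    for _ in range(SPARE_AMPHORA_POOL_SIZE - observed_size):
        spare_pool.append("amphora")

# Three housekeeping processes start at the same time and all observe
# an empty pool before any spare finishes booting.
snapshots = [len(spare_pool) for _ in range(3)]
for size in snapshots:
    housekeeping_check(size)

print(len(spare_pool))  # → 3 spares instead of the configured 1
```

This is exactly the N*spare_amphora_pool_size outcome cgoncalves asks about: the fix would be a lock (or an atomic check-and-create in the DB) around the read/boot sequence.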
*** yamamoto has quit IRC15:21
*** fnaval has joined #openstack-lbaas15:22
*** yamamoto has joined #openstack-lbaas15:29
<xgerman> compute is cheap  15:43
*** yamamoto has quit IRC15:44
*** yamamoto has joined #openstack-lbaas15:45
*** yamamoto has quit IRC15:59
*** yamamoto has joined #openstack-lbaas16:00
*** ramishra has quit IRC16:02
*** gcheresh has quit IRC16:12
*** luksky has quit IRC16:13
*** yamamoto has quit IRC16:17
*** pcaruana has quit IRC16:23
*** pcaruana has joined #openstack-lbaas16:23
*** yamamoto has joined #openstack-lbaas16:25
*** trown is now known as trown|lunch16:30
*** ivve has quit IRC16:32
<johnsom> Ok, caught up with the weekend fun... Commented on the project deletion community goal. I.e., let's think about immutable objects..... grin  16:34
<cgoncalves> drop octavia; mysqldump < octavia-schema.sql ? :P  16:36
<cgoncalves> oh, never mind, per project :)  16:37
<johnsom> That sounds more like "service" deletion than project deletion.....  16:37
<colin-> anybody operating Mellanox network hardware in support of octavia? news of their acquisition had me wondering how widespread they are in private cloud  16:38
<johnsom> It's also a bit of "stuff is going to go wrong, how should we handle it"  16:38
<johnsom> colin-: Offloading NICs are fairly widely used. Since the amphorae sit on top of neutron networking, we benefit from any optimization done at the neutron level, such as offloading NICs.  16:39
<johnsom> Usually what I see is setups using the VXLAN offload for an overlay network.  16:40
*** gthiemonge has quit IRC16:48
<xgerman> yep, we did some experimenting with SR-IOV two years back… but most people are not really into optimizing the flow  16:51
*** gthiemonge has joined #openstack-lbaas16:52
<johnsom> Yeah, I have been shocked at how little node-to-node performance seems to matter in some deployments....  16:52
*** gthiemonge has quit IRC16:53
<johnsom> I know one product guy who was beating on the table that we need huge performance from the load balancer. I had them do a benchmark on the cloud without the load balancer. Come to find out, they could use WiFi for their interconnect and wouldn't notice....  16:55
*** rpittau is now known as rpittau|afk17:09
*** gthiemonge has joined #openstack-lbaas17:10
*** luksky has joined #openstack-lbaas17:12
<openstackgerrit> German Eichberger proposed openstack/octavia-tempest-plugin master: Save amphora logs in gate
<openstackgerrit> German Eichberger proposed openstack/octavia master: Amphora logging
<cgoncalves> xgerman, commit message in says unit tests are missing. I see them, but perhaps you want to add more or just forgot to delete that note?  17:13
*** kklimonda_ has quit IRC17:14
*** kklimonda has joined #openstack-lbaas17:14
<xgerman> yeah, I wanted to add more but will see why it doesn't work in the gate and then resubmit with a better commit… johnsom also wanted another set of logging hosts (one for system messages and one for haproxy)  17:15
<xgerman> very likely early Train  17:15
<cgoncalves> xgerman, we're talking about different patches  17:15
<cgoncalves> I'm talking about "Refactor the unplugging of the VIP"  17:16
<xgerman> Ah, that one  17:16
<xgerman> ok, yeah, will remove that message  17:16
<openstackgerrit> German Eichberger proposed openstack/octavia master: Refactor the unplugging of the VIP
<xgerman> yeah, would love to have that in S  17:18
<cgoncalves> a more detailed commit message and/or release note and/or story would have helped to better prioritize reviews  17:20
<xgerman> well, it's been up since October so people had 5 months to comment :-)  17:28
<johnsom> I did look at the patch review list this morning.  17:29
<xgerman> I was working through it...  17:29
<johnsom> I still need to dig through stories that don't have patches yet.  17:30
<openstackgerrit> German Eichberger proposed openstack/octavia master: db: fix length of fields
*** ivve has joined #openstack-lbaas17:35
<xgerman> "ImportError: No module named 'octavia_lib'" shouldn't tox just run?  17:44
<xgerman> maybe -rv?  17:44
<xgerman> one sec  17:45
<johnsom> If there has been a requirements change, you need to run tox with the -r option to pull in the new requirements  17:45
<johnsom> I guess it's been a while since we added a requirement.... grin  17:45
<xgerman> well, the crazy tox-in-docker I run usually pulls in reqs all the time, but who knows  17:45
*** trown|lunch is now known as trown17:49
<cgoncalves> the LB delete flow deletes listeners. it shouldn't, right?  17:58
<johnsom> It should if it's a cascade delete  17:59
<cgoncalves> but it is not  17:59
<cgoncalves> the cascade delete deletes listeners and pools  17:59
<johnsom> I mean you can't have a listener without a load balancer  18:00
<johnsom> I agree, that is odd and probably a mistake, but I would have to dig through the flows to confirm it's not needed for some odd reason.  18:01
<johnsom> The API will block an LB delete if there is a listener, unless it's a cascade.  18:02
<openstackgerrit> German Eichberger proposed openstack/octavia master: Allows failover if port is not deallocated by nova
<cgoncalves> well, xgerman is (accidentally?) removing that listeners_delete flow  18:02
<cgoncalves> the API doesn't allow deleting an LB with just a listener as a child  18:03
<cgoncalves> left a quick comment on
<xgerman> ah, thanks… gotta run to get a haircut, meet people, etc.  18:05
<openstackgerrit> Merged openstack/octavia master: Fix performance of housekeeping DB clean up
<openstackgerrit> Merged openstack/octavia master: Fix parallel plug vip
<openstackgerrit> Merged openstack/octavia master: Fix an amphora driver bug for TLS client auth
*** ccamposr has quit IRC19:01
*** yamamoto has quit IRC19:14
<openstackgerrit> guang-yee proposed openstack/octavia master: Update osutil support for SUSE distro
<openstackgerrit> guang-yee proposed openstack/octavia master: Update osutil support for SUSE distro
<openstackgerrit> Carlos Goncalves proposed openstack/octavia stable/rocky: Fix performance of housekeeping DB clean up
*** lxkong has joined #openstack-lbaas20:00
*** yamamoto has joined #openstack-lbaas20:06
*** yamamoto has quit IRC20:12
*** gthiemonge has quit IRC20:54
*** trown is now known as trown|outtypewww21:05
*** abaindur has joined #openstack-lbaas21:06
*** abaindur has quit IRC21:06
*** abaindur has joined #openstack-lbaas21:06
*** henriqueof has quit IRC21:08
*** kosa777777 has joined #openstack-lbaas21:23
*** pcaruana has quit IRC21:45
*** ivve has quit IRC21:54
<johnsom> lol, interesting  22:29
*** threestrands has joined #openstack-lbaas22:50
*** fnaval has quit IRC23:00
<colin-> is there anything in the documentation that describes when we source-NAT and when we don't?  23:05
<colin-> or anything on the topic of SNAT in octavia  23:05
<johnsom> colin-: We don't really SNAT. The octavia driver is a full proxy, which means there is a connection to the VIP and a separate connection from the amphora to the member. It's not a NAT, it's a proxy.  23:19
<johnsom> There is a ton of subtlety in that, I get it.  23:20
<johnsom> Actually, I guess the one case where we do something similar to a NAT is for UDP.  23:22
<johnsom> But all of the TCP-based protocols are two separate sessions.  23:22
<johnsom> This is why we can accept IPv4 on the VIP and connect to IPv6 members on the back, or the other way around.  23:23
<colin-> so, the back end hosts in that case don't see the source IP for TCP VIPs?  23:30
<johnsom> Correct, they won't see the client source IP that connected to the VIP. They have to use the PROXY protocol or header insertion to see that data.  23:30
*** fnaval has joined #openstack-lbaas23:31
*** fnaval has quit IRC23:33
*** ccamposr has joined #openstack-lbaas23:33
*** abaindur has quit IRC23:36
*** luksky has quit IRC23:40
*** ccamposr has quit IRC23:40
<colin-> and when you say use the PROXY protocol, you're referring to the protocol argument the pool create command supports?  23:43
<colin-> that's the first opportunity i see to use it  23:43
<johnsom> Yes, it's the protocol, set on the pool, used to connect to the member servers. PROXY is a TCP protocol that prefixes the normal TCP data flow with information about the flow.  23:44
<colin-> understood, just wanted to make sure that i understood where octavia implements it  23:45
<johnsom> That is a good description, but it is also used outside of HAProxy  23:45
<colin-> and that i wasn't missing anything by only looking at the pool create  23:45
*** threestrands has quit IRC23:45
<johnsom> Yeah, listener is the protocol on the frontend, pool is the protocol on the backend connection.  23:45
<johnsom> colin-: BTW, if you find things that we haven't described well in the docs, feel free to open stories for us. I know there are a bunch of things missing, etc. But having stories makes it easier to track the important things.  23:50
<colin-> ok, i'll log one with this and a couple of other generic questions that i have rattling around  23:53
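[Editor's aside: the PROXY protocol prefix discussed above is, in version 1 of the protocol, a single human-readable line prepended to the backend TCP stream before any application data. A sketch with made-up, documentation-range addresses; not Octavia or HAProxy source:]

```python
def proxy_v1_header(client_ip: str, client_port: int,
                    dest_ip: str, dest_port: int) -> str:
    """Build a PROXY protocol v1 header line (TCP over IPv4).

    The load balancer sends this line to the member before the real
    application data, so the member can recover the original client
    address that the full-proxy hop would otherwise hide.
    """
    return f"PROXY TCP4 {client_ip} {dest_ip} {client_port} {dest_port}\r\n"

# Example using RFC 5737 documentation addresses:
header = proxy_v1_header("203.0.113.7", 54321, "198.51.100.10", 80)
print(header.strip())  # → PROXY TCP4 203.0.113.7 198.51.100.10 54321 80
```

The member-side software must be configured to expect this prefix (e.g. PROXY protocol support in its TCP listener), which is why it is opt-in via the pool protocol.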

Generated by 2.15.3 by Marius Gedminas