kmadac2 | Hi folks, when I create a load balancer in the horizon dashboard, I can see 4 amphora VMs created when I do openstack server list. spare_amphora_pool_size is set to 1. What could be the reason for so many amphorae per LB? | 11:38 |
cgoncalves | kmadac2, how many LBs do you have created and what is their topology (active-standby/standalone)? | 11:39 |
kmadac2 | @cgoncalves: there is only 1 LB and loadbalancer_topology = SINGLE | 11:41 |
cgoncalves | kmadac2, so 2 out of 4 amphora VMs are accounted for. the other 2 might be left over from previous LB creations that for some reason didn't get deleted...? | 11:43 |
cgoncalves | in case those 2 extra amphora VMs are up and sending heartbeats, they should have been detected by the health manager service (as "zombie amphorae") and deleted | 11:45 |
kmadac2 | cgoncalves: I had an issue deleting previous LBs, but then I used the --cascade argument and it looked like the LB had been deleted | 11:45 |
cgoncalves | clarification to my previous comment: the health manager only deletes them if there's a record of them in the DB, otherwise it skips touching them | 11:47 |
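A toy sketch of the health-manager behavior described above; the function and field names are illustrative, not Octavia's actual code:

```python
# Hedged sketch: a heartbeat from an amphora with no DB record is skipped,
# while one whose record is marked DELETED is treated as a zombie and reaped.
def handle_heartbeat(amphora_id, db_records, delete_vm):
    record = db_records.get(amphora_id)
    if record is None:
        return                      # no DB record: don't touch it
    if record['status'] == 'DELETED':
        delete_vm(amphora_id)       # known zombie: delete the VM

# usage:
handle_heartbeat('amp-1', {'amp-1': {'status': 'DELETED'}},
                 lambda amp: print('reaping zombie', amp))
```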
kmadac2 | cgoncalves: why should there be 2 amphorae when only one LB with SINGLE topology is created? | 11:47 |
cgoncalves | kmadac2, because you have 1 LB in SINGLE topology so that is 1 amphora, plus another amphora in the spare pool | 11:47 |
kmadac2 | cgoncalves: i see | 11:48 |
kmadac2 | cgoncalves: so let's get back to the main issue. After deleting all the old LBs, 3 amphorae still existed, so I deleted the VMs manually in horizon. Then I created one new LB and 4 amphorae were started. What is the best way to ensure that everything is really cleaned up? | 11:52 |
kmadac2 | cgoncalves: I just deleted that one single LB and I still have 2 amphorae running | 11:55 |
kmadac2 | cgoncalves: does it take a while until the spare amphora is deleted? | 11:56 |
cgoncalves | kmadac2, spare amphorae will not be deleted even if there are no LBs created | 12:06 |
cgoncalves | given you configured spare_amphora_pool_size=1, you should have 1 amphora in the spare pool at (almost) all times | 12:08 |
cgoncalves | I'm not sure how you ended up with 4 amphorae created with 1 LB in single topology and spare_amphora_pool_size=1. it should have been 2 amphorae total | 12:09 |
cgoncalves | please check the octavia logs | 12:09 |
cgoncalves | if possible, share the output of these two commands: "openstack loadbalancer list" and "openstack loadbalancer amphora list" | 12:11 |
cgoncalves | I'll be back in 1 hour (lunch) | 12:11 |
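In the meantime, a small helper along the lines of what cgoncalves asked for: it shells out to the two CLI commands quoted above and flags amphorae whose load balancer no longer exists. The JSON column names (id, loadbalancer_id, status) are assumptions; verify them against your client version.

```python
import json
import subprocess

def openstack(*args):
    """Run an openstack CLI command and parse its JSON output."""
    out = subprocess.check_output(('openstack',) + args + ('-f', 'json'))
    return json.loads(out)

lb_ids = {lb['id'] for lb in openstack('loadbalancer', 'list')}
for amp in openstack('loadbalancer', 'amphora', 'list'):
    if amp.get('loadbalancer_id') not in lb_ids:
        print('spare or orphaned amphora:', amp['id'], amp.get('status'))
```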
dmellado | cgoncalves: o/ | 12:13 |
cgoncalves | dmellado, o/ !! | 12:17 |
kmadac2 | @cgoncalves: here is the output of those commands: http://paste.openstack.org/show/747536/. I had some misconfigurations in the dashboard, which could have caused such an inconsistency between amphorae and load balancers. | 13:43 |
kmadac2 | @cgoncalves: I'm going to delete all amphora records in the DB and will try to create LBs again. | 13:44 |
cgoncalves | kmadac2, looks like you have 3 amphorae in the spare pool. not sure how that happened. the other two in PENDING_CREATE might have been from previous failed attempts...? | 13:46 |
kmadac2 | @cgoncalves: how can I know that those are from the spare pool? is it because they have no IP other than the mgmt IP? | 13:58 |
cgoncalves | kmadac2, that plus status READY | 13:59 |
cgoncalves | I think I once stumbled on something very similar where I ended up having more spare amps than configured. I should have opened a bug... | 14:00 |
kmadac2 | @cgoncalves: ok, thanks. So I suppose the only way to solve it is to delete them from the DB and delete the VMs. | 14:00 |
cgoncalves | yes | 14:01 |
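For reference, a heavily hedged sketch of that manual cleanup. The table names match the ones kmadac2 mentions a bit later (amphora, amphora_health); the credentials are placeholders, and wiping all rows is exactly as drastic as it looks, so verify against your own deployment first.

```python
import pymysql  # assumes the Octavia control-plane DB is MySQL/MariaDB

conn = pymysql.connect(host='controller', user='octavia',
                       password='placeholder', database='octavia')
with conn.cursor() as cur:
    cur.execute('DELETE FROM amphora_health')  # health records first
    cur.execute('DELETE FROM amphora')
conn.commit()
conn.close()
# then delete the leftover VMs, e.g. `openstack server delete <vm-id>`;
# expect housekeeping to rebuild the spare pool afterwards
```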
cgoncalves | ah, the extra amp count I experienced was when I set pool size to X and later decreased it. I was expecting the housekeeping to delete the extra ones | 14:05 |
cgoncalves | kmadac2, maybe that is happening to you too? | 14:06 |
johnsom | kmadac2: It is normal for the spares pool to over-create a few on initial startup. When it is enabled, the setting is a minimum value. Typically the few extra get consumed by LB builds pretty quickly. | 14:12 |
KeithMnemonic | johnsom: Hi Michael, how are you doing? What is the use case for having multiple amp_boot_network_list networks? | 14:12 |
kmadac2 | @cgoncalves: I haven't changed the pool size, so that won't be my issue here. What is interesting is that even if I remove all records from the DB (tables amphora and amphora_health), amphora VMs are built again and again | 14:13 |
johnsom | We just haven’t spent the time to make sure it doesn’t over create. | 14:13 |
johnsom | KeithMnemonic: zero | 14:13 |
KeithMnemonic | lol | 14:14 |
johnsom | KeithMnemonic: IMO it was a botched attempt to allow one to be picked from the list. | 14:14 |
johnsom | KeithMnemonic: instead it got implemented that all of them get plugged. | 14:15 |
KeithMnemonic | thanks! | 14:30 |
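To illustrate johnsom's point with a hedged sketch: every network in amp_boot_network_list ends up as a NIC on the amphora VM rather than one being picked. The novaclient-style nics list below is for illustration only.

```python
# all networks in the list get plugged, none is selected:
amp_boot_network_list = ['lb-mgmt-net-id', 'extra-net-id']  # from octavia.conf
nics = [{'net-id': net_id} for net_id in amp_boot_network_list]
print(nics)  # every entry becomes an interface on the amphora at boot
# e.g. nova.servers.create(..., nics=nics)
```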
openstackgerrit | Gregory Thiemonge proposed openstack/octavia master: Fix typo and remove useless lines in user guide https://review.openstack.org/642473 | 14:37 |
cgoncalves | johnsom, how come the housekeeping service over creates spare amps? | 15:11 |
johnsom | We never put the locking in for it. | 15:11 |
johnsom | Everyone was “do we care?” and evidently the answer was, no | 15:12 |
johnsom | So, it is sloppy | 15:12 |
johnsom | Making sure there is a minimum was the higher priority. Plus with act/stdby and the nova limitations, spares became less important. | 15:13 |
cgoncalves | so N HK instances starting at the same time can create N*spare_amphora_pool_size amps? | 15:15 |
cgoncalves | I guess the same holds later on too, if they run the periodic check at approximately the same time | 15:16 |
johnsom | Right | 15:18 |
cgoncalves | over SLAing :) | 15:18 |
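A toy reproduction of the race just described: N housekeeping workers each compare the spare count against spare_amphora_pool_size and create the deficit, with nothing locking the read-then-create window.

```python
import threading
import time

SPARE_POOL_SIZE = 1
spares = []          # stands in for the spare amphora records in the DB

def housekeeping_pass():
    deficit = SPARE_POOL_SIZE - len(spares)  # read the count...
    time.sleep(0.01)                         # ...race window...
    for _ in range(deficit):
        spares.append(object())              # ...then create the deficit

workers = [threading.Thread(target=housekeeping_pass) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(len(spares))  # 3, not 1: N workers x pool size
```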
xgerman | compute is cheap | 15:43 |
johnsom | Ok, caught up with the weekend fun... Commented on the project deletion community goal. I.e. Let's think about immutable objects..... grin | 16:34 |
cgoncalves | drop octavia; mysqldump < octavia-schema.sql ? :P | 16:36 |
cgoncalves | oh, never mind, per project :) | 16:37 |
johnsom | That sounds more like "service" deletion than project deletion..... | 16:37 |
colin- | anybody operating Mellanox network hardware in support of octavia? news of their acquisition had me wondering how widespread they are in private cloud | 16:38 |
johnsom | It's also a bit of "stuff is going to go wrong, how should we handle it" | 16:38 |
johnsom | colin- Offloading NICs are fairly widely used. Since the amphorae sit on top of neutron networking, we benefit from any optimization done at the neutron level, such as offloading NICs. | 16:39 |
johnsom | Usually what I see is setups using the VXLAN offload for an overlay network. | 16:40 |
xgerman | yep, we did some experimenting with SR-IOV two years back… but most people are not really into optimizing the flow | 16:51 |
johnsom | Yeah, I have been shocked at how little node-to-node performance seems to matter in some deployments.... | 16:52 |
johnsom | I know one product guy that was beating on the table that we need huge performance from the load balancer. I had them do a benchmark on the cloud without the load balancer. Come to find out they could use WiFi for their interconnect and wouldn't notice.... | 16:55 |
openstackgerrit | German Eichberger proposed openstack/octavia-tempest-plugin master: Save amphora logs in gate https://review.openstack.org/626406 | 17:12 |
openstackgerrit | German Eichberger proposed openstack/octavia master: Amphora logging https://review.openstack.org/624835 | 17:12 |
cgoncalves | xgerman, commit message in https://review.openstack.org/#/c/613685/ says unit tests are missing. I see them but perhaps you want to add more or just forgot to delete that note? | 17:13 |
xgerman | yeah, I wanted to add more but will see why it doesn't work in the gate and then resubmit with a better commit message… johnsom also wanted another set of logging hosts (one for system messages and one for haproxy) | 17:15 |
xgerman | very likely early Train | 17:15 |
cgoncalves | xgerman, we're talking about different patches | 17:15 |
cgoncalves | I'm talking about "Refactor the unplugging of the VIP" | 17:16 |
xgerman | Ah, that one | 17:16 |
xgerman | ok, yeah, will remove that message | 17:16 |
openstackgerrit | German Eichberger proposed openstack/octavia master: Refactor the unplugging of the VIP https://review.openstack.org/613685 | 17:17 |
xgerman | done | 17:18 |
xgerman | yeah, would love to have that in S | 17:18 |
cgoncalves | a more detailed commit message and/or release note and/or story would have helped us better prioritize reviews | 17:20 |
xgerman | well, it's been up since October, so people had 5 months to comment :-) | 17:28 |
johnsom | I did look at the patch review list this morning. | 17:29 |
xgerman | I was working through it... | 17:29 |
johnsom | I still need to dig through stories that don't have patches yet. | 17:30 |
openstackgerrit | German Eichberger proposed openstack/octavia master: db: fix length of fields https://review.openstack.org/625560 | 17:31 |
xgerman | "ImportError: No module named ‘octavia_lib’” shouldn’t tox just run? | 17:44 |
xgerman | maybe -rv? | 17:44 |
xgerman | one sec | 17:45 |
johnsom | If there has been a requirements change, you need to run tox with the -r option to pull in the new requirements | 17:45 |
johnsom | I guess it's been a while since we added a requirement.... grin | 17:45 |
xgerman | well, the crazy tox-in-docker setup I run usually pulls in reqs all the time, but who knows | 17:45 |
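For completeness, johnsom's suggestion wrapped in Python (the py3 env name is an example):

```python
import subprocess

# tox -r recreates the virtualenv, pulling in newly added requirements
# such as octavia_lib
subprocess.run(['tox', '-r', '-e', 'py3'], check=True)
```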
cgoncalves | https://github.com/openstack/octavia/blob/master/octavia/controller/worker/flows/load_balancer_flows.py#L239 | 17:58 |
cgoncalves | the LB delete flow deletes listeners. it shouldn't, right? | 17:58 |
johnsom | It should if it's a cascade delete | 17:59 |
cgoncalves | but it is not | 17:59 |
cgoncalves | the cascade delete deletes listeners and pools | 17:59 |
cgoncalves | https://github.com/openstack/octavia/blob/master/octavia/controller/worker/flows/load_balancer_flows.py#L293-L294 | 17:59 |
johnsom | I mean you can't have a listener without a load balancer | 18:00 |
johnsom | I agree, that is odd and probably a mistake, but I would have to dig through the flows to confirm it's not needed for some odd reason. | 18:01 |
johnsom | The API will block an LB delete if there is a listener unless it's a cascade. | 18:02 |
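The two delete paths being contrasted, using the --cascade flag kmadac2 mentioned earlier (the LB name is hypothetical):

```python
import subprocess

LB = 'my-lb'  # hypothetical load balancer name
# rejected by the API while the LB still has listeners:
subprocess.run(['openstack', 'loadbalancer', 'delete', LB])
# deletes the LB together with its listeners and pools:
subprocess.run(['openstack', 'loadbalancer', 'delete', '--cascade', LB])
```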
openstackgerrit | German Eichberger proposed openstack/octavia master: Allows failover if port is not deallocated by nova https://review.openstack.org/585864 | 18:02 |
cgoncalves | well, xgerman is (accidentally?) removing that listeners_delete flow | 18:02 |
xgerman | huh? | 18:02 |
cgoncalves | the API doesn't allow deleting an LB with just a listener as a child | 18:03 |
cgoncalves | left quick comment on https://review.openstack.org/#/c/613685/ | 18:04 |
xgerman | ah, thanks… gotta run to get a haircut, meet people, etc. | 18:05 |
openstackgerrit | Merged openstack/octavia master: Fix performance of housekeeping DB clean up https://review.openstack.org/627058 | 18:56 |
openstackgerrit | Merged openstack/octavia master: Fix parallel plug vip https://review.openstack.org/638992 | 18:56 |
openstackgerrit | Merged openstack/octavia master: Fix an amphora driver bug for TLS client auth https://review.openstack.org/640232 | 18:56 |
openstackgerrit | guang-yee proposed openstack/octavia master: Update osutil support for SUSE distro https://review.openstack.org/541811 | 19:38 |
openstackgerrit | guang-yee proposed openstack/octavia master: Update osutil support for SUSE distro https://review.openstack.org/541811 | 19:53 |
openstackgerrit | Carlos Goncalves proposed openstack/octavia stable/rocky: Fix performance of housekeeping DB clean up https://review.openstack.org/642552 | 19:56 |
rm_work | Whelp | 22:26 |
rm_work | https://www.nginx.com/press/f5-acquires-nginx-to-bridge-netops-and-devops/ | 22:26 |
johnsom | lol, interesting | 22:29 |
colin- | is there anything in the documentation that describes when we source NAT and when we don't? | 23:05 |
colin- | or anything on the topic of snat in octavia | 23:05 |
johnsom | colin- We don't really SNAT. The octavia driver is a full proxy, which means there is a connection to the VIP and a separate connection to the member from the amphora. It's not a NAT however, it's a proxy. | 23:19 |
johnsom | There is a ton of subtlety in that, I get it. | 23:20 |
johnsom | Actually, I guess the one case where we do something similar to a NAT is for UDP. | 23:22 |
johnsom | But all of the TCP based protocols are two separate sessions. | 23:22 |
johnsom | This is why we can accept IPv4 on the VIP and connect to IPv6 members on the back, or the other way around. | 23:23 |
colin- | so, the backend hosts in that case don't see the source IP for TCP VIPs? | 23:30 |
johnsom | Correct, they won't see the client source IP that connected to the VIP. They have to use the PROXY protocol or the header insertion to see that data. | 23:30 |
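A toy full proxy showing why the member never sees the client address: the frontend connection is terminated and a brand-new connection is opened to the member, which can even use a different address family (IPv4 front, IPv6 back). Purely illustrative, not the amphora's implementation.

```python
import socket
import threading

MEMBER = ('2001:db8::10', 8080)  # hypothetical IPv6 member

def pipe(src, dst):
    while data := src.recv(4096):
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)  # propagate the half-close

frontend = socket.socket(socket.AF_INET)       # IPv4 "VIP" side
frontend.bind(('0.0.0.0', 8000))
frontend.listen()
while True:
    client, _ = frontend.accept()
    # separate connection: the member sees the proxy's source IP
    member = socket.create_connection(MEMBER)
    threading.Thread(target=pipe, args=(client, member), daemon=True).start()
    threading.Thread(target=pipe, args=(member, client), daemon=True).start()
```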
colin- | and when you say use the PROXY protocol, you're referring to the protocol argument the pool create command supports? | 23:43 |
colin- | that's the first opportunity i see to use it | 23:43 |
johnsom | Yes, it's the protocol, set on the pool, used to connect to the member servers. PROXY is a TCP-based protocol that prefixes the normal TCP data flow with information about the original connection. | 23:44 |
johnsom | https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt | 23:44 |
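For a concrete picture, the human-readable v1 form of that prefix (per the spec linked above) carries the original client and VIP endpoints ahead of the application data:

```python
client_ip, client_port = '203.0.113.7', 51234  # original client
vip_ip, vip_port = '198.51.100.10', 443        # address the client connected to
header = f'PROXY TCP4 {client_ip} {vip_ip} {client_port} {vip_port}\r\n'
print(repr(header))  # 'PROXY TCP4 203.0.113.7 198.51.100.10 51234 443\r\n'
```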
colin- | understood, just wanted to make sure I understood where octavia implements it | 23:45 |
johnsom | That is a good description, but it is also used outside of HAProxy | 23:45 |
colin- | and that i wasn't missing anything by only looking at the pool create | 23:45 |
johnsom | Yeah, listener is the protocol on the frontend, pool is the protocol on the backend connection. | 23:45 |
colin- | thanks | 23:46 |
johnsom | NP | 23:49 |
johnsom | colin- BTW, if you find things that we haven't described well in the docs, feel free to open stories for us. I know there are a bunch of things missing, etc., but having stories makes it easier to track the important things. | 23:50 |
colin- | ok, i'll log one with this and a couple of other generic questions that i have rattling around | 23:53 |