Monday, 2025-09-29

<opendevreview> Gregory Thiemonge proposed openstack/octavia-tempest-plugin master: WIP Make test_traffic_ops faster  https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/962340  08:35
<opendevreview> Gregory Thiemonge proposed openstack/octavia-tempest-plugin master: Improve test_traffic_ops execution speed  https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/962340  09:33
<opendevreview> Gregory Thiemonge proposed openstack/octavia-tempest-plugin master: Improve test_traffic_ops execution speed  https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/962340  11:54
<opendevreview> Gregory Thiemonge proposed openstack/octavia-tempest-plugin master: Fix two-node jobs  https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/921449  11:55
<scoopex_> hi, we have some load balancers with 3 listeners and two backends of 60 nodes. we ran into a situation where the health manager constantly fails over amphorae every 1-2 minutes. this does not happen when we reduce the number of backends to 30. the load balancer network is operated over neutron (that's https://yaook.cloud/, but the same way kolla does it). Might the problem originate in the size of the  14:11
<scoopex_> health status messages? (like described in the fix https://review.opendev.org/c/openstack/octavia/+/852269)  14:11
<opendevreview> Gregory Thiemonge proposed openstack/octavia-tempest-plugin master: Fix two-node jobs  https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/921449  14:22
<gthiemon1e> scoopex_: if you enable debug messages in the octavia health-manager, you should see the incoming heartbeat messages from the amphorae. In your case, it could be that some messages are too slow to be processed in time (basically the health-manager updates the DB with the status of each resource - LBs, listeners, members - and in some cases, with a very loaded DB server, some updates can take a lot  14:32
<gthiemon1e> of time), or it could be packet drops due to the size of the UDP packets; here, ssh to an amphora and run tcpdump on the management interface to see the size of the packets sent by the amphora  14:32
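A minimal sketch of a standalone UDP listener that could be run on a test host (or a spare port) on the lb-mgmt network to print the size of each arriving datagram, for comparison with what tcpdump shows leaving the amphora. Port 5555 matches the Octavia [health_manager] bind_port default, but the bind address and port here are assumptions to adjust for the deployment; this is not part of Octavia itself.

    # heartbeat_size_probe.py - minimal sketch, not part of Octavia.
    # Prints the size of each UDP datagram received, so the sizes seen
    # at the health-manager side can be compared with what tcpdump
    # reports on the amphora's management interface. Bind to a spare
    # port or a test host to avoid conflicting with the real
    # health-manager process.
    import socket

    BIND_ADDR = "0.0.0.0"   # assumption: adjust to the lb-mgmt interface address
    BIND_PORT = 5555        # assumption: your deployment may use another port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((BIND_ADDR, BIND_PORT))
    print(f"listening on {BIND_ADDR}:{BIND_PORT} for heartbeat-sized datagrams")

    while True:
        data, (src_ip, src_port) = sock.recvfrom(65535)
        # Datagrams larger than the path MTU (minus IP/UDP headers) rely on
        # IP fragmentation; if fragments are dropped, recvfrom never fires.
        print(f"{src_ip}:{src_port} -> {len(data)} bytes")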
<opendevreview> Tobias Urdin proposed openstack/octavia master: wip: Add Barbican secrets consumers  https://review.opendev.org/c/openstack/octavia/+/864308  14:45
<scoopex_> gthiemon1e: the system is not very loaded, i think the UDP packet size might be the problem.  15:02
<scoopex_> scoopex_: will review a tcpdump tomorrow. the network itself uses an MTU of 8942, but the neutron port (where the health manager listens) shows an MTU of 1500.  15:05
<scoopex_> so looking at the packets sent on the amphora and the packets received on the manager, and using the debug log, might be beneficial.  15:06
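To compare what the amphora sends with what the health manager receives across that 8942-to-1500 MTU boundary, a hedged sketch of a UDP sender that emits payloads of increasing size over the management network, watched on the other end with tcpdump or the listener above. The target address and port are placeholders, and these dummy payloads will fail Octavia's heartbeat HMAC check, so aim them at a test listener rather than the production health-manager; the goal is only to find the datagram size at which fragments start getting dropped.

    # udp_mtu_probe.py - minimal sketch, assuming a listener (or tcpdump)
    # is watching on the receiving side. Sends UDP payloads of increasing
    # size toward the health-manager address to find the size at which
    # datagrams stop arriving (e.g. around a 1500-byte MTU hop when the
    # rest of the path uses 8942).
    import socket
    import time

    TARGET = ("192.0.2.10", 5555)   # assumption: health-manager mgmt IP and port
    SIZES = [500, 1400, 1500, 3000, 6000, 9000, 15000, 30000, 60000]

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for size in SIZES:
        payload = b"x" * size
        try:
            sock.sendto(payload, TARGET)
            print(f"sent {size} bytes")
        except OSError as exc:
            # EMSGSIZE here means the local stack refused to send a
            # datagram of this size at all.
            print(f"failed to send {size} bytes: {exc}")
        time.sleep(0.2)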
<johnsom> scoopex_ The other thing to check is whether you are running the Amphora instance out of RAM. If you are running a 1GB compute flavor for the Amphora, it may not be enough, and any update to the load balancer configuration could run it out of RAM.  15:30
<scoopex_> johnsom: I will have a look, but i think that is not a problem currently.  16:01
<opendevreview> Merged openstack/octavia master: Validate default tls versions and protocols  https://review.opendev.org/c/openstack/octavia/+/959406  20:30
<opendevreview> Merged openstack/octavia master: Require valid volume size  https://review.opendev.org/c/openstack/octavia/+/959404  20:30
