| opendevreview | Gregory Thiemonge proposed openstack/octavia-tempest-plugin master: WIP Make test_traffic_ops faster https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/962340 | 08:35 |
|---|---|---|
| opendevreview | Gregory Thiemonge proposed openstack/octavia-tempest-plugin master: Improve test_traffic_ops execution speed https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/962340 | 09:33 |
| opendevreview | Gregory Thiemonge proposed openstack/octavia-tempest-plugin master: Improve test_traffic_ops execution speed https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/962340 | 11:54 |
| opendevreview | Gregory Thiemonge proposed openstack/octavia-tempest-plugin master: Fix two-node jobs https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/921449 | 11:55 |
| scoopex_ | hi, we have some loadbalancers with 3 listeners and two backends of 60 nodes. we ran into a situation where the health manager constantly fails over amphorae every 1-2 minutes. this does not happen when we reduce the number of backends to 30. the loadbalancer network is operated over neutron (that's https://yaook.cloud/, but the same way kolla does it). Might the problem originate in the size of the | 14:11 |
| scoopex_ | health status messages? (like described in the fix https://review.opendev.org/c/openstack/octavia/+/852269) | 14:11 |
| opendevreview | Gregory Thiemonge proposed openstack/octavia-tempest-plugin master: Fix two-node jobs https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/921449 | 14:22 |
| gthiemon1e | scoopex_: if you enable the debug messages in the octavia health-manager, you should see the incoming heartbeat messages from the amphorae. In your case, it could be that some messages are too slow to be processed in time (basically the health-manager updates the DB with the status of each resource - lbs, listeners, members - and in some cases, with a very loaded DB server, some updates can take a lot | 14:32 |
| gthiemon1e | of time), or it could be packet drops due to the size of the UDP packets; here, ssh-ing to an amphora and running tcpdump on the management interface will show you the size of the packets sent by the amphora | 14:32 |
| opendevreview | Tobias Urdin proposed openstack/octavia master: wip: Add Barbican secrets consumers https://review.opendev.org/c/openstack/octavia/+/864308 | 14:45 |
| scoopex_ | gthiemon1e: the system is not very loaded, I think the UDP packet size might be the problem. | 15:02 |
| scoopex_ | scoopex_: will review a tcpdump tomorrow. the network itself uses an MTU of 8942, but the neutron port (where the health manager listens) shows an MTU of 1500. | 15:05 |
| scoopex_ | so looking at the packets sent on the amphora and the packets received on the manager, and using the debug log, might be beneficial. | 15:06 |
| johnsom | scoopex_ The other thing to check is whether you are running the Amphora instance out of RAM. If you are running a 1GB compute flavor for the Amphora, it may not be enough, and any update to the load balancer configuration could run it out of RAM. | 15:30 |
| scoopex_ | johnsom: I will have a look, but I think that is not a problem currently. | 16:01 |
| opendevreview | Merged openstack/octavia master: Validate default tls versions and protocols https://review.opendev.org/c/openstack/octavia/+/959406 | 20:30 |
| opendevreview | Merged openstack/octavia master: Require valid volume size https://review.opendev.org/c/openstack/octavia/+/959404 | 20:30 |
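
Following up on gthiemon1e's suggestion in the log above: besides tcpdump on the amphora side, a quick way to see what the health-manager side actually receives is to log the size of each UDP datagram arriving on the heartbeat port. The sketch below is a minimal, hypothetical listener, assuming Octavia's default heartbeat port of UDP 5555 (the `[health_manager] bind_port` setting in octavia.conf); run it on a test host or port where the real health-manager is not already bound.

```python
# Minimal sketch: log the size of UDP datagrams arriving on the
# health-manager heartbeat port, to check whether the large heartbeats
# from a 60-member load balancer make it through at all.
# Port 5555 is the assumed Octavia default; adjust to match the
# [health_manager] bind_ip/bind_port values in your octavia.conf.
import socket

HEARTBEAT_PORT = 5555  # assumed default; check your octavia.conf

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", HEARTBEAT_PORT))
print(f"listening for heartbeats on UDP {HEARTBEAT_PORT}")
while True:
    # 65535 is the maximum UDP payload, so nothing gets truncated here
    data, addr = sock.recvfrom(65535)
    print(f"{addr[0]}: {len(data)} bytes")
```

If heartbeats from the 60-member load balancer never show up here while the 30-member ones do, that points at packets being lost in transit rather than at slow DB updates.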
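
On scoopex_'s MTU observation: a UDP heartbeat larger than the 1500-byte port MTU is split into IPv4 fragments, and losing any single fragment discards the whole datagram, so the health-manager never sees that heartbeat and will eventually fail the amphora over. The sketch below works through the fragment arithmetic with hypothetical heartbeat sizes (the real sizes should be measured with tcpdump, as suggested above):

```python
# Back-of-the-envelope sketch: how many IPv4 fragments one UDP heartbeat
# of a given size produces at a given MTU. The heartbeat sizes below are
# hypothetical placeholders, not measured Octavia values.
import math

def fragments(udp_payload_bytes: int, mtu: int = 1500) -> int:
    """IPv4 fragments for one UDP datagram.

    Assumes a 20-byte IP header per fragment; fragment payloads are
    rounded down to a multiple of 8 as required by IPv4 offsets, and
    the 8-byte UDP header travels inside the datagram payload.
    """
    per_fragment = (mtu - 20) // 8 * 8   # payload bytes per fragment
    total = udp_payload_bytes + 8        # include the UDP header
    return math.ceil(total / per_fragment)

for members, size in [(30, 3000), (60, 6000)]:  # hypothetical sizes
    print(f"{members} members, ~{size} B heartbeat: "
          f"{fragments(size)} fragments at MTU 1500, "
          f"{fragments(size, 8942)} at MTU 8942")
```

Under these assumptions, roughly doubling the member count doubles the heartbeat size and the fragment count at MTU 1500, so the odds of losing at least one fragment (and with it the whole heartbeat) go up accordingly, which would fit the observation that 30 backends are fine and 60 are not; at the network's MTU of 8942, the same heartbeat fits in a single unfragmented packet.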