*** gthiemon1e is now known as gthiemonge | 06:54 | |
opendevreview | Merged openstack/octavia stable/2023.2: Fix timeout duration in start_vrrp_service during failovers https://review.opendev.org/c/openstack/octavia/+/897783 | 08:31 |
opendevreview | Merged openstack/octavia stable/2023.2: Reduce duration of failovers with amphora in ERROR https://review.opendev.org/c/openstack/octavia/+/897784 | 08:31 |
opendevreview | Merged openstack/octavia stable/2023.1: Fix timeout duration in start_vrrp_service during failovers https://review.opendev.org/c/openstack/octavia/+/897789 | 08:31 |
opendevreview | Merged openstack/octavia stable/2023.1: Reduce duration of failovers with amphora in ERROR https://review.opendev.org/c/openstack/octavia/+/897790 | 08:41 |
danfai | Hi, we are running octavia zed now. At the beginning we were running taskflow with the in-memory backend, but we switched to zookeeper with MySQL for persistence. With both we experienced some issues, probably mostly related to our setup/local patches. Given those issues, we were wondering: what is the recommended way of operating octavia in terms of taskflow? | 08:42 |
gthiemonge | danfai: hi, so far only the redis backend is used in the CI, we've planned to add zookeeper (https://review.opendev.org/c/openstack/octavia/+/862671), we haven't noticed specific issues when jobboard is enabled. | 09:06 |
gthiemonge | that said, I've fixed a few bugs in taskflow with jobboard (especially when errors occur), they are still under review https://review.opendev.org/q/project:openstack/taskflow+is:open | 09:07 |
gthiemonge | danfai: what kind of issues do you have? | 09:08 |
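For context, a minimal octavia.conf sketch of the redis jobboard setup gthiemonge describes as the CI-tested reference. Option names are from the [task_flow] group of recent Octavia releases; the hosts and credentials are placeholders, so check your release's configuration reference before copying:

    [task_flow]
    jobboard_enabled = True
    jobboard_backend_driver = redis_taskflow_driver
    jobboard_backend_hosts = 192.0.2.10      # placeholder Redis host(s)
    jobboard_backend_port = 6379
    jobboard_backend_password = <redis-password>
    # taskflow flow/atom state is persisted in a database:
    persistence_connection = mysql+pymysql://octavia:<password>@192.0.2.20/octavia_persistence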
opendevreview | Merged openstack/octavia stable/zed: Fix timeout duration in start_vrrp_service during failovers https://review.opendev.org/c/openstack/octavia/+/897786 | 09:16 |
opendevreview | Merged openstack/octavia stable/zed: Reduce duration of failovers with amphora in ERROR https://review.opendev.org/c/openstack/octavia/+/897787 | 09:16 |
opendevreview | Merged openstack/octavia stable/yoga: Fix timeout duration in start_vrrp_service during failovers https://review.opendev.org/c/openstack/octavia/+/898101 | 09:16 |
opendevreview | Merged openstack/octavia stable/yoga: Reduce duration of failovers with amphora in ERROR https://review.opendev.org/c/openstack/octavia/+/898102 | 09:16 |
opendevreview | Merged openstack/octavia stable/xena: Fix timeout duration in start_vrrp_service during failovers https://review.opendev.org/c/openstack/octavia/+/898104 | 09:16 |
opendevreview | Merged openstack/octavia stable/xena: Reduce duration of failovers with amphora in ERROR https://review.opendev.org/c/openstack/octavia/+/898105 | 09:16 |
opendevreview | Merged openstack/octavia stable/wallaby: Fix timeout duration in start_vrrp_service during failovers https://review.opendev.org/c/openstack/octavia/+/898112 | 09:16 |
opendevreview | Merged openstack/octavia stable/wallaby: Reduce duration of failovers with amphora in ERROR https://review.opendev.org/c/openstack/octavia/+/898113 | 09:16 |
opendevreview | Merged openstack/octavia stable/2023.2: Fix amphorae in ERROR during the failover https://review.opendev.org/c/openstack/octavia/+/897785 | 09:16 |
opendevreview | Merged openstack/octavia stable/2023.1: Fix amphorae in ERROR during the failover https://review.opendev.org/c/openstack/octavia/+/897791 | 09:17 |
opendevreview | Merged openstack/octavia stable/zed: Fix amphorae in ERROR during the failover https://review.opendev.org/c/openstack/octavia/+/897788 | 09:17 |
opendevreview | Merged openstack/octavia stable/yoga: Fix amphorae in ERROR during the failover https://review.opendev.org/c/openstack/octavia/+/898103 | 09:17 |
opendevreview | Merged openstack/octavia stable/xena: Fix amphorae in ERROR during the failover https://review.opendev.org/c/openstack/octavia/+/898106 | 09:22 |
opendevreview | Merged openstack/octavia stable/wallaby: Fix amphorae in ERROR during the failover https://review.opendev.org/c/openstack/octavia/+/898114 | 09:22 |
danfai | gthiemonge: for the zookeeper one, we had issues with the connections, which was probably a configuration issue on our side, and afterwards we had a job that was run twice, but that might also be an error on our side, since we migrated LBs from an old system to octavia and had custom steps for that. Sometimes we had exceptions triggering in the worker that left some weird state, | 09:26 |
danfai | which I'm still trying to understand. For the in-memory backend I think we would have been fine if we had increased the time a worker has to stop properly. | 09:26 |
danfai | gthiemonge: all in all these are only small (likely home-made) issues, but if we knew which mode is the 'reference' one, we would probably try to explore it at least a bit. | 09:28 |
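As background for the "job run twice" and "weird state after exceptions" symptoms: with jobboard, a job whose claim is lost (for example after a ZooKeeper connection hiccup) can be re-claimed and re-executed by another worker, so tasks are expected to be idempotent and to clean up in revert(). A minimal, hypothetical taskflow task sketch (not Octavia's actual code):

    from taskflow import task

    class AttachMemberPort(task.Task):   # hypothetical task name
        def execute(self, port_id):
            # May run more than once for the same job if a claim is lost
            # and re-acquired, so make this safe to repeat.
            ...

        def revert(self, port_id, *args, **kwargs):
            # Called when a later task in the flow raises, so the flow
            # does not leave half-configured state behind.
            ...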
opendevreview | Gregory Thiemonge proposed openstack/octavia master: Add zookeeper backend for jobboard in devstack https://review.opendev.org/c/openstack/octavia/+/862671 | 09:43 |
gthiemonge | maybe we need to get feedback from the other operators, I know that OVH uses jobboard, but I don't know which backend | 09:44 |
gthiemonge | ^^ this commit enables zookeeper in our CI | 09:44 |
gthiemonge | but yeah, we don't test corner cases like outages, upgrades, etc | 09:45 |
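For comparison, the zookeeper variant danfai runs would differ roughly as below (again a hedged sketch with placeholder hosts, same [task_flow] group; the devstack patch above wires up an equivalent setup in CI):

    [task_flow]
    jobboard_enabled = True
    jobboard_backend_driver = zookeeper_taskflow_driver
    jobboard_backend_hosts = 192.0.2.30      # placeholder ZooKeeper host(s)
    jobboard_backend_port = 2181
    persistence_connection = mysql+pymysql://octavia:<password>@192.0.2.20/octavia_persistence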
danfai | thank you, I'll also try to understand our problems more in depth | 10:05 |
pyjou | Hello everyone. I'm still waiting for a review on the resize topic. I have the RFE https://review.opendev.org/c/openstack/octavia/+/885490 and the implementation https://review.opendev.org/c/openstack/octavia/+/890215. Could you review them? | 13:01 |
gthiemonge | pyjou: Hi, I will review it | 13:27 |