16:00:14 #startmeeting Octavia
16:00:14 Meeting started Wed Nov 6 16:00:14 2024 UTC and is due to finish in 60 minutes. The chair is gthiemonge. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:14 The meeting name has been set to 'octavia'
16:00:20 o/
16:00:26 o/
16:00:26 o/
16:02:17 #topic Announcements
16:02:32 well... no announcements from me this week
16:02:37 did we miss anything?
16:02:46 Nope
16:03:19 Tempest broke our docs job in octavia-tempest-plugin
16:03:32 I opened a bug and posted a patch
16:03:37 #topic CI Status
16:03:47 yeah I saw it in the RBAC test patch
16:04:05 I thought it was the patch
16:04:29 thanks for reporting and fixing it johnsom
16:05:19 https://review.opendev.org/c/openstack/tempest/+/934194 for reference
16:06:02 interesting
16:06:09 good catch
16:06:28 Tough to track down
16:06:43 yeah :)
16:07:18 #topic Brief progress reports / bugs needing review
16:08:22 let me know when you are done because I have three minor patches this time
16:09:13 I'm fixing an issue with SINGLE LBs with a UDP listener: the GARP is not sent by the amphora (so it can affect the LB after a failover, for instance). The patch got a -1, I'll update it tomorrow
16:09:29 I have been testing and re-organizing the RBAC patch chain. Your comment was actually my next step, switching the jobs around.
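[Editor's note] For context on the GARP issue mentioned above: a gratuitous ARP is how the amphora announces that the VIP now lives at its MAC address, so upstream switches update their tables after a failover. The sketch below builds such a frame by hand to show its shape (broadcast destination, ARP opcode, sender IP equal to target IP). This is an illustration of the standard ARP frame format only, not Octavia's actual code; the MAC and VIP values are made up.

```python
# Illustrative sketch of a gratuitous ARP (GARP) frame, the announcement
# an amphora sends so the network learns the VIP's new MAC after failover.
# NOT Octavia's code; addresses below are placeholders.
import socket
import struct

def build_garp(src_mac: bytes, vip: str) -> bytes:
    """Return a raw Ethernet frame carrying a gratuitous ARP request."""
    # Ethernet header: broadcast destination, our MAC, ARP EtherType 0x0806
    eth = struct.pack("!6s6sH", b"\xff" * 6, src_mac, 0x0806)
    ip = socket.inet_aton(vip)
    arp = struct.pack(
        "!HHBBH6s4s6s4s",
        1,                # htype: Ethernet
        0x0800,           # ptype: IPv4
        6, 4,             # hlen, plen
        1,                # oper: request (gratuitous: sender IP == target IP)
        src_mac, ip,      # sender hardware / protocol address
        b"\x00" * 6, ip,  # target hardware unknown; target IP == sender IP
    )
    return eth + arp

frame = build_garp(b"\x02\x00\x00\x00\x00\x01", "192.0.2.10")
# Actually sending it would require a raw AF_PACKET socket (and root);
# here we only construct the 42-byte frame.
```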
16:09:30 tweining: you can send the links now
16:09:44 https://review.opendev.org/c/openstack/python-octaviaclient/+/934133 1 is a missing FAILOVER_STOPPED status in python-octaviaclient
16:10:10 johnsom: I reviewed and tested the Octavia RBAC patch, it LGTM, very clean patch
16:10:21 https://review.opendev.org/c/openstack/octavia/+/934224 2 is a bugfix for a bug reported to kolla-ansible but not to Octavia before
16:10:30 Thanks, just need to get all of the gates straight now
16:11:01 tweining: +1 thanks for the patch, I'll leave a comment in the kolla-ansible bug
16:11:03 https://review.opendev.org/c/openstack/octavia/+/934235 3 is a documentation update with a comment. I think we could optimize that health check function a bit
16:11:34 gthiemonge: thanks. tell them to open a bug for octavia next time. :)
16:12:31 yeah they could have just added it to the list of affected projects :/
16:14:16 tweining, on that comment: I think she did it this way to account for multiple health manager instances.
16:14:43 I.e. you don't want two processes with the same list in the thread pool.
16:15:12 There is a way to do it in bulk with the DB, but I think she went down the easier path
16:16:29 mmh, I think I see what you mean.
16:17:48 There was also something about randomizing the list so it didn't pull the same list every time, but maybe that was another issue....
16:19:18 I it would require some inter-process synchronization mechanism otherwise, and that can complicate things
16:19:26 *I think
16:19:37 Yeah, we prefer to use the database locking for things like this
16:20:00 When she pulls one at a time, she locks the object at the same time
16:20:07 It's atomic at the DB level
16:20:57 I need to investigate, but the submit(failover_amphora) may have a different behavior now that we are using amphorav2 (or maybe it's different only when jobboard is enabled)
16:21:25 (that would mean that we probably no longer need to have many threads that look for stale amphorae)
16:21:37 anyway the comments are helpful there
16:23:05 #topic Open Discussion
16:23:29 hmm, maybe multiprocessing.Queue could work https://docs.python.org/3/library/multiprocessing.html#exchanging-objects-between-processes
16:24:10 Health manager processes are usually on different hosts
16:24:41 okay :/
16:25:29 I don't have other things to discuss, I think. I stopped working on rate limiting to work on a downstream task
16:27:16 ack
16:28:28 I guess we can close this meeting!
16:28:33 thank you guys!
16:28:37 #endmeeting
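[Editor's note] The "pull one at a time and lock it at the DB level" pattern discussed above can be sketched as an atomic claim: an UPDATE that only matches unclaimed rows, so when several health manager processes (possibly on different hosts) race for the same stale amphora, the database guarantees exactly one wins, with no inter-process synchronization needed. This is a minimal illustration using SQLite and a hypothetical `amphora_health`-style table with a `busy` flag, not Octavia's actual schema or code.

```python
# Minimal sketch (NOT Octavia's code): claiming a stale amphora via an
# atomic UPDATE so that two health manager processes can never both
# submit a failover for the same amphora.
import sqlite3

def claim_stale_amphora(conn, amphora_id):
    # The UPDATE matches only rows not already claimed. The database
    # applies it atomically, so exactly one racing process gets rowcount 1.
    cur = conn.execute(
        "UPDATE amphora_health SET busy = 1 "
        "WHERE amphora_id = ? AND busy = 0",
        (amphora_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # True only for the winning process

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE amphora_health (amphora_id TEXT PRIMARY KEY, busy INTEGER)"
)
conn.execute("INSERT INTO amphora_health VALUES ('amp-1', 0)")
conn.commit()

first = claim_stale_amphora(conn, "amp-1")   # claims the row
second = claim_stale_amphora(conn, "amp-1")  # sees busy = 1, loses
```

The same claim-then-work shape is why no extra queue between hosts is needed: the row itself is the lock.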