16:00:14 <gthiemonge> #startmeeting Octavia
16:00:14 <opendevmeet> Meeting started Wed Nov 6 16:00:14 2024 UTC and is due to finish in 60 minutes. The chair is gthiemonge. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:14 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:14 <opendevmeet> The meeting name has been set to 'octavia'
16:00:20 <gthiemonge> o/
16:00:26 <tweining> o/
16:00:26 <johnsom> o/
16:02:17 <gthiemonge> #topic Announcements
16:02:32 <gthiemonge> well... no announcements from me this week
16:02:37 <gthiemonge> did we miss anything?
16:02:46 <johnsom> Nope
16:03:19 <johnsom> Tempest broke our docs job in octavia-tempest-plugin
16:03:32 <johnsom> I opened a bug and posted a patch
16:03:37 <gthiemonge> #topic CI Status
16:03:47 <gthiemonge> yeah I saw it in the RBAC test patch
16:04:05 <gthiemonge> I thought it was the patch
16:04:29 <gthiemonge> thanks for reporting and fixing it johnsom
16:05:19 <tweining> https://review.opendev.org/c/openstack/tempest/+/934194 for reference
16:06:02 <gthiemonge> interesting
16:06:09 <gthiemonge> good catch
16:06:28 <johnsom> Tough to track down
16:06:43 <gthiemonge> yeah :)
16:07:18 <gthiemonge> #topic Brief progress reports / bugs needing review
16:08:22 <tweining> let me know when you are done because I have three minor patches this time
16:09:13 <gthiemonge> I'm fixing an issue with SINGLE LBs with UDP listeners: the GARP is not sent by the amphora (so it can affect the LB after a failover for instance). The patch got a -1, I'll update it tomorrow
16:09:29 <johnsom> I have been testing and re-organizing the RBAC patch chain. Your comment was actually my next step, switching the jobs around.
16:09:30 <gthiemonge> tweining: you can send the links now
16:09:44 <tweining> https://review.opendev.org/c/openstack/python-octaviaclient/+/934133 1 is a missing FAILOVER_STOPPED status in python-octaviaclient
16:10:10 <gthiemonge> johnsom: I reviewed and tested the Octavia RBAC patch, it LGTM, very clean patch
16:10:21 <tweining> https://review.opendev.org/c/openstack/octavia/+/934224 2 is a bugfix for a bug reported to kolla-ansible but not octavia before
16:10:30 <johnsom> Thanks, just need to get all of the gates straight now
16:11:01 <gthiemonge> tweining: +1 thanks for the patch, I'll leave a comment in the kolla-ansible bug
16:11:03 <tweining> https://review.opendev.org/c/openstack/octavia/+/934235 3 is a documentation update with a comment. I think we could optimize that health check function a bit
16:11:34 <tweining> gthiemonge: thanks. tell them to open a bug for octavia next time. :)
16:12:31 <gthiemonge> yeah they could have just added it to the list of affected projects :/
16:14:16 <johnsom> tweining, on that comment. I think she did it this way to account for multiple health manager instances.
16:14:43 <johnsom> I.e. you don't want two processes with the same list in the thread pool.
16:15:12 <johnsom> There is a way to do it in bulk with the DB, but I think she went down the easier path
16:16:29 <tweining> mmh, I think I see what you mean.
16:17:48 <johnsom> There was also something about randomizing the list so it didn't pull the same list every time, but maybe that was another issue....
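For readers following the thread-pool discussion above, here is a minimal sketch of the one-at-a-time claim pattern johnsom describes: each health manager process claims a single stale amphora at a time so two processes never end up working the same one. This is not Octavia's actual health manager code; the amphora_health table, its busy column, and the function name are hypothetical, used only to illustrate the idea.

    # Minimal sketch only -- not Octavia's health manager code.
    # Table and column names here are hypothetical.
    import sqlalchemy as sa

    metadata = sa.MetaData()
    amphora_health = sa.Table(
        "amphora_health", metadata,
        sa.Column("amphora_id", sa.String(36), primary_key=True),
        sa.Column("last_update", sa.DateTime, nullable=False),
        sa.Column("busy", sa.Boolean, nullable=False),
    )

    def claim_one_stale_amphora(engine, cutoff):
        """Atomically claim a single stale amphora.

        Selecting the candidate row and flipping its "busy" flag inside
        one transaction means two health manager processes (even on
        different hosts) cannot claim the same amphora: the lock is
        taken at the database level.
        """
        with engine.begin() as conn:
            row = conn.execute(
                sa.select(amphora_health.c.amphora_id)
                .where(amphora_health.c.busy == sa.false(),
                       amphora_health.c.last_update < cutoff)
                .limit(1)
                # SKIP LOCKED needs a backend that supports it
                # (PostgreSQL, MySQL 8+).
                .with_for_update(skip_locked=True)
            ).first()
            if row is None:
                # Nothing stale, or another process claimed it first.
                return None
            conn.execute(
                sa.update(amphora_health)
                .where(amphora_health.c.amphora_id == row.amphora_id)
                .values(busy=True)
            )
            return row.amphora_id

Each claimed ID could then be handed to the worker pool, e.g. executor.submit(failover_amphora, amp_id), which is the one-at-a-time loop discussed here.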
16:19:18 <tweining> I think it would require some inter-process synchronization mechanism otherwise, and that can complicate things
16:19:37 <johnsom> Yeah, we prefer to use the database locking for things like this
16:20:00 <johnsom> When she pulls one at a time, she locks the object at the same time
16:20:07 <johnsom> It's atomic at the DB level
16:20:57 <gthiemonge> I need to investigate but the submit(failover_amphora) may have a different behavior now that we are using amphorav2 (or maybe it's different only when jobboard is enabled)
16:21:25 <gthiemonge> (that would mean that we probably no longer need to have many threads that look for stale amphorae)
16:21:37 <gthiemonge> anyway the comments are helpful there
16:23:05 <gthiemonge> #topic Open Discussion
16:23:29 <tweining> hmm, maybe multiprocessing.Queue could work https://docs.python.org/3/library/multiprocessing.html#exchanging-objects-between-processes
16:24:10 <johnsom> Health manager processes are usually on different hosts
16:24:41 <tweining> okay :/
16:25:29 <tweining> I don't have other things to discuss, I think. I stopped working on rate limiting to work on a downstream task
16:27:16 <gthiemonge> ack
16:28:28 <gthiemonge> I guess we can close this meeting!
16:28:33 <gthiemonge> thank you guys!
16:28:37 <gthiemonge> #endmeeting
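On the GARP issue gthiemonge mentioned at 16:09 (SINGLE-topology load balancers with UDP listeners not sending a gratuitous ARP, which can break traffic after a failover), the sketch below shows what a gratuitous ARP amounts to on the wire. It is illustrative only: the scapy usage, VIP address, and interface name are made up for the example and this is not how the amphora agent actually implements it.

    # Illustrative sketch only -- not the amphora agent's implementation.
    # Requires root privileges to send raw frames.
    from scapy.all import ARP, Ether, sendp

    VIP = "192.0.2.10"   # hypothetical load balancer VIP
    IFACE = "eth1"       # hypothetical VIP interface inside the amphora

    # A gratuitous ARP is an unsolicited ARP reply (op=2) in which the
    # sender and target protocol addresses are both the VIP, broadcast
    # so that neighbours refresh their ARP caches.
    pkt = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=2, psrc=VIP, pdst=VIP)
    sendp(pkt, iface=IFACE, verbose=False)

Broadcasting that reply is what lets upstream routers and switches learn the new active amphora's MAC for the VIP, which is why a missing GARP can leave traffic black-holed after a failover.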