16:00:09 <johnsom> #startmeeting Octavia
16:00:09 <opendevmeet> Meeting started Wed Aug 13 16:00:09 2025 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:09 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:09 <opendevmeet> The meeting name has been set to 'octavia'
16:00:21 <gthiemonge> o/
16:00:27 <johnsom> Hello everyone!
16:01:02 <johnsom> #topic Announcements
16:01:09 <johnsom> PTL nominations are open
16:01:13 <johnsom> #link https://governance.openstack.org/election/
16:01:22 <johnsom> They close 8/20
16:02:29 <johnsom> I do intend to nominate myself (though my brain has not yet absorbed "Gazpacho")
16:02:48 <gthiemonge> congrats ;-)
16:03:26 <johnsom> Another item of note is next week is library feature freeze.
16:03:35 <johnsom> #link https://releases.openstack.org/flamingo/schedule.html
16:03:48 <johnsom> I have a related topic for a bit later in the meeting.
16:04:24 <johnsom> Any other announcements this week?
16:04:52 <gthiemonge> nop
16:05:26 <johnsom> #topic Brief progress reports / bugs needing review
16:05:39 <johnsom> I have been working on the UDP health monitor bug:
16:05:41 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/2114264
16:06:30 <johnsom> We definitely have issues here and I am trying to peel back the layers of issues. I can reproduce that the health monitor is useless in the scenario provided, which is not good
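For context, the bug above concerns UDP health monitors; a monitor of that kind is created with the standard client along these lines (the pool name is a placeholder, and the delay/timeout values are only illustrative):

    openstack loadbalancer healthmonitor create \
        --delay 5 --timeout 3 --max-retries 3 \
        --type UDP-CONNECT <pool>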
16:07:21 <gthiemonge> someone reported an issue with the configuration of haproxy 2.8 when using http_version/domain_name in HMs, we have to update the haproxy template to remove some deprecated config options:
16:07:29 <gthiemonge> https://review.opendev.org/c/openstack/octavia/+/956751
16:08:23 <johnsom> Cool! Thanks!
16:08:23 <gthiemonge> http_version and domain_name are not covered by our scenario tests, so I wanted to add them there, but I found out that the HM scenario tests didn't use a listener, so the haproxy config file was never rendered
16:08:24 <whershberger> I'd really appreciate a review of this patch if anyone has cycles: https://review.opendev.org/c/openstack/octavia/+/956930
16:08:32 <gthiemonge> so I have 2 additional patches for octavia-tempest
16:08:42 <gthiemonge> https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/956787
16:08:49 <gthiemonge> https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/956752
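For context, the deprecation gthiemonge mentions is haproxy's old inline HTTP health-check form. A sketch of the kind of template change involved, assuming the template still emits the pre-2.2 syntax (the actual change is in the review linked above):

    # deprecated: version and Host header embedded in "option httpchk"
    option httpchk GET /healthz HTTP/1.0\r\nHost:\ example.com

    # haproxy >= 2.2 replacement
    option httpchk
    http-check send meth GET uri /healthz ver HTTP/1.0 hdr Host example.com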
16:09:00 <johnsom> Yeah, that was a huge oops IMO
16:09:06 <gthiemonge> whershberger: thanks for the patch
16:09:39 <gthiemonge> I'll take a look but I cannot currently test it in my env
16:09:52 <johnsom> whershberger Hello and welcome. Yes, that is a problem for sure. I appreciate the patch
16:10:04 <whershberger> I reproduced with devstack and can provide more detailed notes if that would be helpful
16:10:37 <whershberger> Don't have access to my reproducer env but you should be able to set the connection limit extremely low to reproduce with much less RAM (see the workaround in the related bug)
16:11:01 <johnsom> I am wondering if this should just be a tunable setting, so that we are not "guessing" at this. What are your thoughts?
16:12:09 <gthiemonge> yeah using "mem" doesn't look like a great idea, the free memory depends on how many old workers are still running after reloading haproxy
16:12:21 <whershberger> I'm certainly open to implementing it as a tunable; it eliminates the "out of an abundance of caution" factor
16:13:55 <whershberger> Wondering if the patch I've proposed would make a sane default or if we should do a constant value for the cache size
16:14:03 <gthiemonge> (I'm fine if we approve this patch as a short-term fix, then perhaps we can create a launchpad for a long term solution)
16:14:51 <whershberger> :+1: from me, although I may not be able to work on a long-term fix (would need to check with my manager)
16:14:54 <johnsom> Yeah, that might be the best approach: fix the immediate issue that using 1/2 of RAM is bad, then do an RFE to make it tunable
16:15:17 <johnsom> Understood
16:16:05 <whershberger> Thanks, really appreciate your input
16:16:28 <johnsom> Ok, let's get some review cycles on the patch as is and I can open a bug to track changing this over to a tunable
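A minimal Python sketch of the tunable-first approach agreed on above; the option name, default fraction, and helper are hypothetical, not Octavia's actual configuration:

    import os

    # Hypothetical option; the RFE would define the real name and default.
    conf_cache_size = None  # e.g. parsed from octavia.conf

    def cache_size_bytes(fallback_fraction=0.5):
        """Explicit operator-set tunable wins; otherwise fall back to a
        guess based on currently available memory (the "guessing" behavior
        being replaced, per the discussion above)."""
        if conf_cache_size is not None:
            return conf_cache_size
        page = os.sysconf("SC_PAGE_SIZE")
        avail = os.sysconf("SC_AVPHYS_PAGES")
        return int(page * avail * fallback_fraction)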
16:17:50 <johnsom> Ok, any other progress updates? Otherwise I will move on to two other items I want to discuss today.
16:18:11 <gthiemonge> that's all for me
16:18:34 <johnsom> #topic Patches requiring discussion/consensus
16:18:44 <johnsom> #link https://review.opendev.org/c/openstack/octavia/+/952397
16:19:20 <johnsom> This patch came in from our friends on Ironic.
16:19:46 <johnsom> It changes the default image build to be a hybrid UEFI image.
16:20:03 <johnsom> The downside here is it adds a 500MB partition to the image.
16:20:14 <johnsom> I see two issues:
16:20:53 <johnsom> 1. the compute flavor has to change as the image will no longer fit in the compute flavor we have been using by default
16:21:13 <johnsom> 2.  It increases the storage size requirements per Amphora
16:21:32 <johnsom> I see two options:
16:22:16 <johnsom> 1. We increase our default flavor size to be 3GB (like we do for RHEL-based images today) and put in release notes that upgrading will require a new compute flavor.
16:22:28 <johnsom> 2. We make this optional instead of the default.
16:22:39 <johnsom> What do you all think?
16:23:22 <johnsom> I know at least one operator already runs with much larger flavors, so it would be a no-op for them. But I can imagine others may not.
16:25:13 <gthiemonge> I'm leaning towards option 1, I'm ok if the image is bigger, as long as there's an easy update/upgrade path
16:26:02 <johnsom> It's not a hard upgrade path, but it will take more effort than a normal Octavia upgrade would.
16:26:47 <gthiemonge> right
16:27:38 <johnsom> I also struggle a little with what the point is of having an ESP partition in the image. This is more of an appliance use case where I don't think it would ever be used in practice.
16:29:31 <johnsom> Ah, right, there are boot loader chains in there. Never mind, it has to be there to some degree for UEFI
16:31:07 <johnsom> Alright, I will go down the path of helping with that patch to move towards 3GB as the new default and all of the release notes requirements, etc. I might also put in a build option to disable UEFI just for those that want the smallest image.
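For operators, the upgrade step under option 1 would look roughly like this; the image-size flag of diskimage-create.sh and the flavor values are illustrative assumptions, not the final release-note text:

    # build the amphora image at the larger size (UEFI hybrid)
    ./diskimage-create.sh -s 3

    # register a compute flavor with a 3GB disk for new amphorae
    openstack flavor create --id auto --ram 1024 --disk 3 --vcpus 1 amphora-3gb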
16:31:26 <johnsom> Ok, one more patch to talk about:
16:31:33 <johnsom> #link https://review.opendev.org/c/openstack/octavia-lib/+/936863
16:32:10 <johnsom> So this one was not on my radar at all for some reason, but someone brought it up in the channel this week.
16:32:42 <gthiemonge> it's an interesting one
16:33:53 <johnsom> Dang, netsplit and it lost my last message
16:34:08 <gthiemonge> we added the limited_graph option in octavia as a temporary fix for a similar issue, but IIRC our main goal was to only fetch non-recursive objects (or flat objects)
16:34:34 <johnsom> So I have questions....  I thought the driver agent only returned one object at a time. So is this just a backend performance enhancement? If so, why should the library need to change?
16:34:41 <gthiemonge> but adding the same fix in the driver lib for other providers doesn't look like a temporary fix
16:35:38 <gthiemonge> one object at a time but with its children, right?
16:37:51 <johnsom> No, I thought we set it up to only be flat objects for drivers.
16:37:52 <johnsom> #link https://github.com/openstack/octavia-lib/blob/master/octavia_lib/api/drivers/data_models.py
16:38:24 <johnsom> Maybe I am remembering wrong
16:39:38 <gthiemonge> hmm maybe I need to take a closer look at it
16:39:42 <johnsom> I am remembering it wrong... sigh
16:40:05 <johnsom> #link https://github.com/openstack/octavia-lib/commit/d700c00a90fd62b4f6cb9eb30ebe5f619dd6bfda
16:41:05 <johnsom> Ok, we should take a look at this during the week, as the library feature freeze is next week.
16:41:37 <gthiemonge> ack
16:42:01 <johnsom> I really wish these things were all just flat objects. That way it is much cleaner and if someone needs more, they can request it explicitly
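To illustrate the "flat objects" idea johnsom describes (hypothetical classes, not octavia-lib's actual data models): children are carried as IDs rather than nested objects, so a driver that needs more requests it explicitly:

    from dataclasses import dataclass, field

    @dataclass
    class LoadBalancer:
        loadbalancer_id: str
        # children referenced by ID only; no recursive object graph
        listener_ids: list[str] = field(default_factory=list)

    @dataclass
    class Listener:
        listener_id: str
        loadbalancer_id: str  # back-reference by ID, not by object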
16:42:57 <johnsom> Ok, that is all I had on patches I wanted to highlight.
16:42:59 <johnsom> #topic Open Discussion
16:43:05 <johnsom> Any other topics this week?
16:43:21 <gthiemonge> nothing here
16:44:11 <johnsom> Ok, thank you everyone!
16:44:14 <johnsom> #endmeeting