16:00:56 <gthiemonge> #startmeeting Octavia
16:00:56 <opendevmeet> Meeting started Wed Mar 15 16:00:56 2023 UTC and is due to finish in 60 minutes.  The chair is gthiemonge. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:56 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:56 <opendevmeet> The meeting name has been set to 'octavia'
16:00:59 <gthiemonge> Hi Folks
16:01:08 <johnsom> o/
16:01:09 <tweining> o/
16:01:33 <jdacunha> Hello
16:01:37 <QG> o/
16:02:30 <gthiemonge> #topic Announcements
16:02:34 <gthiemonge> * Antelope Release schedule: RC2
16:02:44 <gthiemonge> as discussed last week, we proposed a RC2 for Octavia
16:02:59 <gthiemonge> it includes a workaround for an openstacksdk bug that would have broken octavia-dashboard
16:03:15 <johnsom> +1
16:03:18 <gthiemonge> #link https://review.opendev.org/c/openstack/octavia/+/876919
16:03:32 <gthiemonge> now I think we're done for Antelope
16:03:43 <tweining> nice
16:04:21 <gthiemonge> * PTG
16:04:44 <gthiemonge> the Octavia PTG will be on March 28th (14:00-18:00 UTC)
16:05:01 <gthiemonge> #link https://etherpad.opendev.org/p/bobcat-ptg-octavia
16:05:11 <gthiemonge> don't forget to register
16:05:14 <tweining> sounds good
16:05:20 <gthiemonge> and add your topics to the etherpad!
16:06:05 <johnsom> I think a hot topic will be the let's encrypt RFE
16:06:50 <jdacunha> johnsom, yes :)
16:07:49 <gthiemonge> I need to read the spec before the PTG
16:08:05 <johnsom> Yeah, there is a lot to unpack there
16:10:01 <gthiemonge> any other announcements?
16:12:09 <gthiemonge> #topic CI Status
16:12:17 <gthiemonge> FYI johnsom fixed an issue when building ubuntu images
16:12:22 <gthiemonge> #link https://review.opendev.org/c/openstack/octavia/+/877141
16:12:42 <gthiemonge> it merged earlier today
16:12:43 <johnsom> Yeah, saves almost 1GB of space inside the amphora images
16:12:49 <gthiemonge> thanks again johnsom
16:13:15 <QG> a question about ubuntu images: has anyone already tried building an amphora image with jammy?
16:13:22 <johnsom> The "build-essentials" uninstall was not removing everything
16:13:30 <johnsom> Yes, we test on Jammy in the gates now
16:14:03 <QG> ohhh ok, last time I tried it I had issues with openssl v3
16:14:06 <johnsom> #link https://review.opendev.org/c/openstack/octavia/+/862131
16:14:31 <QG> johnsom: Thanks !
16:14:33 <johnsom> But as we recently found, they are using more disk space in the cloud image as of a week or two ago
16:15:03 <gthiemonge> hmm I'm not sure, this job ran on focal: https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0b4/857676/11/check/octavia-v2-dsvm-scenario/0b42def/job-output.txt
16:15:28 <gthiemonge> https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/861369
16:16:41 <johnsom> Umm, that is a big problem then. Antelope was supposed to be tested on Jammy
16:18:18 <johnsom> #link https://zuul.opendev.org/t/openstack/build/69cc6fb302d74064be499eead9af5f1e/log/controller/logs/dib-build/amphora-x64-haproxy.qcow2_log.txt#335
16:18:34 <gthiemonge> I think we are running jammy amphora images on focal
16:18:35 <johnsom> This job ran Jammy, the scenario for the disk space reduction.
16:19:47 <johnsom> That needs to be fixed ASAP, as all of antelope should have been on Jammy
16:20:12 <gthiemonge> could you review:
16:20:16 <gthiemonge> #link https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/861369
16:21:03 <johnsom> Done
16:21:11 <gthiemonge> thanks
16:21:29 <johnsom> Well, I guess the answer still stands, the amp image is running jammy in the upstream gate jobs
16:22:14 <johnsom> I also have manually run Octavia/devstack on Jammy without issue
16:22:30 <tweining> me too AFAIR
16:23:10 <johnsom> bobcat/2023.2 will also be on Jammy
16:23:22 <johnsom> #link https://governance.openstack.org/tc/reference/runtimes/2023.2.html
16:24:12 <tweining> nice. Rocky will be supported too
16:24:29 <gthiemonge> AFAIK rocky linux doesn't work ATM
16:24:33 <gthiemonge> (in dib)
16:24:49 <johnsom> "best effort"
16:25:27 <gthiemonge> I have a WIP commit that adds rockylinux support: https://review.opendev.org/c/openstack/octavia/+/873489
16:25:35 <tweining> well, the intention counts already
16:25:39 <gthiemonge> but I got some firewalling issues with the o-hm0 iface
16:27:19 <gthiemonge> #topic Brief progress reports / bugs needing review
16:27:34 <gthiemonge> I've worked on an interesting issue with sqlalchemy:
16:27:39 <gthiemonge> #link https://storyboard.openstack.org/#!/story/2010646
16:27:56 <gthiemonge> the lock of the load balancers in the member batch update API call didn't work as expected
16:28:10 <gthiemonge> #link https://review.opendev.org/c/openstack/octavia/+/877414
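The row-locking pattern behind that fix can be sketched with SQLAlchemy's `with_for_update()`. This is an illustrative sketch only: the model and column names below are hypothetical, not Octavia's actual schema, and note that SQLite silently ignores `FOR UPDATE`, so a real lock needs MySQL or PostgreSQL.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class LoadBalancer(Base):
    __tablename__ = "load_balancer"
    id = Column(Integer, primary_key=True)
    provisioning_status = Column(String(16))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(LoadBalancer(id=1, provisioning_status="ACTIVE"))
    session.commit()

    # SELECT ... FOR UPDATE; populate_existing() forces a re-fetch so an
    # object already cached in the session cannot mask the locked row's
    # current state.
    lb = (
        session.query(LoadBalancer)
        .filter_by(id=1)
        .populate_existing()
        .with_for_update()
        .one()
    )
    lb.provisioning_status = "PENDING_UPDATE"
    session.commit()

    print(session.get(LoadBalancer, 1).provisioning_status)  # PENDING_UPDATE
```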
16:28:13 <johnsom> I fixed the issue with disk space inside the test amphora images.
16:28:18 <gthiemonge> johnsom: thanks for helping in this issue
16:28:26 <tweining> you're our sqlalchemy expert now :P
16:28:41 <johnsom> I posted an api-ref patch to call out the 501 status code that provider drivers may return for features/options they don't support.
16:30:05 <johnsom> I have also been working on fixing the octavia tempest plugin now that scoped tokens are not going to happen. That impacted admin credential tests because the admin credential would have required a scoped token. I think we have all of that straight now.
16:30:24 <gthiemonge> +1 !
16:30:31 <johnsom> There is another patch that will be needed to completely remove the scoped token logic, but that is a bobcat topic
16:31:09 <johnsom> I do have a question about gate jobs related to this, but I can wait for open discussion on that
16:31:25 <gthiemonge> #topic Open Discussion
16:31:32 <johnsom> lol
16:31:41 <johnsom> #link https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/876904
16:31:58 <johnsom> Ok, so now that scoped tokens are not happening in OpenStack, we have three scenarios:
16:32:18 <johnsom> 1. Advanced RBAC - this has been the default for Octavia since Pike
16:33:05 <johnsom> 2. Advanced RBAC with "enforce_new_defaults" - This is the above with the addition of requiring the new "member" and "reader" roles.
16:34:27 <johnsom> 3. Keystone default roles (aka "enforce_new_defaults" without advanced RBAC) - This means no "load balancer" specific roles required, but users must have either "reader" or "member". We provide this via a policy override file.
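An oslo.policy override file for scenario #3 might look like the fragment below. The rule strings are hypothetical examples for illustration; they follow Octavia's `os_load-balancer_api:...` policy naming convention but are not taken from the actual override file.

```yaml
# Hypothetical oslo.policy override (YAML) -- maps load-balancer API
# policies to the Keystone default "member"/"reader" roles instead of
# requiring Octavia-specific advanced RBAC roles.
"os_load-balancer_api:loadbalancer:get_all": "role:reader and project_id:%(project_id)s"
"os_load-balancer_api:loadbalancer:get_one": "role:reader and project_id:%(project_id)s"
"os_load-balancer_api:loadbalancer:post": "role:member and project_id:%(project_id)s"
"os_load-balancer_api:loadbalancer:delete": "role:member and project_id:%(project_id)s"
```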
16:35:12 <johnsom> My question is, do we want gate jobs for all of these? Currently we test #1, my patch adds #3 as we want that for downstream use.
16:36:05 <gthiemonge> johnsom: do you add #3 in octavia-tempest-plugin and in octavia?
16:36:08 <johnsom> I think #2 will become the new default for Bobcat, so we could just transition #1 to #2
16:36:20 <johnsom> Yes, here:
16:36:20 <gthiemonge> I think the scope-token job was only in tempest-plugin
16:36:22 <johnsom> #link https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/876904
16:36:51 <johnsom> It will need to be in octavia proper jobs, not just tempest plugin jobs
16:37:02 <gthiemonge> ok
16:37:09 <johnsom> IMO
16:39:09 <gthiemonge> I think #1 and #3 are fine ATM (in both octavia and o-t-p)
16:39:46 <johnsom> Ack, then maybe transition #1 to #2 as all the services enable "enforce_new_defaults" in bobcat. I am good with that.
16:40:26 <johnsom> I hate adding more jobs, but they are no-op jobs, so run quickly
16:40:47 <johnsom> quickly-ish
16:40:52 <gthiemonge> ahem... 1h19m
16:41:10 <johnsom> yeah, lol
16:41:18 <tweining> it's all relative
16:41:58 <gthiemonge> I think we're building an amphora image in the noop jobs
16:42:03 <johnsom> Nope
16:42:05 <gthiemonge> no?
16:42:06 <gthiemonge> ok
16:42:48 <tweining> so what is no-op doing for more than an hour then?
16:43:20 <tweining> maybe I can try to investigate that a bit
16:43:55 <gthiemonge> lol
16:44:01 <johnsom> Ran: 578 tests in 3306.5628 sec.
16:44:09 <johnsom> The rest is all devstack/zuul
16:44:33 <johnsom> It is a bit slower than you would expect really
16:44:43 <gthiemonge> maybe we can increase the number of threads in tempest
16:45:00 <gthiemonge> --concurrency=4
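A sketch of how that could be wired into a job definition, assuming the job inherits from `devstack-tempest`, which exposes a `tempest_concurrency` variable (the job name below is a placeholder, not a real Octavia job):

```yaml
# Hypothetical Zuul job override -- devstack-tempest based jobs accept a
# tempest_concurrency variable that becomes "tempest run --concurrency N".
- job:
    name: octavia-v2-dsvm-noop-example
    parent: devstack-tempest
    vars:
      tempest_concurrency: 4
```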
16:45:10 <johnsom> We should look at RAM usage. It could be swapping, which slows everything down. There is a devstack flag to lock mysql ram usage down that may help
16:45:14 <tweining> I guess a lot is resource setup/teardown?!
16:45:59 <johnsom> There shouldn't be much of that really, but yeah, it would be good to get fresh eyes on this.
16:46:09 <johnsom> 2023-03-15 00:55:34.256022 | controller | {1} octavia_tempest_plugin.tests.api.v2.test_member.MemberAPITest2.test_UDP_RR_member_batch_update [67.189751s] ... ok
16:46:15 <johnsom> for no-op is.... odd
16:47:36 <johnsom> Though, no-op does run on the slower nodes too, so that will be a factor
16:48:39 <johnsom> I will propose a test job today with the devstack mysql memory cap setting to see how that impacts the jobs
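Assuming the devstack flag in question is `MYSQL_REDUCE_MEMORY` (which caps the MySQL buffer pool and connection count), the local.conf fragment for such a test job could look like:

```
# local.conf fragment -- hedged sketch, assuming MYSQL_REDUCE_MEMORY is
# the devstack setting johnsom refers to.
[[local|localrc]]
MYSQL_REDUCE_MEMORY=True
```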
16:48:50 <gthiemonge> ack
16:49:43 <gthiemonge> any other topics?
16:51:03 <gthiemonge> ok!
16:51:11 <gthiemonge> thank you guys!
16:51:14 <gthiemonge> #endmeeting