13:00:00 <mnasiadka> #startmeeting kolla
13:00:00 <opendevmeet> Meeting started Wed Aug  7 13:00:00 2024 UTC and is due to finish in 60 minutes.  The chair is mnasiadka. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:00 <opendevmeet> The meeting name has been set to 'kolla'
13:00:06 <mnasiadka> #topic rollcall
13:00:07 <mnasiadka> o/
13:00:23 <mmalchuk> o/
13:00:53 <mhiner> o/
13:00:58 <SvenKieske> o/
13:01:14 <darmach> o/
13:01:48 <mattcrees> o/
13:02:46 <mnasiadka> #topic agenda
13:02:46 <mnasiadka> * CI status
13:02:46 <mnasiadka> * Release tasks
13:02:46 <mnasiadka> * Regular stable releases (first meeting in a month)
13:02:46 <mnasiadka> * Current cycle planning
13:02:48 <mnasiadka> * Additional agenda (from whiteboard)
13:02:48 <mnasiadka> * Open discussion
13:02:51 <mnasiadka> #topic CI Status
13:02:55 <mnasiadka> so, status is RED
13:03:02 <mnasiadka> (for kolla-ansible)
13:03:19 <mnasiadka> pinning openstackclient in https://review.opendev.org/c/openstack/kolla-ansible/+/925857
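For illustration, such a pin usually lands in the test requirements; the version bound below is an assumption, not necessarily the one in the linked patch:

    # hypothetical shape of the openstackclient pin; see the review for the real bound
    pip install 'python-openstackclient<7.0.0'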
13:03:33 <mnasiadka> #topic Release tasks
13:04:15 <mnasiadka> It's R-8 week, nothing on our calendar
13:04:23 <jovial> Will we need to pin the openstack client in kayobe too?
13:04:23 <mnasiadka> #topic Regular stable releases
13:04:39 <mnasiadka> jovial: it depends, if you have Nova without Cinder, or Cinder without Nova ;-)
13:04:44 <mnasiadka> (in any job)
13:04:58 <jovial> Ahh, cheers
13:05:04 <mnasiadka> bbezak: did we release anything last month?
13:05:18 <mnasiadka> I did see some movement maybe, but maybe I was just dreaming ;-)
13:05:29 <mnasiadka> Ah, bbezak is not here
13:05:32 <mnasiadka> so I'll ask him later
13:05:34 <SvenKieske> yes we did
13:06:00 <mnasiadka> Ok, so I'll raise this month's releases after the meeting
13:06:11 <mnasiadka> #topic Current cycle planning
13:06:29 <SvenKieske> https://review.opendev.org/c/openstack/releases/+/925182
13:06:54 <mnasiadka> SvenKieske: thanks
13:07:15 <mnasiadka> ok, it's my first day after vacation - so going to work again on Noble and Ansible bump
13:07:19 <mnasiadka> and we'll see how that goes
13:07:55 <mnasiadka> I guess no other major features need discussing
13:08:02 <mnasiadka> #topic Additional agenda (from whiteboard)
13:08:08 <mnasiadka> oh boy, that's long
13:08:23 <mnasiadka> mhiner [2024-08-07]: please review:
13:08:23 <mnasiadka> Refactor of docker worker https://review.opendev.org/c/openstack/kolla-ansible/+/908295
13:08:23 <mnasiadka> Refactor of kolla_container_facts https://review.opendev.org/c/openstack/kolla-ansible/+/911417
13:08:23 <mnasiadka> Move actions to kolla_container_facts https://review.opendev.org/c/openstack/kolla-ansible/+/911505
13:08:23 <mnasiadka> Add uninstall tasks to ACK https://review.opendev.org/c/openstack/ansible-collection-kolla/+/925083
13:08:24 <mnasiadka> Add action for getting container names list https://review.opendev.org/c/openstack/kolla-ansible/+/924389
13:08:32 <mhiner> Also, the two changes mentioned in the TODOs in the migration patchset are done - the last two on the above list.
13:08:40 <mhiner> Should I add them to the migration patchset?
13:08:46 <mhiner> Doing this will put the migration in fifth place in the relation chain.
13:10:06 <ihalomi> there is also a problem with the docker worker refactor in the upgrade tests, since they use an old release of ansible-collection-kolla which doesn't install docker 6.0.0
13:10:47 <mhiner> that was discussed at the previous meeting
13:10:50 <SvenKieske> that last one should be fixed by the osbpo patch
13:11:14 <ihalomi> then during the upgrade it switches to the master version of a-c-k, but bootstrap-servers is not called again
13:11:28 <SvenKieske> afaik it was this one: https://review.opendev.org/c/openstack/ansible-collection-kolla/+/916258
13:12:49 <ihalomi> SvenKieske: yes, but it needs to be backported to the 2024.1 release of a-c-k
13:13:40 <mnasiadka> I'm still amazed that we are not using a venv and recommending users use one ;-)
13:14:20 <SvenKieske> I guess that's a different discussion that already took place. It's just missing people actually implementing it.
13:14:47 <SvenKieske> and maybe we should also use venvs ourselves everywhere, before recommending it to users ;)
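A minimal sketch of the venv-based install being suggested here, assuming the kolla-ansible PyPI package; paths are illustrative:

    # create an isolated environment and install kolla-ansible into it
    python3 -m venv /opt/kolla-venv
    /opt/kolla-venv/bin/pip install -U pip
    /opt/kolla-venv/bin/pip install kolla-ansible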
13:15:42 <mnasiadka> ok, reviewed some of these patches, let's see the answers later
13:15:56 <mnasiadka> (r-krcek)[2024-08-07] please review
13:15:56 <mnasiadka> Change to dev mode https://review.opendev.org/c/openstack/kolla-ansible/+/925714 and https://review.opendev.org/c/openstack/kolla/+/925712
13:15:56 <mnasiadka> k-a check command https://review.opendev.org/c/openstack/kolla-ansible/+/599735
13:15:56 <mnasiadka> memcache_security_strategy templating https://review.opendev.org/c/openstack/kolla-ansible/+/925444
13:16:42 <ihalomi> mnasiadka: what about backporting this so docker worker tests will pass? https://review.opendev.org/c/openstack/ansible-collection-kolla/+/916258
13:17:30 <mnasiadka> just propose it, we can discuss it in Gerrit
13:20:20 <SvenKieske> regarding dev mode: doesn't the proposed approach break with modern pip, which doesn't allow installing outside of a venv? will comment on the patch
13:20:34 <mnasiadka> Well, first of all it's not a bug fix - it's not backportable
13:20:55 <mnasiadka> but let's continue in Gerrit
13:21:09 <mnasiadka> ok, and now a huge blob of text
13:21:12 <mnasiadka> (SvenKieske)[2024-08-07]: How do we want to handle slurp upgrades in the future?
13:21:12 <mnasiadka> currently it's afaik only planned to make a special upgrade command for the 2023.1 release to upgrade rmq: https://review.opendev.org/c/openstack/kolla-ansible/+/918976
13:21:12 <mnasiadka> problem I see is: won't we need a similar change for the next slurp upgrade cycle as well, and wouldn't it thus make sense to add a generic slurp-upgrade command to master?
13:21:12 <mnasiadka> (mattcrees): My understanding is that going forward we should only bump RabbitMQ versions once a year during the major/odd releases. As such, we shouldn't need this additional rabbit upgrade in future releases.
13:21:12 <mnasiadka> maybe I missed something or is the intent to just always provide one-off patches for the slurp releases to keep the master codebase lean?
13:21:14 <mnasiadka> It's also unclear to me how we handle the process when users set `rabbitmq_image` via globals.yml; we need to handle these users during future upgrades, see comments on the patch.
13:21:14 <mnasiadka> what I mean by this: how do we make sure we don't forget a reno for slurp telling users they now need to change their rabbitmq_image var if they had customised it, etc.? do we possibly want to test this in CI slurp jobs maybe?
13:21:16 <mnasiadka> (mattcrees): I guess bumping RabbitMQ versions should already have a reno?
13:21:16 <mnasiadka> (mattcrees):  We do definitely want to test the double rabbit upgrade in CI, I plan for a follow-up patch once the current ones get merged.
13:22:20 <mnasiadka> Ok, so RMQ double version upgrade - since we do rolling upgrade, we can't jump across two releases in one go
13:22:29 <SvenKieske> okay, if someone (mnasiadka?) could clarify whether we just intend to bump rmq going forward once a year (is that possible with upstream's release cadence? I don't know), I'm fine.
13:22:50 <SvenKieske> I actually just now read mattcrees replies, thanks for those :)
13:23:24 <mnasiadka> well, RMQ in the past had two major releases per year
13:23:26 <SvenKieske> I guess my fear of missing a reno is overblown, I agree we will have reno if we bump rmq version
13:24:02 <mnasiadka> And it seems they are not changing this
13:24:04 <SvenKieske> so if they (rmq) stick to their release cadence we can't only upgrade once a year, can we?
13:24:11 <mnasiadka> So I would be reluctant to agree to bumping only every two cycles
13:24:33 <mnasiadka> I would rather think about having upgrade prechecks that would check whether an upgrade is possible?
13:24:35 <SvenKieske> so my proposal would be to move the rmq upgrade command to regular release (master)
13:24:52 <SvenKieske> currently it's sitting on the stable branch
13:25:21 <mattcrees> We've got a patch in progress for that kind of precheck: https://review.opendev.org/c/openstack/kolla-ansible/+/918977
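For context, prechecks run via the standard subcommand, so a version-gating check like the one in that patch would surface before the upgrade starts; the inventory path is illustrative:

    # run prechecks before attempting the upgrade
    kolla-ansible -i ./multinode prechecks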
13:25:48 <mnasiadka> so, let's assume somebody wants to upgrade from A to C
13:26:05 <SvenKieske> ah you beat me with the link posting matt, thx :9
13:26:14 <mnasiadka> so we would need to upgrade RMQ to B version, then C version
13:26:21 <mattcrees> And I agree, if we're not sticking to one bump a year then this should get into master. It might be nice to get a less symlinky way of having multiple versions if anyone has ideas around that ;)
13:26:42 <SvenKieske> let's fix the symlinks maybe later :D
13:28:09 <mnasiadka> So then yes, we need a command for upgrading RMQ
13:28:28 <mnasiadka> or a slurp-upgrade subcommand that upgrades RMQ first, and then does a regular upgrade
13:28:34 <mnasiadka> fine with anything really
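A rough sketch of the flow being discussed for an A -> C (SLURP) upgrade; the subcommand name and versions are assumptions based on the linked review, not settled syntax:

    # step RabbitMQ through the intermediate version first (versions hypothetical)
    kolla-ansible -i ./multinode rabbitmq-upgrade 3.12
    kolla-ansible -i ./multinode rabbitmq-upgrade 3.13
    # then run the regular rolling upgrade of everything else
    kolla-ansible -i ./multinode upgrade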
13:29:13 <SvenKieske> alright, my point was just that we maybe move the existing approach in Gerrit from the current stable branch to master, ty! :) we can discuss details and improvements on the patches I guess
13:29:58 <mattcrees> Sure, sounds like we're in agreement on the plan. I'll get those patches pointing to master when I get a moment
13:30:25 <mnasiadka> we can and probably we should, although we can't test it in master ;-)
13:31:40 <mnasiadka> Anyway, happy to review anything
13:31:41 <SvenKieske> alright, I have two little things CI and python 3.12 related
13:31:51 <SvenKieske> which are not on the agenda, but I guess the agenda is empty?
13:32:10 <mnasiadka> mattcrees: can we gather all those RMQ related patches in some Gerrit topic and reference it somewhere on the whiteboard so it's easy to track them?
13:32:18 <mnasiadka> #topic Open discussion
13:32:21 <mnasiadka> SvenKieske: now you can
13:32:22 <mnasiadka> ;-)
13:32:23 <mattcrees> Yeah sure thing
13:32:35 <SvenKieske> so I tried fixing a linting issue locally, when I ran into this: https://review.opendev.org/c/openstack/kolla-ansible/+/925671
13:33:08 <SvenKieske> which is really weird, because this flake8 error should've been caught by CI long ago - it should've complained every time since it was merged
13:33:26 <SvenKieske> I still haven't figured out why CI didn't catch it (I haven't had that much time to look into it yet)
13:34:04 <SvenKieske> the second thing I stumbled over while doing this, which is interesting for mnasiadka I guess, is that flake8 won't really work with python3.12 without https://review.opendev.org/c/openstack/kolla-ansible/+/925670
13:34:27 <SvenKieske> so you might want to incorporate that into your ubuntu noble testing
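A quick way to reproduce the failure mode locally, assuming python3.12 is available; the version pin is illustrative:

    # an old flake8 (as pulled in by the pinned hacking) breaks on python3.12;
    # a fresh 3.12 venv with a current flake8 works
    python3.12 -m venv lint312
    lint312/bin/pip install 'flake8>=6.1'
    lint312/bin/flake8 kolla_ansible/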
13:35:01 <mnasiadka> or we just merge it now ;-)
13:35:13 <SvenKieske> I also discussed this with frickler and he had no immediate idea either why CI didn't catch this. also, I would like to propose running our linters on python3.12 maybe?
13:35:14 <mnasiadka> well, question is where do we install flake8 ;-)
13:35:20 <SvenKieske> xD
13:35:38 <SvenKieske> I'm actually not sure where we currently set the python version for our linters, yeah
13:35:49 <SvenKieske> I guess I can figure that out somewhere
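In OpenStack projects the lint interpreter is typically pinned in tox.ini; a sketch, assuming the usual pep8 env name:

    # tox.ini (sketch):
    #   [testenv:pep8]
    #   basepython = python3.12
    tox -e pep8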
13:36:02 <mnasiadka> I'll have a look - basically the problem with the Noble patch is that we always assumed the Ubuntu nodes for py3 testing are the same as the ones we use for building Ubuntu images
13:36:17 <mnasiadka> and now only the latter is moving to Noble
13:36:24 <mnasiadka> which I'll need to solve somehow
13:36:40 <mnasiadka> but maybe we should have some additional jobs for py3.12 testing
13:36:44 <mnasiadka> I'll have a look
13:37:28 <SvenKieske> yeah, I was also thinking it's better to add more jobs for py312 instead of just moving, seems safer, even if the increased load is unfortunate
13:38:20 <SvenKieske> thanks for looking into it
13:39:10 <SvenKieske> I still want to figure out why the linter didn't complain about that f-string, that's a pretty old basic check. and I accidentally even fixed a bug there, because the hostname wasn't printed properly
13:40:05 <SvenKieske> at least it didn't find more bugs..
13:40:30 <mnasiadka> haha
13:41:45 <SvenKieske> but I fear that somehow that linter is just partially broken and doesn't report anything..
13:42:09 <SvenKieske> guess that's paranoia, maybe I should add a test case that's always failing to ensure the linter works :D
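A toy version of that canary idea: a file with a known violation that flake8 must flag (E225, missing whitespace around an operator), proving the linter actually runs:

    # if flake8 exits zero here, the linter is not doing its job
    printf 'x=1\n' > /tmp/canary.py
    ! flake8 /tmp/canary.py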
13:43:42 <SvenKieske> I also have some code lying around that I need to push which checks whether all our imports are in our requirements.yml/txt files; it's not quite polished enough yet
13:44:44 <mnasiadka> ah, hacking 3.0.1 depends on flake8<3.8.0 and >=3.6.0
13:44:51 <mnasiadka> and 3.7.9 doesn't fail on this one
13:45:00 <mnasiadka> and we don't have flake8 in lint-requirements.txt
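One hedged way out, assuming lint-requirements.txt is what CI installs from: list flake8 explicitly with a floor new enough for py3.12; the versions are illustrative:

    # hypothetical lint-requirements.txt additions
    hacking>=6.1.0
    flake8>=6.1.0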
13:45:04 <SvenKieske> mhm, maybe a bug in flake8
13:45:17 <SvenKieske> yeah you need a newer flake8 afaik on py312
13:45:27 <SvenKieske> maybe we should regularly bump those
13:45:42 <mnasiadka> I guess so
13:45:47 <mnasiadka> I don't even know why we limited hacking
13:46:05 <SvenKieske> there was some bug in the past afaik, but CI is green now ;)
13:46:31 <SvenKieske> the flake8 version was also pinned somewhere, I don't quite remember if it was in u-c or elsewhere?
13:46:36 <mnasiadka> yeah well, need more people paying attention :)
13:46:41 <mnasiadka> just checked, flake8 is not in u-c
13:46:50 <mnasiadka> after your patch we should be better anyway
13:46:51 <mnasiadka> thanks
13:46:56 <SvenKieske> the impression I got, at least, was that the basic py312 jobs don't test enough stuff :D
13:47:05 <SvenKieske> yeah thanks as well, guess we can conclude the meeting
13:47:21 <mnasiadka> question is whether we should backport the CI patch - I guess it could make some sense
13:48:17 <SvenKieske> then we first need to merge all the flake8 backport fixes: https://review.opendev.org/c/openstack/kolla-ansible/+/925671
13:48:42 <SvenKieske> those are trivial, but the CI will complain
13:49:05 <mnasiadka> those are merging
13:49:08 <mnasiadka> anyway, let's finish
13:49:10 <mnasiadka> thank you all for coming
13:49:12 <mnasiadka> #endmeeting