19:00:34 <clarkb> #startmeeting infra
19:00:34 <opendevmeet> Meeting started Tue May 13 19:00:34 2025 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:34 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:34 <opendevmeet> The meeting name has been set to 'infra'
19:00:50 <clarkb> #link https://lists.opendev.org/archives/list/service-discuss@lists.opendev.org/thread/B3CAZDR2RDA672TKGETT2SAIRISBMKUF/ Our Agenda
19:00:55 <clarkb> #topic Announcements
19:01:17 <clarkb> currently finishing up my last meeting. But wanted to get things started here so there wasn't confusion over whether it was happening
19:01:37 <tonyb> Noted
19:02:44 <clarkb> I don't have anything else to announce. Were there other announcements?
19:02:57 <tonyb> Not from me
19:03:51 <fungi> summit cfp closes in a month
19:03:57 <clarkb> June 13 is the deadline
19:04:20 <fungi> #link https://summit2025.openinfra.org/cfp/
19:04:54 <clarkb> #topic Zuul-launcher image builds
19:04:56 <fungi> per the openinfra foundation board meeting today, openinfra projects may not need cla enforcement in a month
19:05:02 <clarkb> #undo
19:05:02 <opendevmeet> Removing item from minutes: #topic Zuul-launcher image builds
19:05:16 <clarkb> ya I guess it's worth making mention of that as we may be asked to assist with dco enforcement
19:05:32 <fungi> just a heads up there may be some semi-urgent acl changes when that happens
19:05:43 <clarkb> but I think we (opendev) are in a holding pattern until there is stronger direction
19:05:52 <fungi> correct
19:06:00 <fungi> more just early warning
19:06:31 <clarkb> #topic Zuul-launcher image builds
19:06:34 <clarkb> ok lets dive into the agenda
19:06:47 <clarkb> mnasiadka continues to be the wizard porting nodepool images over to zuul-launcher
19:06:57 <corvus> yay!
19:06:58 <clarkb> we should have ubuntu and debian images now. CentOS and rocky are in progress
19:07:12 <clarkb> and this covers both x86 and arm images too which is great
19:07:40 <corvus> i think probably most of the major gaps are covered at this point?
19:07:42 <clarkb> I also wanted to mention that corvus added grafana dashboards for zuul-launcher against each of the cloud regions too. So we have better insight into how things are going there without looking in zookeeper directly
19:07:47 <corvus> like, do we have enough to satisfy openstack?
19:08:08 <clarkb> corvus: almost. I think openstack uses a fair bit of centos. There was a bunch of discussion about that in the TC meeting earlier today
19:08:21 <clarkb> once centos is in then yes I think we're basically there for something like openstack
19:08:36 <corvus> groovy
19:08:40 * tonyb should go read the TC logs
19:09:16 <clarkb> corvus: from an implementation perspective are there any gaps you are concerned about? Do we need better node deletion tooling based on the recent need for that?
19:09:53 <corvus> yeah, we are still missing some manual tasks, so we probably shouldn't switch over more tenants before that
19:10:14 <corvus> (like "nodepool list/delete" equivalents)
19:10:25 <corvus> but with the latest changes, all the known issues resulting in leaked nodes, etc, should be fixed, so if something looks weird, speak up
19:10:54 <clarkb> sounds good. And zuul is dogfooding it
19:11:21 <clarkb> Other than helping mnasiadka and any other image build volunteers is there anything else we should look at doing on the opendev side?
19:11:44 <corvus> don't think so at the moment
19:11:53 <clarkb> great
19:11:56 <clarkb> #topic Gerrit 3.11 Upgrade Planning
19:12:37 <clarkb> Now that we've replaced the Gerrit server the next Gerrit item on my mind is upgrading Gerrit. Gerrit 3.12 is about to release and we've managed to roughly keep up, upgrading around when the next release comes out, which puts this upgrade in the now timeframe
19:12:47 <clarkb> it also works well with the openstack release schedule.
19:12:53 <clarkb> #link https://www.gerritcodereview.com/3.11.html
19:13:15 <clarkb> My rough plan here is to hold some nodes so that I can manually work through upgrade and downgrade steps and make sure that process works as expected
19:13:36 <clarkb> can also use that held node to test some behavior things if anything in the linked release notes concerns us (there is usually a small number of things to double check)
19:14:07 <clarkb> if others have time to look over that 3.11 release page in order to call out concerns that would be great. And I can volunteer to start putting together a plan and do the testing I described
19:14:35 <clarkb> thinking out loud I think this is possible late May/early June. Though early June might be weird for me, I'm sure I can make time for it if that ends up being the timeline that works
19:14:56 <clarkb> any thoughts/concerns/input on this item?
19:15:06 <fungi> i think that timing sounds good
19:15:33 <fungi> i did start to go over the release notes a while back, for other reasons, and it's... not brief
19:15:48 <clarkb> yes, I think this upgrade is a little more involved than the last ~3 or 4 we've done
19:16:09 <clarkb> the java stuff changes. But shouldn't affect us because we build our own wars
19:16:17 <clarkb> if we were using upstream war builds then we would have to upgrade to java 21 first
19:16:34 <clarkb> instead we'll build a java 17 war and run with java 17 on bookworm then switch to 21 when trixie becomes an option
19:17:43 <clarkb> #topic Upgrading old servers
19:17:59 <clarkb> The Gerrit server move was related to this but then all the followup kinda threw me off this horse
19:18:20 <clarkb> I'm hoping that I can get back to it either this week or next so be on the lookout for more server swaps. Also happy for others to jump in and do some if they are able
19:18:25 <clarkb> there is more than enough work to share here
19:18:53 <clarkb> The noble rsyslog apparmor issue has been accepted by ubuntu
19:19:14 <clarkb> not sure that means we'll get a fix in noble but our workaround is fine and maybe for the 2026 release we won't need a workaround
19:19:44 <clarkb> fungi: we didn't manage to get a definitive answer on refstack shutdown did we?
19:19:56 <clarkb> that is also semi related to this topic (not replacement/upgrade but cleanup)
19:19:57 <fungi> no, i need to pick that discussion back up
19:20:09 <fungi> i meant to try to cover it in person during meetings last week with staff
19:20:18 <fungi> (and failed)
19:20:23 <clarkb> ack
19:20:36 <clarkb> Anyone else have server upgrade updates? Otherwise I think we can continue on
19:21:28 <clarkb> #topic Working through our TODO list
19:21:33 <clarkb> #link https://etherpad.opendev.org/p/opendev-january-2025-meetup
19:21:39 <clarkb> reminder we have a high level todo list on this etherpad
19:22:02 <clarkb> if you would like to get more involved with opendev pulling up this list is a great place to start. Feel free to reach out to me with any questions you may have about specific tasks.
19:22:19 <clarkb> And for those of us that are regular contributors don't forget to update that list if things get completed or add them if you notice things that need attention
19:22:32 <clarkb> #topic Rotating mailman 3 logs
19:22:39 <clarkb> #link https://review.opendev.org/c/opendev/system-config/+/948478
19:23:00 <clarkb> fungi did end up writing a change for this. I think I convinced myself that we can probably just try it and if it creates problems then backing it out is relatively easy.
19:23:25 <clarkb> email delivery is a process that gets reattempted so if we have to shut stuff down for a short period to back out that is fine
19:24:34 <clarkb> but if other reviewers have additional concerns feel free to raise them on the change and we can sort out more testing ahead of time if necessary
19:24:43 <clarkb> fungi: anything else to add to this one?
19:24:52 <fungi> nope!
19:24:57 <fungi> that sums it up nicely
19:25:20 <clarkb> #topic Open Discussion
19:25:34 <clarkb> We ended up with a fairly short agenda today after I cleaned things up yesterday.
19:25:41 * tonyb has a thing
19:25:46 <clarkb> However, there are two things I wanted to call out that came up / I was reminded about afterwards
19:26:04 <tonyb> you go first clarkb
19:26:06 <clarkb> The first is I have a change up to upgrade gitea from 1.23.7 to 1.23.8 which should be straightforward if we want to proceed with that
19:26:37 <clarkb> and the other is setuptools made a release that broke pbr ~9 days ago. They backed that out, but now pbr is in a position where it has a shot clock to get updates in before setuptools breaks things again
19:27:10 <clarkb> it is probably a good idea for us to help the oslo folks get that sorted out in a reasonable amount of time. But I'm hoping they drive it and we can help with review, ci, testing, etc
19:27:20 <clarkb> tonyb: go for it
19:28:11 <tonyb> Is your point related to frickler's https://meetings.opendev.org/irclogs/%23opendev/%23opendev.2025-05-13.log.html#t2025-05-13T09:06:03 "infra-root: missed to add this to the agenda (and likely won't be at the meeting, either) but maybe we should discuss a strategy for dealing with the pbr CI issues? https://review.opendev.org/q/project:openstack/pbr+status:open or should this rather be
19:28:11 <tonyb> discussed in the oslo team context?" ?
19:28:40 <clarkb> tonyb: yes, basically pbr needs some fundamental updates to address setuptools problems but before it can make those updates needs to have working CI again
19:28:51 <tonyb> Okay cool
19:29:10 <tonyb> I wasn't sure and didn't want frickler's thing missed
19:29:17 <fungi> so two separate but connected problems
19:29:23 <tonyb> Okay
19:29:44 <clarkb> my suggestion is that we support oslo (something we've done with pbr for a long time) but not get in the drivers seat unless someone has a lot of time and interest they want to put into that
19:30:05 <clarkb> openstack in particular relies on features in pbr that aren't supported by setuptools scm
19:30:25 <clarkb> opendev and zuul etc currently rely on pbr but don't rely on those features so for us worst case we just change the package management system
19:30:56 <clarkb> so I'm hoping openstack leads the way here :)
19:31:17 <clarkb> tonyb: was that your item? Or was there something else too?
19:31:30 <tonyb> In https://review.opendev.org/c/openstack/project-config/+/948033 (discussion about adding RDO to opendev) clarkb asked where x86_64-v3 is actually available so I wanted to check for objections before I launch a node in each cloud+region to check for cpu-flags to answer that
19:31:54 <clarkb> tonyb: I think that is a great idea and a good way to get concrete info. No objection from me
19:32:00 <fungi> i have no objections to doing that, though you could also probably just check /proc/cpuinfo on the mirrors?
19:32:18 <clarkb> tonyb: I would boot them in the jenkins/zuul tenant just in case nova scheduling by tenant has an impact (I doubt it will but may as well do it that way to be sure)
19:32:25 <fungi> good point
19:32:32 <tonyb> In addition I want to say that RDO is in a ... state of flux ... and after I published that review it became possible that RDO would go to gitlab to follow CentOS
19:32:49 <fungi> or i guess you could log into running ci nodes and cat /proc/cpuinfo
19:33:00 <tonyb> Okay cool that's a good thing to point out
19:33:26 <clarkb> ya you want to use the same tenant and flavor as much as possible I think
19:33:32 <clarkb> since clouds can do weird scheduling things iirc
19:34:46 <tonyb> fungi: I could ... do that. I'd be a little worried about futzing up jobs but `(ssh root@${node} -q cat /proc/cpuinfo) > ${node}.cpuinfo` should be pretty safe
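(For anyone following up on the x86_64-v3 question: below is a minimal sketch of what the per-node flag check could look like. The host names are hypothetical placeholders and the flag list is the set of CPU features x86-64-v3 adds on top of v2, used here as a heuristic; on hosts with glibc 2.33 or newer, `/lib64/ld-linux-x86-64.so.2 --help` also reports supported microarchitecture levels directly.)

```bash
#!/bin/bash
# Sketch: report whether each host's CPU advertises the flags required
# for the x86-64-v3 microarchitecture level (beyond x86-64-v2).
# Replace the hypothetical host names with real mirror or CI node names.
REQUIRED_FLAGS="avx avx2 bmi1 bmi2 f16c fma abm movbe xsave"
for node in mirror01.example.opendev.org mirror02.example.opendev.org; do
    # Grab the first "flags" line from /proc/cpuinfo on the remote host
    flags=$(ssh "root@${node}" -q grep -m1 '^flags' /proc/cpuinfo)
    missing=""
    for f in ${REQUIRED_FLAGS}; do
        # -w matches whole words, so "avx" will not falsely match "avx2"
        echo "${flags}" | grep -qw "${f}" || missing="${missing} ${f}"
    done
    if [ -z "${missing}" ]; then
        echo "${node}: x86-64-v3 capable"
    else
        echo "${node}: missing${missing}"
    fi
done
```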
19:34:54 <clarkb> re RDO's home I guess the main thing on our side is to let us know if a decision is made one way or another so that we can either help or stand down on any necessary pre-debugging
19:35:11 <tonyb> Yup will do
19:35:48 <tonyb> Okay that's all from me
19:35:53 <clarkb> I'll leave things open until 19:40 if there is anything else. Otherwise I think we can all go grab $meal early. I'm still on texas time so hungry for lunch
19:36:39 <fungi> yeah, i have texas stomach as well, feels like it should be lunchtime still
19:37:17 <corvus> ya'll gonna need a texas-sized lunch?
19:37:38 <clarkb> maybe. We have a pot of beans on that I have had to smell all morning and it has made me very hungry
19:38:52 <clarkb> ok sounding like that really was everything
19:38:55 <clarkb> Thank you everyone!
19:39:12 <clarkb> We will be back at the same time and location next week. Probably with a fuller agenda
19:39:20 <fungi> thankfully i think texas-sized lunches are behind me for a while to come
19:39:29 <fungi> still recovering
19:39:31 <clarkb> and as always feel free to reach out on the mailing list or in #opendev if anything comes up between now and then
19:39:48 <clarkb> #endmeeting