Tuesday, 2025-06-17

fungimeeting here in 30 minutes (19:00 utc)18:30
fungi#startmeeting infra19:00
opendevmeetMeeting started Tue Jun 17 19:00:02 2025 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.19:00
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:00
opendevmeetThe meeting name has been set to 'infra'19:00
fungi#link https://lists.opendev.org/archives/list/service-discuss@lists.opendev.org/thread/SCQGMDGA25UKFCKPKBO2RFTJRETB5XFT/ Our Agenda19:01
fungisorry, took me a sec19:01
fungi#topic Announcements19:01
fungi#link https://cfp.openstack.org/app/2025summit/all-plans/36/presentations/new/summary The OpenInfra Summit Europe 2025 call for Forum session proposals is open until July 819:05
fungii clearly should have prepared announcements in advance, that url was a lot more work to find than i anticipated19:06
fungianyway, if anyone wants to propose an opendev-related forum session, now's the time to figure that out19:06
fungiand remember, the gerrit folks will be there too so this is an opportunity for some cross-project work there as well19:07
fungiany other announcements?19:07
fungi#topic Zuul-launcher image builds (corvus 20240910)19:08
fungii know there's a stack of changes pending for label switches in nodesets19:08
corvuscouple of zuul-launcher things:19:08
corvus1) i have a change up to move the openstack tenant (and others) to use zuul-launcher nodes https://review.opendev.org/95271219:08
corvusthis would affect some, but not all jobs19:08
corvus2) i have a series of changes to move even more nodesets over to zuul-launcher for the tenants that use it.  that series *ends* at https://review.opendev.org/95272619:08
corvuswe could merge the change to switch openstack over, then merge the changes in the series to use progressively more zuul-launcher nodes as we gain confidence19:09
corvusare we ready for that?19:09
corvusfrickler noted that we can't autohold zuul-launcher nodes yet.  i can probably get that implemented this week so it's in saturday's restart.  should we wait on that or proceed without it in order to ramp up our load testing?19:09
fricklerI wouldn't see that as a blocker if the expected timeframe is that short19:10
fungiyeah, i think if we tell people we can't autohold their job for now it's not the end of the world19:10
corvus_i think my homeserver just decided to do something else...19:11
fungii'm on board with approving 952712 after the meeting today, or first thing tomorrow depending on what others are comfortable with19:11
corvus_yeah, i think that's reasonable.  and i think it's worth getting more data now19:11
corvus_so i'll plan on switching over the openstack tenant19:11
corvus_at the end of the series that moves over nodesets, you can see there are only two images we're missing in zuul-launcher: ubuntu-bionic and ubuntu-focal19:12
corvus_should we send out a "last call" for folks to port them?  or does someone here want to do it?  i don't expect it to be very hard.19:12
fungii think we (opendev) are still using focal nodes for at least some tests right?19:12
fungithings that haven't been moved to jammy/noble yet19:13
fungii'm hesitant to commit to finding time to do the focal label migration myself at this time though19:13
fungibionic i'm tempted to just let openstack folks take care of or we drop it19:14
corvus_ok.  i'm sure one of us opendev folks will end up doing focal if no one else does.19:14
corvus_it's looking highly likely that bionic may just not get done unless there's a motivated openstack-focused person19:14
fungior we take it as a sign to replace the lingering focal servers and move testing to later ubuntu ;)19:15
fungiwhich would be better for us overall, but maybe more work in the near term19:15
corvus_let's let it sit out there for a while, and when shutting off nodepool is in sight, send out one last notice.  sound good?19:15
fungii think when i looked, all the openstack releases still under maintenance were tested on jammy and newer19:15
fungiso even openstack may just say people who care about unmaintained branches should either step up asap or drop testing19:16
fungiyeah, sounds good to me. any other feedback on this topic?19:18
fricklerhow bad would it be to keep those labels running on nodepool for some time?19:18
fungiit would mean continuing to maintain nodepool and run the servers for it19:19
corvus_i think the effort to maintain nodepool for one label that no one uses is not worth doing19:19
fungii'm with you on that19:20
corvusso i'll plan on switching over the openstack tenant19:20
fungii think the effort to maintain anything no one uses (or at least cares enough about to help with) is not worth it19:20
fricklercan we gather data on how often a label is getting used?19:20
corvus_(hey the homeserver is back, sorry about the delayed/repeat message ^)19:20
fricklerI'm just arguing that we don't need to switch things off. if they break somehow, that's another story19:21
corvus_it's probably like 15 minutes of work for someone to port over the image, if it's important.19:21
fungibut also a good opportunity to stop spending cycles and wasting space/bandwidth on an image that isn't being used, if it's not being used that is19:22
corvus_i very much want to switch it off when we get there.  i'd like to recover the resources, and start to see what zuul looks like without nodepool.  note that they don't cooperate on things like quota...19:22
fungiso i guess to repeat/restate frickler's earlier question, what's the best way to figure out how much or often a given label is requested?19:23
corvus_should show up in the launcher logs19:23
fungi(or how recently)19:23
fungiwe have ~2 weeks retention on those? or am i remembering wrong?19:24
corvusmaybe 10 days it looks like?19:25
fungii concur19:25
fungijust checked19:25
fungiso we could say a label not requested in 10 days is probably not that important19:25
fungiand someone could also port it later if that turns out to be an incorrect assumption19:26
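[A rough sketch of the kind of check discussed above, for reference: scan the launcher log files for lines mentioning a label and count them per file. The log path, filename glob, and the assumption that node-request lines contain the label name verbatim are guesses about zuul-launcher's logging, not confirmed details.]

    import glob

    # Count lines mentioning a label in each launcher log file.  Rotated or
    # compressed logs (.gz) would need extra handling; this only reads the
    # plain-text ones still within the ~10 day retention window.
    def label_mentions(label, pattern="/var/log/zuul/launcher*.log"):
        counts = {}
        for path in sorted(glob.glob(pattern)):
            with open(path, errors="replace") as fh:
                counts[path] = sum(1 for line in fh if label in line)
        return counts

    if __name__ == "__main__":
        for path, count in label_mentions("ubuntu-focal").items():
            print(f"{path}: {count}")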
fungianything else on this topic?19:27
corvusnope19:27
corvusthanks!19:27
fungi#topic Gerrit shutdown problems (clarkb 20250527)19:27
fungii don't think there's anything new on this, we can revisit next week19:27
fungi#topic Gerrit 3.11 Upgrade Planning (clarkb 20250401)19:27
fungi#link https://www.gerritcodereview.com/3.11.html Please check this for any concerns with the way we use Gerrit19:28
fungi104.130.253.194 is a held Gerrit 3.11 node for testing purposes19:29
fungi#link https://review.opendev.org/c/opendev/system-config/+/882900 Host Gerrit images on quay.io19:29
fungiand its parent change19:29
fungi#link https://etherpad.opendev.org/p/gerrit-upgrade-3.11 Planning Document for the eventual Upgrade19:30
fungino changes since last week, but all good reminders as we should move forward on this soon19:30
fungi#topic Upgrading Old Servers (clarkb 20230627)19:30
fungi#link https://etherpad.opendev.org/p/opendev-bionic-server-upgrades19:31
fungi#link https://etherpad.opendev.org/p/opendev-focal-server-upgrades19:31
fungithose are somewhat relevant to the earlier zuul-launcher discussion as well19:31
fungii haven't seen tonyb so unsure what the wiki (and cacti) situation is, presumably the same as last week19:32
fungisimilarly i haven't found time to work out what the communication schedule should be (if any) for decommissioning refstack19:32
fungiwe're already halfway through the hour so i'm going to skip a few older topics for now...19:33
fungi#topic Enabling hashtags globally (corvus 20250523)19:33
fungithis happened19:33
fungii haven't heard any complaints19:33
corvusi think we can drop from agenda now?19:33
fungilikely19:34
fungishould be resolved, i just wanted to double-check there's been no fallout that escaped my attention19:34
fungi#topic Adding CentOS 10 Stream Support to Glean, DIB, and Nodepool (clarkb 20250527)19:35
fungii don't think there's anything new on this one, but as there are a bunch of moving parts and multiple people involved i want to make sure19:36
fungisounds like no19:37
fungi#topic projects.yaml normalization (frickler 20250617)19:37
fungia (possibly intentional) regression in ruamel.yaml has started adding spaces to the end of wrapped strings, akin to format=flowed E-mail wrapping19:37
fungi#link https://sourceforge.net/p/ruamel-yaml/tickets/546/ Trailing whitespaces in some generated YAML outputs19:37
fungi#link https://review.opendev.org/952006 this is impacting our normalization job output19:38
corvusi really hope that's not intentional :)19:38
fungiyeah, it seems odd to me19:39
corvusruamel does allow for a lot of control, i wonder if we could code around it19:39
fungias i mentioned in one of the comments, the only place i've ever seen that is in format=flowed e-mail19:39
corvus(which is a mitigation for the quirks of a 40 year old standard)19:40
fungithe lack of maintainer feedback on the linked ticket means i don't know whether it's worth hacking around something they may just fix in the next release19:40
corvus(not something anyone should be doing this century)19:40
fungiand yeah, wrapping is explicit in yaml, no need for inference like in e-mail message bodies19:40
fungiprobably just missed chomping the whitespace at the wrap point in some update19:41
corvusmy thoughts: 1) this is not a good format, we shouldn't adapt to it; 2) pinning for a couple months to see if things shake out seems okay; 3) if it doesn't get reverted, we should code around it... somehow.19:41
fungiso anyway, the choices as currently presented to us are to temporarily pin/skip versions of that library, accept the new output, or keep ignoring it and hope something changes19:42
fungii agree with your take, corvus19:42
frickler+119:42
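[A minimal sketch of the "code around it" option raised above, assuming the stray spaces only appear where long scalars get wrapped: dump with ruamel.yaml as usual, then strip trailing whitespace from each emitted line before writing. The width setting and round-trip defaults here are illustrative, not necessarily what the normalization job uses.]

    from io import StringIO

    from ruamel.yaml import YAML

    yaml = YAML()
    yaml.width = 79  # wrap long scalars; the trailing spaces show up at wrap points

    # Dump to a buffer, then strip whatever whitespace the emitter left at the
    # end of each line.  Stripping is a no-op on unaffected versions, so this
    # stays safe if the regression is later fixed upstream.
    def dump_clean(data, stream):
        buf = StringIO()
        yaml.dump(data, buf)
        cleaned = "\n".join(line.rstrip() for line in buf.getvalue().splitlines())
        stream.write(cleaned + "\n")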
fricklerthe second point I wanted to mention: why does the bot keep updating the change when there is no delta to the previous PS?19:43
fungialso worth keeping an eye on (and linking in an inline comment for the pin/skip) the upstream bug report19:43
fungifrickler: without having dug into it, i'm guessing it either doesn't attempt to check whether the diff is equivalent to the existing change already in gerrit, or the attempt it makes is flawed in some way19:44
fungisomeone needs to figure out which that is if we don't want to continue getting new identical revisions daily19:44
fricklerI think I saw a check, but I don't have the ref handy19:45
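[Purely as a hypothetical illustration of the kind of "no delta" check being discussed, not what the proposal bot actually does today: fetch the file from the currently open change via the Gerrit REST API and skip pushing a new revision when the freshly generated output already matches. The change identifier and file path handling are assumptions.]

    import base64
    from urllib.parse import quote

    import requests

    GERRIT = "https://review.opendev.org"

    # Return True when the named file in the open change already matches the
    # freshly generated text, i.e. a new patchset would be identical.
    def change_already_matches(change_id, path, generated_text):
        url = (f"{GERRIT}/changes/{change_id}/revisions/current/"
               f"files/{quote(path, safe='')}/content")
        resp = requests.get(url)
        if resp.status_code == 404:
            return False  # no such change or file; a push is needed
        resp.raise_for_status()
        # Gerrit returns file content base64-encoded.
        current = base64.b64decode(resp.text).decode("utf-8")
        return current == generated_text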
fungi#link https://review.opendev.org/952315 Block broken ruamel.yaml versions19:46
fungithat's worth calling out as the (currently abandoned) option we seem to be favoring19:46
fungiperhaps worth restoring and updating if necessary19:47
corvus++19:47
fricklerrestored now19:47
fungithanks19:47
fricklerwill check whether the denylist is still current tomorrow19:47
fungithanks!19:48
fungii think that's still current, fwiw19:48
fungiit has 2x+2 already, feel free to self-approve if you confirm there aren't newer releases yet to worry about19:49
fungiokay, i don't know that this is worth spending more time on since those of us present seem to have reached a quick agreement19:50
fungi#topic Open discussion19:50
fungiwe have 10 minutes left, anyone have updates on the older topics i skipped or anything else they want to bring up?19:50
fungiseems like no. i return the balance of 5 minutes, don't spend it all in one place!19:54
fungi#endmeeting19:54
opendevmeetMeeting ended Tue Jun 17 19:54:34 2025 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)19:54
opendevmeetMinutes:        https://meetings.opendev.org/meetings/infra/2025/infra.2025-06-17-19.00.html19:54
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/infra/2025/infra.2025-06-17-19.00.txt19:54
opendevmeetLog:            https://meetings.opendev.org/meetings/infra/2025/infra.2025-06-17-19.00.log.html19:54
corvusthanks!19:54
-opendevstatus- NOTICE: Zuul jobs reporting POST_FAILURE were due to an incident with one of our cloud providers; this provider has been temporarily disabled and changes can be rechecked22:36
