Wednesday, 2025-08-20

clarkbcorvus: the upgrades and reboots have gotten as far as zl01. We issued a graceful stop which the process recorded in its debug log. I think we're waiting now for all the uploads to finish maybe? I guess that's fine for rolling upgrades but it is different than the hard stop/start we have been doing. I wonder if we should update the playbook to match or not00:26
clarkblooks like it took approximately 15 minutes to gracefully stop. That isn't too bad so maybe this is fine. It does mean the launcher falls off the components list because unlike the executors it isn't "paused" during that period of time00:27
clarkbanyway I think this is working, it's just that it didn't look how I expected for a moment so I dug in00:28
opendevreviewOpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml  https://review.opendev.org/c/openstack/project-config/+/95799502:25
opendevreviewMerged openstack/project-config master: kolla: Allow kolla-core to remove RP votes  https://review.opendev.org/c/openstack/project-config/+/94984204:19
opendevreviewMerged openstack/project-config master: Remove nodepool configuration/elements  https://review.opendev.org/c/openstack/project-config/+/95618404:22
opendevreviewMerged openstack/project-config master: Prepare for retirement of RefStack repositories  https://review.opendev.org/c/openstack/project-config/+/94785904:24
opendevreviewMerged openstack/project-config master: Stop syncing run_tests/Vagrantfiles for OSA  https://review.opendev.org/c/openstack/project-config/+/95694404:24
ykarelHi is this known? ERROR! couldn't resolve module/action 'openvswitch_bridge'. This often indicates a misspelling, missing collection, or incorrect module path.04:47
ykarelor was a timing thing during ansible 11 switch?04:47
ykarelseen in todays periodic run04:47
ykarelhttps://c4e65dc90d54a7cb0d09-c58207963db0f03dec19154799b50d2d.ssl.cf5.rackcdn.com/openstack/41fccd9ecb204b53bc4e82d4e6cd9dec/job-output.txt04:47
ykarelhttps://6ab93f4ca7b96e79e883-fb8b0f0ff152f556a5802daf1433e080.ssl.cf5.rackcdn.com/openstack/492f0763d7b54cb388374107cc79cf62/job-output.txt04:48
ykarelor maybe we need to adopt the same approach in https://codesearch.opendev.org/?q=Ensure%20the%20infra%20bridge%20exists&i=nope&literal=nope&files=&excludeFiles=&repos=04:49
fricklerykarel: IIUC that module was dropped from what gets installed with ansible 11, a short term workaround might be to let those jobs run with ansible 904:59
opendevreviewTakashi Kajinami proposed opendev/system-config master: Add OpenVox to mirror  https://review.opendev.org/c/opendev/system-config/+/95729905:00
ykarelfrickler, ack i was going to copy the module like https://opendev.org/zuul/zuul-jobs/raw/branch/master/roles/multi-node-bridge/library/openvswitch_bridge.py05:00
fricklerah, yes. maybe there is a way we can make it usable from other playbooks without duplication, though.05:05
ykarelhttps://review.opendev.org/c/openstack/neutron-tempest-plugin/+/95800805:09
ykarelyes, looks like with ANSIBLE_LIBRARY set it can be reused; any way to set it?05:10
fricklersorry, I don't know too much about these ansible-in-zuul details, maybe clarkb or corvus have an idea05:14
ykarelok thx05:33
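For reference, the standard ways to point Ansible at an extra module directory are the ANSIBLE_LIBRARY environment variable or the equivalent ansible.cfg setting; whether either can be set from within a Zuul-managed job is the open question above. A minimal sketch with illustrative paths:

```shell
# Make a vendored module such as openvswitch_bridge.py resolvable by name:
export ANSIBLE_LIBRARY=/path/to/roles/multi-node-bridge/library
ansible-playbook playbook.yaml

# The equivalent ansible.cfg setting:
#   [defaults]
#   library = /path/to/roles/multi-node-bridge/library
```

Note that modules placed in a role's own library/ directory are found automatically for tasks in that role, which is what the zuul-jobs vendoring and ykarel's neutron-tempest-plugin change both rely on.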
*** clarkb is now known as Guest2453511:02
*** dhill is now known as Guest2453912:35
fungiykarel: yeah, ansible dropped the ovs module, so clarkb vendored it into the multi-node-bridge role with https://review.opendev.org/c/zuul/zuul-jobs/+/95718812:54
fungiyou could also just do something similar12:54
fungithough i agree finding a way to not have two copies floating around would be nice12:55
fungilonger term it would probably make more sense to replace it entirely in multi-node-bridge with a similar setup just using linux's bridge driver12:55
ykarelfungi, for the limited use going with https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/958008/2/roles/multi-node-setup/tasks/main.yaml12:56
fricklerinfra-root: looks like we again have a kolla-ansible change stuck in check for > 24h, https://review.opendev.org/935704 PS20. not sure if that's related to the ongoing restarts? I'll leave it in place for now in case someone wants to dig further13:02
*** dhill is now known as Guest2454413:36
Clark[m]frickler: I wouldn't expect that to be related to the restarts as the launchers updated closer to 14 hours ago. I think if you hover the build status bar in the UI you get the request ID which you can grep for in launcher and scheduler logs to see where it may be stuck13:46
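As a concrete sketch of that suggestion (the log file locations on the Zuul hosts are assumptions, not confirmed in this conversation):

```shell
# Trace a stuck nodeset request through the Zuul logs; the ID here is one of
# the two that corvus records for posterity below.
REQUEST_ID=a56f042da27b4cfda9af080eea029ac7
grep "$REQUEST_ID" /var/log/zuul/launcher-debug.log   # assumed launcher log path
grep "$REQUEST_ID" /var/log/zuul/debug.log            # assumed scheduler log path
```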
Clark[m]We probably want to check on rax ord and dfw as well since reenabling those may have caused them to error again?13:47
opendevreviewClif Houck proposed openstack/diskimage-builder master: Add a sha256sum check for CentOS Cloud Images  https://review.opendev.org/c/openstack/diskimage-builder/+/95798313:52
opendevreviewClif Houck proposed openstack/diskimage-builder master: Add a sha256sum check for CentOS Cloud Images  https://review.opendev.org/c/openstack/diskimage-builder/+/95798313:53
fricklerClark[m]: if neither you nor corvus want to check deeper, I'd just abandon and restore that change in order to rerun jobs13:54
Clark[m]I can take a look but it will be a bit. I'm trying to get out for an early morning bike ride before the heat of the day but can look when I get back13:56
fungilooks like it has two builds that are waiting on specific nodeset requests13:57
fungiwe could check what provider those are for13:57
fungithough it's been in the queue since long before we reenabled the rax classic regions, and these don't look like retries13:58
corvusnodeset requests are a56f042da27b4cfda9af080eea029ac7 and ea449d9a4f744590a1e0af1b7c3bc625, for posterity14:08
corvusgiven that they each have the requisite nodes ready and assigned to the request, i think that's very likely a launcher bug14:10
fungii'll be in and out a bit today. taking a break from storm prep to go grab lunch, then when i get back i'll split my time between last-minute yardwork and server upgrades14:59
cloudnullClark[m] fungi can we get you all to shut down jobs on the rackspace legacy cloud environments? We’re seeing more than 300k api requests hammering the environment this morning.15:07
cloudnullMaybe there’s a runaway process? Bad return from the legacy api?15:07
opendevreviewJames E. Blair proposed opendev/zuul-providers master: Revert "Reenable rax DFW and ORD providers"  https://review.opendev.org/c/opendev/zuul-providers/+/95809415:16
corvus2025-08-20 15:13:51,274 ERROR zuul.Launcher:   keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://dfw.servers.api.rackspacecloud.com/v2/637776/servers/d92503ac-b6c8-4d5d-a54c-f6c0a4717271: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))15:16
corvuscloudnull ^ i see a lot of those errors15:16
corvus6422 of those errors since utc midnight15:18
corvus31294 regular api calls15:20
corvuscloudnull: opendev's zuul issues 37716 api calls to both regions of rax legacy since utc midnight, 6422 of them failed with connection errors15:20
opendevreviewMerged opendev/zuul-providers master: Revert "Reenable rax DFW and ORD providers"  https://review.opendev.org/c/opendev/zuul-providers/+/95809415:21
cloudnullWe’ll give it a look once things calm down a bit.15:26
cloudnullDo you happen to have an api breakdown of calls made across DFW and ORD since it was reenabled? Maybe this is just an issue with DFW?15:27
corvuscloudnull: yes, we have a record of every call.  one sec.15:32
corvusdfw: 19009 successful calls, 6422 failures; ord: 11271 successful calls, 0 failures15:35
corvuscloudnull: ^ so yeah, looks like dfw is the only one we're seeing errors for15:35
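A hedged sketch of how per-region counts like these could be pulled from the launcher log (the log path and exact message text are assumptions based on the traceback quoted above):

```shell
# Connection failures against the DFW endpoint (time filtering omitted for brevity):
grep 'dfw.servers.api.rackspacecloud.com' /var/log/zuul/launcher-debug.log \
    | grep -c 'RemoteDisconnected'
```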
corvusfrickler: i think i have what i need from those changes; they're never going to resolve on their own.  i will dequeue them.  bugfix to come later.15:40
corvuss/those changes/that change/15:41
cloudnullcorvus could I bother you to reenable ord and iad?15:42
cloudnullWe think there’s an issue specifically with DFW and I’d like to prove that and help with resources15:43
corvuscloudnull: we hadn't enabled iad yet, just because its failure mode was different (slow booting nodes) so we wanted to monitor that separately... can i do ord for now and wait for Clark or fungi to be around to add iad back?15:44
opendevreviewJames E. Blair proposed opendev/zuul-providers master: Re-enable rax-ord  https://review.opendev.org/c/opendev/zuul-providers/+/95809915:45
opendevreviewMerged opendev/zuul-providers master: Re-enable rax-ord  https://review.opendev.org/c/opendev/zuul-providers/+/95809915:46
cloudnullThank you corvus15:46
fricklercorvus: thx, rechecked the change and will check later whether things progress better now. those stuck requests weren't related to the dfw issues, right?15:50
corvusdon't think so; it's an error in the ready node reuse code15:56
fungiokay, back at the keyboard for a bit to see what i missed16:15
fungiand yeah, the stuck jobs had node assignments pending for hours before we tried to bring rax classic back online yesterday, so should have been entirely unrelated16:17
Guest24535I am around now and can monitor reenabling rax-iad if cloudnull is still ok with that: https://review.opendev.org/c/opendev/zuul-providers/+/957957 is the change for that16:24
Guest24535oh heh I've guestified16:24
Guest24535one moment please16:24
*** Guest24535 is now known as clarkb16:25
fungiwelcome back Guest24535! ;)16:25
clarkbthanks16:26
fungii'll approve that change, i'm semi-around again now16:26
clarkbthanks I'm properly around at this point16:27
opendevreviewMerged opendev/zuul-providers master: Reenable rax IAD  https://review.opendev.org/c/opendev/zuul-providers/+/95795716:27
clarkbthen also in my backlog is https://review.opendev.org/c/opendev/zone-opendev.org/+/957981 and child16:27
fungilooking16:29
fungiapproved both, though on the second what do you think about just adding a www vhost to static and doing the redirect there since it already hosts a ton of them?16:30
fungialternatively we can probably point it to the lb as long as we add a redirect in the apache configs for the backends16:31
opendevreviewMerged opendev/zone-opendev.org master: Delete review02 DNS records  https://review.opendev.org/c/opendev/zone-opendev.org/+/95798116:32
clarkbfungi: ya I didn't think too far ahead on that one. I just didn't want us to accidentally enable the record and have it point to a nonexistent location16:32
clarkbbut now that apache is doing all the initial connections on giteas I think we could handle it there16:32
opendevreviewMerged opendev/zone-opendev.org master: Update commented out www.opendev.org record  https://review.opendev.org/c/opendev/zone-opendev.org/+/95798216:33
fungioh, good point, it was a less clean set of options in the past when we were doing haproxy->gitea instead of haproxy->apache->gitea16:34
clarkbI've shutdown the screen that was running the zuul reboots as those completed successfully16:38
fricklerif someone gets bored, there's a new big yaml reformatting change, I didn't check yet what happened to trigger this https://review.opendev.org/c/openstack/project-config/+/95799516:39
clarkbcorvus: we're largely caught up as of yesterday evening. I think the setup_hook change landed just a bit too late to get deployed though so we aren't running that in prod yet (but we did direct checking of it so I'm not too worried)16:39
clarkbfungi: frickler: do you recall what the set mtu to 1500 process was for the rax flex network? that is up next for me if the ssh keys and routers and networks etc all got created properly overnight. Then we can boot a mirror16:42
clarkbbased on https://grafana.opendev.org/d/fd44466e7f/zuul-launcher3a-rackspace?orgId=1&from=now-1h&to=now&timezone=utc&var-region=$__all rax classic iad seems to be working. we have ready and in use nodes implying that boot timeouts are not a major issue16:43
fungi`openstack --os-cloud=opendevzuul-rax-flex --os-region-name=SJC3 network set --mtu=1500 opendevzuul-network1`16:44
fungithat's from my shell history on bridge16:44
fricklerclarkb: from what I remember we needed to set that on the tenant network?16:44
fricklerlike that, yes16:44
clarkbperfect thanks. run_cloud_launcher.log seems to indicate success so I'll set that in IAD3 momentarily16:44
fricklermaybe check the current value first16:45
clarkbfrickler: ++16:45
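Combining frickler's suggestion with the earlier SJC3 command, a minimal sketch for the new region (the IAD3 network name is an assumption carried over from SJC3):

```shell
# Check the current MTU before changing anything:
openstack --os-cloud=opendevzuul-rax-flex --os-region-name=IAD3 \
    network show -c mtu -f value opendevzuul-network1
# Then lower it to 1500 to match the other regions:
openstack --os-cloud=opendevzuul-rax-flex --os-region-name=IAD3 \
    network set --mtu=1500 opendevzuul-network1
```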
fricklerwould be interesting to see whether rax fixed their deployments16:45
fungiwell, they already fixed them at least once to no longer be <1500; they merely made them much larger16:46
clarkbit's 3942 currently16:46
fungibut yeah, i dunno if they re-lowered them to 150016:46
fungiguess not, that's the setpoint i recall16:46
fricklerok, so bad configuration for a public cloud IMO, but up to them.16:46
funginot entirely broken at least, but yes likely to lead to pmtud negotiations and/or fragmentation/reassembly16:47
clarkbok I set the mtu to 1500 on that network in both accounts16:48
fungiand possible unreachability to/from some places impacted by pmtud black holes, though hopefully those parts of the internet are vanishingly rare these days16:48
fungii still have ptsd from dealing with customer sites where their "security" people had been convinced that icmp was dangerous so they just blocked all of it at their edge16:49
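A quick hedged check for the pmtud/fragmentation concerns above, using Linux iputils ping with fragmentation prohibited (the hostname is illustrative; the mirror is only created later in the day):

```shell
# 1472 bytes of ICMP payload + 28 bytes of headers = a 1500-byte packet;
# this fails with "message too long" if the path MTU is below 1500.
ping -M do -s 1472 -c 3 mirror01.iad3.raxflex.opendev.org
```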
clarkbthere is an Ubuntu 24.04 image in this cloud region. Do you think we should upload our own noble image or just use theirs? (we uploaded our own image into most other clouds since it wasn't otherwise available)16:49
fungithe whole "ping of death" scare did more lasting damage to the internet in misplaced security filters than the actual packets ever could16:50
clarkbI'm somewhat inclined to just use the cloud provided image16:50
clarkbour config management should coerce it to look basically identical to whatever we would upload I think16:50
fungii think i deleted the noble-server-cloudimg-amd64.img from my homedir that i had uploaded to the other regions16:52
clarkbya we would probably just download the ubuntu published image and reupload which is likely the same result as what that cloud image is as well16:52
fungi`wget https://cloud-images.ubuntu.com/noble/current/{noble-server-cloudimg-amd64.img,SHA256SUMS{,.gpg}}` is how i pulled it, fwiw16:52
fungifollowed by `gpg --verify SHA256SUMS.gpg SHA256SUMS && sha256sum -c --ignore-missing SHA256SUMS`16:53
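The upload step that would have followed is sketched below (flags and image name are illustrative; in the end the cloud-provided image was used instead, see later in the log):

```shell
openstack --os-cloud=opendevzuul-rax-flex --os-region-name=IAD3 \
    image create --disk-format qcow2 --container-format bare \
    --file noble-server-cloudimg-amd64.img ubuntu-24.04-noble
```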
keekzcloudnull corvus: dfw should be better now16:57
fungikeekz: thanks for the followup!16:58
clarkbdid someone else want to push a change up to enable that region? I'm working on a new mirror in iad317:01
fungigimme a sec i can push a revert17:01
opendevreviewJeremy Stanley proposed opendev/zuul-providers master: Reapply "Reenable rax DFW and ORD providers"  https://review.opendev.org/c/opendev/zuul-providers/+/95810417:03
fungithough i guess the subject is now misleading since part of it was reverted already17:03
fungihappy to amend with revised commit message if anyone cares17:03
clarkbya maybe amend it just to avoid confusion if we have problems in the future. It will be clearer that ord was ok17:05
fungican do17:13
opendevreviewJeremy Stanley proposed opendev/zuul-providers master: Reapply "Reenable rax DFW provider"  https://review.opendev.org/c/opendev/zuul-providers/+/95810417:14
fungibetter?17:14
clarkbapproved17:16
opendevreviewMerged opendev/zuul-providers master: Reapply "Reenable rax DFW provider"  https://review.opendev.org/c/opendev/zuul-providers/+/95810417:17
fricklerfwiw I'd be fine with using the cloud ubuntu image17:19
opendevreviewClark Boylan proposed opendev/zone-opendev.org master: Add mirror01.iad3.raxflex to our DNS zone  https://review.opendev.org/c/opendev/zone-opendev.org/+/95810617:22
opendevreviewClark Boylan proposed opendev/system-config master: Add mirror01.iad3.raxflex to our inventory  https://review.opendev.org/c/opendev/system-config/+/95810717:26
clarkbfrickler: ya that is what I ended up doing17:26
clarkbI think the main reason we didn't use cloud images previously was that they were simply not available, so I took advantage of the image being available here to simplify things. It seems to have worked fine and you can ssh in to the IP address in ^ and check it out first too17:26
opendevreviewMerged opendev/zone-opendev.org master: Add mirror01.iad3.raxflex to our DNS zone  https://review.opendev.org/c/opendev/zone-opendev.org/+/95810617:30
opendevreviewClark Boylan proposed opendev/zuul-providers master: Add raxflex iad3 region to zuul's resource pools  https://review.opendev.org/c/opendev/zuul-providers/+/95810917:33
clarkbcloudnull: ^ I noticed that we're still set to a 10 instance limit and unlimited floating IPs in iad3. I've hardcoded us to the 10 instance limit as a conservative starting point but can update that to whatever we set the floating ip limit to later17:34
opendevreviewJeremy Stanley proposed opendev/system-config master: Use Jammy for our Kerberos servers  https://review.opendev.org/c/opendev/system-config/+/95811217:47
fungiinfra-root: ^ i couldn't find anything similar for the afs db or file servers, am i blind or do we not do test deployments of those?17:47
clarkbfungi: in zuul.d/system-config-roles.yaml we have tests for the openafs role. I think that may only be the client side though17:49
fungiyeah, nothing that deploys test servers on specific platforms17:50
clarkbI don't see any job that seems to run the service-afs playbook17:50
fungiokay, so good enough17:50
clarkbyou could add a job that does ^ but I'm wondering if part of the reason for that is it isn't fully automated?17:50
clarkboh I wonder if some of the problem is with the domain and authentication and all that17:50
clarkbsince its a global filesystem it probably isn't trivial to spin up something working without making it a different domain?17:51
fungiyeah, i assumed it was complexities of actually having a working subtree in global afs17:52
clarkbdid anyone else want to review the mirror01.iad3.raxflex server addition? I'll probably approve it in ~10 minutes if there is no -1 between now and then18:21
fungiyeah, i just wanted to make sure it was in dns before approving18:26
corvuslgtm18:27
fungisince my jammy change for the kerberos servers is passing i'm going to stick them and all the afs servers into the emergency disable list now18:27
clarkbfungi: I think we have a documented process for kerberos server outages fwiw18:28
fungiafs servers too18:34
fungii'm pulling them all up18:34
fungihttps://docs.opendev.org/opendev/system-config/latest/kerberos.html#no-service-outage-server-maintenance and https://docs.opendev.org/opendev/system-config/latest/afs.html#no-outage-server-maintenance for the record18:42
fungii've placed the following servers temporarily in the emergency disable list on bridge in order to start working through upgrades over the rest of the week: afs01.dfw.openstack.org, afs01.ord.openstack.org, afs02.dfw.openstack.org, afsdb01.openstack.org, afsdb02.openstack.org, afsdb03.openstack.org, kdc03.openstack.org, kdc04.openstack.org18:48
fungithese are the only afs and kerberos servers i found in our inventory18:48
fungiper the no-outage docs i'm upgrading kdc04 first since it's the inactive replica18:50
fungipackages for focal are already up to date, but it apparently needs a clean reboot before i can run do-release-upgrade to jammy. i expect this will be common across the entirety of the set18:52
fungiit starts an extra sshd on 1022/tcp, for reference18:58
Clark[m]Our iptables likely blocks that fwiw. I'm on matrix now due to lunch18:59
fungi"Sorry, this storage driver is not supported in kernels for newer releases. There will not be any further Ubuntu releases that provide kernel support for the aufs storage driver. Please ensure that none of your containers are using the aufs storage driver, remove the directory /var/lib/docker/aufs and try again."19:00
fungii don't think we rely on it?19:00
Clark[m]That should be fine. I didn't even think we run docker on the kdcs19:00
fungiyeah, don't think we do19:01
Clark[m]Are there any containers? If not then nothing should use aufs19:01
Clark[m]And I would expect new containers to have stopped using aufs at some point19:02
fungiCommand 'docker' not found19:02
fungisurvey says "no"19:02
opendevreviewMerged opendev/system-config master: Add mirror01.iad3.raxflex to our inventory  https://review.opendev.org/c/opendev/system-config/+/95810719:03
Clark[m]That change will run all the jobs but you've got the hosts in the emergency file so shouldn't matter19:03
fungiright19:03
fungii did a `rm /var/lib/docker/aufs` and retried do-release-upgrade, seems maybe happier now19:04
Clark[m]Ack19:04
fungier, rm -rf because it was a directory19:05
Clark[m]I suspect it was empty too and just auto created by something for some reason at some point :)19:05
fungii concur19:06
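The per-host recipe that emerged here, condensed into a hedged sketch (do-release-upgrade is interactive and will prompt about modified config files, as noted below):

```shell
apt-get update && apt-get -y dist-upgrade  # bring focal packages fully current
reboot                       # the upgrader wants the newest installed kernel running
rm -rf /var/lib/docker/aufs  # stale directory that trips the aufs storage driver check
do-release-upgrade           # focal -> jammy; also opens a fallback sshd on 1022/tcp
```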
fungiso far i've only told it to keep our sshd and sudoers config changes, anything else i expect ansible can (re-)correct19:28
clarkbhttps://mirror.iad3.raxflex.opendev.org/ubuntu/ has content now. I think it should be ok to land https://review.opendev.org/c/opendev/zuul-providers/+/958109 as a result. But I know that cloudnull wants to adjust quotas there. Not sure if we want to wait for that to happen before we try to use the 10 instance quota19:39
fungilgtm, yep19:51
fungikdc04 is up and running on jammy now. i'll do the switchover steps in our docs next19:57
fungiDatabase propagation to kdc04.openstack.org: SUCCEEDED19:59
corvusclarkb: did you verify the flavor names?  (they are different in the other 2 flex regions)20:06
corvus(i mean to say, dfw3 and sjc3 are different from each other, so i wonder if iad3 should be different still, or is the same as sjc3)20:07
fungioh! good memory20:07
fungigranted, they should have become readily apparent when booting the new mirror instance20:08
clarkbcorvus: I did. sjc3 and iad3 have matching flavors20:08
fungiwonder why dfw3 is the odd one out20:09
clarkbactually iad3 is a subset. But the three flavors we use are in the subset20:09
corvusclarkb: cool, lgtm then... 20:09
corvusthe zuul-providers config for iad3 == sjc320:10
corvus4 flavors there in your change20:10
clarkbya I had to check for booting the mirror as well so made sure everything lined up20:10
clarkbthe fourth is a duplicate we just alias the nested virt to the 8gb flavor iirc. But yes I checked they are in there20:10
fungidid i miss the zuul-providers addition?20:10
clarkbgp.0.4.4 gp.0.4.8 and gp.0.4.16 show up20:11
clarkbfungi https://review.opendev.org/c/opendev/zuul-providers/+/958109 its this change20:11
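A quick hedged way to confirm those flavors exist in the new region (cloud and region names as used earlier in the channel):

```shell
openstack --os-cloud=opendevzuul-rax-flex --os-region-name=IAD3 \
    flavor list -c Name -f value | grep -E '^gp\.0\.4\.(4|8|16)$'
```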
corvusah yes, 4 of our flavors, 3 of theirs.  i thought you meant that iad3 was a subset of sjc3 (but it's not)20:13
corvusclarkb: i think from our pov, it's okay to start with the 10.20:13
clarkbcorvus: oh sorry I meant the flavors on the cloud side of iad3 are a subset of the flavors in sjc320:17
clarkbbut those that do exist overlap20:17
clarkbcorvus: in that case I think we can probably land the change and see how it goes?20:17
clarkbthen after the quota is adjusted we can change that value20:17
fungilgtm, thanks!20:24
opendevreviewMerged opendev/zuul-providers master: Add raxflex iad3 region to zuul's resource pools  https://review.opendev.org/c/opendev/zuul-providers/+/95810920:24
corvusclarkb: oh that's interesting about the cloud flavors.  gtk.  all caught up now.  :)20:24
fungire-ran /usr/local/bin/run-kprop.sh on kdc03 after upgrade to jammy, all done there now20:40
fungistarting on afsdb01 with our no-downtime maintenance instructions20:43
fungiInstance ptserver, temporarily disabled, has core file, currently shutdown. Instance vlserver, temporarily disabled, currently shutdown.20:44
clarkbI think for the fileservers we have to transition the primary volume away from the host being updated. That might be a bit more painful unless things are already distributed in a way that just works for all but one20:48
clarkbbut this already seems like good progress!20:48
fungiyeah20:49
fungialready reading ahead to those20:49
fungibut we have 3 db servers to get through first20:50
fungiand then maybe repeat from jammy to noble20:50
clarkbone thing that just occurred to me is you may want to remove the ansible fact cache files for those hosts before you remove them from the emergency file20:54
clarkbthat way ansible rereads all the facts as they are now rather than potentially relying on old fact info20:54
fungiany idea where that is these days?20:58
fungibut good call, yep20:58
clarkb/var/cache/ansible/facts on bridge I think20:59
fungik, thx20:59
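A minimal sketch of that cleanup, assuming one cached fact file per host under the path clarkb names:

```shell
# On bridge: drop stale cached facts so the next Ansible run regathers them
# from the freshly upgraded host (hostname illustrative).
sudo rm /var/cache/ansible/facts/kdc04.openstack.org
```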
fungiInstance ptserver, has core file, currently running normally. Instance vlserver, currently running normally.21:12
fungii'll move on to afsdb02 and 0321:13
cloudnullclarkb: I can go get those quotas updated now.21:20
clarkbcloudnull: ack I don't think we're in a hurry. But we have everything configured on our side to take advantage of them once set now21:21
clarkbfungi: are you having to do a preparatory reboot on all of these before beginning the update process?21:24
clarkband did any other server complain about aufs?21:24
fungiclarkb: yeah, all of them want a reboot before running, and i've just preemptively been removing that directory21:26
clarkback thanks21:27
fungibasically if there's been a kernel update applied since the last reboot they want to be rebooted first, so that's every last one really21:27
fungibecause we reboot them infrequently21:27
clarkbmakes sense21:28
fungihopefully afsdb02 will be done soon and i can move on to 0321:28
fungifinally finished the yardwork so i can focus on this a little more intently21:28
fungithough i'll probably grab a shower while afsdb03 is upgrading21:29
fungiotherwise christine will complain21:29
clarkbonce you get to a good pausing point you can always pick it back up again in the morning21:33
clarkbI have an appointment tomorrow morning but am around otherwise21:35
clarkbI'm off to get my annual eyeball scan21:36
fungiyeah, other than meetings i've got nothing pressing tomorrow21:36
fungiassuming my cables don't get sucked up by a hurricane (unlikely)21:37
clarkbomnomnom21:37
fungicopper ramen21:38
fungiworking on 03 now21:46
fungiokay, afsdb03.openstack.org is now upgraded to jammy. that just leaves the file servers, which i'll pick back up with in my morning23:04
clarkband we're leaving all of the hosts in the emergency file for now? (I think that is fine just double checking)23:07
