tonyb | That'd be good, all round | 00:05 |
---|---|---|
opendevreview | Tony Breeds proposed opendev/system-config master: DNM: Initial dump of mediawiki role and config https://review.opendev.org/c/opendev/system-config/+/921322 | 00:55 |
*** Guest22 is now known as diablo_rojo | 00:56 | |
*** liuxie is now known as liushy | 05:43 | |
yoctozepto | infra-root: a kind reminder to consider dropping nebulous' zuul tenant: https://review.opendev.org/c/openstack/project-config/+/921725 | 10:44 |
fungi | preliminary testing indicates openaf-modules-dkms 1.8.12~pre1-1 on sid has fixed the lkm build failure for me finally | 10:53 |
fungi | er, openafs-modules-dkms | 10:53 |
frickler | infra-root: seems our debian (bookworm) mirrors are outdated, although I cannot find any obvious error in the update logs. see this error from kolla on a docker image https://paste.openstack.org/show/buh7fIcz87FqGlt9YMPj/ | 11:02 |
frickler | checking on a held node, we indeed only seem to have the old version available https://paste.opendev.org/show/b356BlhXztQHkQuheNZq/ | 11:02 |
fungi | https://static.opendev.org/mirror/logs/reprepro/debian-security-bookworm.log | 11:06 |
fungi | last updated 3 hours ago | 11:06 |
frickler | ah, no, the issue is with the base repo, not security | 11:09 |
frickler | seems related to the point releases that were made on friday or so | 11:09 |
frickler | "The lock file '/afs/.openstack.org/mirror/debian/db/lockfile' already exists." | 11:10 |
frickler | looks like one update took too long and now it is stuck? | 11:10 |
fungi | aha, yes we run it under a timeout in case it gets hung, but prematurely terminating it can be worse sometimes | 11:10 |
fungi | guess there were too many packages to update within the timeout | 11:11 |
frickler | see the end of "reprepro/debian.log.1" | 11:11 |
frickler | since there is no reprepro process currently running, it should be safe to just remove the lock and let it retry? or do you want to run it manually? | 11:13 |
fungi | we should probably clear and then manually re-hold the lock and run the script in a screen session with the timeout disabled | 11:20 |
fungi | i can do it but am on a conference call for the next 40 minutes | 11:20 |
opendevreview | Martin Kopec proposed opendev/bindep master: Drop centos 8 stream jobs https://review.opendev.org/c/opendev/bindep/+/923151 | 11:49 |
fungi | okay, i've cleared and manually held our script lockfile, then cleared reprepro's lockfile, and now it's trying to update again | 12:18 |
fungi | in a root screen session on mirror-update.o.o | 12:18 |
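The recovery procedure fungi describes can be sketched roughly as follows. This is an illustration based only on the log above: the lock paths and the update-script name are assumptions, not the actual mirror-update tooling.

```
# On mirror-update.opendev.org, inside a root screen session.

# 1. Hold the wrapper script's flock manually so the periodic job cannot
#    start a competing run (lock path is hypothetical):
flock -n /var/lock/reprepro-debian.lock sleep infinity &

# 2. Remove reprepro's stale lockfile left behind by the timed-out run:
rm /afs/.openstack.org/mirror/debian/db/lockfile

# 3. Re-run the mirror update with no timeout wrapper, so a large point
#    release can take as long as it needs (script name is hypothetical):
reprepro-mirror-update debian

# 4. Run it a second time to confirm it is now a no-op, then kill the
#    background flock holder to release the lock.
```

Step 4 matches what fungi does later in the log: a confirmation rerun before releasing the script flock.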
NeilHanlon | infra-root: icymi: https://openwall.com/lists/oss-security/2024/07/01/3 | 12:35 |
fungi | NeilHanlon: yep, woke up to it. thankfully debian/bookworm already has fixed packages so ubuntu's probably aren't far behind | 12:54 |
fungi | and all our servers are 64-bit, so it sounds like if it is still exploitable (inconclusive) it probably takes considerably longer than the 6-8 hours cited for 32-bit systems | 12:55 |
fungi | but if there are delays in getting fixed packages we'll probably roll out the suggested mitigation temporarily | 12:56 |
NeilHanlon | fungi: ack - figured Debian was on top of it. Fedora world still working on drinking coffee :D Rocky Linux have a fix in their SIG/Security for it, which I get to shepherd out to all our fleet. | 13:04 |
Clark[m] | fungi: packages appear to be available. We may want to check if daily updates pulled them already and if not invoke some Ansible to run unattended upgrades sooner than the next daily run | 14:00 |
fungi | looks like focal and earlier aren't affected | 14:05 |
frickler | Clark[m]: the updates weren't there earlier this morning, so not pulled yet | 14:05 |
fungi | took some trial and error to find a new enough server to be affected | 14:06 |
fungi | on ze01: | 14:06 |
fungi | openssh-server: | 14:06 |
fungi | Installed: 1:8.9p1-3ubuntu0.7 | 14:06 |
fungi | Candidate: 1:8.9p1-3ubuntu0.10 | 14:07 |
fungi | so yeah, updates pending but not installed yet | 14:07 |
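The Installed/Candidate output fungi pasted is what `apt-cache policy` prints, which makes it a quick check for whether the daily unattended-upgrades run has already pulled a fix:

```
apt-cache policy openssh-server
# Installed: the version currently on disk
# Candidate: the newest version the configured repos offer
# If the two differ, the update is pending but not yet installed.
```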
fungi | and i don't think we have any servers running on noble yet? unless tonyb booted a new mirror server already | 14:08 |
fungi | so it'll just be anything running jammy we need to `apt update && apt install openssh-server` on | 14:08 |
frickler | interesting that they even pushed updates to noble, when the report said it was not exploitable. but yes, it should be only jammy for us afaict | 14:11 |
fungi | somebody happen to have the syntax handy for filtering the inventory by distro version? | 14:11 |
Clark[m] | fungi: you can also just invoke unattended upgrades I think then you don't need all the -y flags. | 14:12 |
Clark[m] | I don't think we get auto grouping by release so we have to do it manually or just do it for everything | 14:13 |
Clark[m] | Manually means when: ansible_distro_release == jammy or whatever the var and value are | 14:14 |
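The fact-based filtering Clark is gesturing at can be written as a small playbook; the actual fact variable is `ansible_distribution_release` (Clark's `ansible_distro_release` is an approximation). A minimal sketch, with the filename hypothetical:

```
# upgrade-openssh-jammy.yaml -- limit the upgrade to Jammy hosts only.
- hosts: all
  become: true
  tasks:
    - name: Upgrade openssh-server on Jammy hosts
      apt:
        name: openssh-server
        state: latest
        update_cache: true
      when: ansible_distribution_release == "jammy"
```

In the end the team took Clark's simpler route below: run unattended-upgrades everywhere rather than filter by release.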
fungi | unrelated, in debian package mirroring news the manual reprepro run completed successfully, i'm rerunning it now just to make sure it's a no-op before i release the script flock | 14:17 |
Clark[m] | I think the simplest approach is likely to invoke updates for all systems (using unattended upgrades). It will take a bit of extra time but probably still faster than doing the more precise thing | 14:21 |
fungi | like this? | 14:30 |
fungi | ansible all -m shell -a 'apt update && apt install openssh-server' | 14:30 |
Clark[m] | fungi: yes or I think `unattended-upgrades` is a command that updates and installs available packages and we're doing it once a day so doing it sooner should be safe | 14:42 |
Clark[m] | You might need extra flags to directly install a specific package with apt-get | 14:44 |
Clark[m] | You can limit to a single host and see that it works before applying it globally too | 14:46 |
fungi | aha, yep, good call | 14:48 |
fungi | i'll test it on ze01 to confirm | 14:48 |
fungi | debian mirror update is confirmed complete, i've cleaned up the screen session on mirror-update now | 14:51 |
fungi | #status log Manually completed Debian mirror updates, which had been hung due to timeouts from the latest stable point release over the weekend | 14:52 |
opendevstatus | fungi: finished logging | 14:52 |
fungi | after `ansible ze01.opendev.org -m shell -va 'unattended-upgrades'` the server seems to be running the latest openssh-server package | 14:54 |
fungi | i'll swap ze01.opendev.org to all and run again | 14:54 |
*** darmach7 is now known as darmach | 15:04 | |
frickler | fungi: did you verify that it also restarts the service? that happening may depend on some apt settings | 15:08 |
fungi | a handful reported nonzero return codes, but i checked those after and all have the new openssh-server version installed | 15:08 |
frickler | installed!=running | 15:08 |
fungi | frickler: i spot checked a few of the ones that got upgrades and the process start time for sshd is approximately the time upgrades were performed | 15:10 |
fungi | e.g. on zm01: | 15:10 |
fungi | root 121643 0.0 0.4 15432 9388 ? Ss 15:00 0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups | 15:10 |
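The spot check fungi describes (comparing the sshd listener's start time against the upgrade time) can be done per host with something like the following sketch; the exact commands are assumptions, not what was actually run:

```
# When did the oldest (listener) sshd process start?
ps -o lstart= -p "$(pgrep -o -x sshd)"

# When was the package last upgraded, per dpkg's log?
grep 'upgrade openssh-server' /var/log/dpkg.log | tail -1

# If the process start time is at or after the upgrade time, the
# running daemon is the patched one.
```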
frickler | ok, that's a better check, thx | 15:11 |
fungi | i also checked that on all of the jammy servers where ansible reported a nonzero exit too, just to be sure | 15:14 |
clarkb | as a heads up I did get the laptop debugging media that I needed so I'm hoping to work on that today which will probably mean a fair bit of time away from irc | 15:21 |
opendevreview | Monty Taylor proposed zuul/zuul-jobs master: Hook poetry into ensure-python and build-python-release https://review.opendev.org/c/zuul/zuul-jobs/+/923094 | 16:00 |
clarkb | reminder that now is a good time to put things on the meeting agenda or let me know what should be edited there | 17:34 |
fungi | taking advantage of the brief break between storms and suddenly cooler temperatures to get some outdoor exercise, will be back in an hour-ish | 18:26 |
clarkb | ok laptop stuff is done for now. Time to put a meeting agenda together | 20:15 |
clarkb | I'm going to drop the mailman 3 throughput item since that seems well solved now (thanks again!) | 20:21 |
fungi | yep, sounds good | 20:29 |
loacker | Hi, I'm having issue downloading tarball from opendev gitea for any projects, I tried so far python-ironicclient, python-barbican and tempest just to name a few, I can wait and try the download next day, is there something going on or a status page? | 21:34 |
clarkb | loacker: I don't think we're aware of anything going on. That said we generally recommend against using those tarballs as they don't include necessary version info to properly build packages | 21:36 |
clarkb | loacker: instead you should clone the repo or fetch sdists from pypi etc | 21:36 |
loacker | Good point, thank you! I can clone it but I wanted to inform that for some reason the download feature isn't working. | 21:38 |
clarkb | ya it's weird the server logs indicate an http 200 response | 21:38 |
clarkb | but with almost no data transferred and my browser doesn't start a download for that amount either | 21:39 |
loacker | clarkb: exactly, looks like also the searching functionality isn't working | 21:41 |
clarkb | loacker: can you be more specific about that? I've just tested a random string search and it seems to work | 21:42 |
clarkb | https://opendev.org/openstack/python-ironicclient/search?q=python is my naive check | 21:42 |
clarkb | my hunch is that the archive thing is a proper bug in gitea since I know in the past it was functional | 21:43 |
loacker | clarkb: I apologise, that isn't true! I made a mistake and I was searching under the opendev repository | 21:43 |
loacker | clarkb: yes I can confirm that it worked | 21:46 |
clarkb | thanks. Looking at the tarball thing more closely it appears that the 19 byte response is {"complete": false} | 21:46 |
clarkb | and it does that in a loop. I suspect that it is asynchronously trying to put the artifact together for you in the background and that eventually maybe a download will start? However, the lack of any signal to the user seems buggy at the very least | 21:47 |
clarkb | and not sure if it will ever respond back again. This is interesting new behavior | 21:47 |
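The behavior clarkb describes, a 200 response whose 19-byte body is `{"complete": false}`, returned in a loop while gitea assembles the archive asynchronously, can be illustrated with a small check. The URL in the comment is only an example of the endpoint shape; the response value here is simulated rather than fetched:

```shell
# Simulated body from a gitea archive endpoint, e.g.
# https://opendev.org/openstack/tempest/archive/master.tar.gz
# (a real client would fetch it with: curl -s "$url")
resp='{"complete": false}'

# The body is exactly 19 bytes and signals "not ready yet"; a polling
# client would have to retry until the content changes to the archive.
if [ "$(printf '%s' "$resp" | wc -c)" -eq 19 ]; then
  echo "archive still generating; retry later"
fi
```

As the log notes, the response never changed even after 26 minutes, so either the background generation was failing or the cache was being cleared out from under it.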
clarkb | but ya the repos you referenced should all use PBR for packaging and they all rely on git history to properly set up version info when building packages. Pulling the raw tarball is almost certainly not the correct thing for those | 21:48 |
loacker | I certainly agree with you, and I'll switch to using PBR asap. | 21:51 |
opendevreview | Clark Boylan proposed opendev/system-config master: DNM intentional gitea failure to hold a node https://review.opendev.org/c/opendev/system-config/+/848181 | 21:55 |
clarkb | I've put a new hold in place for ^ so that we can test tarball stuff on the 1.22.0 version we haven't updated to yet | 21:55 |
loacker | I see, thank you for your prompt response on this! | 21:56 |
fungi | i take it we didn't get around to disabling the tarballs feature in gitea? i know the caching of them was consuming all disk space any time a crawler decided to hit all their urls | 22:10 |
fungi | maybe some mitigation we added for that broke them more generally? | 22:10 |
clarkb | fungi: there wasn't any way to disable them iirc | 22:10 |
clarkb | which is why our mitigation was to clear the cache daily instead of whatever the default schedule was (I forget now) | 22:11 |
clarkb | and ya maybe clearing the cache often results in this poor performance | 22:11 |
fungi | aha, right. i couldn't recall how we had addressed that | 22:11 |
clarkb | it's been 26 minutes since I've had the developer tools open in ff for a tarball download request and the state hasn't changed I am still getting the same json complete false blob back | 22:11 |
clarkb | but also it could just be a gitea bug | 22:11 |
clarkb | we updated to 1.21.11 after we changed the cache cleanup cron to run daily | 22:12 |
clarkb | last call for meeting stuff. I'll get that sent out around 2300 UTC | 22:42 |
tonyb | I was hoping to have some updates before you sent it out but they won't be ready in 15mins so I'll just share during the meeting | 22:45 |
opendevreview | Ian Wienand proposed opendev/bindep master: Drop centos 8 stream jobs https://review.opendev.org/c/opendev/bindep/+/923151 | 22:48 |
clarkb | tonyb: will they be ready by 00:00? I can wait until then but will have to switch to dinner mode at that point | 22:54 |
opendevreview | Merged opendev/bindep master: Drop centos 8 stream jobs https://review.opendev.org/c/opendev/bindep/+/923151 | 23:06 |
clarkb | I eventually closed the browser tab that was trying to fetch the tarball... | 23:54 |
clarkb | it never did anything more than what I had described previously | 23:54 |
tonyb | clarkb: Sorry I missed your reply. Go ahead and send when you're ready to switch to dinner mode | 23:57 |
clarkb | ok I was just about to send it :) | 23:57 |
tonyb | \o/ | 23:58 |
clarkb | and sent | 23:59 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!