| @clarkb:matrix.org | Gitea09 backups failed again but that happened about 20 minutes before ^ merged so I think it is unrelated | 00:02 |
|---|---|---|
| @bbezak:matrix.org | Hi. not sure if that was reported, but git operations on opendev are terribly slow | 08:52 |
| @bbezak:matrix.org | LLMs hammering again? | 08:54 |
| @fungicide:matrix.org | i'm going to grab an early lunch out now that the roads are mostly clear again, back shortly (i hope!) | 15:46 |
| @clarkb:matrix.org | infra-root the keycloak backup failure does appear to be related to the mysql backup streaming script failing. I have to pop out in a few for a doctors appointment so not sure I'll get to debugging that for a bit | 16:00 |
| @clarkb:matrix.org | we can in theory manually run the /etc/borg-streams/mysql script and see if it fails in a consistent manner that we can debug though | 16:01 |
| @clarkb:matrix.org | system load seems reasonable across the giteas so any slowness right now is probably not bots crawling the site | 16:03 |
| @clarkb:matrix.org | looking at etherpad, gitea, gerrit, and mailman their db backups seem to have succeeded to the vexxhost backup server. Backups to the rax server should start in an hour or two. It's possible that this was just a fluke or maybe it is something specific to how keycloak runs its database server? | 16:15 |
| @clarkb:matrix.org | note I only looked at the backup logs not the backups themselves on each of those hosts | 16:16 |
| @fungicide:matrix.org | i'll take a look at the keycloak situation in a few minutes | 17:12 |
| @fungicide:matrix.org | but yeah, that's the only new failure i see | 17:13 |
| @fungicide:matrix.org | mariadb is listening on localhost:3306 at least | 17:14 |
| -@gerrit:opendev.org- | Monty Taylor https://matrix.to/#/@mordred:inaugust.com proposed: [openstack/project-config] 975673: Add wandertracks-android repo for WanderTracks https://review.opendev.org/c/openstack/project-config/+/975673 | 17:24 |
| @mordred:waterwanders.com | I grew an extra repo | 17:25 |
| @clarkb:matrix.org | mordred: and I can't wait to learn what you find about android testing in zuul :) | 17:25 |
| @mordred:waterwanders.com | Clark: me neither! :) | 17:27 |
| @mordred:waterwanders.com | locally tests run against an emulator ... but that is kvm backed, and of course we all love our friend nested virt | 17:27 |
| @mordred:waterwanders.com | so maybe they'll just run against an emulator REALLY SLOWLY | 17:28 |
| @fungicide:matrix.org | `mysqldump: Got error: 2002: "Can't connect to server on '127.0.0.1' (115)" when trying to connect` | 17:30 |
| @mordred:waterwanders.com | port vs socket / default mysql db permissions issue? | 17:31 |
| @fungicide:matrix.org | i think the container networking must have split family binding | 17:33 |
| @fungicide:matrix.org | we configure mariadb to listen on ::1 and can't reach it on 127.0.0.1 | 17:34 |
| @fungicide:matrix.org | if i adjust the mysqldump to do `-h ::1` it works fine | 17:34 |
| @fungicide:matrix.org | or the socket binding mariadb uses is af-specific | 17:35 |
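The split-family loopback behavior fungi describes can be reproduced with a short Python sketch (no MariaDB needed; the port is OS-assigned and hypothetical): a server bound only to `::1` refuses connections on `127.0.0.1`, which matches the shape of mysqldump's error 2002.

```python
import socket

# Bind a listener only to the IPv6 loopback, like mariadb configured
# to listen on ::1. Port 0 asks the OS for any free port.
server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
server.bind(("::1", 0))
server.listen(1)
port = server.getsockname()[1]

# The IPv4 loopback is a separate address: connecting there is refused,
# mirroring mysqldump's "Can't connect to server on '127.0.0.1'".
try:
    socket.create_connection(("127.0.0.1", port), timeout=2)
    ipv4_ok = True
except OSError:
    ipv4_ok = False

# The IPv6 loopback connects fine, like `mysqldump -h ::1`.
client = socket.create_connection(("::1", port), timeout=2)
client.close()
server.close()

print(ipv4_ok)  # False on a dual-stack host: 127.0.0.1 was never bound
```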
| @clarkb:matrix.org | We do have nested virt labels now | 17:36 |
| @mordred:waterwanders.com | oh - nice! I'll try those out once I get that far | 17:36 |
| @clarkb:matrix.org | fungi: oh weird. so if we update the script to use ::1 it will probably work fine then | 17:36 |
| @fungicide:matrix.org | or maybe we should tell it to bind to `localhost` instead of literal `::1` | 17:38 |
| @clarkb:matrix.org | I wonder if they treat localhost special when binding and only do the socket if you do that | 17:44 |
| @clarkb:matrix.org | similar to how it works when you connect with the client | 17:44 |
| @clarkb:matrix.org | I think we can start with using ::1 for now to get things working again then decide what if any proper refactoring should happen from there? Do you want to write that change or should I? | 17:44 |
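On the localhost-versus-literal question, a quick way to see what `localhost` resolves to on a given host is `getaddrinfo`; note this is plain name resolution only, and is separate from the MariaDB client's own special-casing of the literal name `localhost` (which it treats as a request for the unix socket):

```python
import socket

# Resolve "localhost" and collect the loopback addresses it maps to.
# On dual-stack systems this typically yields both 127.0.0.1 and ::1,
# which is why binding to the name (rather than a single literal
# address) can cover both families, depending on the server.
addrs = {info[4][0] for info in socket.getaddrinfo("localhost", None)}
print(sorted(addrs))
```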
| @fungicide:matrix.org | i can | 18:02 |
| @clarkb:matrix.org | https://review.opendev.org/c/opendev/system-config/+/975319 would be a good one to get in soon too so that we can start updating images into the newly refreshed trixie builds | 18:09 |
| @fungicide:matrix.org | Clark: are we still building bullseye images for uwsgi-base? if not, you could just drop the platform profiles from libffi8 instead of extending the list | 18:12 |
| @fungicide:matrix.org | though i'm okay with the change either way | 18:13 |
| @fungicide:matrix.org | for that matter, the platform:dpkg profile is likely unnecessary these days since we're not building centos images any more (did we ever?) | 18:14 |
| @clarkb:matrix.org | fungi: we are not building bullseye anymore. But I don't think it hurts to have there | 18:14 |
| @clarkb:matrix.org | I can't ever remember building anything but debian images for this stuff. | 18:15 |
| @fungicide:matrix.org | yeah, i'm fine approving as-is | 18:15 |
| @clarkb:matrix.org | I was trying to keep the diff small and bindep should ignore the non-applicable data anyway | 18:16 |
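For readers unfamiliar with bindep, the platform profiles under discussion look roughly like this; the package names and profile list here are illustrative, not the actual uwsgi-base `bindep.txt`:

```
# Install libffi8 only on the listed Debian releases; bindep skips
# entries whose bracketed profiles don't match the running platform.
libffi8 [platform:debian-bookworm platform:debian-trixie]
# A dpkg-wide profile, likely redundant when only Debian images are built:
libffi-dev [platform:dpkg]
```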
| -@gerrit:opendev.org- | Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: [opendev/system-config] 975696: Backup keycloak database over IPv6 loopback https://review.opendev.org/c/opendev/system-config/+/975696 | 19:18 |
| -@gerrit:opendev.org- | Zuul merged on behalf of Clark Boylan: [opendev/system-config] 975319: Update python base images https://review.opendev.org/c/opendev/system-config/+/975319 | 19:19 |
| @clarkb:matrix.org | fungi +2 from me on that one but we should probably go ahead and approve it before the 0500 ish backups early tomorrow | 19:21 |
| @clarkb:matrix.org | cool all of those container images promoted successfully according to zuul | 19:23 |
| @clarkb:matrix.org | fungi: have time for a quick review on https://review.opendev.org/c/opendev/grafyaml/+/975334 I think that is the most trivial of the trixie updates | 19:23 |
| @fungicide:matrix.org | looking | 19:27 |
| @fungicide:matrix.org | lgtm, let's see how it fares on deploy | 19:28 |
| -@gerrit:opendev.org- | Zuul merged on behalf of Clark Boylan: [opendev/grafyaml] 975334: Update to python3.12 trixie container https://review.opendev.org/c/opendev/grafyaml/+/975334 | 19:37 |
| @clarkb:matrix.org | that may not deploy directly. We may need to do something to force the infra-prod-service-grafana job to run. but the ci jobs for grafyaml include testing against grafana so I'm sure it's fine | 19:40 |
| @fungicide:matrix.org | oh right, it can't trigger that job because it's a change for another repository | 19:56 |
| @fungicide:matrix.org | though are dispatch jobs a possible solution to that, longer term? | 19:57 |
| @fungicide:matrix.org | i'm still a little fuzzy on how those work | 19:57 |
| @fungicide:matrix.org | i guess the real issue is that the secret for connecting to the bastion to run our nested deploy ansible is for system-config, so grafyaml doesn't have clearance to use it even if it could run the job from its own pipeline config | 19:58 |
| @fungicide:matrix.org | or, not secret, project ssh key? | 19:59 |
| @fungicide:matrix.org | regardless, roughly the same problem | 19:59 |
| @fungicide:matrix.org | yeah, i guess a dispatch job could still only run other jobs for the same project pipeline? which makes sense security-wise | 20:01 |
| @clarkb:matrix.org | yup its the project ssh key | 20:06 |
| @clarkb:matrix.org | fungi: since diablo_rojo says there isn't any need for ptgbot for a bit should we proceed with https://review.opendev.org/c/openstack/ptgbot/+/975325 ? | 21:43 |
| @fungicide:matrix.org | yes. yes we should | 21:44 |
| @clarkb:matrix.org | fungi: it failed to fetch quay.io/opendevorg/python-base:3.12-trixie on what looks like a short read | 22:06 |
| @fungicide:matrix.org | `ERROR: failed to build: failed to solve: quay.io/opendevorg/python-base:3.12-trixie: failed to resolve source metadata for quay.io/opendevorg/python-base:3.12-trixie: short read: expected 1814 bytes but got 0: unexpected EOF` | 22:06 |
| @fungicide:matrix.org | yeah, just saw that | 22:06 |
| @clarkb:matrix.org | fungi: I am able to fetch and run that image locally so I don't think the problem is the image or quay, but maybe related to the thing sean pointed out earlier | 22:06 |
| @clarkb:matrix.org | I think we can recheck it | 22:06 |
| @fungicide:matrix.org | agreed, done | 22:07 |
| @clarkb:matrix.org | this build ran in rax ord not dfw though | 22:07 |
| @clarkb:matrix.org | but possible they are rolling out network updates across the clouds or something | 22:07 |
| @fungicide:matrix.org | or intentionally degrading performance on rax classic in order to encourage people to move to flex if the price increase hasn't already motivated them (only half joking) | 22:08 |
| @clarkb:matrix.org | once the change lands and image promotes the hourly eavesdrop deploy should update the bot | 22:21 |
| @jim:acmegating.com | what's the thing sean pointed out? | 22:22 |
| @fungicide:matrix.org | corvus: https://meetings.opendev.org/irclogs/%23openstack-infra/%23openstack-infra.2026-02-04.log.html | 22:24 |
| @fungicide:matrix.org | looked like a truncated read downloading a package | 22:24 |
| @fungicide:matrix.org | so superficially similar | 22:24 |
| @fungicide:matrix.org | though was in a different rackspace region and communicating with our mirror server | 22:25 |
| @clarkb:matrix.org | fungi: but the mirror server was proxying to pypi outside of the cloud network | 22:26 |
| @fungicide:matrix.org | right, so could have been cascading from the other side of the proxy too | 22:26 |
| -@gerrit:opendev.org- | Zuul merged on behalf of Monty Taylor https://matrix.to/#/@mordred:inaugust.com: [openstack/project-config] 975673: Add wandertracks-android repo for WanderTracks https://review.opendev.org/c/openstack/project-config/+/975673 | 22:38 |
| @clarkb:matrix.org | Assuming nothing comes up between now and then I'll probably target friday for updating gerrit to 3.11.8. If anyone wants to weigh in on switching to tags for builds now would be the time | 22:38 |
| @clarkb:matrix.org | I followed up in the openinfra events irc channel where ptgbot lives, but it reconnected as part of eavesdrop hourly deploys and the debug log for the bot indicates it connected using tls 1.3 so I think that is working well | 23:11 |
| @fungicide:matrix.org | yeah, lgtm too | 23:23 |
Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!