fungi | looks like uploading is in progress now | 00:17 |
---|---|---|
*** | ralonsoh_ is now known as ralonsoh | 05:38 |
fungi | a few images seem to have uploaded successfully, the rest are retrying with very short timers. i haven't looked closely yet but wonder if we gave glance enough room to store them all? | 13:01 |
frickler | I hope glance is using ceph, which should have plenty of room, but I didn't check that | 13:01 |
frickler | also maybe check temporary storage on the controllers | 13:02 |
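For context, checking both of frickler's suggestions would look roughly like this on a controller, assuming root access and a standard ceph-backed glance; the filesystem paths below are guesses and vary by deployment:

```sh
# Overall ceph capacity and per-pool usage (the glance pool name varies)
ceph df

# Scratch space glance may use while staging uploads
# (paths are assumptions, not confirmed for this cloud)
df -h /var/lib/glance /tmp
```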
fungi | i guess our sysadmin ssh accounts don't exist on the cloud servers? ip addresses for them are in the passwords list but no mention there of how to ssh into them | 15:38 |
fungi | for the physical servers i mean | 15:38 |
fungi | aha, our keys are authorized for the root user | 15:40 |
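So logging into the physical servers is plain root ssh; the address placeholder below stands in for the IPs recorded in the passwords list:

```sh
# our sysadmin keys are authorized for the root user on these hosts
ssh root@<server-ip-from-passwords-list>
```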
fungi | nodepool.exceptions.BuilderInvalidCommandError: Unable to find image file of type raw for id 3c968fce6bb94ea68f720d6b6c368a8f to upload | 15:43 |
fungi | maybe this is an unforeseen result of our image pruning! | 15:43 |
fungi | in which case we just need to wait a day or two for the older image builds to age out | 15:44 |
fungi | but that would explain why it took so long to finally get a few images uploaded, those are the ones which were auto-rebuilt since the config for the cloud got added | 15:45 |
fungi | since some of our images don't get rebuilt more often than weekly, those may take a while to update and upload | 15:46 |
fungi | infra-root: ^ not urgent, but an observation worth some mulling over to figure out how/if we want to address | 15:47 |
fungi | worst case, we do nothing and accept that adding new clouds takes a week or so to settle out | 15:48 |
fungi | or we restore the raw versions manually with qemu-img convert from the qcow2 copies | 15:48 |
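For the record, regenerating a pruned raw image from its retained qcow2 copy is a one-liner with qemu-img; the filenames here are illustrative (builds are named <image>-<build-id> under the builders' image directory, /opt/nodepool_dib as mentioned later in the log):

```sh
# Rebuild the pruned raw artifact from the qcow2 copy (names illustrative)
qemu-img convert -f qcow2 -O raw \
    /opt/nodepool_dib/ubuntu-jammy-<build-id>.qcow2 \
    /opt/nodepool_dib/ubuntu-jammy-<build-id>.raw
```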
fungi | since this is an eventually-consistent scenario and our more commonly-used images will be available there by the end of the weekend, i'm inclined to just wait | 15:50 |
fungi | we could also force new image builds to occur ahead of schedule, if anyone thinks that's worth the additional effort | 15:51 |
fungi | still not very immediate, but might get some of them uploaded sooner | 15:52 |
frickler | fungi: you can manually trigger builds | 15:55 |
frickler | but maybe if you can start testing with jammy or so that will be good enough, too? | 15:56 |
fungi | yeah, i mentioned manually triggering new image builds as a middle-ground | 16:01 |
fungi | if we're in more of a hurry | 16:02 |
fungi | we don't have jammy uploaded yet, the only ubuntu there so far is bionic | 16:03 |
fungi | also rocky 8 and openeuler | 16:03 |
fungi | i'll trigger a jammy image build for now if one's not already in progress | 16:04 |
fungi | ran `nodepool image-build ubuntu-jammy` on nb01 | 16:07 |
fungi | ubuntu-jammy-91dd602703d747e6ae7d6d07702d3aa8 on nb02 is the build to watch | 16:08 |
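A sketch of keeping an eye on that build and its subsequent uploads from the builder, using standard nodepool client commands (the grep patterns just narrow the output):

```sh
# Show the state and age of the specific build on the builder
nodepool dib-image-list | grep 91dd602703d747e6ae7d6d07702d3aa8

# Once built, show per-provider upload state for the new cloud
nodepool image-list | grep openmetal-iad3
```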
fungi | build status: building, elapsed 00:02:12:38 | 18:20 |
fungi | hoping it's done soon | 18:20 |
opendevreview | Tim Burke proposed opendev/git-review master: Suggest --no-thin based on error output https://review.opendev.org/c/opendev/git-review/+/922047 | 18:33 |
fungi | that image build completed a little over an hour ago, but i don't see it uploading to openmetal-iad3 yet. i'll check the logs | 19:43 |
frickler | fyi ansible-core 2.17 is ooming for both osa and k-a it seems, though the issue is already fixed in the latest stable-2.17 branch; not sure when zuul is meant to include ansible 10 support? https://github.com/ansible/ansible/issues/83392 | 19:45 |
fungi | possible the jammy upload is queued behind others due to serialization, so i'll give it a little longer | 19:49 |
fungi | in theory /opt/nodepool_dib/ubuntu-jammy-91dd602703d747e6ae7d6d07702d3aa8.raw is what it should eventually try to upload | 19:50 |
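If the upload still hasn't started, a rough way to confirm the artifact exists and to look for upload attempts in the builder log (the log location is an assumption and varies by deployment):

```sh
# Confirm the raw artifact nodepool should upload is actually on disk
ls -lh /opt/nodepool_dib/ubuntu-jammy-91dd602703d747e6ae7d6d07702d3aa8.raw

# Scan the builder log for activity against the new provider
# (log path is a guess; adjust for the actual deployment)
grep -i openmetal-iad3 /var/log/nodepool/nodepool-builder.log | tail -n 20
```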