clarkb | corvus: should be safe to kill that playbook at least | 00:01 |
*** dviroel|rover|afk is now known as dviroel|out | 00:16 | |
*** rlandy|bbl is now known as rlandy | 00:34 | |
*** rlandy is now known as rlandy|out | 00:55 | |
*** ysandeep|out is now known as ysandeep | 02:12 | |
opendevreview | Ian Wienand proposed openstack/diskimage-builder master: Move image validation into extract-image https://review.opendev.org/c/openstack/diskimage-builder/+/839294 | 03:53 |
*** ysandeep is now known as ysandeep|afk | 04:18 | |
frickler | seems we are hitting https://bugs.launchpad.net/ubuntu/+source/reprepro/+bug/1968198 now, sadly there seems to be no indication on how to possibly fix this, except maybe run it on jammy? | 04:51 |
frickler | the mention of zstd means it might be related to the change of compression method that seems to have happened | 04:51 |
opendevreview | Ian Wienand proposed openstack/diskimage-builder master: Move image validation into extract-image https://review.opendev.org/c/openstack/diskimage-builder/+/839294 | 05:07 |
ianw | frickler: sigh, or in a container with similar i guess | 05:14 |
ianw | ideally i guess we'd get jammy nodes and update mirror-update testing before migrating it to a new platform. although setting up the mirror is a bit chicken-egg with that i guess | 05:17 |
frickler | ianw: seems to have gotten past this now, error messages about two specific pkgs but now seem to be continuing | 05:17 |
frickler | let's see how the export goes, maybe we have at least a working repo in the end, even if incomplete | 05:18 |
frickler | the other idea would be to start building images without mirror set up, shouldn't be impossible either | 05:20 |
frickler | 2022-04-26 05:32:05 | reprepro completed successfully, running vos release ... let's see how long this takes for ~160G of new stuff | 05:38 |
*** ysandeep|afk is now known as ysandeep | 06:13 | |
*** pojadhav- is now known as pojadhav|pto | 06:20 | |
ianw | it's capped at about 10mbit so i'd say ~3.5 hours | 06:36 |
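As a back-of-envelope check (a sketch with illustrative numbers, not measured values): for ~160 GB, "~3.5 hours" works out if the cap is really around 100 Mbit/s; at a literal 10 Mbit/s it would take closer to 36 hours.

```python
# Rough transfer-time estimate for the vos release of ~160 GB of new data.
# Both link speeds below are assumptions for illustration.
def transfer_hours(gigabytes: float, megabits_per_sec: float) -> float:
    bits = gigabytes * 1e9 * 8
    return bits / (megabits_per_sec * 1e6) / 3600

print(f"{transfer_hours(160, 100):.1f} h at 100 Mbit/s")  # ~3.6 h, matching the estimate
print(f"{transfer_hours(160, 10):.1f} h at 10 Mbit/s")    # ~35.6 h
```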
opendevreview | Ian Wienand proposed openstack/diskimage-builder master: Move image validation into extract-image https://review.opendev.org/c/openstack/diskimage-builder/+/839294 | 06:46 |
*** jpena|off is now known as jpena | 07:13 | |
*** ysandeep is now known as ysandeep|brb | 07:25 | |
opendevreview | Ian Wienand proposed openstack/diskimage-builder master: centos: avoid head pipe failure https://review.opendev.org/c/openstack/diskimage-builder/+/839307 | 07:28 |
*** ysandeep|brb is now known as ysandeep | 07:30 | |
opendevreview | Ian Wienand proposed openstack/diskimage-builder master: Move image validation into extract-image https://review.opendev.org/c/openstack/diskimage-builder/+/839294 | 07:34 |
frickler | 2022-04-26 07:33:16 | Done. | 08:01 |
frickler | tested from a local jammy instance and seems to work fine, so I've gone ahead and self-approved https://review.opendev.org/c/openstack/project-config/+/839057 | 08:04 |
opendevreview | Merged openstack/project-config master: Start bulding ubuntu-jammy images https://review.opendev.org/c/openstack/project-config/+/839057 | 08:15 |
*** benj_1 is now known as benj_ | 08:18 | |
opendevreview | Oleksandr Kozachenko proposed zuul/zuul-jobs master: Add promote-container-image role https://review.opendev.org/c/zuul/zuul-jobs/+/838919 | 08:56 |
opendevreview | Oleksandr Kozachenko proposed zuul/zuul-jobs master: Add promote-container-image role https://review.opendev.org/c/zuul/zuul-jobs/+/838919 | 09:35 |
opendevreview | Merged openstack/diskimage-builder master: Add a job to test building jammy https://review.opendev.org/c/openstack/diskimage-builder/+/836228 | 09:47 |
opendevreview | Merged openstack/diskimage-builder master: yum-minimal: clean up release package installs https://review.opendev.org/c/openstack/diskimage-builder/+/837248 | 09:54 |
opendevreview | Merged openstack/diskimage-builder master: Move reset-bls-entries to post-install https://review.opendev.org/c/openstack/diskimage-builder/+/838792 | 10:02 |
frickler | looks like hole in one ;) https://nb01.opendev.org/ubuntu-jammy-0000000001.log | 10:21 |
*** rlandy|out is now known as rlandy | 10:24 | |
*** ysandeep is now known as ysandeep|coffee | 10:37 | |
ianw | frickler: nice work! | 10:37 |
*** dviroel|out is now known as dviroel|rover | 11:09 | |
fungi | frickler: what needed to be done for reprepro after i added the new key in 839261? | 11:17 |
fungi | i've started one more run to make sure it finishes clean before i release the file lock | 11:17 |
fungi | ahh, the zstd errors | 11:17 |
fungi | zstd: error 70 : Write error : cannot write decoded block : Broken pipe | 11:18 |
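The "Broken pipe" in that message is the classic EPIPE a writer gets when its downstream reader exits early; zstd just reports it as error 70. A minimal standalone Python sketch of the same condition (not related to reprepro's internals):

```python
import subprocess
import sys

# Spawn a reader that consumes one byte and exits (like `head` would),
# then keep writing to it: once the reader is gone and the pipe buffer
# fills, the write fails with EPIPE, which Python raises as BrokenPipeError.
reader = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdin.buffer.read(1)"],
    stdin=subprocess.PIPE,
)
got_epipe = False
try:
    for _ in range(100_000):
        reader.stdin.write(b"x" * 4096)
        reader.stdin.flush()
except BrokenPipeError:
    got_epipe = True
    print("writer got a broken pipe, as expected")
finally:
    try:
        reader.stdin.close()
    except BrokenPipeError:
        pass
    reader.wait()
```

Whether this is harmless depends on what the writer does with the error, which is why it can look scary in logs while reprepro itself still exits successfully.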
fungi | did that turn out to be benign then? | 11:18 |
*** ysandeep|coffee is now known as ysandeep | 11:28 | |
frickler | fungi: at least non-fatal, I didn't touch anything, just watched it run and finish somehow | 11:29 |
frickler | maybe the repo is now missing some pkgs, but it went well enough to build an image | 11:29 |
frickler | I'm hoping the misses will only affect jammy and not existing distros, so I didn't check in more detail | 11:30 |
frickler | https://review.opendev.org/c/openstack/devstack/+/839359 "Zuul: Unknown configuration error". I guess I missed the step to add jammy to the launchers, but zuul/nodepool might be able to provide a more helpful error message? | 11:31 |
fungi | looks like there are a lot of conditionals in https://opendev.org/zuul/zuul/src/branch/master/zuul/manager/__init__.py#L998-L1120 and that exception is a fall-through at the end | 11:35 |
fungi | i expect the "error in dynamic layout" exception is masking the catch-all exception raised in the try block | 11:38 |
frickler | fungi: corvus: that's the traceback from zuul log https://paste.opendev.org/show/bPl0n9kjkby3gPoLTNHd/ | 11:47 |
fungi | not what i expected at all | 11:49 |
fungi | i really don't see where it's coming from, maybe a regression in 836086 (Fix implied branch matchers with regex chars) | 11:53 |
opendevreview | Dr. Jens Harbott proposed openstack/project-config master: Start launching Jammy images https://review.opendev.org/c/openstack/project-config/+/839366 | 11:54 |
*** rlandy is now known as rlandy|mtg | 11:58 | |
fungi | looking at a recent build of a similar job, we do indeed have regex branch matchers inherited from common parents: https://zuul.opendev.org/t/openstack/build/6eae56a2d3354a6e86efedde2819eec3/log/zuul-info/inventory.yaml#278-279 | 12:00 |
fungi | okay, the ubuntu mirror update ran clean again, so i've released the lock and exited the screen session | 13:33 |
Clark[m] | fungi: did it have the same errors as before? | 13:38 |
fungi | if by "same errors" you mean the zstd broken pipe, yes | 13:39 |
fungi | probably we need newer libzstd to get around that | 13:40 |
Clark[m] | Ya that. The lp bug I linked indicated some fixing was attempted but apparently incomplete | 13:40 |
fungi | but so far there's no indication that it's an actual problem, certainly wasn't fatal from reprepro's perspective anyway | 13:40 |
Clark[m] | Thank you frickler for pushing this along and getting it done. I should make note to add a last minute agenda item to discuss afs disk use and planning for jammy ports etc | 13:42 |
fungi | Clark[m]: i think 839366 would be the next step once you're settled in for the morning, already has my +2 | 13:45 |
Clark[m] | Will look but will be a bit as kids need to be deposited at school before I'm at a proper computer | 13:47 |
fungi | sure, take your time! | 13:48 |
frickler | fungi: I verified that the config error only comes from the unknown label and is the same with a completely bogus label. I'll do some further testing locally to see if this is easily reproducible. unless maybe corvus wakes up and at once sees the solution ;) | 14:15 |
fungi | frickler: good investigating, so it sounds like the original configuration error is probably tripping an untested code path related to branch matchers. probably a recent regression or we'd have already seen it at some point | 14:23 |
frickler | fungi: it also seems to need a more complex setup, all my easy test cases produce the expected NODE_FAILURE so far | 14:26 |
frickler | ah, or maybe if it is very recent I should try running latest zuul version, too | 14:27 |
fungi | did you test with inheriting from a parent with a regex branch matcher? | 14:27 |
fungi | and yes, i have a feeling it might be a regression in 836086 (Fix implied branch matchers with regex chars) which would need 5.2.3 or later, but it could be something even more recent | 14:29 |
*** ysandeep is now known as ysandeep|afk | 14:36 | |
*** ysandeep|afk is now known as ysandeep | 14:59 | |
*** pojadhav|pto is now known as pojadhav | 15:03 | |
*** lbragstad8 is now known as lbragstad | 15:05 | |
clarkb | fungi: frickler I've approved 839366. We'll want to check that they can boot in all the clouds once up. | 15:22 |
fungi | which we can do with rechecking the devstack change a bunch | 15:26 |
corvus | frickler: fungi ack; i'll look at that later today | 15:27 |
fungi | thanks corvus! | 15:28 |
opendevreview | Merged openstack/project-config master: Start launching Jammy images https://review.opendev.org/c/openstack/project-config/+/839366 | 15:32 |
frickler | corvus: fungi: the issue actually is triggered by the mismatch in the nodeset name I had in combination with the implied-branches pragma. reproduced that locally with zuul:latest, will try to bisect | 15:54 |
frickler | meh, doing too many zuul restarts in a row doesn't work well with github rate limits it seems | 16:05 |
*** dviroel|rover is now known as dviroel|rover|lunch | 16:11 | |
*** marios is now known as marios|out | 16:12 | |
*** rlandy|mtg is now known as rlandy | 16:14 | |
*** ysandeep is now known as ysandeep|out | 16:17 | |
*** jpena is now known as jpena|off | 16:17 | |
frickler | so it seems I get the issue with 5.2.4 but not with 5.2.3 | 16:21 |
clarkb | I've somehow failed to eat breakfast and also flailed around without getting much done /me resets the morning and finds food | 16:25 |
fungi | frickler: thanks, so i guess it's not 836086 after all since that merged prior to 5.2.3 | 16:31 |
frickler | actually my test was wrong. 5.2.2 to 5.2.3 is the broken step | 16:33 |
fungi | in that case 836086 seems increasingly likely to be the culprit (79c6717cea5abad479af98f40eb260472ef94648) | 16:39 |
clarkb | infra-root https://review.opendev.org/c/opendev/system-config/+/839250 and child are further gerrit 3.5 prep work | 16:50 |
clarkb | and https://review.opendev.org/q/topic:retire-elk has a bunch of elk related config management retirements now that the ELK servers are gone | 16:51 |
clarkb | reviews much appreciated | 16:51 |
*** dviroel|rover|lunch is now known as dviroel|rover | 17:02 | |
frickler | fungi: looking at that patch I tend to agree, but I'll leave it to corvus to carve out the details | 17:11 |
frickler | seems the jammy mirror isn't doing as well as I hoped, I had only tested the jammy tree, not -updates or -security https://zuul.opendev.org/t/openstack/build/ae536d3e855b48f99a6df7ad174849bb/log/job-output.txt#650-698 | 17:25 |
* frickler afks for a bit | 17:25 | |
clarkb | looks like they both contain actual content at least | 17:27 |
clarkb | and looking at this more closely it seems we may be mirroring source packages? | 17:28 |
clarkb | I wonder if we can clean those up to reduce the size of our ubuntu mirrors? | 17:29 |
clarkb | ya there are tar.xz's in the pool which I think are the source packages | 17:30 |
fungi | maybe there's a flag to tell reprepro not to, though it will certainly render any deb-src lines in a sources.list broken | 17:30 |
clarkb | hrm we do list deb-src in the configure-mirrors role | 17:31 |
fungi | probably not? | 17:31 |
clarkb | no we do | 17:31 |
fungi | though if any jobs add some, they'll become broken... probably safe to assume no | 17:31 |
clarkb | right they'd break when apt-get updating | 17:31 |
clarkb | since that will try to pull the index at least. And we seem to set it in the role for everything unfortunately | 17:32 |
clarkb | but maybe we stop doing that and then we can clear out the source packages | 17:32 |
fungi | https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/configure-mirrors/templates/apt/etc/apt/sources.list.d/default.list.j2#L3 | 17:33 |
clarkb | and then separately we have https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/configure-mirrors/templates/apt/etc/apt/sources.list.j2 | 17:33 |
clarkb | I wonder which is used? but we should probably remove the deb-src lines either way if doing this | 17:33 |
clarkb | it does seem overkill to have a CI system care about source packages unless building packages, but in that case we probably don't want to be looking at our mirrors for the sources | 17:34 |
fungi | anyway, configure-mirrors is the only real risk i find via codesearch | 17:34 |
fungi | all other uses of deb-src seem to be with specific (non-mirror) package repos | 17:35 |
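Dropping the deb-src lines from the configure-mirrors template would leave entries like these (a sketch; the mirror hostname and component list are hypothetical):

```
deb http://mirror.example.opendev.org/ubuntu jammy main universe
deb http://mirror.example.opendev.org/ubuntu jammy-updates main universe
deb http://mirror.example.opendev.org/ubuntu jammy-security main universe
```

Any job that then runs `apt-get source` or adds its own deb-src line against the mirror would fail, which is the risk being weighed here.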
clarkb | I've got a note to talk about afs disk usage in the meeting (it didn't make the agenda because I realized ports were a problem too late) | 17:35 |
clarkb | we can discuss this option along with the others I've identified | 17:35 |
clarkb | fungi: ubuntu uses https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/configure-mirrors/templates/apt/etc/apt/sources.list.j2 and debian uses the files in sources.list.d for maximum confusion | 17:38 |
clarkb | but I think that makes this even less risky as only debian jobs would expect deb-src currently | 17:38 |
clarkb | (and their number is fewer) | 17:38 |
clarkb | I wonder why we split them like that | 17:38 |
fungi | i would wager there's no conscious decision behind the difference | 17:43 |
clarkb | the fact that source packages use xz likely indicates we won't see as large a savings as we might hope for. But considering we did similar cleanup to the rpm distros this seems reasonable | 17:44 |
clarkb | and once you add up ubuntu and ubuntu-ports across 3.5 releases the impact may still be quite large | 17:45 |
fungi | could be in the neighborhood of 10-25%, gut feeling from having profiled package sizes in years past | 17:45 |
fungi | the question is, does reprepro have an option to exclude them | 17:46 |
fungi | looks like if we omit the "source" entry from the architectures line in the updates file, it may not pull them | 17:50 |
fungi | also from the distributions file i guess | 17:50 |
fungi | though there's a good chance we'd have to manually clean up the existing source packages if we dropped them from the configs | 17:52 |
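Concretely that means removing `source` from the Architectures lines in both reprepro config files (a sketch of the relevant stanzas only; the other fields shown are illustrative, not our exact config):

```
# conf/distributions
Codename: jammy
Architectures: amd64          # "source" removed from this list
Components: main universe multiverse

# conf/updates
Name: jammy
Architectures: amd64          # and from this one
```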
clarkb | I think it may clean them up due to the same mechanism that clears out old packages? | 17:55 |
clarkb | it knows if it isn't mirroring something to remove it | 17:55 |
fungi | yeah, but if we take it out of the configuration, it may not know it's not mirroring it... | 17:56 |
clarkb | I see | 17:56 |
clarkb | do we want to start with a small repo like UCA and test it? | 17:57 |
fungi | yeah, i can push a simple change for that as an experiment | 17:57 |
clarkb | ++ | 17:57 |
fungi | bad news (or good?). we don't have a source architecture configured for uca | 17:59 |
fungi | are we mirroring source packages for uca? | 17:59 |
* fungi checks | 17:59 | |
opendevreview | Merged opendev/system-config master: Update gerritbot-matrix version to include wipness https://review.opendev.org/c/opendev/system-config/+/837580 | 17:59 |
clarkb | oh we may not be I just assumed we were | 17:59 |
fungi | our experiment may already be done for us ;) | 17:59 |
clarkb | we do for debian-docker, debian, ubuntu, and ubuntu-ports according to our reprepro configs | 18:00 |
clarkb | debian-docker is another good experiment since it is small in scope assuming upstream publishes source packages at all | 18:00 |
fungi | https://static.opendev.org/mirror/ubuntu-cloud-archive/pool/main/a/alabaster/ | 18:00 |
clarkb | looks like debian docker has no source packages either | 18:01 |
fungi | no source packages, so i guess that's the right way to do it at least | 18:01 |
clarkb | ++ and I think we can go ahead and remove it from debian docker to reflect the reality of no upstream source packages but that won't serve as a test for us | 18:01 |
fungi | playbooks/roles/reprepro/files/debian-docker-focal/config/distributions | 18:01 |
fungi | playbooks/roles/reprepro/files/debian-docker-bionic/config/distributions | 18:02 |
fungi | those list source | 18:02 |
clarkb | yup but no source packages I think because upstream doesn't have them | 18:02 |
fungi | ah | 18:02 |
fungi | i guess we could do ubuntu-ports, arm jobs are unlikely to hit a problem with it | 18:04 |
clarkb | ++ | 18:04 |
clarkb | and I think it's safe for us to do ubuntu as well from a job perspective as only debian instances get deb-src configured by default from my read of configure-mirrors | 18:05 |
fungi | definitely source packages in https://static.opendev.org/mirror/ubuntu-ports/pool/main/a/aalib/ right now | 18:05 |
clarkb | btw I found pool/main/v/ was a good prefix because there aren't a lot of 'v' packages ;) | 18:05 |
clarkb | loads much more quickly than some of the other prefixes | 18:05 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Drop source package mirroring for ubuntu-ports https://review.opendev.org/c/opendev/system-config/+/839421 | 18:09 |
clarkb | +2 thanks | 18:10 |
clarkb | we can also mirror jammy docker packages: https://download.docker.com/linux/ubuntu/dists/jammy/ I'll get a change up for that | 18:13 |
opendevreview | Clark Boylan proposed opendev/system-config master: Add Jammy Docker package mirroring https://review.opendev.org/c/opendev/system-config/+/839422 | 18:19 |
clarkb | something like that maybe | 18:19 |
corvus | fungi frickler https://review.opendev.org/839423 should fix it. implied-branch matcher pragma is required which explains the rarity | 18:20 |
fungi | clarkb: i guess we don't track quota for mirror.deb-docker in grafana yet? | 18:22 |
fungi | corvus: thanks! | 18:22 |
clarkb | fungi: ya we may not. Its super tiny though like 120MB per release? | 18:22 |
clarkb | the packages aren't small but there are only a handful of them | 18:23 |
fungi | interesting, the mirror.ubuntu graph shows that adding jammy took us from 667gb to 827gb of our 990gb quota (a 160gb or 16% increase) | 18:30 |
fungi | that's only a 1/4 increase over the prior size | 18:31 |
clarkb | ya that does make me wonder if we don't prune all the old packages as well as we expect. Or maybe jammy is using newer better compression algorithms | 18:31 |
fungi | but we're not mirroring 4 other releases of ubuntu that i can see | 18:31 |
fungi | right | 18:31 |
clarkb | hrm what do you mean about not mirroring 4 other releases? | 18:35 |
fungi | jammy accounts for ~1/5 of the used space | 18:38 |
fungi | which suggests we're mirroring 5 releases including jammy | 18:38 |
fungi | assuming all releases are roughly the same size | 18:38 |
fungi | those are likely invalid assumptions, but suggests we may have some lurking cruft | 18:39 |
clarkb | right. It wouldn't surprise me if there is lurking cruft just due to the size reduction on the rpm mirrors we just did | 18:39 |
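The arithmetic behind the fractions quoted above, using the graph figures (667 GB before, 827 GB after, 990 GB quota):

```python
# Figures taken from the mirror.ubuntu graph discussion above.
quota_gb = 990
before_gb = 667
after_gb = 827

jammy_gb = after_gb - before_gb  # 160 GB added by jammy
print(f"{jammy_gb / quota_gb:.0%} of quota")          # 16%
print(f"{jammy_gb / before_gb:.0%} over prior size")  # 24%, roughly 1/4
print(f"{jammy_gb / after_gb:.0%} of used space")     # 19%, roughly 1/5
```

If all mirrored releases were the same size, 1/5 of used space would imply five releases' worth of data, hence the suspicion of lurking cruft.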
opendevreview | Gage Hugo proposed openstack/project-config master: End project gating for openstack-helm-docs https://review.opendev.org/c/openstack/project-config/+/839103 | 19:01 |
opendevreview | Gage Hugo proposed openstack/project-config master: Retire openstack-helm-docs repo, step 3.3 https://review.opendev.org/c/openstack/project-config/+/839427 | 19:01 |
opendevreview | Gage Hugo proposed openstack/project-config master: Retire openstack-helm-docs repo, step 3.3 https://review.opendev.org/c/openstack/project-config/+/839427 | 19:14 |
fungi | clarkb: all open changes have been abandoned for the 5 repositories covered by topic:retire-elk | 19:58 |
fungi | now to work on dinner | 19:58 |
clarkb | fungi: thanks! | 20:00 |
opendevreview | Merged opendev/system-config master: Drop source package mirroring for ubuntu-ports https://review.opendev.org/c/opendev/system-config/+/839421 | 20:05 |
opendevreview | Clark Boylan proposed openstack/diskimage-builder master: Fix dhcp-all-interfaces on debuntu systems https://review.opendev.org/c/openstack/diskimage-builder/+/839080 | 20:45 |
opendevreview | Clark Boylan proposed openstack/diskimage-builder master: Revert "Fallback to persistent netifs names with systemd" https://review.opendev.org/c/openstack/diskimage-builder/+/838863 | 20:45 |
corvus | i'm going to restart the zuul schedulers now. | 20:52 |
fungi | please do! | 21:21 |
fungi | oh, that was 30 minutes ago ;) | 21:21 |
corvus | zuul01 has restarted (was a little slow); doing zuul02 now | 21:27 |
fungi | 839421 has deployed and will in theory be used by the 22z ubuntu mirror run in ~30 minutes | 21:28 |
opendevreview | Merged zuul/zuul-jobs master: Add per-build WinRM cert generation https://review.opendev.org/c/zuul/zuul-jobs/+/837416 | 21:52 |
clarkb | I'm going to bring up the question about deb-src in the zuul room | 21:52 |
fungi | great idea | 21:53 |
corvus | #status log restarted zuul schedulers/web/finger on 77524b359cf427bca16d2a3339be9c1976755bc8 | 22:23 |
opendevstatus | corvus: finished logging | 22:23 |
fungi | thanks! | 22:24 |
clarkb | gmann: now that the board meeting is over I wanted to mention the subunit2sql mysql database is the last thing still up from the health service. I kept it around to make sure that none of the infra-root had concerns with deleting it and there didn't seem to be concern on our end. Any on yours? I would back it up but it is quite large and considering the data is quite stale at this | 22:46 |
clarkb | point deleting it seems better. | 22:46 |
gmann | clarkb: afaik, health service was one of the users of it in OpenStack. but after the health service went down we have one usage in the tempest test-removal process - https://docs.openstack.org/tempest/latest/test_removal.html#using-subunit2sql-directly | 22:51 |
gmann | question will be if we can keep it up with new system? or we should change the test-removal process in tempest? | 22:52 |
gmann | kopecmartin: mtreinish ^^ any opinion ? | 22:52 |
gmann | tempest test removals are not very frequent nowadays and I think we should be able to verify the test status with the new dashboard or so? but not sure how? | 22:53 |
fungi | well, it's currently not working for that because there's no way to query it with the openstack-health api server down | 22:53 |
clarkb | gmann: I'm not sure what you mean by new system. There is no replacement for the health dashboard or subunit2sql workers as far as I know. The database is going away one way or another the question is if we need to back up the data first | 22:53 |
clarkb | fungi: not just down, deleted | 22:53 |
fungi | well, yes, down because it's deleted ;) | 22:53 |
*** rlandy is now known as rlandy|out | 22:53 | |
clarkb | gmann: do you mean is there some way to use opensearch to query this ifno? | 22:54 |
gmann | new system i mean the server we have for ELK services but not sure if that fits into that. as we already said the health one is not coming back and we retired it also. | 22:54 |
gmann | clarkb: yeah | 22:54 |
gmann | clarkb: main usage from tempest side is to know test passing rate | 22:55 |
clarkb | gmann: ok I think that is a separate discussion. Specifically what we are wondering about here is "Is there any reason to backup the 286GB subunit2sql database before we delete it?" | 22:55 |
gmann | + shutdown subunit2sql service right? | 22:55 |
fungi | no, that's already gone. this is the database it used | 22:56 |
clarkb | gmann: the shutdown and deletion of the subunit2sql workers and the health api server is already complete. All that remains is the database | 22:56 |
gmann | ok, that is gone together. I see | 22:56 |
*** dviroel|rover is now known as dviroel|rover|out | 22:56 | |
gmann | I am not sure of any benefits to keeping that data. if we keep it, what is the usage? | 22:56 |
clarkb | gmann: its a 64GB memory 500GB disk trove instance in rax. We are going to delete it. If we backup the database it will cost us about 286GB of backup space (wherever that ends up) | 22:57 |
fungi | if memory serves, it was pruned automatically to a 6-month window, so there's not that much retention and it will rapidly cease to be relevant anyway | 22:58 |
gmann | and current data is of last 6 month right? old one should have been gone already? | 22:59 |
clarkb | I think it may be older than that as things stopped working sometime last year | 22:59 |
clarkb | which is why we identified it as being able to be shut down (it was broken and no one noticed) | 23:00 |
gmann | If I remember correctly, e-r also query on that right? or it was separate. and e-r instance is also planned to shutdown? http://status.openstack.org/elastic-recheck/ | 23:01 |
clarkb | e-r has never queried health/subunit2sql | 23:01 |
clarkb | the e-r instance will be shutdown because the elasticsearch instance it talked to is gone | 23:01 |
gmann | yeah | 23:01 |
clarkb | basically we need to know if you can think of any reason why that data would be important enough to backup before we delete the database instance. None of us can come up with a reason for that | 23:02 |
fungi | and the server where e-r ran will be going away at the end of the week as well (status.openstack.org) | 23:02 |
gmann | yeah, I do not know of any usage or reason for backing that up. as the services are already down, we cannot do much with that backup. | 23:04 |
clarkb | ok, I'll plan to proceed with deleting it tomorrow. I guess if something pops up between now and then feel free to ping me | 23:05 |
gmann | I think you can remove that too. if anyone brings it up in some infra they can do it from scratch and do not need to import old data | 23:05 |
gmann | sure | 23:05 |
clarkb | gmann: now for tracking tempest test success/fail rates with opensearch. The way opensearch is working is it takes the test job logs (various log files) and indexes them | 23:05 |
clarkb | gmann: what this means is if your job logs somewhere tempest.test.foo = pass and tempest.test.bar = fail you can query on those strings and check them over time to see what the ratio between pass and fail is | 23:06 |
clarkb | it depends on you logging the info in the first place though | 23:06 |
clarkb | whether or not that is currently done I don't know, but tracking that info in opensearch is theoretically possible if you log that and index those logs | 23:06 |
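In practice that would be a query-string search along these lines (the field name, test name, and result strings are purely hypothetical; they depend entirely on what the jobs log and how it is indexed):

```
message:"tempest.api.compute.test_servers" AND message:"FAILED"
```

Comparing the hit count of that against the same query with "ok"/"PASSED" over a time window would approximate a pass rate.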
gmann | clarkb: yeah that is what I was thinking to search with <test> = *pass|fail* | 23:06 |
gmann | at least knowing the recent with 'fail' string can give us idea about it | 23:07 |
gmann | I will try that and add it in tempest doc | 23:07 |
gmann | but i have not seen a test removal in 1 (or maybe 2) years so it's not a very frequent thing or something we need to worry about much. | 23:08 |
clarkb | fungi: ianw: doesn't look like we're seeing disk utilization drop in the ubuntu-ports volume yet. I suspect we may need to do some manual intervention? | 23:14 |
clarkb | I can try to take a look at that tomorrow but I'm just about to the point of needing to help with dinner | 23:14 |
ianw | Error: packages database contains unused 'bionic-backports|main|source' database. | 23:16 |
ianw | This usually means you removed some component, architecture or even | 23:16 |
ianw | a whole distribution from conf/distributions. | 23:16 |
ianw | In that case you most likely want to call reprepro clearvanished to get rid | 23:16 |
ianw | of the databases belonging to those removed parts. | 23:16 |
ianw | iirc that's what i had to do with the security/stretch removal too. i can look at it in a bit | 23:17 |
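The recovery reprepro itself suggests is a sketch like this (`clearvanished` is a real reprepro command; the confdir path and lock handling here are assumptions about our mirror-update setup, not verified paths):

```shell
# Remove package databases for parts (e.g. the dropped "source"
# architecture) that no longer appear in conf/distributions.
reprepro --confdir /path/to/reprepro/ubuntu-ports/conf clearvanished
# Then re-run the normal mirror update so vos release publishes the cleanup.
```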
clarkb | thanks | 23:18 |
fungi | yeah, so we need to prune the db | 23:23 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!