clarkb | I've deleted the etherpad autohold but not the gitea one | 00:09 |
---|---|---|
opendevreview | Ghanshyam proposed openstack/project-config master: [QA Acls] Allow Review-Priority for non core member also https://review.opendev.org/c/openstack/project-config/+/904809 | 00:26 |
tonyb | clarkb: I'm done with the held node, so feel free to drop the autohold whenever you're free | 00:54 |
tonyb | clarkb: I don't follow your comment here: https://review.opendev.org/c/openstack/project-config/+/904809/comment/7d6a2af4_d2ba1044/ | 01:07 |
tonyb | clarkb: everything I see matches 'grenade-core', which group is empty? | 01:07 |
opendevreview | Tony Breeds proposed openstack/project-config master: [QA Acls] Allow Review-Priority for non core member also https://review.opendev.org/c/openstack/project-config/+/904809 | 01:12 |
Clark[m] | tonyb: the line I highlighted is grenade-core | 01:14 |
opendevreview | Ghanshyam proposed openstack/project-config master: [QA Acls] Allow Review-Priority for non core member also https://review.opendev.org/c/openstack/project-config/+/904809 | 01:15 |
gmann | Clark[m]: ^^ updated. thanks for catching that | 01:15 |
opendevreview | Merged opendev/system-config master: Add hints to borg backup error logging https://review.opendev.org/c/opendev/system-config/+/903357 | 02:28 |
opendevreview | OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/904811 | 02:59 |
opendevreview | Dr. Jens Harbott proposed openstack/project-config master: [QA Acls] Allow Review-Priority for non core member also https://review.opendev.org/c/openstack/project-config/+/904809 | 06:05 |
opendevreview | Merged openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/904811 | 06:09 |
opendevreview | Jan Marchel proposed openstack/project-config master: Add new NebulOuS projects: overlay-network-manager, security-manager https://review.opendev.org/c/openstack/project-config/+/904792 | 08:55 |
opendevreview | Xavier Coulon proposed openstack/diskimage-builder master: Replace OpenSUSE Leap 15.3 to OpenSUSE Leap 15.5 https://review.opendev.org/c/openstack/diskimage-builder/+/904821 | 09:56 |
opendevreview | Elod Illes proposed openstack/project-config master: WIP: Adapt branch creation to Unmaintained state https://review.opendev.org/c/openstack/project-config/+/904837 | 13:57 |
opendevreview | Merged openstack/project-config master: Deprecate cinderlib https://review.opendev.org/c/openstack/project-config/+/903260 | 14:20 |
fungi | infra-root: elodilles pointed out that we've got another round of deleted branches from 2023-12-21 where zuul seems to have missed or ignored some of the removals and still thinks they're present | 14:35 |
fungi | an example is https://zuul.opendev.org/t/openstack/project/opendev.org/openstack/automaton persisting with a stable/train branch that's no longer there | 14:35 |
fungi | i suspect an online reconfigure (smart? full?) would clear that out, but am wondering if we're hitting some sort of race in zuul's event processing or whether gerrit is failing to actually send the events | 14:36 |
fungi | since elodilles is preparing to do another batch of deletions shortly, it's possible we'll wind up with more | 14:37 |
fungi | manage-projects just failed on 903260 because the rootfs on gitea09 is full. looking into why now | 14:45 |
fungi | looks like /var/gitea/data/gitea/repo-archive is where almost all of it is | 14:47 |
fungi | 73% (113gb) of the 155gb rootfs is used by the contents of that directory | 14:49 |
fungi | the other gitea servers range from 528mb to 5.7gb in that directory, so gitea09's is orders of magnitude bigger | 14:51 |
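A minimal sketch of the kind of disk check being described here, using the path quoted above (output will of course differ per server):

```
# how much of the rootfs is the archive directory consuming?
sudo du -sh /var/gitea/data/gitea/repo-archive
df -h /
```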
fungi | infra-root: anyone happen to know what gitea uses that directory for? | 14:51 |
fungi | i'll start going through its documentation, but looks like some sort of git object cache | 14:51 |
fungi | i'm going to put the server in our emergency disable list and take it out of the haproxy pools temporarily | 14:52 |
elodilles | fungi: sorry, i've started the clean up script in the meantime, will that interfere with the above? ^^^ | 14:55 |
fungi | elodilles: no, it should be fine | 14:55 |
elodilles | ACK | 14:55 |
fungi | # status log Temporarily disabled gitea09 from the load balancer pools while investigating a full rootfs on it | 14:56 |
fungi | #status log Temporarily disabled gitea09 from the load balancer pools while investigating a full rootfs on it | 14:56 |
opendevstatus | fungi: finished logging | 14:56 |
fungi | and now i've downed the containers | 15:01 |
fungi | browsing gitea's issues, seems like people have been seeing repo-archive grow wildly for no apparent reason, including after upgrades | 15:06 |
fungi | http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=71123&rra_id=all suggests it jumped up by quite a bit in march, but then went crazy around the end of november and started to grow out of control. now it's been hovering near 100% for several weeks | 15:09 |
elodilles | fungi: these were deleted this time: https://paste.opendev.org/show/bn5tNwhl35pq7SFT5aAV/ | 15:17 |
fungi | thanks elodilles! | 15:17 |
elodilles | np | 15:17 |
Clark[m] | Repo archive cleanup is supposed to happen for archives older than 24 hours automatically | 15:18 |
fungi | yeah, i found the internal cron for it. maybe it's either not running or broken on gitea09? | 15:18 |
Clark[m] | Our config file doesn't override any of the cron settings so it should be running at least | 15:20 |
fungi | https://github.com/go-gitea/gitea/issues/25992 seems to indicate that it can be disabled entirely in configuration now, though i'm not finding the corresponding commit or docs to confirm | 15:21 |
fungi | but also, reading through commits related to the repo archive, it seems like it's used as a cache for performance reasons, so we might still want to have it? | 15:22 |
Clark[m] | It's caches of repo archives which I'm not sure are super important | 15:23 |
Clark[m] | But also reading other posts there are multiple crons and maybe only some are run by default? | 15:24 |
fungi | oh, this is specifically for when someone requests an "archive" tarball of a particular commit? | 15:24 |
Clark[m] | Yes | 15:25 |
Clark[m] | And apparently they pre build them for tags | 15:26 |
Clark[m] | Looks like the cron subsystem may be disabled by default | 15:26 |
Clark[m] | I think we should use the manual admin UI task to clean up repo archives now (and maybe do that on all the nodes). Then do a followup change to enable the cron subsystem, which we should take care to ensure doesn't run other jobs we don't want | 15:28 |
fungi | yeah, i found references to a "Delete all repositories' archives (ZIP, TAR.GZ, etc..)" task/button which should be in the admin dashboard | 15:29 |
Clark[m] | But we have to sort out this complicated cron configuration. It isn't clear to me whether enabling a specific cron job will work when the top level cron is disabled. I think enabling only the specific jobs we want is preferable | 15:30 |
Clark[m] | fungi: ya I think we want to find that and click it, or better yet do something more aligned with "run the equivalent of the cron job" | 15:31 |
Clark[m] | https://github.com/go-gitea/gitea/issues/6689 seems to make it clear that button deletes git archive objects of repos and not archived repos, so that is good | 15:32 |
Clark[m] | fungi: maybe before deleting things check to see if any of the archive files are more than 24 hours old? The cleanup cron won't touch them if they are newer than that and we may have a different issue if so | 15:34 |
fungi | yeah, calling switching a repo to read-only "archiving" was a poor design choice on their part | 15:34 |
fungi | $ sudo find /var/gitea/data/gitea/repo-archive -type f -mtime +7|wc -l | 15:36 |
fungi | 5687 | 15:36 |
fungi | so thousands of files more than a week old | 15:37 |
fungi | but none more than a month old | 15:37 |
fungi | the oldest ones are 29 days old, around 2023-12-06 | 15:38 |
fungi | which would suggest something is (or was) cleaning them up | 15:38 |
fungi | maybe it's cleaning up archives older than 30 days by default? | 15:39 |
Clark[m] | Ya maybe they changed the default but didn't update the docs | 15:39 |
fungi | gitea10 has none older than a day | 15:40 |
clarkb | weird | 15:41 |
fungi | same for gitea11, but gitea12 has some as old as 19 days | 15:41 |
clarkb | maybe it is atime and not mtime? so it acts more like a cache? | 15:42 |
fungi | 11 days old on gitea13 | 15:42 |
fungi | -atime and -mtime counts seem to match up, spot-checking | 15:43 |
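A sketch of the spot-check fungi describes, comparing file counts by modification time and then by access time; matching counts suggest nothing is re-reading these files the way a cache would:

```
# files untouched for over a week, by mtime and then by atime
sudo find /var/gitea/data/gitea/repo-archive -type f -mtime +7 | wc -l
sudo find /var/gitea/data/gitea/repo-archive -type f -atime +7 | wc -l
```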
clarkb | as a quick sanity check `cron` doesn't appear in the app.ini written to gitea10 or gitea09 so they should use the same cron defaults | 15:45 |
clarkb | fungi: I'm looking at the source and I think it is using db entries for the older-than comparison | 15:49 |
clarkb | rather than disk times | 15:49 |
clarkb | it also seems to short circuit if it errors rather than trying to continue to delete things | 15:50 |
clarkb | could be that we've got some archive that fails to delete for whatever reason and that short circuits everything else | 15:51 |
fungi | if so, still odd that the oldest one has been modified as recently as a month ago, but nothing older than that | 15:52 |
clarkb | I asked in the gitea general room and they say that robots can generate the files; their example robots.txt apparently asks crawlers not to do that, and we can look at https://gitea.com/robots.txt as an example | 15:52 |
fungi | oh, nice | 15:52 |
fungi | looks like we have a bit of stuff in https://opendev.org/robots.txt currently | 15:53 |
clarkb | ya we disallow */archive/ while they disallow /*/*/archive | 15:54 |
clarkb | not sure if those are equivalent | 15:54 |
fungi | i have a feeling we got a robots.txt from gitea but it's outdated compared to the one they're using | 15:54 |
clarkb | but /*/tarball/ and /*/zipball/ may be useful | 15:54 |
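Pulling together the entries discussed above, the relevant robots.txt rules would look something like this (whether the two archive globs match the same URL set depends on each crawler's wildcard handling):

```
User-agent: *
# opendev's existing rule vs the gitea.com pattern for the same paths
Disallow: */archive/
Disallow: /*/*/archive
# additional gitea.com entries that may be useful here
Disallow: /*/tarball/
Disallow: /*/zipball/
```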
fungi | aha, yeah looks like https://review.opendev.org/803231 may have copied gitea's ~2.5 years ago | 15:56 |
clarkb | mariadb isn't running on gitea09 but I think we want to look at something like `select * from repo_archiver where created_unix < 1704240000 limit 10;` | 16:01 |
clarkb | that's the unix timestamp for roughly two days ago I think, and since cron only runs once a day and we clean up things older than 24 hours, we may have entries in there about 2 days old that are still valid? | 16:01 |
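As a sanity check on that cutoff, 1704240000 is 2024-01-03 00:00:00 UTC, i.e. two days before this conversation (GNU date shown):

```
date -u -d '2024-01-03 00:00:00' +%s   # -> 1704240000
date -u -d '2 days ago' +%s            # equivalent at the time of the log
```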
clarkb | that query returns no results on gitea10 | 16:02 |
clarkb | this is expected based on the filesystem inspection fungi did. Now to check gitea12 | 16:02 |
clarkb | that query does return results on gitea12 | 16:03 |
fungi | i can start the containers back up, but would feel more comfortable if we could free up a little space on gitea09's rootfs first. clarkb: you have some old db dumps from almost a year ago in your homedir, is that still needed? | 16:03 |
clarkb | I think the old db backups can be cleaned up they were used to bootstrap the other new servers iirc | 16:04 |
clarkb | fungi: I'm starting to think that it may be the short circuiting issue given the gitea12 query results | 16:04 |
fungi | clarkb: yeah, the dumps in your homedir are called gitea09_transplant_db.sql{,.gz} | 16:05 |
fungi | i'll delete those which will free up a few hundred mb | 16:05 |
fungi | we now have 474M available on the rootfs which should be sufficient, but i'd also like to reboot the server to make sure it's in good shape before doing anything else | 16:06 |
fungi | clarkb: you okay with me doing a quick reboot, or will that interrupt anything you're checking? | 16:07 |
clarkb | fungi: I'm using gitea12 now since it has leaked archives. I'll jump off of 09 | 16:08 |
fungi | cool, rebooting 09 now | 16:08 |
fungi | it's back up now, and freed a bit of additional space (now right at half a gb) | 16:10 |
fungi | i've started the containers on 09 again | 16:10 |
fungi | interestingly, that freed even more space, now around 0.75gb | 16:11 |
clarkb | /tmp content maybe? | 16:12 |
clarkb | the dir structure of the repo archive is repoid/firsttwoofsha/sha.filesuffix | 16:13 |
clarkb | this is useful when looking at the db contents and trying to map to the on disk contents | 16:13 |
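As an illustration of that layout, a repo_archiver row would map to disk roughly like this (the repo id and sha here are made up):

```
# a row with repo_id=42 and a commit sha starting abcd1234... maps to
# something like:
#   /var/gitea/data/gitea/repo-archive/42/ab/abcd1234....tar.gz
ls /var/gitea/data/gitea/repo-archive/42/ab/
```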
clarkb | fungi: I think we should try the admin delete all archives on one server and see if it errors | 16:18 |
clarkb | that would give more weight to the short circuit problem | 16:18 |
fungi | yep, i'll look into that now | 16:18 |
clarkb | gitea logs some of these errors at a trace level (which is lower than debug) and we're logging at info (higher than debug) | 16:23 |
fungi | root login is taking a while, seems like something might be wrong with it | 16:26 |
fungi | never mind, it finally went through | 16:26 |
fungi | archive deletion is in progress | 16:27 |
fungi | it's freed a ton of space on the rootfs already | 16:28 |
fungi | /var/gitea/data/gitea/repo-archive is now only 29mb | 16:29 |
clarkb | startup isn't instantaneous as the db has to come up first and gitea will wait until the db is reachable | 16:29 |
clarkb | assuming this was on gitea09 | 16:29 |
fungi | yeah, it wasn't at startup though. i started the containers about 20 minutes ago | 16:29 |
clarkb | hrm | 16:30 |
fungi | probably just the amount of data it needed to read to put the root user dashboard together or something | 16:31 |
clarkb | gitea09's repo_archiver table is empty now | 16:31 |
clarkb | the disk usage in the archives dir appears to just be for the dir structure; I guess it doesn't delete dirs, just files | 16:32 |
clarkb | fungi: I guess the next step is to force replication to gitea09 to catch up its git content (which may create archives if tags are pushed iirc) | 16:33 |
clarkb | and then followup with an updated robots.txt and monitor? | 16:33 |
clarkb | I don't see a smoking gun in the code for why this is happening and I'm wary of enabling the most verbose log level to get more logs out of gitea | 16:33 |
fungi | sounds good | 16:33 |
fungi | should we force full replication for all the backends just for completeness? | 16:34 |
clarkb | I guess it doesn't hurt | 16:34 |
fungi | done | 16:35 |
fungi | well, started | 16:35 |
fungi | i did a full `replication start` | 16:35 |
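A hedged sketch of the command being referenced: the Gerrit replication plugin's ssh interface, assuming admin access and the standard Gerrit ssh port (the account name here is illustrative):

```
ssh -p 29418 admin@review.opendev.org replication start --all
```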
clarkb | fungi: and next week we can run that query and see if we have results | 16:35 |
clarkb | er I guess we want to do the data collection after robots.txt is updated | 16:35 |
fungi | around 14k tasks | 16:35 |
clarkb | so update robots.txt, rerun cleanup task, then check a week later or so | 16:36 |
fungi | working on the updated robots.txt next | 16:36 |
fungi | also need to take the host out of the emergency disable list and reenable it in the haproxy pool still, and then i have a change i procedurally blocked i need to reapprove | 16:37 |
fungi | i had initially approved it just before i saw the manage-projects deploy failure | 16:37 |
clarkb | ack | 16:39 |
fungi | clarkb: any idea why we commented out the disallow lines for /avatars and /user/* ? | 16:39 |
clarkb | nope | 16:40 |
fungi | doesn't seem like search indexes pulling those would make much sense, and gitea.com's robots disallows them | 16:40 |
clarkb | ++ | 16:42 |
clarkb | https://forum.gitea.com/t/how-to-configure-cron-task-for-delete-all-repositories-archives-zip-tar-gz-etc/4848/2 points to a fun undocumented cron job option which basically runs that admin task automatically | 16:42 |
clarkb | we could set that up to run say monthly. Let the daily run do its best daily and then come by once a month and clear out everything? | 16:42 |
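Based on that forum post and gitea's example app.ini (which clarkb later confirms as the source of the config block), the stanza would look roughly like this; treat the key names as a sketch since the option is undocumented:

```
[cron.delete_repo_archives]
ENABLED = true
RUN_AT_START = false
; the full wipe, run monthly per the idea above
SCHEDULE = @monthly
```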
fungi | looks like we also commented out /raw/* but maybe the reason they have it in theirs is to avoid duplicate search results for different views of the same content? | 16:43 |
clarkb | fungi: that would make sense I guess. Also the upstream robots.txt has two raw entries | 16:44 |
clarkb | oops three | 16:44 |
fungi | seems to be lots of copy-pasta | 16:57 |
clarkb | looking at gitea12 we have db entries for leaked disk entries. Reading the code, this implies the error is occurring either when listing/finding entries to delete or when deleting the db record for the archive. The last thing done is deleting the content on disk, which is present, as is the db record | 17:00 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Update our Gitea robots.txt from gitea.com's https://review.opendev.org/c/opendev/system-config/+/904868 | 17:07 |
fungi | gerrit replication has caught up | 17:07 |
fungi | i'm going to reenable 09 in ansible and haproxy now | 17:07 |
clarkb | fungi: you don't happen to still be logged into gitea09 do you? I think admin/monitor/cron should show running tasks and maybe gives us info on last results? | 17:07 |
clarkb | I don't think that data is persisted to the db though so it is probably pretty empty now due to the restart. Maybe at midnight utc we can check it and see it running | 17:08 |
fungi | pulling it up now | 17:08 |
fungi | which task are you interested in? there are 21 listed | 17:10 |
clarkb | fungi: in the robots.txt I thought you were going to uncomment the disallows for avatar and users | 17:10 |
clarkb | fungi: the delete archive one | 17:10 |
clarkb | let me find the exact name | 17:10 |
fungi | "Delete all repositories' archives (ZIP, TAR.GZ, etc..)" isn't scheduled | 17:10 |
fungi | it shows the "previous time" as when i clicked the button in the ui | 17:11 |
clarkb | "archive_cleanup" is the name in the code | 17:11 |
clarkb | ya delete all repositories is disabled by default | 17:11 |
fungi | aha, "Delete old repository archives" | 17:11 |
clarkb | that does the full clear which is the same thing you did by clicking the button. However we should once a day run a cleanup of older archives | 17:11 |
clarkb | ya that one | 17:11 |
fungi | schedule to run @midnight | 17:11 |
fungi | previous time was Jan 5, 2024, 4:10:35 PM | 17:12 |
fungi | which was when i restarted the container | 17:12 |
fungi | next time is Jan 6, 2024, 12:00:00 AM | 17:12 |
fungi | so it thinks it ran that task when the container started, which would explain the small reduction in disk utilization i observed at that time | 17:12 |
clarkb | yup by default that cron is set to run on startup as well | 17:13 |
clarkb | that at least confirms it is running which is good | 17:13 |
fungi | but doesn't explain why it didn't remove most of the archives | 17:13 |
clarkb | because now we can focus on why it isn't doing what we want it to do rather than figuring out if it even executes | 17:13 |
fungi | yep | 17:13 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Update our Gitea robots.txt from gitea.com's https://review.opendev.org/c/opendev/system-config/+/904868 | 17:15 |
fungi | gitea09 is out of the emergency disable list now | 17:15 |
fungi | and enabled in haproxy again | 17:16 |
clarkb | cross checking on gitea12 might be good too to ensure it is running daily | 17:17 |
clarkb | maybe it stopped running on the servers after being up for $time | 17:17 |
fungi | last ran on gitea12 Jan 5, 2024, 12:00:00 AM | 17:19 |
fungi | executions count is 2 | 17:19 |
fungi | which i suppose is since the gitea upgrade yesterday? (one at container start, and then one as scheduled) | 17:19 |
clarkb | ya that makes sense | 17:20 |
clarkb | so now we know it is definitely not running and also not clearing out entries we expect to be cleared out | 17:20 |
clarkb | I think the two main possibilities there are either an error listing/finding entries to delete or errors removing db rows causing short circuits in the entire process | 17:20 |
fungi | er, well definitely thinks it's running, but yeah not cleaning up what we want | 17:20 |
fungi | mmm, yeah db locking contention? | 17:21 |
clarkb | maybe? I guess deletions could hit lock problems (the listing should be fine though?) | 17:21 |
fungi | good point | 17:21 |
clarkb | fungi: I'm definitely not seeing any smoking guns. I made note of the next record to be deleted by gitea12, if I read things correctly, and can check that record on monday to see if one of the next three midnight runs gets it by then | 17:29 |
clarkb | until then I think things are manageable and this workaround of just deleting the entire archive seems fine. Honestly I think we should consider adding that cron job to gitea too, on a monthly basis | 17:30 |
fungi | sounds good | 17:30 |
fungi | keep in mind though, we had just shy of a month's archives on 09 from the look of things, and that filled up the rootfs. maybe weekly? | 17:31 |
clarkb | oh wow was it only a month. I guess so based on your timestamps | 17:31 |
clarkb | ya might have to be weekly | 17:31 |
clarkb | I've gone ahead and deleted my autohold for gitea 1.21.3 testing. I don't think it is useful for this archive stuff and we're upgraded now | 17:35 |
clarkb | fungi: also https://review.opendev.org/c/opendev/system-config/+/904777 is more gitea related testing for unrelated problems | 17:37 |
opendevreview | Merged openstack/project-config master: Add new NebulOuS projects: overlay-network-manager, security-manager https://review.opendev.org/c/openstack/project-config/+/904792 | 17:42 |
fungi | deploy of that ^ succeeded, so gitea09 is no longer breaking the manage-projects job | 17:54 |
fungi | (which was how i initially noticed the issue with that server) | 17:55 |
clarkb | woot and that should've also resolved the earlier failure? | 17:55 |
fungi | yes | 17:55 |
clarkb | pretty sure it would since we refresh all of gitea each time and then run gerrit only if gitea succeeds | 17:55 |
* clarkb should go do normal morning things that got neglected like eating breakfast | 17:58 |
fungi | yes, do that. when you're caught up, opinion on my earlier comments about zuul holding onto deleted branches? should i ask it for a reconfigure? | 18:00 |
clarkb | yes I suspect that is the known issue with not batching up deletions of branches and instead doing them all serially in a short period of time (it results in lost events) | 18:03 |
clarkb | the fix for that is to reconfigure the tenant that is affected | 18:03 |
clarkb | fungi: `docker exec zuul-scheduler_scheduler_1 zuul-scheduler tenant-reconfigure openstack` from scrollback in #openstack-infra | 18:10 |
opendevreview | Merged opendev/system-config master: Check for gitea template rendering errors https://review.opendev.org/c/opendev/system-config/+/904777 | 18:52 |
fungi | clarkb: it doesn't seem like a tenant-reconfigure is sufficient in this case, at least the deleted branch is still lingering in the dashboard | 18:58 |
fungi | https://review.opendev.org/admin/repos/openstack/automaton,branches and https://opendev.org/openstack/automaton/branches don't have stable/train but https://zuul.opendev.org/t/openstack/project/opendev.org/openstack/automaton still does | 19:05 |
clarkb | fungi: did you wait for the reconfiguration to complete? it takes like 20 ish minutes | 19:11 |
clarkb | not sure when it ran relative to you checking the list | 19:11 |
fungi | 2024-01-05 18:57:42,514 INFO zuul.GerritConnection: Got branches for openstack/automaton | 19:13 |
fungi | and then looks like it loaded configuration from automaton's branches at 19:08:44 | 19:13 |
fungi | no mention of loading configuration from stable/train though | 19:14 |
fungi | is there a later pass to clean up dropped branches? | 19:14 |
clarkb | fungi: I'm not sure of the order but I don't think zuul uses any of the new content until the process is fully complete (the zk db is versioned) | 19:15 |
fungi | maybe it's not done yet | 19:15 |
clarkb | if I grep for `reconfiguration` I see 2024-01-05 17:51:53,251 DEBUG zuul.Scheduler: Smart reconfiguration triggered but no finished message | 19:19 |
clarkb | oh hrm maybe I needed to grep -i | 19:19 |
clarkb | ok I was looking at the wrong scheduler and needed -i | 19:21 |
clarkb | 2024-01-05 19:18:06,648 INFO zuul.Scheduler: Reconfiguration complete (smart: False, tenants: ['openstack'], duration: 1251.448 seconds) | 19:21 |
clarkb | fungi: I think it is done now. Maybe refresh and check? | 19:21 |
fungi | clarkb: yep, now it's gone from https://zuul.opendev.org/t/openstack/project/opendev.org/openstack/automaton so i was just impatient | 19:22 |
clarkb | and ya it took almost exactly 21 minutes | 19:23 |
fungi | elodilles: ^ all cleaned up | 19:25 |
opendevreview | Clark Boylan proposed opendev/system-config master: DNM intentional gitea failure to hold a node https://review.opendev.org/c/opendev/system-config/+/848181 | 20:31 |
opendevreview | Clark Boylan proposed opendev/system-config master: Enable gitea delete_repo_archives cron job https://review.opendev.org/c/opendev/system-config/+/904874 | 20:31 |
clarkb | I put an autohold in place for that so we can check the admin/monitor/cron (or whatever the path was) to see that it fires as expected after Sunday | 20:32 |
fungi | yep, that's the path | 20:36 |
clarkb | fwiw I got that config block out of the example app.ini content in the gitea repo | 20:38 |
clarkb | so while undocumented I believe it to be valid (especially after finding the cron job in the source code) | 20:38 |
clarkb | https://104.130.4.31:3081/opendev/system-config is the held node. I manually downloaded the three archive file types for that repo which put them in the repo-archives dir on the held node | 23:36 |
clarkb | we should see them get cleaned up in ~24 minutes | 23:36 |
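The manual downloads described here would look something like the following; gitea serves archives at /<org>/<repo>/archive/<ref>.<ext>, the ref used on the held node is assumed to be master, and -k skips certificate verification on the test node:

```
curl -kO https://104.130.4.31:3081/opendev/system-config/archive/master.zip
curl -kO https://104.130.4.31:3081/opendev/system-config/archive/master.tar.gz
curl -kO https://104.130.4.31:3081/opendev/system-config/archive/master.bundle
```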
clarkb | I also realize the @weekly definition might conflict with the 24h definition of the daily cleanup because they'll both run at midnight | 23:36 |
clarkb | it might be better for us to specify a time like weekly at 0200 or whatever | 23:36 |
clarkb | or maybe they will handle that properly with locking I don't know | 23:37 |
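A sketch of the staggering idea from above: replace @weekly with an explicit crontab expression so the full cleanup doesn't coincide with the daily @midnight job (how gitea's cron parser counts fields should be verified; this assumes standard five-field syntax):

```
[cron.delete_repo_archives]
ENABLED = true
; 02:00 UTC on Sundays instead of @weekly's midnight firing
SCHEDULE = 0 2 * * 0
```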
clarkb | oh the daily won't cleanup actually | 23:37 |
clarkb | because they are less than 24 hours old | 23:37 |
clarkb | so these should be good for checking the cleanup tomorrow sunday at midnight utc (since the time in 22 minutes is saturday midnight?) | 23:38 |
clarkb | anyway as long as this doesn't completely explode over the weekend on the held node I think we can deploy it to prod and then monitor more long term behavior | 23:38 |
fungi | sounds right to me | 23:55 |