| nick | message | time |
|---|---|---|
| opendevreview | OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/962557 | 02:15 |
| *** | mrunge_ is now known as mrunge | 05:50 |
| *** | dtantsur_ is now known as dtantsur | 05:54 |
| *** | darmach9 is now known as darmach | 10:39 |
| *** | g-a-c0 is now known as g-a-c | 10:51 |
| *** | efoley_ is now known as efoley | 12:21 |
| fungi | infra-root: more interesting behavior that may be related to the weekend's zuul upgrade... an autohold from last week (0000000241 in the openstack tenant) had its held node spontaneously deleted. even more odd is that it can't be deleted now | 12:48 |
| fungi | the autohold can't be deleted now, i mean | 12:49 |
| fungi | i wonder if this is at all related to the "Exception loading ZKObject" errors tonyb noted on saturday | 12:49 |
| fungi | also https://zuul.opendev.org/t/openstack/autohold/0000000243 which i just created shows me data for 0000000241 instead | 12:52 |
| fungi | possibly because i recreated it with the same parameters as before? | 12:53 |
| fungi | (in between those i created 0000000242 but had the change format wrong so deleted it immediately, and that one was successfully removed) | 12:54 |
| fungi | https://review.opendev.org/c/zuul/zuul/+/960417 "Add subnode support to launcher" merged last week, and does make additions to the model | 12:58 |
| fungi | i'm trying to track down the node deletion in the scheduler debug logs but not having much luck yet, maybe it got reaped by cleanup on one of the launchers? | 13:05 |
| fungi | 2025-10-06 12:45:15,959 ERROR zuul.web: AttributeError: property 'min_request_version' of 'OpenstackProviderNode' object has no setter | 13:08 |
| fungi | that's from web-debug.log for my autohold-delete attempt on 0000000241 a few minutes ago | 13:09 |
| opendevreview | Pierre Riteau proposed opendev/zuul-providers master: Add cirros 0.6.3 images to cache https://review.opendev.org/c/opendev/zuul-providers/+/963173 | 13:10 |
| fungi | zl01 and zl02 both logged a "zuul.zk.ZooKeeper: Error playing back event WatchedEvent(type='NONE', state='CONNECTED', path='/zuul/nodes/nodes/c97202e2fa644f8cb8038b62466cc101')" on 2025-10-04 around 08:40 utc (a couple of minutes apart) | 13:12 |
| fungi | that's probably when they were upgraded | 13:13 |
| opendevreview | Pierre Riteau proposed opendev/zuul-providers master: Add cirros 0.6.3 images to cache https://review.opendev.org/c/opendev/zuul-providers/+/963173 | 13:13 |
| fungi | the launchers reported a slew of errors like that, i'm guessing one for each existing node at that point in time | 13:18 |
| fungi | and yeah, the underlying exception was the same AttributeError on min_request_version as my autohold-delete met with | 13:20 |
| fungi | i can't say for sure the exceptions tonyb saw mentioned in the log are from the same issue, because ZKObject._loadData() doesn't preserve tracebacks so all we know is that there was *some* exception involving a request revision property, but it seems likely | 13:46 |
| *** | ralonsoh_ is now known as ralonsoh | 14:00 |
| mnasiadka | Hmm, https://review.opendev.org/c/opendev/zuul-providers/+/963173 failed two jobs with some gitea related clone/checkout problems | 14:27 |
| fungi | mnasiadka: both on cfn/computing-offload looks like, so maybe there's something broken with that repo on one of our gitea backends... i'll try to isolate it | 14:43 |
| mnasiadka | yeah, that felt weird that it’s on the same repo | 14:44 |
| corvus | fungi: the revision errors are a red herring | 14:44 |
| fungi | corvus: oh, good to know. thanks! | 14:45 |
| clarkb | fungi: mnasiadka iirc that repo is the one with a bunch of opaque binary data in it and it is larger than nova. I asked horace to see if he could reach out to them but not sure if that happened | 14:45 |
| corvus | fungi: so aside from anything weird that happened during the upgrade, the current problem is that 243 has the wrong info? what is it supposed to say? | 14:47 |
| fungi | corvus: it's supposed to say 243 and not 241 | 14:48 |
| fungi | also 241 is not deletable, and the node that was held for 241 (and all other held nodes i think) vanished, maybe all the nodes that existed got "cleaned up" during the upgrade? | 14:49 |
| corvus | fungi: maybe try reloading? because 243 says 243 for me | 14:49 |
| fungi | oh, yep now it says 243. i wonder if the old page content got cached for some reason? | 14:50 |
| corvus | ah yeah, it looks like in the web ui, if you click on an autohold, it just shows you the first one it loaded | 14:50 |
| fungi | or old api response | 14:50 |
| fungi | got it, so that was just confusing me and unrelated | 14:50 |
| fungi | main issue(s) seem to be unexpected node deletion during the upgrade and pre-upgrade autoholds no longer being deleteable | 14:50 |
| corvus | okay, i think the min_request_version error is a plausible cause for the node being deleted during the upgrade. i'll check and make sure that isn't an ongoing problem. i think i have enough info to try deleting the autoholds now (which i'll try after refreshing the web ui) | 14:55 |
| fungi | on mnasiadka's job failures, i directly cloned cfn/computing-offload successfully from all 6 gitea backends, no errors | 14:55 |
| mnasiadka | So then recheck it is | 14:56 |
| fungi | so it doesn't seem like it's corrupt on a random backend at least | 14:56 |
| corvus | ah, it's the same error, so that answers that question. i'll get a fix soon. | 14:56 |
| fungi | corvus: yeah, sounds right, i haven't seen any node-related problems for any resources created after the upgrade completed | 14:58 |
| fungi | and like i said, i was able to delete an autohold i created today, it's just the one i created friday is undeletable | 15:00 |
| fungi | which would seem to point to them having different data | 15:00 |
| fungi | cfn/computing-offload is currently 487M after a fresh clone, and takes me 37 seconds to download. the last change to merge to the master branch was just over a month ago, so no recent changes that i can see | 15:06 |
| fungi | oh, though i think dib downloads all branches and tags too | 15:06 |
| clarkb | fungi: yes dib should fetch all branches and refs, but it should also do so from the state of the last build's cache | 15:07 |
| fungi | which in a check job should be empty | 15:07 |
| clarkb | it's possible the problems were simply internet connectivity related and we're more likely to experience an issue with the larger repos? | 15:07 |
| clarkb | fungi: no the cache on the image | 15:07 |
| fungi | oh! right | 15:07 |
| clarkb | in /opt/cache or whatever the path is, not the zuul repos | 15:07 |
| fungi | yeah, https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/source-repositories/extra-data.d/98-source-repositories#L214-L215 is the relevant update command i think | 15:12 |
| clarkb | do we have a log from the failure? | 15:15 |
| clarkb | the fetches are supposed to retry now iirc. In the past we had problems where we would try to fetch over flip flopping ip addresses | 15:16 |
| clarkb | `error: corrupt loose object '8c215a8f38400bfce658e8f66efff79010d764aa'` | 15:18 |
| fungi | https://zuul.opendev.org/t/opendev/build/aa1eca2c3be144e6937360b080247ba5/log/job-output.txt#2095-2103 | 15:18 |
| clarkb | `fatal: loose object 8c215a8f38400bfce658e8f66efff79010d764aa (stored in /opt/dib_tmp/dib_build.IwkOLp7G/mnt/opt/git/opendev.org/cfn/computing-offload/.git/objects/8c/215a8f38400bfce658e8f66efff79010d764aa) is corrupt` | 15:18 |
| clarkb | so I don't think that is the flip flopping ip address issue | 15:19 |
| clarkb | that seems like the underlying git cache is corrupt and we may need to clear that out. But first we have to determine if the corruption is in gerrit, gitea, or the disk cache on test node images | 15:19 |
| fungi | yeah, so i wonder if we somehow ended up with corrupt git caches of that repo just on both the debian-trixie and alma-10 images | 15:19 |
| fungi | only those two platforms failed for that buildset anyway | 15:20 |
| clarkb | fungi: the image builds for trixie and alma build on noble iirc | 15:20 |
| clarkb | (just keep in mind they don't build themselves) | 15:20 |
| clarkb | ya the inconsistent behavior implies gerrit is not at fault. Could be a specific gitea backend or a specific cloud image | 15:20 |
| fungi | oh, are those the only two platforms we're building on noble? | 15:20 |
| clarkb | fungi: no I think everything builds on noble | 15:20 |
| fungi | yeah, just wondering if those were the two most recently uploaded images and that's the reason only they failed so far | 15:21 |
| fungi | i need to pop out to lunch, but can keep digging into this in about an hour | 15:21 |
| clarkb | or some specific cloud bit flipped, or a specific test node image corrupted and everywhere was still running an older version due to upload delays, or running a newer version and these were scheduled to older ones that were sad | 15:22 |
| clarkb | I think step 1 here is to check each gitea backend | 15:22 |
| clarkb | and if those look good then we dig into the test images themselves | 15:22 |
| clarkb | both failed builds ran in rax flex sjc3 | 15:22 |
| clarkb | successful jobs ran in rax flex dfw3, vexxhost ca-ymq-1, and rax ord (maybe others too) | 15:23 |
| clarkb | haven't found a successful job in sjc3 yet so it could be specific to the image upload in sjc3 (this is where the inability to confirm hashes of cloud images via glance becomes problematic) | 15:24 |
| clarkb | I've got clones going against 09, 10, and 11 right now. These are not very fast. When complete I'll do 12, 13, and 14 | 15:28 |
| clarkb | gitea09 looks good | 15:30 |
| clarkb | oh though I'm realizing I'm just doing a git clone not the broader command | 15:31 |
| clarkb | 9, 10, and 11 all clone cleanly at least | 15:35 |
| clarkb | as does 12. I'm beginning to suspect this is an issue with the repo cache | 15:36 |
| clarkb | in theory the repo cache will automatically get cleared out as the next round of builds will either fail on the broken images (so not propagate) or succeed on happy caches and then the result of those successes will be uploaded to all the clouds including the ones with currently broken images | 15:37 |
| clarkb | corvus: ^ assuming that the noble image rax flex sjc3 does have a corrupted git repo in its git cache do you think ^ is accurate and it will eventually "self heal" | 15:38 |
| clarkb | I think we can manually delete that image from sjc3 if we want to speed the process up | 15:38 |
| clarkb | I also wish we recorded if the git clone from the cache or the update after the cache clone was the command that failed | 15:40 |
| clarkb | I suspect the latter because the clone would just copy the object and packfiles as-is iirc, then when we try to operate on them in the subsequent command it complains | 15:41 |
| clarkb | all 6 clones from each respective gitea backend succeeded for me | 15:41 |
| clarkb | I'm going to git fsck each one now just to double check | 15:42 |
| clarkb | `git fsck --full --strict --progress` reports no issues in any of the local fresh clones | 15:44 |
| clarkb | https://zuul.opendev.org/t/opendev/build/e6d17f2733f3406f83daec9a962674bc/log/job-output.txt is another failure from the recheck and also in rax flex sjc3 so ya I think that particular cloud image is bad | 15:48 |
| clarkb | I'll manually boot a test node on that image in that cloud to see if we can learn more | 15:48 |
| clarkb | but first I need to reboot for local system updates | 15:48 |
| clarkb | ubuntu-noble-cd24b1135a324bd98beeebb955bee932 is the newer of the two noble images in raxflex sjc3 so I believe this is the one in use | 15:55 |
| clarkb | infra-root: root@159.135.206.48 will get you onto my test node | 16:00 |
| clarkb | git fsck on the cache repo on that image shows the same error as the image builds do | 16:02 |
| clarkb | the object file does exist (so not the missing portion of the error but the corrupt portion) and is ~2.9MB large | 16:03 |
| clarkb | I'm fscking every repo in the cache on that image now | 16:06 |
| clarkb | we had a successful build in dfw3 so I'll boot a test node there and run the same fsck to compare | 16:08 |
| corvus | catching up | 16:11 |
| clarkb | 174.143.59.196 is the ip address for the dfw3 node | 16:12 |
| clarkb | a number of other repos had issues in the sjc3 node but so far only some minor warnings in the dfw3 node | 16:15 |
| corvus | clarkb: without https://review.opendev.org/955797 my guess is we will struggle to self-heal because we'll keep failing builds. with that, i think we would slowly self-heal. in either case, deleting the known bad image should speed things up. | 16:15 |
| clarkb | corvus: ah for some reason I thought 955797 was already in place | 16:16 |
| corvus | (and to be clear, without https://review.opendev.org/955797, deleting the known bad image should mean immediate healing) | 16:16 |
| corvus | yeah i keep thinking that too :) | 16:16 |
| clarkb | I think this is another data point indicating that 955797 is a good idea | 16:16 |
| clarkb | corvus: what is the process for deleting ubuntu-noble-cd24b1135a324bd98beeebb955bee932? | 16:16 |
| clarkb | if my test in dfw3 comes back clean I think we should proceed with that as it appears to be a problem isolated to this specific cloud image | 16:17 |
| corvus | clarkb: i usually use the web ui, i think there's a zuul-client command. | 16:17 |
| clarkb | ack I'll do that once my tests conclude if they show the issue isn't more widespread | 16:17 |
| clarkb | I'm going to assume that this is some bitflip or data conversion corruption error on the cloud side after we've uploaded the data since the same data has apparently uploaded elsewhere without issue | 16:18 |
| corvus | clarkb: note you must use the build tenant: opendev | 16:18 |
| clarkb | corvus: and that will then affect all other tenants? | 16:18 |
| corvus | yep | 16:19 |
| clarkb | dangling commit b0cd0dfe6eb8ba133fcfc09e6a6b91edf5de6f96, dangling commit 5ad6519069f2ce78fd90366abb131be265c40d9c, and a bunch of gitignoreSymlink: .gitignore is a symlink warnings are what is found on dfw3 | 16:20 |
| clarkb | none of which are fatal | 16:20 |
| clarkb | on sjc3 starlingx/test, openstack/openstack-ansible, openstack/openstack-manuals, openstack/puppet-openstack-cookiecutter, x/ci-cd-pipeline-app-murano, cfn/computing-offload, and possibly more have corruption issues | 16:22 |
| clarkb | so ya seems like a fairly widespread data corruption issue with this particular image upload that isn't replicated by other uploads. I'll work on deleting the image and then clean up my test nodes | 16:22 |
| clarkb | unless anyone wants to do more investigation let me know and I can keep my test nodes up | 16:23 |
| fungi | clarkb: yeah, that's why i linked to the command in dib, i had already done a clone and even a clone --mirror from all the backends successfully | 16:24 |
| clarkb | ubuntu-noble-cd24b1135a324bd98beeebb955bee932 has been deleted from sjc3 | 16:25 |
| clarkb | infra-root is there any interest in keeping my test nodes around at this point or should I delete them? | 16:26 |
| fungi | i think you've gotten all the info from it that i would have | 16:27 |
| clarkb | corvus: is it safe to dequeue a buildset that is doing image builds? This is for check which I think should be fine since that doesn't produce any stored results | 16:28 |
| clarkb | I want to dequeue 963173 so that we can reenqueue it quicker | 16:29 |
| clarkb | note I checked the cloud image listing and it was cleared out there | 16:31 |
| clarkb | also looks like we may have leaked old nodepool images | 16:32 |
| clarkb | images like ubuntu-noble-1752791456. I'm guessing those should all be deleted manually at this point. I'll make a note for that but not sure when I'll be able to get to it | 16:32 |
| corvus | clarkb: absolutely safe to dequeue | 16:38 |
| corvus | remote: https://review.opendev.org/c/zuul/zuul/+/963203 Fix providernode assignment upgrade [NEW] | 16:38 |
| opendevreview | Merged openstack/project-config master: Replace 2025.2/Flamingo key with 2026.1/Gazpacho https://review.opendev.org/c/openstack/project-config/+/957467 | 16:38 |
| corvus | clarkb: fungi the providernode change above is something we should get into opendev production; we're going to have lingering bad data until we restart with it. | 16:39 |
| fungi | thanks | 16:39 |
| clarkb | my test nodes have been deleted and I dequeued and rechecked the cirros 0.6.3 change | 16:40 |
| clarkb | corvus: I'll review that change now | 16:41 |
| clarkb | as I think I've concluded the bad image builds situation for the moment | 16:42 |
| fungi | same | 16:42 |
| clarkb | side note there are so many new admin buttons in the zuul ui now | 16:42 |
| fungi | are you logged in? | 16:42 |
| fungi | i hadn't noticed them, but i don't tend to authenticate to the webui | 16:42 |
| clarkb | yes I used the web ui to delete the image from sjc3 | 16:44 |
| clarkb | (and that required logging in first) | 16:44 |
| fungi | neat | 16:45 |
| clarkb | corvus: one question on https://review.opendev.org/c/zuul/zuul/+/963203 | 16:53 |
| fungi | it seems like that test case is only temporarily useful anyway, so i don't see much point in overengineering it | 16:54 |
| corvus | replied (and ++ to fungi's point) | 16:58 |
| clarkb | ack +2 from me | 16:59 |
| fungi | yeah, lgtm. thanks for the quick fix! | 17:08 |
| clarkb | fungi: you may want to weigh in on https://review.opendev.org/955797 | 17:13 |
| clarkb | also we could add a git fsck step to the image builds to force them to fail early and not upload if the corruption happens at that point (though I don't think that was the case here since only one region seemed affected) | 17:14 |
| opendevreview | Merged opendev/project-config master: Use built images even if some failed https://review.opendev.org/c/opendev/project-config/+/955797 | 17:15 |
| fungi | ah yeah i was meaning to look at that one, merged now! | 17:15 |
| fungi | does make image validation testing harder in the future if we want to add it, i think? but we can figure that out when we get there | 17:17 |
| corvus | it might be worth an audit of checksums -- i'm 97% sure we're passing md5 and sha256 all the way to openstacksdk for the image uploads; but i don't know what happens after that. | 17:17 |
| corvus | (also, if these don't actually do anything -- boy oh boy could we make zuul-launcher and the image build jobs a lot simpler :) | 17:18 |
| clarkb | corvus: I think they do something but I'm not sure how deep that validation goes with glance | 17:21 |
| clarkb | like I know that if glance converts the image on the backend after download there is no way as the end user to check that the image you uploaded with glance is the one it received after the fact because the recorded checksum is for the translated/converted data | 17:22 |
| clarkb | I remember looking into this years and years ago after we had a corrupted image with nodepool that was a short write and checksums have some value but not the entire value you expect them to have. But I don't recall all the details and things may have changed since | 17:22 |
| corvus | mmm... and that conversion is another step where error could be introduced too | 17:23 |
| clarkb | yup | 17:23 |
| corvus | we could use zuul image validation jobs to check for this case. we've always resisted having a validation that checked whether our jobs should succeed -- but one that did basic checks like "does it boot? are the git repos ok?" seems like it could be in scope. | 17:24 |
| clarkb | ya checking fundamental attributes of the image itself seems safe compared to only allowing images to be uploaded when specific test jobs pass | 17:25 |
| corvus | that is a feature that is implemented now in zuul-launcher. i don't have time to make jobs for that, but if someone wants to, i'm happy to answer questions. | 17:25 |
| corvus | jobs + pipeline config i should say | 17:26 |
| clarkb | is that a check that can be configured to run against the image upload in each cloud region or would it just be once per upload? | 17:26 |
| clarkb | I think this will make a good preptg topic (which starts tomorrow) so I'll add it there in a bit | 17:27 |
| corvus | i may be having trouble parsing the question because i think: once per upload == image upload in each cloud region | 17:28 |
| corvus | maybe one of those was supposed to be "once per image build artifact"? | 17:28 |
| clarkb | corvus: yes sorry is it once per build artifact or can we do it once per upload | 17:29 |
| clarkb | I think for this particular concern we need to check each upload in each cloud region independently | 17:29 |
| corvus | got it. it's once per upload. so yes, it addresses this use case. | 17:30 |
| clarkb | checking the central build artifact still has some potential value (if the corruption happens early for example) | 17:30 |
| corvus | yeah, the way to check the artifact would be in the current build job (like was proposed earlier) | 17:30 |
| fungi | right, we had plenty of trouble back when devstack smoketest was our image validator, but i agree something simpler and less prone to random breakage could make sense | 17:32 |
| clarkb | ok this item is added to the preptg agenda | 18:21 |
| clarkb | I guess it's a good time to remind people that now is a great time to get your ideas on that agenda as we'll be diving into things starting tomorrow | 18:21 |
| clarkb | https://review.opendev.org/c/opendev/zuul-providers/+/963173 passes now after the noble image cleanup | 18:29 |
| clarkb | infra-root I think if we approve ^ that should cause us to update images after a few days of not doing so due to the noble image problem | 18:29 |
| clarkb | then as followup we should consider if we can/should delete any of those older cirros images | 18:30 |
| *** | g-a-c3 is now known as g-a-c | 18:47 |
| clarkb | TheJulia: re the dib and ai code review topics for the opendev pre ptg: since you can't make wednesday and there is the possibility we don't need the extra time on Thursday, tomorrow is probably the best time to join. Does something like 1900 UTC tomorrow work for you? I can ensure we get to those two topics around that timeframe if so | 21:15 |
| clarkb | alternatively if tomorrow doesn't work then we can just use the Thursday time for those two items if we finish everything else tomorrow and wednesday | 21:16 |
| clarkb | that would be 1500 Thursday as the alternative | 21:17 |
| TheJulia | Yeah, that should be able to work | 21:19 |
| TheJulia | 1900 that is | 21:20 |
| clarkb | great I'll pencil that in on the etherpad so we don't forget. See you there | 21:20 |
| TheJulia | Thanks! | 21:20 |
| opendevreview | Merged opendev/zuul-providers master: Add cirros 0.6.3 images to cache https://review.opendev.org/c/opendev/zuul-providers/+/963173 | 21:47 |
| clarkb | corvus: do we not have a status for the image uploads to clouds? ^ that change landed and promoted successfully. But if I go here: https://zuul.opendev.org/t/opendev/image/ubuntu-noble I only see the old uploads (no in-progress uploads for example) | 22:17 |
| fungi | clarkb: at the bottom i see a bunch that are in a "pending" state | 22:20 |
| clarkb | fungi: I think those have just shown up so I may have been too impatient and checked before the launchers started processing the promoted builds | 22:21 |
| clarkb | so i guess the gap here is between the promotion job running and the launcher processing it (maybe because it is doing one image at a time and if I checked other images I would've seen them sooner?) | 22:22 |
| corvus | it's actually the scheduler that makes those on report and it's all synchronous. but zuul-web relies on a cache which could have a small lag. regardless, i would expect that clicking "refresh" after 5 seconds or so should show the right data. | 22:24 |
| corvus | (btw, the pending uploads are the ones you're looking for; you can match the build id of the artifacts on that page to https://zuul.opendev.org/t/opendev/build/f0fd8c6fdc134142971c083839b73229) | 22:27 |
| corvus | (eventually the build id on that page should be a link to that) | 22:27 |
| clarkb | that appears to be a link already | 22:29 |
| clarkb | sounds like maybe I just wasn't refreshing hard enough. I'll try to keep that in mind for next time and check if it works more like how I would expect it to | 22:30 |
| corvus | no i mean on https://zuul.opendev.org/t/openstack/image/ubuntu-noble the text "f0fd8c6fdc134142971c083839b73229" will someday be a link to https://zuul.opendev.org/t/opendev/build/f0fd8c6fdc134142971c083839b73229 | 22:35 |
| corvus | today you just have to copy/paste | 22:35 |
| corvus | either way, that's how you find out that those image uploads correspond to the artifacts from that job | 22:37 |
| corvus | ah, it is a link in the opendev tenant, just not openstack | 22:37 |
| fungi | i do love that you can click through from an image to the build results with the image build log and the image itself as a downloadable artifact | 22:38 |
| clarkb | ah I am in the opendev tenant since you said I had to be there earlier to delete the bad image upload | 22:38 |
| corvus | i guess i did get around to implementing that ;) | 22:38 |
| fungi | yeah, i keep initially forgetting and looking at images in the openstack tenant and wondering why they don't link anywhere, then remembering you have to be in the opendev tenant since that's where they built | 22:39 |
| corvus | fungi: yeah, there should be no mystery/guessing; everything is traceable :) | 22:39 |
| fungi | exactly, a very elegant design! | 22:39 |
| fungi | awesome work | 22:40 |
| clarkb | corvus: did you see my note about what I think are the old nodepool image uploads? Those should be safe to delete now like the old instances right? | 23:23 |
| corvus | yep! | 23:27 |
| clarkb | ok I'll try to get around to that between meetings tomorrow | 23:33 |
| clarkb | oh I also have a dentist appointment tomorrow afternoon so once meetings are done I may not be around much | 23:33 |
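
A side note on the `AttributeError` fungi pasted at 13:08 ("property 'min_request_version' of 'OpenstackProviderNode' object has no setter"): that is the generic error Python raises when something assigns through a `@property` that defines no setter. The sketch below is a made-up stand-in purely to show the mechanism; it is not Zuul's actual model code.

```python
# Hypothetical class illustrating the "property ... has no setter" error.
class ExampleNode:
    def __init__(self, version):
        self._version = version

    @property
    def min_request_version(self):
        # No @min_request_version.setter is defined, so this attribute is read-only.
        return self._version


node = ExampleNode(1)
print(node.min_request_version)  # -> 1
try:
    node.min_request_version = 2  # writing through the read-only property
except AttributeError as exc:
    # On Python 3.11+ this prints:
    # property 'min_request_version' of 'ExampleNode' object has no setter
    print(exc)
```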
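
For the corruption sweep clarkb described between 16:02 and 16:22 (running `git fsck` against every repository in the image's git cache), a minimal sketch of that kind of check is below. The cache root and the namespace/repo layout are assumptions inferred from the path in the 15:18 error message; adjust them to the node's actual layout.

```python
# Sketch: run `git fsck` in every cached repository and report failures.
# /opt/git/opendev.org and the two-level namespace/repo layout are assumptions.
import pathlib
import subprocess

CACHE_ROOT = pathlib.Path("/opt/git/opendev.org")


def fsck_all(root: pathlib.Path) -> list[pathlib.Path]:
    """Return the repositories under root where `git fsck` exits non-zero."""
    bad = []
    for git_dir in sorted(root.glob("*/*/.git")):
        repo = git_dir.parent
        result = subprocess.run(
            ["git", "-C", str(repo), "fsck", "--full", "--strict"],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            bad.append(repo)
            print(f"CORRUPT: {repo}\n{result.stderr.strip()}")
    return bad


if __name__ == "__main__":
    broken = fsck_all(CACHE_ROOT)
    print(f"{len(broken)} corrupt repositories found")
```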
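
On the checksum audit corvus raised at 17:17: a hedged sketch of what passing md5 and sha256 through to openstacksdk can look like, assuming the SDK's cloud-layer `create_image()` call (which accepts `md5`/`sha256` keyword arguments). As the log notes, this only protects the transfer itself; if the cloud converts the image format after upload, glance's recorded checksum describes the converted data. The cloud name and file name below are placeholders.

```python
# Sketch: compute checksums locally and hand them to openstacksdk at upload time.
# "example-cloud" and the qcow2 filename are placeholders, not real config.
import hashlib

import openstack


def file_digests(path: str) -> tuple[str, str]:
    """Compute md5 and sha256 of a file without reading it all into memory."""
    md5 = hashlib.md5()
    sha256 = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()


md5sum, sha256sum = file_digests("ubuntu-noble.qcow2")
conn = openstack.connect(cloud="example-cloud")
image = conn.create_image(
    name="ubuntu-noble-example",
    filename="ubuntu-noble.qcow2",
    md5=md5sum,
    sha256=sha256sum,
    wait=True,
)
# Note: the checksum glance reports may describe converted data, not the uploaded file.
print(image.id, image.checksum)
```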