| opendevreview | OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/957995 | 02:32 |
|---|---|---|
| opendevreview | Michael Still proposed openstack/diskimage-builder master: Automatically accept "dnf mark" commands. https://review.opendev.org/c/openstack/diskimage-builder/+/960329 | 03:33 |
| opendevreview | Michal Nasiadka proposed openstack/diskimage-builder master: almalinux-container: Add support for building 10 https://review.opendev.org/c/openstack/diskimage-builder/+/960336 | 07:01 |
| opendevreview | Merged openstack/diskimage-builder master: Automatically accept "dnf mark" commands. https://review.opendev.org/c/openstack/diskimage-builder/+/960329 | 08:03 |
| opendevreview | Michal Nasiadka proposed openstack/diskimage-builder master: almalinux-container: Add support for building 10 https://review.opendev.org/c/openstack/diskimage-builder/+/960336 | 09:30 |
| opendevreview | Merged zuul/zuul-jobs master: Remove deprecated javascript jobs https://review.opendev.org/c/zuul/zuul-jobs/+/921067 | 13:12 |
| opendevreview | Merged zuul/zuul-jobs master: Remove version defaults for nodejs jobs https://review.opendev.org/c/zuul/zuul-jobs/+/957219 | 13:18 |
| opendevreview | Michal Nasiadka proposed openstack/diskimage-builder master: almalinux-container: Add support for building 10 https://review.opendev.org/c/openstack/diskimage-builder/+/960336 | 13:24 |
| fungi | clarkb: i think what happened is that the upgrade from focal to jammy disabled the ppa but didn't remove the packages we had installed from it, then the upgrade from jammy to noble had some conflicting dependencies and uninstalled most of them (all but the dkms package) | 13:40 |
| fungi | so on the other servers we're running the focal builds of our afs packages on jammy right now, but since it seems to be working (and they're all the same afs source version anyway) i'm not inclined to disrupt things further by upgrading them again before the upgrade to noble | 13:41 |
| fungi | as for lists.o.o, sounds like we're getting somewhere and database profiling/tuning will be next on the agenda | 13:42 |
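In case it's useful when that agenda item comes up, a hedged sketch of some generic first checks for MariaDB profiling; on this deployment the client would presumably need to be run inside the mariadb container, and none of the container or credential details are shown here.

```shell
# Generic MariaDB profiling starters (run via the mariadb container's client;
# connection details omitted). These are standard server variables, not
# anything specific to the mailman deployment.
mysql -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 1;"
# Compare logical read requests with reads that actually hit the disk:
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';"
```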
| fungi | back on afs, as expected the docs volume move from ord to dfw is still in progress | 13:45 |
| opendevreview | Michal Nasiadka proposed openstack/diskimage-builder master: Remove nodepool based testing https://review.opendev.org/c/openstack/diskimage-builder/+/952953 | 14:08 |
| clarkb | fungi: weird re the package selection choices | 15:29 |
| fungi | i guess it doesn't seem all that weird to me, living most of my life on debian/unstable and wrangling those sorts of inconsistencies | 15:30 |
| clarkb | it must've rerun the dkms build against the new kernel though? | 15:30 |
| clarkb | but ya since the afs version doesn't change I don't think there is any real reason to upgrade the package at that point. | 15:30 |
| fungi | yes, that part was still working | 15:30 |
| fungi | i mean, in some ways it's a testament to shared library symbol management that you can upgrade 2 years worth of packages and others continue to work with them just fine | 15:31 |
| clarkb | I have a meeting in 1.5 hours but afterwards we might want to sync up on the mariadb for lists situation if you have time | 15:35 |
| clarkb | I think my first step there is still to benchmark the disk io using something like fio just to see if there is something horribly wrong or potentially solvable via an ssd volume | 15:36 |
| fungi | yeah, i think i'm going to disappear briefly for lunch, but will be around | 15:36 |
| opendevreview | Clark Boylan proposed opendev/system-config master: Expand old chrome UA rules https://review.opendev.org/c/opendev/system-config/+/960399 | 15:49 |
| fungi | docs afs volume move is still going | 16:48 |
| clarkb | this is across the internet and not within the same region which explains some of the slowness | 16:48 |
| fungi | right, also one of our larger and most active wrt updates | 16:49 |
| *** | dhill is now known as Guest26276 | 16:57 |
| clarkb | fungi: I'm around now if we want to look at lists | 17:39 |
| clarkb | (I'm also happy to just dive in myself, not sure if you are interested in following along) | 17:39 |
| clarkb | I probably need to read the fio manpage first | 17:40 |
| clarkb | oracle cloud (https://docs.oracle.com/en-us/iaas/Content/Block/References/samplefiocommandslinux.htm) suggests something like: `fio --direct=1 --ioengine=libaio --bs=4k --iodepth=256 --runtime=120 --eta-newline=1 --numjobs=4 --time_based --group_reporting --rw=read --size=2G --name=clarkb-test` to test sequential read speeds for database use cases. I have this running on my local | 18:00 |
| clarkb | machine first. Fio has the potential to impact the running server so want to be careful | 18:00 |
| clarkb | that reports `read: IOPS=532k, BW=2080MiB/s (2181MB/s)(244GiB/120001msec)` for my local nvme device | 18:02 |
| clarkb | maybe we want to start with something smaller on lists though. Like 5 seconds and 250MB or so? | 18:05 |
| clarkb | note that it writes numjobs files of the given size to the local directory, which it then performs the reads against | 18:05 |
| clarkb | it does not appear to delete them when done | 18:05 |
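For reference, a minimal sketch of the scaled-down run being proposed here, assuming a throwaway ~/fio_test directory (the one mentioned later in the log); the flags match the Oracle example with only runtime and size reduced, and the leftover data files are removed afterward.

```shell
# Scaled-down sequential read test in a scratch directory, so the data files
# fio leaves behind are easy to find and remove afterward.
mkdir -p ~/fio_test && cd ~/fio_test
fio --direct=1 --ioengine=libaio --bs=4k --iodepth=256 --runtime=3 \
    --eta-newline=1 --numjobs=4 --time_based --group_reporting \
    --rw=read --size=200M --name=clarkb-test
# fio names its data files after the job (clarkb-test.0.0 and so on) and does
# not delete them, so clean up once done.
rm -f ~/fio_test/clarkb-test.*
```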
| clarkb | thoughts on the safety of such a thing while lists is running (and possibly busy / sad) ? | 18:06 |
| fungi | i mean, it can't get much more sad than it already is on a regular basis, short of being entirely offline | 18:07 |
| fungi | i think the e-mail receipt/processing/forwarding isn't dependent on the database at all, other than to possibly do subscriber lookups for filtering | 18:08 |
| clarkb | fungi: ya but the whole system shares one device so potentially if we load it up more things can get sad? | 18:09 |
| fungi | fio looks like it's just focused on block device performance though | 18:09 |
| fungi | not anything database-specific | 18:10 |
| clarkb | it's a generic io testing tool. The idea is you configure it to mimic a use case. In this case databases should be sequential read heavy | 18:10 |
| clarkb | so we configure it to test sequential reads | 18:10 |
| fungi | i guess you're wanting to see if that device is sad before we dig into database performance problems? | 18:10 |
| clarkb | yes | 18:11 |
| clarkb | there isn't much point in trying to tune mariadb if the disk is always going to be too slow | 18:11 |
| clarkb | my hunch here is that switching to an ssd based volume for mariadb will help a lot | 18:11 |
| fungi | seems reasonable, sure | 18:11 |
| clarkb | but I'm hoping to actually measure some of that beforehand. But I also don't want to break the system by sending a bunch of extra load at it | 18:11 |
| fungi | it looks less heavily-loaded than usual at the moment | 18:12 |
| fungi | seems like as good a time as any | 18:12 |
| clarkb | I have installed fio on lists | 18:15 |
| clarkb | I'm setting runtime to 3 seconds and file sizes to 200M and I'm doing it in ~clarkb/fio_test | 18:15 |
| clarkb | `read: IOPS=28.8k, BW=113MiB/s (118MB/s)(344MiB/3054msec)` is the result of that test | 18:17 |
| clarkb | dmesg -T doesn't show any complaints. I guess I'll do a longer test now, 30 seconds? | 18:17 |
| fungi | sounds great | 18:18 |
| fungi | docs afs volume move finished in 19h40m | 18:19 |
| clarkb | `read: IOPS=30.5k, BW=119MiB/s (125MB/s)(3584MiB/30033msec)` that seems fairly consistent over short periods of time | 18:19 |
| clarkb | (still much less zoomy than my nvme drive, but not sure if those are bad numbers for mariadb) | 18:19 |
| fungi | did a vos release on docs now just for safety, but vos listvldb shows no rw volumes on afs01.ord any longer, and the expected ones on afs01.dfw | 18:22 |
| fungi | i'm going to get the larger batch of afs02.dfw to afs01.dfw moves going now, and will then look at further upgrades to noble while that's underway | 18:23 |
| clarkb | fwiw I don't think those fio numbers point at anything drastically wrong. I would expect that to be plenty of bw for the amount of total transfer that iotop reported for mariadb | 18:23 |
| fungi | though obviously won't upgrade afs02.dfw until the rw moves off of it are done | 18:23 |
| clarkb | mariadb did a total read amount of like 10G over several minutes yesterday. Which seems well below 120MB/s | 18:24 |
| fungi | clarkb: so we probably shouldn't expect a huge performance boost from moving the db to a dedicated ssd-backed cinder volume, is what i'm hearing? | 18:24 |
| clarkb | fungi: thats kind of what I'm thinking based on ^ | 18:24 |
| clarkb | unless there is noisy neighbor problems or something like that hitting us at times that are not now | 18:24 |
| fungi | right, i'm starting to wonder if right now things are fine, and our i/o bandwidth ends up horribly oversubscribed at other times | 18:24 |
| clarkb | ya maybe we need to rerun this fio command when we notice slowness? | 18:25 |
| fungi | though trying to load the moderation queue for openstack-discuss just took my browser about 80 seconds wall clock time | 18:26 |
| fungi | after that, deleting a message from the queue only took roughly 5 seconds though | 18:27 |
| opendevreview | Michal Nasiadka proposed openstack/diskimage-builder master: almalinux-container: Add support for building 10 https://review.opendev.org/c/openstack/diskimage-builder/+/960336 | 18:27 |
| fungi | feels like disk access vs resident memory speed differences | 18:27 |
| clarkb | or maybe we're filling up some queues (in apache, uwsgi, or mariadb) and you have to wait to get your spot in the line? | 18:28 |
| clarkb | I just reran my test and got `read: IOPS=24.9k, BW=97.2MiB/s (102MB/s)(2924MiB/30095msec)` so worse but not terribly so | 18:28 |
| fungi | also system load has climbed to 5, where it was 1.something just a few minutes ago | 18:28 |
| fungi | and now it's started dropping again | 18:28 |
| fungi | ooh, we probably haven't looked at apache stats | 18:29 |
| fungi | good call, the scorecard might have some insights | 18:29 |
| clarkb | fungi: I think our uwsgi config is configured with 2 processes and 2 threads. That seems maybe low | 18:30 |
| clarkb | we have 4vcpu there and if we're hitting a system load of 5 that seems like we're probably loading things up pretty well already. We could try say 4 processes and double the total number of threads? | 18:32 |
| clarkb | internet suggests 2 * cpu for process count | 18:34 |
| clarkb | so ya maybe bump that to 4 and see if it helps. If nothing bad happens bump all the way to 8? | 18:34 |
| fungi | sounds great to me, worth a try for sure | 18:37 |
| opendevreview | Clark Boylan proposed opendev/system-config master: Double the total mailman3 uwsgi processes to 4 https://review.opendev.org/c/opendev/system-config/+/960412 | 18:40 |
| clarkb | fungi: so I notice that ^ we're setting the value as a hardcoded value in the container itself. We could bind mount that file in and make the rtt on these updates a bit more efficient. Do you think we should do that now too? | 18:41 |
| fungi | maybe worth checking how much our copy of that file deviates from the one in maxking's repo | 18:43 |
| fungi | i can't remember if we've edited it at all up to now | 18:43 |
| fungi | but yeah, it's configuration and if we're changing it then we're probably better off mounting our copy over it | 18:44 |
| clarkb | there is no difference before my change | 18:44 |
| fungi | not bothering with container rebuilds just to alter a setting | 18:44 |
| clarkb | ok let me start on a followup change to bind mount that over instead | 18:45 |
| fungi | then that would explain why we didn't mount in our own copy previously. sounds like a great idea | 18:45 |
| clarkb | er not a followup change. I'll do a second patchset | 18:46 |
| fungi | remember to include a breadcrumb comment at the top indicating the upstream repo commit it's copied from and what we altered | 18:46 |
| fungi | i think there are a few others where i did that | 18:46 |
| fungi | makes it easier for when we update later | 18:46 |
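To make the bind-mount idea concrete, a rough sketch; the in-container path, host-side path, and service name are assumptions for illustration and would need to match the actual maxking image layout and our ansible role.

```
# uwsgi.ini -- copied verbatim from the upstream image apart from the worker
# counts, with a breadcrumb comment noting the upstream commit it came from
[uwsgi]
processes = 4
threads = 2
# ... remainder unchanged ...

# docker-compose.yaml, mailman-web service (paths are illustrative only):
volumes:
  - /var/lib/mailman/web-uwsgi.ini:/opt/mailman-web/uwsgi.ini:ro
```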
| fungi | we're probably due for another mailman update soon, but i know they've got a release pending that will add python 3.13 support so worth waiting for that i think | 18:47 |
| fungi | afs moves from afs02.dfw to afs01.dfw are in progress. it skipped debian because the volume was already locked by another writer but i'll come back around to get that and any others that don't move in the first attempt | 18:50 |
| fungi | working on upgrading afs01.ord to noble next... i'll adjust /etc/network/interfaces and /etc/default/grub similar to afs01.dfw, then post-upgrade i'll add our openafs ppa and reinstall the missing packages from it. i *think* that's all the gotchas? | 18:52 |
| opendevreview | Clark Boylan proposed opendev/system-config master: Double the total mailman3 uwsgi processes to 4 https://review.opendev.org/c/opendev/system-config/+/960412 | 18:52 |
| clarkb | fungi: those three items are all that I recall from afs01.dfw. | 18:53 |
| clarkb | was the docs volume the only RW volume on afs01.ord? | 18:53 |
| fungi | no, it was merely the last one i moved | 18:53 |
| fungi | there were 5 others but they were much smaller | 18:53 |
| fungi | project.airship was the next largest volume, for reference | 18:54 |
| fungi | i saved docs for last because i knew it would take a while and i wanted it to happen in parallel with my meatspace low-power standby recharge | 18:54 |
| opendevreview | Clark Boylan proposed opendev/system-config master: Double the total mailman3 uwsgi processes to 4 https://review.opendev.org/c/opendev/system-config/+/960412 | 18:55 |
| clarkb | got it | 18:55 |
| fungi | i stuck the list i've been working from in a paste, lemme see if i can find it in scrollback real quick | 18:56 |
| fungi | clarkb: https://paste.opendev.org/show/bWxZSt3nyaG8bxa3yW54/ | 18:57 |
| fungi | that larger list at the top is the one i'm working from, though i hand-reordered it a bit to insert some smaller breathers between the larger volumes in hopes that they'll vos release normally and avoid us running out of space again | 18:58 |
| fungi | but now that i'm fairly sure a vos release is what frees up the additional usage, it's not a big deal to address | 19:00 |
| clarkb | ack | 19:06 |
| fungi | it's working on mirror.centos-stream at the moment, which will likely continue until well after i've checked out for the evening, but i'll check in on it again when i wake up and see how far it's gotten | 19:15 |
| fungi | i'm expecting it'll be the weekend at best before they're all finished though | 19:16 |
| fungi | but i should be able to get the rest of the servers upgraded in the meantime and then do afs02.dfw last | 19:20 |
| clarkb | fungi: we didn't delete the openeuler content yet did we? | 19:39 |
| fungi | not yet, no | 19:39 |
| clarkb | should we make sure that happens to speed things up? | 19:39 |
| clarkb | https://review.opendev.org/c/opendev/system-config/+/960412 just passed and you already +2'd it. Should we go ahead and land that? I doubt anyone will review it until tomorrow | 19:39 |
| fungi | at this point it probably doesn't matter, we can worry about additional cleanup once the upgrades are in the rear-view mirror | 19:39 |
| clarkb | wfm | 19:40 |
| fungi | it would speed up the moves by maybe 6-ish hours, but on the whole it's not that important | 19:41 |
| fungi | there's other stuff we could clean up too, i'm sure | 19:41 |
| opendevreview | Merged opendev/system-config master: Double the total mailman3 uwsgi processes to 4 https://review.opendev.org/c/opendev/system-config/+/960412 | 20:17 |
| clarkb | ok looks like ^ should be deployed now | 20:23 |
| clarkb | I see processes restarted | 20:23 |
| clarkb | still waiting for things to finish restarting and be available though | 20:23 |
| clarkb | fungi: any idea how long it typically takes? | 20:25 |
| clarkb | ok I get a site now but says something went wrong please start mailman-core | 20:27 |
| clarkb | its doing a chown -R /opt/mailman/var in the mailman-core container I think | 20:28 |
| clarkb | ok thats the directory with all the caches and stuff. So it may be slow getting to all the files? | 20:29 |
| clarkb | I see high iowait now | 20:30 |
| clarkb | the chown has been running for about 6 minutes now. This has me wondering if its back to general disk io problems given this behavior | 20:31 |
| clarkb | in theory, with things not fully started up, we're not handling any requests from elsewhere, unless one of hyperkitty or postorius is up and getting hit, and that is slow and slowing down other io? | 20:32 |
| clarkb | strace shows the chown is chowning things so I don't think we've hit some bug or problem. Its just taking a while to run the command against all the files. I will attempt to practice patience | 20:33 |
| clarkb | the expected uwsgi worker count of 4 is present so that is good | 20:33 |
| clarkb | fwiw I think that this behavior is pushing me back to "the disk is really slow" | 20:35 |
| clarkb | and maybe the problem is writes not reads? or random not sequential? | 20:35 |
| clarkb | so I think I may be back to trying to quantify that again once things settle | 20:36 |
| clarkb | ok things did eventually come up after the chown completed | 20:39 |
| clarkb | during the chown there was high iowait. The chowns aren't going to be super heavy on the throughput but are potentially iops limited and random not sequential | 20:39 |
| clarkb | these might be good clues for what we should be trying to test with fio | 20:40 |
| clarkb | things aren't super speedy right now but aren't super slow either | 20:42 |
| fungi | clarkb: oh sorry, was afk for a few but yeah, 10 minutes isn't unusual | 20:43 |
| clarkb | fungi: I think it was closer to 15 but that is good to know | 20:43 |
| clarkb | looks like the entrypoint script for mailman-core does a chown -R of the entire mailman var dir | 20:43 |
| clarkb | which includes a bunch of files from caches to messages | 20:44 |
| clarkb | and while that was happening iowait was very high so maybe my fio test to emulate a database is looking at the wrong profile of io | 20:44 |
| fungi | i guess they assume that will normally be very fast | 20:44 |
| fungi | i mean, i normally expect a recursive chown or chmod to be very fast even when processing lots of files | 20:45 |
| clarkb | yes I would too. Its like getdirents in a loop and then a chown call | 20:46 |
| clarkb | not significant amounts of data but lots of potentially random access? | 20:46 |
| clarkb | let me see about doing a random read test using fio going | 20:47 |
| clarkb | s/rw=read/rw=randread/ on my local machine produces better results than sequential reads. Yay nvme I guess | 20:48 |
| clarkb | actually that's more about solid state than the io interface, but still zoom zoom | 20:48 |
| clarkb | fungi: `read: IOPS=984, BW=3937KiB/s (4031kB/s)(116MiB/30099msec)` on lists running the same command as before but using the randread method | 20:50 |
| clarkb | I think this is the smoking gun | 20:50 |
| clarkb | less than 1k ops and less than 4MiB/s | 20:50 |
| clarkb | that is sloooooowwwww | 20:50 |
| fungi | ah yeah | 20:51 |
| clarkb | I need to do a school run shortly. But maybe the next step is running that same randread test on other disks in the same provider to get more of a baseline. eg is using the root disk always this bad or is this host sad | 20:51 |
| clarkb | `fio --direct=1 --ioengine=libaio --bs=4k --iodepth=256 --runtime=30 --eta-newline=1 --numjobs=4 --time_based --group_reporting --rw=randread --size=200M --name=clarkb-test` this is the command if anyone else wants to try it | 20:51 |
| clarkb | fio will write files to the directory you are currently in of size 200MB and it won't delete them so just keep that in mind | 20:52 |
| clarkb | you don't need root/sudo | 20:52 |
| fungi | might also make sense to do it on the lists server with mailman/mariadb processes stopped | 20:52 |
| clarkb | ya and maybe take several samples over time to see if it is persistent | 20:52 |
| clarkb | not persistent could point at a noisy neighbor etc | 20:53 |
| fungi | it does seem likely, and would explain why we see slow responses sometimes but not others | 20:53 |
| clarkb | I can take another sample when I get back from the school run | 20:54 |
| clarkb | and then look at measuring some other nodes. If you have ideas for other locations to test let me know. | 20:54 |
| clarkb | `read: IOPS=687k, BW=2682MiB/s (2812MB/s)(26.2GiB/10002msec)` is what I got on my local disk over 10 seconds (not 30) | 20:55 |
| clarkb | just to put it in comparison | 20:55 |
| fungi | mirror.dfw.rax maybe? its rootfs isn't used for much anything, all the i/o is on a separate block device | 20:56 |
| clarkb | ++ | 20:56 |
| clarkb | I think apache logs do go to the root disk but yes the majority of io is on dedicated devices | 20:56 |
| fungi | ah, yeah /var/log is on the rootfs but that's about it | 20:57 |
| clarkb | ok I'm off back soon | 21:02 |
| fungi | have fun storming the castle! | 21:03 |
| fungi | preparatory ifupdown and grub configs rebooted fine on afs01.ord in jammy, do-release-upgrade is running there now | 21:04 |
| fungi | hopefully this one will come back up on its own afterward, and also have the correct cpu count | 21:05 |
| fungi | then i'll get our openafs ppa readded and those packages reinstalled | 21:06 |
| clarkb | just got back new run data for randread: `read: IOPS=869, BW=3477KiB/s (3561kB/s)(102MiB/30102msec)` and read (sequential): `read: IOPS=22.0k, BW=85.8MiB/s (90.0MB/s)(2579MiB/30052msec)` | 21:42 |
| clarkb | now to install the tool on the mirror node and test there | 21:42 |
| fungi | afs01.ord did come back up on noble with no additional work, and lists 8 processors | 21:43 |
| clarkb | excellent | 21:46 |
| clarkb | on mirror.dfw randread: `read: IOPS=20.4k, BW=79.8MiB/s (83.7MB/s)(2395MiB/30004msec)` read (sequential): `read: IOPS=140k, BW=548MiB/s (575MB/s)(16.1GiB/30004msec)` | 21:47 |
| clarkb | it is possible that other system activity accounts for the difference however I suspect something else is going on | 21:47 |
| clarkb | the difference in what is otherwise expected to be very similar hardware is quite large | 21:48 |
| clarkb | so maybe we need to start thinking about picking a time to shut down services and measure with as little noise as we can manage, and if things are still sad then consider a volume or replacing the host or asking the cloud if there is something wrong? | 21:49 |
| fungi | we could also consider moving it to a larger flavor and upgrading to noble | 21:53 |
| clarkb | yes, but if that doesn't change the root device location then it probably won't help | 21:54 |
| fungi | on a new server i mean, not in-place | 21:54 |
| clarkb | ya we could go through the fun of a new ip again. It wasn't that painful other than windriver subscribing to that one list causing problems | 21:54 |
| clarkb | I do wonder if this is something the cloud would be interested in simply because it might indicate a problem they should address | 21:55 |
| fungi | though a resize in rackspace would reschedule it to a new host anyway, the downtime is unpredictably long | 21:55 |
| clarkb | and with the very large root disk on this node probably longer than usual | 21:55 |
| clarkb | (also that's another difference with this node, we picked one with a large root device to match the old server iirc) | 21:56 |
| fungi | yeah | 21:56 |
| clarkb | but I think I'm reasonably convinced now that something is up with general disk io particularly with random access | 21:56 |
| clarkb | and then that seems to impact our ability to serve content at a reliable response time | 21:56 |
| fungi | okay, the missing openafs packages have been reinstalled from our ppa and afs01.ord rebooted again. tomorrow i'll try to get the afsdb and kdc servers upgraded similarly | 22:02 |
| fungi | then afs02.dfw will happen once all the rw volume moves are done | 22:02 |
| fungi | which will probably be early next week | 22:02 |
| clarkb | sounds great | 22:03 |
| clarkb | for lists, any thoughts on when might be a good time to take some samples with no services running? | 22:04 |
| fungi | any time really, but friday might be lower-demand | 22:04 |
| fungi | however, predicting the noisy neighbor bursts may prove challenging, if that's what's going on | 22:05 |
| clarkb | ok lets plan for friday then and based on what we learn maybe we file a ticket to see what rax has to say about it but also maybe start planning a replacement or volume attachment | 22:05 |
| clarkb | ya though so far my small sample set over a couple of hours seems to be pretty consistent | 22:05 |
| clarkb | `read: IOPS=1022, BW=4091KiB/s (4189kB/s)(120MiB/30120msec)` from just now | 22:06 |
| fungi | moving it to a cinder volume would be the easiest workaround, and shouldn't require much actual downtime | 22:06 |
| clarkb | right we can in theory rsync most of the data to reduce the delta then shut things down and rsync again then start stuff up minimizing the downtime | 22:06 |
| clarkb | that doesn't help things that would stay on the root disk though but maybe that is good enough | 22:07 |
| fungi | archive files and the database mainly | 22:13 |
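A minimal sketch of the rsync-then-cutover approach being described, assuming the mailman data lives under /var/lib/mailman and the new cinder volume is mounted at /mnt/newvol (both paths are illustrative, not the real layout):

```shell
# First pass while services are still up, to copy the bulk of the data
rsync -a /var/lib/mailman/ /mnt/newvol/
# Stop the containers, then a short second pass to pick up the remaining delta
docker-compose down          # run from the compose directory on the server
rsync -a --delete /var/lib/mailman/ /mnt/newvol/
# Remount the new volume at the data path (update fstab), then start back up
docker-compose up -d
```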
| clarkb | very hand wavy so don't take too seriously: 15k spinning disks have a random read iops limit of ~200 iops. So either we're on an array of disks and our data is distributed amongst them or we're on ssds | 22:18 |
| clarkb | and if we are on ssds I believe that read performance can degrade over time as the drive consumes its extra blocks and has to distribute things less efficiently? | 22:19 |
| clarkb | so maybe this is a signal that we're on ssds that need some care and feeding? | 22:19 |
| fungi | needs a trim ;) | 22:21 |
| fungi | pun intended | 22:21 |
| clarkb | it wouldn't surprise me | 22:21 |
| clarkb | if that is legitimately the solution here | 22:21 |
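If trim really is the issue, a hedged sketch of where to start looking; whether discard actually propagates to the provider's backing storage on these instances is an open question:

```shell
# See whether the block devices advertise discard support at all
lsblk --discard
# One-off trim of every mounted filesystem that supports it
sudo fstrim -av
# Or enable the periodic timer so it happens on a schedule
sudo systemctl enable --now fstrim.timer
```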