jrosser | noonedeadpunk: disabling tempest means we don’t get a public network in AIO | 07:20 |
jrosser | that sounds like a good use for openstack_resources - but interesting question if creating that network should be in the tempest role, or somewhere else | 07:22 |
noonedeadpunk | jrosser: yeah, I'm due to create a playbook for using the role independently | 08:17 |
noonedeadpunk | haven't looked there yet though | 08:17 |
noonedeadpunk | I would leave tempest "as is", or well, as is but with https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/889741 | 08:18 |
noonedeadpunk | not to break expectations too much | 08:18 |
noonedeadpunk | As there are quite some assumptions in tempest that are not worth moving outside of it, I guess | 08:18 |
noonedeadpunk | I'm also thinking about whether we can indeed improve the image download part in some follow-up | 08:20 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: WIP - Bootstrapping playbook https://review.opendev.org/c/openstack/openstack-ansible-ops/+/902178 | 09:52 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add role to install and run sonobouy k8s validation tests https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906054 | 09:52 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add playbook to run functional test of magnum capi driver https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906361 | 09:52 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add hook playbook install and test magnum capi driver https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906363 | 09:52 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: WIP - Bootstrapping playbook https://review.opendev.org/c/openstack/openstack-ansible-ops/+/902178 | 09:54 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add role to install and run sonobouy k8s validation tests https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906054 | 09:54 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add playbook to run functional test of magnum capi driver https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906361 | 09:54 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add hook playbook install and test magnum capi driver https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906363 | 09:54 |
andrewbonney | noonedeadpunk: re: proxy protocol #906447, we haven't needed to add the VIP to the allowed addresses, but I note that Zed doesn't carry the management_address/ansible_host patch. We originally had to override it for that reason | 10:40 |
noonedeadpunk | ok, so that's a valid patch in fact? | 10:45 |
andrewbonney | I'm not sure. In that case I'd have thought you'd still want the host management addresses rather than the VIP. It may be the patch covers a different case, but it's not one I've encountered | 10:49 |
noonedeadpunk | yeah, I actually thought the same. And there were a couple of cases already where haproxy used the VIP for connections rather than its mgmt ip | 11:01 |
opendevreview | Merged openstack/openstack-ansible-ceph_client master: Align extra conf files mode https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/906030 | 13:01 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_tempest master: Add variable to prevent tempest installation https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/906641 | 13:03 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: WIP - Bootstrapping playbook https://review.opendev.org/c/openstack/openstack-ansible-ops/+/902178 | 13:03 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add role to install and run sonobouy k8s validation tests https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906054 | 13:04 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add playbook to run functional test of magnum capi driver https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906361 | 13:04 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add hook playbook install and test magnum capi driver https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906363 | 13:04 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add job to test Vexxhost cluster API driver https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/905199 | 13:04 |
opendevreview | Merged openstack/openstack-ansible-os_glance master: Fix iteration over backends config https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/906048 | 13:15 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_glance stable/2023.2: Fix iteration over backends config https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/906491 | 13:22 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-ceph_client stable/2023.2: Align extra conf files mode https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/906492 | 13:23 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_tempest master: Add variable to prevent tempest installation https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/906641 | 16:11 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: WIP - Bootstrapping playbook https://review.opendev.org/c/openstack/openstack-ansible-ops/+/902178 | 16:12 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add role to install and run sonobouy k8s validation tests https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906054 | 16:12 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add playbook to run functional test of magnum capi driver https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906361 | 16:12 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Add hook playbook install and test magnum capi driver https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906363 | 16:12 |
admin1 | what is the oldest osa i can install ? | 16:38 |
admin1 | that you guys know of still works ( due to repos and packages still being available ) | 16:39 |
mgariepy | what version do you need? | 16:42 |
admin1 | ocata | 16:44 |
admin1 | though i feel the rbac rules have been the same in all openstack versions until the more recent ones .. this is to test some rbac rules | 16:45 |
mgariepy | hmm. that would be a challenge i guess. | 16:45 |
jrosser | you would really have to try it, and fix it as you go | 16:46 |
mgariepy | rabbitmq and galera will probably need some work. | 16:46 |
admin1 | this is for rbac rules testing especially on keystone | 16:46 |
admin1 | so i don't even need nova or neutron to be up | 16:47 |
admin1 | on that note, what is the oldest we know might work out of the box | 16:47 |
jrosser | i don't think ocata supports a metal deploy either, which might be the most likely to work | 16:48 |
jrosser | though i think that might not be so true for older releases | 16:49 |
jrosser | lxc images are probably going to need some effort/fixing too for that old | 16:49 |
jrosser | wallaby CI looks broken | 16:51 |
jrosser | but xena looks ok https://zuul.opendev.org/t/openstack/builds?project=openstack%2Fopenstack-ansible&branch=stable%2Fxena&pipeline=periodic&skip=0 | 16:51 |
admin1 | thanks jrosser for the link .. now I know how to check :) | 16:55 |
admin1 | in which tag might we be able to use the magnum capi driver ? | 17:13 |
admin1 | then i have to upgrade a lot of clusters to that version .. | 17:13 |
admin1 | and what versions can cherry-pick it .. | 17:13 |
jrosser | admin1: i have it running in a lab running antelope | 17:18 |
jrosser | but i am happy to apply *tons* of patches there to make it work | 17:19 |
jrosser | if you want an "out of the box" experience i think for OSA it will be caracal /2024.1 | 17:21 |
jrosser | having said that it would really be very nice if there was some testing of this beyond myself beforehand | 17:21 |
admin1 | i have around 10 diff versions of clusters on diff tags where i can apply to test ( in prod ) | 17:31 |
admin1 | i have 28.0.1 running where i can test | 17:32 |
admin1 | i need some guidance on the patches to apply/cherry-pick and how to apply them | 17:33 |
jrosser | admin1: it all starts from here https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/905199 | 17:38 |
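(For context, a minimal sketch of what consuming that work on an existing deployment might look like, assuming the os_magnum role exposes the usual OSA `*_user_pip_packages` hook — both the variable name and the unpinned package are assumptions to verify against your tag and the review above:)

```yaml
# user_variables.yml — illustrative only, not the reviewed change itself;
# magnum_user_pip_packages is assumed to follow the OSA *_user_pip_packages pattern
magnum_user_pip_packages:
  - magnum-cluster-api   # Vexxhost cluster API driver package on PyPI, unpinned here
```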
spatel | I am running capi in production with kolla-ansible and recently I took trove to production. it's fun.. | 18:00 |
spatel | OSA is a little complicated when I think of doing trove and capi stuff | 18:01 |
jrosser | well I take the long path | 18:01 |
jrosser | it is not comparing equivalent architectures really | 18:02 |
jrosser | and to make a useful CI job is really quite large effort | 18:04 |
noonedeadpunk | not sure what is hard in trove with osa though | 18:06 |
noonedeadpunk | like it's running in background for years now here | 18:06 |
noonedeadpunk | BUT, trove is just broken when it comes to clustering | 18:07 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: [doc] Slighly simplify primary node redeployment https://review.opendev.org/c/openstack/openstack-ansible/+/906750 | 18:07 |
noonedeadpunk | it's known not to work | 18:07 |
noonedeadpunk | spatel: I assume you're having separate rabbitmq cluster for trove? | 18:07 |
spatel | No | 18:10 |
spatel | I have everything running on same rabbitMQ for all openstack components.. | 18:10 |
spatel | I did some networking magic so trove-guest-agent can talk to the controller node.. | 18:11 |
spatel | :) | 18:11 |
noonedeadpunk | you know it's not really safe, do you? :) | 18:12 |
noonedeadpunk | in terms - given you can escape docker (which you can from time to time), with this rabbitmq cluster you can actually intercept any messages | 18:13 |
jrosser | that is 8-O | 18:13 |
noonedeadpunk | And there were quite nice showcases on how to gain admin with that back in the day | 18:13 |
noonedeadpunk | like all you need is to have rabbitmq access | 18:14 |
noonedeadpunk | and you pass that basically to tenant vms | 18:14 |
jrosser | this is kind of what I mean about architecture | 18:14 |
noonedeadpunk | yeah | 18:14 |
jrosser | you can make anything work in a simple environment like devstack | 18:14 |
jrosser | or if you just bridge/route “all the networks” | 18:15 |
noonedeadpunk | and this is totally fine for a small basement project to run for years | 18:15 |
jrosser | but that is a truly gigantic attack / risk surface | 18:15 |
jrosser | the reason things are sometimes “difficult” with OSA is that the architecture aims to prevent bad things happening, by design | 18:16 |
spatel | Why and how would someone gain access ? | 18:16 |
jrosser | but sadly some services like trove are not built really with real environments in mind | 18:16 |
noonedeadpunk | why is really a good question :D | 18:17 |
spatel | The Trove VM is going to run in the service project and nobody can access the VM. | 18:17 |
spatel | The end user only gets access to the DB. that person has no access to the VM or SSH or anything | 18:17 |
noonedeadpunk | You give rabbitmq access to this VM | 18:17 |
noonedeadpunk | And kinda the access to it resides inside the VM | 18:18 |
noonedeadpunk | And you give some kind of access to the VM to users | 18:18 |
spatel | How will they get access to rabbitMQ, and from where? are you saying from the Trove VM ? | 18:18 |
noonedeadpunk | And in some DBs you can run commands with mysql queries | 18:19 |
noonedeadpunk | https://paste.openstack.org/show/bSzr1UA2aa77dXAmZjYx/ | 18:19 |
spatel | You are saying the mysql privilege escalation method.. (someone uses a mysql query to run shell-level commands, right? ) | 18:20 |
noonedeadpunk | So it's just a matter of escaping the docker container :D | 18:20 |
noonedeadpunk | Yeah, and you can get root to the DB in trove | 18:20 |
noonedeadpunk | whatever | 18:20 |
noonedeadpunk | Like I still think we have that for a good reason: https://docs.openstack.org/openstack-ansible-os_trove/latest/configure-trove.html#use-stand-alone-rabbitmq | 18:21 |
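(A minimal sketch of what the stand-alone RabbitMQ wiring from that doc looks like in user_variables, assuming the usual OSA `*_oslomsg_rpc_*` variable convention; the address and secret below are placeholders:)

```yaml
# user_variables.yml — point trove's RPC at a dedicated, trove-only rabbit
# so guest agents never see the cluster the rest of openstack uses
trove_oslomsg_rpc_transport: rabbit
trove_oslomsg_rpc_servers: 172.29.236.120   # placeholder: dedicated rabbit address
trove_oslomsg_rpc_port: 5671
trove_oslomsg_rpc_use_ssl: True
trove_oslomsg_rpc_userid: trove
trove_oslomsg_rpc_password: "secrete"       # placeholder
trove_oslomsg_rpc_vhost: /trove
```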
spatel | Same goes to Octavia :) | 18:21 |
noonedeadpunk | no, not at all | 18:21 |
spatel | someone can exploit haproxy, gain shell access, and get in using lb-mgmt-net :) | 18:21 |
noonedeadpunk | you don't have rabbit talking on this network | 18:21 |
noonedeadpunk | and through rabbit you can intercept/inject whatever you want | 18:22 |
noonedeadpunk | It's like admin access to your cloud | 18:22 |
spatel | Not rabbitMQ, but a person can gain access to the system or exploit the octavia-manager service... | 18:22 |
noonedeadpunk | how? | 18:22 |
spatel | Smart dude can exploit anything :) | 18:22 |
noonedeadpunk | meh | 18:22 |
jrosser | not really | 18:22 |
johnsom | Spatel no, haproxy is in a network namespace that has no access to the lb-mgmt-net | 18:23 |
noonedeadpunk | mTLS is there for a reason :) | 18:23 |
jrosser | connection is octavia to amphora | 18:23 |
jrosser | other way round -> good design | 18:23 |
jrosser | trove is all backwards | 18:23 |
noonedeadpunk | and that ^ | 18:23 |
spatel | indeed.. | 18:23 |
spatel | but let's say we deploy a dedicated rabbitMQ, then we need to wire it up with all the other trove components | 18:24 |
johnsom | Trove is a very different design with issues | 18:24 |
spatel | johnsom welcome john (the octavia keyword must have brought you here ) | 18:24 |
noonedeadpunk | yes. but at least you're not giving control to the rest of your openstack infrastructure - it's a more or less isolated network with an isolated, empty rabbit | 18:26 |
spatel | I don't understand how an isolated rabbitMQ would work.. | 18:27 |
spatel | Trove components and trove-guest-agent will talk to the isolated rabbitMQ, great!! but how will other openstack components talk to trove ? | 18:28 |
jrosser | it’s a separate cluster that you *do* allow the trove parts to see | 18:28 |
jrosser | so new interface on trove container and extra rabbit container, to put this in an OSA context | 18:28 |
jrosser | and then that is a provider network | 18:29 |
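(A sketch of that provider network in openstack_user_config.yml terms — the bridge, interface and group names here are illustrative, not a tested recipe:)

```yaml
# openstack_user_config.yml (under global_overrides / provider_networks) —
# extra network so trove services and the dedicated rabbit are reachable
# from the guest VMs while staying isolated from br-mgmt
- network:
    container_bridge: br-dbaas     # placeholder bridge name
    container_type: veth
    container_interface: eth14     # placeholder container interface
    ip_from_q: dbaas
    type: raw
    group_binds:
      - trove_all                  # assumption: trove service containers
      - rabbitmq_all               # assumption: the trove-only rabbit containers
```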
spatel | Let me create an isolated rabbitMQ with trove and give it a try.. I am very interested in that design | 18:29 |
jrosser | this all started with “architecture” for capi | 18:29 |
noonedeadpunk | They don't need to | 18:29 |
spatel | noonedeadpunk ? | 18:30 |
noonedeadpunk | Services talk between each other only via the API | 18:30 |
noonedeadpunk | Rabbit is used only inside a service | 18:30 |
noonedeadpunk | ALSO | 18:30 |
jrosser | it’s totally possible to just throw a k8s up randomly and make it work as quick as you can | 18:30 |
spatel | Really? trove doesn't use RabbitMQ to talk to openstack components.. | 18:30 |
noonedeadpunk | IIRC Trove does have a setting for using a different messaging url for its VMs | 18:30 |
noonedeadpunk | Nothing uses rabbitmq cross-component | 18:30 |
noonedeadpunk | Cross-component it's only the API | 18:30 |
noonedeadpunk | Nova -> Cinder = API, Octavia -> Nova = API, etc | 18:31 |
noonedeadpunk | So you can (and maybe even should) have a rabbitmq cluster per service | 18:31 |
noonedeadpunk | Or at least I guess that's how helm does it | 18:31 |
noonedeadpunk | or anyway - quite some ppl just have a small rabbitmq cluster per service | 18:32 |
noonedeadpunk | Like Mirantis had most of their deployments this way | 18:32 |
noonedeadpunk | Rabbit fails - scrap it, just 1 service affected | 18:32 |
spatel | I can create a single node rabbitMQ for trove running on a small VM :) | 18:32 |
noonedeadpunk | drop/create container and you're good to go again | 18:32 |
spatel | it's not going to kill Trove | 18:32 |
spatel | Yes.. let me try that method and see.. | 18:33 |
spatel | Its a good point.. | 18:33 |
noonedeadpunk | I guess we're just lazy and do one big cluster for rabbit by default. | 18:35 |
noonedeadpunk | But it's totally possible to do plenty of small ones as well | 18:35 |
spatel | Do you guys run a 3 node cluster for rabbit or small pieces | 18:47 |
noonedeadpunk | We run just 1 big one, but we're kinda lazy to change that | 18:52 |
spatel | haha | 19:03 |
noonedeadpunk | I mean - that would involve training, change of monitoring, rewriting plenty of docs... meh | 19:06 |
noonedeadpunk | BUT, for Trove we have separate cluster | 19:06 |
spatel | I will deploy or try a separate cluster | 19:39 |
spatel | noonedeadpunk around? | 20:19 |
spatel | I have a question related to the Ceph S3 implementation | 20:20 |
spatel | How do you expose S3 to the public network? | 20:20 |
noonedeadpunk | through haproxy? | 20:20 |
spatel | Are you using some kind of LB ? | 20:20 |
noonedeadpunk | though different one | 20:20 |
admin1 | via haproxy | 20:20 |
admin1 | i use it via haproxy overrides | 20:21 |
noonedeadpunk | (not same hardware as for control planes) | 20:21 |
admin1 | yes | 20:21 |
admin1 | its totally external ceph | 20:21 |
noonedeadpunk | but using same role | 20:21 |
admin1 | the only thing that ties together is the haproxy and keystone | 20:21 |
spatel | Do you run the rgw service on dedicated nodes or on shared nodes | 20:21 |
spatel | I have dedicated Ceph cluster (not attached to openstack) | 20:21 |
spatel | I am thinking to run rgw on same mon nodes (I have dedicated 3 nodes for mon) | 20:25 |
noonedeadpunk | well.... that depends on traffic | 20:29 |
noonedeadpunk | I think we run rgw together with haproxy and mds | 20:29 |
noonedeadpunk | but not with mon/mgr | 20:29 |
spatel | noonedeadpunk got it.. | 20:31 |
spatel | noonedeadpunk how do you wire it up with keystone ? | 20:31 |
noonedeadpunk | through public network | 20:32 |
noonedeadpunk | or well "public" | 20:32 |
spatel | I meant how does ceph talk to keystone | 20:32 |
spatel | Does ceph have an option to integrate with keystone? | 20:32 |
noonedeadpunk | yes, sure | 20:32 |
jrosser | spatel: you can study the osa AIO for all of this info :) | 20:33 |
noonedeadpunk | https://docs.ceph.com/en/latest/radosgw/keystone/ | 20:33 |
spatel | jrosser ofc.. just trying to get info as fast as possible :) | 20:33 |
jrosser | we run rgw on the mgmt network and have extra haproxy | 20:34 |
jrosser | then rgw>keystone on the mgmt network/internal vip | 20:34 |
spatel | I am deploying rgw for the first time so I'm not sure how to architect it; also I have no idea how much traffic it will bring | 20:34 |
jrosser | but loads of ways to do it really | 20:34 |
spatel | Got it. in-short make them reachable | 20:35 |
spatel | does horizon have a GUI or something to create an rgw account / bucket etc.. | 20:35 |
jrosser | you need to run the swift compatibility to make that work | 20:36 |
spatel | Now swift comes into the picture | 20:37 |
jrosser | so openstack cli / horizon uses Swift API against rgw to do all those things | 20:37 |
jrosser | just to be clear, you don’t need the openstack swift service | 20:37 |
spatel | Hmmm | 20:37 |
spatel | okie!! | 20:37 |
jrosser | you expose a swift compatible api from rgw, as well as s3 | 20:37 |
spatel | let me google it and understand the workflow.. | 20:38 |
jrosser | we did s3 static sites with it recently and that was interesting | 20:38 |
spatel | static sites? | 20:39 |
admin1 | you set up ceph to use keystone for auth .. and then you add the object storage service to the endpoints .. then it will appear as object storage in horizon .. and then you can use the APIs .. so ceph is completely independent of osa , only haproxy and keystone and horizon are used | 20:39 |
admin1 | static sites work good :) | 20:39 |
admin1 | up to the zed release, i used to have my endpoints as id.domain.com, s3.domain.com etc .. then after zed, something changed in the way we use overrides , so not anymore in new ones | 20:40 |
jrosser | need a bit of thinking about whether to enable the multi tenancy setting in rgw | 20:40 |
jrosser | that made static sites a little tricky for us but we made it work eventually | 20:41 |
admin1 | spatel, be prepared to get logged out of horizon countless times until you get the keystone settings correct :D | 20:41 |
spatel | sorry but what does static sites mean? | 20:41 |
admin1 | it means you run your whole website ( css, html, js, SPA) direct from s3 | 20:42 |
jrosser | https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html | 20:42 |
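(On the rgw side, static website hosting boils down to a couple of standard options plus a second, website-specific DNS name; a sketch in ceph-ansible `ceph_conf_overrides` terms, with placeholder domains and instance name — on a cephadm cluster the same options can be applied with `ceph config set client.rgw.<id> ...`:)

```yaml
ceph_conf_overrides:
  "client.rgw.rgw0":                          # instance name is a placeholder
    rgw_enable_static_website: true
    rgw_dns_name: s3.example.com              # plain S3 API endpoint domain
    rgw_dns_s3website_name: web.example.com   # website endpoint domain
```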
spatel | oh | 20:42 |
spatel | admin1 what do you mean countless logouts? is that because of the integration with ceph? | 20:43 |
admin1 | well, when you do the keystone setting in ceph , ceph will authenticate via keystone .. so if any setting is not correct, when you click the object storage link in horizon it will log you out | 20:52 |
admin1 | you will know it when you see it :) | 20:52 |
spatel | Do I need to integrate entire ceph with keystone or just rgw service? | 20:53 |
spatel | I am running cephadm | 20:53 |
admin1 | i have one all-in-one cluster .. 3 nodes .. where each node is ceph everything ( mon, osd, mgr ) and openstack everything ( compute, controller, network ) | 20:53 |
spatel | do you have any doc or blog for rgw integration with keystone | 20:53 |
admin1 | no but there is some youtube video | 20:54 |
jrosser | the osa code does this | 20:54 |
admin1 | it does ? | 20:54 |
admin1 | even if the ceph is external ? | 20:54 |
jrosser | yes | 20:54 |
admin1 | it never did in mine | 20:54 |
admin1 | and i have like half a dozen different osa + ceph running | 20:54 |
admin1 | what setting does it | 20:54 |
admin1 | how can it go into cephadm docker container and make it work ? | 20:55 |
admin1 | new ones = cephadm, where the docker does not allow ssh | 20:55 |
spatel | I have dedicated ceph cluster running cephadm :( | 20:55 |
admin1 | spatel, i have multiple of those .. dedi ceph clusters via cephadm | 20:56 |
admin1 | but it works | 20:56 |
jrosser | https://github.com/openstack/openstack-ansible/blob/master/playbooks/ceph-rgw-keystone-setup.yml | 20:56 |
admin1 | jrosser, this does not touch cephadm | 20:56 |
jrosser | osa does not understand cephadm, never has | 20:56 |
spatel | :( | 20:58 |
jrosser | I think I gave you the keystone setup you need | 20:58 |
spatel | cephadm is very cool and I am doing all my deployments using cephadm | 20:58 |
jrosser | not the rgw settings for keystone | 20:58 |
jrosser | there are parts on both sides | 20:58 |
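(A sketch of the rgw side, the counterpart to what ceph-rgw-keystone-setup.yml creates in keystone. The `rgw_keystone_*` options are standard Ceph settings, while the URL, password and instance name are placeholders; on a cephadm cluster the same options go in with `ceph config set` rather than a ceph-ansible override:)

```yaml
ceph_conf_overrides:
  "client.rgw.rgw0":                                  # instance name is a placeholder
    rgw_keystone_url: "https://172.29.236.101:5000"   # placeholder: internal keystone VIP
    rgw_keystone_api_version: 3
    rgw_keystone_admin_user: radosgw
    rgw_keystone_admin_password: "secrete"            # placeholder
    rgw_keystone_admin_project: service
    rgw_keystone_admin_domain: default
    rgw_keystone_accepted_roles: "member,admin"
    rgw_s3_auth_use_keystone: true
    rgw_swift_account_in_url: true   # ties into the multi-tenancy point above
```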
spatel | I can take example and try to integrate with it | 20:58 |
admin1 | spatel => https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/CRYFR777VKQLYLXPSNJVBOKZRGXLXFV6/ | 20:59 |
admin1 | i use this as reference | 20:59 |
admin1 | in the new haproxy .. 28.0.1 for example, how do I override the endpoint such that keystone is on https://id.domain.com/ ? | 21:01 |
admin1 | old method was => haproxy_horizon_service_overrides: haproxy_frontend_raw: - acl cloud_keystone hdr(host) -i id.domain.com .. - use_backend keystone_service-back if cloud_keystone | 21:02 |
admin1 | this method has not worked since zed if i recall | 21:03 |
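(admin1's flattened override from above, reconstructed as YAML for readability — this is the pre-Zed shape; whether an equivalent still applies after the haproxy role refactor is exactly the open question here:)

```yaml
# route requests for id.domain.com on the horizon frontend to keystone
haproxy_horizon_service_overrides:
  haproxy_frontend_raw:
    - acl cloud_keystone hdr(host) -i id.domain.com
    - use_backend keystone_service-back if cloud_keystone
```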
spatel | admin1 This is helpful | 21:04 |
spatel | specially for cephadm | 21:04 |
admin1 | yes .. all new ones are cephadm based | 21:11 |
admin1 | spatel, do you have a blog for trove ? | 21:33 |
admin1 | is there postgresql ? | 21:33 |
admin1 | postgres, rabbit and redis would supplement a k8s cluster nicely | 21:34 |
spatel | Trove supports all kinds of DBs.. mysql, postgres, mongodb, redis etc... | 21:34 |
spatel | I have a plan to create a trove blog.. but I want to try one more experiment first, which is the dedicated rabbitMQ node for trove as per today's discussion | 21:35 |
spatel | calling you.. hope it's not late there | 21:36 |
admin1 | i am in another call | 21:36 |
admin1 | give me 5 mins | 21:37 |
admin1 | will call back | 21:37 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-plugins master: Add openstack_resources role skeleton https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/878794 | 22:02 |