NeilHanlon | o/ good morning folks | 14:47 |
---|---|---|
NeilHanlon | reminder: meeting in ~13 minutes at the top of the hour | 14:47 |
jrosser | NeilHanlon: is there an official way to get a newer kernel on Rocky (like Ubuntu has HWE kernels, for example)? | 14:51 |
NeilHanlon | yep! we have a relatively new release of a kernel-mainline package, which you can access by installing `rocky-release-kernel` | 14:51 |
NeilHanlon | plan is to have mainline and LTS options | 14:52 |
NeilHanlon | (some notes here: https://sig-kernel.rocky.page/meetings/2023-11-30/#discussions) | 14:52 |
jrosser | ahha interesting | 14:53 |
jrosser | we needed >= 5.19 | 14:53 |
NeilHanlon | we're providing 6.5.12 at the moment, will be bumping that to 6.6 soon-ish | 14:54 |
NeilHanlon | there is a bit of a problem when it comes to 6.7, but.. that will be a problem for almost everyone I think -- it's related to SecureBoot and NX support | 14:54 |
NeilHanlon | i forget which LTS branch we decided to go with | 14:55 |
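(For reference, a minimal sketch of the install flow described above, using the package names mentioned in the channel; check the SIG Kernel notes linked above for the current instructions.)

```shell
# Enable the Rocky Kernel SIG repository, then pull in the mainline kernel
# package it provides (names as mentioned above -- verify against
# https://sig-kernel.rocky.page/ before relying on them)
sudo dnf install -y rocky-release-kernel
sudo dnf install -y kernel-mainline
sudo reboot
uname -r    # should now report the 6.x mainline kernel
```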
jrosser | it turns out that everyone who ever enabled kTLS on linux and wrote a fancy blog about achieving hardware TLS offload and zero-copy forgot to actually measure the before/after memory bandwidth | 14:57 |
jrosser | you need a new-ish kernel and a 3-week-old (as of today) openssl for all of that to actually work | 14:59 |
NeilHanlon | ouch. | 15:00 |
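(A rough sketch of how one might confirm kTLS is actually engaged before trusting offload benchmarks; assumes a 5.19+ kernel built with CONFIG_TLS, and uses the standard kernel TLS MIB counters.)

```shell
# Kernel-side TLS must be available and loaded
uname -r
sudo modprobe tls
lsmod | grep '^tls'

# Kernel TLS MIB counters: TlsTxSw/TlsRxSw count software kTLS records,
# TlsTxDevice/TlsRxDevice count NIC-offloaded ones. If these stay at zero
# while traffic flows, the "offload" is not actually happening.
cat /proc/net/tls_stat
```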
NeilHanlon | #startmeeting openstack_ansible_meeting | 15:01 |
opendevmeet | Meeting started Tue Dec 19 15:01:05 2023 UTC and is due to finish in 60 minutes. The chair is NeilHanlon. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:01 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:01 |
opendevmeet | The meeting name has been set to 'openstack_ansible_meeting' | 15:01 |
NeilHanlon | #topic rollcall | 15:01 |
NeilHanlon | g'morning again, folks :) | 15:01 |
jrosser | o/ hello | 15:05 |
NeilHanlon | #topic bugs | 15:06 |
NeilHanlon | nothing new, i don't think. We still have https://bugs.launchpad.net/openstack-ansible/+bug/2046172 which I need to look into more | 15:06 |
NeilHanlon | there is also https://bugs.launchpad.net/openstack-ansible/+bug/2046223 -- which we were able to fix last week | 15:07 |
NeilHanlon | also of course, we released 2023.2, so really great work everyone on that. again, my apologies for the rocky gates; I will keep a better handle on this going forward. | 15:08 |
damiandabrowski | hi! | 15:08 |
NeilHanlon | hi damiandabrowski :) | 15:08 |
NeilHanlon | anyone have anything for bugs? | 15:08 |
jrosser | so we need to merge https://review.opendev.org/c/openstack/openstack-ansible/+/903545 next for the haproxy fix | 15:08 |
NeilHanlon | ah yes, was just looking into if we'd backported it yet | 15:09 |
NeilHanlon | #topic office hours | 15:11 |
damiandabrowski | In a week or two I'll try to implement OVN on our multi-AZ dev environment | 15:14 |
damiandabrowski | so I'll have a chance to test https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/858271 :D | 15:14 |
jrosser | we have been doing a ton of work on magnum-cluster-api | 15:15 |
jrosser | unfortunately, it being an out-of-tree driver has made it very difficult for me to get any of this merged | 15:16 |
NeilHanlon | that's "fun" | 15:17 |
jrosser | but it is pretty much ready to be tested https://review.opendev.org/c/openstack/openstack-ansible/+/893240 | 15:17 |
NeilHanlon | nice work, i'll take a look at the change set | 15:18 |
jrosser | there was a long discussion here yesterday with spatel to help with the same thing in a kolla deploy | 15:18 |
jrosser | and lots on the ML in general too | 15:18 |
NeilHanlon | yeah i had seen some of that fly by | 15:19 |
jrosser | so, imho, this could do with a higher profile | 15:19 |
NeilHanlon | yes, there definitely does seem to be a lot of interest in it | 15:19 |
NeilHanlon | thanks everyone for coming - a reminder we have cancelled the next couple of meetings. The next meeting will be Tuesday, January 9th. | 15:38 |
NeilHanlon | Wishing you all the best in this festive season :) | 15:38 |
NeilHanlon | #endmeeting | 15:38 |
opendevmeet | Meeting ended Tue Dec 19 15:38:48 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:38 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-12-19-15.01.html | 15:38 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-12-19-15.01.txt | 15:38 |
opendevmeet | Log: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-12-19-15.01.log.html | 15:38 |
opendevreview | Merged openstack/openstack-ansible stable/2023.2: Fix http-check ordering for services https://review.opendev.org/c/openstack/openstack-ansible/+/903545 | 17:33 |
jrosser | spatel: did you make your magnum stuff work in the end? | 17:46 |
spatel | I am still dealing with some public network issue.. | 17:46 |
spatel | I will be sure to let you know as soon as it works | 17:47 |
spatel | jrosser Hey! how do I tell the CAPI control plane to use the publicURL endpoint? | 19:31 |
spatel | who passes the info to CAPI about what endpoint to use? | 19:31 |
jrosser | complicated | 19:31 |
jrosser | because there are several things that talk to endpoints | 19:32 |
spatel | I have noticed that CAPI is using the private endpoint to talk to openstack | 19:32 |
jrosser | what though? | 19:32 |
jrosser | magnum service? | 19:33 |
spatel | My CAPI k8s is running on a public node, and openstack has both public and private IPs. | 19:33 |
spatel | In the magnum logs all I am seeing is private endpoint info | 19:33 |
jrosser | adjust the clients in magnum.conf | 19:34 |
spatel | that means magnum is sending private endpoint data to CAPI, and CAPI is trying to talk to the private endpoint and failing (because it is located on the public network) | 19:34 |
jrosser | you are not being very specific unfortunately | 19:35 |
jrosser | there is stuff done by magnum service to the openstack endpoint | 19:35 |
jrosser | also by the controlplane k8s cluster | 19:35 |
jrosser | and also by the workload cluster | 19:36 |
spatel | this whole thing is so confusing about public and private | 19:36 |
jrosser | well, perhaps | 19:36 |
spatel | the doc just says to keep the control plane on the same LAN where the openstack control plane is running | 19:37 |
spatel | But in my case the CAPI control plane is running far away, in a different network, on a machine with a public IP. | 19:37 |
spatel | I have put nginx (with a public IP) in front of my private openstack to expose it publicly, and updated the public endpoint to the nginx public IP | 19:38 |
spatel | Now I want to tell magnum to always use the publicURL for all requests | 19:39 |
jrosser | like I say you configure the things that magnum is going to interact with in magnum.conf | 19:40 |
jrosser | and sounds like you need a mix? | 19:40 |
spatel | [magnum_client] is this what I should set? | 19:41 |
jrosser | magnum container can use internal endpoint | 19:41 |
spatel | https://docs.openstack.org/magnum/latest/configuration/sample-config.html | 19:41 |
jrosser | but you need to tell magnum-cluster-api to use the public endpoint | 19:41 |
jrosser | [capi_client] config section | 19:41 |
spatel | in magnum.conf just add a [capi_client] section and put endpoint_type = publicURL, right? | 19:43 |
spatel | let me add it, restart the services, and see | 19:44 |
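(For context, the change being discussed amounts to roughly the following sketch; section and option names are as given in the channel, and as the later discussion shows, the public endpoint in the keystone service catalog also has to be reachable for this to help.)

```shell
# Sketch: add a [capi_client] section so the magnum-cluster-api driver
# resolves the public endpoint (verify option names against the
# magnum-cluster-api docs for your version)
cat >> /etc/magnum/magnum.conf <<'EOF'
[capi_client]
endpoint_type = publicURL
EOF
# restart the magnum services afterwards (unit names vary by deployment)
systemctl restart magnum-api magnum-conductor
```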
NeilHanlon | ugh... we have some cycle i think with the rocky upgrade job in gate for https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/903544, jrosser | 19:45 |
NeilHanlon | 2023.1 tries to install erlang-25.2.3, but uses the el/8/ repo, which apparently doesn't have this version | 19:46 |
spatel | jrosser no luck.. it's still sending the internal URL to CAPI | 19:47 |
jrosser | spatel: honestly, I spent several weeks making this all work | 19:47 |
spatel | [capi_client] | 19:48 |
spatel | endpoint_type = publicURL | 19:48 |
spatel | This is what I added in magnum.conf | 19:48 |
hamburgler | Has anyone had any luck integrating horizon - swift dash with ceph backend so that swift storage policies are mapped 1:1 with ceph placement-targets? Thinking this is probably a no :p | 19:48 |
jrosser | spatel: check the magnum-cluster-api docs for the config options | 19:49 |
spatel | https://github.com/vexxhost/magnum-cluster-api/blob/main/docs/user/configs.md | 19:50 |
spatel | where should I add this flag? does magnum-cluster-api have a different config file? | 19:51 |
jrosser | hamburgler: I see two choices that I expect in the storage policy drop-down in horizon | 19:52 |
spatel | I am reading your patch here - https://review.opendev.org/c/openstack/openstack-ansible/+/893240/31/tests/roles/bootstrap-host/templates/user_variables_k8s.yml.j2 | 19:52 |
hamburgler | jrosser: is your backend ceph for object storage? or swift? | 19:53 |
spatel | I have added it in the correct place in the magnum.conf file | 19:53 |
jrosser | spatel: magnum-cluster-api is a driver for magnum, it uses settings from magnum.conf | 19:53 |
spatel | +1 | 19:53 |
jrosser | hamburgler: it’s ceph/radosgw | 19:53 |
jrosser | no actual swift | 19:53 |
hamburgler | so no swift installation at all, just the swift dash enabled in local_settings.py for horizon? this is what I have done; everything works fine except I cannot create buckets via horizon, yet oddly I can fully interact with them after the fact if I created them via the cli | 19:55 |
hamburgler | containers* not buckets | 19:55 |
jrosser | well, we have a pretty complicated radosgw setup | 19:57 |
spatel | jrosser as per the doc the default endpoint is publicURL | 19:57 |
spatel | This is crazy :( | 19:57 |
spatel | let me see what I can do | 19:57 |
jrosser | but, there is one specifically setup to be Swift API for horizon | 19:57 |
hamburgler | ah k :) you mean a dedicated cluster for horizon? | 20:02 |
jrosser | a dedicated radosgw yes, but that's just because we have a slightly odd setup | 20:03 |
jrosser | I can take a look at the config tomorrow if that is helpful | 20:04 |
jrosser | also ask my colleague who did this if we had any trouble | 20:04 |
hamburgler | ah k that makes sense :), i guess there would be issues if too many different api types were enabled on that set of gateways | 20:04 |
jrosser | it was something like that yes | 20:05 |
jrosser | eventually it was simpler to just run different unique instances for swift / s3 and “internal” for horizon | 20:05 |
hamburgler | that'd be great if you wouldn't mind - running reef over here | 20:06 |
hamburgler | yeah totally makes sense | 20:06 |
jrosser | aha we didn't dare start to try reef until the .1 release that was just made | 20:07 |
hamburgler | haha :D yeah saw the notice yesterday | 20:07 |
hamburgler | i've had no issues running it thankfully | 20:07 |
spatel | jrosser one more question, does the workload cluster k8s instance require talking to the CAPI control plane k8s? | 20:18 |
jrosser | spatel: no it’s the other way round | 20:27 |
jrosser | see my diagram | 20:27 |
spatel | oh so CAPI k8s will make connections to the workload master/worker nodes? | 20:27 |
jrosser | yes | 20:28 |
spatel | now I am having an issue with my instance not getting a floating IP attached :( | 20:34 |
spatel | in the logs it's saying - CAPI OpenstackCluster status reason: Successfulcreatefloatingip: Created floating IP 104.xxx.xxx.204 with id 2422f704-78e4-45ff-9f3d-bccb713d59d3) | 20:34 |
spatel | but in nova list it's not showing up.. | 20:34 |
jrosser | openstack server list? | 20:36 |
jrosser | spatel: the floating ip is for the loadbalancer | 20:39 |
spatel | my external network is public in the template so it always uses a public floating IP | 20:40 |
jrosser | that’s fine | 20:40 |
jrosser | you said trouble “with instance getting floating ip” | 20:40 |
spatel | even in my old deployment all my master and worker nodes have public (floating) IPs | 20:40 |
jrosser | but the floating ip is for octavia | 20:40 |
spatel | Yes octavia should have floating IP | 20:41 |
spatel | but in my old environment all the k8s nodes had public floating IPs | 20:42 |
jrosser | well that was the heat driver? | 20:42 |
spatel | yes it was heat | 20:42 |
spatel | I hate heat, so I'm trying to make CAPI workable | 20:42 |
jrosser | spatel: I’m not sure this puts floating ip on the instances | 21:02 |
spatel | jrosser here it is, I just fixed something and it's showing a floating IP now - https://paste.opendev.org/show/bxtZb82TkS5Clk1mtLP7/ | 21:07 |
jrosser | and the loadbalancer? | 21:11 |
spatel | jrosser I am inside the master k8s node and seeing this error - https://paste.opendev.org/show/bRkZftSA4PioHzQ026VK/ | 21:15 |
spatel | my cluster is still in creating state | 21:15 |
spatel | where is the kube config located inside the master node? | 21:18 |
jrosser | then look at usual k8s stuff to debug | 21:19 |
jrosser | like pod status, kubelet log | 21:19 |
jrosser | all the config came in through cloud-init | 21:19 |
spatel | I am not able to find ~/.kube/conf file | 21:19 |
jrosser | look at the cloud-init user data for that | 21:19 |
jrosser | it’s in /etc/kubernetes ? | 21:20 |
jrosser | also there I think is clouds.yaml for openstack api setup | 21:20 |
spatel | https://paste.opendev.org/show/bIpAzjocZpB4LOYzktk4/ | 21:23 |
spatel | how do I give the path of the config in the kubectl command :D | 21:24 |
jrosser | right, so admin.conf? | 21:24 |
spatel | I am a little noob | 21:24 |
jrosser | --kubeconfig | 21:24 |
jrosser | hah I am also total noob at this | 21:24 |
spatel | cp admin.conf ~/.kube/config | 21:25 |
jrosser | once the cluster deploys I have absolutely no clue what you would do next | 21:25 |
spatel | it works - https://paste.opendev.org/show/besdCfQThMMIUafiKfZk/ | 21:25 |
spatel | I am also learning :) | 21:25 |
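(For anyone following along, the kubeconfig steps above amount to the following on a kubeadm-style node, where the admin kubeconfig lives under /etc/kubernetes.)

```shell
# Use the admin kubeconfig directly...
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes

# ...or copy it into place so plain kubectl works for your user
mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
kubectl get pods -A
```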
spatel | This is fun now.. because I have no clue what to look at here to find my cinder issue | 21:28 |
spatel | https://paste.opendev.org/show/bOxDdbp5i65y6RxM452b/ | 21:28 |
opendevreview | Doug Goldstein proposed openstack/openstack-ansible master: abstract bootstrap host disk partition names https://review.opendev.org/c/openstack/openstack-ansible/+/901106 | 21:29 |
jrosser | check the kubelet journal | 21:29 |
spatel | https://paste.opendev.org/show/b78hlrkve1WHgS68zfvg/ | 21:30 |
spatel | "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-csi-plugin\" with CrashLoopBackOff: | 21:31 |
spatel | that is all I can see | 21:31 |
spatel | let me ask this in mailing list and see | 21:37 |
jrosser | get the pod logs for cinder csi if you can | 21:40 |
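(A short sketch of the debugging steps being suggested here; the pod name placeholders are illustrative, and the cinder CSI and cloud-controller-manager pods normally live in the kube-system namespace.)

```shell
# Kubelet side: why pods fail to start or sync
sudo journalctl -u kubelet --since "10 minutes ago" | tail -n 50

# Pod side: find the crashing pods, then pull logs and events for them
kubectl get pods -A | grep -Ei 'cinder|cloud-controller'
kubectl -n kube-system logs <cinder-csi-pod> --all-containers --tail=100
kubectl -n kube-system describe pod <cinder-csi-pod>
```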
jrosser | spatel: does the cloud.conf look sensible? (don’t paste the app creds!) | 21:42 |
spatel | This is my playground cluster so not worried about it :) | 21:42 |
spatel | but good heads up | 21:42 |
spatel | I am thinking of building a different version of the cluster to see whether I get a similar error or not. | 21:43 |
spatel | or I can remove the cinder dependency from the template and try (at least it will help me understand the real issue) | 21:43 |
spatel | jrosser here you go - https://paste.opendev.org/show/bCpPzeyLS1ezMuYx9PO5/ | 21:45 |
jrosser | did you check the cloud.conf? | 21:45 |
jrosser | reason i ask is that both cinder csi and openstack-cloud-controller-manager are failing | 21:46 |
jrosser | those two things are related to your openstack | 21:46 |
jrosser | for example, you say it's a playground cluster, so what is the situation with SSL on the public endpoint? | 21:47 |
spatel | here is my cloud.conf file - https://paste.opendev.org/show/bY2aj3JbksdC6QaNPv5r/ | 21:47 |
spatel | This file is pointing to the private IP, and from the instance I can't curl that URL | 21:48 |
jrosser | it is the public endpoint though? | 21:48 |
spatel | That could be my issue.. totally | 21:48 |
spatel | no, it's the internal one, os2-int | 21:48 |
jrosser | oh 'int' -> internal | 21:48 |
spatel | yes | 21:48 |
jrosser | ahha so this is no good | 21:49 |
spatel | that is where my head is exploding, because I am not able to tell magnum to use the public endpoint :( | 21:49 |
spatel | is there any way I can tell it to change to the public endpoint, in the template or somewhere? | 21:50 |
jrosser | that is read from your service catalog | 21:53 |
jrosser | https://github.com/vexxhost/magnum-cluster-api/blob/main/magnum_cluster_api/utils.py#L116 | 21:53 |
spatel | Hmm, my service catalog has a private IP for the public endpoint | 21:54 |
spatel | as I said earlier, I have nginx outside openstack to expose it publicly | 21:54 |
jrosser | the service catalog should have the nginx ip then for the public endpoint | 21:55 |
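(The catalog check and fix being described is roughly this; the endpoint ID and nginx URL are placeholders.)

```shell
# Inspect what the catalog advertises for each interface
openstack catalog list
openstack endpoint list --interface public

# Point the public endpoints at the externally reachable nginx address,
# repeating per service; <endpoint-id> comes from the listing above
openstack endpoint set --url https://<nginx-public-ip>:5000/v3 <endpoint-id>
```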
spatel | when I changed that IP to public my vm stopped receiving a floating IP :( | 21:57 |
spatel | Let me change it and debug what is going on | 21:57 |
spatel | hold on.. | 21:57 |
spatel | jrosser I think I found the issue.. my nginx is not parsing the header properly.. let me fix it | 22:09 |
opendevreview | Gaudenz Steinlin proposed openstack/openstack-ansible-plugins master: Ensure consistent ordering of network_mappings https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/904040 | 22:11 |
spatel | jrosser progress.. now I can see the vm building process | 22:23 |
spatel | fingers crossed | 22:23 |
spatel | jrosser epic win :) CREATE_COMPLETE | 22:26 |
jrosser | \o/ | 22:27 |
jrosser | nice | 22:27 |
spatel | jrosser Thank you so much for the help :) | 22:31 |
jrosser | thats ok :) | 22:31 |
jrosser | thing is, for this your deployment has to be in really good shape and everything correct | 22:32 |
spatel | the credit should go to you | 22:32 |
spatel | time to write a blog post before my brain shuts down | 22:32 |
jrosser | yeah write it down quick | 22:32 |
jrosser | i was doing docs for OSA today for this same stuff | 22:32 |
spatel | Yep! | 22:32 |
spatel | for production, do I need a multi-node CAPI control plane? | 22:33 |
jrosser | though my patches are a bit different and deploy everything including control plane | 22:33 |
jrosser | well maybe, i believe that it is stateful and important stuff is in the etcd | 22:33 |
spatel | are you using kind cluster? | 22:33 |
jrosser | so if you lose that then you are hosed | 22:33 |
spatel | ouch!!! so one more critical stuff to handle :O | 22:34 |
jrosser | no, i am using this https://github.com/vexxhost/ansible-collection-kubernetes | 22:34 |
spatel | hmm! | 22:34 |
jrosser | yes, i think that, like mariadb, it is important to have some plan for disasters / upgrades etc | 22:34 |
jrosser | and so far that is something that i have not yet really put much time into | 22:35 |
jrosser | other than with that collection i get a completely specified k8s environment, all versions controlled | 22:35 |
spatel | This is crazy then.. one more beast to maintain which I am not good at | 22:35 |
jrosser | btw you do not have to make the control plane k8s public | 22:36 |
jrosser | you can do it public if you want | 22:36 |
spatel | agreed, I have moved my k8s CAPI to private network and it works | 22:36 |
jrosser | but you can also keep it completely on the mgmt network | 22:36 |
jrosser | also the public floating IP is optional if you want to reduce usage of public IP | 22:37 |
spatel | hmm! | 22:37 |
jrosser | there is an optional service you can run | 22:38 |
spatel | okie okie! | 22:39 |
jrosser | https://github.com/vexxhost/magnum-cluster-api/tree/main/magnum_cluster_api/proxy | 22:39 |
jrosser | sadly there is zero docs for this so far | 22:39 |
jrosser | but my OSA patches provide an example | 22:40 |
jrosser | you run it on your network node and it bridges between the management network and the tenant network | 22:40 |
jrosser | "bridges" -> provides an http endpoint between them with haproxy, automatically | 22:40 |
jrosser | that might be important, depending on what your production environment network looks like | 22:41 |
jrosser | and how tight your supply of floating IP is | 22:41 |
spatel | I think it requires proper planning for the placement of the capi node | 22:42 |
jrosser | yes, though now you are in a good position to think about those things | 22:42 |
spatel | look at me.. how i was struggling between public/internal endpoints :( | 22:43 |
jrosser | i only know all this because we had to get some fixes done for the same stuff | 22:43 |
jrosser | and we also contribute some code to it now | 22:43 |
jrosser | because we have a very strict split between internal/public | 22:43 |
spatel | in OSA why don't we just run CAPI on the infra nodes and expose it via HAProxy | 22:44 |
jrosser | so if there are errors in the code / selection of endpoints it's just all broken | 22:44 |
jrosser | that's exactly what happens on the internal endpoint, 6443 is capi k8s | 22:44 |
spatel | In that case we don't need to worry about capi proxy etc.. (let the end user decide how they want to place it outside OSA) | 22:45 |
jrosser | remember capi k8s needs to contact the k8s api endpoint in your workload cluster | 22:45 |
jrosser | you just did that with your floating ip | 22:45 |
spatel | In my case only the master node has a floating IP so capi can talk to it | 22:46 |
jrosser | so the proxy allows that to happen when there is not a floating IP on the octavia lb | 22:46 |
jrosser | no, as i keep saying :) | 22:46 |
jrosser | this deploys a LB with octavia | 22:46 |
spatel | Not worker node | 22:46 |
spatel | But we need a floating IP on the workload master node, otherwise how will the customer access it? | 22:47 |
spatel | Via octavia? | 22:48 |
jrosser | thats a different thing | 22:48 |
jrosser | i expect the idea is that you already have other infrastructure, a bastion or something else in your tenant | 22:48 |
jrosser | and you access via that and the neutron router | 22:48 |
spatel | my customers are public users so I have to place it on a public IP to let them access it | 22:48 |
jrosser | sure | 22:49 |
jrosser | they may have a bastion with a floating ip | 22:49 |
spatel | Hmm! (maybe for a private cloud) | 22:49 |
jrosser | or some other vm | 22:49 |
spatel | Yep! that should work | 22:49 |
jrosser | also pay attention to the cluster template | 22:49 |
jrosser | the only way a ssh key gets into the instances is via the cluster template | 22:49 |
spatel | No.. I am injecting the key when I create the cluster | 22:50 |
spatel | openstack coe cluster create --cluster-template k8s-v1.27.4 --master-count 1 --node-count 2 --keypair spatel-ssh-key mycluster1 | 22:50 |
jrosser | ok that probably overrides the template, not sure | 22:51 |
spatel | This works for me | 22:51 |
jrosser | anyway, i expect the idea is that you obtain the .kube/config using the openstack cli | 22:51 |
jrosser | and there should be not much need to actually get into the nodes | 22:51 |
spatel | I have used - openstack coe cluster config mycluster1 | 22:52 |
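(As a usage note: `openstack coe cluster config` writes a kubeconfig for the workload cluster into the target directory and prints an export line, so it can be used roughly like this.)

```shell
# Fetch the workload cluster's kubeconfig from the magnum API and use it;
# the directory is arbitrary
mkdir -p ~/clusters/mycluster1
eval "$(openstack coe cluster config mycluster1 --dir ~/clusters/mycluster1)"
kubectl get nodes
```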
jrosser | also this supports auto-healing and autoscaling, so it makes not much sense to do manual config | 22:52 |
jrosser | i think if you delete a worker vm it will spawn another to replace it | 22:52 |
spatel | Hmm! interesting point :) | 22:53 |
jrosser | so workflow here for your users is important | 22:53 |
jrosser | if they think they need to log in to the instances to do things by hand, that's probably wrong | 22:53 |
spatel | We let customers do whatever they want. If they want to SSH to the master node then sure, they can do it | 22:54 |
jrosser | deploy the apps and whatever remotely, using the api and credentials obtained from the openstack cli | 22:54 |
jrosser | sure they can do that | 22:54 |
jrosser | but it should be no surprise if their vms get recycled, or replaced | 22:54 |
spatel | Yes +1 | 22:55 |
jrosser | something else i have not looked at is how upgrades are supposed to work | 22:55 |
jrosser | would not surprise me if the nodes are rolling-replaced with a newer k8s image from glance | 22:55 |
spatel | I am damn new to k8s.. I have no idea how stuff works inside k8s | 22:57 |
spatel | time to learn.. | 22:57 |
jrosser | yeah actually `openstack coe cluster upgrade <cluster ID> <new cluster template ID>` | 22:57 |
jrosser | the new cluster template would specify a different image in glance | 22:57 |
jrosser | so i expect that the nodes would be replaced to conform to the new cluster template | 22:58 |
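(A hedged sketch of that upgrade flow; the image, flavor, and template names are made up for illustration.)

```shell
# Register a newer k8s image and a cluster template that points at it
openstack image create --disk-format qcow2 --file ubuntu-2204-kube-v1.28.qcow2 \
    --property os_distro=ubuntu ubuntu-2204-kube-v1.28
openstack coe cluster template create k8s-v1.28 \
    --coe kubernetes --image ubuntu-2204-kube-v1.28 \
    --external-network public --flavor m1.medium --master-flavor m1.medium

# Rolling upgrade of an existing cluster onto the new template
openstack coe cluster upgrade mycluster1 k8s-v1.28
```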
jrosser | ok it is late here | 22:58 |
spatel | same here! | 23:02 |
spatel | I will catch you next time :) | 23:02 |
spatel | goodnight folks! and merry Christmas if I don't see you here | 23:03 |