opendevreview | Abhishek Kekane proposed openstack/glance master: DNM [Upload policy] functional-py36 timeout test https://review.opendev.org/c/openstack/glance/+/804898 | 06:04 |
opendevreview | Abhishek Kekane proposed openstack/glance_store master: Xena cycle Release Notes https://review.opendev.org/c/openstack/glance_store/+/804952 | 06:38 |
opendevreview | Mridula Joshi proposed openstack/glance master: It was observed that the md-tag-create-multiple (/v2/metadefs/namespaces/{namespace_name}/tags) API overwrites existing tags for the specified namespace rather than creating new ones in addition to the existing tags. This patch resolves the issue by not deleting the previous data and by adding the new tags to the existing ones. https://review.opendev.org/c/openstack/glance/+/804966 | 08:33 |
abhishekk | Summary of the timeout issue: | 08:45 |
abhishekk | Locally I ran functional-py36 on upload patch more than 80 times - No failure | 08:45 |
abhishekk | Upstream DNM functional-py36 on upload is failing 8 times out of 10 (functional-py38 passes every time) | 08:45 |
abhishekk | Upstream DNM functional-py36 on Master is passing 100% | 08:45 |
abhishekk | To unblock progress, I am going to pull the upload policy change patch out of the chain for the time being and keep looking for the possible failure. | 08:45 |
abhishekk | dansmith, croelandt, lbragstad ^^^ | 08:45 |
opendevreview | Abhishek Kekane proposed openstack/glance master: Check download_image policy in the API https://review.opendev.org/c/openstack/glance/+/804547 | 09:27 |
opendevreview | Abhishek Kekane proposed openstack/glance master: Check policies for staging operation in API https://review.opendev.org/c/openstack/glance/+/804558 | 09:27 |
abhishekk | After rebasing on the master branch, both have timed out | 10:49 |
opendevreview | Abhishek Kekane proposed openstack/glance master: Check policies for delete image fro store in API https://review.opendev.org/c/openstack/glance/+/804585 | 11:17 |
opendevreview | Abhishek Kekane proposed openstack/glance master: Check policies for delete image for store in API https://review.opendev.org/c/openstack/glance/+/804585 | 11:21 |
JqckB | Hello, I have a small question I can't find an answer to: is it possible to get something like an image alias? I would like to have as many images as there are distributions, but only one target for my users. | 11:47 |
JqckB | Something like https://wiki.openstack.org/wiki/Glance/ImageAliases | 11:47 |
JqckB | Do you have an idea how to do this? Or if you have a counter-proposal, I'm open to it | 11:48 |
abhishekk | croelandt, dansmith we need to find a workaround for the timeout issue :/ | 13:45 |
dansmith | abhishekk: I was going to ask earlier when you were offline.. it's not clear to me, but is it still just that one patch or did you say that rebasing the others on master makes them timeout also? | 13:46 |
abhishekk | also please review this reno patch for glance store, so that once this is merged I can tag glance-store release tonight or tomorrow | 13:46 |
dansmith | I went to -qa to talk to gmann about the timeout thing and got sucked into a scope discussion | 13:47 |
abhishekk | I rebased download patch on master and staging patch on top of download, and then both timed out | 13:47 |
abhishekk | on the second attempt they worked | 13:47 |
dansmith | ack okay I thought that's what you meant, just confirming | 13:47 |
dansmith | good news is that makes sense right? that it's not something wonky with a patch that clearly shouldn't be causing timeouts | 13:48 |
abhishekk | I don't think timeout has anything to do with our changes | 13:48 |
abhishekk | yes | 13:48 |
dansmith | maybe we should ask him here, since it's busy in -qa | 13:50 |
abhishekk | we almost have everything covered | 13:50 |
abhishekk | yeah | 13:50 |
dansmith | gmann: we're having an issue with our -py36 jobs timing out pretty seriously, but the same -py38 job is fine | 13:50 |
dansmith | gmann: wondering if it has to do with some of the py36-specific constraints, because otherwise we are not sure what is different | 13:51 |
abhishekk | https://review.opendev.org/c/openstack/glance/+/804898 | 13:51 |
abhishekk | this is reference | 13:51 |
dansmith | gmann: have you heard anything like this from other projects or have any ideas? | 13:51 |
abhishekk | this is the latest pipeline for time out, https://zuul.opendev.org/t/openstack/builds?result=TIMED_OUT | 13:51 |
abhishekk | I can see neutron and tacker have recent hits | 13:52 |
abhishekk | seems similar kind of failure | 13:54 |
dansmith | I dunno, | 13:54 |
dansmith | the neutron one is py39 | 13:55 |
dansmith | the tacker one is py36 tho | 13:55 |
abhishekk | yeah, I think neutron lower constraint is on py36 | 13:55 |
dansmith | okay | 13:56 |
dansmith | well, let's see what gmann has to say | 13:56 |
abhishekk | swift functional is also down there | 13:57 |
abhishekk | ack | 13:57 |
* abhishekk 3 important days gone | 13:59 |
* abhishekk will be back in 30 mins | 14:12 | |
opendevreview | Dan Smith proposed openstack/glance master: Test Revert "Resolve compatibility with oslo.db future (redux)" https://review.opendev.org/c/openstack/glance/+/805035 | 14:34 |
opendevreview | Dan Smith proposed openstack/glance master: Test Revert "Resolve compatibility with oslo.db future" https://review.opendev.org/c/openstack/glance/+/805036 | 14:34 |
dansmith | abhishekk: just a guess ^ | 14:34 |
abhishekk | dansmith, ack, worth trying | 14:46 |
dansmith | looks like both have passed py36 | 14:49 |
dansmith | we can recheck several times of course | 14:50 |
dansmith | s/can/should/ | 14:50 |
abhishekk | yeah | 14:50 |
abhishekk | topic :P | 14:55 |
abhishekk | I think here also we should keep only 2 jobs to save time and try several rechecks | 14:57 |
dansmith | topic? | 15:00 |
dansmith | these are only running small jobs, so it's not a huge deal | 15:00 |
dansmith | oh my topic, I see :) | 15:00 |
abhishekk | functional protection is installing openstack I guess | 15:01 |
abhishekk | yeah | 15:01 |
dansmith | devstack? | 15:02 |
abhishekk | yes | 15:02 |
dansmith | I can rebase it on your 804898 patch when this run finishes | 15:02 |
abhishekk | ++ | 15:02 |
dansmith | or I guess we already know it passed, so I can do it now, hang on | 15:02 |
abhishekk | ++ ++ | 15:03 |
opendevreview | Dan Smith proposed openstack/glance master: DNM [Upload policy] functional-py36 timeout test https://review.opendev.org/c/openstack/glance/+/804898 | 15:04 |
opendevreview | Dan Smith proposed openstack/glance master: Test Revert "Resolve compatibility with oslo.db future (redux)" https://review.opendev.org/c/openstack/glance/+/805035 | 15:04 |
opendevreview | Dan Smith proposed openstack/glance master: Test Revert "Resolve compatibility with oslo.db future" https://review.opendev.org/c/openstack/glance/+/805036 | 15:04 |
whoami-rajat | dansmith, hey, i remember you worked on a feature to set virtual_size on the image during the image create operation? | 15:08 |
dansmith | yup | 15:08 |
abhishekk | In local runs these tests are skipped | 15:08 |
dansmith | abhishekk: what? | 15:08 |
gmann | dansmith: abhishekk not any I know about the timeout. seems it is not 100%, right? https://zuul.opendev.org/t/openstack/builds?job_name=openstack-tox-functional-py36 | 15:08 |
dansmith | gmann: like 80% | 15:08 |
whoami-rajat | dansmith, so does it apply to the case when we 'upload a volume to image' ? if not, is it feasible to do it? | 15:09 |
dansmith | whoami-rajat: it should apply to any upload case yeah | 15:09 |
dansmith | abhishekk: oh the db migration tests? | 15:10 |
abhishekk | yeah, just confirming | 15:10 |
dansmith | abhishekk: ah because they need real db running... well, that would explain it yeah? :) | 15:11 |
whoami-rajat | dansmith, hmm, somehow it doesn't show up in this bug https://bugs.launchpad.net/cinder/+bug/1939972 | 15:11 |
gmann | dansmith: ok, and in glance only | 15:11 |
whoami-rajat | i mean the virtual_size is None | 15:11 |
abhishekk | gmann, yes | 15:11 |
dansmith | whoami-rajat: there are logs if it fails to compute it or read the qcow file, so I would look for those | 15:12 |
dansmith | whoami-rajat: but also, we don't know the virtual size until we've read enough of the image to have the metadata.. don't we have to have created the volume before that? I'm not sure if the bug is really that we never get the virtual size so much as we have already created the volume at that point | 15:13 |
dansmith | whoami-rajat: look for these: https://github.com/openstack/glance/blob/dd3155516cec2cabf8f74963a44ab642d507384b/glance/location.py#L572-L585 | 15:15 |
whoami-rajat | dansmith, in this scenario, a) we create the volume, b) upload it as an image, c) create a new volume from that image | 15:15 |
whoami-rajat | so we should have virtual_size in step b) | 15:16 |
dansmith | ack | 15:16 |
whoami-rajat | dansmith, ok, i will try that scenario out and see for any issues | 15:16 |
dansmith | whoami-rajat: cool, let me know what you find | 15:17 |
dansmith | whoami-rajat: I think there was some discussion of a " | 15:18 |
dansmith | of a "qcow2 v2" recently, so maybe we're not detecting the new version properly or something? the amount of data we need for qcow2 is very small, so I would hope it hasn't changed *that* much | 15:18 |
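dansmith's point that only a small amount of data is needed holds because the virtual size lives in qcow2's fixed header. A minimal sketch (not glance's actual inspector code) that reads it from the first 32 bytes, per the qcow2 image format spec:

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"  # magic bytes at offset 0 of every qcow2 file

def qcow2_virtual_size(header: bytes) -> int:
    """Return the virtual size encoded in a qcow2 header.

    Per the qcow2 spec, the fixed header starts with a 4-byte magic,
    a 4-byte big-endian version (2 or 3), and carries the 8-byte
    big-endian virtual size at offset 24 -- so 32 bytes are enough.
    """
    if len(header) < 32 or header[:4] != QCOW2_MAGIC:
        raise ValueError("not a qcow2 header")
    version = struct.unpack(">I", header[4:8])[0]
    if version not in (2, 3):
        raise ValueError("unsupported qcow2 version: %d" % version)
    return struct.unpack(">Q", header[24:32])[0]
```

Both header versions discussed above ("v2"/"v3") keep the size field at the same offset, which is why a format bump alone would be unlikely to break detection.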
* abhishekk successful on all 3 patches | 15:19 |
dansmith | that likely means it's just the redux patch I think | 15:21 |
dansmith | since they revert in reverse order | 15:21 |
whoami-rajat | dansmith, sure, IIRC the discussion was around making qcow2-v2 images read only which would render them unusable unless converted to qcow2-v3 format or maybe I'm thinking of another discussion | 15:21 |
dansmith | whoami-rajat: okay I thought it was qcow2-v1 and qcow2-v2, but still my point is, if we're not handling something about the *newer* format, that could explain it | 15:22 |
dansmith | whoami-rajat: but if you can repro it not working, then we can go from there | 15:22 |
abhishekk | dansmith, likely, added rechecks on both the patches | 15:22 |
whoami-rajat | dansmith, sure, thanks for your inputs, i will try and let you know | 15:22 |
dansmith | whoami-rajat: cool thanks.. I was pretty excited about this feature because I thought the implementation was cool, so I definitely want to make sure it's working :) | 15:23 |
dansmith | abhishekk: ack.. I think we should recheck several more just to be sure, but this actually makes sense, which is good | 15:23 |
abhishekk | ++ | 15:24 |
dansmith | abhishekk: also, stephenfin is really smart, so once we tell him we think there's a problem I'm sure he'll fix it quick...oops :) | 15:24 |
abhishekk | :) | 15:24 |
stephenfin | appealing to my ego, I see. Clever... | 15:24 |
dansmith | heh | 15:25 |
abhishekk | maybe you will explain it better, and in short | 15:25 |
whoami-rajat | dansmith, yep, i think i requested it long ago in a PTG and my general usecase was to know the virtual_size before uploading image to volume (cinder store), i wasn't aware at that time that glance streams the image in chunks and writes to volume so it wasn't possible | 15:25 |
whoami-rajat | but anyway it's a good feature :) | 15:26 |
dansmith | whoami-rajat: :) | 15:26 |
dansmith | stephenfin: it seems like the redux patch is causing one worker to completely hang about 80% of the time, only on py36 and causing most of our runs to timed_out on that job | 15:27 |
dansmith | that's the current theory anyway | 15:27 |
dansmith | we couldn't repro the failure locally, but now thinking because nobody is running those tests locally because they don't have the opportunistic config | 15:27 |
stephenfin | Oh, that's interesting. In theory all I've done is made the session/connection management explicit rather than automatic | 15:28 |
dansmith | ack, I haven't even looked at them, we were just stabbing in the dark | 15:28 |
stephenfin | but frickler (I think) noted...somewhere that SQLA 1.4 had some significant performance issues | 15:29 |
dansmith | but the fact that it seems to always only happen on py36 is interesting | 15:29 |
stephenfin | very | 15:29 |
dansmith | do we pin sqla or any related dep on py36 but not py38? | 15:29 |
abhishekk | as we are talking it passed 3rd consecutive time on redux revert | 15:29 |
dansmith | I looked at u-c and saw some different pins, but... | 15:29 |
stephenfin | if you get a reliable pass rate, it might be worth dragging in zzzeek if he can spare the time | 15:29 |
stephenfin | we shouldn't be, but I'll check | 15:29 |
dansmith | I don | 15:30 |
dansmith | I don't think it's just running slow or something, it seems totally hung because in the subunits I looked at, we got zero reports ever from the affected worker | 15:30 |
stephenfin | nope, nothing specific in openstack/requirements or glance itself, fwict | 15:30 |
dansmith | we have some different pins in u-c based on python version, but nothing really substantial that I saw | 15:31 |
dansmith | like networkx, which I know changed recently, but doubt that is related | 15:31 |
stephenfin | Yeah, hangs coupled with the fact the patches made changes to DB session management would suggest a relationship. I'll play around with it... | 15:33 |
dansmith | are we really running the same version of mysql, pymysqlclient, and sqla on both py36 and py38? I guess I would expect maybe some delta there since py36 distros would have been a while ago | 15:35 |
dansmith | since it's apparently version specific, I'd expect this to be related to a different version of something in that stack and not just python | 15:35 |
stephenfin | Fair point. I think we're running on Ubuntu 18.04 for the 3.6 tests and 20.04 for the 3.8 ones | 15:46 |
stephenfin | so it's not identical | 15:46 |
dansmith | ah yeah, so that means different mysql at least I assume | 15:47 |
opendevreview | Ghanshyam proposed openstack/glance master: Suppress policy deprecation and default change warnings https://review.opendev.org/c/openstack/glance/+/805049 | 15:48 |
dansmith | gmann: thank $deity | 15:48 |
gmann | dansmith: abhishekk ^^ these warnings may cause the timeout by filling the logs | 15:49 |
dansmith | they certainly cause pain even without timeouts, so thanks | 15:49 |
abhishekk | ++ | 15:49 |
* abhishekk going for dinner | 15:54 | |
opendevreview | Ghanshyam proposed openstack/glance master: Suppress policy deprecation and default change warnings https://review.opendev.org/c/openstack/glance/+/805049 | 16:13 |
abhishekk | smcginnis, could you please review https://review.opendev.org/c/openstack/glance_store/+/804952 | 16:46 |
abhishekk | dansmith, I think 4 successful runs are enough or we should add more rechecks? | 16:53 |
dansmith | abhishekk: I dunno, I'd do a bunch because they're easy, but sounds like the wheels are moving anyway, so might as well be sure | 16:54 |
abhishekk | yeah, I will add one more round of recheck | 16:54 |
abhishekk | thinking of signing out early today, but if we have a fix then you can ninja approve it | 16:55 |
kukacz | hi, having cinder as glance backend, I am dealing with issue that volumes are byte-streamed from images on creation, instead of being just thin cloned on the cinder backend (powerflex). what might be wrong? | 16:57 |
dansmith | abhishekk: ack | 16:58 |
dansmith | abhishekk: struggling with the image factory auth layer thing. causes a ton of fails because something else is going on, so don't think I'm not working on that :) | 16:58 |
abhishekk | dansmith, no issues | 16:59 |
abhishekk | kukacz, I don't think cinder has that support | 17:00 |
abhishekk | you are thinking similar to copy_on_write ? | 17:00 |
abhishekk | dansmith, I will scan through glance code and see whether we skipped any get_repo or similar calls to avoid policy enforcement (may be tomorrow) | 17:02 |
dansmith | okay | 17:02 |
kukacz | abhishekk: yes, I expect a snapshot based copy to be instantly created since both glance and cinder have the same backend in the end | 17:03 |
abhishekk | cool, signing out for the day, have a good day | 17:03 |
abhishekk | whoami-rajat, could you please confirm whether this support is there? | 17:03 |
kukacz | abhishekk: looking at this blueprint I thought the support might be in place: https://specs.openstack.org/openstack/cinder-specs/specs/liberty/clone-image-in-glance-cinder-backend.html | 17:04 |
abhishekk | kukacz, need to have a look at it | 17:05 |
gmann | dansmith: abhishekk can either of you check this too, glance policy scope configuration setting are moved to devstack side (depends-on merged) which avoid restarting the api services - https://review.opendev.org/c/openstack/glance/+/778952 | 17:06 |
dansmith | abhishekk: omg.. I think the auth layer is the only thing that actually sets image.owner! so removing that makes all images owner=None. | 17:07 |
dansmith | gmann: will have to queue, buried atm | 17:07 |
gmann | dansmith: sure, not urgent when you have time | 17:07 |
abhishekk | dansmith, similar thing I faced in namespace | 17:07 |
dansmith | abhishekk: where is my lighter... | 17:08 |
dansmith | https://pics.me.me/burn-it-burn-all-down-memes-com-16257715.png | 17:08 |
kukacz | abhishekk: thanks. I am talking about offloading the volume cloning to the real cinder backend, to be clear | 17:08 |
abhishekk | https://review.opendev.org/c/openstack/glance/+/799633/22/glance/api/v2/metadef_namespaces.py@153 | 17:08 |
abhishekk | lol | 17:09 |
dansmith | abhishekk: yeah, same thing.. wtf | 17:09 |
abhishekk | kukacz, ack, I will have a look possibly tomorrow, if it is urgent you can ping in cinder channel as well | 17:10 |
dansmith | today is definitely a "4 coffee" day | 17:11 |
abhishekk | +1 | 17:12 |
kukacz | abhishekk: thanks a lot. yes, it is quite urgent, I will try the cinder channel and get back to you tomorrow if needed | 17:12 |
abhishekk | kukacz, no problem, I hope rosmaita or whoami-rajat will be able to point it out | 17:13 |
abhishekk | s/hope/sure | 17:13 |
* abhishekk 5th check also cleared | 17:19 | |
dansmith | woot | 17:21 |
abhishekk | so until we have a fix, or if it is going to take time, can we flip the connection handling based on python version? | 17:26 |
dansmith | I guess we could disable the opportunistic checking on py36 only | 17:29 |
dansmith | gmann: is that legit/possible in our own repo? | 17:29 |
dansmith | abhishekk: we should have a bug that documents the offending patch, failures, etc right? just for the record I would think. | 17:30 |
abhishekk | dansmith, makes sense | 17:30 |
opendevreview | Dan Smith proposed openstack/glance master: Check add_image policy in the API https://review.opendev.org/c/openstack/glance/+/804800 | 17:31 |
opendevreview | Dan Smith proposed openstack/glance master: Refactor gateway auth layer for image factory https://review.opendev.org/c/openstack/glance/+/805065 | 17:31 |
abhishekk | \o/ | 17:31 |
dansmith | abhishekk: I'm pretty fried, so the above deserves plenty of checking for proper testing and such | 17:31 |
gmann | dansmith: it is functional job right not unit test? | 17:32 |
dansmith | gmann: yeah | 17:32 |
dansmith | gmann: seems the regression was this: https://review.opendev.org/c/openstack/glance/+/805035 | 17:32 |
gmann | dansmith: then you can either remove openstack-tox-functional-py36 or make it n-v | 17:33 |
dansmith | abhishekk: I guess we could just merge that revert and keep the testing, since we have time before we need that SA 2.0 fix right? | 17:33 |
dansmith | gmann: okay I think making it n-v is probably bad, especially since this apparently really caught something.. probably better to just actually revert the problem patch and keep testing | 17:33 |
gmann | dansmith: yeah | 17:34 |
abhishekk | dansmith, sounds good | 17:34 |
whoami-rajat | kukacz, hi, what is the format of your image? (raw, qcow2) | 17:34 |
dansmith | abhishekk: if you're filing a bug, I can fix the redux to reference that and remove "Test" from the top | 17:35 |
abhishekk | I am on mobile atm | 17:36 |
dansmith | abhishekk: oh sure, good excuse to get out of filing a bug :P | 17:36 |
abhishekk | just give me some time, will file it :d | 17:36 |
dansmith | nah, I'm on it | 17:36 |
abhishekk | cool, thank you | 17:37 |
abhishekk | you can assume my +2 on the patch/revert | 17:37 |
dansmith | ack | 17:38 |
kukacz | whoami-rajat: I have tried both. now I am testing with raw, after noting it was a prerequisite in the blueprint doc | 17:40 |
opendevreview | Dan Smith proposed openstack/glance master: Revert "Resolve compatibility with oslo.db future (redux)" https://review.opendev.org/c/openstack/glance/+/805035 | 17:40 |
whoami-rajat | kukacz, and are you using multiple stores or single store? | 17:43 |
dansmith | croelandt: abhishekk is mobile on his badass hog, so if you want to ack this just for coverage that'd be cool: https://review.opendev.org/c/openstack/glance/+/805035 | 17:45 |
dansmith | croelandt: tl;dr that's the cause of the timeouts we've been seeing | 17:45 |
kukacz | whoami-rajat: multiple stores, having cinder as the default set in glance-api.conf | 17:46 |
* dansmith hopes the "hog" slang term for motorcycle is accepted worldwide | 17:46 |
abhishekk | :P | 17:47 |
abhishekk | let me check the patch, I think croelandt is not around today | 17:47 |
dansmith | oh okay, I ninja'd anyway | 17:48 |
abhishekk | ack | 17:48 |
whoami-rajat | kukacz, then we need to fix that optimization since we haven't made cinder side changes for multiple stores yet, https://review.opendev.org/c/openstack/cinder/+/755654 | 17:50 |
abhishekk | hopefully things will get moving now | 17:54 |
kukacz | whoami-rajat: great, thank you. is there a workaround, when using victoria? would it help perhaps to switch to single store, while using cinder as the only backend? | 17:54 |
whoami-rajat | kukacz, yes, if you use single store (and create a new image), this optimization should work | 18:00 |
whoami-rajat | kukacz, the main part is "enabled_backends" conf shouldn't be set | 18:00 |
kukacz | whoami-rajat: thanks, I will try reconfiguring it | 18:02 |
croelandt | dansmith: understanding why that patch caused this issue is gonna be *fun* | 18:08 |
croelandt | I +2/+1ed it | 18:08 |
croelandt | we should have 10 patches merged in the next 24 hours then :D | 18:08 |
dansmith | croelandt: seems likely to me that it's related to the fact that we're creating more connections explicitly, but yeah, haven't delved into the why yet | 18:09 |
croelandt | yeah but these connections are supposed to be short-lived, that's weird | 18:15 |
kukacz | whoami-rajat: what determines whether the configuration is multi store or single? I have commented out the enabled_backends and still have the issue. have the default_store param set to cinder, not sure if I can remove that one too | 18:25 |
whoami-rajat | kukacz, in single store, you have one group [glance_store] , you can refer to the config here https://paste.opendev.org/show/808184/ | 18:27 |
kukacz | whoami-rajat: thanks a lot. will try | 18:30 |
whoami-rajat | np | 18:31 |
kukacz | whoami-rajat: can I confirm from the image parameters that it was correctly created using the single store config, or does the image not reflect the configuration method at all? | 18:36 |
whoami-rajat | kukacz, in my single store env, i don't see the location returned in image properties and i manually checked it inside the db, maybe dansmith or croelandt know some optional parameters to return the location as well with image properties | 18:38 |
dansmith | yeah there is a flag to enable showing locations | 18:39 |
dansmith | show_multiple_locations | 18:40 |
dansmith | show_image_direct_url | 18:40 |
kukacz | dansmith: thanks! and is the show_image_direct_url parameter important to make the image cloning to volume work? I am dealing with the case when image data is copied into volume instead of having a storage backend-level thin clone created | 18:44 |
whoami-rajat | dansmith, yes, enabling those does show the locations | 18:45 |
dansmith | kukacz: dunno, but it does make nova use the hot-clone thing for booting instances | 18:45 |
whoami-rajat | kukacz, In the cinder conf, we need to set "allowed_direct_url_schemes" to the value "cinder" and in glance conf, "show_image_direct_url" and "show_multiple_locations" needs to be set to "True" | 18:45 |
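Collecting the options from this exchange into one place, a sketch of the relevant config fragments; section placement is my assumption from the discussion and should be checked against the docs for your release:

```ini
# glance-api.conf -- single-store layout, cinder as the only backend
[DEFAULT]
show_image_direct_url = True
show_multiple_locations = True
# note: enabled_backends must NOT be set, per whoami-rajat's comment above

[glance_store]
stores = cinder
default_store = cinder

# cinder.conf
[DEFAULT]
allowed_direct_url_schemes = cinder
```

With this in place the cinder driver can recognize the image's direct URL and ask the backend for a thin clone instead of byte-streaming the data.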
kukacz | whoami-rajat: you mean even when I am using single store? | 18:46 |
whoami-rajat | kukacz, yes | 18:46 |
kukacz | thank you guys, will try that after a short break | 18:47 |
* whoami-rajat signing out | 18:48 | |
kukacz | now it works with these parameters and single store. thank you! | 20:14 |
*** timburke_ is now known as timburke | 21:00 | |
croelandt | dansmith: well, the patch that fixes the timeout failed... with a timeout | 22:47 |
dansmith | yeah just saw | 22:58 |
dansmith | not awesome | 22:59 |
dansmith | I was trying to say "we need lots of rechecks on this" but it definitely seemed likely | 22:59 |
croelandt | so this patch may not be the culprit :/ | 23:04 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!