*** jmasud has quit IRC | 00:23 | |
*** jmasud has joined #oooq | 00:25 | |
*** jmasud has quit IRC | 01:11 | |
*** saneax has joined #oooq | 02:57 | |
*** jmasud has joined #oooq | 03:04 | |
*** saneax has quit IRC | 03:17 | |
*** skramaja has joined #oooq | 03:22 | |
*** saneax has joined #oooq | 03:42 | |
*** skramaja_ has joined #oooq | 04:17 | |
*** skramaja has quit IRC | 04:17 | |
*** ykarel has joined #oooq | 04:24 | |
*** skramaja_ has quit IRC | 04:40 | |
*** skramaja has joined #oooq | 04:45 | |
*** saneax has quit IRC | 04:49 | |
*** saneax has joined #oooq | 04:51 | |
*** jfrancoa has joined #oooq | 05:00 | |
*** ratailor has joined #oooq | 05:00 | |
*** ysandeep|away is now known as ysandeep | 05:16 | |
*** jpodivin has joined #oooq | 05:39 | |
*** udesale has joined #oooq | 05:54 | |
*** marios has joined #oooq | 05:59 | |
*** slaweq has joined #oooq | 06:33 | |
*** amoralej|off is now known as amoralej | 07:20 | |
*** jfrancoa has quit IRC | 07:22 | |
*** jfrancoa has joined #oooq | 07:23 | |
*** jmasud has quit IRC | 07:32 | |
*** tosky has joined #oooq | 07:38 | |
*** ysandeep is now known as ysandeep|lunch | 07:40 | |
*** skramaja has quit IRC | 07:44 | |
*** skramaja has joined #oooq | 07:44 | |
*** ChanServ has quit IRC | 07:54 | |
*** ChanServ has joined #oooq | 07:54 | |
*** services. sets mode: +o ChanServ | 07:54 | |
*** derekh has joined #oooq | 07:55 | |
*** jpena|off is now known as jpena | 07:56 | |
*** jbadiapa has joined #oooq | 07:57 | |
soniya29 | arxcruz, kopecmartin, ysandeep|lunch, please add/edit today's tempest agenda https://hackmd.io/fIOKlEBHQfeTZjZmrUaEYQ | 08:56 |
kopecmartin | soniya29: i have a conflict, won't attend; if there are any questions for me i can answer them in the agenda during/after the meeting | 09:06
*** ysandeep|lunch is now known as ysandeep | 09:13 | |
ysandeep | pojadhav|rover, akahat|ruck tempest tests in tripleo-ci-centos-8-containers-multinode-wallaby are intermittently failing for me with the below error. Is it known? | 10:45
ysandeep | paramiko.ssh_exception.BadHostKeyException: Host key for server '192.168.24.102' does not match: got 'AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAZfUVUdvppIyu5pzZtZNK86vKeJ2rA0SiBDNuIRp4DJNv+V1FvWMUMCd9roJ+ZZKLrSaOzm9JR7brZYa6iqRYY=', expected 'AAAAC3NzaC1lZDI1NTE5AAAAIKsMDVTFcuwVHAtDCpLceTDjDwUEoUrpGPJclmzmYngi' | 10:45 |
ysandeep | https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f3e/786619/7/gate/tripleo-ci-centos-8-containers-multinode-wallaby/f3e7f65/logs/undercloud/var/log/tempest/tempest_run.log | 10:45 |
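The got/expected mismatch in the trace above is easier to read once the key blobs are decoded: the first length-prefixed field of an SSH public-key blob is its algorithm name. A minimal stdlib sketch (key strings copied verbatim from the trace) shows the server presented an ecdsa key while known_hosts had recorded an ed25519 one, i.e. a stale or regenerated host key rather than byte corruption:

```python
import base64

# Key blobs copied from the BadHostKeyException above.
got = "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAZfUVUdvppIyu5pzZtZNK86vKeJ2rA0SiBDNuIRp4DJNv+V1FvWMUMCd9roJ+ZZKLrSaOzm9JR7brZYa6iqRYY="
expected = "AAAAC3NzaC1lZDI1NTE5AAAAIKsMDVTFcuwVHAtDCpLceTDjDwUEoUrpGPJclmzmYngi"

def key_type(blob_b64: str) -> str:
    # SSH wire format: a 4-byte big-endian length, then the algorithm name.
    raw = base64.b64decode(blob_b64)
    length = int.from_bytes(raw[:4], "big")
    return raw[4:4 + length].decode()

print(key_type(got))       # ecdsa-sha2-nistp256
print(key_type(expected))  # ssh-ed25519
```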
ysandeep | Other patches are also hitting similar failures intermittently.. https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-8-containers-multinode-wallaby | 10:46
pojadhav|rover | ysandeep, it is not known | 10:46 |
ysandeep | pojadhav|rover, two more example on different patches:- | 10:47 |
ysandeep | https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e03/777106/30/check/tripleo-ci-centos-8-containers-multinode-wallaby/e03ba03/logs/undercloud/var/log/tempest/tempest_run.log | 10:47 |
ysandeep | https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_43f/777108/29/check/tripleo-ci-centos-8-containers-multinode-wallaby/43fb5bb/logs/undercloud/var/log/tempest/tempest_run.log | 10:47 |
ysandeep | pojadhav|rover, could you please report a bug and investigate. | 10:47 |
pojadhav|rover | ysandeep, if you look at all the logs, each one has a different test failing | 10:48
pojadhav|rover | not the same one every time | 10:48
ysandeep | pojadhav|rover, do you want to chat? meet.google.com/vmb-mxky-ube | 10:49
akahat|ruck | ysandeep, o/ can you please recheck it once? because job history is good. | 10:49 |
pojadhav|rover | ysandeep, ack | 10:49 |
ysandeep | akahat|ruck, 3 patches failed on same error.. i think we should report a bug | 10:49 |
akahat|ruck | ysandeep, okay | 10:53 |
ysandeep | akahat|ruck, thanks! pojadhav|rover and I spoke about this; she is writing a bug for it.. akahat++ pojadhav++ | 10:58
pojadhav|rover | ysandeep, akahat|ruck : https://bugs.launchpad.net/tripleo/+bug/1928933 | 11:00 |
openstack | Launchpad bug 1928933 in tripleo "wallaby : tripleo-ci-centos-8-containers-multinode-wallaby randomly failing tempest tests with paramiko.ssh_exception.BadHostKeyException" [Undecided,New] | 11:00 |
akahat|ruck | ysandeep, pojadhav|rover ack. :) | 11:00 |
ysandeep | pojadhav|rover, thanks! | 11:01 |
akahat|ruck | pojadhav|rover, https://bugs.launchpad.net/tripleo/+bug/1928936 | 11:18 |
openstack | Launchpad bug 1928936 in tripleo "Tempest: neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle" [High,New] | 11:18 |
pojadhav|rover | akahat|ruck, ack | 11:20 |
soniya29 | kopecmartin, okay | 11:22 |
*** chem_ has joined #oooq | 11:26 | |
*** chem has quit IRC | 11:28 | |
*** jpena is now known as jpena|lunch | 11:31 | |
*** dviroel|away is now known as dviroel | 11:45 | |
weshay|ruck | ysandeep, not sure why your patch keeps failing | 11:57 |
weshay|ruck | https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/786619/ | 11:57 |
weshay|ruck | https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-8-containers-multinode-wallaby | 11:57 |
ysandeep | weshay|ruck, https://bugs.launchpad.net/tripleo/+bug/1928933 | 11:57 |
openstack | Launchpad bug 1928933 in tripleo "wallaby : tripleo-ci-centos-8-containers-multinode-wallaby randomly failing tempest tests with paramiko.ssh_exception.BadHostKeyException" [Undecided,New] | 11:57 |
weshay|ruck | it's not a general issue w/ the job | 11:57 |
ysandeep | weshay|ruck, intermittent issue , you can find more patches failing on BadHostKeyException | 11:58 |
weshay|ruck | soniya29, ^ | 11:58 |
weshay|ruck | that trace is scenario manager | 11:59 |
soniya29 | weshay|ruck, let me have a look | 11:59 |
weshay|ruck | any chance you can help us track down why it's failing intermittently? | 11:59 |
*** pojadhav|rover is now known as pojadhav|mtg | 11:59 | |
soniya29 | weshay|ruck, it seems to be an ssh connection failure | 12:02
weshay|ruck | pojadhav|mtg, skip this tempest test in all branches, on all jobs {0} tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details [359.963192s] ... FAILED | 12:07 |
weshay|ruck | soniya29, mark it a promotion blocker, assign it to pcci tempest... | 12:08 |
weshay|ruck | sorry.. pojadhav|mtg ^ | 12:08 |
pojadhav|mtg | weshay|ruck, ack | 12:08 |
*** amoralej is now known as amoralej|lunch | 12:09 | |
weshay|ruck | soniya29, any chance we can have some try/except that makes that more reliable? or makes us more confident it's a connectivity issue? | 12:09 |
soniya29 | weshay|ruck, yeah, we can have try/except | 12:12 |
soniya29 | weshay|ruck, i will dig into this issue | 12:12 |
weshay|ruck | soniya29++ | 12:13 |
weshay|ruck | if ssh claims it can't connect.. it would be nice to see verbose ssh logs.. or a ping.. etc | 12:13 |
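What that could look like, as a rough sketch only: `ssh_with_diagnostics` is a hypothetical helper name, not tempest's actual scenario-manager API. It retries the connect and, on each failure, logs a ping result so the job logs can distinguish a flaky network from a genuinely bad host key:

```python
import logging
import shutil
import subprocess
import time

def ssh_with_diagnostics(connect, host, attempts=3, delay=5):
    """Hypothetical retry helper: `connect` is any zero-argument callable
    that opens the SSH session (e.g. a wrapped paramiko connect)."""
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception as exc:
            logging.warning("ssh attempt %d/%d to %s failed: %s",
                            attempt, attempts, host, exc)
            if shutil.which("ping"):
                # Cheap connectivity probe before retrying, so the logs
                # show whether the node was reachable at all.
                ping = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                                      capture_output=True, text=True)
                logging.warning("ping to %s rc=%d", host, ping.returncode)
            time.sleep(delay)
    raise RuntimeError(f"could not ssh to {host} after {attempts} attempts")
```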
*** ratailor has quit IRC | 12:22 | |
*** jpena|lunch is now known as jpena | 12:22 | |
*** ysandeep is now known as ysandeep|afk | 12:26 | |
dviroel | folks, i'm going to do a medical exam now, will be back in around 1h .. 1h30 | 12:28 |
*** dviroel is now known as dviroel|away | 12:28 | |
*** chem_ has quit IRC | 12:29 | |
weshay|ruck | bhagyashris, akahat|ruck pojadhav|mtg meet.google.com/kyw-bcnr-roi | 12:29 |
*** chem_ has joined #oooq | 12:31 | |
akahat|ruck | bhagyashris, ping | 12:32 |
akahat|ruck | bhagyashris, meeting | 12:32 |
*** ysandeep|afk is now known as ysandeep | 13:01 | |
soniya29 | weshay|ruck, akahat|ruck tempest meeting? | 13:01
*** amoralej|lunch is now known as amoralej | 13:04 | |
weshay|ruck | ysandeep, ok... ready nodes in sf | 13:31 |
weshay|ruck | [whayutin@localhost tmp]$ cat /tmp/test | wc -l | 13:31 |
weshay|ruck | 43 | 13:31 |
weshay|ruck | :) | 13:31 |
ysandeep | ack o/ | 13:31 |
weshay|ruck | ysandeep, ack to proceed to putting 16.2 check.. let's say 3 repos .. then let's attach osp-13 | 13:32 |
weshay|ruck | attack | 13:32 |
ysandeep | aye o/ | 13:33 |
*** pojadhav|mtg is now known as pojadhav|rover | 13:35 | |
arxcruz | ysandeep: weshay|ruck https://bugzilla.redhat.com/show_bug.cgi?id=1957532 | 13:38 |
openstack | bugzilla.redhat.com bug 1957532 in cloud-init "[AWS][cloud-init] From RHEL 82+ cloud-init no longer displays sshd keys fingerprints from instance launched from a backup image" [High,Assigned] - Assigned to eesposit | 13:38 |
arxcruz | it might be related | 13:38 |
*** ysandeep is now known as ysandeep|mtg | 13:38 | |
weshay|ruck | I blame canonical | 13:39 |
pojadhav|rover | weshay|ruck, ysandeep|mtg : https://review.opendev.org/c/openstack/openstack-tempest-skiplist/+/792149 | 13:47 |
*** tosky has quit IRC | 13:56 | |
*** tosky has joined #oooq | 13:59 | |
*** saneax has quit IRC | 14:01 | |
*** dviroel|away is now known as dviroel | 14:03 | |
*** skramaja has quit IRC | 14:09 | |
ysandeep|mtg | weshay|ruck, soniya29: fyi.. internal html rendering fixed.. thanks to jpena | 14:13 |
soniya29 | ysandeep|mtg, that's great | 14:13 |
soniya29 | jpena, thank you | 14:13 |
ysandeep|mtg | weshay|ruck, internal hitting retry_limit https://sf.hosted.upshift.rdu2.redhat.com/zuul/t/tripleo-ci-internal/status | 14:25 |
ysandeep|mtg | might be related to nodepool increase | 14:25 |
*** ysandeep|mtg is now known as ysandeep | 14:25 | |
weshay|ruck | ysandeep, yes | 14:30 |
weshay|ruck | it might be | 14:30 |
* ysandeep pinging rhos-ops | 14:30 | |
ysandeep | pojadhav|rover, wasn't that failing with different error? | 14:34 |
pojadhav|rover | ysandeep, check rabi's comment - https://bugs.launchpad.net/tripleo/+bug/1928916 | 14:35 |
openstack | Launchpad bug 1928869 in tripleo "duplicate for #1928916 featureset001 - mysql fails to start - WSREP: failed to open gcomm backend connection: 131: No address to connect (FATAL)" [High,In progress] | 14:35 |
soniya29 | weshay|ruck, https://review.opendev.org/c/openstack/tempest/+/792178 | 14:36 |
weshay|ruck | soniya29++ | 14:39 |
ysandeep | pojadhav|rover, ack o/ | 14:41 |
pojadhav|rover | ysandeep, :) | 14:41 |
*** pojadhav|rover is now known as pojadhav|afk | 14:58 | |
weshay|ruck | review please https://review.opendev.org/c/openstack/tripleo-quickstart/+/792186 | 14:58 |
ykarel | duplicate | 15:00 |
ykarel | https://review.opendev.org/c/openstack/tripleo-quickstart/+/790239 | 15:00 |
*** jmasud has joined #oooq | 15:00 | |
weshay|ruck | pojadhav|afk, put this review in that test project job https://review.opendev.org/c/openstack/tripleo-common/+/792136 | 15:02 |
*** jpodivin has quit IRC | 15:02 | |
weshay|ruck | pojadhav|afk, nevermind.. I did it | 15:03 |
*** dviroel is now known as dviroel|luch | 15:18 | |
*** dviroel|luch is now known as dviroel|lunch | 15:18 | |
ysandeep | weshay|ruck, do you have a few mins? need some help with enabling downstream check jobs. | 15:28
ysandeep | https://code.engineering.redhat.com/gerrit/c/openstack/sf-config/+/242568/1//COMMIT_MSG | 15:28
*** ykarel has quit IRC | 15:29 | |
marios | tripleo-ci please add to your reviews thanks https://review.opendev.org/c/openstack/tripleo-repos/+/792126 | 15:45 |
*** marios has quit IRC | 15:56 | |
*** ysandeep is now known as ysandeep|away | 16:03 | |
*** dviroel|lunch is now known as dviroel | 16:14 | |
*** dviroel is now known as dviroel|away | 16:20 | |
*** cgoncalves has quit IRC | 16:32 | |
*** cgoncalves has joined #oooq | 16:33 | |
*** udesale has quit IRC | 16:41 | |
*** saneax has joined #oooq | 16:44 | |
*** cgoncalves has quit IRC | 16:52 | |
*** cgoncalves has joined #oooq | 16:53 | |
*** saneax has quit IRC | 16:58 | |
*** derekh has quit IRC | 17:02 | |
*** rlandy has joined #oooq | 17:05 | |
*** jpena is now known as jpena|off | 17:10 | |
rlandy | weshay|ruck: hello | 17:14 |
rlandy | weshay|ruck: miss anything? | 17:14 |
weshay|ruck | heh | 17:14 |
weshay|ruck | rlandy, hi | 17:14 |
weshay|ruck | well.. confirmed the issue you and ysandeep|away found re: updating containers | 17:15 |
rlandy | not updating | 17:15 |
weshay|ruck | ya | 17:15 |
rlandy | hmmm... do we know why? | 17:15 |
weshay|ruck | probably should chat about that | 17:15 |
weshay|ruck | rlandy, no not yet | 17:15 |
weshay|ruck | rlandy, centos9 is a lot closer than we thought as well | 17:15 |
rlandy | weshay|ruck: k - so where do we go first? | 17:16 |
rlandy | weshay|ruck; what is available for centos 9? | 17:16 |
weshay|ruck | components I think | 17:16 |
weshay|ruck | rlandy, nothing.. but it's getting close | 17:16 |
rlandy | weshay|ruck: k - best place to put my time now?? | 17:16 |
rlandy | getting rhel puppet up and running? | 17:16 |
rlandy | getting a centos 9 node? | 17:17 |
weshay|ruck | rlandy, keeping notes here https://docs.google.com/document/d/1ngmliMp_uLS7RYORg4iaui9xgqMAiYWBecKXjTuMe-A/edit#heading=h.9e21hx8yze7p | 17:17 |
weshay|ruck | rlandy, ya.. we need to get the components fixed | 17:17 |
rlandy | weshay|ruck: meaning I should look into why components are not updating? | 17:17 |
weshay|ruck | rlandy, I am as well | 17:18 |
rlandy | weshay|ruck; I'll put today into that | 17:18 |
weshay|ruck | this log is useless atm tripleo-container-image-prepare.log.txt.gz | 17:18 |
rlandy | tomorrow will go back to rhel/centos 9 | 17:18 |
weshay|ruck | doesn't show updates even when there are updated containers | 17:18 |
rlandy | I think we need to compare the actual process rather than the logs | 17:18 |
weshay|ruck | rlandy, we're ahead of the game on el9 | 17:18 |
weshay|ruck | can wait until components are fixed imho | 17:18 |
weshay|ruck | I'm in the code now | 17:18 |
rlandy | weshay|ruck: we are never ahead of the game :)) | 17:18 |
weshay|ruck | going to add some debug | 17:18 |
rlandy | k - will look at component | 17:19 |
rlandy | ping if you find something | 17:19 |
weshay|ruck | rlandy, let me show you something though.. or go try that tool I emailed you | 17:19 |
weshay|ruck | will save u time | 17:19 |
rlandy | I saw you email | 17:19 |
weshay|ruck | k | 17:19 |
rlandy | will check it out | 17:19 |
rlandy | - actually wanted to look at the code path | 17:19 |
weshay|ruck | I think I'm missing a requirements.txt | 17:20 |
rlandy | rather than the outcome | 17:20 |
weshay|ruck | rlandy, yes.. but you have to start w/ a component that you know has an update | 17:20
weshay|ruck | and then pick the right container | 17:20 |
rlandy | which one is a candidate now? | 17:20 |
weshay|ruck | don't understand the question | 17:21 |
weshay|ruck | I'm looking at | 17:21 |
weshay|ruck | https://logserver.rdoproject.org/openstack-component-network/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-network-train/dca369f/logs/undercloud/var/log/extra/podman/containers/ovn_metadata_agent/podman_info.log.txt.gz | 17:21 |
weshay|ruck | python3-networking-ovn.noarch 7.4.2-0.20210518193941.4c5cb11.el8 @network | 17:21 |
weshay|ruck | python3-networking-ovn-metadata-agent.noarch 7.4.2-0.20210518193941.4c5cb11.el8 @network | 17:21 |
weshay|ruck | nothing useful in https://logserver.rdoproject.org/openstack-component-network/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-network-train/dca369f/logs/undercloud/var/log/tripleo-container-image-prepare.log.txt.gz | 17:21 |
weshay|ruck | so.. we have nothing to go on.. afaict.. until we get more info | 17:22 |
rlandy | weshay|ruck: are we sure upstream components are updating? | 17:22 |
weshay|ruck | I just proved it w/ that log | 17:22 |
rlandy | and I assume the patch we merged did nothing? | 17:22 |
weshay|ruck | ya.. didn't fix internal :( | 17:22 |
weshay|ruck | rlandy, this stuff has changed a lot since we started this w/ Ian | 17:23 |
rlandy | python3-networking-ovn.noarch 7.4.2-0.20210518193941.4c5cb11.el8 @network | 17:23 |
rlandy | python3-networking-ovn-metadata-agent.noarch 7.4.2-0.20210518193941.4c5cb11.el8 @network | 17:23 |
rlandy | ok good | 17:23 |
weshay|ruck | but the prepare.log shows NOTHING | 17:23 |
rlandy | k - let's see where this goes | 17:23 |
rlandy | after today - will move on | 17:24 |
weshay|ruck | rlandy, move on to what | 17:25 |
rlandy | getting rhel 9 puppet jobs running | 17:26 |
rlandy | or at least rhel 8 | 17:26 |
rlandy | and centos downstream | 17:26 |
weshay|ruck | rlandy, no.. hold on components w/ me until its fixed | 17:26 |
weshay|ruck | this is a major hole | 17:27 |
weshay|ruck | please | 17:27 |
rlandy | k | 17:29 |
*** Goneri has joined #oooq | 17:31 | |
*** Goneri has quit IRC | 17:35 | |
weshay|ruck | rlandy, do you have a test-project handy for any component job? | 17:39 |
weshay|ruck | rlandy, I'd like to see the results of https://review.opendev.org/c/openstack/ansible-role-tripleo-modify-image/+/792218 | 17:39 |
rlandy | getting | 17:40 |
rlandy | https://code.engineering.redhat.com/gerrit/c/testproject/+/189440/101/.zuul.yaml | 17:40 |
rlandy | weshay|ruck: ^^ feel free to edit/change that | 17:40 |
* weshay|ruck see's what's updatable atm | 17:40 | |
weshay|ruck | k | 17:40 |
rlandy | weshay|ruck: should we expect delorean-current downstream? | 17:43 |
weshay|ruck | no.. doesn't really make sense downstream | 17:44 |
rlandy | 2021-05-14 17:39:48,753 46880 ERROR tripleo_common.image.image_export [ ] [tripleorhos-16-2/openstack-swift-account] HTTP error: 401 Client Error: Unauthorized for url: https://docker-registry.upshift.redhat.com/v2/tripleorhos-16-2/openstack-swift-account/blobs/sha256:2ae2b76f9673ee54885d944834eab1f6a76da5eb8e55f0beac05d6e63ae11d80 | 17:48 |
rlandy | k - so taking that out of the updates | 17:48 |
rlandy | also we have URL access errors | 17:48 |
* rlandy tests | 17:48 | |
rlandy | weshay|ruck: ^^ you using that testproject? | 17:48 |
rlandy | np if yes, I'll create another one | 17:48 |
weshay|ruck | rlandy, ya.. but I see the 404's in working jobs too | 17:51 |
weshay|ruck | yes.. I just kicked it | 17:51 |
rlandy | np - will kick another | 17:51 |
*** amoralej is now known as amoralej|off | 18:02 | |
*** jmasud has quit IRC | 18:20 | |
*** dviroel|away is now known as dviroel | 18:26 | |
*** jbadiapa has quit IRC | 18:30 | |
frenzy_friday | elastic-recheck-query command is suddenly failing for this query https://opendev.org/openstack/tripleo-ci-health-queries/src/branch/master/output/elastic-recheck/1449136.yaml It passed every time until the last patch was pushed. The same query is working directly on the logstash dashboard as well | 18:49
weshay|ruck | frenzy_friday, if it finds 0 results does that cause an error? | 18:50 |
frenzy_friday | No, I tried a random string, it returned 0 hits | 18:50
weshay|ruck | hrm | 18:51 |
frenzy_friday | https://0050cb9fd8118437e3e0-3c2a18acb5109e625907972e3aa6a592.ssl.cf5.rackcdn.com/790065/7/check/openstack-tox-py38/4968a73/tox/test_results/1449136.yaml.log this is the log | 18:51 |
weshay|ruck | so where is it getting < | 18:51 |
frenzy_friday | No clue. This is where it is hitting the exception https://opendev.org/opendev/elastic-recheck/src/branch/master/elastic_recheck/cmd/query.py#L58 | 18:52 |
*** jmasud has joined #oooq | 19:08 | |
frenzy_friday | I think somewhere in the data being returned there is a list which ER is not expecting. Is the bug 1449136 relevant right now? Or can we revert https://review.opendev.org/c/openstack/tripleo-ci-health-queries/+/787569 and get the patch which copies test_results to the node https://review.opendev.org/c/openstack/tripleo-ci-health-queries/+/790065 merged? | 19:10
openstack | bug 1449136 in tripleo "Pip fails to find distribution for package" [Critical,Incomplete] https://launchpad.net/bugs/1449136 | 19:10 |
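If the guess above is right and a query hit carries a field as a list where elastic-recheck expects a scalar, a defensive normalizer on the consuming side would sidestep the crash. This is a hypothetical sketch, not elastic-recheck's actual code:

```python
def first_value(field):
    # Elasticsearch may return a document field either as a scalar or as
    # a list of values; coerce to a scalar before code that assumes one
    # (e.g. string comparisons in result processing).
    if isinstance(field, list):
        return field[0] if field else None
    return field

print(first_value(["<html>", "other"]))  # <html>
print(first_value("plain"))              # plain
```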
*** jfrancoa has quit IRC | 19:24 | |
*** jmasud has quit IRC | 20:02 | |
*** jmasud has joined #oooq | 20:09 | |
weshay|ruck | rlandy, https://logserver.rdoproject.org/openstack-periodic-integration-main/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-build-containers-ubi-8-push/e07b7d7/logs/container-builds/7fbe2db7-7279-4db2-935f-4c405865fb6c/base/ovn-base/ | 20:15 |