openstackgerrit | Merged opendev/system-config master: Retire pabelanger as infra-root https://review.opendev.org/668192 | 00:01 |
*** iurygregory has quit IRC | 00:49 | |
*** Meiyan has joined #opendev | 00:55 | |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: dib-lint: use yamllint to parse YAML files https://review.opendev.org/730690 | 02:05 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: package-installs : allow a list of parameters https://review.opendev.org/730691 | 02:05 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: Revert "Revert "ubuntu-minimal : only install 16.04 HWE kernel on xenial"" https://review.opendev.org/730692 | 02:05 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: package-installs : allow a list of parameters https://review.opendev.org/730691 | 02:16 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: Revert "Revert "ubuntu-minimal : only install 16.04 HWE kernel on xenial"" https://review.opendev.org/730692 | 02:16 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: package-installs : allow a list of parameters https://review.opendev.org/730691 | 02:33 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: Revert "Revert "ubuntu-minimal : only install 16.04 HWE kernel on xenial"" https://review.opendev.org/730692 | 02:33 |
*** cloudnull has quit IRC | 03:02 | |
*** ykarel|away is now known as ykarel | 03:35 | |
*** ykarel is now known as ykarel|afk | 03:54 | |
*** ysandeep is now known as ysandeep|brb | 04:08 | |
*** ysandeep|brb is now known as ysandeep | 04:38 | |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: test-base-roles: update include to import_playbook https://review.opendev.org/730674 | 05:05 |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Make gentoo jobs nv https://review.opendev.org/728640 | 05:23 |
*** ykarel|afk is now known as ykarel | 05:25 | |
openstackgerrit | Merged zuul/zuul-jobs master: test-base-roles: update include to import_playbook https://review.opendev.org/730674 | 05:33 |
openstackgerrit | Sagi Shnaidman proposed zuul/zuul-jobs master: WIP Add ansible collection roles https://review.opendev.org/730360 | 05:49 |
*** slaweq has joined #opendev | 06:52 | |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Make gentoo multinode job nv https://review.opendev.org/728640 | 07:11 |
*** iurygregory has joined #opendev | 07:22 | |
*** tobiash has quit IRC | 07:28 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Remove install-* roles https://review.opendev.org/719322 | 07:35 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: bindep: update include to import_tasks https://review.opendev.org/730660 | 07:37 |
*** tosky has joined #opendev | 07:38 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: tox: update include to import_tasks https://review.opendev.org/730673 | 07:39 |
*** openstackstatus has quit IRC | 07:39 | |
*** openstackstatus has joined #opendev | 07:39 | |
*** ChanServ sets mode: +v openstackstatus | 07:39 | |
*** DSpider has joined #opendev | 07:40 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: ensure-bazel: update include to include_tasks https://review.opendev.org/730672 | 07:40 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: ensure-package-repositories: update include to include_tasks https://review.opendev.org/730671 | 07:41 |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Rename test install role to ensure- https://review.opendev.org/730720 | 07:42 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: add-build-sshkey: update include to include_tasks https://review.opendev.org/730670 | 07:45 |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Make gentoo multinode job nv https://review.opendev.org/728640 | 07:45 |
*** rpittau|afk is now known as rpittau | 07:46 | |
ianw | clarkb: if you could look at https://review.opendev.org/730690 and https://review.opendev.org/730691 that should allow us to get focal support (arm64 + amd64) into the last dib 2.0 release | 07:50 |
frickler | AJaeger: should we also drop the tumbleweed jobs until someone fixes image builds? see e.g. https://nb01.opendev.org/opensuse-tumbleweed-0000226275.log . the existing image still seems to refer to a now non-existing openstack.org mirror | 07:58 |
openstackgerrit | Bogdan Dobrelya (bogdando) proposed zuul/zuul-jobs master: Support overrideable package_mirror https://review.opendev.org/730602 | 07:59 |
AJaeger | frickler: either fix or remove | 07:59 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: tox: update include to include_tasks https://review.opendev.org/730673 | 07:59 |
ianw | frickler: oh, that's a shame. 15 is working in the dib gate, but not tumbleweed i guess | 08:00 |
*** moppy has quit IRC | 08:01 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: bindep: update include to include_tasks https://review.opendev.org/730660 | 08:01 |
ianw | same failure https://zuul.opendev.org/t/openstack/build/82663349aa314c0ea7aa4105659993ff/log/nodepool/builds/test-image-0000000003.log#3190 | 08:01 |
*** moppy has joined #opendev | 08:01 | |
AJaeger | frickler: http://mirror.us.leaseweb.net/opensuse/tumbleweed/repo/oss/x86_64/?C=M&O=D looks recent - isn't that our mirror? | 08:01 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: add-build-sshkey: update include to include_tasks https://review.opendev.org/730670 | 08:04 |
AJaeger | cmorpheus: do you know who can look at those nodepool failures for tumbleweed? ^ | 08:04 |
AJaeger | frickler: let me send a patch and WIP it for a day or two... | 08:04 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-subunit-output: update include to import_tasks https://review.opendev.org/730668 | 08:05 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-subunit-output: update include to import_tasks https://review.opendev.org/730668 | 08:06 |
AJaeger | ianw, frickler : when did we last build tumbleweed with success? | 08:07 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: ensure-virtualenv: update include to include_tasks https://review.opendev.org/730669 | 08:07 |
*** tobiash has joined #opendev | 08:07 | |
openstackgerrit | yatin proposed openstack/project-config master: Add publish-to-pypi to ansible-config_template https://review.opendev.org/730726 | 08:08 |
AJaeger | found it - looks like two weeks ago ;( | 08:08 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: ensure-podman: update include to include_tasks https://review.opendev.org/730667 | 08:08 |
frickler | AJaeger: ianw: seems the normal tumbleweed sync is working fine, but the /update part seems to be failing http://paste.openstack.org/show/793971/ | 08:09 |
*** hashar has joined #opendev | 08:09 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: use-docker-mirror: update include to include_tasks https://review.opendev.org/730664 | 08:10 |
AJaeger | frickler: thanks, writing an email now... | 08:11 |
openstackgerrit | Merged zuul/zuul-jobs master: Make gentoo multinode job nv https://review.opendev.org/728640 | 08:12 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: configure-mirrors: update include to include_tasks https://review.opendev.org/730666 | 08:12 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: persistent-firewall: update include to include_tasks https://review.opendev.org/730665 | 08:14 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: multi-node-bridge: update include to include_tasks https://review.opendev.org/730662 | 08:16 |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Remove tumbleweed from testing https://review.opendev.org/730727 | 08:16 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: ensure-pip: update include to import_tasks https://review.opendev.org/730661 | 08:19 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: ensure-pip: update include to include_tasks https://review.opendev.org/730661 | 08:20 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Fix deprecation warning from multinode tests https://review.opendev.org/730479 | 08:21 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: ensure-bazel: replace ignore_errors with failed_when https://review.opendev.org/730733 | 08:39 |
*** ykarel is now known as ykarel|lunch | 08:44 | |
*** priteau has joined #opendev | 08:54 | |
*** tobiash has quit IRC | 09:02 | |
*** tobiash_ has joined #opendev | 09:02 | |
*** dtantsur|afk is now known as dtantsur | 09:05 | |
*** tkajinam has quit IRC | 09:05 | |
*** ysandeep is now known as ysandeep|lunch | 09:08 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: ensure-bazel: replace ignore_errors with failed_when https://review.opendev.org/730733 | 09:16 |
*** iurygregory has quit IRC | 09:17 | |
*** iurygregory has joined #opendev | 09:18 | |
*** SotK has quit IRC | 09:24 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: multi-node-bridge: update include to include_tasks https://review.opendev.org/730662 | 09:37 |
*** hashar has quit IRC | 09:48 | |
*** tosky__ has joined #opendev | 09:50 | |
*** tosky is now known as Guest68658 | 09:50 | |
*** tosky__ is now known as tosky | 09:50 | |
*** ykarel|lunch is now known as ykarel | 09:51 | |
*** Meiyan has quit IRC | 09:52 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Add ensure-dnf-copr https://review.opendev.org/730743 | 09:59 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Add ensure-dnf-copr https://review.opendev.org/730743 | 10:02 |
*** ysandeep|lunch is now known as ysandeep | 10:05 | |
*** rpittau is now known as rpittau|bbl | 10:20 | |
*** priteau has quit IRC | 10:35 | |
*** avass has quit IRC | 10:39 | |
openstackgerrit | Sagi Shnaidman proposed zuul/zuul-jobs master: WIP Add ansible collection roles https://review.opendev.org/730360 | 11:18 |
zbr | tristanC: clarkb: https://review.opendev.org/#/c/729974/ please. | 11:21 |
*** sshnaidm is now known as sshnaidm|afk | 11:54 | |
*** hashar has joined #opendev | 11:55 | |
openstackgerrit | Merged zuul/zuul-jobs master: Remove install-* roles https://review.opendev.org/719322 | 12:05 |
openstackgerrit | Merged zuul/zuul-jobs master: Add option to prefer https/ssl in configure-mirrors https://review.opendev.org/729407 | 12:09 |
mordred | frickler: morning! if you have a spare second, https://review.opendev.org/#/c/730483/ | 12:10 |
hrw | mordred: if you have spare: https://review.opendev.org/#/c/728810/ | 12:12 |
*** cloudnull has joined #opendev | 12:14 | |
openstackgerrit | Oleksandr Kozachenko proposed zuul/zuul-jobs master: Add container and pod log in the test for ensure-kubernetes role https://review.opendev.org/727929 | 12:15 |
mordred | hrw: done | 12:18 |
hrw | mordred: thanks | 12:19 |
*** rpittau|bbl is now known as rpittau | 12:22 | |
*** ykarel is now known as ykarel|afk | 12:23 | |
*** SotK has joined #opendev | 12:26 | |
openstackgerrit | Bogdan Dobrelya (bogdando) proposed zuul/zuul-jobs master: Support overrideable package_mirror https://review.opendev.org/730602 | 12:27 |
openstackgerrit | Bogdan Dobrelya (bogdando) proposed zuul/zuul-jobs master: Support overrideable package_mirror https://review.opendev.org/730602 | 12:28 |
openstackgerrit | Merged opendev/base-jobs master: add arm64 nodesets https://review.opendev.org/728810 | 12:35 |
openstackgerrit | Merged zuul/zuul-jobs master: packer: namespace test jobs correctly https://review.opendev.org/730500 | 12:41 |
openstackgerrit | Merged zuul/zuul-jobs master: ensure-pip: update include to include_tasks https://review.opendev.org/730661 | 12:43 |
openstackgerrit | Merged zuul/zuul-jobs master: ensure-package-repositories: fix loopvar collision https://review.opendev.org/730477 | 12:51 |
openstackgerrit | Merged zuul/zuul-jobs master: Do not interpolate values from tox --showconfig https://review.opendev.org/729520 | 12:51 |
openstackgerrit | Merged zuul/zuul-jobs master: bindep: update include to include_tasks https://review.opendev.org/730660 | 12:51 |
openstackgerrit | Merged zuul/zuul-jobs master: Add python3-devel to bindep https://review.opendev.org/728708 | 12:51 |
openstackgerrit | Merged zuul/zuul-jobs master: ensure-bazel: update include to include_tasks https://review.opendev.org/730672 | 12:51 |
openstackgerrit | Merged zuul/zuul-jobs master: Add container and pod log in the test for ensure-kubernetes role https://review.opendev.org/727929 | 12:55 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: multi-node-bridge: update include to include_tasks https://review.opendev.org/730662 | 12:56 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: ensure-package-repositories: update include to include_tasks https://review.opendev.org/730671 | 12:58 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: multi-node-bridge: update include to include_tasks https://review.opendev.org/730662 | 12:59 |
openstackgerrit | Merged opendev/system-config master: Update username in Zuul executor initscript https://review.opendev.org/730483 | 13:00 |
*** ykarel|afk is now known as ykarel | 13:01 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: multi-node-bridge: update include to include_tasks https://review.opendev.org/730662 | 13:02 |
*** sshnaidm|afk is now known as sshnaidm | 13:10 | |
*** redrobot has joined #opendev | 13:20 | |
*** ykarel is now known as ykarel|afk | 13:24 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: multi-node-bridge: update include to include_tasks https://review.opendev.org/730662 | 13:26 |
openstackgerrit | Merged zuul/zuul-jobs master: tox: update include to include_tasks https://review.opendev.org/730673 | 13:29 |
openstackgerrit | Merged zuul/zuul-jobs master: add-build-sshkey: update include to include_tasks https://review.opendev.org/730670 | 13:29 |
openstackgerrit | Javier Peña proposed openstack/project-config master: Remove tox-py27 job for x/packstack https://review.opendev.org/730813 | 13:30 |
*** tkajinam has joined #opendev | 13:37 | |
*** tobiash_ is now known as tobiash | 13:37 | |
openstackgerrit | Andreas Jaeger proposed openstack/project-config master: Fix sphinx playbook: install-if-python was renamed https://review.opendev.org/730818 | 13:43 |
AJaeger | config-core, we missed the change above, please review to unbreak sphinx publishing ^ | 13:43 |
AJaeger | thanks, mordred ! | 13:48 |
*** sgw has quit IRC | 13:52 | |
*** dtantsur is now known as dtantsur|brb | 13:53 | |
openstackgerrit | Merged zuul/zuul-jobs master: Fix deprecation warning from multinode tests https://review.opendev.org/730479 | 13:56 |
openstackgerrit | Merged zuul/zuul-jobs master: tox: empty envlist should behave like tox -e ALL https://review.opendev.org/730322 | 13:56 |
openstackgerrit | Merged zuul/zuul-jobs master: ensure-podman: update include to include_tasks https://review.opendev.org/730667 | 13:56 |
openstackgerrit | Merged zuul/zuul-jobs master: ensure-virtualenv: update include to include_tasks https://review.opendev.org/730669 | 13:56 |
*** yoctozepto8 has joined #opendev | 14:00 | |
*** roman_g has joined #opendev | 14:00 | |
*** yoctozepto has quit IRC | 14:01 | |
*** yoctozepto8 is now known as yoctozepto | 14:01 | |
*** rpittau is now known as rpittau|brb | 14:10 | |
openstackgerrit | Javier Peña proposed openstack/project-config master: Remove tox-py27 job for x/packstack https://review.opendev.org/730813 | 14:14 |
openstackgerrit | Merged zuul/zuul-jobs master: ensure-package-repositories: update include to include_tasks https://review.opendev.org/730671 | 14:14 |
openstackgerrit | Merged openstack/project-config master: Fix sphinx playbook: install-if-python was renamed https://review.opendev.org/730818 | 14:16 |
*** ykarel|afk is now known as ykarel | 14:21 | |
*** tkajinam has quit IRC | 14:36 | |
*** rpittau|brb is now known as rpittau | 14:39 | |
openstackgerrit | Javier Peña proposed openstack/project-config master: Remove tox-py27 job for x/packstack https://review.opendev.org/730813 | 14:42 |
openstackgerrit | Javier Peña proposed openstack/project-config master: Remove check/gate jobs for x/packstack https://review.opendev.org/730813 | 14:43 |
*** iurygregory has quit IRC | 14:45 | |
*** dtantsur|brb is now known as dtantsur | 14:50 | |
*** iurygregory has joined #opendev | 14:53 | |
*** sgw has joined #opendev | 14:54 | |
openstackgerrit | Merged zuul/zuul-jobs master: fetch-subunit-output: update include to import_tasks https://review.opendev.org/730668 | 15:06 |
*** mlavalle has joined #opendev | 15:09 | |
*** cmorpheus is now known as cmurphy | 15:15 | |
cmurphy | AJaeger: i don't know who could help with tumbleweed problems besides dirk | 15:16 |
cmurphy | i don't see any recent open bugs that look related on opensuse's bugzilla | 15:18 |
*** ysandeep is now known as ysandeep|afk | 15:19 | |
AJaeger | cmurphy: ok, let's see - I might move forward with the zuul-jobs change and we can revert once dirk is back from vacation. | 15:24 |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Remove tumbleweed from testing https://review.opendev.org/730727 | 15:30 |
*** priteau has joined #opendev | 15:34 | |
*** dtantsur is now known as dtantsur|afk | 15:34 | |
*** sshnaidm is now known as sshnaidm|afk | 15:34 | |
openstackgerrit | Clark Boylan proposed opendev/base-jobs master: Test mirrors with ssl https://review.opendev.org/730861 | 15:43 |
openstackgerrit | Clark Boylan proposed opendev/base-jobs master: Use mirrors with ssl globally https://review.opendev.org/730862 | 15:43 |
clarkb | infra-root ^ that first change should be completely safe to land and I'll resurrect my testing job in zuul-jobs for base-test | 15:43 |
clarkb | if that looks good I think we can land the second job | 15:43 |
clarkb | *land the second job update (first is base-test second is base) | 15:44 |
openstackgerrit | Merged zuul/zuul-jobs master: persistent-firewall: update include to include_tasks https://review.opendev.org/730665 | 15:50 |
openstackgerrit | Merged zuul/zuul-jobs master: use-docker-mirror: update include to include_tasks https://review.opendev.org/730664 | 15:50 |
*** rpittau is now known as rpittau|afk | 15:51 | |
*** ykarel is now known as ykarel|away | 15:51 | |
AJaeger | zuul-jobs-maint, a trivial change for review, please https://review.opendev.org/730720 | 15:52 |
clarkb | corvus: nb01 and nb02 use letsencrypt for the webserver where they serve their image logs and image files | 15:54 |
clarkb | on the zuul runs of our production playbooks, are we still needing to debug problems? | 15:55 |
*** ysandeep|afk is now known as ysandeep | 15:56 | |
openstackgerrit | Merged zuul/zuul-jobs master: configure-mirrors: update include to include_tasks https://review.opendev.org/730666 | 15:57 |
*** ysandeep is now known as ysandeep|afk | 16:02 | |
mordred | clarkb: https://review.opendev.org/#/c/730483/ is the last known issue and is merged | 16:07 |
corvus | it does look like system-config-run-zuul has been successful recently | 16:09 |
corvus | mordred: can you review/approve https://review.opendev.org/729785 and its parent | 16:11 |
corvus | mordred: https://review.opendev.org/729786 is interesting; if you look at the output here: https://zuul.opendev.org/t/openstack/build/e1fdfaf85ebc447399daf073ad7fd258/log/zuul01.openstack.org/docker/zuul-scheduler_scheduler_1.txt | 16:21 |
corvus | mordred: it looks like we've managed to get the scheduler to nearly start -- basically, every connection configured is erroring out in some manner | 16:22 |
corvus | mordred: so on the one hand: failure -- we aren't able to use our production config with fake private data to get the system started. on the other hand: success -- we've at least gotten basic connectivity up. | 16:23 |
corvus | mordred: i think we're at a crossroads: should we try to get the 'gate-test zuul' more functional, or should we stop here and see if we can do some testinfra validation that all the processes have started and are communicating with each other? | 16:24 |
corvus | though, actually, i'm not 100% sure we have yet gotten to the 'gearman has started' stage.... | 16:25 |
*** hashar is now known as hasharAway | 16:36 | |
mordred | corvus: might be easier to test all the way to connectivity if we ran gearman as a separate process (then it wouldn't need the scheduler to start it) | 16:37 |
mordred | corvus: but - I agree about the crossroads - I kinda feel like if we can get testinfra to verify services are running that may be "good enough" for this? | 16:39 |
corvus | mordred: i believe gearman is running and the scheduler has connected to it; it's unclear if the merger has. i can't see why it wouldn't, but i don't see a confirming log message. but maybe it just isn't emitted. | 16:40 |
corvus | i think the main thing i'm worried about is that right now, the scheduler happens to be starting and busy-waiting because of the particular combination of bad data we've given it. a subtle change in zuul behavior in the future could cause it to fail to start, and therefore fail any "is it running" level tests we set up... :/ | 16:41 |
mordred | corvus: https://zuul.opendev.org/t/openstack/build/e1fdfaf85ebc447399daf073ad7fd258/log/zuul01.openstack.org/docker/zuul-scheduler_scheduler_1.txt#68 seems like a real error | 16:41 |
clarkb | corvus: you might be able to confirm it was connected by forcing it to disconnect (eg stop the gearman process) | 16:41 |
mordred | corvus: yeah | 16:41 |
corvus | mordred: it's a real error due to bad fake data | 16:42 |
corvus | that's a private key which is incorrect in the gate | 16:42 |
corvus | basically, every single connection is failing for a similar reason | 16:42 |
mordred | ah - got it. I was reading that as EACCES | 16:43 |
corvus | nope, that's "your key is not a key" | 16:43 |
corvus | mordred: if we give it a slightly better, but still wrong fake private key, zuul actually stops the scheduler | 16:43 |
corvus | so we're in kind of a weird place here. it's possible zuul improvements could make our test result worse. | 16:43 |
mordred | hah | 16:44 |
clarkb | can we feed it a different set of connection data for testing? | 16:44 |
mordred | yeah - it's like the uncanny valley of fake deployments | 16:44 |
clarkb | use a local git dir and run noop jobs or something | 16:44 |
mordred | clarkb: but then we're not testing that we put all the keys in place properly | 16:44 |
clarkb | mordred: yes, but it tells us the zuul is functional | 16:44 |
corvus | clarkb: yeah, that would be the most robust thing i think. but then things like mordred's "did we write the github key with the correct perms" would be opaque to our testing. | 16:44 |
clarkb | I think for now properly testing that we've got the github credentials correct is tricky and somewhat orthogonal to "is zuul working with massive internal connectivity changes" | 16:45 |
clarkb | both are valuable but right now we are most worried about the second thing right? | 16:45 |
mordred | we almost need to legitimately also spin up a full gerrit that looks like review.o.o | 16:46 |
corvus | i honestly don't know which is better, but i think i worry that zuul will make the decision for us (because, frankly, i'm never going to say "no" to a patch to zuul entitled "correctly report key format error") | 16:46 |
corvus | mordred: and a github? :) | 16:46 |
mordred | corvus: well - yeah - that's an issue :) - but we could probably do that with an explicit testinfra test - I think clarkb's #2 is the thinng we most want yes? | 16:47 |
clarkb | ya and thinking ahead I think the transition to more state in zk will mean we want the internal connectivity testing again there as well | 16:47 |
corvus | okay, i'll look into alternative connection data for gate | 16:49 |
*** mnasiadka has quit IRC | 16:49 | |
*** vblando has quit IRC | 16:50 | |
*** ysandeep|afk is now known as ysandeep|away | 16:52 | |
*** vblando has joined #opendev | 16:57 | |
*** mnasiadka has joined #opendev | 16:58 | |
hrw | morning | 17:02 |
hrw | fungi: have you had time to look at https://review.opendev.org/730342 https://review.opendev.org/730323 patches? (wheel builds in need of AFS volumes) | 17:03 |
fungi | hrw: it's on my list for today, hopefully in the next couple of hours before the weekly meeting | 17:04 |
hrw | fungi: thanks | 17:05 |
fungi | my pleasure | 17:05 |
clarkb | I want to say the big issue with wheels for arm64 was lack of reliable afs on arm64 with various platforms? | 17:05 |
*** vblando has quit IRC | 17:05 | |
hrw | clarkb: hope it got better | 17:06 |
clarkb | hrw: I doubt it did for those older platforms like centos7 | 17:06 |
clarkb | buster, focal, and bionic will probably work though? | 17:06 |
hrw | I never used afs so hard to tell | 17:06 |
*** mnasiadka has quit IRC | 17:07 | |
*** Open10K8S has quit IRC | 17:08 | |
*** Open10K8S has joined #opendev | 17:13 | |
fungi | yeah, generating them is probably fine because i think we copy the files through an intermediary anyway | 17:14 |
fungi | serving them is the trick, since the "mirror" hosts are local to each cloud environment and the arm64/aarch64 environments we have are homogeneous so we need openafs or kafs kernel modules which work on those architectures | 17:16 |
clarkb | fungi: I didn't think we copied through the executor, its directly off the build host | 17:16 |
fungi | build host being the job node? | 17:16 |
clarkb | ya | 17:17 |
clarkb | I'm sure ianw can fill us in if there was something missing. I thought it was afs on the nodes building the wheels but maybe it was something else or simply never done | 17:18 |
fungi | well, at one point it was afs on the nodes building the wheels because we did it with cron jobs and not zuul | 17:21 |
fungi | i'm not sure if that's still a problem though | 17:21 |
*** mnasiadka has joined #opendev | 17:22 | |
*** vblando has joined #opendev | 17:22 | |
fungi | also fun, looks like all our wheel build jobs have been failing for a while | 17:23 |
fungi | https://zuul.opendev.org/t/openstack/build/5ee8721a0c5345349b9010fdf798ddb8 | 17:23 |
openstackgerrit | Merged opendev/system-config master: Correct the test gearman certs https://review.opendev.org/729771 | 17:25 |
openstackgerrit | Merged opendev/system-config master: Fix whitespace in zuul-executor PPAs https://review.opendev.org/729785 | 17:25 |
clarkb | fungi: openstack/project-config/roles/copy-wheels seems to do the copying and it seems to run against the remote nodes | 17:26 |
fungi | now that we don't copy everything but only what we've actually built, i wonder if we could just copy through the executors and then don't even need secrets on the nodes | 17:28 |
fungi | man that job produces some large logs | 17:30 |
clarkb | ya my browser is basically giving up on that | 17:30 |
fungi | i recently culled 99.9% of my open tabs, so browser is slightly more responsive at least | 17:31 |
fungi | but yeah, the log prettifier is struggling still | 17:31 |
clarkb | fungi: it failed to rm the wheels we didn't want because there were no args given | 17:32 |
fungi | clarkb: yes, because of an earlier failure | 17:32 |
fungi | i'm trying to find where though | 17:32 |
clarkb | http://paste.openstack.org/show/794013/ | 17:32 |
fungi | yeah, look just above that though, tox exited nonzero because... reasons | 17:33 |
clarkb | ya echo '*** FAILED BUILDS FOR BRANCH stable/ussuri' | 17:34 |
fungi | we could make the remove wheels step a little more robust against that and have it emit a clear error when the step which generates remove-wheels.txt finds no matches | 17:34 |
fungi | right, i'm currently reviewing the ussuri python3 build log | 17:34 |
clarkb | fungi: I think we collect log files for every wheel we build | 17:34 |
clarkb | which should hopefully tell us why the thing that failed failed (assuming we can also browse those logs) | 17:35 |
fungi | right, i don't see any problem with python3 so maybe we recently broke ussuri python2 | 17:35 |
fungi | (which wouldn't surprise me at all) | 17:36 |
clarkb | fungi: https://765a239ecc34c44a5b00-7f41bd38e61f614f484bdc6903cb8f38.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/requirements/master/publish-wheel-mirror-ubuntu-bionic/5ee8721/python2/failed.txt | 17:38 |
clarkb | fungi: I think the problem is we expect python2 to work at all anymore :) | 17:38 |
clarkb | we may need to update it to be best effort for python2 | 17:39 |
fungi | yeah, i kinda figured | 17:39 |
fungi | i think there may be patches in flight for that... | 17:39 |
fungi | AJaeger: ^ do you recall? | 17:39 |
fungi | prometheanfire: ^ maybe you | 17:39 |
clarkb | basically build as many wheels as we can, remove any that are redundant and do best effort | 17:39 |
clarkb | https://765a239ecc34c44a5b00-7f41bd38e61f614f484bdc6903cb8f38.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/requirements/master/publish-wheel-mirror-ubuntu-bionic/5ee8721/python2/build/ussuri/1/tempest%3D%3D%3D24.0.0/stderr | 17:40 |
clarkb | and ya its basically the problem of "things have moved on" | 17:40 |
openstackgerrit | James E. Blair proposed opendev/system-config master: WIP: fake zuul_connections for gate https://review.opendev.org/730929 | 17:44 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-subunit-output: keep subunit2html.py in the role https://review.opendev.org/730930 | 17:51 |
*** mlavalle has quit IRC | 17:54 | |
openstackgerrit | Sagi Shnaidman proposed zuul/zuul-jobs master: WIP Add ansible collection roles https://review.opendev.org/730360 | 17:55 |
*** slittle1 has quit IRC | 18:00 | |
*** slittle1 has joined #opendev | 18:02 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-subunit-output: keep subunit2html.py in the role https://review.opendev.org/730930 | 18:03 |
AJaeger | fungi: no idea, sorry | 18:03 |
clarkb | fungi: where I've gotten to is we need to download the files the job generates in order to run the find + grep for wheel downloads to see why that was empty | 18:09 |
openstackgerrit | Merged openstack/project-config master: Add base replication jobs for oslo-metrics https://review.opendev.org/728820 | 18:09 |
clarkb | fungi: unfortunately there are 29k files so it might be a little while | 18:09 |
clarkb | (also I tried to do this by hand via the browser but then I realized how big the scope was) | 18:10 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-subunit-output: keep subunit2html.py in the role https://review.opendev.org/730930 | 18:10 |
clarkb | looking at the script I think it may be fine with python2 errors | 18:10 |
clarkb | the problem is that the removal list is empty when it shouldn't be | 18:11 |
clarkb | (and I've confirmed that things like six should be removed but everywhere I see it in the logs it says not downloading because its already in my cache, need to find where it is initially downloaded and ensure that our find + grep works against that) | 18:11 |
clarkb | I guess I can just do a local pip install too to check if the format of its output has changed | 18:12 |
*** mlavalle has joined #opendev | 18:13 | |
clarkb | ' Downloading openstacksdk-0.46.0-py3-none-any.whl (1.3 MB)' is what a local pip command seems to output | 18:17 |
*** slittle1 has quit IRC | 18:18 | |
clarkb | which won't match sed -n 's,.*Downloading from URL .*/\([^/]*\.whl\)#.*,\1,p' | 18:18 |
clarkb | I think that is the issue | 18:18 |
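For context, a minimal sketch of the kind of sed update being discussed here, assuming the goal is to accept both the pre-20.x and 20.x pip download log formats (the log and output filenames are illustrative):

```shell
# Match both the old "Downloading from URL <url>#sha256=..." format and
# the newer "  Downloading <wheel> (<size>)" format when extracting
# wheel filenames from the build log.
sed -n \
  -e 's,.*Downloading from URL .*/\([^/]*\.whl\)#.*,\1,p' \
  -e 's,^ *Downloading \([^ ]*\.whl\) .*,\1,p' \
  build-output.log | sort -u > wheel-downloads.txt
```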
fungi | oh! i wonder if the log format changed out from under us | 18:21 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-tox-output: empty envlist should behave like tox -e ALL https://review.opendev.org/730334 | 18:23 |
openstackgerrit | Clark Boylan proposed openstack/project-config master: Update pip output parsing to fix wheel mirror builds https://review.opendev.org/730933 | 18:24 |
clarkb | fungi: ^ I think that will fix it, but some independent confirmation of the behavior change would be good | 18:25 |
fungi | well, also if we're running different versions of pip on different platforms the message format might differ too | 18:30 |
clarkb | ya, though we should be able to run an up to date or at least consistent pip everywhere? | 18:40 |
clarkb | 20.something for python2 and 3 support | 18:40 |
clarkb | I think it is all done in virtualenvs too? | 18:40 |
fungi | we're using volumes named like mirror.wheel.bionicx64 for the x86-64/amd64 wheels, what should i call the corresponding aarch64/arm64 volumes? mirror.wheel.bionicaa64 or something else? | 18:41 |
openstackgerrit | Ghanshyam Mann proposed opendev/irc-meetings master: Add Secure Default policies popup team meeting https://review.opendev.org/730935 | 18:49 |
clarkb | fungi: the changes hrw pushed made a choice I think | 18:50 |
clarkb | fungi: since the volume name is used to replicate | 18:50 |
fungi | clarkb: what version of pip? seems like 19.2.3 still gives me the old output like: Downloading https://files.pythonhosted.org/packages/e1/e5/df302e8017440f111c11cc41a6b432838672f5a70aa29227bf58149dc72f/urllib3-1.25.9-py2.py3-none-any.whl (126kB) | 20:50 |
fungi | i'll test again with upgraded pip | 18:51 |
clarkb | fungi: 20.1.1 is what I used | 18:51 |
clarkb | fungi: we run `build_env/bin/pip install --upgrade pip` in the build so they should all be using latest currently | 18:52 |
clarkb | we'll likely need to do <21.0.0 to continue to support python2 (but that can be a separate thing) | 18:52 |
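A sketch of the cap being suggested, keeping the existing in-virtualenv upgrade (the exact bound is an assumption based on pip's announced python2 support window):

```shell
# Keep pip current but below 21, the first series expected to drop
# python2 support, so the same tooling runs on both interpreters.
build_env/bin/pip install --upgrade 'pip<21'
```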
hrw | clarkb: just tried to follow with naming | 18:53 |
prometheanfire | fungi: hi? | 18:53 |
clarkb | hrw: ya I think your choices were fine, but fungi's volume creation should match that change | 18:53 |
hrw | clarkb: refreshing patch is a moment once names are known | 18:54 |
prometheanfire | we are talking about getting rid of py27 from gate infra? iirc reqs doesn't use it at all, only swift does as far as I can tell | 18:54 |
fungi | no biggie, i missed that we had volume names there already. happy to just use them | 18:54 |
fungi | prometheanfire: more about how to make sure we still generate whatever py2 wheels are needed in ussuri, or whether there are patches in flight to stop trying to build py2 wheels with ussuri constraints | 18:55 |
hrw | py2 wheels may be useful for stable branches | 18:55 |
prometheanfire | ya, useful for stable branches I can see | 18:57 |
fungi | we iterate over the stable branches already | 18:58 |
fungi | different constraints in them anyway | 18:58 |
prometheanfire | building them from master I'm not sure about, except for swift | 18:58 |
fungi | i think their py2 constraints moved in-repo | 18:59 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-subunit-output: keep subunit2html.py in the role https://review.opendev.org/730930 | 18:59 |
*** diablo_rojo has joined #opendev | 18:59 | |
fungi | enabling the requirements repo to drop all the python2 conditionals | 18:59 |
fungi | from master | 18:59 |
clarkb | remember we are only building things for which there aren't wheels on pypi | 19:01 |
clarkb | swift may need its liberasurecode wheels built though | 19:01 |
AJaeger | fungi: requirements has already dropped py2 conditionals from master | 19:01 |
openstackgerrit | James E. Blair proposed opendev/system-config master: WIP: fake zuul_connections for gate https://review.opendev.org/730929 | 19:01 |
fungi | AJaeger: yeah, in which case it probably makes sense for us to stop trying to build py2 wheels from ussuri constraints | 19:03 |
prometheanfire | fungi: we don't have py27 signified here https://github.com/openstack/requirements/blob/master/upper-constraints.txt | 19:03 |
prometheanfire | same with gr | 19:04 |
* prometheanfire removed that a week or two ago | 19:04 | |
*** dpawlik has joined #opendev | 19:04 | |
* prometheanfire should read a more recent backlog | 19:04 | |
AJaeger | fungi: ussuri still has py27 constraints, only master dropped AFAIU | 19:04 |
clarkb | right fungi is pointing out that you also need to update the wheel building | 19:05 |
clarkb | and that some things in ussuri like tempest seem to not work either | 19:05 |
hrw | clarkb: you run 'pip install a lot of packages' and those which are on pypi are just fetched anyway | 19:08 |
*** roman_g has quit IRC | 19:08 | |
clarkb | hrw: yes, but we don't want to publish those in our local wheel mirror we want them to be fetched from pypi instead | 19:09 |
clarkb | so we install/build wheels for everything then only copy out the subset that isn't already available on pypi | 19:09 |
hrw | clarkb: yep | 19:09 |
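A minimal sketch of that build-then-filter flow, using the remove-wheels.txt list mentioned earlier in the discussion (paths and filenames are illustrative, not the job's actual layout):

```shell
# Build wheels for everything in the constrained requirement set...
pip wheel -w wheelhouse/ -c upper-constraints.txt -r requirements.txt
# ...then drop anything pip was able to download from PyPI, so only
# locally-built wheels get published to the mirror.
while read -r wheel; do
  rm -f "wheelhouse/${wheel}"
done < remove-wheels.txt
```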
openstackgerrit | Monty Taylor proposed opendev/system-config master: WIP Move users into a base subdir https://review.opendev.org/730937 | 19:24 |
*** dpawlik has quit IRC | 19:35 | |
markmcclain | I'm trying to diagnose a POST_FAILURE on tarball upload: https://zuul.opendev.org/t/openstack/build/f5fd340f0158428e85e4b13eb85f6bc2/log/job-output.txt#1004 | 19:41 |
clarkb | markmcclain: pypi doesn't allow you to replace a release iirc | 19:43 |
clarkb | markmcclain: you can remove a release or provide a newer versioned release, but you can't replace an existing release | 19:43 |
clarkb | (this way if you've vetted a release you're not getting something different later on) | 19:43 |
fungi | more specifically it doesn't let you upload a file with the same filename as any file which has been previously uploaded (even if it's since been deleted) | 19:44 |
fungi | as a safety precaution | 19:44 |
markmcclain | hmm... PyPI shows .whl releases for 2018.2.8 on Apr 4 and May 20th | 19:44 |
markmcclain | both from openstackci Apr-4 from 23.253.136.207 and May-20 from 104.130.127.102 | 19:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-subunit-output: keep subunit2html.py in the role https://review.opendev.org/730930 | 19:45 |
fungi | what's the project name on pypi? sorry, in the middle of a meeting so can't comb through logs just now | 19:46 |
markmcclain | x/networking-arista | 19:46 |
fungi | so looking at https://pypi.org/project/networking-arista/2018.2.8/#files then? | 19:47 |
markmcclain | actually scratch that... there's only one attempt on May 20th | 19:47 |
markmcclain | was looking at 2017.2.8 | 19:48 |
fungi | oh, above you said 2018.2.8, sorry | 19:49 |
markmcclain | the view is essentially the same with the tarball there | 19:50 |
fungi | yeah, https://pypi.org/project/networking-arista/2017.2.8/#files | 19:50 |
markmcclain | upload date is May 20th, but zuul log shows it failed | 19:50 |
clarkb | the wheel actually uploaded fine earlier in the log | 19:51 |
clarkb | it makes me wonder if twine uploaded both when it uploaded the wheel | 19:52 |
clarkb | then it failed when we try to upload the tar.gz because it is already there | 19:52 |
markmcclain | ultimately I'm trying to figure out why versioned tarballs stopped publishing to tarballs.o.o in late Feb, but this was first error I encountered sifting through logs | 19:53 |
fungi | appears there have been a bunch of similar failures: https://zuul.opendev.org/t/openstack/builds?job_name=release-openstack-python&project=x/networking-arista | 19:54 |
*** dpawlik has joined #opendev | 19:55 | |
markmcclain | right.. this impacting all branches and most recently last week's releases | 19:56 |
clarkb | looking at pypi I think the job did what we wanted | 19:56 |
clarkb | it just failed to record that properly (and maybe that affected tarball uploads too) | 19:56 |
clarkb | my hunch is twine uploaded both artifacts at the same time | 19:57 |
clarkb | then tried to do that again and failed | 19:57 |
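If that hunch is right, one hedged mitigation is a single twine invocation covering both artifacts (a sketch, not necessarily what the release job actually runs):

```shell
# One upload call for the sdist and the wheel; --skip-existing makes a
# retry of the same release a no-op instead of a hard failure.
twine upload --skip-existing dist/*.tar.gz dist/*.whl
```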
ianw | hrw: i can look in on those arm64 wheel builds | 19:59 |
hrw | ianw: thanks | 19:59 |
mordred | corvus: when you were doing the iptables stuff based on group membership - did you have any issues finding things for hosts not included in the current playbook? | 19:59 |
corvus | mordred: i don't think so, because i was accessing inventory variables | 20:00 |
corvus | mordred: we suspected that may be different than facts) | 20:00 |
mordred | gotcha - so the list of hosts in a group worked fine | 20:00 |
corvus | yep | 20:00 |
mordred | asking because we use iptables in base | 20:01 |
corvus | oh wait | 20:01 |
ianw | clarkb / mordred: could i ask you to look over dib changes https://review.opendev.org/#/c/730690/1 https://review.opendev.org/#/c/730691/3 https://review.opendev.org/#/c/730692/3 which fix my screwup with the ubuntu kernel installs | 20:01 |
fungi | markmcclain: yeah, starts up here: https://zuul.opendev.org/t/openstack/build/f5fd340f0158428e85e4b13eb85f6bc2/log/job-output.txt#956 | 20:01 |
corvus | mordred: let me clarify: i believe that getting the list of hosts in a group is fine -- it is worth noting that in gate tests, we don't have the full inventory available, only the inventory used for that job. | 20:02 |
fungi | markmcclain: clarkb: i agree, looks like twine may have started uploading everything on invocation | 20:02 |
corvus | mordred: so "what are the zuul hosts?" will always work. "what are the zuul hosts?" when run on, say, "system-config-run-meetpad" will return the empty set. but on "infra-prod-meetpad" it would return the actual zuul hosts. | 20:03 |
clarkb | markmcclain: fungi though looking at the logs more closely the file sizes it prints don't support this theory | 20:03 |
fungi | also output may be duplicated in that log, i don't see the same in the console view broken down by task | 20:04 |
mordred | corvus: nod. | 20:05 |
clarkb | ianw: looks like centos7 functests failed on the first one though that should be separate from the change (so I +2'd) | 20:05 |
clarkb | fungi: markmcclain is it possible that we are running the job multiple times? | 20:06 |
fungi | or two similar jobs, but yeah i'm trying to pull up the buildset for that | 20:07 |
fungi | i guess we don't have an easy link from a build to its corresponding buildset | 20:08 |
fungi | https://zuul.opendev.org/t/openstack/builds?project=x%2Fnetworking-arista&pipeline=release | 20:08 |
corvus | fungi: try the link after "buildset" on the build summary page | 20:09 |
fungi | that doesn't seem to indicate any jobs i would expect to race | 20:09 |
clarkb | fungi: could be a separate pipeline maybe? | 20:09 |
fungi | corvus: oh, thanks, i'm blind | 20:09 |
fungi | i was sure there was one and then wasn't able to spot it for some reason | 20:10 |
fungi | clarkb: not that i can see from https://zuul.opendev.org/t/openstack/builds?project=x%2Fnetworking-arista# | 20:10 |
clarkb | quickly checking zuul configs for it I don't see any overlapping jobs | 20:11 |
clarkb | ya it really does seem like maybe twine is uploading both or pypi is returning an error improperly | 20:11 |
clarkb | basically we upload the tar.gz to pypi and there is no conflict but it reports there is (while still accepting the upload) | 20:11 |
*** cloudnull has quit IRC | 20:12 | |
clarkb | I need to pop out for a bike ride now, but if that hasn't been sorted out yet when I'm back I'll try to take a second look with fresh eyes | 20:12 |
fungi | looks like we're using twine 1.15.0 there | 20:14 |
clarkb | ianw: small thing on the second change but worth addressing early I think | 20:16 |
fungi | which is interesting... latest release of twine is 3.1.1 | 20:16 |
fungi | 1.15.0 was from september | 20:17 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: package-installs : allow a list of parameters https://review.opendev.org/730691 | 20:18 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: Revert "Revert "ubuntu-minimal : only install 16.04 HWE kernel on xenial"" https://review.opendev.org/730692 | 20:18 |
ianw | clarkb: ^ thanks, updated | 20:18 |
fungi | aha... "Twine now requires Python 3.6 or later. Use pip 9 or pin to “twine<2” to install twine on older Python versions." https://twine.readthedocs.io/en/latest/changelog.html | 20:19 |
fungi | unfortunately still no idea why it's complaining about an existing file | 20:20 |
fungi | but i guess it does suggest we're running with older python in that job | 20:20 |
clarkb | fungi: that pin may be pre bionic | 20:21 |
clarkb | we can probably unpin it now? | 20:21 |
clarkb | then see if we get different results | 20:21 |
openstackgerrit | Monty Taylor proposed opendev/system-config master: Clean up base playbook https://review.opendev.org/730985 | 20:23 |
mordred | clarkb, fungi : yeah - maybe old twine is confused by latest pypi | 20:24 |
mordred | clarkb, corvus: ^^ that's not b) yet - that's just a pre-cursor I think we're going to want first for either b) or c) | 20:24 |
fungi | clarkb: we're not pinning, other than to skip a known broken version | 20:25 |
fungi | i think we're just running it with older python | 20:25 |
fungi | so pip is selecting the most recent release which supports python2 | 20:25 |
fungi | (because we "Use pip 9") | 20:25 |
openstackgerrit | Monty Taylor proposed opendev/system-config master: WIP Move users into a base subdir https://review.opendev.org/730937 | 20:29 |
corvus | mordred: istr you were looking into the idea of gate-only data overriding public hostvars and identified that it's tricky -- because the public hostvars are in the playbook-adjacent inventory directory, and that seems to take precedence over the /etc/ansible/hosts hostvars? | 20:29 |
mordred | corvus, clarkb: ^^ also - base.users doesn't work - but base/users does in local testing | 20:29 |
mordred | corvus: yes - I believe that's correct. | 20:30 |
mordred | corvus: you know ... | 20:30 |
fungi | #status log ze12 was rebooted around 18:50z due to a hypervisor host problem in the provider, ticket 200526-ord-0001037 | 20:30 |
mordred | corvus: perhaps we combine this topic with the one from the meeting about splitting out 2 sets of public host vars | 20:30 |
openstackstatus | fungi: finished logging | 20:30 |
mordred | corvus: what if we moved hostvars out of playbooks/ entirely... | 20:31 |
fungi | looks like the zuul-executor process did not start when ze12 was rebooted, and i'm unable to start it manually (though i did confirm the initscript patch got applied at least) | 20:31 |
mordred | corvus: and made *2* inventory directories - one that has the openstack.yaml file and the base hostvars | 20:31 |
mordred | corvus: and one that includes groups.yaml and includes the per-service hostvars | 20:31 |
mordred | corvus: then I believe the order in which the inventory sources are defined in ansible.cfg determines precedence | 20:31 |
mordred | corvus: and we can then have an understandable inventory override in the gate jobs | 20:32 |
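A sketch of the two-directory idea, written as shell for consistency with the other examples (paths are illustrative; ansible resolves variable conflicts in favor of later inventory sources):

```shell
# Base inventory (hosts + public hostvars) listed first, the
# service/gate inventory second so its vars win on conflict.
cat > /etc/ansible/ansible.cfg <<'EOF'
[defaults]
inventory = /etc/ansible/inventory/base,/etc/ansible/inventory/service
EOF
```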
fungi | aha! /var/log/zuul/* are owned by user 3000 | 20:32 |
fungi | infra-root: ^ we probably need to fix this across all our providers | 20:32 |
fungi | er, executors | 20:32 |
corvus | fungi: on it | 20:32 |
corvus | mordred: sounds promising | 20:32 |
fungi | thanks corvus! | 20:32 |
mordred | corvus, fungi we should audit uids - I think I remember seeing the zuul user being 3000 on some hosts and 10001 on others | 20:33 |
mordred | (now the zuuld user) | 20:33 |
mordred | corvus: I'll poke at that idea too | 20:33 |
fungi | a find in relevant subtrees looking for -uid=3000 is probably prudent | 20:33 |
mordred | corvus: you know nothing I like more than patches that re-arrange ansible yaml files | 20:33 |
fungi | oh, or comparing the uids in /etc/passwd too, yep | 20:34 |
mordred | fungi: may want to do an ansible grep of zuuld: in /etc/passwd | 20:34 |
mordred | yeah | 20:34 |
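A quick audit sketch along the lines fungi and mordred describe (uid 3000 and the paths come from the discussion above):

```shell
# Per executor: what uid does zuuld actually have, and what files are
# still owned by the old numeric uid?
getent passwd zuuld
find /var/lib/zuul /var/log/zuul -xdev -uid 3000 | head
```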
fungi | zuuld is uid 10001 and gid 10001 on ze12 | 20:34 |
fungi | same on ze01, but on ze02 for example it's uid 3000 gid 10001 | 20:35 |
fungi | so yeah, not consistent | 20:35 |
*** priteau has quit IRC | 20:36 | |
corvus | oh i thought that had been done | 20:36 |
fungi | maybe ansible couldn't update the uid because the executor processes had to be offline? | 20:36 |
corvus | well yeah, i mean that's what we were expecting to maybe have problems with | 20:36 |
corvus | but i thought someone over the weekend checked the uids and fixed it | 20:37 |
corvus | but i guess i misunderstood | 20:37 |
fungi | it was the username change ansible was complaining about over the weekend (user was named zuul in /etc/passwd and usermod refused to update that to zuuld) | 20:38 |
fungi | mordred ran a one-off play to sed the passwd file | 20:38 |
corvus | how is the uid 3000 on ze02 if the playbook ran successfully and says that the uid should be 10001? | 20:39 |
fungi | i wasn't able to start the executor i had stopped over the weekend, spotted that the username was still wrong in the initscript, and pushed up a patch for that but it didn't land until today | 20:39 |
corvus | huh, apparently that task failed | 20:41 |
corvus | oh yeah, lots of failures in that playbook | 20:41 |
corvus | how is it that playbook failed and the job succeeded? | 20:41 |
corvus | oh it didn't! | 20:42 |
corvus | it's still all kinds of failing | 20:42 |
corvus | okay, i guess i will continue to whack the moles there then | 20:42 |
corvus | https://zuul.opendev.org/t/openstack/builds?job_name=infra-prod-service-zuul | 20:42 |
mordred | corvus: ooh! this "reorganize hostvars" has another nice benefit - it means we can put groups of playbooks in subdirs too - because we don't have to worry about playbooks being adjacent to hostvars | 20:44 |
* mordred has nice patch coming | 20:44 | |
corvus | i think i need to shut down most of the executors to do the re-iding | 20:45 |
corvus | i'll try the process on ze02 now | 20:46 |
corvus | oh wait | 20:46 |
corvus | fungi: you said ze12 is down? | 20:47 |
fungi | corvus: yes, and probably 01 as well | 20:47 |
* fungi checks | 20:47 | |
corvus | okay, i'll start there | 20:47 |
fungi | yeah, 01 is the one i stopped over the weekend to see how far we could get with ansible | 20:47 |
fungi | but couldn't start it again because of (at least) the incorrect initscript | 20:48 |
corvus | what's the correct way to run ansible on a host from bridge? https://docs.openstack.org/infra/system-config/sysadmin.html#force-configuration-run-on-a-server is out of date | 20:48 |
fungi | and now 12 is down since a couple hours ago when rackspace had to do a reboot migration on it | 20:48 |
fungi | and the executor failed to start (presumably because it couldn't append to the logs) | 20:48 |
mordred | corvus: cd ~/src/opendev.org/opendev/system-config | 20:49 |
mordred | corvus: then run ansible-playbook playbooks/foo.yaml | 20:50 |
corvus | as zuul | 20:50 |
corvus | or cd ~zuul and run as root | 20:50 |
mordred | sorry - ~zuul | 20:50 |
mordred | running as root works fine | 20:50 |
corvus | k, that's what i thought; running now | 20:50 |
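Condensed, the procedure mordred and corvus settle on above (the playbook name is illustrative):

```shell
# Run a production playbook by hand from bridge, as root or as zuul.
cd ~zuul/src/opendev.org/opendev/system-config
ansible-playbook -v playbooks/service-zuul.yaml
```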
corvus | okay, we're not doing any recursive chowning of logfiles, etc | 20:52 |
openstackgerrit | Monty Taylor proposed opendev/system-config master: Split inventory into multiple dirs and move hostvars https://review.opendev.org/730991 | 20:54 |
mordred | corvus: I *think* that should work ^^ ... although it's sure going to run all of the system-config test jobs | 20:55 |
openstackgerrit | Colleen Murphy proposed openstack/diskimage-builder master: Pre-install xz package in opensuse chroot https://review.opendev.org/730992 | 20:55 |
corvus | i'm running this on ze12: chown -R --from=3000 zuuld /var/log/zuul; chown -R --from=3000 zuuld /var/lib/zuul | 20:55 |
mordred | corvus: ++ | 20:55 |
fungi | you can probably pass multiple paths to chown, but yeah that looks fine | 21:00 |
corvus | starting up ze12 | 21:01 |
corvus | seems to be happy | 21:01 |
corvus | i'll move on to ze01 now | 21:01 |
fungi | awesome | 21:02 |
corvus | ownership looks ok on ze01 | 21:02 |
corvus | ze01 was stopped by request on 5-23 | 21:03 |
fungi | yes, that's the one i stopped manually but couldn't start again because the initscript was wrong at the time | 21:03 |
corvus | k, it's up and running now | 21:04 |
fungi | initscript fix merged earlier today i just hadn't gotten a chance to try it yet | 21:04 |
corvus | ze03 is correct and running | 21:05 |
corvus | ze05 same | 21:05 |
corvus | ze07 same | 21:06 |
corvus | so 1,3,5,7,12 are good -- i'm going to stop 2,4,6,8,9,10,11 all at the same time and run the corrective steps | 21:06 |
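A hedged sketch of the per-executor corrective steps implied here (service management details are an assumption; the chown mirrors the command corvus ran above):

```shell
# Stop the executor, correct the numeric uid, re-own leftovers, restart.
service zuul-executor stop
usermod -u 10001 zuuld
chown -R --from=3000 zuuld /var/log/zuul /var/lib/zuul
service zuul-executor start
```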
*** factor has joined #opendev | 21:06 | |
mordred | https://www.youtube.com/watch?v=hFZFjoX2cGg <-- next time anyone needs a brainhole break - I cannot recommend this youtube video highly enough | 21:11 |
fungi | so roughly half the executors, sounds good | 21:17 |
corvus | i ran a manual usermod and that chown, now i'm running the playbook just to make sure we got everything | 21:21 |
*** hasharAway has quit IRC | 21:23 | |
corvus | no errors; starting back up now | 21:25 |
corvus | okay, 12 executors running builds now | 21:27 |
mordred | \o/ | 21:29 |
corvus | i think we expect the next instance of prod-zuul to succeed | 21:29 |
*** slaweq has quit IRC | 21:34 | |
jrosser | when i look at zuul status, some of my jobs say (2nd attempt) - what makes this happen? | 21:41 |
clarkb | jrosser: network connectivity issues will do that. Likely due to the effort corvus was doing on executors above | 21:42 |
jrosser | https://review.opendev.org/729878 would be an example of this just now | 21:42 |
clarkb | zuul will restart jobs like that if it detects a failure outside of the job content | 21:43 |
*** cloudnull has joined #opendev | 21:43 | |
jrosser | aaah ok, thanks | 21:43 |
openstackgerrit | Monty Taylor proposed opendev/system-config master: Clean up base playbook https://review.opendev.org/730985 | 21:47 |
openstackgerrit | Monty Taylor proposed opendev/system-config master: Split inventory into multiple dirs and move hostvars https://review.opendev.org/730991 | 21:47 |
*** dpawlik has quit IRC | 21:56 | |
clarkb | fungi: markmcclain's job ran on a bionic node | 21:57 |
clarkb | bionic has python3.6, does that mean we are running twine under python2? | 21:58 |
openstackgerrit | Monty Taylor proposed opendev/system-config master: Move base roles into a base subdir https://review.opendev.org/730937 | 21:58 |
clarkb | the command is python3 -m pip install twine!=1.12.0 | 21:59 |
fungi | clarkb: 3.5 | 21:59 |
fungi | twine 2.0 requires python 3.6 or later | 22:00 |
fungi | so pip>=9 on python<3.6 will install twine<2 | 22:00 |
clarkb | fungi: bionic is 3.6 | 22:01 |
clarkb | where is 3.5 coming from? | 22:01 |
melwitt | does anyone know if/how one can re-create the same env that a zuul job creates locally from a proposed review? context is my colleague is working on this change https://review.opendev.org/730143 and it's failing upstream zuul but passing in a downstream-created environment. is there a way he could re-create the env zuul is using locally to try and debug? | 22:02 |
clarkb | melwitt: tripleo has a zuul reproducer tool, but its fairly involved (you basically have to run a mini zuul I think) | 22:03 |
clarkb | melwitt: is the pep8 issue a problem or just the devstack failure? | 22:03 |
melwitt | the devstack failure | 22:04 |
fungi | it's challenging, because zuul exists specifically to do things which are so complex that you can't really do them locally (like assembling job configuration from distributed sources, integrating changes from lots of different repos, et cetera) | 22:04 |
corvus | melwitt: i'm assuming your colleague has run devstack, tempest, etc with the change in place and that's succeeding. so is it the case there's a suspected nuance of, say, the images used in opendev that's affecting it? | 22:05 |
fungi | we do make the images available for download, if that helps | 22:05 |
corvus | melwitt: or does your colleague need a way to "run devstack like it runs in the gate?" | 22:06 |
melwitt | fungi: that might help if you couldn't point me to it | 22:06 |
fungi | melwitt: https://nb01.opendev.org/images/ | 22:07 |
clarkb | and that job ran on ubuntu-bionic so you want the ubuntu-bionic image. It will configure the root ssh keys if given config drive metadata | 22:07 |
fungi | you'll probably need to supply a configdrive with ssh keys if you need to log into it | 22:07 |
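A minimal sketch of building such a config drive locally, assuming the downloaded image uses glean to read the standard config-2 volume at boot (the key material is a placeholder):

```shell
# glean looks for openstack/latest/meta_data.json on a volume labeled
# config-2 and installs the listed public keys for root.
mkdir -p cd/openstack/latest
cat > cd/openstack/latest/meta_data.json <<'EOF'
{"uuid": "local-test", "public_keys": {"me": "ssh-ed25519 AAAA... me@example"}}
EOF
genisoimage -R -V config-2 -o config-drive.iso cd/
# attach config-drive.iso as a cdrom when booting the downloaded image
```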
ianw | fungi: i'll be happy to pickup the wheel work ... it's on my todo list, esp. new builders | 22:07 |
melwitt | corvus: he's run the tests he's working on through a tripleo deployed env downstream and things pass there. it's a feature in libvirt that became available in a certain version and zuul is installing the needed version, so it's so far a mystery why it's failing in the upstream zuul case | 22:08 |
clarkb | melwitt: and yall have seen https://zuul.opendev.org/t/openstack/build/df8489e5f9ea46cb988911fb51137b26/log/compute/logs/screen-n-cpu.txt#21493 ? | 22:08 |
clarkb | melwitt: the error seems to happen in nova not libvirt fwiw (though that could be bubbled out of libvirt I suppose) | 22:09 |
fungi | ianw: oh, thanks, i got the additional volumes for 730323 vos created and added the read-only sites for them, but i think there may be a problem with one of the afs servers because i can't do the vos release step prior to mounting them into the tree | 22:09 |
fungi | ianw: Failed to start transaction on 536871099; Possible communication failure; Could not release lock on the VLDB entry for volume 536871098; Error in vos release command. | 22:09 |
fungi | (et cetera) | 22:09 |
corvus | melwitt: it's probably worth running it in a devstack deployment since that's what that job does | 22:09 |
clarkb | fungi: I can't see how we'd be using python3.5 on ubuntu bionic. We may be using python2 though | 22:09 |
melwitt | clarkb: yes. admittedly I don't know the detail of how this feature works but it sounds like it happens if libvirt isn't doing a thing we expect it to given the version. and we're wondering is this some difference between distro version of libvirt somehow or what | 22:10 |
fungi | clarkb: yeah, i'm baffled too, but... https://zuul.opendev.org/t/openstack/build/f5fd340f0158428e85e4b13eb85f6bc2/log/job-output.txt#913 | 22:10 |
fungi | clarkb: i wonder if we used python3.5 on the nodepool builder to create the virtualenv somehow? | 22:11 |
fungi | though no, that's no virtualenv/venv | 22:11 |
melwitt | corvus: yeah, makes sense | 22:12 |
clarkb | fungi: ya that's not a virtualenv, super weird | 22:12 |
fungi | clarkb: so why is pip referencing packages from /usr/local/lib/python3.5 on a bionic node?!? | 22:12 |
clarkb | melwitt: does the feature require kvm and not just qemu? | 22:13 |
clarkb | melwitt: devstack does not do nested virt by default because it is so flaky | 22:13 |
melwitt | clarkb: that I don't know. I'll ask | 22:14 |
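Generic commands for checking whether a node exposes hardware virtualization at all (standard checks, not from the job):

    egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 means no hardware virt flags exposed
    ls -l /dev/kvm                       # typically absent without nested virt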
fungi | melwitt: this is also a good reference to pass along https://docs.opendev.org/opendev/infra-manual/latest/testing.html | 22:15 |
ianw | fungi: ok, i see cent8 and focal rw volumes in the volume list, but not R/O | 22:16 |
melwitt | thanks for the info, this is all very helpful | 22:16 |
fungi | ianw: i did the `vos addsite afs01.dfw.openstack.org a mirror.wheel.focalx64` (and afs02.dfw) for each of the volumes | 22:17 |
fungi | i didn't hit any errors, but yeah i suppose that's where the problem lies and why vos release isn't working for them | 22:17 |
fungi | possibly hung addsite transactions | 22:18 |
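For reference, the standard OpenAFS commands for chasing a stuck release like this (server and volume names taken from the log):

    vos status afs01.dfw.openstack.org        # any in-flight volserver transactions?
    vos listvldb -name mirror.wheel.focalx64  # VLDB entry, RW/RO sites, lock state
    vos examine mirror.wheel.focalx64         # per-site release status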
clarkb | fungi: I've ssh'ed into a random limestone bionic node and there is no python3.5 that I can see | 22:18 |
openstackgerrit | Ghanshyam Mann proposed opendev/irc-meetings master: Add Secure Default policies popup team meeting https://review.opendev.org/730935 | 22:18 |
fungi | ianw: maybe we're hitting a limit somewhere? | 22:18 |
fungi | clarkb: and no /usr/local/lib/python3.5 either i guess | 22:18 |
clarkb | there is 3.6 and 2.7 in /usr/local/lib as expected | 22:18 |
clarkb | fungi: correct | 22:18 |
clarkb | oh! | 22:18 |
clarkb | I see now | 22:18 |
fungi | is that on the executor? | 22:18 |
clarkb | fungi: ya | 22:19 |
melwitt | confirmed that no nested virt is involved. thank you for the ideas, we have a path forward now, thanks so much clarkb corvus fungi | 22:19 |
clarkb | we do the wheel build on the bionic node, then copy it to the executor and attempt to twine it from there | 22:19 |
fungi | melwitt: we're here if you have more questions, just let us know | 22:19 |
ianw | fungi: hrmmm ... ls: cannot access 'centos-8-x86_64': Connection timed out after i tried to mount it | 22:19 |
fungi | clarkb: yeah, okay, so that much makes sense (and explains why we get older twine) | 22:20 |
melwitt | thanks all ++ :) | 22:20 |
fungi | ianw: i didn't even get as far as trying to mount because our instructions say to vos release first | 22:20 |
fungi | and that was breaking | 22:20 |
ianw | yeah, i'd imagine that because the r/o mirrors aren't showing up (?) vos release will not be happy | 22:21 |
ianw | other than the creation logs, nothing of interest in afs server logs i can see | 22:23 |
*** DSpider has quit IRC | 22:23 | |
fungi | i wonder if we've still got something hung from the afs02 outage last month | 22:23 |
fungi | more stale transaction records or something? | 22:23 |
ianw | vicepa plenty of space, nothing in dmesg type logs | 22:23 |
clarkb | melwitt: the job appears to be using libvirt 4.0.0 https://libvirt.org/formatdomain.html#elementsVideo says that you need 4.6.0 to use model type none | 22:25 |
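Quick ways to confirm the libvirt version on such a node (the 4.0.0 figure matches what bionic ships; commands are standard, not from the log):

    libvirtd --version       # e.g. "libvirtd (libvirt) 4.0.0" on bionic
    virsh version --daemon   # library, API and daemon versions together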
fungi | clarkb: okay, so mystery of the older twine is out of the way, but i'm still no closer to figuring out why x/networking-arista seems to consistently hit file-exists errors for sdist uploads to pypi but other projects like openstack/python-ironic-inspector-client here are just peachy: https://zuul.opendev.org/t/openstack/build/7171b1ae6b5f42eb8055c539db8cbe4f | 22:26 |
ianw | fungi: you working on mirror-update.opendev.org? | 22:27 |
melwitt | clarkb: oh geez.. let me look into that | 22:27 |
fungi | ianw: nope | 22:27 |
fungi | ianw: i was just using my fungi/admin kerberos account locally | 22:27 |
fungi | ianw: ooh, could it be i'm using too-new openafs? | 22:28 |
fungi | that wouldn't surprise me | 22:28 |
ianw | ok, that seems to have 1.8.5 openafs packages, but with an uptime of 175 days that would presumably still be running the older 1.8.3 kernel module | 22:28 |
fungi | yeah, i'm using 1.8.6~pre1 | 22:28 |
fungi | i bet that's it | 22:28 |
fungi | we probably have to cancel the transactions for my addsite commands | 22:29 |
clarkb | fungi: did you want to push the new patchset for https://review.opendev.org/#/c/730933/ to get the regex to match old and new pip? | 22:29 |
clarkb | ianw: ^ is related to mirror work | 22:29 |
fungi | clarkb: sure, i can do that, just a jiffy | 22:29 |
ianw | fungi: i don't know, stuff like this shouldn't break | 22:29 |
fungi | ianw: oh, was it using new server with old clients that was the problem not the other way around? | 22:29 |
clarkb | fungi: actually one sec, your regex won't match the new case | 22:30 |
fungi | clarkb: huh, i thought i tested it locally against that | 22:30 |
clarkb | fungi: you need to match the (5MB) suffix | 22:30 |
clarkb | without a # | 22:30 |
melwitt | clarkb: looks like that should totally be the problem 😬 | 22:30 |
clarkb | oh wait I see the # is optional so I think we just need to remove the |$ case? | 22:30 |
clarkb | fungi: I think yours will work but we should just remove the $ case I think | 22:31 |
clarkb | we shouldn't ever hit that branch in the regex | 22:31 |
clarkb | fungi: re why other packages don't hit the dup error, could it be related to upload sizes somehow (eg a race in pypi) | 22:32 |
melwitt | clarkb: sorry for the noise :( | 22:32 |
clarkb | melwitt: no worries | 22:32 |
*** tobiash has quit IRC | 22:33 | |
fungi | clarkb: ahh, yeah i added the $ case just in case future iterations ended the line immediately after the .whl | 22:33 |
*** tobiash has joined #opendev | 22:34 | |
fungi | but it would be fine to only insist on # or space for patterns we've seen thus far | 22:34 |
fungi | was just trying to make the match as accepting as possible | 22:34 |
clarkb | got it, in that case your proposed version is probably fine | 22:34 |
ianw | fungi: i think to start, we should reboot mirror-update.opendev.org to make sure its kernel openafs module is in sync with the userspace tools | 22:35 |
fungi | we'll still need to update it if they change case on Downloading or use a different word or wrap the filename in brackets or... | 22:35 |
clarkb | fungi: ianw https://review.opendev.org/#/c/730861/ should be a quick review if you have a moment, that is for testing base-test with ssl'd mirrors | 22:35 |
clarkb | fungi: well we'll be pinning pip soon enough I bet to accommodate python2 there | 22:35 |
clarkb | fungi: and if we do that the output should be pretty stable | 22:35 |
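For readers without review 730933 open, a rough stand-in for the pattern under discussion (illustrative only; the real one lives in that change):

    # Match pip's "Downloading <pkg>.whl" lines whether the filename is followed
    # by a '#' progress fragment or a ' (5.2 MB)' size suffix:
    grep -E 'Downloading .*\.whl(#| \()' job-output.txt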
openstackgerrit | Monty Taylor proposed opendev/system-config master: Split inventory into multiple dirs and move hostvars https://review.opendev.org/730991 | 22:36 |
openstackgerrit | Monty Taylor proposed opendev/system-config master: Move base roles into a base subdir https://review.opendev.org/730937 | 22:36 |
mordred | clarkb: I don't have this totally working yet - but does the idea of where that's going make sense? ^^ | 22:36 |
clarkb | mordred: I'm not getting how the inventory split helps. We already organize group_vars roughly by service, and host_vars are not collapsed any further than they are already? | 22:38 |
clarkb | mordred: the roles side makes sense to me | 22:40 |
openstackgerrit | Monty Taylor proposed opendev/system-config master: Split out a base playbook for the zuul service https://review.opendev.org/730999 | 22:43 |
mordred | clarkb: the inventory split will eventually let us have 2 different files for each host/group - one for variables that are needed for base roles and one for variables that are needed by service-specific roles | 22:44 |
clarkb | mordred: gotcha so the split is service/ vs base/ not between services as much | 22:45 |
mordred | clarkb: that way we can file-match inventory/service/host_vars/zuul01.openstack.org.yaml in infra-prod-service-zuul and inventory/base/host_vars/zuul01.openstack.org.yaml in base-zuul | 22:45 |
clarkb | but that then helps to know when to run base or not | 22:45 |
mordred | yeah | 22:45 |
mordred | a side-benefit is it will also let us override hostvars in gate-specific vars | 22:46 |
openstackgerrit | Jeremy Stanley proposed openstack/project-config master: Update pip output parsing to fix wheel mirror builds https://review.opendev.org/730933 | 22:46 |
mordred | which we can't do when the hostvars are just adjacent to playbooks, since that's a higher-priority location | 22:46 |
clarkb | ianw: ^ is worth a review given the mirror work | 22:46 |
mordred | clarkb: granted - I'm not 100% sure this is good/better yet | 22:46 |
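A sketch of the layout mordred is describing (paths assumed from the chat, not final):

    # Splitting host_vars by purpose lets each Zuul job file-match narrowly:
    # infra-prod-service-zuul watches inventory/service/..., base jobs watch
    # inventory/base/...
    inventory/
      base/host_vars/zuul01.openstack.org.yaml     # vars consumed by base roles
      service/host_vars/zuul01.openstack.org.yaml  # vars for the zuul service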
openstackgerrit | Merged opendev/base-jobs master: Test mirrors with ssl https://review.opendev.org/730861 | 22:46 |
clarkb | I've restored https://review.opendev.org/#/c/680178/ to test ^ | 22:47 |
clarkb | completely unrelated, gitea has a few commits on the 1.12 branch after the rc tag | 22:51 |
clarkb | they are largely bug fixes so we may want to wait for another rc or release before deploying that | 22:52 |
*** tosky has quit IRC | 22:52 | |
*** tkajinam has joined #opendev | 22:57 | |
openstackgerrit | Clark Boylan proposed zuul/zuul-jobs master: DO NOT MERGE test base-test with no virtualenv perms modifications https://review.opendev.org/680178 | 23:00 |
ianw | server afs01.dfw.openstack.org partition /vicepa RO Site -- Not released | 23:07 |
ianw | server afs02.dfw.openstack.org partition /vicepa RO Site -- Not released | 23:07 |
ianw | fungi: ^ so they are there in vldb | 23:07 |
ianw | i mean in listvldb | 23:07 |
fungi | yeah, i wonder why the vos release hangs then | 23:08 |
ianw | i don't see it hang on afs01 | 23:09 |
ianw | but it doesn't work | 23:09 |
ianw | fungi: http://paste.openstack.org/show/794022/ | 23:10 |
fungi | VOLSER: volume is busy | 23:11 |
fungi | huh | 23:11 |
fungi | and it's afs01.dfw which is struggling apparently? | 23:11 |
clarkb | infra-root config-core I think https://review.opendev.org/#/c/730862/1 is ready as base-test testing shows apt and pypi things working with ssl | 23:12 |
ianw | fungi: "vos status" does not show anything going on, afaics | 23:12 |
clarkb | does listvldb show any locks? | 23:13 |
clarkb | (if it does maybe we can work back from that to find what locked it?) | 23:13 |
ianw | no, doesn't show locks | 23:14 |
fungi | clarkb: looking through the networking-arista project journal on pypi (logged in with our openstackci creds), i see february 22 is the last time it had three log entries for a new release ("new release" and "add py2.py3 file" for the wheel and then "add source file" for the sdist). the next release on april 5 just has the "add py2.py3 file" entry as does each release after it, presumably also corresponding | 23:22 |
fungi | to the sdist upload failures | 23:22 |
ianw | fungi: ok, i ran a salvage on it, it seemed to do something, and it's still not happy | 23:22 |
fungi | :/ | 23:22 |
fungi | ianw: could we delete and try to recreate the sites and volumes? it's not like they have any content whatsoever | 23:23 |
clarkb | fungi: does the journal not show the sdist for the newer releases? because there are sdists available | 23:24 |
fungi | clarkb: it doesn't log them being uploaded, no, only the wheels | 23:25 |
fungi | it also doesn't log the release creation any longer | 23:25 |
fungi | i have no idea if this is a behavior change in pypi or what | 23:25 |
clarkb | weird, it definitely lists the sdists | 23:25 |
fungi | the security history log still shows events for each release creation though | 23:26 |
fungi | timestamps are a bit weird too | 23:27 |
fungi | interesting, for 2017.2.8 the security history shows a release created event at May 20, 2020, 9:47:08 PM by user openstack-arista but then the journal shows the wheel upload at May 20, 2020, 9:58:00 PM by user openstackci (from the ip address of ze06) | 23:30 |
clarkb | ooh interesting, is it possible they did an sdist upload out of band then we did the wheel upload? | 23:31 |
fungi | markmcclain: ^ you might ask whoever is handling your release process what steps they're following... it shouldn't be necessary to manually create a release on pypi. are they doing that by uploading an sdist? | 23:31 |
clarkb | markmcclain: ^ if you are still around maybe you know? | 23:31 |
fungi | looking at the security history, the last working releases were when openstackci created the releases | 23:32 |
fungi | the releases starting in april where openstack-arista created them prior to upload correspond to the build failures we've been seeing | 23:32 |
fungi | every release starting from april was created by the openstack-arista account rather than by zuul uploading release artifacts | 23:34 |
fungi | that's got to have something to do with it | 23:34 |
clarkb | ++ | 23:34 |
fungi | far too coincidental of a match | 23:34 |
fungi | that also explains why i'm not seeing this problem for other repos, they show openstackci creating the corresponding releases at time of package upload | 23:36 |
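For contrast, the shape of the upload flow fungi alludes to, where the first artifact upload implicitly creates the release (generic commands, not the exact job steps):

    # Build sdist and wheel, then upload both; PyPI creates the release on the
    # first upload, so no manual "create release" step is needed.
    python3 setup.py sdist bdist_wheel
    python3 -m twine upload dist/*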
ianw | number of sites -> 3 | 23:45 |
ianw | server mirror-update01.opendev.org partition /vicepa RW Site | 23:45 |
ianw | server mirror-update01.opendev.org partition /vicepa RO Site | 23:45 |
ianw | server afs02.dfw.openstack.org partition /vicepa RO Site | 23:45 |
ianw | i feel sure that very odd server name has something to do with all this | 23:45 |
clarkb | wait did mirror-update get set up as a server? | 23:46 |
ianw | no ... hence the very odd bit :) | 23:46 |
fungi | i just double-checked and none of the commands i ran to create these volumes/sites had "mirror-update" anywhere in them | 23:47 |
fungi | so it's presumably not from today at least | 23:47 |
ianw | i'm running syncserv on mirror-update now, to see if that brings it back inline | 23:48 |
ianw | fungi: i have no idea how things got so messed up :/ i had to zap a volume, and then vos delentry ... but i think after recreating it's working | 23:59 |
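Reconstructing the recovery ianw describes, using the volume id from the earlier vos release error and otherwise assumed names (the log elides the exact commands):

    # Force-remove the damaged on-disk copy, drop its VLDB entry, recreate:
    vos zap -server afs01.dfw.openstack.org -partition a -id 536871098 -force
    vos delentry -id 536871098
    vos create afs01.dfw.openstack.org a mirror.wheel.focalx64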
clarkb | the existing meetpad server is a 8vcpu 8gb memory server. jvb scaling seems to be more cpu dependent than memory dependent so I'll probably use a flavor with at least 8vcpu (which I'm guessing the smaller is also with 8gb memory) for adding a jvb server | 23:59 |