*** Dmitrii-Sh has quit IRC | 00:04 | |
*** DSpider has quit IRC | 00:07 | |
*** moppy has quit IRC | 00:10 | |
*** Dmitrii-Sh has joined #opendev | 00:10 | |
*** moppy has joined #opendev | 00:10 | |
*** olaph has joined #opendev | 00:42 | |
*** Dmitrii-Sh has quit IRC | 00:46 | |
*** Dmitrii-Sh has joined #opendev | 00:55 | |
openstackgerrit | Merged opendev/system-config master: Replace OVH CI mirrors https://review.opendev.org/727388 | 01:12 |
openstackgerrit | Merged openstack/diskimage-builder master: block device: update variable name https://review.opendev.org/727431 | 01:28 |
*** redrobot has quit IRC | 01:34 | |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: bindep: use virtualenv_command from ensure-pip https://review.opendev.org/727561 | 01:37 |
fungi | the deploy pipeline has almost reached infra-prod-service-mirror | 02:21 |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: Unset bindep_command to exercise install paths https://review.opendev.org/727593 | 02:31 |
*** kevinz has joined #opendev | 02:35 | |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: Unset bindep_command to exercise install paths https://review.opendev.org/727593 | 02:39 |
fungi | must be almost finished, the cachedirs exist on the new mirrors as of a few seconds ago | 02:39 |
fungi | then i can move them into the logical volumes and we're ready to test | 02:39 |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: Unset bindep_command to exercise install paths https://review.opendev.org/727593 | 02:44 |
ianw | fungi: ++ thanks! | 02:46 |
fungi | i'm nearly done transplanting, then will reboot them both | 02:52 |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: Unset bindep_command to exercise install paths https://review.opendev.org/727593 | 02:54 |
fungi | and rebooting | 02:55 |
fungi | assuming it checks out, i'll un-wip 727393 | 02:57 |
ianw | fungi: did you do the swizzle in the mirror setup? | 03:04 |
fungi | yeah | 03:05 |
fungi | assuming you mean 727392 | 03:05 |
ianw | https://review.opendev.org/#/c/727392/1 yep | 03:05 |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: Unset bindep_command to exercise install paths https://review.opendev.org/727593 | 03:08 |
fungi | i couldn't access afs even though the openafs lkm was loaded per lsmod and ps showed the openafs-client daemon running | 03:10 |
fungi | but there was also a running rmmod process since around boot time | 03:10 |
fungi | so i'm rebooting them a second time to see if that persists | 03:10 |
fungi | these take a long while to boot | 03:10 |
fungi | still the same after another reboot | 03:11 |
fungi | root 795 0.0 0.0 4648 872 ? Ds 03:09 0:00 /sbin/rmmod openafs | 03:13 |
fungi | that is really strange | 03:13 |
fungi | i wonder if it's dkms's doing somehow | 03:13 |
fungi | or something to do with the openafs-client systemd unit? | 03:13 |
fungi | it's parented to init | 03:14 |
fungi | i'm also well past my bedtime. need to be up extra early to catch the release excitement | 03:15 |
fungi | found this in syslog: | 03:18 |
fungi | May 13 03:12:12 mirror01 kernel: [ 12.988169] openafs: module verification failed: signature and/or required key missing - tainting kernel | 03:19 |
fungi | then later: | 03:19 |
fungi | May 13 03:14:28 mirror01 afsd[778]: afsd: Error enabling dynroot support. | 03:19 |
fungi | May 13 03:14:28 mirror01 afsd[778]: afsd: Error enabling fakestat support. | 03:19 |
fungi | then a whole bunch of "Adding cell 'rl.ac.uk': error -1" but for various cells | 03:20 |
fungi | eventually "openafs-client.service: State 'stop-post' timed out. Terminating." | 03:21 |
fungi | so i guess that explains the rmmod | 03:21 |
fungi | however the attempted rmmod then raises a kernel error on its own | 03:21 |
fungi | possibly because it's still busy | 03:21 |
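The failure chain fungi pieced together can be pulled out of syslog with a simple filter; the sample below reproduces the exact lines quoted above (hostname, pid, and timestamps are from that paste), so it is safe to run anywhere:

```shell
# Recreate the syslog lines quoted above in a scratch file, then filter
# for the openafs/afsd failure chain the way one would on the real host
# (there it would be: grep -E 'openafs|afsd' /var/log/syslog).
cat <<'EOF' > /tmp/syslog.sample
May 13 03:12:12 mirror01 kernel: [   12.988169] openafs: module verification failed: signature and/or required key missing - tainting kernel
May 13 03:14:28 mirror01 afsd[778]: afsd: Error enabling dynroot support.
May 13 03:14:28 mirror01 afsd[778]: afsd: Error enabling fakestat support.
EOF
grep -cE 'openafs|afsd' /tmp/syslog.sample
# prints 3
```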
fungi | using the same kernel and openafs package versions as the most recently-built mirror.us-east.openedge.opendev.org which is working fine | 03:23 |
ianw | hrm | 03:23 |
fungi | ianw: if you get bored, that could use looking into. i'm going to have to call it a night but can resume after tomorrow's release activities cease | 03:24 |
ianw | that signature thing is always there but yeah ... it should work | 03:24 |
ianw | all the hosts? | 03:24 |
fungi | the two new mirrors i just built | 03:24 |
fungi | mirror01.bhs1.ovh.opendev.org and mirror02.gra1.ovh.opendev.org | 03:25 |
fungi | doesn't seem to be a problem on our other mirror servers | 03:25 |
clarkb | maybe try rebuild it with dkms? | 03:26 |
clarkb | (package reinstall I guess) | 03:26 |
ianw | we have had issues with ordering before, i think there's breadcrumbs in the comments of the roles | 03:26 |
ianw | there is an openafs mod installed | 03:27 |
fungi | yeah, and a defunct rmmod process for it too | 03:27 |
fungi | i think because systemd is trying to clean up after failing to start the openafs-client service | 03:27 |
fungi | anyway, if it turns out to be something simple and you can verify the afs mirror urls are working (the apache proxy cache sites and https certs already seem to check out fine), feel free to delete my wip vote on 727393 and approve it | 03:29 |
fungi | okay, really heading to sleep now | 03:29 |
ianw | # dkms status | 03:30 |
ianw | openafs, 1.8.3, 4.15.0-99-generic, x86_64: installed | 03:30 |
ianw | fungi: thanks ... i'll see if i can find anything | 03:30 |
ianw | i rebuilt it with dkms and am seeing what happens on reboot | 03:44 |
ianw | A start job is running for OpenAFS client (31s / 1min 33s) | 03:48 |
ianw | it's clearly still unhappy | 03:48 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [dnm] trigger openafs client test https://review.opendev.org/727627 | 03:56 |
ianw | ^ that appears to pass on the same kernel | 04:13 |
*** ykarel|away is now known as ykarel | 04:26 | |
*** diablo_rojo has quit IRC | 04:44 | |
*** mtreinish has quit IRC | 05:01 | |
*** dpawlik has quit IRC | 05:04 | |
ianw | i have no idea ... | 05:06 |
ianw | going to try tracing afsd as it starts | 05:06 |
ianw | also, i've uploaded 1.8.5, same as focal, for bionic to the ppa ... see if it builds | 05:07 |
ianw | ok, for the record, i disabled openafs-client and rebooted. i ran /usr/share/openafs/openafs-client-precheck which seems to load the module | 05:17 |
ianw | then i ran strace -f -o /tmp/foo.txt /sbin/afsd -afsdb -dynroot -fakestat -verbose -debug which is in /home/ianw/strace-afsdb.txt | 05:18 |
ianw | that command appears to die in an unkillable state | 05:20 |
ianw | the last thing it floods out is "SScall(183, 28, -1301744064)=0" | 05:20 |
ianw | it also does a lot of stuff in the cache directory, which is populated | 05:30 |
ianw | 1.8.5 has built for bionic, so i'm trying that now. it seems if we have to debug this further, we might as well be using that | 05:31 |
ianw | (from 1.8.3 ... 1.8.4 has various fixes, 1.8.5 itself is security) | 05:32 |
*** mtreinish has joined #opendev | 05:34 | |
ianw | well well well | 05:42 |
ianw | root@mirror01:/var/cache/openafs# ls | 05:42 |
ianw | 05:42 | |
ianw | that appears pretty dead | 05:42 |
ianw | ... ok ... weirdness. i recreated the file-system on /dev/mapper/main-openafs which went just fine and now ... it works | 05:47 |
*** ysandeep|away is now known as ysandeep | 05:50 | |
*** DSpider has joined #opendev | 05:55 | |
ianw | and the same on mirror02 ... formatting the cache partition has made it work | 05:58 |
AJaeger | \o/ | 05:59 |
ianw | https:// sites on both work for me | 06:01 |
ianw | i'm not going to take the risk of switching this in now as i'm about done for today | 06:01 |
AJaeger | ianw: thanks, have a good night! | 06:09 |
*** dpawlik has joined #opendev | 06:25 | |
*** lpetrut has joined #opendev | 06:42 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Policy rule for ownership between remote and executor https://review.opendev.org/724855 | 07:20 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Add linting rule to enforce no-same-owner policy https://review.opendev.org/727642 | 07:20 |
*** rpittau|afk is now known as rpittau | 07:29 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Add linting rule to enforce no-same-owner policy https://review.opendev.org/727642 | 07:29 |
*** tosky has joined #opendev | 07:35 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Policy rule for ownership between remote and executor https://review.opendev.org/724855 | 07:41 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Add linting rule to enforce no-same-owner policy https://review.opendev.org/727642 | 07:41 |
*** moppy has quit IRC | 08:01 | |
*** moppy has joined #opendev | 08:01 | |
openstackgerrit | Sorin Sbarnea (zbr) proposed zuul/zuul-jobs master: bindep: Add missing virtualenv and fixed repo install https://review.opendev.org/693637 | 08:08 |
*** ykarel is now known as ykarel|lunch | 08:30 | |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Fix nodejs-npm-run-test https://review.opendev.org/727670 | 09:03 |
*** dtantsur|afk is now known as dtantsur | 09:13 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Don't require tox_envlist https://review.opendev.org/726829 | 09:14 |
openstackgerrit | Merged zuul/zuul-jobs master: Fix nodejs-npm-run-test https://review.opendev.org/727670 | 09:20 |
*** ykarel|lunch is now known as ykarel | 09:24 | |
*** ysandeep is now known as ysandeep|lunch | 10:09 | |
fungi | ianw: thanks for finding that! all i did was format the volume, set permissions to match the /var/cache/openafs and then move the files into it | 10:14 |
fungi | i wonder if openafs is picky about those files getting moved | 10:14 |
fungi | ianw: when you say "formatting" you mean rerunning mkfs.ext4 on them? | 10:20 |
fungi | if so, maybe it's one of the tuning options i applied following our document on adding logical volumes | 10:20 |
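For the record, the fix ianw describes amounts to recreating the cache filesystem empty and letting afsd initialize its own cache structure, rather than moving in files from the old rootfs cache. A hedged sketch follows: the /dev/mapper/main-openafs device path is from the log, /var/cache/openafs is the standard mount point, and the runnable part targets a scratch image file so nothing real is touched:

```shell
# What "formatting the cache partition" boils down to; demonstrated
# against a scratch image file rather than the real logical volume.
truncate -s 64M /tmp/openafs-cache.img
if command -v mkfs.ext4 >/dev/null 2>&1; then
    mkfs.ext4 -F -q /tmp/openafs-cache.img
fi
# Real-host equivalent (assumed sequence; requires root and a stopped client):
#   systemctl stop openafs-client
#   mkfs.ext4 /dev/mapper/main-openafs
#   mount /dev/mapper/main-openafs /var/cache/openafs
#   systemctl start openafs-client
```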
*** dpawlik has quit IRC | 10:21 | |
*** hrw has quit IRC | 10:28 | |
*** ysandeep|lunch is now known as ysandeep | 10:55 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: intercept-job -- self-service SSH access https://review.opendev.org/679306 | 10:56 |
*** rpittau is now known as rpittau|bbl | 11:04 | |
*** dpawlik has joined #opendev | 11:32 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Add linting rule to enforce no-same-owner policy https://review.opendev.org/727642 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-coverage-output: do not synchronize owner https://review.opendev.org/727717 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-javascript-content-tarball: do not synchronize owner https://review.opendev.org/727718 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-javascript-output: do not synchronize owner https://review.opendev.org/727719 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-javascript-tarball: do not synchronize owner https://review.opendev.org/727720 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-markdownlint: do not synchronize owner https://review.opendev.org/727721 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-phoronix-results: do not synchronize owner https://review.opendev.org/727722 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-puppet-module-output: do not synchronize owner https://review.opendev.org/727723 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-python-sdist-output: do not synchronize owner https://review.opendev.org/727724 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-sphinx-output: do not synchronize owner https://review.opendev.org/727725 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-sphinx-tarball: do not synchronize owner https://review.opendev.org/727726 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-tox-output: do not synchronize owner https://review.opendev.org/727727 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-translation-output: do not synchronize owner https://review.opendev.org/727728 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: fetch-subunit-output: do not synchronize owner https://review.opendev.org/727729 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: prepare-workspace: do not synchronize owner https://review.opendev.org/727730 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: publish-artifacts-to-fileserver: do not synchronize owner https://review.opendev.org/727731 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: upload-logs: do not synchronize owner https://review.opendev.org/727732 | 11:45 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Add linting rule to enforce no-same-owner policy https://review.opendev.org/727642 | 11:49 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: tarball-post.yaml: do not synchronize owner https://review.opendev.org/727735 | 11:49 |
*** sshnaidm|afk is now known as sshnaidm | 12:16 | |
*** ysandeep is now known as ysandeep|brb | 12:17 | |
*** rpittau|bbl is now known as rpittau | 12:19 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Drop support for ansible 2.7 https://review.opendev.org/727410 | 12:20 |
*** tkajinam has quit IRC | 12:24 | |
*** lpetrut has quit IRC | 12:46 | |
AJaeger | config-core, two small project-config changes, please review: https://review.opendev.org/726461 and https://review.opendev.org/727178 | 13:07 |
*** ykarel is now known as ykarel|afk | 13:09 | |
smcginnis | Anyone know if the jitsi instance is falling down? | 13:09 |
*** spotz has joined #opendev | 13:13 | |
AJaeger | infra-root, do we want to try swift upload on OVH again? https://review.opendev.org/#/c/726028/ | 13:13 |
fungi | smcginnis: no idea, i can ssh into the server and look. i'm not able to really do much with it myself since i don't have a mic and cam yet | 13:14 |
smcginnis | Can probably still see the problem if you try to join a meeting on there. | 13:15 |
smcginnis | Just immediately goes to a message stating you have been disconnected and periodically tries to rejoin. | 13:15 |
fungi | yeah, that happens for any room name i try to give it | 13:17 |
fungi | it looks like the container is running and was last restarted on thursday | 13:18 |
fungi | no signs of any memory pressure or cpu utilization | 13:18 |
spotz | Chris thought maybe java needed a kick | 13:19 |
fungi | i'll see if i can remember how to look at the container's log | 13:19 |
* fungi is not all that good at these container things just yet | 13:19 | |
fungi | spotz: smcginnis: was it working earlier? | 13:21 |
spotz | This was our first time testing it | 13:22 |
smcginnis | fungi: I think I had connected to one a week or so ago. | 13:23 |
fungi | looks like the jvb container is regularly reporting INFO: Performed a successful health check in PT0.005S. Sticky failure: false | 13:23 |
fungi | but `docker-compose logs` isn't reporting much else that i can see | 13:24 |
fungi | ahh, that container just drowns out the logs from the other containers. if i add --tail=50 i can see output from the others | 13:26 |
fungi | the jicofo container last logged activity on friday | 13:27 |
spotz | hehe, troubleshooting how you really learn something:) | 13:28 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: fetch-sphinx-tarball: introduce zuul_use_fetch_output https://review.opendev.org/681870 | 13:28 |
fungi | same for the prosody container | 13:29 |
fungi | nothing they logged looks like errors to my untrained eye, but i also need to read up on what these various components do and how they fit together. i'm hesitant to restart anything without knowing what evidence of the problem i might be destroying | 13:30 |
fungi | spotz: earlier in a privmsg you said it was prompting users to connect microphones, so this current behavior seems different | 13:30 |
smcginnis | I believe if there is a mic and camera present, the browser will ask for permission to access them. But I think that's a normal and separate thing. | 13:32 |
fungi | the corresponding etherpad for the room i'm testing seems to be loading fine, so i doubt anything about the etherpad server is triggering this behavior | 13:32 |
fungi | must be one of the jitsi-meet services | 13:32 |
*** redrobot has joined #opendev | 13:32 | |
spotz | Yeah but after that it's a rejoin loop | 13:33 |
fungi | got it, so same condition, and not sure how long ago it got into this state or whether it just started, i guess | 13:34 |
spotz | Unfortunately never tried logging in before so incomplete data | 13:36 |
fungi | i know we were testing it out as recently as friday | 13:36 |
fungi | clarkb: said it was working as recently as monday: http://eavesdrop.openstack.org/meetings/infra/2020/infra.2020-05-12-19.01.log.html#l-117 | 13:41 |
spotz | I wonder if existing rooms are fine | 13:45 |
corvus | i see a 405 when posting to /http-bind; that's supposed to connect to prosody, but i don't see any messages in the prosody log | 13:45 |
spotz | On the plus side the Community meeting isn't using it | 13:46 |
corvus | jicofo is normally chatty, but i don't see anything there | 13:46 |
corvus | my inclination would be to restart jicofo, then if that doesn't work, restart prosody | 13:46 |
corvus | we might at least narrow down the component that's misbehaving | 13:47 |
corvus | fungi: sound good? | 13:48 |
fungi | sounds reasonable, i'm still reading up on how these bits fit together | 13:48 |
fungi | i just wanted to make sure i didn't lose any logs which could tell us what went wrong | 13:49 |
corvus | i'll save the docker log output real quick | 13:49 |
corvus | in ~root/logs | 13:50 |
fungi | thanks | 13:50 |
corvus | jicofo restart seems insufficient; restarting prosody | 13:51 |
corvus | restarting jvb | 13:52 |
corvus | okay, next possibility: we track new upstream images for all of the components except web, and they were updated 5 days ago | 13:53 |
corvus | perhaps our old web image is incompatible, or perhaps the other components upgraded in a way that needs intervention | 13:54 |
corvus | how about we either try rolling the non-web components back, or the web component forward? | 13:54 |
fungi | do we have an updated web image, or do we need to trigger a build? | 13:55 |
corvus | we would either need to trigger a build, or, just for testing purposes, use the upstream image (without the etherpad auto-open) | 13:56 |
corvus | i'll try rolling the other components back first | 13:57 |
fungi | okay, i suppose that entails a restart of everything besides the jitsi-meet-web container... which of those didn't yet get restarted? | 13:58 |
corvus | fungi: i restarted all 3 | 13:58 |
fungi | prosody, jicofo and jvb. yep okay | 13:59 |
fungi | i see web is the only other one listed in the docker-compose | 13:59 |
*** ykarel|afk is now known as ykarel | 13:59 | |
corvus | okay, all restarted on stable-4548 | 14:00 |
spotz | No joy:( | 14:01 |
fungi | yep, still seeing the same "you have been disconnected" error | 14:01 |
corvus | i'll try one more old version | 14:01 |
corvus | also no joy | 14:03 |
corvus | how about we try latest everything upstream | 14:03 |
fungi | wfm | 14:03 |
fungi | and then if that solves it we can update web with the patch | 14:03 |
fungi | and if not, the problem must be elsewhere | 14:03 |
fungi | looks like firewall rules haven't changed for that server since march | 14:06 |
openstackgerrit | Monty Taylor proposed zuul/zuul-jobs master: Combine javascript deployment and deployment-tarball jobs https://review.opendev.org/727370 | 14:08 |
openstackgerrit | Monty Taylor proposed zuul/zuul-jobs master: Set node_version in js-build base job https://review.opendev.org/727774 | 14:08 |
corvus | hrm, i'm stumped. | 14:09 |
openstackgerrit | Monty Taylor proposed zuul/zuul-jobs master: Combine javascript deployment and deployment-tarball jobs https://review.opendev.org/727370 | 14:09 |
corvus | i'll change prosody to debug log | 14:10 |
fungi | nothing relevant in /etc has changed in the past few days | 14:14 |
corvus | it looks like one of the services has a docker volume; i've deleted it and restarted everything | 14:15 |
corvus | i'm going to back out the hyphen change | 14:19 |
fungi | dpkg.log says lots of package updates at 06:45z today | 14:20 |
corvus | that was the problem | 14:20 |
corvus | https://meetpad.opendev.org/http-bind | 14:20 |
corvus | that was getting redirected | 14:20 |
corvus | let me restart all the containers at the version from this morning | 14:21 |
fungi | dbus initramfs-tools libc-bin libnss-systemd libpam-systemd libsystemd0 libudev1 man-db open-iscsi systemd-sysv systemd udev | 14:21 |
corvus | fungi: ^ problem identified | 14:21 |
fungi | corvus: so "http-bind" is a thing we're not supposed to redirect? | 14:22 |
corvus | fungi: correct, it's referenced elsewhere in that config file | 14:23 |
fungi | ahh, i wonder if we need specific exclusions, or can get away with changing the matching order | 14:23 |
fungi | taking a look | 14:23 |
corvus | smcginnis, spotz: ^ found the problem | 14:23 |
spotz | Sweet | 14:24 |
smcginnis | Thanks! | 14:24 |
spotz | Yep I'm in! Thanks corvus!!! | 14:24 |
fungi | strangely, that patch was working earlier, i wonder if it relied on folks already having cached connections or something | 14:25 |
corvus | spotz, smcginnis: i'm going to restart once more to reset the log levels | 14:25 |
corvus | may be a brief hiccup | 14:25 |
spotz | No worries, our testing window for today is over but we'll give it a shot maybe tomorrow | 14:25 |
corvus | fungi: or we didn't restart the container after applying the change? | 14:26 |
corvus | fungi: i'll try your reordering idea | 14:27 |
corvus | fungi: nope | 14:28 |
*** lpetrut has joined #opendev | 14:29 | |
smcginnis | Back to getting the disconnected/rejoin loop. | 14:29 |
corvus | oh sorry i thought you were done testing | 14:30 |
corvus | i put it back | 14:30 |
corvus | smcginnis: should be gtg | 14:30 |
smcginnis | Oh, no. No worries. I was just testing. If you need to do more, please go ahead. | 14:30 |
openstackgerrit | Jeremy Stanley proposed opendev/system-config master: Move jitsi-meet pattern redirect to end https://review.opendev.org/727780 | 14:30 |
corvus | smcginnis: nah, i think we need to read the nginx manual now anyway :) | 14:31 |
fungi | corvus: so something like that ^ won't work? | 14:31 |
smcginnis | Oh no, not RTFM! | 14:31 |
corvus | fungi: i put the http-bind entry at the top and that didn't work | 14:31 |
fungi | got it. yeah, now reading how location matches are parsed | 14:32 |
corvus | i don't understand why the config.js and similar things work though. | 14:32 |
fungi | they have a "." in them | 14:32 |
corvus | ooh | 14:33 |
fungi | which i don't think we want to rely on as a protection (likely folks could want "." in room/pad names too) | 14:33 |
fungi | or in the case of /etherpad/ they have a / in them | 14:33 |
fungi | which is probably safer to rely on | 14:33 |
corvus | maybe we put "^~" next to the jitsi-meet block | 14:36 |
corvus | ^~: If a caret and tilde modifier is present, and if this block is selected as the best non-regular expression match, regular expression matching will not take place. | 14:37 |
corvus | that sounds like what we want | 14:37 |
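In config terms, the fix being discussed would look roughly like this. This is a sketch only: the exact location blocks, upstream names, and redirect target in the jitsi-meet web config are assumed, not copied from the server.

```nginx
# "^~" makes the prefix match final: when this block is the best
# non-regex match, regex locations are skipped entirely.
location ^~ /http-bind {
    proxy_pass http://prosody:5280/http-bind;
}

# Hypothetical pad-name redirect; without the "^~" above it would
# also capture /http-bind, which is what broke the BOSH connection.
location ~ ^/([a-zA-Z0-9_-]+)$ {
    return 302 /etherpad/p/$1;
}
```

The design point is ordering-independence: with `^~`, moving the redirect regex earlier or later in the file no longer matters for `/http-bind`, which matches corvus's observation that simply reordering the blocks did not help.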
*** ysandeep|brb is now known as ysandeep | 14:38 | |
*** mlavalle has joined #opendev | 14:42 | |
openstackgerrit | James E. Blair proposed opendev/system-config master: Exclude some regex matches in jitsi-meet web https://review.opendev.org/727786 | 14:42 |
corvus | fungi: should i try that on the server ^ ? | 14:43 |
fungi | corvus: looks like a worthwhile test | 14:44 |
fungi | sorry, distracted by lingering openstack release activities and now a dockerhub proxy outage reported in #openstack-infra | 14:45 |
fungi | and there's an openstack "community meeting" conference call happening at the same time. when it rains it pours | 14:45 |
spotz | yeah I'm doing 4 things at once | 14:46 |
clarkb | I'm starting to be here now and catching up | 14:47 |
clarkb | fungi: I can look at the dockerhub proxy thing | 14:47 |
fungi | clarkb: awesome, thanks, i was just trying to see if i can get it to reproduce on other mirror servers | 14:48 |
fungi | also i need to prepare to turn the ovh regions back on | 14:48 |
corvus | smcginnis: i'm going to restart the web server one more time; may be a blip | 14:48 |
corvus | all right, https://meetpad.opendev.org/http-bind still works | 14:49 |
fungi | excellent! i wonder if we should add "." to the allowed pad character redirect pattern? | 14:49 |
corvus | and the actual service looks like it still works | 14:49 |
smcginnis | 👍 | 14:49 |
clarkb | fungi: ianw looks like it was the afs cache local fs that made openafs services unhappy? | 14:50 |
clarkb | smcginnis: my font doesn't have a glyph for whatever that was. I'll assume it was a thumbs up and not a pile of poo :) | 14:50 |
corvus | clarkb, mordred: https://review.opendev.org/727786 is tested in prod if you want to +3 i can remove meetpad from emergency | 14:50 |
fungi | clarkb: yes, not sure if it was our recommended ext4 tuning (not clear in what way he reformatted the volume) or if it was me moving the cache files from the rootfs into it which caused it to break consistently on both servers | 14:51 |
clarkb | corvus: looking | 14:51 |
smcginnis | clarkb: Haha, yes, thumbs up. | 14:51 |
fungi | clarkb: my terminal showed a thumbs-up glyph, i guess i should be either proud or embarrassed | 14:51 |
fungi | at my font coverage | 14:52 |
corvus | mine showed a weird character and deleted the line above it | 14:52 |
corvus | that feels like the proper unix behavior | 14:52 |
fungi | it does match my expectations, yes | 14:52 |
fungi | cool-retro-term is probably just too fancy since it actually knows how to display that | 14:53 |
clarkb | fungi: it is cool afterall | 14:59 |
fungi | but not all that "retro" behavior-wise | 14:59 |
clarkb | fwiw I think what my terminal is supposed to do is find a glyph in other fonts for that character if it doesn't have it in my current font. But it doesn't seem to do that properly | 14:59 |
fungi | yeah, that's what crt is doing (via qterm internals, since it's built on qterm's backend) | 15:00 |
*** priteau has joined #opendev | 15:01 | |
*** lpetrut has quit IRC | 15:08 | |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Install correct node version in ensure-yarn https://review.opendev.org/727789 | 15:09 |
openstackgerrit | Monty Taylor proposed zuul/zuul-jobs master: Pass node_version explicitly https://review.opendev.org/727774 | 15:11 |
openstackgerrit | Monty Taylor proposed zuul/zuul-jobs master: Combine javascript deployment and deployment-tarball jobs https://review.opendev.org/727370 | 15:11 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Install correct node version in ensure-yarn https://review.opendev.org/727789 | 15:13 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Install correct node version in ensure-yarn https://review.opendev.org/727789 | 15:16 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Install correct node version in ensure-yarn https://review.opendev.org/727789 | 15:18 |
clarkb | #status log Restarted apache2 on mirror.org.rax.opendev.org. It had an apache worker from April 14 that was presumed to be the problem with talking to dockerhub after independently verifying all round robin backends using s_client. | 15:23 |
openstackstatus | clarkb: finished logging | 15:23 |
*** sshnaidm is now known as sshnaidm|afk | 15:28 | |
*** ykarel is now known as ykarel|away | 15:35 | |
*** dtantsur is now known as dtantsur|afk | 15:36 | |
AJaeger | config-core, two small project-config changes, please review: https://review.opendev.org/726461 and https://review.opendev.org/727178 | 15:53 |
fungi | config-core: also https://review.opendev.org/727393 which turns ovh back on again. i'm around to watch for possible fallout from the mirror replacements | 15:59 |
clarkb | fungi: AJaeger done. I can also help with ovh as it comes back online | 16:00 |
AJaeger | regarding OVH: do we want to try swift upload on OVH again? https://review.opendev.org/#/c/726028/ | 16:00 |
clarkb | I tried to keep a reasonably clear todo list for today in case the release needed help | 16:00 |
AJaeger | clarkb: thanks | 16:00 |
fungi | AJaeger: yeah, we never heard back from rledisez on the keystone situation there | 16:01 |
clarkb | they did mention the issue was identified at least | 16:01 |
fungi | yep, so odds are it was fixed within hours | 16:01 |
fungi | i'm okay giving it a shot | 16:01 |
clarkb | maybe let's recheck some changes that reparent to base-test? | 16:01 |
clarkb | then if those are happy approve it? | 16:01 |
* clarkb looks for changes like that | 16:02 | |
fungi | oh, right, i forgot we split them to base-test to investigate | 16:02 |
clarkb | https://review.opendev.org/#/c/680178/ | 16:03 |
clarkb | I restored that change. unittests are expected to fail but not have post failures | 16:03 |
clarkb | basically if all of the logs load and a non zero number of them uploaded to ovh I think we've confirmed it doesn't just fail | 16:03 |
clarkb | (all of them should upload to ovh though due to that earlier change) | 16:04 |
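The base-test mechanism referenced here works by reparenting: base-test carries the candidate revision of the base job's playbooks (here, the OVH swift upload), so a throwaway change can exercise it before base itself is touched. A minimal sketch of such a test change (the job name below is hypothetical):

```yaml
# Hypothetical do-not-merge change: point a job at base-test so its
# post-run log upload exercises the pending base-jobs revision.
- job:
    name: openstack-tox-py38
    parent: base-test
```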
AJaeger | good idea | 16:06 |
*** Burke9077 has joined #opendev | 16:10 | |
openstackgerrit | Merged openstack/project-config master: Retire x/pbrx - part 1 https://review.opendev.org/726461 | 16:11 |
openstackgerrit | Merged openstack/project-config master: Remove noop-jobs from oslo.tools https://review.opendev.org/727178 | 16:11 |
openstackgerrit | Merged openstack/project-config master: Revert "Temporarily disable OVH" https://review.opendev.org/727393 | 16:11 |
openstackgerrit | Monty Taylor proposed zuul/zuul-jobs master: Set node_version in js-build base job https://review.opendev.org/727774 | 16:13 |
openstackgerrit | Monty Taylor proposed zuul/zuul-jobs master: Combine javascript deployment and deployment-tarball jobs https://review.opendev.org/727370 | 16:13 |
AJaeger | infra-root, looking at http://grafana.openstack.org/d/T6vSHcSik/zuul-status?panelId=21&fullscreen&orgId=1 , we seem to miss 100 nodes. Looking at other graphs, we have 10 in vexxhost - http://grafana.openstack.org/d/nuvIH5Imk/nodepool-vexxhost?orgId=1 ; 50 in rax-ord http://grafana.openstack.org/d/8wFIHcSiz/nodepool-rackspace?orgId=1; and 10 in openedge | 16:15 |
AJaeger | http://grafana.openstack.org/d/dYykaX_Wz/nodepool-openedge?orgId=1 | 16:15 |
AJaeger | Anything to be concerned about? | 16:15 |
clarkb | 680178 has uploads to bhs and gra that have run successfully | 16:15 |
clarkb | AJaeger: we have a number of held nodes (but on the order of 10 not 100) | 16:16 |
clarkb | I think that nodepools understanding of quotas can play a role in that too | 16:17 |
AJaeger | clarkb: 20 nodes is still more than normal | 16:17 |
clarkb | it will depress under the artificial max if some quota metric won't allow us to grow further | 16:17 |
AJaeger | clarkb: ok | 16:17 |
clarkb | fungi: maybe you want to approve https://review.opendev.org/#/c/726028/1 if 680178 looks good to you? | 16:18 |
AJaeger | mordred: want to approve pbrx retirement change https://review.opendev.org/726462 and abandon all open reviews? | 16:19 |
*** Dmitrii-Sh has quit IRC | 16:21 | |
*** Dmitrii-Sh has joined #opendev | 16:21 | |
*** rpittau is now known as rpittau|afk | 16:21 | |
openstackgerrit | Merged opendev/system-config master: Exclude some regex matches in jitsi-meet web https://review.opendev.org/727786 | 16:27 |
clarkb | infra-root maybe this afternoon would be a good time to restart zuul to switch back to python3.7 and pull in the don't reconfigure on tags change? | 16:29 |
clarkb | there were also some changes to the merger code so we will want to restart those when we get a chance (this afternoon is looking good since the openstack release went smoothly) | 16:29 |
clarkb | (by this afternoon I mean after 2000 UTC) | 16:30 |
openstackgerrit | Merged opendev/base-jobs master: Revert "Temporarily disable OVH Swift uploads" https://review.opendev.org/726028 | 16:33 |
clarkb | that change should be immediate for any jobs starting after it merged | 16:33 |
clarkb | or maybe not starting but created | 16:33 |
*** Burke9077 has left #opendev | 16:34 | |
*** Burke9077 has joined #opendev | 16:34 | |
AJaeger | mmh, http://grafana.openstack.org/d/BhcSH5Iiz/nodepool-ovh?orgId=1 shows that ovh has been deleting 1 node/7 nodes for quite some time. | 16:39 |
AJaeger | Is nodepool confused? Do we need to do anything? | 16:39 |
fungi | clarkb: back to 3.7 to be consistent with what ansible can work with on the executors? | 16:39 |
fungi | (even though the executors weren't upgraded) | 16:40 |
clarkb | fungi: ya, upstream zuul switched the images back to python3.7 for this reason | 16:40 |
openstackgerrit | Merged zuul/zuul-jobs master: Set node_version in js-build base job https://review.opendev.org/727774 | 16:40 |
clarkb | fungi: so zuul-scheduler et al will be on 3.7 now even though only the executors technically need it | 16:40 |
fungi | makes sense to me, just being sure we didn't have a problem with our current deployment | 16:40 |
fungi | AJaeger: could be that there are some nodes stuck in deleting state. happens from time to time | 16:40 |
clarkb | ya no known issues with our deployment. More just we expect the images to take us there anyway so may as well switch sooner than later | 16:41 |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Combine javascript deployment and deployment-tarball jobs https://review.opendev.org/727370 | 16:48 |
fungi | today sounds good for that, sure | 16:49 |
*** ysandeep is now known as ysandeep|away | 17:12 | |
clarkb | infra-root I've also noticed that we've got a few servers in the emergency file that maybe we can start to remove? | 17:22 |
corvus | i can remove meetpad now | 17:23 |
corvus | done | 17:23 |
clarkb | nb01.openstack.org and nb02.openstack.org, and fungi's ovh mirror lists too | 17:23 |
clarkb | etherpad01.openstack.org also doesn't exist anymore I think | 17:23 |
clarkb | mordred: ^ | 17:23 |
clarkb | etherpad-dev01.openstack.org static.openstack.org files02.openstack.org status.openstack.org too (I'll do status once I confirm that all the work I did to migrate it to new server is done) | 17:24 |
AJaeger | clarkb: want to abandon https://review.opendev.org/680178 again? | 17:25 |
clarkb | AJaeger: yup | 17:25 |
clarkb | looks like old status is still in our inventory so I'll push up a change to remove it as well | 17:25 |
openstackgerrit | Clark Boylan proposed opendev/system-config master: Remove old status server from inventory https://review.opendev.org/727843 | 17:28 |
clarkb | infra-root if we land ^ I can remove it from emergency.yaml too | 17:28 |
fungi | clarkb: two of the mirrors still need their servers deleted, but yeah they're out of the inventory now i guess | 17:32 |
clarkb | fungi: ya if they are out of the inventory I think it's fine to remove from emergency | 17:32 |
clarkb | nb01 and nb02 are still in nova listings but not in inventory so I think they can be safely removed too | 17:33 |
fungi | i can clean them up now, or in a bit when i'm on bridge to actually delete the server instances | 17:33 |
clarkb | I think it's fine to do when you remove the server instances. Just don't want to forget to do it | 17:34 |
clarkb | etherpad01.openstack.org is not in inventory or nova listing so should be safe | 17:34 |
clarkb | fungi: if you aren't going to edit it now I'll update the file to remove etherpad01 | 17:34 |
clarkb | (don't want conflicting writes) | 17:34 |
fungi | feel free | 17:34 |
clarkb | I removed etherpad01.openstack.org and etherpad-dev01.openstack.org as neither are in inventory or nova listing | 17:36 |
*** dpawlik has quit IRC | 17:36 | |
*** priteau has quit IRC | 17:38 | |
clarkb | I'll do status when the change above lands and ask ianw about those servers he was working on later today | 17:38 |
clarkb | that should get us back to a really minimal list | 17:38 |
*** jkt has left #opendev | 17:53 | |
fungi | looks like we've been more than two weeks without an oom on lists.o.o now | 17:53 |
fungi | where previously they were almost daily | 17:53 |
fungi | grafana afs dashboard says our ubuntu mirror is almost a week stale, but others (including debian which is also reprepro from the same server) are fine, odd | 17:54 |
fungi | i'll look into it now | 17:54 |
fungi | VLDB: vldb entry is already locked | 17:55 |
fungi | i don't see a stuck vos release for it | 17:56 |
clarkb | fungi: I'm guessing churn from the early focal distro release caused us to time out on a vos release? | 17:58 |
clarkb | fungi: may just need to grab the cron lock, rerun reprepro then vos release by hand? | 17:59 |
fungi | oh! right, we still authenticate vos release without localauth there | 17:59 |
fungi | yeah, doing that now | 17:59 |
fungi | reran the mirror script just now, and am holding the cron lock for it in a root screen session on mirror-update.openstack.org while running vos release with localauth in a root screen session on afs01-dfw.openstack.org | 18:13 |
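The locked-VLDB recovery fungi walks through above usually amounts to a short command sequence (illustrative only; volume name taken from the discussion, and `-localauth` requires running as root on a server with the AFS key):

```console
# A vos release that times out can leave the VLDB entry locked with no
# transaction actually running. Typical cleanup:
vos examine mirror.ubuntu -localauth   # shows "Volume is locked" and the RW site
vos unlock mirror.ubuntu -localauth    # clear the stale VLDB lock
vos release mirror.ubuntu -localauth   # re-run the release; localauth avoids
                                       # token expiry during a long release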
clarkb | ++ | 18:15 |
*** hashar has joined #opendev | 18:21 | |
clarkb | I'm popping out for a bike ride before rain arrives | 18:27 |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: Add zuul_work_dir to run-test-command https://review.opendev.org/727855 | 18:31 |
openstackgerrit | Merged zuul/zuul-jobs master: ansible-lint-rules: Fix bad path and filename https://review.opendev.org/726449 | 18:33 |
fungi | looks like we're coming up on ~2 hours of jobs running in ovh now, seems like no complaints so far about the new mirrors | 18:50 |
AJaeger | \o/ | 18:50 |
fungi | i'm about to spend an hour-ish in the kitchen, but can delete the old mirror servers once i'm done assuming we're still in the clear there | 18:51 |
openstackgerrit | Merged zuul/zuul-jobs master: fetch-sphinx-output: introduce zuul_use_fetch_output https://review.opendev.org/681905 | 18:59 |
openstackgerrit | Merged zuul/zuul-jobs master: fetch-sphinx-tarball: add missing zuul_success default https://review.opendev.org/727272 | 19:04 |
openstackgerrit | Merged zuul/zuul-jobs master: Add zuul_work_dir to run-test-command https://review.opendev.org/727855 | 19:04 |
*** Burke9077 has quit IRC | 19:18 | |
*** hashar has quit IRC | 19:48 | |
openstackgerrit | Clark Boylan proposed opendev/system-config master: Add infra-root-keys-2020-05-13 to rotate older ssh keys https://review.opendev.org/727865 | 20:17 |
openstackgerrit | Clark Boylan proposed openstack/project-config master: Use infra-root-keys-2020-05-13 in nodepool https://review.opendev.org/727867 | 20:20 |
clarkb | infra-root ^ fyi I told shrews and dmsimard I would do that rotation | 20:20 |
clarkb | fungi: thank you for taking care of the ovh thing | 20:25 |
clarkb | corvus: mordred if you have a quick moment for https://review.opendev.org/#/c/727843/ that is an easy cleanup change in system-config | 20:25 |
openstackgerrit | Oleksandr Kozachenko proposed zuul/zuul-jobs master: Patch CoreDNS corefile The minikube v1.10.x is appending the systemd-resolved conf always. So to workaround this problem, do a patch after deployment. https://review.opendev.org/727868 | 20:30 |
openstackgerrit | Oleksandr Kozachenko proposed zuul/zuul-jobs master: Patch CoreDNS corefile The minikube v1.10.x is appending the systemd-resolved conf always. So to workaround this problem, do a patch after deployment. https://review.opendev.org/727868 | 20:39 |
openstackgerrit | Oleksandr Kozachenko proposed zuul/zuul-jobs master: Patch CoreDNS corefile The minikube v1.10.x is appending the systemd-resolved conf always. So to workaround this problem, do a patch after deployment. https://review.opendev.org/727868 | 20:42 |
corvus | clarkb: done | 20:46 |
openstackgerrit | Clark Boylan proposed opendev/system-config master: Set connection limits on mirror apache workers https://review.opendev.org/727873 | 20:49 |
clarkb | and ^ is an attempt at forcing mirror nodes to recycle apache worker processes | 20:49 |
fungi | okay, so still no ovh complaints, i'll delete the old mirror servers and let amorin know we're all buttoned up | 20:52 |
corvus | clarkb: lgtm and i just learned an apache. | 20:55 |
corvus | apparently maxrequestsperchild is now called maxconnectionsperchild | 20:55 |
clarkb | corvus: ya I discovered that when looking this up too | 20:58 |
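The directive being discussed looks roughly like this in an mpm_event config (values here are illustrative, not necessarily those proposed in change 727873):

```apache
<IfModule mpm_event_module>
    # 0 (the default) means workers are never recycled; a finite value
    # forces each child process to exit after that many connections,
    # returning any slowly-leaked memory to the OS.
    MaxConnectionsPerChild 4096
</IfModule>
```

In Apache 2.4 the old name MaxRequestsPerChild is still accepted as a deprecated alias for MaxConnectionsPerChild.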
fungi | #status log deleted obsolete servers mirror01.bhs1.ovh.openstack.org and mirror01.gra1.ovh.openstack.org | 20:58 |
openstackstatus | fungi: finished logging | 20:58 |
openstackgerrit | Oleksandr Kozachenko proposed zuul/zuul-jobs master: Patch CoreDNS corefile https://review.opendev.org/727868 | 20:58 |
fungi | i've now replied to amorin and cc'd the infra-root mailbox | 21:03 |
fungi | mirror.ubuntu vos release is still in progress | 21:04 |
clarkb | fungi: don't forget to update emergency.yaml if you think those are all done with now | 21:04 |
openstackgerrit | Merged opendev/system-config master: Remove old status server from inventory https://review.opendev.org/727843 | 21:04 |
openstackgerrit | Oleksandr Kozachenko proposed zuul/zuul-jobs master: Patch CoreDNS corefile https://review.opendev.org/727868 | 21:05 |
fungi | clarkb: yep! done just now | 21:05 |
fungi | thanks! | 21:05 |
clarkb | thanks! | 21:05 |
clarkb | for the ssh key rotation I've checked logs for the cloud launcher runs and it looks happy currently; not sure what it was doing when I saw it fail out of the corner of my eye | 21:10 |
clarkb | but all is well now so good to go I think | 21:10 |
openstackgerrit | James E. Blair proposed opendev/system-config master: Run Zuul, Nodepool, and Zookeeper as the "container" user https://review.opendev.org/726958 | 21:11 |
openstackgerrit | David Hill proposed openstack/diskimage-builder master: Disable all repositories after attaching a pool https://review.opendev.org/727879 | 21:19 |
ianw | fungi: yeah, i'm not sure! i couldn't "ls" it, but i could cat from the device .... so it wasn't like a backend storage issue. very weird | 21:48 |
ianw | and yeah, i just did a straight "mkfs.ext4" ... deliberately avoiding the options from the documentation to take it back to nothing. but i've always used those flags (like mount count iirc) and never had issues | 21:50 |
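For reference, the mount-count flag ianw alludes to can also be applied after a plain format (device path here is hypothetical, and mkfs is destructive -- illustrative only):

```console
mkfs.ext4 /dev/main/afscache           # plain format, all defaults
tune2fs -c 0 -i 0 /dev/main/afscache   # disable periodic fsck by mount
                                       # count (-c) and time interval (-i)
```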
fungi | strange | 21:51 |
fungi | anyway, thanks for working it out! | 21:51 |
fungi | i was fairly braindead by then | 21:51 |
clarkb | do we need to update our script that bootstraps lvm and the fs for volumes? | 21:52 |
fungi | it might be that i should have just started from a blank tree rather than moving the existing /var/cache/afs contents into the new filesystem tree | 21:53 |
fungi | er, /var/cache/openafs i mean | 21:54 |
*** DSpider has quit IRC | 21:56 | |
ianw | yeah ... i guess if that gets inconsistent it can kill afsd, and then it seems to maybe get worse as the module doesn't unload properly, then maybe you try to start it and now even more is wrong ... | 21:59 |
fungi | in theory basically nothing should have exercised the cache before then, but... basically nothing is not absolutely nothing | 22:00 |
clarkb | infra-root thinking out loud I think I'll go ahead and restart zuul-web now as that is really low impact and will act as a canary for memory use under python3.7 | 22:04 |
clarkb | then if that looks good proceed with the scheduler and mergers tomorrow morning? | 22:04 |
fungi | wfm, sure | 22:04 |
clarkb | just double checking the 3 hour old image is the python3.7 image and will restart | 22:07 |
clarkb | yup looks like it. Also if you want to cross check against docker hub's digests you need to do `docker images --digests` | 22:10 |
clarkb | otherwise you get image ids only which don't seem useful to cross check | 22:10 |
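The cross-check clarkb describes looks like this (illustrative; image name assumed):

```console
docker images --digests zuul/zuul-web   # DIGEST column is the registry content digest
docker pull zuul/zuul-web:latest        # prints "Digest: sha256:..." to compare against
# plain `docker images` shows only local IMAGE IDs, which don't correspond
# to anything Docker Hub displays
```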
clarkb | ok restarting zuul web now | 22:10 |
clarkb | #status log Restarted zuul-web on zuul.opendev.org in order to switch back to the python3.7 based images. This will act as a canary for memory use. | 22:11 |
openstackstatus | clarkb: finished logging | 22:11 |
openstackgerrit | Oleksandr Kozachenko proposed zuul/zuul-jobs master: Patch CoreDNS corefile https://review.opendev.org/727868 | 22:13 |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: Unset bindep_command to exercise install paths https://review.opendev.org/727593 | 22:17 |
*** tkajinam has joined #opendev | 22:46 | |
*** yuri has quit IRC | 22:52 | |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: Unset bindep_command to exercise install paths https://review.opendev.org/727593 | 22:57 |
*** tosky has quit IRC | 23:06 | |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: Unset bindep_command to exercise install paths https://review.opendev.org/727593 | 23:06 |
ianw | the ansible 2.9.8 update is green and the focal testing on top of that @ https://review.opendev.org/#/c/726981/ if we'd like to do that | 23:10 |
clarkb | we are on 2.9.1 so that should be a really safe update | 23:12 |
clarkb | I have approved that change | 23:12 |
ianw | i think we should knock over these final openstack.org mirrors and get everything to ansible | 23:20 |
ianw | much as the kafs stuff is interesting i don't think we practically can use it just yet ... but we have a framework for testing | 23:21 |
clarkb | ++ that would be great to make consistent. From memory I think we are about half converted at this point | 23:22 |
ianw | i'll start tracking it at https://etherpad.opendev.org/p/openstack.org-mirror-be-gone | 23:22 |
ianw | ok, it looks like nb01 is the only thing hitting mirror02.dfw.rax.openstack.org ... and i think i know why | 23:29 |
clarkb | ianw: semi related did you see in scrollback my questions about emergency.yaml? would be good to clean up your entries in there if we can (nb*, static, and files) | 23:30 |
clarkb | just trying to trim that down to reality so it is easier to think about what is getting ansibled | 23:30 |
clarkb | I'm going to remove status now | 23:30 |
clarkb | the change for that merged | 23:30 |
clarkb | so let me do that then you can edit the file too | 23:30 |
clarkb | all done, all yours | 23:31 |
ianw | ok will look in a sec | 23:31 |
openstackgerrit | Ian Wienand proposed openstack/project-config master: Switch nodepool builders to opendev.org mirrors https://review.opendev.org/690757 | 23:34 |
ianw | clarkb: ^ that responds to your ... mumble ... month old comment and drops the https from the switch, which you're right might cause deb issues | 23:35 |
ianw | i think we're past the point of no return on nb01/02.openstack.org servers, so i'll delete them | 23:37 |
clarkb | +2 | 23:37 |
clarkb | er to the change but also to removing the old builders | 23:37 |
ianw | #status log removed nb01/02.openstack.org servers and volumes | 23:41 |
openstackstatus | ianw: finished logging | 23:41 |
ianw | ok, two down | 23:42 |
ianw | static.openstack.org is gone per https://storyboard.openstack.org/#!/story/2006598 | 23:49 |
ianw | files02 as well | 23:50 |
ianw | that leaves the two kafs testing mirrors, per prior comments i think we should just get opendev.org transition done for now, will work on that | 23:50 |
openstackgerrit | Merged opendev/system-config master: Update to Ansible 2.9.8 https://review.opendev.org/726981 | 23:53 |
clarkb | ++ | 23:54 |
openstackgerrit | Merged opendev/system-config master: Add focal testing for mirror nodes https://review.opendev.org/726970 | 23:55 |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: [wip] Unset bindep_command to exercise install paths https://review.opendev.org/727593 | 23:56 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!