Clark[m] | ianw ^ you might be interested in that | 00:00 |
---|---|---|
Clark[m] | Might be interesting to see if repasting the same data produces the same result | 00:05 |
Clark[m] | A trivial paste of "Test" does not have this problem | 00:05 |
Clark[m] | I need to get back to dinner though | 00:05 |
ianw | interesting, i wonder if it's a diff thing and it's trying to format it? | 00:07 |
ianw | Aug 4 00:10:40 paste01 docker-lodgeit[647]: [pid: 13|app: 0|req: 325241/325241] 127.0.0.1 () {42 vars in 957 bytes} [Wed Aug 4 00:10:40 2021] GET /show/807865/ => generated 11016 bytes in 11 msecs (HTTP/1.1 200) 2 headers in 82 bytes (1 switches on core 0) | 00:11 |
ianw | it does not show an error | 00:11 |
ianw | i dunno, i just pasted the raw content and it seems to show | 00:15 |
ianw | https://paste.opendev.org/show/807868/ | 00:15 |
fungi | should be able to query both pastes from the db and compare, maybe there's something "extra" in the one that's breaking and doesn't show in the raw view for some reason but causes the prettified version to choke | 00:35 |
fungi | like the paste type or something | 00:36 |
opendevreview | Merged opendev/system-config master: Update the python docker images again https://review.opendev.org/c/opendev/system-config/+/803394 | 00:51 |
ianw | select language from pastes where paste_id=807865; | 00:55 |
ianw | +----------+ | 00:55 |
ianw | | language | | 00:55 |
ianw | +----------+ | 00:55 |
ianw | | diff | | 00:55 |
ianw | +----------+ | 00:55 |
ianw | which is interesting because "diff" doesn't appear to be a language type in the list | 00:56 |
ianw | it does, however, seem to remember your last type. i wonder if timburke has an old cookie that remembers a language type it now doesn't know about | 00:57 |
ianw | although diff highlighting seems useful, maybe it's a missing package | 00:57 |
ianw | indeed, putting it in a "unified diff" creates this issue | 01:01 |
ianw | must trigger https://opendev.org/opendev/lodgeit/src/branch/master/lodgeit/lib/highlighting.py#L72 | 01:01 |
ianw | https://opendev.org/opendev/lodgeit/src/branch/master/lodgeit/lib/diff.py ... i guess that must not be failing, but not returning anything either | 01:02 |
ianw | fungi: heh, you touched it last; under the official rules of open source maintainership i believe that means you own it :) https://opendev.org/opendev/lodgeit/commit/1970b754158ec98a072e15cbfc15e42bfff7d1fc | 01:06 |
fungi | d'oh! | 01:13 |
fungi | and yeah, i've used the diff mode in lodgeit many times. i wonder what broke. does it rely on pygments for the diff colorization? | 01:16 |
fungi | seems so | 01:17 |
ianw | sort of, but it has its own parsing | 01:17 |
ianw | i think https://opendev.org/opendev/lodgeit/src/branch/master/lodgeit/lib/highlighting.py#L121 is saying | 01:18 |
ianw | "if our parser can't handle it, run it through basic pygments" | 01:19 |
ianw | with # TODO: we do not yet support diffs made by GNU Diff! | 01:19 |
ianw | i presume that means context diff format | 01:19 |
fungi | yeah | 01:21 |
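
The fallback ianw is describing boils down to: try lodgeit's own unified-diff renderer first, and only hand the text to a generic pygments diff lexer when that parser gives up. A minimal sketch of that shape, using a placeholder parse_udiff() rather than the real lodgeit function:

```python
# Sketch of the "own parser first, pygments as fallback" pattern described
# above. parse_udiff() is a stand-in, not the actual lodgeit implementation.
from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import DiffLexer


def parse_udiff(code):
    """Placeholder: return rendered HTML, or None for formats the parser
    does not understand (e.g. GNU/context diffs per the TODO above)."""
    return None


def render_diff(code):
    rendered = parse_udiff(code)
    if rendered is not None:
        return rendered
    # Parser couldn't handle it: fall back to plain pygments highlighting.
    return highlight(code, DiffLexer(), HtmlFormatter(linenos=True))


print(render_diff("--- a/x\n+++ b/x\n@@ -1 +1 @@\n-old\n+new\n"))
```
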
ianw | Aug 4 00:55:58 paste01 docker-lodgeit[647]: File "/usr/local/lib/python3.7/site-packages/lodgeit/lib/diff.py", line 131, in _parse_udiff | 01:21 |
ianw | Aug 4 00:55:58 paste01 docker-lodgeit[647]: line = lineiter.next() | 01:21 |
ianw | Aug 4 00:55:58 paste01 docker-lodgeit[647]: AttributeError: 'list_iterator' object has no attribute 'next' | 01:21 |
ianw | that actually *does* seem like the bit you touched :) | 01:21 |
ianw | that probably needs to be next() now | 01:23 |
fungi | because of newer python i guess? | 01:24 |
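
For reference, the breakage in that traceback is the Python 2 to 3 iterator protocol change: Python 2 iterators exposed a .next() method, while Python 3 renamed it to __next__() and you call the next() builtin instead (available since Python 2.6, so the fix works on both). A generic illustration, not the lodgeit patch itself:

```python
# Python 2 iterators had a .next() method; Python 3 removed it in favour of
# the next() builtin, which is what the traceback above is complaining about.
lineiter = iter(["@@ -1 +1 @@", "-old", "+new"])

# Python 2 style -- AttributeError on Python 3:
#   line = lineiter.next()

# Works on Python 3 (and Python 2.6+):
line = next(lineiter)
print(line)
```
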
opendevreview | Ian Wienand proposed opendev/lodgeit master: diff parser: update calls to next() https://review.opendev.org/c/opendev/lodgeit/+/803408 | 01:26 |
Clark[m] | There was an error promoting python-base:3.9 this time around. I'm less concerned about that due to the previous promotes and not much if anything uses 3.9. also it may have failed in a way that actually promoted anyway? I'm not sure about that | 01:27 |
fungi | yeah, i guess we were running it with 2.7 previously | 01:27 |
Clark[m] | https://hub.docker.com/layers/opendevorg/python-base/3.9/images/sha256-5f50e3eac7af5c5a4e6a200595949d2bfb341139d41d5062b76d70c3a2b5d3f1?context=explore looks new to me | 01:28 |
Clark[m] | I suspect we're good with base images | 01:29 |
Clark[m] | ianw: that fix to lodgeit looks about right to me. I think next() works on python2 as well | 01:30 |
fungi | ianw: i have a feeling that may need deeper changes, "next" is also a function parameter | 01:31 |
ianw | heh of course it is | 01:32 |
Clark[m] | fungi: oh are we overriding a built in? | 01:32 |
fungi | DiffRenderer._highlight_line has a "next" parameter | 01:32 |
fungi | Clark[m]: shadowing it with a variable, yeah | 01:32 |
ianw | i think we lucked out that it doesn't use .next() in there, but i can update it | 01:34 |
opendevreview | Ian Wienand proposed opendev/lodgeit master: diff parser : update next variable https://review.opendev.org/c/opendev/lodgeit/+/803409 | 01:34 |
fungi | though only in the _highlight_line() method so it may not need more, yeah | 01:35 |
fungi | i just finished looking for other cases and that was the only place, which didn't need the next() builtin treatment | 01:36 |
fungi | and yeah, it uses nextline for the same thing in other places, so that's a good cleanup anyway, thanks! | 01:37 |
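
The concern about the parameter is that a local name called next shadows the builtin for the whole method body, so any next(iterator) call added there later would fail; renaming it to nextline sidesteps that. A contrived illustration with hypothetical signatures (the real DiffRenderer method differs):

```python
# A parameter named "next" hides the next() builtin inside the function, so
# a later next(iterator) call there would raise TypeError. Hypothetical
# signatures only; the real DiffRenderer._highlight_line differs.
def _highlight_line(line, next):
    # "next" is the following line here, not the builtin.
    return line.upper()


def _highlight_line_fixed(line, nextline):
    # With the parameter renamed, the builtin stays usable in this scope.
    it = iter([line, nextline])
    return next(it).upper()


print(_highlight_line_fixed("-old", "+new"))
```
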
opendevreview | Ian Wienand proposed openstack/project-config master: Add CentOS 8 Stream wheel publish jobs https://review.opendev.org/c/openstack/project-config/+/803411 | 02:06 |
opendevreview | Ian Wienand proposed openstack/project-config master: Add CentOS 8 Stream wheel publish jobs https://review.opendev.org/c/openstack/project-config/+/803411 | 02:07 |
opendevreview | xinliang proposed zuul/zuul-jobs master: Fix install podman error on Ubuntu aarch64 Bionic https://review.opendev.org/c/zuul/zuul-jobs/+/803413 | 02:34 |
opendevreview | xinliang proposed openstack/diskimage-builder master: Introduce openEuler distro https://review.opendev.org/c/openstack/diskimage-builder/+/784363 | 03:08 |
opendevreview | xinliang proposed zuul/zuul-jobs master: Fix install podman error on Ubuntu aarch64 Bionic https://review.opendev.org/c/zuul/zuul-jobs/+/803413 | 03:22 |
opendevreview | xinliang proposed zuul/zuul-jobs master: Fix install podman error on Ubuntu aarch64 Bionic https://review.opendev.org/c/zuul/zuul-jobs/+/803413 | 03:24 |
opendevreview | Ian Wienand proposed openstack/project-config master: Add CentOS 8 Stream wheel publish jobs https://review.opendev.org/c/openstack/project-config/+/803411 | 03:33 |
opendevreview | xinliang proposed zuul/zuul-jobs master: Fix install podman error on Ubuntu aarch64 Bionic https://review.opendev.org/c/zuul/zuul-jobs/+/803413 | 03:45 |
opendevreview | Merged opendev/lodgeit master: diff parser: update calls to next() https://review.opendev.org/c/opendev/lodgeit/+/803408 | 03:50 |
opendevreview | Ian Wienand proposed opendev/zone-opendev.org master: Remove review-test https://review.opendev.org/c/opendev/zone-opendev.org/+/803415 | 03:54 |
opendevreview | Merged opendev/lodgeit master: diff parser : update next variable https://review.opendev.org/c/opendev/lodgeit/+/803409 | 03:59 |
opendevreview | Ian Wienand proposed opendev/zone-opendev.org master: Remove review01.opendev.org records https://review.opendev.org/c/opendev/zone-opendev.org/+/803416 | 04:09 |
opendevreview | Ian Wienand proposed opendev/system-config master: borg-backup-server: log prune output to file https://review.opendev.org/c/opendev/system-config/+/803417 | 04:48 |
ianw | i'm running the backup prune now | 04:48 |
ianw | #status cleaned up review-test and review01 servers and related volumes, databases, etc. | 04:48 |
opendevstatus | ianw: unknown command | 04:48 |
ianw | #status log cleaned up review-test and review01 servers and related volumes, databases, etc. | 04:48 |
opendevstatus | ianw: finished logging | 04:49 |
ianw | timburke_ fungi clarkb: i pulled those fixes and restarted paste and https://paste.opendev.org/show/807865/ now shows up as a diff. it lives to fight another day ... | 04:55 |
*** ykarel|away is now known as ykarel | 05:08 | |
opendevreview | Ian Wienand proposed opendev/system-config master: lodgeit: disable getRecent API endpoint https://review.opendev.org/c/opendev/system-config/+/803418 | 05:11 |
*** marios is now known as marios|ruck | 06:06 | |
*** jpena|off is now known as jpena | 07:01 | |
xinliang | ianw: please help to review these patches when you have time. https://review.opendev.org/c/zuul/zuul-jobs/+/803413 | 07:07 |
xinliang | plus this one https://review.opendev.org/c/openstack/diskimage-builder/+/784363 | 07:08 |
*** rpittau|afk is now known as rpittau | 07:13 | |
*** ksambor|away is now known as ksambor | 08:14 | |
*** ykarel is now known as ykarel|lunch | 08:43 | |
*** ykarel|lunch is now known as ykarel | 09:23 | |
*** diablo_rojo is now known as Guest3281 | 09:49 | |
*** Guest3281 is now known as diablo_rojo | 10:09 | |
*** dviroel|out is now known as dviroel | 11:16 | |
*** jpena is now known as jpena|lunch | 11:26 | |
*** marios is now known as marios|ruck | 11:27 | |
*** jpena|lunch is now known as jpena | 12:32 | |
opendevreview | Sorin Sbârnea proposed zuul/zuul-jobs master: Include podman installation with molecule https://review.opendev.org/c/zuul/zuul-jobs/+/803471 | 12:37 |
opendevreview | Sorin Sbârnea proposed openstack/project-config master: Allow elastic-recheck cores to create branches https://review.opendev.org/c/openstack/project-config/+/803473 | 13:15 |
opendevreview | Ananya proposed opendev/elastic-recheck master: Run elastic-recheck container https://review.opendev.org/c/opendev/elastic-recheck/+/729623 | 13:22 |
opendevreview | Jeremy Stanley proposed openstack/project-config master: Reintroduce x/tap-as-a-service shared ACL https://review.opendev.org/c/openstack/project-config/+/803480 | 13:41 |
fungi | i need to pop out to some errands, but should be back by ~16:00 utc | 14:54 |
clarkb | fungi: when you get back https://review.opendev.org/c/opendev/system-config/+/803366 should be a safe one to approve if it passes your review | 14:56 |
clarkb | I'm going to take a look at modifying the known hosts fixup to automatically add in the gitea ssh host keys | 14:57 |
clarkb | I think that won't work because ansible only knows about the port 22 host keys | 14:58 |
clarkb | But I'll look at the facts cache and see if I can say that for certain | 14:58 |
opendevreview | Merged openstack/project-config master: Reintroduce x/tap-as-a-service shared ACL https://review.opendev.org/c/openstack/project-config/+/803480 | 15:00 |
*** jpena is now known as jpena|off | 15:05 | |
opendevreview | Sorin Sbârnea proposed zuul/zuul-jobs master: Include podman installation with molecule https://review.opendev.org/c/zuul/zuul-jobs/+/803471 | 15:10 |
*** ykarel is now known as ykarel|away | 15:33 | |
yoctozepto | config-core: morning! please consider merging https://review.opendev.org/c/openstack/project-config/+/802744 :-) | 15:55 |
*** marios|ruck is now known as marios|out | 16:00 | |
opendevreview | Sorin Sbârnea proposed zuul/zuul-jobs master: Include podman installation with molecule https://review.opendev.org/c/zuul/zuul-jobs/+/803471 | 16:11 |
*** rpittau is now known as rpittau|afk | 16:14 | |
opendevreview | Merged openstack/project-config master: Allow kolla cores to edit kolla hashtags https://review.opendev.org/c/openstack/project-config/+/802744 | 16:17 |
fungi | clarkb: rackspace has opened a ticket saying they need to offline migrate elasticsearch07, and that they're planning to reboot it on saturday "at or after 21:52 utc" | 16:29 |
fungi | though we can reboot it ourselves any time before then to start the migration if we prefer | 16:29 |
clarkb | fungi: ok, in theory ES will handle that gracefully | 16:29 |
fungi | i assume we want to initiate the reboot ourselves, like todayish? | 16:30 |
clarkb | we can also do that. You want to check the cluster status and if it is green then we can reboot | 16:30 |
fungi | do we need to double-check all the shards are sane first? | 16:30 |
clarkb | if it isn't green we want to figure that out and get it green then we can reboot | 16:30 |
fungi | yeah, that | 16:30 |
clarkb | I'm deep into zuul sos reviews right now, but I can check that later today if we want. Or happy to help someone else check it | 16:30 |
clarkb | there is a base cluster status rest api request you can make and it will tell you red, yellow, green. If not green then probably one of the cluster members has died; you just check which one isn't running and restart it | 16:31 |
clarkb | is the usual thing | 16:31 |
fungi | http://logstash.openstack.org/elasticsearch/_status has a lot of detail | 16:31 |
fungi | but says all 70 shards are successful and none are failed | 16:31 |
fungi | i expect that's the same as "green" | 16:32 |
clarkb | fungi: http://logstash.openstack.org/elasticsearch/_cluster/health shows green so ya | 16:33 |
clarkb | in that case it should be completely safe to reboot es07. Just double check that elasticsearch has started again post reboot and start it if not (I can't remember if we allow it to auto start) | 16:33 |
fungi | ahh, perfect | 16:33 |
fungi | so i can just reboot the server, no need to gracefully stop the service on it | 16:33 |
clarkb | you'll see the cluster degrade to yellow while 07 is out of the cluster, and then again when 07 rejoins and it rebalances | 16:33 |
clarkb | fungi: shouldn't need to. systemd will ask elasticsearch to stop | 16:34 |
fungi | as long as it doesn't take so long that systemd gets impatient, anyway | 16:34 |
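
The health check clarkb describes is a single GET against the _cluster/health endpoint; the interesting fields are "status" (green/yellow/red) and "unassigned_shards". A minimal sketch using the URL pasted above:

```python
# Minimal cluster-health poll as described above: GET _cluster/health and
# inspect the "status" field. URL matches the one pasted earlier.
import json
from urllib.request import urlopen

URL = "http://logstash.openstack.org/elasticsearch/_cluster/health"

with urlopen(URL) as resp:
    health = json.load(resp)

print(health["status"])                  # "green" means safe to reboot a member
print(health.get("unassigned_shards"))   # non-zero while a member is out
```
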
opendevreview | Merged opendev/system-config master: Use the gitea api in the gitea renaming playbook https://review.opendev.org/c/opendev/system-config/+/803366 | 16:59 |
clarkb | I think there are a few more changes that have hit the node request orphaning issue. I'll dequeue and enqueue them now | 17:04 |
clarkb | because I'm paranoid I'm going to rereview my proposed user cleanups list now. But if I don't find anything scary in it I will proceed with retiring this batch. Final cleanup can happen in a few weeks as usual | 17:09 |
fungi | sounds good | 17:11 |
fungi | i did a test reboot of elasticsearch07 and the server's been up for half an hour now, but the cluster status is still "yellow" (12 unassigned shards). it looks like the service did not start on boot, and systemd reports elasticsearch.service is disabled, inactive and dead | 17:17 |
fungi | do we normally manually start these after reboot? | 17:17 |
clarkb | fungi: ya I wasn't sure how we had those set up which is why I mentioned we needed to check | 17:17 |
clarkb | it wouldn't surprise me if that was intentional in order to have control over the order of startups and cluster bring up | 17:18 |
clarkb | note it will still be yellow after starting 07's service again as the cluster has to rebalance itself | 17:18 |
fungi | sure | 17:20 |
fungi | that's what i thought was happening at first and after 30 minutes of yellow i decided to look closer | 17:20 |
fungi | i manually started the service unit and now it claims to be running | 17:20 |
fungi | and now there's no unassigned shards and 4 being initialized | 17:21 |
fungi | and the count is falling | 17:23 |
fungi | i assume once that reaches 0 it'll be back to green | 17:23 |
fungi | then i'll do another outage for 07 to actually perform the migration step | 17:23 |
clarkb | yup once initializing is done it should be green | 17:24 |
clarkb | you may see migrating shards after that but they don't affect cluster health, that is just an optimization to balance disk use | 17:24 |
fungi | cool | 17:28 |
fungi | okay, cluster is green again, now i'm taking 07 down for the offline host migration | 17:58 |
clarkb | sounds good thanks | 18:02 |
*** ricolin_ is now known as ricolin | 18:02 | |
clarkb | I've reviewed my list of account retirements and decided to update one of them which I cross-checked with fungi and we agree both are fine but I think one is more likely to be correct if this user shows up 8 years later | 18:03 |
clarkb | I'm going to take a break here and will run the retirement script against this list after lunch (I'm worried lunch will be late if I run the script first :) ) | 18:03 |
clarkb | 803039 may be hit by the zuul node request issue too. But I suspect we're now getting close enough to potentially restarting zuul that just waiting might be quicker for that one? | 18:07 |
fungi | yeah, 803407,2 still has a couple of jobs queued | 18:14 |
clarkb | All of the previous changes for the improvements to gitea project management have landed https://review.opendev.org/c/opendev/system-config/+/803367 is definitely the biggest in scope and probably has the most potential for breakage but it is tested | 18:20 |
clarkb | corvus: mordred: ^ you may be interested in that one in particular since you were involved in the original authorship there | 18:20 |
fungi | clarkb: is the queued state for the two remaining jobs on 803407,2 due to the problem it's intended to solve? irony if so... | 18:30 |
clarkb | fungi: could be | 18:31 |
clarkb | fungi: zuul doesn't require clean check so maybe if the unit test jobs pass we approve it and let it try again there? | 18:31 |
fungi | assuming the unit test jobs complete i'll go ahead and approve it, which will supersede in the gate anyway, yeah | 18:31 |
fungi | i finished reviewing it a few minutes ago and was just waiting to approve it since the check jobs looked close to completing, but now that it still hasn't gotten nodes assigned to a couple of them i expect that's not going to happen without some disruption anyway | 18:33 |
fungi | oh, though the py36 job just failed for it | 18:33 |
*** diablo_rojo is now known as Guest3316 | 18:34 | |
fungi | and now the py38 job has as well | 18:34 |
*** Guest3316 is now known as diablo_rojo | 18:35 | |
clarkb | I'm going to sort out lunch now but back in a bit to do user retirements and help wit hzuul if I can | 18:48 |
johnsom | Here is an odd one. https://zuul.opendev.org/t/openstack/build/b807a84de4f84106be78a1a11eb68410/logs undercloud/var/log/containers/octavia. The worker.log file. The raw link pulls up the log fine, but the "pretty" (and linkable) link throws a "This logfile could not be found" error. | 18:52 |
fungi | also very slow to browse... large log somewhere in that manifest maybe? | 19:01 |
johnsom | Probably, these tripleo jobs seem to have a ton of stuff captured | 19:01 |
fungi | yeah, might just be slow from sheer file count | 19:01 |
fungi | it's certainly reminding me how much i need a new workstation | 19:02 |
johnsom | lol | 19:03 |
johnsom | Even on mine the fans spin up | 19:03 |
fungi | johnsom: https://zuul.opendev.org/t/openstack/build/b807a84de4f84106be78a1a11eb68410/log/logs/undercloud/var/log/containers/octavia/worker.log *is* loading for me at least, albeit... ...very... ...slowly... | 19:03 |
johnsom | If I don't use the "raw" link I get: Network Error (Unable to fetch URL, check your network connectivity, browser plugins, ad-blockers, or try to refresh this page) https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b80/801400/3/check/puppet-octavia-tripleo-standalone/b807a84/logs/undercloud/var/log/containers/octavia/worker.log | 19:04 |
johnsom | But clicking that link also loads it fine. | 19:04 |
fungi | could be your browser (or a plugin/extension) blocking the fetch? | 19:04 |
fungi | or maybe you need bigger fans | 19:04 |
johnsom | Maybe I need a bigger fan | 19:05 |
fungi | my younger brother is all about evaporative liquid cooling these days | 19:05 |
johnsom | The octavia.log right above it opens fine. | 19:05 |
johnsom | I have not gone that far (yet). So far I have stuck to air cooling with massive (quiet) heatsinks | 19:06 |
fungi | another possibility is that ovh's swift endpoint is randomly failing/refusing requests | 19:06 |
fungi | but since i'm able to load the htmlified version of the worker.log i don't think it's failing for everyone | 19:07 |
fungi | elasticsearch cluster is back to "green" status following the elasticsearch07 offline migration, so i'm going to close that ticket | 19:09 |
fungi | #status log Offline migrated elasticsearch07.openstack.org in order to mitigate an unplanned service outage | 19:10 |
opendevstatus | fungi: finished logging | 19:10 |
johnsom | Developer tools give more info: Access to XMLHttpRequest at 'https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b80/801400/3/check/puppet-octavia-tripleo-standalone/b807a84/logs/undercloud/var/log/containers/octavia/worker.log' from origin 'https://zuul.opendev.org' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. | 19:11 |
fungi | interesting. we upload all those files to swift with the cors headers set | 19:12 |
fungi | my browser would also refuse to pull the file if it wasn't included | 19:13 |
fungi | so seems like it may be an ovh problem | 19:13 |
johnsom | Hmm, now it is loading for me. | 19:13 |
fungi | "eventually consistent" | 19:13 |
fungi | let me introduce you to openstack... | 19:14 |
johnsom | Oh, we go way back.... | 19:14 |
johnsom | It's probably DNS or the load balancer. GRIN | 19:14 |
fungi | yeah, who do i complain to about that again? | 19:14 |
johnsom | Not sure, let me get back to you. | 19:15 |
* johnsom thinks it's time for lunch | 19:15 | |
timburke_ | there may also be a bug in Swift's handling of non-2xx CORS responses. it wouldn't surprise me if we didn't gracefully handle CORS in 503 or ratelimited responses, for example | 19:15 |
johnsom | Yeah, so playing around a bit, I got a similar error on job-output.json.gz with dev tools loaded up. The response was a 404 without the CORS header | 19:21 |
johnsom | https://www.irccloud.com/pastebin/vByuEb8v/ | 19:22 |
fungi | neat | 19:23 |
fungi | yeah my reading of these tea leaves is that ovh's swift is maybe-eventually-but-not-presently consistent | 19:24 |
timburke_ | interesting -- we've actually got a test for that (https://github.com/openstack/swift/blob/master/test/cors/test-object.js#L83-L85) but maybe the LB is dropping the Access-Control-Allow-Origin header for non-2xx responses. i'd check in with my old OVH contacts, but they've moved on to other things :-( | 19:29 |
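
A quick way to test that theory is to request the object with an Origin header and check whether Access-Control-Allow-Origin comes back on both success and error responses. A small sketch with a placeholder object URL (the real one is in johnsom's paste above):

```python
# Fetch a swift object with an Origin header and report whether the
# Access-Control-Allow-Origin header is present, including on 4xx/5xx
# responses. The URL is a placeholder; substitute the object from the logs.
import urllib.error
import urllib.request

url = "https://storage.example.net/v1/AUTH_xxx/container/worker.log"
req = urllib.request.Request(url, headers={"Origin": "https://zuul.opendev.org"})
try:
    resp = urllib.request.urlopen(req)
except urllib.error.HTTPError as err:
    resp = err                           # error responses still carry headers
print(resp.getcode(), resp.headers.get("Access-Control-Allow-Origin"))
```
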
fungi | yeah, swift ops moving on to other things also possibly related to running outdated swift ;) | 19:35 |
fungi | which is cause and which is effect could be left open to interpretation | 19:36 |
clarkb | user retirements are running now | 20:19 |
clarkb | I'll get the log posted when done | 20:19 |
fungi | thanks! | 20:30 |
*** dviroel is now known as dviroel|out | 20:38 | |
clarkb | ok one failed because I had a network blip and need to rerun for that. Otherwise two things I noticed as I paid attention to it going by. One of the accounts is frickler's I think and the other was for Mellanox Cinder CI | 20:42 |
clarkb | in the case of the mellanox cinder ci account it doesn't seem to have left any comments in years so I'm fairly certain that one is fine but heads up if mellanox or cinder wonder about that | 20:42 |
clarkb | the one that failed has been rerun and the log is in the typical place. Now to pm frickler with details in case we need to revert that one | 20:45 |
clarkb | ok that's all done. I think we are done for now and can follow up with the second pass of cleanups on these accounts if no one complains | 20:49 |
*** timburke_ is now known as timburke | 20:57 | |
opendevreview | Clark Boylan proposed opendev/system-config master: Cleanup unused puppet modules from modules.env https://review.opendev.org/c/opendev/system-config/+/803518 | 21:10 |
clarkb | we're making progress ^ :) | 21:10 |
clarkb | ianw: responded to https://review.opendev.org/c/opendev/system-config/+/802922 the two different sshds does indeed confuse things | 21:23 |
ianw | 1007G 299G 709G 30% /opt/backups-202010 | 22:18 |
ianw | prune worked well | 22:18 |
opendevreview | Ian Wienand proposed openstack/project-config master: Add CentOS 8 Stream wheel publish jobs https://review.opendev.org/c/openstack/project-config/+/803411 | 22:20 |
clarkb | nice | 22:20 |
ianw | ahh indeed i forgot about the 222 thing | 22:24 |
ianw | speaking of ssh, the last thing I have on the cleanup list now is "decide on sshfp records" | 22:35 |
ianw | i'm mostly of the mind to just keep it as is, with no sshfp records for review. bridge / admins want the sshfp records for the port 22 ssh. normal people want it for gerrit ssh. i don't want to access it via a separate name | 22:37 |
opendevreview | Merged opendev/zone-opendev.org master: Remove review-test https://review.opendev.org/c/opendev/zone-opendev.org/+/803415 | 22:37 |
opendevreview | Merged opendev/zone-opendev.org master: Remove review01.opendev.org records https://review.opendev.org/c/opendev/zone-opendev.org/+/803416 | 22:38 |
ianw | Zuul encountered a syntax error while parsing its configuration in the | 22:40 |
ianw | repo openstack/project-config on branch master. : line 8293 : Unknown projects: x/tap-as-a-service | 22:40 |
ianw | do we know about that? | 22:40 |
ianw | https://review.opendev.org/c/openstack/project-config/+/803411 | 22:40 |
fungi | ianw: that got renamed over the weekend, i bet that requires a full-reconfigure | 22:42 |
fungi | but we're angling to restart zuul soon anyway for the reissued node request bug | 22:42 |
clarkb | fungi: ianw that issue won't be fixed by the restart, it's in the config and we need to remove it | 22:48 |
clarkb | er update it to say openstack/tap-as-a-service I guess | 22:48 |
clarkb | infra-root I've just noticed that mysql dump for zanata doesn't seem to be doing anything useful right now on translate01 | 22:52 |
clarkb | we get a small file with only comments on it | 22:52 |
clarkb | Noticed this when looking into an error reported to openstack-discuss | 22:52 |
clarkb | oh except we do the borg stream thing now so the local backups being sad may not be an issue | 22:53 |
clarkb | yup that has contents that look like successful mysql dumps, nevermind | 22:56 |
clarkb | In the thread I suggest that maybe the solution here is to delete the user and start over again and I didn't want to do that if we didn't have working backups | 22:56 |
opendevreview | Clark Boylan proposed openstack/project-config master: Rename x/tap-as-a-service to openstack/tap-as-a-service https://review.opendev.org/c/openstack/project-config/+/803524 | 23:01 |
clarkb | fungi: ianw ^ that should fix the project-config error | 23:01 |
ianw | yeah weirdly i've pushed several changes and that's the first time it's come up | 23:02 |
fungi | aha | 23:02 |
fungi | we missed that too when renaming | 23:02 |
clarkb | ianw: zuul will only check it if you modify the zuul config iirc | 23:02 |
clarkb | otherwise it knows that its ok to keep running with the broken config | 23:03 |
clarkb | "ok" | 23:03 |
clarkb | fungi: yes I think we should go to split changes for renames again | 23:03 |
ianw | yeah it didn't make this comment on either of the two previous revisions @ https://review.opendev.org/c/openstack/project-config/+/803411 | 23:04 |
clarkb | huh that is curious | 23:04 |
clarkb | ianw: thanks for the +2 on the known_hosts change. At this point in the day my priority is helping with that zuul fix and restart. I'll try to update hostvars tomorrow and land that change though | 23:11 |
opendevreview | Clark Boylan proposed opendev/system-config master: Cleanup unused puppet modules from modules.env https://review.opendev.org/c/opendev/system-config/+/803518 | 23:14 |
clarkb | apparently the httpd module uses the firewall module, I did not expect that | 23:14 |
opendevreview | Merged openstack/project-config master: Rename x/tap-as-a-service to openstack/tap-as-a-service https://review.opendev.org/c/openstack/project-config/+/803524 | 23:17 |