Wednesday, 2026-04-01

@fungicide:matrix.orgswap was just about exhausted on lists01 and the slowness had all the apache worker slots in use, so i'm restarting services on it again to hopefully free up some of the memory pressure temporarily10:58
@fungicide:matrix.orgas for the debian-security mirror, at least one package has a bad checksum (likely truncated when volume quota was exceeded) so some reprepro surgery is going to be required. i'll try to fiddle with it more while i'm monitoring the openstack release process11:00
@fungicide:matrix.org`WRONG SIZE of '/afs/.openstack.org/mirror/debian-security/pool/main/l/linux/linux-image-6.12.74+deb13+1-rt-amd64-dbg_6.12.74-2_amd64.deb': expected 1038755676 found 1006632960`11:03
@fungicide:matrix.orgyeah, the file is shorter than it should be11:03
@fungicide:matrix.orgi've manually replaced it with a download from an official mirror, will see if that was the only problem11:14
@fungicide:matrix.orgseems like that did it, reprepro update succeeded this time and vos release is underway finally11:22
@mnasiadka:matrix.orggood, some Kolla-Ansible jobs failed on apt (Mirror sync in progress?) - but otherwise I assume it's fine :)11:30
@fungicide:matrix.orgwell, our mirrors should always be consistent if we're pulling from official mirrors that are11:31
@fungicide:matrix.orgwhen we update our mirrors we do so atomically, keeping old references around for a period of time so old package files don't disappear out from under running jobs11:32
@fungicide:matrix.orgfurther, the indices on our mirrors are built from the package files we have, so should never be out of sync11:33
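(For context, the publication flow described above amounts to two steps: reprepro pulls package files and rebuilds indices in the read-write AFS tree, then an OpenAFS `vos release` flips the read-only replicas that jobs consume to the new contents in one atomic step. A minimal sketch, with an illustrative volume name and confdir rather than the exact OpenDev setup:)

```
# pull packages from an upstream mirror and rebuild indices in the read-write AFS path
reprepro --confdir /etc/reprepro/debian-security update

# atomically publish the updated read-write volume to its read-only replicas
vos release mirror.debian-security
```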
@fungicide:matrix.orgmnasiadka: can you link a failed build?11:34
@mnasiadka:matrix.org@fungi: https://zuul.opendev.org/t/openstack/build/2f9e926bf4374313a28a8c5354685dfb/log/primary/logs/ansible/bootstrap-servers#1385011:35
@mnasiadka:matrix.orgE:Failed to fetch http://mirror.iad3.raxflex.opendev.org:8080/docker/debian/dists/trixie/stable/binary-amd64/Packages.bz2  File has unexpected size (30344 != 30221). Mirror sync in progress?11:36
@fungicide:matrix.orgmmm, that's also complaining about debian, while i'm working on debian-security (a different volume mounted at a different part of the tree)11:38
@fungicide:matrix.orgdebian-security vos release succeeded, i'm rerunning the mirror script now to make sure it's a no-op and then we should be in the clear there11:50
@fungicide:matrix.orgyep, all good now11:51
@fungicide:matrix.orgvolume is at 88% quota now that the dust has settled11:52
@fungicide:matrix.orgmnasiadka: your failures look like something changed in your apt configuration to start looking for secure-apt signed indices, which our mirrors have never provided (intentionally)12:01
@mnasiadka:matrix.orgthat's just a warning, we've always had that - but it worked, the error message is at the end - things are working now, so that had to be the mirror sync12:03
@fungicide:matrix.orgah, okay i misread12:03
@fungicide:matrix.orgoh!12:04
@fungicide:matrix.orgyour error is about a docker mirror?12:04
@mnasiadka:matrix.orgoh right12:05
@fungicide:matrix.orgthat :8080/docker is proxied to https://download.docker.com/linux/ so the broken index is coming from there12:09
@fungicide:matrix.orghttps://opendev.org/opendev/system-config/src/commit/dd2f00a/playbooks/roles/mirror/templates/mirror.vhost.j2#L317-L32012:10
@fungicide:matrix.orgin case it happens again, that's one not under our control, we just cache requests for it to reduce bandwidth/speed up downloads12:11
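(For readers following along, the stanza linked above is essentially a caching reverse proxy. A hypothetical equivalent, not the literal contents of mirror.vhost.j2, would look something like:)

```
# proxy /docker/ through to the upstream repository and cache responses on disk
ProxyPass        "/docker/" "https://download.docker.com/linux/"
ProxyPassReverse "/docker/" "https://download.docker.com/linux/"
CacheEnable disk "/docker/"
```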
@mnasiadka:matrix.orgah right12:14
@mnasiadka:matrix.orgAlthough there's deb-docker on the mirror12:14
@mnasiadka:matrix.orghttps://mirror.bhs1.ovh.opendev.org/deb-docker/12:15
@fungicide:matrix.orgat a different path12:15
@mnasiadka:matrix.orgSo we could switch to that12:15
@fungicide:matrix.orgin theory, if it has what you need12:15
@fungicide:matrix.orgthat should be more coherent12:15
@fungicide:matrix.orgin unrelated news, apache on lists01 is already back to being entirely unresponsive again12:15
@fungicide:matrix.orgi feel like we should just go ahead with https://review.opendev.org/c/opendev/system-config/+/981932 at this point to see if it helps at all12:17
@harbott.osism.tech:regio.chat+2, feel free to approve yourself. if anubis really helps us, maybe the foundation could actually invest a bit into sponsoring them?12:23
@fungicide:matrix.orgyeah, that's certainly a discussion worth having12:23
@fungicide:matrix.orgsince we've got 4 root sysadmins in favor (assuming Clark is in favor of his own change), i'm approving it. easy enough to revert if it causes more problems than it solves, but at the moment things can't get much worse on that server12:25
@vurmil:matrix.orgInformation: Today, from time to time, I get the following error:12:27
fatal: unable to access 'https://opendev.org/openstack/ansible-collection-kolla/': LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to opendev.org:443
@vurmil:matrix.orgI haven’t changed anything, and I’ve never seen this issue before. I don’t know exactly how the backend is configured, but one of the backends might be causing the problem.12:28
@fungicide:matrix.orgyeah, ssl is terminated on the backend servers directly12:31
@fungicide:matrix.orgwas this an error from git or a browser?12:32
@gthiemonge:matrix.orgI got the same error with curl ^12:33
@vurmil:matrix.orggit curl12:34
@fungicide:matrix.orgload average in the past 5-10 minutes has definitely spiked on all 6 backends, up around 15-2012:35
@vurmil:matrix.organsible-lint roles/test-ovn-vpnaas12:35
ERROR Cloning into 'ansible-local-96095peafoi8a/tmpah283gux/ansible-collection-kolla3u6t7opg'...
fatal: unable to access 'https://opendev.org/openstack/ansible-collection-kolla/': LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to opendev.org:443
[ERROR]: Failed to clone a Git repository from `https://opendev.org/openstack/ansible-collection-kolla`: Command '['/usr/bin/git', 'clone', 'https://opendev.org/openstack/ansible-collection-kolla', 'ansible/tmp/ansible-local-96095peafoi8a/tmpah283gux/ansible-collection-kolla3u6t7opg']' returned non-zero exit status 128.
Failed to clone a Git repository from `https://opendev.org/openstack/ansible-collection-kolla`.
<<< caused by >>>
Command '['/usr/bin/git', 'clone', 'https://opendev.org/openstack/ansible-collection-kolla', 'ansible/tmp/ansible-local-96095peafoi8a/tmpah283gux/ansible-collection-kolla3u6t7opg']' returned non-zero exit status 128.
@fungicide:matrix.orgload average on gitea13 is at almost 3012:35
@fungicide:matrix.orgapache access log shows massive amounts of obvious crawler activity masquerading as human browsers12:37
@fungicide:matrix.orgi guess if anubis works on lists01 we can roll it out to the gitea backends fairly quickly too12:38
@fungicide:matrix.orgi think haproxy ended up removing all the gitea servers from the pool temporarily, but as a result the 5-minute load average on the backends fell to sub-1.012:43
@harbott.osism.tech:regio.chatsounds like we were very lucky to get the release mostly done before this :-/12:44
@fungicide:matrix.orgkinda wonder if the openstack release might have triggered it12:45
@harbott.osism.tech:regio.chatyes, I've just been thinking the same, docs.o.o has had a big update which might trigger crawlers to recrawl everything12:45
@fungicide:matrix.orgapache server-status on the backends indicates they're mostly out of available worker slots12:46
@fungicide:matrix.orgeven fetching server-status locally on some of them is taking a while12:48
-@gerrit:opendev.org- Zuul merged on behalf of Clark Boylan: [opendev/system-config] 981932: Apply Anubis to the Mailman lists server https://review.opendev.org/c/opendev/system-config/+/98193212:54
@fungicide:matrix.orgi'm going to try restarting apache on all of them, it looks like they have a bunch of workers stuck in "reading request" state12:54
@fungicide:matrix.orgthe bulk of those requests seemed to be crawler or browser activity, not git12:57
@vurmil:matrix.orgLooks like today’s “new OpenStack Gazpacho release” comes with a bonus feature: SSL_ERROR_SYSCALL as a service ;) 12:57
@fungicide:matrix.orgyeah, the apache restarts didn't do any good. backends all went back to approximately the same state within a minute or two12:59
@fungicide:matrix.orgin related news, as soon as the anubis implementation deployed to lists01, responsiveness picked way up. but also the logs are now flooded with anubis hits and crawlers that don't identify as web browsers13:04
@fungicide:matrix.orgi'm seeing how much work it might be to port Clark's anubis implementation from mailman3 to gitea, but that's going to take some time to roll out13:17
@clarkb:matrix.orgThinking out loud here: anubis would be a problem with gitea if we didn't balance by client IP but we do (since the cookie wouldn't be known)13:41
@clarkb:matrix.orgI can't come up with any concrete reason why it wouldn't work13:42
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/98306113:44
@fungicide:matrix.orgnaive attempt ^13:44
@fungicide:matrix.orgideally we'd have an implementation that integrated at the haproxy side instead, but that would take more experimentation and testing to work out the details on13:52
@clarkb:matrix.orgIt would require terminating ssl at haproxy too13:53
@fungicide:matrix.orgright13:54
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/98306814:04
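(A hypothetical haproxy fragment illustrating the load-balancing and TLS points from the exchange above: `balance source` keeps a client, and therefore its Anubis cookie, pinned to one backend, while an haproxy-side Anubis integration would additionally require terminating TLS at the load balancer. Server names and ports here are assumptions, not the actual OpenDev config:)

```
frontend git-https-fe
    bind :::443 v4v6
    mode tcp                   # TLS currently passes through haproxy untouched
    default_backend git-https-be

backend git-https-be
    mode tcp
    balance source             # hash on client IP so each client sticks to one backend
    server gitea13 gitea13.opendev.org:3081 check
# putting Anubis at this layer instead would mean "mode http" plus
# "bind ... ssl crt ..." above, i.e. terminating TLS here rather than on the backends
```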
@fungicide:matrix.orgstatus notice Git operations and repository browsing from the opendev.org site are currently experiencing overwhelming load and mostly unusable since 12:20 UTC, mitigation work is in progress14:09
@fungicide:matrix.orgshould we send something like that? ^14:09
@mnasiadka:matrix.org+114:10
@scott.little:matrix.orgseeing tons of errors like 'fatal: unable to access 'https://opendev.org/starlingx/manifest/': Encountered end of file' - is this something on your end, or should I be talking to our local IT? Sounds like you're fighting fires over there.14:14
@fungicide:matrix.orgyes, the services have been overwhelmed for almost 2 hours, we're actively working to mitigate it14:14
@fungicide:matrix.org#status notice The opendev.org site is currently experiencing overwhelming load adversely impacting git operations and repository browsing since 12:20 UTC today, mitigation work is in progress14:16
@fungicide:matrix.orgeven statusbot seems a little slow on the uptake today14:17
@fungicide:matrix.orgthe statusbot process on eavesdrop02 is still running though14:18
@fungicide:matrix.orgstatusbot_debug.log hasn't seen anything from matrix in about 20 minutes14:20
@fungicide:matrix.orgthough it's logged messages from irc much more recently14:21
@fungicide:matrix.orgmy first attempt at adapting anubis to the gitea backends is failing all our testinfra gitea tests with a `503 Service Unavailable`14:39
@status:opendev.org@fungicide:matrix.org: sending notice14:40
@fungicide:matrix.orgoh hey status! nice of you to make it14:40
@fungicide:matrix.orgi guess there was a 40-minute lag in its matrix session or something14:40
-@status:opendev.org- NOTICE: The opendev.org site is currently experiencing overwhelming load adversely impacting git operations and repository browsing since 12:20 UTC today, mitigation work is in progress14:43
@status:opendev.org@fungicide:matrix.org: finished sending notice14:43
-@gerrit:opendev.org- Dr. Jens Harbott proposed: [opendev/system-config] 983077: Block some UA used by crawlers https://review.opendev.org/c/opendev/system-config/+/98307714:54
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/98306814:58
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/98306114:58
@fungicide:matrix.orgautohold reset14:59
@fungicide:matrix.orgshould we status notice that gitea is back to working again? or give it a little longer?15:37
@clarkb:matrix.orgI still see some load spikes on nodes at times15:39
@clarkb:matrix.orgI'm not certain yet that they represent a specific issue. But I think there is reason to continue to be cautious15:39
@clarkb:matrix.orghowever the spikes do seem to be temporary and then things recover and they don't appear to occur cluster wide. So this may be "normal"15:40
@fungicide:matrix.orgsome progress on the anubis change. this time fetching /robots.txt is a 500 Internal Server Error, /.well-known/matrix/{server,client} are 404 Not Found, / by default is 500 Internal Server Error but with a filtered ua is hitting the anubis work prompt...15:58
@fungicide:matrix.orgprobably just some things i missed in the vhost config15:59
@clarkb:matrix.orgfungi: also notice anything interesting about: https://zuul.opendev.org/t/openstack/build/5fc38bee67794ced84179ab7b9a9502d/log/gitea99.opendev.org/containers/docker-anubis.log#615:59
@fungicide:matrix.org"REDIRECT_DOMAINS is not set, Anubis will only redirect to the same domain a request is coming from"16:02
@fungicide:matrix.orgthat i guess?16:02
@clarkb:matrix.orgfungi: that too. but the user agent is MSIE 7 tencent traveler. That almost certainly arrived from outside our test env16:03
@clarkb:matrix.orgthe crawlers are so aggressive now they are crawling our test nodes as we test things16:03
@clarkb:matrix.orgbut yes it's possible the redirect domains may be doing things we don't want. I suspect debugging this may become easier with the held node16:03
@fungicide:matrix.orgno, that's our test of the ua rules16:03
@fungicide:matrix.org`-A " Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)"`16:04
@clarkb:matrix.orgoh!16:04
@clarkb:matrix.orgok good16:04
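(As an aside on the UA blocking discussed just above: the quoted `-A " Mozilla/4.0 ..."` line is the spoofed user agent their test sends. A hypothetical Apache rule of the sort change 983077 adds, with the actual user agent list and mechanism living in the change itself, might look like:)

```
# tag requests from a known-bad crawler user agent and refuse them
SetEnvIfNoCase User-Agent "TencentTraveler" badcrawler
<Location "/">
    <RequireAll>
        Require all granted
        Require not env badcrawler
    </RequireAll>
</Location>
```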
@clarkb:matrix.orgthe screenshot test showing a bunch of 500 error pages is funny to me for some reason. But ya I suspect this is easiest debugged on the held node16:05
@clarkb:matrix.orgfungi: gitea loads continue to look good. I think you probably can send the all clear now if you want16:08
@fungicide:matrix.org#status notice Load on the opendev.org Gitea backends is under control again for now, if any Zuul jobs failed with SSL errors or disconnects reaching the service they can be safely rechecked16:11
@status:opendev.org@fungicide:matrix.org: sending notice16:11
@clarkb:matrix.orgfungi: https://zuul.opendev.org/t/openstack/build/5fc38bee67794ced84179ab7b9a9502d/log/gitea99.opendev.org/containers/docker-gitea.log#11199 I think this is why we get 500s16:14
-@status:opendev.org- NOTICE: Load on the opendev.org Gitea backends is under control again for now, if any Zuul jobs failed with SSL errors or disconnects reaching the service they can be safely rechecked16:14
@status:opendev.org@fungicide:matrix.org: finished sending notice16:14
@clarkb:matrix.orgboth gitea and apache were terminating tls and now we've got an http vhost in the middle, and I think that is confusing things16:16
@fungicide:matrix.orgah, yes that would make sense16:16
@clarkb:matrix.orgso we probably need to trace that through and make sure the correct protocols are selected16:16
@clarkb:matrix.orgalternatively we could stop terminating tls in gitea entirely16:17
@clarkb:matrix.orgas I think we force all traffic through haproxy and that talks to apache now16:17
@clarkb:matrix.org(there is no more direct access)16:17
@clarkb:matrix.orgheh the UA update failed in the gate due to a timeout on the zuul job. Because it was still in check, which passed, it ended up queuing into the gate again automatically16:28
@fungicide:matrix.orgoh! that makes sense16:34
@clarkb:matrix.orgfungi: we proxypass [::1]:11443 to the gitea port which is https. Since we're passing it straight through I think that we expect ssl there rather than http but anubis wants to talk http instead of https.16:35
@clarkb:matrix.orgThen to complicate things a bit further we serve the well known file for matrix clients from :11443 so I think we also need to terminate ssl there as well16:35
@clarkb:matrix.orgalternatively we could drop ssl from gitea and have it listen only on localhost and then make everything past anubis http on localhost16:35
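(To summarize the chain being described here, with the ports taken from the conversation and the exact vhost layout being my reading of it:)

```
# haproxy -> apache :3081 (terminates TLS on the backend)
#         -> anubis (per the discussion, it wants to speak plain http)
#         -> apache vhost on [::1]:11443 (also serves /.well-known/matrix/*)
#         -> gitea :3000 (currently https)
#
# so either the [::1]:11443 hop terminates TLS again, or gitea's :3000
# listener drops TLS entirely, which is the alternative floated above
```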
@fungicide:matrix.orgi think we wanted to do that anyway after we disabled remote access to the backends16:37
@clarkb:matrix.orgI think I would be ok with either approach16:39
@fungicide:matrix.orghow is robots.txt being handled? i guess that was baked into the gitea images?16:39
@clarkb:matrix.orgfungi: yes looks like it via system-config/docker/gitea/custom/public/robots.txt16:39
@clarkb:matrix.orgfungi: we could move the well know file into there potentially and then just proxy pass https with anubis speaking https16:40
@clarkb:matrix.orgas a third option that keeps ssl16:40
@fungicide:matrix.orgfor the moment i'm just moving the matrix files into the earlier vhost16:41
@clarkb:matrix.orgoh yes that should work just fine as well16:41
@clarkb:matrix.organd is probably the minimal change16:42
@clarkb:matrix.orgthe UA change failed on static talking to PPAs16:42
@clarkb:matrix.orgI wonder if PPAs have the same issue we have16:42
@fungicide:matrix.orgthey've been falling over here and there for a few weeks16:43
@fungicide:matrix.orgwhat's the difference between `HTTP_PORT` and `PORT_TO_REDIRECT` in gitea's app.ini?16:48
@fungicide:matrix.orgit seems like we don't use 3080 for anything16:48
@fungicide:matrix.orgbut we set `PORT_TO_REDIRECT = 3080`16:48
@clarkb:matrix.orgI'm not sure. They publish a config doc somewhere 16:56
@fungicide:matrix.orgoh!16:56
@fungicide:matrix.orggitea listens on 3080 with plain http and serves a redirect16:57
@fungicide:matrix.orgwe forward directly to that from haproxy16:57
@fungicide:matrix.orgbypassing apache on the backends16:57
@fungicide:matrix.orgapache only listens on 3081 for https, then forwards to gitea listening on 3000 (currently also https)16:58
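(For reference, a hypothetical `[server]` snippet matching the layout just described; the values are illustrative guesses, not the actual OpenDev app.ini:)

```
[server]
PROTOCOL            = https   ; gitea terminates TLS itself on the main port
HTTP_PORT           = 3000    ; the listener apache proxies to
CERT_FILE           = cert.pem
KEY_FILE            = key.pem
REDIRECT_OTHER_PORT = true    ; additionally listen on PORT_TO_REDIRECT with plain http
PORT_TO_REDIRECT    = 3080    ; and answer there with a redirect to the https URL
```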
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/98306817:05
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/98306117:05
@fungicide:matrix.orgi reset the autohold again17:06
-@gerrit:opendev.org- Clark Boylan proposed wip on behalf of Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/98306817:46
-@gerrit:opendev.org- Clark Boylan proposed on behalf of Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/98306117:46
@clarkb:matrix.orgfungi: the two gitea management roles were talking to https://localhost:3000. I updated them to talk to http://localhost:3000 instead. I did not refresh the autohold because I think I managed to push the update before the hold took effect but I'll try to confirm that now17:48
@clarkb:matrix.orgya none of the nodes in https://zuul.opendev.org/t/openstack/nodes say `fungi testing anubis for gitea` so i think the existing autohold is still valid17:49
@clarkb:matrix.orgfungi: I still have a set of held nodes for anubis and mailman. Should I clean that up now?17:54
@fungicide:matrix.orgClark: oh, yes we don't need the mailman held node, and thanks for the revision on the gitea change18:08
@fungicide:matrix.orgsince we're probably about an hour out still from having the held gitea node to play around with, i'm going to take this momentary calm to go out and grab some celebratory post-openstack-release lunch18:09
@fungicide:matrix.orgbbiaw18:10
-@gerrit:opendev.org- Zuul merged on behalf of Dr. Jens Harbott: [opendev/system-config] 983077: Block some UA used by crawlers https://review.opendev.org/c/opendev/system-config/+/98307718:41
@clarkb:matrix.orgthat is still in the process of being deployed. Once it is done I'll double check it and then grab lunch18:50
@clarkb:matrix.orggitea job reports success. Looking at the nodes now18:56
@clarkb:matrix.orgthe apache config file did update due to some space and comment differences. The actual content does not appear changed18:57
@clarkb:matrix.organd apache was reloaded18:57
@clarkb:matrix.orgok I've inspected the 6 nodes. Load looks reasonable and I'm still able to reach the service19:00
@clarkb:matrix.orgI think we can consider this particular thing done for now19:00
@clarkb:matrix.orgfungi: the anubis change is still having ssl problems. Possibly because I didn't sufficiently change https to http? Maybe because we still have the check certs flag on the http tasks?19:03
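(A hypothetical example of the kind of task being hunted down here; the module and parameters are a guess at the shape, not the actual role contents:)

```yaml
- name: Wait for gitea to come up
  uri:
    url: "http://localhost:3000/"   # previously https://localhost:3000/
    validate_certs: false            # the "check certs" style flag; moot once the scheme is plain http
    status_code: 200
  register: gitea_up
  until: gitea_up.status == 200
  retries: 30
  delay: 5
```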
@fungicide:matrix.orgoh maybe i didn't change it hard enough... looking19:56
@clarkb:matrix.orgfungi: I think I see the issue20:03
@fungicide:matrix.orgyou're ahead of me, i'm still catching up on all the e-mail that arrived while i was at lunch20:04
@clarkb:matrix.orgplaybooks/test-gitea.yaml has several https requests too20:04
@clarkb:matrix.orgok I'll make an update20:04
@fungicide:matrix.orgi thought i fixed those, but i was probably in a hurry and missed something20:04
@fungicide:matrix.orgoh, nevermind i fixed the ones in `testinfra/test_gitea.py`20:05
-@gerrit:opendev.org- Clark Boylan proposed wip on behalf of Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/98306820:06
-@gerrit:opendev.org- Clark Boylan proposed on behalf of Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/98306120:06
@clarkb:matrix.orgdo you have autohold stuff in your shell history? I think we need to recycle the autohold20:06
@fungicide:matrix.orgyep, we have test ansible tasks too. i totally overlooked that, you're absolutely right20:06
@fungicide:matrix.orgi'll reset it, sure20:06
@clarkb:matrix.orgya I overlooked them too when I was updating the configurator roles20:06
@fungicide:matrix.orgnew hold set now20:06
@clarkb:matrix.orgfungi: thinking out loud a bit. I half wonder if we should consider splitting the gitea anubis change. The first change would be switching to the http backend and no anubis. Then add anubis? That may make it easier to step through and debug if something goes wrong21:03
@clarkb:matrix.orgI think we can continue to test it combined as is but when we get to a point where we think we want to deploy it splitting it up may be useful?21:03
@fungicide:matrix.orgyes, i was having a similar thought. it initially didn't seem like getting rid of the local ssl connection would be complicated, but it has now ended up being at least twice as complex as adding anubis21:05
@fungicide:matrix.orglet's see the test results from this latest round first, and then i can work on splitting things21:06
@clarkb:matrix.orgsounds good21:07
@fungicide:matrix.orgthe ssl removal was an afterthought footnote in the commit message, but it's actually the majority of the change itself and has been the part that's required the most iteration21:07
@clarkb:matrix.orgyup and I think debugging that without anubis will be easier if we need to do that21:08
@fungicide:matrix.orgalso if we want to revert the anubis bits later while keeping the ssl cleanup, we won't need to untangle them at that point21:09
@fungicide:matrix.orgstill failing, but hopefully it's nearly there21:23
@clarkb:matrix.orgits the same issue in another playbook21:24
@clarkb:matrix.orgdo you want to update it or should I?21:24
@fungicide:matrix.orgah, nuts. i can get it21:24
@clarkb:matrix.orgfungi: `git grep https://localhost:3000` shows there are two locations21:25
@clarkb:matrix.orgI probably should've checked that the last time21:25
@fungicide:matrix.orgplaybooks/rename_repos.yaml and playbooks/roles/letsencrypt-create-certs/handlers/restart_gitea.yaml i think21:25
@clarkb:matrix.orgyup21:26
@clarkb:matrix.orgthinking out loud again I wonder if it is more minimal to just continue doing https everywhere? that is maybe slightly more expensive I guess21:27
@clarkb:matrix.orgbut add the same certs to the new vhost and then it can https through each step? and then we don't need to completely change all of this? but let's see if it works with http21:27
@clarkb:matrix.orgin other news. Artemis II moon mission launches in just under an hour21:29
@fungicide:matrix.orgokay, i've applied the outstanding fixes, now working on splitting the files into two commits21:29
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/98306821:46
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed:21:46
- [opendev/system-config] 983134: Remove intermediate HTTPS layer for Gitea backends https://review.opendev.org/c/opendev/system-config/+/983134
- [opendev/system-config] 983135: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/983135
@fungicide:matrix.orgoops, i missed copying over the change-id on split. fixing21:55
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/98306821:56
-@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed:21:56
- [opendev/system-config] 983134: Remove intermediate HTTPS layer for Gitea backends https://review.opendev.org/c/opendev/system-config/+/983134
- [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/983061
@fungicide:matrix.orgsystem-config-run-static is still bumping up against remote launchpad ppa errors22:21
@clarkb:matrix.orgfungi: looks like things may still be failing for gitea. but we can probably pick this up tomorrow. It was an early start today23:01
@fungicide:matrix.orgyeah, held gitea99 is ready at 158.69.66.28 if someone wants to investigate locally on it, but i'll wait for the system-config-run-gitea builds to complete on the other changes before looking at logs23:03
@fungicide:matrix.orgoh there they go23:03
@fungicide:matrix.org3080/tcp not listening23:04
@clarkb:matrix.orgmaybe it only does the redirect if you are also doing ssl?23:05
@clarkb:matrix.orgif that is the case we can do the redirect in apache23:05
@fungicide:matrix.orgyeah, that's where my mind was headed23:05
@fungicide:matrix.organd agreed, we have easy recipes for that, and it would make more sense there anyway23:05
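(One common recipe for doing that redirect in apache instead; this is hypothetical, and the listening port and hostname handling would need to match whatever haproxy forwards:)

```
Listen 3080
<VirtualHost *:3080>
    ServerName opendev.org
    RewriteEngine On
    # send any plain-http request on 3080 to the https site
    RewriteRule ^/?(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
</VirtualHost>
```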
@fungicide:matrix.orgi'll hack on that in the morning before openinfra.live23:06
@fungicide:matrix.orgi've been going for ~13.5 hours today though, so going to knock off finally23:06
@fungicide:matrix.orgmaybe watch more computer-generated images of what an orbital vehicle looks like23:07
@fungicide:matrix.orglater all23:07
