rlandy|ruck | https://4effb742a88be8659f07-40bd60678638a1db566d5d37b438f20d.ssl.cf5.rackcdn.com/822049/1/gate/openstack-tox-py36/8e12896/job-output.txt | 00:00 |
rlandy|ruck | oops | 00:00 |
rlandy|ruck | https://ba8cdf5843f45d47b0c9-7705306510805cbea58d1e45781b1e11.ssl.cf5.rackcdn.com/816571/1/gate/openstack-tox-py36/a662754/job-output.txt | 00:01 |
Clark[m] | That looks like the pypi fallback backend issue | 00:03 |
Clark[m] | It's a known issue. Basically pypi serves you stale content from the CDN if their primary CDN backend fails and the CDN fallsback to their backup. The backup has old data and the install fails because constraints request a version that is newer than the backup contains | 00:04 |
Clark[m] | As mentioned in the TC meeting today I think it would be appropriate for openstack to reach out to pypa about it. Openstack is primarily affected due to the use of constraints | 00:04 |
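A minimal sketch of the failure mode Clark describes, assuming a hypothetical pin (the real jobs install with -c upper-constraints.txt; the package and version here are only illustrative):

    # Constraints pin an exact release; if the CDN's stale fallback index
    # doesn't list that release yet, resolution fails outright.
    echo "oslo.utils===4.12.1" > constraints.txt   # hypothetical pin
    pip install -c constraints.txt oslo.utils
    # Against a stale /simple/oslo-utils/ page this ends with something like:
    # "ERROR: No matching distribution found for oslo.utils===4.12.1"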
rlandy|ruck | oh - back to that joy | 00:09 |
rlandy|ruck | thanks for the pointer | 00:10 |
rlandy|ruck | will raise it with the dev teams | 00:10 |
*** rlandy|ruck is now known as rlandy|out | 00:12 | |
fungi | is it cropping up again today? | 00:26 |
fungi | one thing frickler observed which seems to make things worse is that the broken/stale backend serves pages with a very long cache ttl compared to the primary backend | 00:27 |
fungi | so we end up caching and serving up the bad indices for far longer than the good ones in that case | 00:28 |
fungi | basically, our use of a caching proxy amplifies the problem, which could be another reason we seem to notice it more | 00:29 |
wxy-xiyuan_ | fungi: clarkb thanks very much for the quick debug and fix. | 00:49 |
fungi | of course! | 00:50 |
fungi | it's just a workaround, we still need to improve nodepool so it doesn't get confused by '.' in image names | 00:51 |
fungi | good news is the builder cleaned up the image records | 00:53 |
fungi | though it does seem to have left behind the files on disk | 00:54 |
fungi | clarkb: is it safe to delete those out of /opt/nodepool_dib/ once they're no longer mentioned by nodepool dib-image-list? | 00:56 |
Clark[m] | fungi: yes should be safe since nothing will try to use those files once the db records are gone | 01:01 |
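A cautious sketch of that cleanup (the final rm is deliberately a placeholder; compare each file against the listing by hand before removing anything):

    # Show what nodepool still tracks, then look at what's left on disk.
    nodepool dib-image-list
    ls -lh /opt/nodepool_dib/
    # Remove only files whose image name no longer appears in the listing above.
    sudo rm /opt/nodepool_dib/<orphaned-image-files>   # placeholder, not a real glob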
fungi | thanks, done! | 01:03 |
fungi | there were none on nb01, just 02 (amd64) and 03 (arm64) | 01:04 |
corvus | if anyone feels like a zuul review, i believe https://review.opendev.org/822062 is what is causing the login button to fail to appear | 01:14 |
corvus | (on opendev's zuul) | 01:15 |
fungi | sure thing | 01:32 |
*** ysandeep|out is now known as ysandeep | 01:39 | |
fungi | okay, this is an odd behavior | 02:07 |
fungi | i can't connect to http://lists.openinfra.dev/ with firefox, it insists on going to https. not the case for other virtual domains on that same server | 02:08 |
fungi | chromium works though | 02:11 |
*** sshnaidm is now known as sshnaidm|afk | 02:45 | |
fungi | wow how has this never come to my attention before? .dev is unilaterally included in the hsts preload list. you're actually not supposed to do http with any site on a .dev domain | 03:00 |
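One way to confirm the preload status from the command line, assuming the hstspreload.org status API works as remembered here (the endpoint is an assumption, not verified):

    # Query the preload list entry for the .dev TLD.
    curl -s 'https://hstspreload.org/api/v2/status?domain=dev'
    # Browsers shipping the Chromium preload list upgrade every .dev request to
    # https before it hits the network, so http://lists.openinfra.dev never
    # reaches the server at all.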
fungi | this throws a new wrench into plans | 03:00 |
fungi | i'll need to sleep on this | 03:01 |
wxy-xiyuan_ | openEuler arm64 node works now, while X86 ones still raise node_failure error. I assume it needs more time to upload the x86 image to the cloud provider? | 03:37 |
wxy-xiyuan_ | X86 works now! | 06:09 |
*** ysandeep is now known as ysandeep|brb | 06:19 | |
*** ysandeep|brb is now known as ysandeep | 06:57 | |
opendevreview | wangxiyuan proposed openstack/project-config master: Add cache-devstack element for openEuler https://review.opendev.org/c/openstack/project-config/+/822073 | 07:09 |
opendevreview | Merged openstack/project-config master: Add cache-devstack element for openEuler https://review.opendev.org/c/openstack/project-config/+/822073 | 07:37 |
ykarel | pypi proxies affected today as well | 07:53 |
mnaser | infra-root: it seems that the routes are hard-coded for ipv6 on some of the nodes running in our mtl dc | 08:30 |
mnaser | appreciate changing them to `2604:e100:1::1/64` and `2604:e100:1::2/64` for us to move forward with our dc migration :) | 08:31 |
frickler | mnaser: was that due to the issues with rogue RAs? do you have an indication which nodes might be doing that? | 08:48 |
mnaser | frickler: nothing on that yet unfortunately, we're hoping to bring the cloud up to latest release and see if that helps | 08:55 |
*** ysandeep is now known as ysandeep|lunch | 09:11 | |
*** ysandeep|lunch is now known as ysandeep | 09:42 | |
*** redrobot6 is now known as redrobot | 10:37 | |
*** sboyron_ is now known as sboyron | 10:52 | |
frickler | hmm, gbot seems to be off for the holidays, too | 10:53 |
frickler | infra-root: https://review.opendev.org/c/opendev/system-config/+/822095 Set CacheMaxExpire to 1h | 10:53 |
frickler | that would be my attempt at reducing the impact of bad cdn responses | 10:54 |
frickler | ykarel mentioned a database upgrade on pypi last night https://status.python.org/ | 10:54 |
frickler | that might explain why the impact seems to have been larger than usual this time | 10:54 |
frickler | in an attempt to clean up things, I'm running this on mirrors now manually: | 10:55 |
frickler | htcacheclean -D -a -v -p /var/cache/apache2/proxy/|grep simple|xargs -i -n1 htcacheclean -v -p /var/cache/apache2/proxy/ {} | 10:55 |
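The same command, just annotated for anyone reading along (cache path as used on the mirrors):

    # First pass (-D -a) only lists cached URLs without deleting anything;
    # grep keeps the pypi /simple/ index entries; xargs then purges each one.
    htcacheclean -D -a -v -p /var/cache/apache2/proxy/ \
        | grep simple \
        | xargs -i -n1 htcacheclean -v -p /var/cache/apache2/proxy/ {}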
dulek | There's another instance of deps issues, this time happening at ovh-bh1: https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_06f/821626/5/check/kuryr-kubernetes-tempest-defaults/06f44e4/job-output.txt | 10:59 |
*** rlandy_ is now known as rlandy|ruck | 11:11 | |
*** dviroel|out is now known as dviroel|rover | 11:14 | |
*** frenzy_friday is now known as anbanerj|ruck | 11:27 | |
dulek | frickler: ^ | 11:35 |
*** ysandeep is now known as ysandeep|afk | 11:51 | |
*** ysandeep|afk is now known as ysandeep | 12:13 | |
*** frenzy_friday is now known as anbanerj|ruck | 12:21 | |
*** tobias-urdin9 is now known as tobias-urdin | 13:02 | |
*** pojadhav is now known as pojadhav|brb | 13:24 | |
*** jpena|off is now known as jpena | 13:49 | |
*** ysandeep is now known as ysandeep|dinner | 14:08 | |
*** pojadhav|brb is now known as pojadhav | 14:19 | |
*** ysandeep|dinner is now known as ysandeep | 14:52 | |
frickler | infra-root: gerritbot has stopped logging today at 08:12 it seems, seems quiet since then. is there anything we could debug or shall we just restart it? | 14:56 |
frickler | dulek: please recheck and let us know if the issue persists | 14:56 |
fungi | nothing in its debug log? | 14:56 |
frickler | fungi: which debug log? the docker log is what I was talking about | 14:57 |
dulek | frickler: Already did, it happened a few times, but seems better now. | 14:58 |
fungi | frickler: yeah, it says to log to syslog, but doesn't seem to actually write anything into /var/log/syslog | 15:04 |
fungi | i agree docker-compose logs looks like it died 7 hours ago | 15:04 |
fungi | it's still in channel so my guess is it's somehow lost the gerrit event stream | 15:05 |
fungi | and hasn't realized it, so it doesn't try to reconnect | 15:05 |
fungi | frickler: so yes, probably nothing left to do but stop/start or down/up -d the container | 15:06 |
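Roughly what that restart looks like on the eavesdrop host; the compose directory is an assumption, adjust to wherever gerritbot's docker-compose.yaml actually lives:

    cd /etc/gerritbot-docker/           # hypothetical path
    docker-compose logs --tail=100      # confirm the event stream really went quiet
    docker-compose down
    docker-compose up -d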
frickler | fungi: right, will do that now | 15:07 |
*** rlandy|ruck is now known as rlandy|dr_appt | 15:07 | |
fungi | thanks! | 15:08 |
*** dviroel|rover is now known as dviroel|rover|lunch | 15:09 | |
frickler | seems to be doing fine again https://meetings.opendev.org/irclogs/%23starlingx/latest.log.html#t2021-12-17T15:10:43 | 15:16 |
fungi | excellent. likely we have a blind spot with some sorts of network disruption impacting the event stream | 15:19 |
frickler | fungi: mnaser mentioned something with ipv6 default routes earlier, that might have affected review02, too | 15:26 |
fungi | oh, yep great point | 15:27 |
fungi | and yes, i think we ended up with routes hard-coded into the review server in order to stop it from acting on bogus route announcements it was receiving | 15:27 |
fungi | checking | 15:28 |
frickler | fungi: yes, in netplan | 15:28 |
frickler | so it seems we should change those from the LL addresses we are currently using to the ones mnaser mentioned above | 15:29 |
fungi | looks like we added those in /etc/netplan/50-cloud-init.yaml | 15:29 |
fungi | right | 15:29 |
fungi | apparently we deploy it via playbooks/service-review.yaml | 15:30 |
frickler | I'll propose a patch | 15:30 |
fungi | perfect, i was about to ask if there was already one i could review | 15:30 |
fungi | that seems to be the only server we're doing it for | 15:31 |
fungi | at least the only server we're doing it that way for, in the system-config ansible | 15:31 |
Clark[m] | I think the mirror in that region also hardcodes routes via netplan | 15:32 |
fungi | possible we did something different for the mirror | 15:32 |
fungi | maybe not with system-config | 15:32 |
fungi | which region is that? ca-ymq-1 or sjc1? | 15:32 |
Clark[m] | Also if we break networking on that host we should have a recovery plan. ca-ymq-1 | 15:32 |
Clark[m] | Maybe do the mirror first to ensure it all works | 15:33 |
fungi | i think the recovery plan for review is to make the change locally, and if necessary we can reboot it via the nova api | 15:33 |
Clark[m] | Re gerritbot syslog we have syslog rules to write their output to /var/log/containers iirc | 15:33 |
fungi | then merge the fix in system-config if all is well | 15:33 |
fungi | clarkb: oh, yep! i see a /var/log/containers/docker-gerritbot.log on eavesdrop01 | 15:34 |
Clark[m] | fungi: you mean not via netplan but via route or whatever the command is now? | 15:34 |
fungi | clarkb: yeah, we can use ip -6 route add/del sure | 15:35 |
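A hedged sketch of that manual swap using the gateways mnaser listed earlier; the interface name is a placeholder:

    ip -6 route show                    # note the current default route first
    sudo ip -6 route replace default via 2604:e100:1::1 dev ens3   # "ens3" is a placeholder
    # or explicitly, as fungi suggests:
    # sudo ip -6 route del default via <old-gateway> dev ens3
    # sudo ip -6 route add default via 2604:e100:1::1 dev ens3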
fungi | obviously editing the netplan config wouldn't get reset on a reboot | 15:35 |
fungi | good to call that out | 15:36 |
Clark[m] | Right | 15:36 |
frickler | I can do the changes while connected via v4, that should be safe enough | 15:36 |
fungi | oh, that's also a great point, even if a reboot breaks v6 connectivity, we should be able to get to it over v4 anyway | 15:37 |
opendevreview | Dr. Jens Harbott proposed opendev/system-config master: Switch router addresses for review02 to global https://review.opendev.org/c/opendev/system-config/+/822109 | 15:37 |
Clark[m] | Ah yup. It is too early in the morning and I'm having a slow start but that is an excellent point :) | 15:38 |
Clark[m] | I'm not sure how to trigger netplan to reapply without a reboot either | 15:39 |
Clark[m] | But maybe editing the file and manually updating the routes is good enough | 15:40 |
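For the record, netplan can re-apply without a reboot; the auto-reverting variant is the safer one on a remote box, though whether it handles this particular route swap cleanly is untested here:

    sudo netplan apply                 # re-apply the current config immediately
    sudo netplan try --timeout 120     # apply, but roll back if not confirmed in time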
* frickler needs a break, will proceed in half an hour or so | 15:41 | |
gmann | It seems all my patches are facing the pypi CDN issue, but I did a recheck, let's see | 15:42 |
Clark[m] | I really think openstack needs to reach out to pypa over this. It isn't something we can fix (even lowering the max cache time won't help a ton) ourselves. And it primarily affects openstack due to openstack's use of constraints | 15:44 |
Clark[m] | Bringing it up in here isn't going to change much. Other than for me to continue to suggest it be brought up with pypa :) | 15:45 |
Clark[m] | I can start a draft of an issue in an etherpad once I've booted my morning. But would appreciate it if someone else submits it given responses I have received from pypa in the past | 15:55 |
*** gibi is now known as gibi_pto_back_on_10th | 16:02 | |
frickler | so I changed the routing manually on mirror01.ca-ymq-1.vexxhost.opendev.org , can still reach the outside world from it and back | 16:11 |
Clark[m] | frickler fungi I just remembered that I think pip sends cache control to tell the servers it doesn't care about cached values. It may be the case that lowering TTLs won't help | 16:11 |
frickler | Clark[m]: we set config in apache to ignore that | 16:15 |
frickler | see the comment after my change in https://review.opendev.org/c/opendev/system-config/+/822095/1/playbooks/roles/mirror/templates/mirror.vhost.j2 | 16:16 |
frickler | but that may also be a reason why no one else is really affected by this | 16:17 |
frickler | so maybe if 822095 doesn't help much either, the next step could be to try and drop CacheIgnoreCacheControl | 16:18 |
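For reference, a quick way to see what the mirrors are currently configured with, assuming the standard Debian/Ubuntu apache layout:

    # CacheMaxExpire caps how long a cached response may be reused regardless of
    # what the origin asked for; CacheIgnoreCacheControl makes the proxy ignore
    # no-cache requests from clients such as pip.
    grep -rE 'CacheMaxExpire|CacheIgnoreCacheControl' /etc/apache2/sites-enabled/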
clarkb | makes sense. | 16:32 |
clarkb | frickler: did you update the netplan on the mirror node too? | 16:32 |
*** dviroel|rover|lunch is now known as dviroel|rover | 16:33 | |
clarkb | frickler: I've approved the pypi proxy cache change and +2'd the review02 netplan update | 16:33 |
clarkb | checking the mirror node, the netplan file and ip -6 route output seem to be in agreement and use those newer addresses | 16:34 |
clarkb | frickler: fungi: I guess we proceed with review then? | 16:35 |
clarkb | did we want to approve and land the netplan file update before doing the manual update? if so can you review that change fungi? | 16:35 |
frickler | clarkb: I edited the netplan file for the mirror, yes. if you or fungi want to do review, go ahead, not sure how urgent this is for mnaser. I would continue tomorrow otherwise | 16:36 |
clarkb | ya I think we should land the netplan update for review02 then manually update the routes. I'm sure fungi or I can do the route update once the file is updated | 16:37 |
clarkb | thank you for getting that started for us and testing it works happily on the mirror node | 16:37 |
clarkb | frickler: how long are the TTLs on the backup backend indexes from pypi? | 16:37 |
clarkb | the primary produces 10 minute ttls iirc. But knowing how long the backup backend TTLs are may be useful for the issue draft I'll start momentarily | 16:38 |
*** ysandeep is now known as ysandeep|out | 16:39 | |
frickler | clarkb: well what I saw in the cache was 1d, which is also the old MaxExpire value, so we don't really know what the response was, could be anything >=1d | 16:40 |
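One way to see what the origin is sending at any given moment, bypassing the mirror cache (header names beyond Cache-Control and Age are Fastly-specific and may vary):

    curl -sI https://pypi.org/simple/setuptools/ \
        | grep -iE '^(cache-control|age|x-served-by|x-cache):'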
clarkb | frickler: thanks that is still helpful | 16:40 |
fungi | sorry, stepped away for a few, catching back up now | 16:42 |
fungi | i've approved frickler's netplan patch for review | 16:44 |
fungi | since we have ipv4 as a fallback connectivity option, i'd be fine waiting for that to deploy and then performing a quick controlled reboot of the server just to make sure everything's fine | 16:45 |
clarkb | ok, I guess that is the other method of applying it | 16:49 |
clarkb | it is friday and holidays so should be reasonably safe | 16:49 |
fungi | revisiting my discovery from last night, i think the most straightforward course of action to get lists.openinfra.dev working is to make the addition of letsencrypt for the vhosts a preliminary step in the mailman3 spec and get https set up for the existing v2 servers | 16:50 |
fungi | we know we want https for mm3, and the openinfra ml migrations were basically the last major work we wanted to do on the mm2 deployments anyway | 16:51 |
fungi | otherwise, most users aren't going to be able to get to the list archives and admin/moderation interface for that site in the interim | 16:52 |
clarkb | fungi: wfm | 16:52 |
fungi | but that aside, i'm still floored that it's actually forbidden to use plain http with .dev domains | 16:53 |
fungi | (also questioning the foundation's decision to use a domain for which google owns the tld and serves as the name authority, but that's a discussion for another day) | 16:53 |
clarkb | https://etherpad.opendev.org/p/PVjM3iNd_DmmBAJC70ue if someone wants to file that feel free (or edit as necessary before filing) | 16:54 |
clarkb | gmann: ^ | 16:54 |
opendevreview | Merged opendev/system-config master: Set CacheMaxExpire to 1h https://review.opendev.org/c/opendev/system-config/+/822095 | 17:01 |
clarkb | To be clear I don't intend on filing that issue. My last experience with pypa has left me unwilling to directly interact with them. Someone else should file that if we want to go that route | 17:03 |
clarkb | (I was told my pull request against pip wasn't even worthy of code review. Probably the least constructive interaction I've ever had with an open source project) | 17:04 |
gmann | clarkb: ack, thanks. | 17:09 |
*** jpena is now known as jpena|off | 17:13 | |
opendevreview | Merged opendev/system-config master: Switch router addresses for review02 to global https://review.opendev.org/c/opendev/system-config/+/822109 | 17:16 |
clarkb | fungi: frickler: mnaser: 2604:e100:1:0::1 == 2604:e100:1::1 because :: means replace everything with 0s? | 17:18 |
clarkb | just double checking ^ against what mnaser said and what the mirror host shows in its ip -6 route output compared to it and reviews netplan edits | 17:18 |
*** rlandy|dr_appt is now known as rlandy|ruck | 17:34 | |
fungi | yes, exactly | 17:35 |
fungi | :0: can be replaced by just :: as can any arbitrarily long run of :0:0: so long as only one :: appears in the address | 17:36 |
fungi | more than one :: in an address would be ambiguous of course, as then you wouldn't know how many null bytes are being elided | 17:37 |
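A quick sanity check of that equivalence, just a python3 one-liner from the shell:

    python3 -c 'import ipaddress as i; a = i.ip_address("2604:e100:1:0::1"); b = i.ip_address("2604:e100:1::1"); print(a == b, a.compressed)'
    # -> True 2604:e100:1::1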
clarkb | the netplan file on review has updated though the review job isn't quite finished yet | 17:49 |
clarkb | fungi: I'll defer to you on manually updating the routes or doing a reboot (though I can help) as I've got plenty of distractions at home today | 17:50 |
fungi | sure, i'll wait for the deploy buildset to report first | 17:51 |
jentoio | Sorry, Ive not been following the log4j updates here. I saw this today... The importance of being at log4j v2.16 just got a bunch higher, CVSS score is raised from 3.7 to 9.7. https://www.lunasec.io/docs/blog/log4j-zero-day-severity-of-cve-2021-45046-increased/#update-the-localhost-bypass-was-discovered | 17:56 |
fungi | thanks for the link! | 17:57 |
opendevreview | Jeremy Stanley proposed opendev/zone-opendev.org master: Add letsencrypt record for lists site https://review.opendev.org/c/opendev/zone-opendev.org/+/822194 | 18:47 |
opendevreview | Jeremy Stanley proposed opendev/zone-zuul-ci.org master: Add letsencrypt record for lists site https://review.opendev.org/c/opendev/zone-zuul-ci.org/+/822195 | 18:48 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Generate HTTPS certs for Mailman sites https://review.opendev.org/c/opendev/system-config/+/822196 | 19:01 |
clarkb | fungi: ^ left a note on that one about a missing piece. It isn't important if you want to get the certs generated first though | 19:32 |
fungi | clarkb: yep, i'm almost done with that change as a separate step | 19:39 |
fungi | just writing tests for it | 19:40 |
fungi | got sidetracked dealing with dns in all the other domains | 19:42 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Add HTTPS vhosts to mailman servers https://review.opendev.org/c/opendev/system-config/+/822198 | 19:51 |
fungi | i'm slightly confused as to how playbooks/roles/install-ansible/files/inventory_plugins/test_yamlgroup.py knows what groups something *isn't* supposed to be in | 19:53 |
clarkb | I think it may do an equality match? I always have to look at it | 19:57 |
fungi | aha, we've just hard-coded a reference in playbooks/roles/install-ansible/files/inventory_plugins/test-fixtures/results.yaml | 20:01 |
clarkb | fungi: for the path to the certs is that based on the first name listed in the cert? | 20:06 |
fungi | i... think so? | 20:06 |
fungi | testing should confirm | 20:06 |
clarkb | ya | 20:06 |
fungi | also is there any additional job i need to add as required by the lists deploy job when using le? | 20:07 |
fungi | infra-prod-service-codesearch uses certs and doesn't require infra-prod-letsencrypt so i guess not | 20:08 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Generate HTTPS certs for Mailman sites https://review.opendev.org/c/opendev/system-config/+/822196 | 20:08 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Add HTTPS vhosts to mailman servers https://review.opendev.org/c/opendev/system-config/+/822198 | 20:08 |
clarkb | you don't, the testing is mocked out if you add the right groups iirc | 20:09 |
clarkb | and it sets up the self signed certs as if they came from le | 20:09 |
fungi | cool | 20:10 |
fungi | clarkb: the reason i split the vhost change out separate is because i had to touch so much dns to set up the acme-challenge cnames that i'm worried some certs may fail to get generated, and i wouldn't want to cause apache to fail to load configs for some of the vhosts | 20:15 |
fungi | once i see that the certs really do wind up on the servers, then i'll feel more comfortable merging the followup to use them | 20:16 |
opendevreview | Merged opendev/zone-zuul-ci.org master: Add letsencrypt record for lists site https://review.opendev.org/c/opendev/zone-zuul-ci.org/+/822195 | 20:22 |
opendevreview | Merged opendev/zone-opendev.org master: Add letsencrypt record for lists site https://review.opendev.org/c/opendev/zone-opendev.org/+/822194 | 20:27 |
fungi | clarkb: the second patchset on 822196 simply adjusts our test fixtures to say to expect that server in the new group, and the job the first revision failed has already succeeded (otherwise same as the one you already +2'd) | 20:31 |
fungi | we're a few minutes out from those two dns changes deploying, and i've hand-tested all the other acme-challenge cnames are already in place for 822196 to be able to work | 20:31 |
*** dviroel|rover is now known as dviroel|rover|brb | 20:34 | |
fungi | zuul is pretty quiet, so i'll get ready to reboot gerrit here soon | 20:35 |
Clark[m] | Sorry finishing up lunch can look in a bit | 20:45 |
fungi | those remaining two acme-challenge cnames are resolving publicly now | 20:46 |
fungi | so 822196 ought to be safe to merge if its remaining jobs pass | 20:47 |
fungi | the vhost addition is broken, but we're not collecting apache logs from those nodes so it's hard to know quite why. i'll add that | 20:51 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Add HTTPS vhosts to mailman servers https://review.opendev.org/c/opendev/system-config/+/822198 | 20:53 |
clarkb | 822196 lgtm | 21:05 |
fungi | thanks, looks like it's a minute or two from passing check | 21:07 |
*** dviroel|rover|brb is now known as dviroel|rover | 21:20 | |
*** dviroel|rover is now known as dviroel|out | 21:29 | |
fungi | yeah, as suspected, i've apparently guessed the ssl cert paths wrong in the vhost addition | 21:39 |
clarkb | I think it will use the first name in the list of certs | 21:41 |
clarkb | maybe reorder https://review.opendev.org/c/opendev/system-config/+/822196/2/inventory/service/host_vars/lists.openstack.org.yaml to put lists.openstack.org at the top since that is the host name? | 21:41 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Generate HTTPS certs for Mailman sites https://review.opendev.org/c/opendev/system-config/+/822196 | 21:46 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Add HTTPS vhosts to mailman servers https://review.opendev.org/c/opendev/system-config/+/822198 | 21:46 |
fungi | done, i also set it to collect the acme.sh logs since it should tell us the path | 21:47 |
clarkb | I thought we were already collecting those but ya we should | 21:48 |
fungi | we weren't | 21:48 |
fungi | not in the run-lists job anyway | 21:48 |
fungi | status notice The review.opendev.org server is being rebooted to validate a routing configuration update, and should return to service shortly | 21:51 |
fungi | that ^ look reasonable? | 21:51 |
clarkb | lgtm | 21:51 |
clarkb | maybe down the docker-compose stuff first? then up -d it when it is back? | 21:51 |
fungi | sure, can do | 21:56 |
fungi | there's a few things in the openstack tenant status page which look about to report, so i'll give those a little longer | 21:57 |
clarkb | ok | 21:57 |
clarkb | looks like those may have cleared out? | 22:16 |
fungi | still waiting for that tripleo-heat-templates failure at the top of the gate to report, its last build is copying logs | 22:18 |
fungi | is there some playbook i need to add somewhere to generate the ssl certs in the run-lists job? looking at https://zuul.opendev.org/t/openstack/build/4b44f30c109e472ab61dc0f8986656fe there's no indication acme.sh was run | 22:19 |
fungi | which could explain the lack of certs apache's finding | 22:20 |
clarkb | you might need a different parent job let me see | 22:20 |
clarkb | looks like you add playbooks/letsencrypt.yaml to the playbooks listing on the job | 22:21 |
fungi | i only see that being added in the infra-prod-letsencrypt job | 22:21 |
clarkb | its on the system-config-run jobs | 22:21 |
clarkb | in the other file | 22:21 |
clarkb | zuul.d/system-config-run.yaml that one | 22:22 |
fungi | oh! yes i'm looking in the wrong jobs... *sigh* thanks | 22:22 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Generate HTTPS certs for Mailman sites https://review.opendev.org/c/opendev/system-config/+/822196 | 22:25 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Add HTTPS vhosts to mailman servers https://review.opendev.org/c/opendev/system-config/+/822198 | 22:25 |
fungi | okay, everything's reported that's going to report any time soon | 22:27 |
fungi | #status notice The review.opendev.org server is being rebooted to validate a routing configuration update, and should return to service shortly | 22:27 |
opendevstatus | fungi: sending notice | 22:27 |
-opendevstatus- NOTICE: The review.opendev.org server is being rebooted to validate a routing configuration update, and should return to service shortly | 22:27 | |
fungi | stopping the container | 22:27 |
fungi | and rebooting the server now that it's done | 22:27 |
fungi | it's responding to pings over ipv6 already | 22:28 |
fungi | i can ssh into it | 22:28 |
fungi | `ip -6 ro sh` gives the expected new routes | 22:28 |
fungi | starting the container again | 22:29 |
clarkb | it can ping6 google too | 22:29 |
clarkb | and ya I concur the ip -6 route gives me what I expect | 22:29 |
fungi | i'm ssh'd in over ipv6 btw | 22:29 |
fungi | webui is up and working for me | 22:30 |
clarkb | same here, maybe a bit slow but that should get better as the caches rebuild | 22:31 |
fungi | yup | 22:31 |
clarkb | (this is a known issue and apparently some libmodule plugin to swap out the cache types and make them persistent can help with this) | 22:31 |
fungi | adding the le playbook got things much closer, now it's just the https testinfra asserts which are failing. i may set an autohold on the vhost addition and recheck | 23:06 |
fungi | for some reason curl is getting "SSL routines:ssl3_get_record:wrong version number" | 23:10 |
fungi | i suspect something's not quite right with the vhost config | 23:11 |