Thursday, 2025-02-20

clarkbfungi: we both responded to the networking-ovn documentation question. Note that I think the docs originated in a retired project that should've retired its docs before retiring the project00:01
clarkbfungi: the cloud launcher failed on the same "you need to authenticate" error I've been seeing. I don't see those errors now using the new cloud profile. Maybe we didn't use the correct cloud profile in the run launcher config?00:08
clarkbno that seems to be correct00:09
fungiyeah, i'm in the process of troubleshooting it00:09
fungiopendevci-rax-flex is working with the clouds.yaml on bridge, but opendevzuul-rax-flex is not00:10
fungipossible i didn't prime that one correctly00:10
clarkboh it specifically failed on opendevzuul00:10
clarkbyup I think that must be it00:11
ianwi think perhaps i/we never really got around to a full cleanup after https://review.opendev.org/c/opendev/system-config/+/82025000:16
clarkboh that could explain it.00:17
fungiclarkb: found it. i accidentally reused the project_id from opendevci-rax-flex for opendevzuul-rax-flex, fixed in the private hostvars just now00:18
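
A minimal sketch of the kind of clouds.yaml entry involved here, assuming password auth; every value below is illustrative, the real ones live in the private hostvars:

    clouds:
      opendevzuul-rax-flex:
        region_name: SJC3
        auth_type: password
        auth:
          auth_url: https://keystone.example.com/v3   # placeholder endpoint
          username: opendevzuul
          password: REDACTED
          user_domain_name: Default
          project_domain_name: Default
          project_id: 0123456789abcdef0123456789abcdef   # must be this tenant's own project id, not opendevci's
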
clarkbMy immediate concern is being able to add new hosts to the inventory and have ansible work. But revisiting this effort you started is probably worth doing when there is time00:18
clarkbfungi: cool00:18
fungii guess the daily run will get it in a few hours00:24
fungii went ahead and reenqueued 942230,4 in deploy instead00:33
fungiit's corrected clouds.yaml on bridge now00:46
fungicloud-launcher job is running again now00:47
fungisuccess!00:56
Clark[m]Excellent 00:57
fungiso i guess next we need to upload our noble image to both regions01:04
fungiwhere did ubuntu-noble-server-cloudimg-2024-08-22 get uploaded to our old project in flex sjc3 from?01:13
Clark[m]fungi: tonyb uploaded it from bridge using the current upstream Ubuntu cloud image01:38
Clark[m]But the file is no longer there. Only the vhd is there and I'm not sure that we can convert it back to raw or qcow201:39
Clark[m]I think we just grab the current noble image and upload it as qcow2 or raw depending on which is preferred 01:39
fungik01:46
fungican do01:46
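
A rough sketch of that workflow from bridge; the cloud profile name here is an assumption for illustration, not the exact profile on bridge:

    # fetch the current upstream Noble cloud image
    wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
    # upload it as qcow2 (convert with qemu-img and use --disk-format raw if raw is preferred)
    openstack --os-cloud opendevzuul-rax-flex image create \
      --disk-format qcow2 --container-format bare \
      --file noble-server-cloudimg-amd64.img \
      ubuntu-noble-server-cloudimg
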
opendevreviewIan Wienand proposed opendev/system-config master: add-inventory-known-hosts: lookup from Zuul checkout  https://review.opendev.org/c/opendev/system-config/+/94233303:17
opendevreviewIan Wienand proposed opendev/system-config master: add-inventory-known-hosts: lookup from Zuul checkout  https://review.opendev.org/c/opendev/system-config/+/94233303:53
opendevreviewIan Wienand proposed opendev/system-config master: add-inventory-known-hosts: lookup from Zuul checkout  https://review.opendev.org/c/opendev/system-config/+/94233304:25
slittleoops, I pushed a bad tag.   I need to delete tag 10.0.0 in git starlingx/manifest15:44
fungideleting a tag from the repository won't clean up copies of it anywhere it has replicated to, or local copies in repositories where someone has pulled15:49
fungialso if you replace that tag with the same version later, other copies of the repository will hang onto the old one rather than pulling in the new one15:50
slittleI know.  I hope to delete it before it gets cloned15:50
fungiwe document that tag deletion is essentially impossible, at a minimum there will likely be stuck references to it in our ci systems and elsewhere that would have to be manually cleaned up afterward to prevent errors15:51
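
For reference, the mechanics behind that caveat (commands shown for illustration only):

    git push origin :refs/tags/10.0.0    # removes the tag on the remote (requires delete permission)
    git tag -d 10.0.0                    # each existing clone has to drop its local copy separately
    git fetch --tags                     # by default a later fetch will not replace a tag a clone already has
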
slittleah15:52
fungiinfra-root: ^ any opinion on whether we can try deleting that, or is the impact risky?15:52
clarkbI think zuul does try to update things, but I don't know how successful it would be15:53
clarkbI think the risk is largely to starlingx/manifest15:53
fungiusually we'd recommend leaving it, and pushing a 10.0.1 with a release note indicating that 10.0.0 was pushed in haste with the wrong content rather than trying to alter history15:53
fungibut i'm up for trying it if we think it's not going to be too disruptive for our systems, as long as the starlingx community is okay handling the possible impact there15:54
slittleI think we are ok on our end15:55
fungiokay to sort out any impact from wrong copies of a 10.0.0 tag in that repo? in that case i'll go ahead and escalate my perms while i give other infra-root sysadmins a few minutes to chime in15:56
slittleyep15:57
clarkbya I think because gitea replication is a force push we may update those properly. But I'm not positive of that. My main concern would be that a new 10.0.0 is tagged but when pushed zuul operates on the old hash because it has already fetched it15:58
fungihttps://review.opendev.org/admin/repos/starlingx/manifest,tags shows a 10.0.0 tag you pushed for revision 906d3d25369e8b0a87d2c1451fc5941709bcd59e at 15:35:16 utc15:58
clarkbI don't expect any of our tools (gerrit, zuul, gitea, etc) to break but their outputs may become incorrect for that repo15:58
slittleshould point to 73e57878ddc6b05d50a11c709310a0d52a5100cd15:58
fungilast call for objections to me deleting that tag15:59
fungiif/when you push a new one, you'll want to pay extra close attention to any jobs that run on or infer things from tags, to make sure they used the "right" 10.0.016:00
slittleyes16:00
fungi#status log Deleted incorrect starlingx/manifest tag 10.0.0 for revision 906d3d25369e8b0a87d2c1451fc5941709bcd59e at slittle's request in #opendev16:01
opendevstatusfungi: finished logging16:01
clarkbkeep in mind it is possible for one job to use the right hash and another to use the wrong one if the tag was only fetched on a subset of executors/mergers previously16:02
fungislittle: if you spot anywhere that the old tag ends up used, let us know, i can try to help with any needed cleanup in our systems16:02
clarkbyou need to check every job that runs for the new tag16:02
clarkbyou can't just check one location16:02
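
A couple of quick ways to verify what a given remote or checkout resolves the tag to (URL/paths as examples):

    # what the canonical remote serves now
    git ls-remote https://opendev.org/starlingx/manifest refs/tags/10.0.0
    # what a local checkout (e.g. inside a job workspace) thinks the tag points at
    git rev-parse 10.0.0^{commit}
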
slittlethanks guys16:02
clarkbthis assumes zuul can't sort it out properly which I'm pretty sure it attempts to do16:02
fungifwiw, it doesn't look like the deletion replicated to the gitea servers, so e.g. https://opendev.org/starlingx/manifest/src/tag/10.0.0/ is still present16:04
fungii guess we'll see if the new tag overwrites it in gitea later16:04
slittlewhat does it point to?   I pushed the corrected tag16:05
clarkbinfra-root when you get a chance can you take a look at https://review.opendev.org/c/opendev/system-config/+/942307 ianw's comments on that change and his suggested alternative in 942333. I'm fine with the alternative myself but need to properly review it still. corvus you may be interested as it applies executor state to things16:05
clarkbslittle: click the link and look16:05
fungioh, maybe i checked after the new tag had already replicated16:05
clarkbhttps://opendev.org/starlingx/manifest/commit/73e57878ddc6b05d50a11c709310a0d52a5100cd it points there now16:05
clarkbwhich seems to match the correct hash above16:06
fungiyeah, so maybe i didn't realize there was a new tag already when i pulled it up16:06
clarkbso ya the force push behavior of replication to gitea addresses the problem which matches my expectation. That leaves zuul as the big question16:06
slittleyes, it's good16:06
corvusclarkb: yeah i was just going through all of that.16:07
clarkbslittle: fungi: taking a quick look at zuul that repo doesn't appear to run any tag jobs? So maybe it is a noop?16:11
clarkbI guess there may be downstream jobs that rely on that tag though16:11
clarkbdownstream of the tag pipeline but still within zuul I mean16:11
fungifrickler: it looks like you might have left your frickler.admin account as a member of the Project Bootstrappers group in gerrit, okay for me to remove it or are you in the middle of doing something with it?16:12
opendevreviewClark Boylan proposed opendev/zone-opendev.org master: Reduce tracing.o.o's CNAME TTL  https://review.opendev.org/c/opendev/zone-opendev.org/+/94237616:15
opendevreviewClark Boylan proposed opendev/zone-opendev.org master: Switch tracing.o.o CNAME to tracing02  https://review.opendev.org/c/opendev/zone-opendev.org/+/94237716:15
clarkb942376 there is straightforward so I'm going to approve it now16:15
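
For context, the two changes amount to zone records along these lines (TTL value illustrative; 942376 lowers the TTL, 942377 repoints the CNAME):

    ; first, reduce the TTL so the later switch propagates quickly
    tracing  300  IN  CNAME  tracing01.opendev.org.
    ; then the follow-up change repoints it
    tracing  300  IN  CNAME  tracing02.opendev.org.
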
corvusclarkb: i'm having a hard time deciding between your approach and ianw's.  i sort of feel like having "make the source repo on bridge up to date" be one of the first things that happens in bootstrapping the whole process sounds like a good idea, so why not that?  is there a downside to that?  does ianw's approach somehow facilitate parallel runs better?16:15
fungii think mainly we have to make sure that if several system-config changes merge together before a deploy job runs, that we're okay with the deploy jobs for an earlier change using a newer state of system-config. but also we need to be careful to make that safe anyway because periodic deploy jobs can also happen between merge and deploy16:17
clarkbcorvus: I think ianw really wanted to keep that bootstrap job to a minimal set of tasks. Updating known_hosts to a good state definitely falls into that but I guess whether or not repo updates does is more ambiguous. I think part of the reason for the swap is that things should depend on infra-prod-base which can't be minimal because it runs ansible across all the nodes. This is a16:18
clarkbdependency because it bootstraps base things for our servers which should be in place before running services. This means you have two dependencies (bootstrap-bridge and infra-prod-base with base also depending on bootstrap-bridge). Things get complicated later when setting up dependencies because I don't think they are transitive when using soft dependencies?16:18
clarkbfungi: both proposals should use the state from the executor which should be specific for the change16:18
clarkbcorvus: anyway I suspect that part of this boils down to managing the job dependencies further down the line and keeping that simple while still accomplishing the goal of minimal bridge setup upfront16:19
fungimmm, is using state from the executor safe? periodics will potentially use newer state between merge of a change and deploy jobs for that change, so you could have a regression of state in that case16:19
clarkbthe more I think about it the more I think it probably is six of one, half a dozen of the other?16:19
clarkbfungi: I think that concern is valid but I don't think that is a new issue caused by these changes16:20
clarkbhowever zuul should respect enqueue order with the locks I think so we'd still have the right order?16:20
clarkbcorvus: fungi: maybe we try to sync up with ianw later today to make sure there isn't something we're missing?16:22
fungii guess we enqueue to deploy immediately on merge, not after a separate promote16:22
corvusyes deploy is change-merged, so it's change-specific16:22
fungiyeah, change-merged https://opendev.org/openstack/project-config/src/branch/master/zuul.d/pipelines.yaml#L22916:23
fricklerfungi: oops I must have missed that when I last force-merged something, nothing in progress, so feel free to clean up or I can do it, too 16:23
fungifrickler: done, thanks!16:23
clarkbeventually we will need to update the git repos and the end state we're aiming at is to do so once per buildset. Whether we do that update in infra-prod-bootstrap-bridge or infra-prod-base (or a third new job) probably doesn't matter too much. The biggest difference we'll notice is in the job dependencies I think16:25
fungiright, so as long as periodics can't run with newer state than things enqueued in deploy (because they'll get the semaphore lock in event order), that does seem safe16:25
clarkbso I'd prefer something that makes job dependencies simplest but I think we can make it work either way16:25
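
For reference, soft dependencies between jobs are declared in Zuul roughly like this (job names mirror the discussion but the snippet is only a sketch, not the actual system-config layout):

    - job:
        name: infra-prod-service-example       # hypothetical service job
        dependencies:
          - name: infra-prod-bootstrap-bridge
            soft: true
          - name: infra-prod-base
            soft: true
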
fricklerslittle: while you're around, may I remind you of https://zuul.opendev.org/t/openstack/config-errors?project=starlingx%2Fzuul-jobs&skip=0 once again?16:26
opendevreviewMerged opendev/zone-opendev.org master: Reduce tracing.o.o's CNAME TTL  https://review.opendev.org/c/opendev/zone-opendev.org/+/94237616:27
fungilooking at when i uploaded a noble cloud image to our old control-plane tenant in flex sjc3, i ended up specifying --property hw_disk_bus='scsi' --property hw_scsi_model='virtio-scsi'16:35
clarkbthat was to fix the disk issue we had right? I thought that rax thought they had fixed it generally?16:36
opendevreviewBrian Haley proposed openstack/project-config master: Charms: add review priority to charms repos  https://review.opendev.org/c/openstack/project-config/+/94238116:42
clarkbI don't think using scsi will hurt and we can always test dropping it with a different image if we just want to do the thing that is expected to work the first time16:42
fungiyeah, i'll see what happens if i leave those out this time16:51
fungiand right, it was because the swap and ephemeral block devices ended up enumerated incorrectly in the guests16:52
fungiforcing virtio worked around it16:52
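
If the plain upload misbehaves again, the earlier workaround amounted to adding those properties at image creation time, something like (cloud and image names illustrative):

    openstack --os-cloud opendevzuul-rax-flex image create \
      --disk-format qcow2 --container-format bare \
      --property hw_disk_bus=scsi --property hw_scsi_model=virtio-scsi \
      --file noble-server-cloudimg-amd64.img \
      ubuntu-noble-server-cloudimg
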
Clark[m]https://matrix.org/blog/2025/02/crossroads/ the oftc matrix bridge (and others) will shutdown if the matrix foundation doesn't find 100k in new funding by the end of March. This message brought to you by the oftc bridge16:52
fungiyikes16:53
fungiokay, latest daily build for noble-server-cloudimg-amd64.img uploaded to our new tenant in flex dfw3 and sjc3, i'll see if i can boot servers from them next16:57
fungihuh, flavors in dfw3 don't match those in sjc317:20
fungithe mirror in our old tenant in sjc3 used gp.0.4.8 (8gb ram, 80gb rootfs, 64gb ephemeral, 4vcpu), but in dfw3 that seems to be named gp.5.4.8 instead17:23
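
Comparing the two regions directly is straightforward (cloud/region names assumed here):

    openstack --os-cloud opendevzuul-rax-flex --os-region-name SJC3 flavor list
    openstack --os-cloud opendevzuul-rax-flex --os-region-name DFW3 flavor show gp.5.4.8
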
corvuswonder if that's a generation number in the first of the triplet17:23
corvusor maybe region number or something17:23
corvusbtw i think you're going to like how we handle that with niz17:24
fungilike in a sarcastic way, or in a that's very elegant sort of way? ;)17:29
fungimmm, i'm starting to think that maybe rackspace nerfed the --network=PUBLICNET option since nova keeps complaining "No valid service subnet for the given device owner"17:32
fungiso maybe they're going to force new instances to use floating-ip anyway17:33
clarkbis the name still PUBLICNET?17:33
clarkbI guess no valid service subnet implies it found the network but doesn't have any ips it can give out to us17:34
clarkbI think this can happen if the entire subnet range is assigned to floating ip pools17:34
clarkbbut it's been a long time since I dabbled with neutron networking17:34
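
A few ways to check whether that's the case, assuming the network really is named PUBLICNET and the cloud exposes IP availability data:

    openstack network show PUBLICNET
    openstack subnet list --network PUBLICNET
    # if supported, shows used vs. total addresses per subnet
    openstack ip availability show PUBLICNET
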
corvuselegant17:36
fungiclarkb: yeah, the full error message mentions the uuid for the PUBLICNET network17:37
fungiand i agree, it's likely they just ran out of ipv4 addresses for it in both regions and need to route some more17:43
corvusi'm looking at the larger nodes i'm trying to dogfood for zuul with niz, and i was hoping to name the flavors by ram size (like niz-ubuntu-noble-8GB) but there can be significant performance differences between flavors with the same ram size across clouds.17:50
corvusso i wonder if we should try to do something more abstract and have like "normal" "small" "large" flavors where we try to go for some sort of approximation of ram+compute equivalency?17:51
corvusso maybe a "large" node has at least 16g of ram, but possibly more if we need more compute to get it up to par?17:52
clarkbthe risk with that is if you actually need more ram than that and only pass on half the clouds. I think this is less of a concern for zuul but it would definitely become an issue for openstack almost immediately17:53
corvus(maybe in the future we should just get rid of flavors and go with resource specs for jobs or something, but it's too late to fold that into the niz work; that would have to be a future thing)17:53
clarkbunfortunately we can weather variance in time (so fewer cpus == slower or different cpus on different clouds are faster etc) but it's tough to overcome the oomkiller without major concessions17:54
corvustrue17:54
clarkbgranted openstack has been happy to set swap to 8GB and be extra slow rather than determine where the memory leaks/cost are and attempt to better optimize them17:54
corvuswe could do both with a bit more verbose config... we could have "niz-ubuntu-noble-8GB" and "niz-ubuntu-noble-normal".  not sure i love that idea.17:55
clarkbit's an interesting problem and one that I'm guessing most zuul users don't really have as they likely put all their resources in one cloud bucket17:58
clarkbthe extra variance is a side effect of our multicloud setup. I know at one point there was an attempt by some to try and get openstack clouds to standardize on flavors (then optionally have extra flavors) which may have made things simpler for us17:58
corvusyeah, or maybe 2 baskets with fairly comparable flavors.17:58
corvusi'm going to gather a bit more data, then see about proposing some changes later17:59
corvusfungi: i saw a bunch of flex auth stuff earlier, and i see 401 errors in zuul launcher; do we need to do something for it?18:02
corvus2025-02-20 17:40:07,110 ERROR zuul.Launcher:   keystoneauth1.exceptions.http.Unauthorized: The request you have made requires authentication. (HTTP 401)18:03
corvusthat's for flex sjc318:03
*** mtomaska__ is now known as mtomaska18:34
clarkbcorvus: there are two things going on. The first was a bugfix (to use project id or name or something I think) and the other is we have two projects in sjc3 and one in dfw3. We basically bootstrapped with an old project in sjc3 that we need to get off of to be in line with dfw3 on the new project so there is a new set of profiles on bridge for sjc3 and dfw3 that map to one another.19:15
fungicorvus: first i've heard, maybe they're having an outage now though19:15
clarkbfungi: it could be related to the thing you fixed with the new project ids I think19:15
clarkbif zuul launcher isn't using the same secrets as nodepool?19:15
clarkbsince you updated those in nodepool right?19:15
fungilemme check but i shouldn't have touched those19:16
clarkbfungi: before we started working on making dfw3 there was the auth issue that you fixed with cloudnull's help19:16
clarkbthat's the issue I'm suspecting is still hitting zuul launcher19:16
fungioh, where do we set that? what's the var name for the project name?19:16
fungior for the project_id more likely19:16
clarkbsorry there are two different things that both involve projects and have happened recently which makes it confusing to talk about19:16
fungii'll check git on bridge to see if there were references i missed when updating19:17
clarkb22:45:56         fungi | looks like some local project ids changed19:18
fungii don't see a reference to the old project_id in our private hostvars19:18
fungiso everything that was there has been updated19:18
clarkbmaybe the launcher needs to restart to pick them up?19:18
clarkbyou restarted the nodepool launcher iirc19:18
clarkbthat may be all that is needed19:18
fungiyeah, that's likely it19:18
fungii had to down/up the nodepool-launcher container to notice the deployed change to clouds.yaml19:19
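
A sketch of the same down/up dance for the zuul-launcher container; the compose directory path here is an assumption, not necessarily how zl01 is actually laid out:

    cd /etc/zuul-launcher-docker/        # hypothetical compose directory
    docker-compose down
    docker-compose up -d
    docker-compose logs --tail=50 -f     # confirm the updated clouds.yaml is picked up without auth errors
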
clarkbcorvus: any objection to me landing https://review.opendev.org/c/opendev/zone-opendev.org/+/942377 to point tracing.opendev.org at tracing02.opendev.org and see if zuul picks that up? I half wonder if we need to restart schedulers to see that too (but doing the update today/tomorrow should make that automatic-ish over the weekend)19:20
fungicorvus: any objection to me downing and upping the zuul-launcher container on zl01 to pick up the clouds.yaml update from yesterday?19:26
fungier, from day before yesterday i mean19:27
clarkbI wonder if we need to restart the nodepool builders too19:28
fungipotentially19:35
corvusfungi: clarkb re picking up clouds.yaml, that makes sense; i'll go ahead and restart the zuul-launcher since i have a window open20:03
corvusclarkb: no objection re tracing and sounds like a good experiment to see if updates are needed20:03
corvusit might not need restarts since i think everything is just http queries, so will probably just use the libc resolver behavior20:04
corvus+2 but did not +w20:06
clarkbI have approved it20:22
opendevreviewMerged opendev/zone-opendev.org master: Switch tracing.o.o CNAME to tracing02  https://review.opendev.org/c/opendev/zone-opendev.org/+/94237720:24
clarkbdns has updated for me. I don't see any data in https://tracing.opendev.org/search yet20:36
clarkbcorvus: the scheduler on zuul02 stopped with this message 2025-02-17 21:05:15,247 DEBUG zuul.Scheduler: Stopping tracing20:38
clarkbthat was a few days ago so unrelated to this dns update20:38
clarkbI probably shouldn't be surprised but the tracing server's logs are very difficult for a human to read... There are errors in the jaeger log that seem to precede updating DNS but maybe get more frequent after. The errors appear to be with setting up tls connections. The ansible should've deployed certs that allow for the connectivity but maybe that is the next thing to check20:48
clarkband I guess checking if tracing01 is still getting new traces20:49
clarkbthe cert and key files are in place so that bit at least happened20:49
clarkbsince it has been a problem with other noble nodes I did check /var/log/kern.log and there are no recent audit messages20:50
clarkbtracing01 appears to have similar errors so that may just be noise20:51
clarkbI am still seeing traces go to tracing0120:53
clarkbper https://tracing01.opendev.org/search?end=1740084807154000&limit=20&lookback=15m&maxDuration&minDuration&service=zuul&start=1740083907154000 so maybe the issue is just that we're not talking to the new server yet? Restarting the scheduler on zuul02 should confirm that is the case but I don't want to restart the scheduler there without knowing why it stopped20:54
clarkband I've confirmed there are established tcp connections from zuul01 to tracing01 so ya probably just not connected yet20:55
clarkbnothing seems to be actually broken. Today is a not rainy day so I may try to pop out in a bit for a bike ride. I think the above situation is fine if we just want to leave it until I get back to debug further. I suspect that will start with restarting the scheduler on 02 to see if it connects to the new server. Then we can probably let weekly restarts connect everything else over20:56
clarkbthe weekend20:56
clarkbthe other thing to dig into this afternoon is if ianw had a specific reason for preferring the executor load of inventory/base/hosts.yaml over synchronizing the git repo20:58
clarkbianw: I know corvus in particular was curious if there was a specific reason for that. Would be good if you can fill us in on the motivation there20:58
clarkbis the zuul02 scheduler shutdown potentially related to the api issues we had with vexxhost?21:01
clarkbmaybe that wedged the restart in a way that didn't cause it to come back up as expected?21:01
corvusclarkb: looking21:04
corvusclarkb: we should look for an external cause for the shutdown; "stopping tracing" is just one of the last things it does; the shutdown started at 2025-02-17 21:05:11,271 DEBUG zuul.Scheduler: Stopping scheduler21:06
corvusthat was a few days ago.  perhaps i botched the manual restart?  or maybe my manual restart interacted poorly with the playbook21:08
clarkb2025-02-17 20:06:19,646 INFO zuul.WebServer: Zuul Web Server stopped21:09
clarkbthe web server stopped about an hour prior to the scheduler stopping21:09
clarkbbut web and fingergw restarted21:09
corvusi think it's highly likely that i did not re-up the scheduler, sorry21:10
clarkbis it possible the scheduler container hadn't gracefully exited, so a followup `up -d` command didn't start it, then an hour later it exited and got "orphaned"?21:10
clarkbthat too would explain it21:11
clarkbshould we start it now? Any concern with running a newer version since the 17th?21:11
corvusnah, i think we can up it now.  i will do so.21:11
clarkbko21:11
clarkber ok21:11
corvuswe're one launcher-only commit ahead of the rest of the cluster21:12
clarkbno tracing data from zuul02 now that it has started but it doesn't seem to be very active yet based on the debug log21:13
clarkbprobably have to wait for updating system config to complete?21:13
clarkbI see an established tcp connection from zuul02 to tracing02 now21:16
clarkbhttps://tracing.opendev.org/search?end=1740086194401000&limit=20&lookback=1h&maxDuration&minDuration&service=zuul&start=1740082594401000 and we have data21:16
clarkbI think it's fine for the actual cutover to occur as part of the weekly restart after which we can clean up tracing01 after confirming we're no longer receiving new data21:17
clarkbthere are warnings about invalid parent spans I think because the data is going to two places for now. Again probably ok for a couple days21:17
clarkband the zuul components list looks better21:17
clarkbcorvus: thank you for double checking this wasn't a bigger fire with zuul0221:18
ianwhey, sorry, running late today23:27
ianwis it just me or is review not being very responsive?23:33
tonybianw: Not just you.  It's slow here too23:34
tonyband not just the web ui, ssh is slow too23:35
ianwsigh, which ai bot is it this time23:36
tonybLooking ....23:36
tonybThe load average isn't terrible 1.9323:37
tonybLooks like meta-externalagent/1.1 and possibly GoogleBot/2.123:41
Clark[m]`gerrit show-queue -w` might offer insight too23:41
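
Spelled out, that's the Gerrit SSH command (standard Gerrit SSH port 29418; substitute your Gerrit username):

    ssh -p 29418 <username>@review.opendev.org gerrit show-queue -w
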
Clark[m]Historically it hasn't been the many-request AI bots but researchers crawling through changes23:42
tonybLooking at show-queue but it's sooooo sloooooow23:44
clarkbhrm it responded quickly for me23:51
clarkbis it possible this is a series of tubes problem and not a server side issue?23:51
tonybI guess so23:51
clarkbthe web ui loads pretty normal for me too23:51
tonybthe load is 3.1423:51
tonybbut I don't think that's silly high23:52
clarkbno thats reasonable for the server23:52
tonybalso it's much better now even for me23:52
clarkbianw: any change for you?23:52
ianwmy ssh session is still slllloooowww23:54
tonybclarkb: time ssh ... gerrit show-queue -w is approx 45seconds for me23:54
clarkbreal0m1.474s for me23:55
tonybOkay so possibly the wet string across the pacific23:55
clarkbthat's my hunch right now given the disparity in our experiences23:56
ianw--- review.opendev.org ping statistics ---23:56
ianw30 packets transmitted, 24 received, 20% packet loss, time 33162ms23:56
ianw#justaustralianthings23:56
clarkbianw: tonyb out of curiosity I wonder if forcing ipv4 or ipv6 (depending on which you'd use by default) would help23:56
clarkbsometimes the routing is different enough that you get a different experience23:57
ianw--- review.opendev.org ping statistics ---23:57
ianw13 packets transmitted, 0 received, 100% packet loss, time 12306ms23:57
ianwthat's ping -4 ...23:57
tonyb--- review02.opendev.org ping statistics ---23:57
tonyb30 packets transmitted, 14 received, 53.3333% packet loss, time 29425ms23:57
tonybas is that ^^23:57
clarkbianw: oof23:57
ianwoh eventually it came alive, but similar packet loss23:58
tonyb--- review.opendev.org ping statistics ---23:58
tonyb30 packets transmitted, 30 received, 0% packet loss, time 29039ms23:58
tonybrtt min/avg/max/mdev = 22.775/23.076/25.875/0.534 ms23:58
tonybthat from a US host I could quickly access23:59
clarkbhttps://mirror.ca-ymq-1.vexxhost.opendev.org is in the same cloud region I wonder if you get a similar experience there? Could compare with the other mirrors in north america to see if it is a north america problem or something more specific23:59
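
One way to compare the paths from the affected side, assuming mtr is available (add other North American mirror hostnames the same way):

    mtr -rwc 20 review.opendev.org
    mtr -rwc 20 mirror.ca-ymq-1.vexxhost.opendev.org
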
