*** jph0 is now known as jph | 08:25 | |
*** ralonsoh_ is now known as ralonsoh | 08:56 | |
opendevreview | Elod Illes proposed openstack/project-config master: [relmgt] Update reno when cutting unmaintained branch https://review.opendev.org/c/openstack/project-config/+/907626 | 10:11 |
opendevreview | Merged openstack/project-config master: [relmgt] Update reno when cutting unmaintained branch https://review.opendev.org/c/openstack/project-config/+/907626 | 13:00 |
bauzas | I was getting weird timeouts when accessing review.opendev.org | 14:04 |
bauzas | now this is fixed, but some friends (not from work) seem to get the same timeouts that I was having | 14:04 |
bauzas | not sure it's a problem from the server or something about the ISP | 14:05 |
fungi | bauzas: the gerrit server runs in vexxhost, and one of the backbone providers they peer with has been having problems. seems to mostly be affecting transatlantic connections | 14:17 |
bauzas | ahah, thanks for the explanation | 14:18 |
bauzas | I was indeed thinking about a peering issue | 14:18 |
bauzas | my ISP is connected to review.opendev.org thru Cogent, maybe that helps to define which ASes are getting trouble | 14:22 |
fungi | yes, recently cogent has been fine but zayo (formerly abovenet) has been problematic | 14:23 |
fungi | though it's hard to know for sure where your connections are routed through, since they may not be symmetric and a traceroute will only show you the outbound hops | 14:24 |
bauzas | I just checked with traceroute | 14:24 |
bauzas | and I saw cogent | 14:24 |
bauzas | unfortunately, when I got my timeout, I hadn't checked with traceroute | 14:24 |
bauzas | but I can ask priteau to doublecheck | 14:24 |
fungi | people may be reaching vexxhost through cogent but then responses may be coming back to them through zayo, but their traceroute will only show the cogent path | 14:24 |
bauzas | he seems to get the problem | 14:24 |
fungi | we can also traceroute back to client ip addresses from the gerrit server to get the other half of the picture | 14:25 |
bauzas | ahah I see | 14:25 |
priteau | I don't have connectivity issues at all, however I cannot log into Gerrit anymore | 14:25 |
bauzas | I thought AS peerings were more symmetric but you're making the point | 14:25 |
priteau | It says: Provider is not supported, or was incorrectly entered. | 14:25 |
bauzas | priteau: can you traceroute review.opendev.org ? | 14:26 |
fungi | priteau: mmm, that sounds like you're failing to log into the gerrit webui via ubuntuone sso? | 14:26 |
bauzas | priteau: my own https://paste.opendev.org/show/bIwgG0AhnWxxfQxTVRJ7/ | 14:26 |
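As an aside to the traceroute exchange above: spotting which backbone carrier a trace passes through usually comes down to scanning the reverse-DNS hop names for a carrier's domain. Here is a minimal, illustrative Python sketch of that idea; the carrier-to-domain mapping and the sample hops are assumptions for demonstration, not taken from the actual paste.

```python
# Illustrative sketch: scan traceroute output for backbone-carrier
# hostnames. Reverse-DNS names often embed the carrier, e.g. a hop
# like "...atlas.cogentco.com" indicates Cogent. The domain mapping
# and sample trace below are invented for demonstration.
CARRIERS = {
    "cogent": ("cogentco.com",),
    "zayo": ("zayo.com", "above.net"),  # zayo was formerly abovenet
}

def carriers_in_trace(trace: str) -> set:
    """Return the set of known carriers whose domains appear in a trace."""
    lower = trace.lower()
    return {
        name
        for name, domains in CARRIERS.items()
        if any(domain in lower for domain in domains)
    }

sample = """\
 5  be2101.ccr21.example.net  12.3 ms
 6  be3187.ccr41.par01.atlas.cogentco.com  84.1 ms
 7  38.142.11.2  85.0 ms
"""
print(carriers_in_trace(sample))  # prints {'cogent'}
```

As noted in the discussion, this only classifies the outbound path; the return path may use a different carrier entirely.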
bauzas | fungi: when I got impacted, Gerrit was telling me I needed to reauth | 14:27 |
bauzas | now that I'm able to reach it correctly, Gerrit doesn't ask me to refresh my creds | 14:27 |
priteau | bauzas: Similar trace for the last few hops | 14:28 |
fungi | bauzas: it's not just you. looks like gerrit logins are broken again for everyone | 14:28 |
fungi | priteau: you i mean | 14:28 |
priteau | Thanks for confirming | 14:28 |
priteau | Good luck… | 14:28 |
bauzas | super strange, because once my connection went back, my token was still valid and I didn't login again | 14:28 |
fungi | i just logged out of the gerrit webui and can't log back in either. we saw this recently. may need to restart gerrit to force it to obtain another openid handle from ubuntuone | 14:29 |
bauzas | ++ | 14:32 |
fungi | no, the last time the association handle we were using changed was 2024-01-23 19:38:03 utc, when we forced it to refresh by restarting gerrit. it hasn't changed since then | 14:43 |
fungi | now as i'm testing, the redirect to ubuntuone is timing out, so maybe they're the ones having an issue this time | 14:47 |
fungi | aha, not us according to the chatter in #launchpad | 14:49 |
fungi | https://ubuntu.social/@launchpadstatus | 14:49 |
fungi | https://status.canonical.com/ | 14:50 |
fungi | okay, seems to be working again for me now | 14:54 |
fungi | priteau: try again? | 14:54 |
priteau | It's working now, thanks | 14:54 |
fungi | just glad it wasn't a problem on our end. i was preparing to lose half my day to another openid problem | 14:55 |
frickler | outage is still ongoing according to #launchpad, so expect more hiccups | 15:34 |
fungi | joyous | 15:45 |
clarkb | fungi: I made note of the symptom to check for in the docs update for the handle thing. basically we want to look for a new handle and then no successful auth attempts afterwards | 16:14 |
fungi | clarkb: yep, that's what i did, and how i determined that i should check #launchpad for signs of a struggle ;) | 16:16 |
clarkb | cool just want to make sure that info was available | 16:16 |
fungi | also that we weren't logging auth failures anywhere i could find this time, i think it wasn't getting that far | 16:17 |
clarkb | makes sense if the issue was on the ubuntu side as we won't get the redirect back to gerrit which triggers the failure on our side | 16:17 |
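The diagnostic clarkb and fungi describe above (find the most recent OpenID association-handle refresh, then check whether any successful login happened after it) can be sketched in a few lines of Python. The event names and log shape here are hypothetical, invented purely to show the logic; they are not Gerrit's actual log format.

```python
# Hypothetical sketch of the stale-handle check discussed above:
# given (timestamp, event) entries in chronological order, find the
# last association-handle refresh and report whether no successful
# login followed it. Event names are invented for illustration.
def stale_handle_suspected(events):
    last_refresh = None
    for i, (_ts, event) in enumerate(events):
        if event == "handle_refreshed":
            last_refresh = i
    if last_refresh is None:
        return False
    # Suspicious only if no auth has succeeded since the refresh.
    return not any(
        event == "auth_success" for _ts, event in events[last_refresh + 1:]
    )

log = [
    ("2024-01-23 19:38", "handle_refreshed"),
    ("2024-01-23 19:40", "auth_success"),
    ("2024-02-02 14:25", "auth_timeout"),
]
print(stale_handle_suspected(log))  # prints False: a login succeeded after refresh
```

This matches the outcome in the log above: the handle had not changed since the 2024-01-23 restart and logins had succeeded since, pointing the blame away from the Gerrit side.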
*** mtreinish_ is now known as mtreinish | 18:24 | |
*** dasm is now known as Guest1739 | 20:33 | |
*** Guest1739 is now known as dasm | 20:38 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!