| *** | bauzas1 is now known as bauzas | 05:27 |
| opendevreview | Merged openstack/project-config master: Update neutron grafana dashboard with new FWaaS jobs https://review.opendev.org/c/openstack/project-config/+/950660 | 08:41 |
| bauzas | hey folks, super weird, but it seems that review.o.o has peering issues | 13:26 |
| bauzas | in #openstack-nova, we discussed it, some people can't connect to the Gerrit UI while others are able | 13:27 |
| bauzas | https://downforeveryoneorjustme.com/review.opendev.org | 13:27 |
| JayF | I'm resolving to review03 every time, which I can hit over v6 and v4 from Centurylink Quantum in the PNW. | 13:27 |
| JayF | I can provide a traceroute/mtr if requested. | 13:27 |
| fungi | bauzas: any idea when it started? are the people experiencing issues mostly in a particular region of the globe? | 13:29 |
| bauzas | priteau and I have been experiencing this issue for the last 30 mins | 13:29 |
| priteau | For me it's even stranger, I can access the UI, I can pull, but I can submit a new patch | 13:29 |
| bauzas | we're both in France but we don't have the same ISPs | 13:29 |
| priteau | Sorry, but *I cannot submit a new patch* | 13:29 |
| bauzas | I *can't* access the UI | 13:29 |
| fungi | yeah, but it's likely they traverse the same backbone provider to reach canada | 13:30 |
| bauzas | I get an HTTP timeout | 13:30 |
| priteau | Actually, it just worked now for me | 13:30 |
| priteau | UI access is very slow though. Maybe the failures are just random | 13:30 |
| fungi | the server itself is not under any load, actually quieter than usual | 13:31 |
| priteau | I have colleagues in the UK complaining as well. | 13:31 |
| JayF | croelandt reported in -glance it works for him over VPN, but not locally | 13:31 |
| croelandt | Indeed | 13:31 |
| priteau | If it's a peering issue, not much we can do about it. | 13:32 |
| fungi | yeah, in the past when we've seen similar issues it's often been like fr+ie+de+uk | 13:33 |
| fungi | all of them going across a common trans-atlantic backbone | 13:33 |
| fungi | or sometimes half of the uk will have a problem and the other half won't, i guess depending on which backbone providers their isp peers with | 13:34 |
| croelandt | I have been having some issues with Google Meet as well, so I wonder whether that might indeed be a peering issue and not something related to the infra | 13:35 |
| fungi | a good comparison would be if you have any trouble browsing docs.openstack.org which is in rackspace dfw in the usa (while review.opendev.org is in vexxhost ca-ymq-1 in canada) | 13:37 |
| fungi | it might be a problematic backbone en route to vexxhost and not rackspace | 13:38 |
| croelandt | docs.openstack.org works perfectly | 13:38 |
| croelandt | oh ok review.opendev.org is back after being "down" for the last hour | 13:38 |
| fungi | i only just thought to test from our ci mirror in ovh gra1, i guess if the problem starts back up again i can see whether that's impacted too (right now it can pull web content from review.o.o just fine) | 13:41 |
| priteau | It's working fine for me now | 13:41 |
| fungi | global routing issues like this have a tendency to come and go for a while, so i'll keep an eye out for more reports of problems | 13:42 |
| fungi | thanks for bringing it up! | 13:42 |
| elodilles | fwiw, for us in hungary we had issues in the past months with a very lossy connection (10-30% packet loss) towards review.o.o during CET/CEST afternoons (likely the same issue as the fr+ie+de+uk folks experienced), BUT this time (and for some weeks now already) things seem to work (0% packet loss) | 13:43 |
| elodilles | (docs.o.o worked those days perfectly for us) | 13:44 |
| fungi | good to know, thanks for the additional data points! | 14:01 |
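A minimal sketch of the endpoint comparison fungi suggests above: time a TLS connection to docs.openstack.org (Rackspace DFW) and to review.opendev.org (Vexxhost ca-ymq-1), on the assumption that if only the latter times out, the trouble is more likely a backbone en route to Vexxhost than the Gerrit server itself. The host list, port 443, and the 10-second timeout are illustrative choices, not anything the team actually ran.

```python
#!/usr/bin/env python3
"""Rough reachability comparison between two endpoints in different clouds."""
import socket
import ssl
import time

# Hosts chosen per the discussion above; port 443 because both serve HTTPS.
HOSTS = ["docs.openstack.org", "review.opendev.org"]
TIMEOUT = 10  # seconds; a timeout here mirrors the HTTP timeouts reported

def check(host: str) -> None:
    """Open a TCP + TLS connection and report how long it took, or why it failed."""
    ctx = ssl.create_default_context()
    start = time.monotonic()
    try:
        with socket.create_connection((host, 443), timeout=TIMEOUT) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                elapsed = time.monotonic() - start
                print(f"{host}: TLS handshake ok in {elapsed:.2f}s")
    except OSError as exc:  # covers timeouts, refusals, and TLS errors
        elapsed = time.monotonic() - start
        print(f"{host}: FAILED after {elapsed:.2f}s ({exc})")

if __name__ == "__main__":
    for host in HOSTS:
        check(host)
```

Running it from several vantage points (home ISP, VPN, a CI mirror) gives the same kind of comparison the channel was doing by hand.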
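And a rough stand-in for the packet-loss checks elodilles mentions: ICMP ping needs raw-socket privileges, so this samples repeated TCP connects to port 443 and reports the failure rate as a loose proxy for loss. The sample count, pacing, and target host are assumptions for illustration only.

```python
#!/usr/bin/env python3
"""Crude loss/latency sampler using TCP connects instead of ICMP ping."""
import socket
import time

HOST = "review.opendev.org"   # target from the discussion above
SAMPLES = 20                  # arbitrary sample count for illustration
TIMEOUT = 5                   # seconds per attempt

failures = 0
latencies = []
for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        socket.create_connection((HOST, 443), timeout=TIMEOUT).close()
        latencies.append(time.monotonic() - start)
    except OSError:
        failures += 1
    time.sleep(1)  # pace the probes roughly like ping does

loss = 100.0 * failures / SAMPLES
print(f"{HOST}: {failures}/{SAMPLES} connect failures ({loss:.0f}% 'loss')")
if latencies:
    print(f"min/avg/max: {min(latencies):.3f}/"
          f"{sum(latencies)/len(latencies):.3f}/{max(latencies):.3f} s")
```

This is not a substitute for mtr or traceroute, which show where along the path the loss occurs, but it needs no privileges and is enough to tell a clean path from a lossy one.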