| opendevreview | OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/962557 | 02:15 |
|---|---|---|
| noonedeadpunk | fwiw gitea13 seems to be kinda slow since yesterday, at least in my experience... takes around 15-20 sec for a response... | 05:55 |
| *** | mrunge_ is now known as mrunge | 05:55 |
| tonyb | noonedeadpunk: For any particular repos? Are you going direct to gitea13, or going through the load-balancer and noting that the slow backend is 13? | 05:58 |
| noonedeadpunk | going through the load-balancer, but whenever I'm bored waiting for a repo to open, it's always gitea13 | 05:59 |
| noonedeadpunk | or well. so TLS says at least | 05:59 |
| noonedeadpunk | I'm not sure about repos though - yesterday spotted around same for nova/glance/neutron | 06:00 |
| tonyb | noonedeadpunk: Yup, it looks like 13 is many times busier than the others | 06:00 |
| * tonyb | is going to guess someone is crawling it directly | 06:01 |
| tonyb | but I'll confirm that | 06:01 |
| tonyb | 06:02:17 up 131 days, 14:31, 1 user, load average: 22.53, 21.29, 21.40 | 06:02 |
| tonyb | infra-root: gitea13 is seeing a large number of connections (through the load-balancer), 5-6 times the volume of the other nodes in the pool. | 06:59 |
| tonyb | it is slow, but it isn't drowning | 06:59 |
| tonyb | It looks like it started about 2 days ago; I'm inclined to leave it as-is and monitor | 07:01 |
| tonyb | noonedeadpunk: So you're correct, 13 is under pressure. We could block the IP at the load-balancer, but I'm not sure we want to start that kind of whack-a-mole | 07:03 |
| opendevreview | Merged openstack/project-config master: update_constraints.sh: Better describe what we're skipping https://review.opendev.org/c/openstack/project-config/+/959628 | 07:36 |
| opendevreview | Simon Westphahl proposed zuul/zuul-jobs master: Allow upload-image-s3 role to export S3 URLS https://review.opendev.org/c/zuul/zuul-jobs/+/963828 | 08:30 |
| fungi | tonyb: is it using an identifiable user agent string? (note you have to match up the proxy ports between haproxy and the backend apache to map the IP to a request, unfortunately) | 12:54 |
| clarkb | corvus: totally no rush on this, but I'm cleaning up some autoholds that were requested by openstack folks and noticed my gerrit server autoholds are gone. I wonder if those got cleared out by the handling for the zk schema update for the launcher? | 14:57 |
| clarkb | just want to point it out in case it indicates there is a bug somewhere. I'm happy to request new nodes for the gerrit held nodes, not a big deal there | 14:57 |
| fungi | the incident a few weeks ago with the zk schema change caused zuul to drop and clean up all existing nodes, including held ones | 14:58 |
| clarkb | fungi: aha thanks | 14:59 |
| clarkb | so that explains it, and it was a side effect of the original issue, not the handling afterwards | 15:00 |
| fungi | yeah, that was how it first came to my attention in fact. the autohold you just cleaned up lost its held node and the dev working on it notified me it had disappeared | 15:02 |
| clarkb | fungi: tonyb noonedeadpunk: tailing the access log on gitea13, I think it is getting crawled directly and not via haproxy (or maybe via the load balancer too, but I see lots of traffic that is direct). And, drumroll please, the IP addresses appear to originate in... Cloudflare! | 15:03 |
| fungi | and then the autohold itself was also undeletable at the time, with a similar traceback to what i found when the nodes were deleted at upgrade | 15:03 |
| clarkb | the company that wants you to pay them to block bots is now running the bots (or possibly their own customers are). Is this a shakedown? | 15:03 |
| clarkb | I suspect (but don't know for sure) that for one reason or another gitea13 has become visible to these bots on the internet but not the other 5, so it gets relatively overloaded when things are crawling | 15:05 |
| clarkb | fwiw the User-Agent story is similar there: a bunch of what look to me like completely bogus UAs (Firefox 3, Android 3, etc), but instead of originating in Alibaba Cloud they are now running in Cloudflare | 15:07 |
| clarkb | we may need to consider blocking direct access to the backends to force the load to be more balanced. This will impact our ability to debug things, but maybe that is a sacrifice we should be making at this point. | 15:10 |
| clarkb | Note I will not be making any changes, as I have one meeting in 50 minutes and then I need to turn things off and prep for travel | 15:10 |
| corvus | clarkb: ack. confirming what fungi said. the autohold records were in zk for a bit longer because of the second part of the bug, but the underlying nodes were already gone as soon as the first part of the bug hit. | 16:24 |
| clarkb | cool I'll regenerate autoholds when I have a moment then. Not a big deal | 16:40 |
| opendevreview | Jon Bernard proposed opendev/irc-meetings master: Update Cinder weekly team meeting location https://review.opendev.org/c/opendev/irc-meetings/+/964179 | 17:23 |
| opendevreview | Goutham Pacha Ravi proposed opendev/irc-meetings master: [os-tc] Add 2026.1 meeting schedule https://review.opendev.org/c/opendev/irc-meetings/+/964199 | 21:05 |
| opendevreview | Goutham Pacha Ravi proposed opendev/irc-meetings master: [os-tc] Add 2026.1 meeting schedule https://review.opendev.org/c/opendev/irc-meetings/+/964199 | 21:29 |
| opendevreview | Merged opendev/irc-meetings master: [os-tc] Add 2026.1 meeting schedule https://review.opendev.org/c/opendev/irc-meetings/+/964199 | 21:41 |
| opendevreview | Merged openstack/project-config master: Have more files trigger test-release-openstack https://review.opendev.org/c/openstack/project-config/+/958709 | 22:04 |
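
A few sketches related to the debugging discussed above. First, the 05:59 exchange ("so TLS says at least") works because the gitea backends terminate TLS themselves behind the TCP load-balancer, so the certificate a client receives can identify which backend answered. Below is a minimal Python sketch of that check; it assumes the backend's own hostname shows up in the certificate's subjectAltName, which may not match how the certs are actually issued.

```python
# Sketch: ask opendev.org which gitea backend answered by reading the DNS
# names in the served certificate. Assumes the backend's own hostname is
# present in the cert's subjectAltName, which may not hold in practice.
import socket
import ssl

def served_cert_names(host="opendev.org", port=443):
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # subjectAltName is a tuple of ('DNS', 'name') pairs
    return [name for kind, name in cert.get("subjectAltName", ()) if kind == "DNS"]

if __name__ == "__main__":
    print(served_cert_names())
```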
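
Second, tonyb's 06:59 per-backend connection counts and clarkb's 15:03/15:07 findings about direct crawler traffic and bogus user agents both come down to tallying an access log. A minimal sketch follows, assuming an Apache combined-style log; the path and regex here are assumptions, not the real gitea13 configuration.

```python
# Sketch: tally requests per client IP and per User-Agent from an access log.
# The path and combined-log regex are assumptions, not the real gitea13 setup.
import re
from collections import Counter

LOG_PATH = "/var/log/apache2/gitea-access.log"  # hypothetical path
COMBINED = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<request>[^"]*)" '
    r'\d+ \S+ "(?P<referer>[^"]*)" "(?P<ua>[^"]*)"'
)

ip_counts, ua_counts = Counter(), Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = COMBINED.match(line)
        if match:
            ip_counts[match["ip"]] += 1
            ua_counts[match["ua"]] += 1

print("top client IPs:", ip_counts.most_common(10))
print("top user agents:", ua_counts.most_common(10))
```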
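
Finally, the 15:03 observation that the crawler IPs originate in Cloudflare can be cross-checked against Cloudflare's published address ranges. A minimal sketch, assuming the plain-text lists at https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6 are still served in that form; the sample IP is a placeholder, not one seen on gitea13.

```python
# Sketch: test whether client IPs fall inside Cloudflare's published ranges.
# Assumes the plain-text range lists below are still available; the sample IP
# is a placeholder (TEST-NET-2), not one observed on gitea13.
import ipaddress
import urllib.request

RANGE_URLS = (
    "https://www.cloudflare.com/ips-v4",
    "https://www.cloudflare.com/ips-v6",
)

def cloudflare_networks():
    nets = []
    for url in RANGE_URLS:
        with urllib.request.urlopen(url, timeout=10) as resp:
            for line in resp.read().decode().splitlines():
                line = line.strip()
                if line:
                    nets.append(ipaddress.ip_network(line))
    return nets

def is_cloudflare(ip, nets):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in nets)

if __name__ == "__main__":
    nets = cloudflare_networks()
    print(is_cloudflare("198.51.100.7", nets))
```

A membership test like this could also feed a deny list at the load-balancer, if the whack-a-mole route mentioned at 07:03 ever becomes necessary.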