Tuesday, 2025-09-02

02:26 <opendevreview> OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml  https://review.opendev.org/c/openstack/project-config/+/957995
08:19 *** elodilles_pto is now known as elodilles
09:14 <opendevreview> Benjamin Schanzel proposed zuul/zuul-jobs master: ensure-dib: Don't report task failure when dib is not installed  https://review.opendev.org/c/zuul/zuul-jobs/+/959112
13:44 *** dhill is now known as Guest25648
14:53 <opendevreview> Clark Boylan proposed opendev/zuul-providers master: Disable raxflex sjc3 for cloud maintenance window  https://review.opendev.org/c/opendev/zuul-providers/+/959200
14:53 <clarkb> infra-root ^ I'll plan to land that after my morning of meetings today, then shut down the mirror in SJC3 so that we're ready for the maintenance window tomorrow
15:43 <clarkb> I'm going to send out our meeting agenda momentarily. Let me know if you want things edited in the next few minutes. I just hit save changes on the wiki with my edits
17:06 <opendevreview> Clark Boylan proposed openstack/diskimage-builder master: Drop unnecessary debian security location configuration switch  https://review.opendev.org/c/openstack/diskimage-builder/+/958572
17:45 <clarkb> my matrix client says connectivity to the server has been lost. Seems to affect all rooms (on different homeservers) and private messages. I feel like that means it is likely an issue on my end rather than on the server side?
17:45 <clarkb> curious if anyone else is suffering through similar
18:37 <clarkb> looking in my browser's network debugging window it looks like matrix-client.matrix.org is returning 429 responses, which is likely the source of my matrix connectivity errors
18:38 <clarkb> looks like this is hosted by cloudflare if the server: cloudflare header can be believed. Maybe they changed rate limit settings and boom, exploded clients
18:39 <clarkb> https://status.matrix.org/ ok this shows there is a problem. Maybe the 429 is a symptom of them trying to get clients to stay away while they fix it
18:39 * clarkb will wait patiently
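(A 429 response is the server asking the client to back off. As a rough sketch of how a script could poll the Matrix client-server API and honor that signal, assuming the public /_matrix/client/versions endpoint and an illustrative retry policy rather than what any real client implements:)

```python
# Minimal sketch of polite 429 handling against the Matrix client-server API.
# The retry policy here is illustrative, not what Element or any other client does.
import time
import requests

URL = "https://matrix-client.matrix.org/_matrix/client/versions"

def fetch_with_backoff(max_attempts=5):
    delay = 5  # seconds; fallback when the server sends no Retry-After header
    for attempt in range(max_attempts):
        resp = requests.get(URL, timeout=10)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Honor the server's requested wait if it sends one
        wait = int(resp.headers.get("Retry-After", delay))
        print(f"429 from homeserver, waiting {wait}s (attempt {attempt + 1})")
        time.sleep(wait)
        delay = min(delay * 2, 300)  # cap the exponential fallback
    raise RuntimeError("homeserver still rate limiting after retries")

if __name__ == "__main__":
    print(fetch_with_backoff())
```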
18:41 <frickler> fwiw there seems to be no issue between my homeserver and opendev
18:41 <clarkb> ya I think this will affect anyone with matrix.org accounts (like me)
18:42 <clarkb> but clients using accounts on other homeservers should be fine as long as they aren't also trying to reach rooms hosted by matrix.org (which includes the IRC bridges)
18:55 <mnasiadka> Does that mean it’s better to run your own server? :-)
18:59 <clarkb> only if you can maintain better uptime than matrix.org
19:38 <opendevreview> Merged opendev/zuul-providers master: Disable raxflex sjc3 for cloud maintenance window  https://review.opendev.org/c/opendev/zuul-providers/+/959200
clarkb"We are in the process of restoring the matrix.org database from a backup. The matrix.org homeserver will be offline until this has been completed"20:02
corvus_that doesn't sound like a fun day20:03
fungiwow20:04
fungiyes, someone's day is not going to plan there20:04
20:27 <opendevreview> Clark Boylan proposed opendev/system-config master: Exclude django caches from lists.o.o backups  https://review.opendev.org/c/opendev/system-config/+/959236
20:27 <clarkb> I've made some assertions about mailman3 and django and whether or not these caches are worth preserving based on about 5 minutes of googling
20:27 <clarkb> reviewers may want to see if they can find evidence that my assertions are faulty when reviewing that change
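(For context on why such caches are usually safe to skip in backups: Django's file-based cache just holds derived data that is rebuilt on a cache miss. A purely illustrative settings fragment, with a hypothetical path since the real mailman-web configuration may use a different backend or location:)

```python
# Illustrative Django settings fragment only; the cache backend and path used
# by the actual mailman-web deployment may differ.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.filebased.FileBasedCache",
        # Hypothetical location; whatever directory is configured here holds
        # only derived data that Django regenerates on a cache miss, which is
        # the premise for excluding it from backups.
        "LOCATION": "/var/lib/mailman-web-data/cache",
    }
}
```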
20:49 <opendevreview> Clark Boylan proposed opendev/glean master: Switch to new devstack based testing  https://review.opendev.org/c/opendev/glean/+/953163
20:49 <clarkb> infra-root ^ I think we should go ahead and land that change when we land the bionic cleanup fix for glean (I had to rebase and clean some stuff up due to the bionic jobs not working by default anymore)
20:55 <clarkb> looks like rax flex sjc3 dropped to 0 nodes according to grafana about 20 minutes ago
20:56 <clarkb> I'll wait a bit longer, then shut down the mirror node
21:01 <clarkb> last call on https://review.opendev.org/c/openstack/diskimage-builder/+/958572 I'll approve it shortly if there are no objections so that we can work on fixing the trixie image builds
21:06 <clarkb> corvus_: I think your zuul fix broke test cases. I'm guessing due to some internal fakes/mocks no longer aligning
21:07 <corvus_> clarkb: yep, it's a test-only thing. i'm running the test suite locally right now to verify the fix so we don't waste any more time
21:07 <clarkb> ack thanks. I'll rereview when pushed
21:21 <corvus_> clarkb: https://review.opendev.org/c/zuul/zuul/+/959228 updated and is clean locally
21:23 <clarkb> corvus_: +2 thanks
21:27 <clarkb> oh I also forgot about ze11. Updating the emergency file to include the sjc3 mirror reminded me
21:29 <clarkb> the system had pending updates so I'm rebooting it first just to be sure it comes up cleanly, then I'll shut it back down again in prep for tomorrow
21:29 <clarkb> and it has been added to the emergency file so ansible should ignore it
21:33 <clarkb> reboot looked good, so I've shut it down now. Nova reports the status is SHUTOFF. We should be ready for tomorrow's maintenance now
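(A minimal sketch with openstacksdk of stopping a server and confirming Nova reports SHUTOFF ahead of a maintenance window; the cloud entry and server name below are placeholders, not the real inventory names:)

```python
# Rough sketch: stop an instance and wait until Nova reports SHUTOFF.
import time
import openstack

conn = openstack.connect(cloud="raxflex-sjc3")  # hypothetical clouds.yaml entry

server = conn.compute.find_server("mirror.sjc3.raxflex.opendev.org")  # placeholder name
conn.compute.stop_server(server)

# Poll until Nova reports the instance as SHUTOFF
for _ in range(30):
    server = conn.compute.get_server(server.id)
    if server.status == "SHUTOFF":
        print("server is SHUTOFF, ready for the maintenance window")
        break
    time.sleep(10)
else:
    raise RuntimeError(f"server still {server.status} after waiting")
```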
21:56 <opendevreview> Merged openstack/diskimage-builder master: Drop unnecessary debian security location configuration switch  https://review.opendev.org/c/openstack/diskimage-builder/+/958572
21:57 <clarkb> ok I'm going to approve the trixie fix now that ^ has merged
21:57 <clarkb> https://review.opendev.org/c/opendev/zuul-providers/+/958561 this change for the record
22:11 <clarkb> looks like ppa access may be sad right now for the trixie image builds :/
22:27 <corvus_> this is clearly not a good day for sysadmins
22:39 <clarkb> corvus_: I think the zuul fixup change hit some network blip too
22:42 <gouthamr> https://ptg.opendev.org/ is down
22:43 <gouthamr> ^ totally unrelated i'd guess, was curious if it was planned
22:44 <clarkb> not planned
22:45 <gouthamr> ack, was just looking for a link to the team signups.. i don't think we'd worry for a few more weeks
22:46 <clarkb> I see the problem. I should be able to fix it pretty quickly
22:46 <clarkb> just need to test it first
22:48 <clarkb> gouthamr: I don't think ptg.opendev.org tells you who has signed up until diablo_rojo configures the bot
22:48 <clarkb> gouthamr: but this is a bug in the hosting after the irc services moved to a new server. I'll fix that aspect
22:49 <gouthamr> ah, i was looking for this form clarkb - thought i'd find it on the website: https://openinfrafoundation.formstack.com/forms/oct2025_ptg_survey
22:50 <opendevreview> Clark Boylan proposed opendev/zone-opendev.org master: Point ptg.opendev.org to the new eavesdrop02 server  https://review.opendev.org/c/opendev/zone-opendev.org/+/959253
22:51 <clarkb> gouthamr: ^ that is the fix for the web service not being accessible
22:51 <clarkb> I'll probably just go ahead and self-approve that if CI passes
22:51 <gouthamr> ah nice! ty clarkb
22:56 <opendevreview> Merged opendev/zone-opendev.org master: Point ptg.opendev.org to the new eavesdrop02 server  https://review.opendev.org/c/opendev/zone-opendev.org/+/959253
22:59 <clarkb> gouthamr: once your dns cache evicts the old record you should see that ptg.opendev.org works again. The TTL was 1 hour, so approximately an hour from now
22:59 <gouthamr> yes! Works
22:59 <clarkb> I've got to wait ~2600 seconds locally
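(A quick sketch with dnspython of watching the remaining TTL on a cached record; against a local caching resolver the reported TTL counts down toward expiry, which is roughly what "~2600 seconds left" refers to above. The resolver address is an assumption:)

```python
# Check the A record for ptg.opendev.org and show how long the cached answer lives.
import dns.resolver

resolver = dns.resolver.Resolver()
# resolver.nameservers = ["127.0.0.1"]  # point at the local cache (e.g. unbound) if desired

answer = resolver.resolve("ptg.opendev.org", "A")
print("addresses:", [rr.address for rr in answer])
print("remaining TTL (seconds):", answer.rrset.ttl)
```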
23:15 <corvus_> clarkb: fetching the zuul-scheduler image from quay.io took 10m, which exceeded the startup timeout for one of the bootstrapping scripts
23:16 <corvus_> so, continuing today's theme i guess
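(One common workaround for that class of problem, sketched with the Docker SDK for Python: pre-pull the image before invoking the start script so a slow registry fetch doesn't eat into the service's startup timeout. The image name is the published Zuul scheduler image; whether this fits the bootstrapping scripts in question is an assumption:)

```python
# Pre-pull (or refresh) the image up front; this can take many minutes on a
# slow link, but it happens outside the service start path and its timeout.
import docker

client = docker.from_env()
image = client.images.pull("quay.io/zuul-ci/zuul-scheduler", tag="latest")
print("pulled", image.tags)
```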
23:18 <clarkb> corvus_: the third retry for the trixie build does seem to have run further than the first two attempts at least
23:19 <clarkb> and I'm under 1500 seconds on my dns cache timeout to confirm I fixed ptg.opendev.org (I could dig out my router credentials and restart unbound but meh)
23:34 <opendevreview> Merged opendev/zuul-providers master: Build actual trixie now that it is released  https://review.opendev.org/c/opendev/zuul-providers/+/958561
