Thursday, 2026-04-02

-@gerrit:opendev.org- Zuul merged on behalf of Brian Haley: [opendev/irc-meetings] 982430: Move neutron-drivers meeting earlier by 1 hour https://review.opendev.org/c/opendev/irc-meetings/+/982430 07:35
-@gerrit:opendev.org- Zuul merged on behalf of Michal Nasiadka: [opendev/irc-meetings] 982437: Move kolla meeting earlier by 1 hour https://review.opendev.org/c/opendev/irc-meetings/+/982437 07:36
-@gerrit:opendev.org- Adrian Vladu proposed: [openstack/project-config] 923044: x/cloudbase-init: move to GitHub https://review.opendev.org/c/openstack/project-config/+/923044 09:10
-@gerrit:opendev.org- Gregory Thiemonge proposed: [opendev/irc-meetings] 983191: Update meeting chair for Octavia https://review.opendev.org/c/opendev/irc-meetings/+/983191 12:14
-@gerrit:opendev.org- Zuul merged on behalf of Michal Nasiadka: [openstack/diskimage-builder] 982231: Add support for building Ubuntu Resolute (26.04) https://review.opendev.org/c/openstack/diskimage-builder/+/982231 13:00
@mnasiadka:matrix.org Ok, I’m around if we’re fine to do the mirror03.bhs1.ovh switch - if not I’ll be around on Tuesday (Easter is kicking in) 15:36
-@gerrit:opendev.org- Jeremy Stanley proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/983068 15:39
-@gerrit:opendev.org- Jeremy Stanley proposed: 15:39
- [opendev/system-config] 983134: Remove intermediate HTTPS layer for Gitea backends https://review.opendev.org/c/opendev/system-config/+/983134
- [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/983061
@fungicide:matrix.org i've deleted and reset the autohold on the dnm change 15:39
@clarkb:matrix.org mnasiadka: yes let's do that now 15:39
@clarkb:matrix.org I have approved https://review.opendev.org/c/opendev/zone-opendev.org/+/982700 15:40
-@gerrit:opendev.org- Zuul merged on behalf of Michal Nasiadka: [opendev/zone-opendev.org] 982700: Promote mirror.bhs1.ovh to new mirror03 https://review.opendev.org/c/opendev/zone-opendev.org/+/982700 15:47
-@gerrit:opendev.org- Clark Boylan proposed: [opendev/system-config] 983221: Drop Bionic package mirrors for Ubuntu, UCA, and Puppetlabs https://review.opendev.org/c/opendev/system-config/+/983221 15:49
@clarkb:matrix.org infra-root ^ I think that is the first step in the promised bionic cleanup 15:49
@clarkb:matrix.org corvus: for the zuul-provider side of cleanup is there a specific process we need to take to have zuul-launcher delete the old nodes and images? 15:50
@clarkb:matrix.org I'm thinking maybe we remove the labels themselves and remove the labels from the providers first? 15:51
@clarkb:matrix.org oh actually no we probably want to drop the image build and upload jobs first. Then the labels and labels from providers, then anything else? 15:51
@jim:acmegating.com > <@clarkb:matrix.org> oh actually no we probably want to drop the image build and upload jobs first. Then the labels and labels from providers, then anything else? 15:52
yes. then drop the image and label objects. oh, probably start with dropping the nodeset objects.
@clarkb:matrix.org got it. Nodesets -> builds -> labels and images -> anything else 15:53
@jim:acmegating.com yeah let's start with that.  i can probably review changes in a bit. 15:53
@jim:acmegating.com or write them 15:53
@clarkb:matrix.org I'm happy to write them. Helps aid my familiarity with the system. Then you can double check 15:54
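[Editor's note] The removal order agreed above maps onto three kinds of objects in the zuul-providers configuration. A rough, hypothetical sketch of what gets deleted at each step (object names and layout are illustrative, not copied from the actual opendev/zuul-providers repo):

```yaml
# Step 1: drop (or empty) the nodeset that jobs request.
- nodeset:
    name: ubuntu-bionic
    nodes:
      - name: primary
        label: ubuntu-bionic

# Step 2: drop the image definition, stopping new builds and uploads.
- image:
    name: ubuntu-bionic
    type: zuul

# Step 3: drop the label and the per-provider image/label references.
- label:
    name: ubuntu-bionic
```

Once the definitions are gone, zuul-launcher is expected to clean up the now-orphaned uploads and build artifacts on its own, as the launcher log excerpt later in this log suggests.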
-@gerrit:opendev.org- Clark Boylan proposed: 16:02
- [opendev/zuul-providers] 983222: Drop Ubuntu Bionic Nodesets https://review.opendev.org/c/opendev/zuul-providers/+/983222
- [opendev/zuul-providers] 983223: Drop Ubuntu Bionic image builds https://review.opendev.org/c/opendev/zuul-providers/+/983223
- [opendev/zuul-providers] 983224: Drop Bionic images and labels from our providers https://review.opendev.org/c/opendev/zuul-providers/+/983224
@clarkb:matrix.org something like that maybe. And I think we can probably do this in parallel with the mirror content cleanup 16:02
@clarkb:matrix.org mnasiadka: the bhs1 mirror resolves to 03 for me now. but the ttl is an hour so it may take a bit longer for other things depending on caches. That said it seems to be working from my desk 16:03
@jim:acmegating.com Clark: in https://review.opendev.org/983222 do we have one tenant complaining but others are okay? 16:37
@jim:acmegating.com i'm guessing the openstack tenant? 16:37
@clarkb:matrix.org corvus: yes looks like openstack-zuul-jobs is still using bionic 16:39
@clarkb:matrix.org I think starlingx may still be using it too? 16:39
@mnasiadka:matrix.org Clark: dns returns 03 on my side and it seems it works 16:40
@clarkb:matrix.org corvus: I think I'm to the point where pushing forward on this is probably reasonable. But maybe we want to figure out a way to handle that gracefully in zuul? I think we got lucky that the end result was +1 and if the reporting order had flipped it would be a -1? 16:40
@jim:acmegating.com Clark: yes, i agree that's a potential problem -- except that it's actually still okay since opendev doesn't have clean-check. 16:41
@clarkb:matrix.org heh if at first you don't succeed. Just recheck :) 16:42
@jim:acmegating.com i think https://review.opendev.org/983224 lost the race like you describe, but is still okay 16:42
@jim:acmegating.com well i mean we don't need to recheck, just +w 16:42
@jim:acmegating.com i guess i'm saying "other tenants -1ing config changes in zuul-providers is okay and expected, just ignore them" is a reasonable procedure for us to adopt now.  i agree that it could be better, but i don't think it's blocking us. 16:43
@clarkb:matrix.org ack 16:44
@clarkb:matrix.org and ya it is good signal to remind us that some things will break. But then we can decide we're comfortable with that anyway. Which I believe is the case for bionic 16:44
@jim:acmegating.com left a comment on https://review.opendev.org/983223 but i +2d the stack 16:45
@clarkb:matrix.org corvus: that is a good tip. I'll split that out into a change at the very end of the stack 16:46
@fungicide:matrix.org i want to say in the past we replaced the nodesets with dead entries so that jobs returned an obvious failure instead of just being skipped and users not realizing they had new tenant errors 16:49
-@gerrit:opendev.org- Clark Boylan proposed: 16:51
- [opendev/zuul-providers] 983223: Drop Ubuntu Bionic image builds https://review.opendev.org/c/opendev/zuul-providers/+/983223
- [opendev/zuul-providers] 983224: Drop Bionic images and labels from our providers https://review.opendev.org/c/opendev/zuul-providers/+/983224
- [opendev/zuul-providers] 983229: Cleanup Bionic DIB Element Details https://review.opendev.org/c/opendev/zuul-providers/+/983229
@jim:acmegating.com fungi: yeah, that's an option (i guess you mean that the jobs won't be defined due to config errors so users won't notice them not running) 16:52
@jim:acmegating.com should be same general method of operation with niz, so whichever way we want to go works 16:52
@clarkb:matrix.org can/should we have tenants do that themselves? 16:53
@clarkb:matrix.org eg if openstack wants openstack-zuul-jobs to be quiet add a nodeset in the openstack tenant? 16:53
@clarkb:matrix.org I'm mostly wondering what level of hand holding we need to continue to provide for an ancient (and largely broken due to ansible) system anyway 16:53
@fungicide:matrix.org yeah, i don't have an opinion either way, just remembering we did that with... maybe it was xenial removal? 16:54
@clarkb:matrix.org ya I seem to recall we did it at some point. And basically `nodes: []` causes the ansible tasks to fail because there is no node. I can amend the first change in the stack to do that 16:55
@fungicide:matrix.org if we do that, it would probably only be a temporary thing for like a month or two in order to give people time to notice they still had jobs relying on bionic 16:57
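[Editor's note] The `nodes: []` tombstone trick discussed above would look something like this (a hypothetical snippet, not the literal content of change 983222):

```yaml
# Tombstone nodeset: the definition still exists, so job configs that
# reference it stay valid, but any job using it gets zero nodes and
# fails visibly at run time instead of silently vanishing behind a
# tenant config error.
- nodeset:
    name: ubuntu-bionic
    nodes: []
```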
-@gerrit:opendev.org- Clark Boylan proposed: 16:58
- [opendev/zuul-providers] 983222: Set ubuntu-bionic nodeset to an empty list https://review.opendev.org/c/opendev/zuul-providers/+/983222
- [opendev/zuul-providers] 983223: Drop Ubuntu Bionic image builds https://review.opendev.org/c/opendev/zuul-providers/+/983223
- [opendev/zuul-providers] 983224: Drop Bionic images and labels from our providers https://review.opendev.org/c/opendev/zuul-providers/+/983224
- [opendev/zuul-providers] 983229: Cleanup Bionic DIB Element Details https://review.opendev.org/c/opendev/zuul-providers/+/983229
@clarkb:matrix.org done 16:58
@clarkb:matrix.org fungi: your gitea change to use http on the backend passed 17:00
@clarkb:matrix.org still waiting on anubis change results 17:00
@clarkb:matrix.org but we may have something to interact with on the held node that is representative soon 17:00
@jim:acmegating.com Clark: the (non-blocking) error on https://review.opendev.org/983224 tells us there are also bionic nodeset defs inside the openstack tenant, so we need to decide if we want to empty those or let them turn into config-errors 17:05
@jim:acmegating.com also the vexxhost tenant 17:05
@fungicide:matrix.org Clark: i think i got overzealous trimming jobs in my dnm change, so probably don't have a good held node after all 17:08
@clarkb:matrix.org fungi: oh heh 17:08
@clarkb:matrix.org corvus: I think we can probably turn those into config errors? 17:09
@fungicide:matrix.org i needed to keep the dependency for system-config-run-gitea after all, even though we're not changing any image content 17:09
@clarkb:matrix.org corvus: considering the alternative is to break them when the next change lands and the labels go away I think breaking them a step early is probably fine? 17:09
-@gerrit:opendev.org- Jeremy Stanley proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/983068 17:12
@clarkb:matrix.org fungi: looking at some of the failures for the anubis change I think we want to run the user agent checks before anubis in that vhost? 17:16
@fungicide:matrix.org the anubis change failed on legitimate issues though, i don't have the matrix well-known file serving right yet, but also our ua test is hitting anubis first so we need to decide whether we want to prioritize the ua filter 17:16
@clarkb:matrix.org I think there is still value in just straight up rejecting those hosts first 17:16
@clarkb:matrix.org yup 17:16
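[Editor's note] Running the user agent checks ahead of the Anubis proxying could be sketched roughly like this inside the backend vhost (the mod_rewrite and mod_proxy directives are real Apache ones, but the UA pattern and the Anubis listen port are assumptions, not taken from the actual system-config change):

```apache
# Reject known-bad user agents outright, before anything is proxied.
# mod_rewrite runs ahead of ProxyPass mapping, so [F] short-circuits.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "badbot|scrapything" [NC]
RewriteRule .* - [F]

# Only requests surviving the UA filter reach the Anubis challenge proxy.
ProxyPass "/" "http://127.0.0.1:8923/"
ProxyPassReverse "/" "http://127.0.0.1:8923/"
```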
@jim:acmegating.com Clark: yeah, maybe we draw the line at being helpful in zuul-providers and other tenants are on their own when the nodesets break.  that's what service-announce/discuss is for. 17:19
-@gerrit:opendev.org- Jeremy Stanley proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/983068 17:27
-@gerrit:opendev.org- Jeremy Stanley proposed: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/983061 17:27
-@gerrit:opendev.org- Jeremy Stanley marked as active: [openstack/project-config] 982179: Revert "Temporarily remove release docs semaphores" https://review.opendev.org/c/openstack/project-config/+/982179 17:28
@clarkb:matrix.org fungi: any reason to not approve https://review.opendev.org/c/openstack/project-config/+/982179 now? 17:34
@fungicide:matrix.org Clark: please do 17:40
-@gerrit:opendev.org- Zuul merged on behalf of Jeremy Stanley: [openstack/project-config] 982179: Revert "Temporarily remove release docs semaphores" https://review.opendev.org/c/openstack/project-config/+/982179 17:52
@clarkb:matrix.org fungi: it looks like corvus seems happy with the zuul-providers changes to remove bionic. Any chance you want to weigh in on those and decide if we should start merging things? Also https://review.opendev.org/c/opendev/system-config/+/983221 is the mirror reprepro side step 1 cleanup that happens before the manual cleanups 18:14
@clarkb:matrix.org I expect to be around all day until about 4pm. Then it's science fair night at the school and I need to help shuttle people around 18:15
@fungicide:matrix.org yeah, just a sec 18:15
@fungicide:matrix.org ooh, SCIENCE! 18:15
@clarkb:matrix.org and then the stack ending at https://review.opendev.org/c/opendev/system-config/+/982125/ is the gerrit updates stack that maybe we can proceed with tomorrow if things remain calm so we can restart gerrit 18:15
@clarkb:matrix.org we didn't do any experiments or projects this year due to illness in February. Will help out with the event itself 18:16
@fungicide:matrix.org aww, that's too bad, but understandable 18:17
@fungicide:matrix.org did we already drop bionic from ubuntu-ports i guess? 18:18
@fungicide:matrix.org ah yeah that was https://review.opendev.org/c/opendev/system-config/+/956115 back in august 18:18
@clarkb:matrix.org Yup there was much less usage due to being added later I think 18:20
@clarkb:matrix.org The arm adoption was after bionic 18:20
-@gerrit:opendev.org- Zuul merged on behalf of Clark Boylan: [opendev/zuul-providers] 983222: Set ubuntu-bionic nodeset to an empty list https://review.opendev.org/c/opendev/zuul-providers/+/983222 18:21
-@gerrit:opendev.org- Zuul merged on behalf of Clark Boylan: [opendev/zuul-providers] 983223: Drop Ubuntu Bionic image builds https://review.opendev.org/c/opendev/zuul-providers/+/983223 18:23
-@gerrit:opendev.org- Zuul merged on behalf of Clark Boylan: [opendev/zuul-providers] 983224: Drop Bionic images and labels from our providers https://review.opendev.org/c/opendev/zuul-providers/+/983224 18:26
@clarkb:matrix.org Here we go :) 18:28
@clarkb:matrix.org Label ubuntu-bionic is not found errors have shown up in the OpenStack tenant as expected 18:35
@jim:acmegating.com looking at launcher logs, it looks like some bionic nodes were used around 2:00 - 3:00 utc today 18:42
@jim:acmegating.com 18:43
```
2026-04-02 18:27:21,689 INFO zuul.Launcher: [upload: bcd9c6650a8c49a98a207b65b3c5b101] Deleting image upload <ImageUpload bcd9c6650a8c49a98a207b65b3c5b101 state: ready attempt: 0 endpoint: openmetal/openmetal-IAD3 artifact: f26f89c347024880865e4d0247ec1cb7 validated: True external_id: 3068dd28-b369-4665-baf6-fba0ecc6e632>
2026-04-02 18:27:22,972 INFO zuul.Launcher: [upload: 0b963e6f0d6e498587d4c5e663ea2209] Deleting image upload <ImageUpload 0b963e6f0d6e498587d4c5e663ea2209 state: ready attempt: 0 endpoint: vexxhost/vexxhost-ca-ymq-1 artifact: f26f89c347024880865e4d0247ec1cb7 validated: True external_id: 94909497-a3ad-4b30-810e-9abf9b3b6815>
2026-04-02 18:27:26,575 INFO zuul.Launcher: Deleting image build artifact with no uploads: <ImageBuildArtifact f26f89c347024880865e4d0247ec1cb7 state: ready canonical_name: opendev.org%2Fopendev%2Fzuul-providers/ubuntu-bionic build_uuid: 2ec4bac93cc34fe281e5ca3738650200 validated: True>
```
this is promising.
@fungicide:matrix.org well, we knew as of a couple days ago that starlingx's jobs were still set to use ubuntu-bionic nodes (and presumably overriding zuul's `ansible_version`) 18:43
@jim:acmegating.com we deleted 6 build artifacts like that 18:43
@clarkb:matrix.org Oh ya I guess I should check all the clouds for bionic images after lunch 18:43
-@gerrit:opendev.org- Jeremy Stanley proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/983068 19:09
-@gerrit:opendev.org- Jeremy Stanley proposed: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/983061 19:09
@fungicide:matrix.org yay! anubis change for gitea is finally passing too 19:56
@clarkb:matrix.org nice 19:57
@fungicide:matrix.org though at this point i almost wonder if we shouldn't just point anubis straight at port 3000 instead of wrapping that in another apache vhost, the only thing left in that vhost besides proxying is `ProxyAddHeaders Off` 19:58
@fungicide:matrix.org maybe gitea doesn't even need that workaround any longer? 19:58
@fungicide:matrix.org i guess there's no better time to try it than now 20:01
-@gerrit:opendev.org- Jeremy Stanley proposed: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/983061 20:05
-@gerrit:opendev.org- Jeremy Stanley proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/983068 20:05
@fungicide:matrix.org we'll want to check the gitea logging on that, in case the workaround was actually still needed 20:05
@clarkb:matrix.org fungi: oh ya we can just have apache -> anubis -> gitea without apache in the middle again? 20:25
@fungicide:matrix.org that's what we're about to find out 20:26
@fungicide:matrix.org that way we only use up half as many worker slots in apache too, in theory 20:27
@fungicide:matrix.org okay, it didn't break tests at least 20:43
@fungicide:matrix.org Clark: do you recall what the logging problem in gitea was with the presence of x-forwarded-for headers? or maybe anubis just doesn't add them by default? https://zuul.opendev.org/t/openstack/build/5850d8afbf6d4f66b61b739c0fbdce27/log/gitea99.opendev.org/docker/gitea-docker_gitea-web_1.txt looks fairly normal to me 20:45
@fungicide:matrix.org 50.56.159.67 is the held haproxy server, for anyone who wants to test things out 20:47
@fungicide:matrix.org when adding `50.56.159.67 opendev.org` to my /etc/hosts and then browsing https://opendev.org/ i did get an initial anubis graphic 20:49
@fungicide:matrix.org and `GIT_SSL_NO_VERIFY=1 git clone https://opendev.org/opendev/system-config` worked for me too 20:51
@clarkb:matrix.org fungi: I think the problem was that it changed the contents of the logs to include misleading data 20:54
@clarkb:matrix.org fungi: I don't remember the specifics but I want to say it made it harder/impossible to trace a connection from the haproxy to the gitea backend 20:55
@clarkb:matrix.org fungi: so maybe make a connection like that and see if you can trace it through? 20:55
@clarkb:matrix.org fungi: looks like if you don't edit /etc/hosts then the anubis redirect stuff breaks and says this is not allowed. 20:56
@clarkb:matrix.org Just calling that out because I wonder if that breaks our ability to socks proxy through the haproxy and talk to a specific backend. That might be another thing to test 20:56
@fungicide:matrix.org we can socks proxy to 3000/tcp right? 20:57
@fungicide:matrix.org (or ssh forward even) 20:57
@clarkb:matrix.org yes, the problem with that is how do you get your browser to resolve localhost foo as gitea09 20:58
@clarkb:matrix.org it's easy if you let the other side resolve it for you 20:58
@fungicide:matrix.org i usually do that by editing /etc/hosts but didn't know there was any other option 20:58
@clarkb:matrix.org fungi: what i do is ssh -D 1080 gitea-lb03.opendev.org. Then update my firefox proxy configs to use the socks proxy at localhost:1080 (you can optionally use one of those js functions to set this on a host by host basis). Then enter https://gitea09.opendev.org and you'll get that specific backend 21:00
@clarkb:matrix.org and then everything works. ssl, redirects etc etc 21:00
@clarkb:matrix.org but yes you can edit /etc/hosts as well and say gitea09.opendev.org is a 127.0.0.1 and hit :4443 mapped to :443 on the gitea 21:01
@fungicide:matrix.org as for the logging, i'm not really sure what i'm looking for that would be wrong. it does record the client source port in the gitea-web log, though i think tracing it through anubis is the bigger challenge regardless of whether or not we have apache in between them 21:02
@fungicide:matrix.org because the source port that gitea-web sees will be on a socket from anubis, so won't be the same source port as apache used to connect to anubis 21:03
@clarkb:matrix.org right if we ignore anubis for a minute what we want to do is be able to look at haproxy logs and get the port mapping. Take that to the apache logs and get the connection for that port to map the next step. Then go to the gitea log and find the final request 21:03
@fungicide:matrix.org but we can still match up the request content that both apache and gitea-web logged 21:04
@clarkb:matrix.org once we mix in anubis we'd need to follow that through the anubis log and back to apache 21:04
@clarkb:matrix.org the more I think about it the more I think the problem is x forwarded for was localhost or similar because haproxy is layer 3/4 so no http headers 21:05
@fungicide:matrix.org what i'm saying is that anubis isn't (currently) logging source ports for the requests it receives from apache, nor the source ports it uses in connecting to gitea-web 21:05
@clarkb:matrix.org and gitea's logging was doing magic on that thinking the data was sufficient. if you don't have it I think it logs more of the raw connection details which ended up being more important? 21:05
@clarkb:matrix.org ah 21:05
@fungicide:matrix.org we could look at the apache logs for request bodies and match them to the request bodies that gitea-web logs, with approximate timestamp correlation, but not directly by tracing source ports at each hop through the chain 21:06
@clarkb:matrix.org an easy thing to check would be to grab a log line from a prod server for a request in /var/gitea/logs/access.log and compare to what is on the held node to see if they are different formats or data sets now that you're not removing x forwarded for 21:07
@clarkb:matrix.org just to answer if there is a difference 21:07
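[Editor's note] The approximate-timestamp correlation fungi describes can be illustrated with a toy sketch. The log formats here are simplified stand-ins, not the actual apache or gitea-web layouts; the point is only that matching on request content within a small time window works when source ports can't be traced across hops.

```python
from datetime import datetime, timedelta

# Simplified stand-in log entries: (ISO timestamp, request line).
APACHE = [("2026-04-02T21:00:01", "GET /opendev/system-config/info/refs")]
GITEA = [("2026-04-02T21:00:02", "GET /opendev/system-config/info/refs"),
         ("2026-04-02T21:05:00", "GET /opendev/zuul/info/refs")]

def correlate(apache, gitea, window_s=5):
    """Pair each apache request with gitea entries for the same request
    line whose timestamp falls within window_s seconds."""
    matches = []
    for a_ts, a_req in apache:
        a_time = datetime.fromisoformat(a_ts)
        for g_ts, g_req in gitea:
            g_time = datetime.fromisoformat(g_ts)
            if a_req == g_req and abs(g_time - a_time) <= timedelta(seconds=window_s):
                matches.append((a_ts, g_ts, a_req))
    return matches

print(correlate(APACHE, GITEA))
```

The second gitea entry is ignored because its request line never appears in the apache log; ambiguity only arises when identical requests land inside the same window, which is exactly why port tracing is nicer when it's available.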

Generated by irclog2html.py 4.1.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!