| -@gerrit:opendev.org- Zuul merged on behalf of Brian Haley: [opendev/irc-meetings] 982430: Move neutron-drivers meeting earlier by 1 hour https://review.opendev.org/c/opendev/irc-meetings/+/982430 | 07:35 | |
|---|---|---|
| -@gerrit:opendev.org- Zuul merged on behalf of Michal Nasiadka: [opendev/irc-meetings] 982437: Move kolla meeting earlier by 1 hour https://review.opendev.org/c/opendev/irc-meetings/+/982437 | 07:36 | |
| -@gerrit:opendev.org- Adrian Vladu proposed: [openstack/project-config] 923044: x/cloudbase-init: move to GitHub https://review.opendev.org/c/openstack/project-config/+/923044 | 09:10 | |
| -@gerrit:opendev.org- Gregory Thiemonge proposed: [opendev/irc-meetings] 983191: Update meeting chair for Octavia https://review.opendev.org/c/opendev/irc-meetings/+/983191 | 12:14 | |
| -@gerrit:opendev.org- Zuul merged on behalf of Michal Nasiadka: [openstack/diskimage-builder] 982231: Add support for building Ubuntu Resolute (26.04) https://review.opendev.org/c/openstack/diskimage-builder/+/982231 | 13:00 | |
| @mnasiadka:matrix.org | Ok, I’m around if we’re fine to do the mirror03.bhs1.ovh switch - if not I’ll be around on Tuesday (Easter is kicking in) | 15:36 |
| -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/983068 | 15:39 | |
| -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: | 15:39 | |
| - [opendev/system-config] 983134: Remove intermediate HTTPS layer for Gitea backends https://review.opendev.org/c/opendev/system-config/+/983134 | ||
| - [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/983061 | ||
| @fungicide:matrix.org | i've deleted and reset the autohold on the dnm change | 15:39 |
| @clarkb:matrix.org | mnasiadka: yes lets do that now | 15:39 |
| @clarkb:matrix.org | I have approved https://review.opendev.org/c/opendev/zone-opendev.org/+/982700 | 15:40 |
| -@gerrit:opendev.org- Zuul merged on behalf of Michal Nasiadka: [opendev/zone-opendev.org] 982700: Promote mirror.bhs1.ovh to new mirror03 https://review.opendev.org/c/opendev/zone-opendev.org/+/982700 | 15:47 | |
| -@gerrit:opendev.org- Clark Boylan proposed: [opendev/system-config] 983221: Drop Bionic package mirrors for Ubuntu, UCA, and Puppetlabs https://review.opendev.org/c/opendev/system-config/+/983221 | 15:49 | |
| @clarkb:matrix.org | infra-root ^ I think that is the first step in the promised bionic cleanup | 15:49 |
| @clarkb:matrix.org | corvus: for the zuul-provider side of cleanup is there a specific process we need to take to have zuul-launcher delete the old nodes and images? | 15:50 |
| @clarkb:matrix.org | I'm thinking maybe we remove the labels themselves and remove the labels from the providers first? | 15:51 |
| @clarkb:matrix.org | oh actually no we probably want to drop the image build and upload jobs first. Then the labels and labels from providers, Then anything else? | 15:51 |
| @jim:acmegating.com | > <@clarkb:matrix.org> oh actually no we probably want to drop the image build and upload jobs first. Then the labels and labels from providers, Then anything else? | 15:52 |
| yes. then drop the image and label objects. oh, probably start with dropping the nodeset objects. | ||
| @clarkb:matrix.org | got it. Nodesets -> builds -> labels and images -> anything else | 15:53 |
| @jim:acmegating.com | yeah let's start with that. i can probably review changes in a bit. | 15:53 |
| @jim:acmegating.com | or write them | 15:53 |
| @clarkb:matrix.org | I'm happy to write them. Helps aid my familiarity with the system. Then you can double check | 15:54 |
| -@gerrit:opendev.org- Clark Boylan proposed: | 16:02 | |
| - [opendev/zuul-providers] 983222: Drop Ubuntu Bionic Nodesets https://review.opendev.org/c/opendev/zuul-providers/+/983222 | ||
| - [opendev/zuul-providers] 983223: Drop Ubuntu Bionic image builds https://review.opendev.org/c/opendev/zuul-providers/+/983223 | ||
| - [opendev/zuul-providers] 983224: Drop Bionic images and labels from our providers https://review.opendev.org/c/opendev/zuul-providers/+/983224 | ||
| @clarkb:matrix.org | something like that maybe. And I think we can probably do this in parallel with the mirror content cleanup | 16:02 |
| @clarkb:matrix.org | mnasiadka: the bhs1 mirror resolves to 03 for me now, but the ttl is an hour so it may take a bit longer for other things depending on caches. That said it seems to be working from my desk | 16:03 |
| @jim:acmegating.com | Clark: in https://review.opendev.org/983222 do we have one tenant complaining but others are okay? | 16:37 |
| @jim:acmegating.com | i'm guessing the openstack tenant? | 16:37 |
| @clarkb:matrix.org | corvus: yes looks like openstack-zuul-jobs is still using bionic | 16:39 |
| @clarkb:matrix.org | I think starlingx may still be using it too? | 16:39 |
| @mnasiadka:matrix.org | Clark: dns returns 03 on my side and it seems it works | 16:40 |
| @clarkb:matrix.org | corvus: I think I'm to the point where pushing forward on this is probably reasonable. But maybe we want to figure out a way to handle that gracefully in zuul? I think we got lucky that the end result was +1 and if the reporting order had flipped it would have been a -1? | 16:40 |
| @jim:acmegating.com | Clark: yes, i agree that's a potential problem -- except that it's actually still okay since opendev doesn't have clean-check. | 16:41 |
| @clarkb:matrix.org | heh if at first you don't succeed. Just recheck :) | 16:42 |
| @jim:acmegating.com | i think https://review.opendev.org/983224 lost the race like you describe, but is still okay | 16:42 |
| @jim:acmegating.com | well i mean we don't need to recheck, just +w | 16:42 |
| @jim:acmegating.com | i guess i'm saying "other tenants -1ing config changes in zuul-providers is okay and expected, just ignore them" is a reasonable procedure for us to adopt now. i agree that it could be better, but i don't think it's blocking us. | 16:43 |
| @clarkb:matrix.org | ack | 16:44 |
| @clarkb:matrix.org | and ya it is a good signal to remind us that some things will break. But then we can decide we're comfortable with that anyway. Which I believe is the case for bionic | 16:44 |
| @jim:acmegating.com | left a comment on https://review.opendev.org/983223 but i +2d the stack | 16:45 |
| @clarkb:matrix.org | corvus: that is a good tip. I'll split that out into a change at the very end of the stack | 16:46 |
| @fungicide:matrix.org | i want to say in the past we replaced the nodesets with dead entries so that jobs returned an obvious failure instead of just being skipped and users not realizing they had new tenant errors | 16:49 |
| -@gerrit:opendev.org- Clark Boylan proposed: | 16:51 | |
| - [opendev/zuul-providers] 983223: Drop Ubuntu Bionic image builds https://review.opendev.org/c/opendev/zuul-providers/+/983223 | ||
| - [opendev/zuul-providers] 983224: Drop Bionic images and labels from our providers https://review.opendev.org/c/opendev/zuul-providers/+/983224 | ||
| - [opendev/zuul-providers] 983229: Cleanup Bionic DIB Element Details https://review.opendev.org/c/opendev/zuul-providers/+/983229 | ||
| @jim:acmegating.com | fungi: yeah, that's an option (i guess you mean that the jobs won't be defined due to config errors so users won't notice them not running) | 16:52 |
| @jim:acmegating.com | should be same general method of operation with niz, so whichever way we want to go works | 16:52 |
| @clarkb:matrix.org | can/should we have tenants do that themselves? | 16:53 |
| @clarkb:matrix.org | eg if openstack wants openstack-zuul-jobs to be quiet add a nodeset in the openstack tenant? | 16:53 |
| @clarkb:matrix.org | I'm mostly wondering what level of hand holding we need to continue to provide for an ancient (and largely broken due to ansible) system anyway | 16:53 |
| @fungicide:matrix.org | yeah, i don't have an opinion either way, just remembering we did that with... maybe it was xenial removal? | 16:54 |
| @clarkb:matrix.org | ya I seem to recall we did it at some point. And basically `nodes: []` causes the ansible tasks to fail because there is no node. I can amend the first change in the stack to do that | 16:55 |
| @fungicide:matrix.org | if we do that, it would probably only be a temporary thing for like a month or two in order to give people time to notice they didn't realize they had any jobs relying on bionic still | 16:57 |
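The "dead nodeset" approach discussed above can be sketched as a Zuul config fragment. This is a hypothetical illustration based on the change titles in this log, not the actual content of 983222:

```yaml
# Sketch of the "empty nodeset" approach: keep the nodeset name defined
# so job configurations that reference it still parse, but give it no
# nodes, so any job requesting it fails visibly instead of silently
# disappearing from the tenant.
- nodeset:
    name: ubuntu-bionic
    nodes: []
```

With `nodes: []` the job's Ansible tasks have no host to run against and fail, which is the obvious signal to users that the discussion above is after.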
| -@gerrit:opendev.org- Clark Boylan proposed: | 16:58 | |
| - [opendev/zuul-providers] 983222: Set ubuntu-bionic nodeset to an empty list https://review.opendev.org/c/opendev/zuul-providers/+/983222 | ||
| - [opendev/zuul-providers] 983223: Drop Ubuntu Bionic image builds https://review.opendev.org/c/opendev/zuul-providers/+/983223 | ||
| - [opendev/zuul-providers] 983224: Drop Bionic images and labels from our providers https://review.opendev.org/c/opendev/zuul-providers/+/983224 | ||
| - [opendev/zuul-providers] 983229: Cleanup Bionic DIB Element Details https://review.opendev.org/c/opendev/zuul-providers/+/983229 | ||
| @clarkb:matrix.org | done | 16:58 |
| @clarkb:matrix.org | fungi: your gitea change to use http on the backend passed | 17:00 |
| @clarkb:matrix.org | still waiting on anubis change results | 17:00 |
| @clarkb:matrix.org | but we may have something to interact with on the held node that is representative soon | 17:00 |
| @jim:acmegating.com | Clark: the (non-blocking) error on https://review.opendev.org/983224 tells us there are also bionic nodeset defs inside the openstack tenant, so we need to decide if we want to empty those or let them turn into config-errors | 17:05 |
| @jim:acmegating.com | also the vexxhost tenant | 17:05 |
| @fungicide:matrix.org | Clark: i think i got overzealous trimming jobs in my dnm change, so probably don't have a good held node after all | 17:08 |
| @clarkb:matrix.org | fungi: oh heh | 17:08 |
| @clarkb:matrix.org | corvus: I think we can probably turn those into config errors? | 17:09 |
| @fungicide:matrix.org | i needed to keep the dependency for system-config-run-gitea after all, even though we're not changing any image content | 17:09 |
| @clarkb:matrix.org | corvus: considering the alternative is to break them when the next change lands and the labels go away I think breaking them a step early is probably fine? | 17:09 |
| -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/983068 | 17:12 | |
| @clarkb:matrix.org | fungi: looking at some of the failures for the anubis change I think we want to run the user agent checks before anubis in that vhost? | 17:16 |
| @fungicide:matrix.org | the anubis change failed on legitimate issues though, i don't have the matrix well-known file serving right yet, but also our ua test is hitting anubis first so we need to decide whether we want to prioritize the ua filter | 17:16 |
| @clarkb:matrix.org | I think there is still value in just straight up rejecting those hosts first | 17:16 |
| @clarkb:matrix.org | yup | 17:16 |
| @jim:acmegating.com | Clark: yeah, maybe we draw the line at being helpful in zuul-providers and other tenants are on their own when the nodesets break. that's what service-announce/discuss is for. | 17:19 |
| -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/983068 | 17:27 | |
| -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/983061 | 17:27 | |
| -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org marked as active: [openstack/project-config] 982179: Revert "Temporarily remove release docs semaphores" https://review.opendev.org/c/openstack/project-config/+/982179 | 17:28 | |
| @clarkb:matrix.org | fungi: any reason to not approve https://review.opendev.org/c/openstack/project-config/+/982179 now? | 17:34 |
| @fungicide:matrix.org | Clark: please do | 17:40 |
| -@gerrit:opendev.org- Zuul merged on behalf of Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org: [openstack/project-config] 982179: Revert "Temporarily remove release docs semaphores" https://review.opendev.org/c/openstack/project-config/+/982179 | 17:52 | |
| @clarkb:matrix.org | fungi: it looks like corvus seems happy with the zuul-providers changes to remove bionic. Any chance you want to weigh in on those and decide if we should start merging things? Also https://review.opendev.org/c/opendev/system-config/+/983221 is the mirror reprepro side step 1 cleanup that happens before the manual cleanups | 18:14 |
| @clarkb:matrix.org | I expect to be around all day until about 4pm. Then it's science fair night at the school and I need to help shuttle people around | 18:15 |
| @fungicide:matrix.org | yeah, just a sec | 18:15 |
| @fungicide:matrix.org | ooh, SCIENCE! | 18:15 |
| @clarkb:matrix.org | and then the stack ending at https://review.opendev.org/c/opendev/system-config/+/982125/ is the gerrit updates stack that maybe we can proceed with tomorrow if things remain calm so we can restart gerrit | 18:15 |
| @clarkb:matrix.org | we didn't do any experiments or projects this year due to illness in February. Will help out with the event itself | 18:16 |
| @fungicide:matrix.org | aww, that's too bad, but understandable | 18:17 |
| @fungicide:matrix.org | did we already drop bionic from ubuntu-ports i guess? | 18:18 |
| @fungicide:matrix.org | ah yeah that was https://review.opendev.org/c/opendev/system-config/+/956115 back in august | 18:18 |
| @clarkb:matrix.org | Yup there was much less usage due to being added later I think | 18:20 |
| @clarkb:matrix.org | The arm adoption was after bionic | 18:20 |
| -@gerrit:opendev.org- Zuul merged on behalf of Clark Boylan: [opendev/zuul-providers] 983222: Set ubuntu-bionic nodeset to an empty list https://review.opendev.org/c/opendev/zuul-providers/+/983222 | 18:21 | |
| -@gerrit:opendev.org- Zuul merged on behalf of Clark Boylan: [opendev/zuul-providers] 983223: Drop Ubuntu Bionic image builds https://review.opendev.org/c/opendev/zuul-providers/+/983223 | 18:23 | |
| -@gerrit:opendev.org- Zuul merged on behalf of Clark Boylan: [opendev/zuul-providers] 983224: Drop Bionic images and labels from our providers https://review.opendev.org/c/opendev/zuul-providers/+/983224 | 18:26 | |
| @clarkb:matrix.org | Here we go :) | 18:28 |
| @clarkb:matrix.org | Label ubuntu-bionic is not found errors have shown up in the Openstack tenant as expected | 18:35 |
| @jim:acmegating.com | looking at launcher logs, it looks like some bionic nodes were used around 2:00-3:00 UTC today | 18:42 |
| @jim:acmegating.com | ``` | 18:43 |
| 2026-04-02 18:27:21,689 INFO zuul.Launcher: [upload: bcd9c6650a8c49a98a207b65b3c5b101] Deleting image upload <ImageUpload bcd9c6650a8c49a98a207b65b3c5b101 state: ready attempt: 0 endpoint: openmetal/openmetal-IAD3 artifact: f26f89c347024880865e4d0247ec1cb7 validated: True external_id: 3068dd28-b369-4665-baf6-fba0ecc6e632> | ||
| 2026-04-02 18:27:22,972 INFO zuul.Launcher: [upload: 0b963e6f0d6e498587d4c5e663ea2209] Deleting image upload <ImageUpload 0b963e6f0d6e498587d4c5e663ea2209 state: ready attempt: 0 endpoint: vexxhost/vexxhost-ca-ymq-1 artifact: f26f89c347024880865e4d0247ec1cb7 validated: True external_id: 94909497-a3ad-4b30-810e-9abf9b3b6815> | ||
| 2026-04-02 18:27:26,575 INFO zuul.Launcher: Deleting image build artifact with no uploads: <ImageBuildArtifact f26f89c347024880865e4d0247ec1cb7 state: ready canonical_name: opendev.org%2Fopendev%2Fzuul-providers/ubuntu-bionic build_uuid: 2ec4bac93cc34fe281e5ca3738650200 validated: True> | ||
| ``` | ||
| this is promising. | ||
| @fungicide:matrix.org | well, we knew as of a couple days ago that starlingx's jobs were still set to use ubuntu-bionic nodes (and presumably overriding zuul's `ansible_version`) | 18:43 |
| @jim:acmegating.com | we deleted 6 build artifacts like that | 18:43 |
| @clarkb:matrix.org | Oh ya I guess I should check all the clouds for bionic images after lunch | 18:43 |
| -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/983068 | 19:09 | |
| -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/983061 | 19:09 | |
| @fungicide:matrix.org | yay! anubis change for gitea is finally passing too | 19:56 |
| @clarkb:matrix.org | nice | 19:57 |
| @fungicide:matrix.org | though at this point i almost wonder if we shouldn't just point anubis straight at port 3000 instead of wrapping that in another apache vhost, the only thing left in that vhost besides proxying is `ProxyAddHeaders Off` | 19:58 |
| @fungicide:matrix.org | maybe gitea doesn't even need that workaround any longer? | 19:58 |
| @fungicide:matrix.org | i guess there's no better time to try it than now | 20:01 |
| -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: [opendev/system-config] 983061: Apply Anubis to the Gitea backend servers https://review.opendev.org/c/opendev/system-config/+/983061 | 20:05 | |
| -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed wip: [opendev/system-config] 983068: DNM: Break Gitea testing so we can hold a node https://review.opendev.org/c/opendev/system-config/+/983068 | 20:05 | |
| @fungicide:matrix.org | we'll want to check the gitea logging on that, in case the workaround was actually still needed | 20:05 |
| @clarkb:matrix.org | fungi: oh ya we can just have apache -> anubis -> gitea without apache in the middle again? | 20:25 |
| @fungicide:matrix.org | that's what we're about to find out | 20:26 |
| @fungicide:matrix.org | that way we only use up half as many worker slots in apache too, in theory | 20:27 |
| @fungicide:matrix.org | okay, it didn't break tests at least | 20:43 |
| @fungicide:matrix.org | Clark: do you recall what the logging problem in gitea was with the presence of x-forwarded-for headers? or maybe anubis just doesn't add them by default? https://zuul.opendev.org/t/openstack/build/5850d8afbf6d4f66b61b739c0fbdce27/log/gitea99.opendev.org/docker/gitea-docker_gitea-web_1.txt looks fairly normal to me | 20:45 |
| @fungicide:matrix.org | 50.56.159.67 is the held haproxy server, for anyone who wants to test things out | 20:47 |
| @fungicide:matrix.org | when adding `50.56.159.67 opendev.org` to my /etc/hosts and then browsing https://opendev.org/ i did get an initial anubis graphic | 20:49 |
| @fungicide:matrix.org | and `GIT_SSL_NO_VERIFY=1 git clone https://opendev.org/opendev/system-config` worked for me too | 20:51 |
| @clarkb:matrix.org | fungi: I think the problem was that it changed the contents of the logs to include misleading data | 20:54 |
| @clarkb:matrix.org | fungi: I don't remember the specifics but I want to say it made it harder/impossible to trace a connection from the haproxy to the gitea backend | 20:55 |
| @clarkb:matrix.org | fungi: so maybe make a connection like that and see if you can trace it through? | 20:55 |
| @clarkb:matrix.org | fungi: looks like if you don't edit /etc/hosts then the anubis redirect stuff breaks and says this is not allowed. | 20:56 |
| @clarkb:matrix.org | Just calling that out because I wonder if that breaks our ability to socks proxy through the haproxy and talk to a specific backend. That might be another thing to test | 20:56 |
| @fungicide:matrix.org | we can socks proxy to 3000/tcp right? | 20:57 |
| @fungicide:matrix.org | (or ssh forward even) | 20:57 |
| @clarkb:matrix.org | yes, the problem with that is how do you get your browser to resolve localhost foo as gitea09 | 20:58 |
| @clarkb:matrix.org | its easy if you let the other side resolve it for you | 20:58 |
| @fungicide:matrix.org | i usually do that by editing /etc/hosts but didn't know there was any other option | 20:58 |
| @clarkb:matrix.org | fungi: what i do is ssh -D 1080 gitea-lb03.opendev.org. Then update my firefox proxy configs to use the socks proxy at localhost:1080 (you can optionally use one of those js functions to set this on a host by host basis). Then enter https://gitea09.opendev.org and you'll get that specific backend | 21:00 |
| @clarkb:matrix.org | and then everything works. ssl, redirects etc etc | 21:00 |
| @clarkb:matrix.org | but yes you can edit /etc/hosts as well and say gitea09.opendev.org is a 127.0.0.1 and hit :4443 mapped to :443 on the gitea | 21:01 |
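The two backend-access approaches described above can be summarized as a workflow sketch. The hostnames come from the conversation; the local port numbers and the exact forward spec for option 2 are assumptions:

```
# Option 1: SOCKS proxy via the load balancer, letting the far side
# resolve the backend name for you.
ssh -D 1080 gitea-lb03.opendev.org
#   Then configure the browser to use a SOCKS5 proxy at localhost:1080
#   (with remote DNS enabled) and browse https://gitea09.opendev.org
#   to land on that specific backend; TLS and redirects work normally.

# Option 2: local port forward plus an /etc/hosts override.
ssh -L 4443:localhost:443 gitea09.opendev.org
#   Add "127.0.0.1 gitea09.opendev.org" to /etc/hosts and browse
#   https://gitea09.opendev.org:4443/ (certificate name still matches,
#   but the nonstandard port shows up in any absolute URLs).
```

Option 1 avoids touching /etc/hosts, which matters here because the Anubis redirect handling was observed to break when the name does not resolve to the expected host.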
| @fungicide:matrix.org | as for the logging, i'm not really sure what i'm looking for that would be wrong. it does record the client source port in the gitea-web log, though i think tracing it through anubis is the bigger challenge regardless of whether or not we have apache in between them | 21:02 |
| @fungicide:matrix.org | because the source port that gitea-web sees will be on a socket from anubis, so won't be the same source port as apache used to connect to anubis | 21:03 |
| @clarkb:matrix.org | right if we ignore anubis for a minute what we want to do is be able to look at haproxy logs and get the port mapping. Take that to the apache logs and get the connection for that port to map the next step. Then go to the gitea log and find the final request | 21:03 |
| @fungicide:matrix.org | but we can still match up the request content that both apache and gitea-web logged | 21:04 |
| @clarkb:matrix.org | once we mix in anubis we'd need to follow that through the anubis log and back to apache | 21:04 |
| @clarkb:matrix.org | the more I think about it the more I think the problem is x-forwarded-for was localhost or similar because haproxy is layer 3/4 so no http headers | 21:05 |
| @fungicide:matrix.org | what i'm saying is that anubis isn't (currently) logging source ports for the requests it receives from apache, nor the source ports it uses in connecting to gitea-web | 21:05 |
| @clarkb:matrix.org | and gitea's logging was doing magic on that thinking the data was sufficient. if you don't have it I think it logs more of the raw connection details which ended up being more important? | 21:05 |
| @clarkb:matrix.org | ah | 21:05 |
| @fungicide:matrix.org | we could look at the apache logs for request bodies and match them to the request bodies that gitea-web logs, with approximate timestamp correlation, but not directly by tracing source ports at each hop through the chain | 21:06 |
| @clarkb:matrix.org | an easy thing to check would be to grab a log line from a prod server for a request in /var/gitea/logs/access.log and compare to what is on the held node to see if they are different formats or data sets now that you're not removing x forwarded for | 21:07 |
| @clarkb:matrix.org | just to answer if there is a difference | 21:07 |
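The approximate-timestamp correlation fungi describes (matching request content across hops when source ports cannot be traced through Anubis) could look something like the following. The entry format here is a simplified placeholder, not the real apache or gitea log format:

```python
from datetime import datetime, timedelta

def correlate(apache_entries, gitea_entries, window=timedelta(seconds=2)):
    """Match requests across two log sources by request line plus a
    timestamp window. Each entry is a (datetime, request_line) tuple.
    Returns (apache_time, gitea_time, request_line) triples."""
    matches = []
    for a_time, a_req in apache_entries:
        for g_time, g_req in gitea_entries:
            # Same request line, logged close enough together in time.
            if a_req == g_req and abs(g_time - a_time) <= window:
                matches.append((a_time, g_time, a_req))
                break
    return matches

# Illustrative entries: the second gitea line is the same request path
# five minutes later, so it falls outside the window and is not matched.
apache = [(datetime(2026, 4, 2, 20, 45, 1),
           "GET /opendev/system-config/info/refs")]
gitea = [(datetime(2026, 4, 2, 20, 45, 2),
          "GET /opendev/system-config/info/refs"),
         (datetime(2026, 4, 2, 20, 50, 0),
          "GET /opendev/system-config/info/refs")]
print(correlate(apache, gitea))
```

This is only a coarse heuristic: identical request lines arriving within the same window from different clients will be conflated, which is exactly why port-level tracing was preferable when it worked.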
Generated by irclog2html.py 4.1.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!