Monday, 2025-09-15

zigoHi. I want to rewrite the code that's doing this:08:10
zigohttp://osbpo.debian.net/deb-status/08:10
zigoIs there a better thing to do than parsing the HTML file from, let's say, https://releases.openstack.org/flamingo/index.html? Is there a JSON output somewhere of all artifacts from a release, for example?08:10
fricklerzigo: the source data for that page should be all yaml in https://opendev.org/openstack/releases/src/branch/master/deliverables/flamingo , maybe some of the release tooling https://opendev.org/openstack/releases/src/branch/master/openstack_releases/cmds can help you deal with it?09:45
zigofrickler: Thanks ! :)10:20
zigofrickler: Are you suggesting that my tool should just fetch the git and parse the yaml files?10:21
opendevreviewyatin proposed zuul/zuul-jobs master: [WIP] Make fips setup compatible to 10-stream  https://review.opendev.org/c/zuul/zuul-jobs/+/96120810:30
fricklerzigo: up to you, but that would at least sound simpler to me than trying to parse the html that is generated from those files10:33
zigoSounds good, thanks.10:34
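A minimal sketch of what frickler suggests: clone https://opendev.org/openstack/releases and walk the deliverables YAML for the series instead of scraping the rendered HTML. This assumes PyYAML is installed and that each deliverable file carries a top-level 'releases' list with 'version' and 'projects' entries (worth spot-checking a few files before relying on it):

    import glob
    import os
    import sys

    import yaml  # PyYAML

    # arguments: path to a local clone of openstack/releases, then the series name
    releases_repo = sys.argv[1] if len(sys.argv) > 1 else "releases"
    series = sys.argv[2] if len(sys.argv) > 2 else "flamingo"

    pattern = os.path.join(releases_repo, "deliverables", series, "*.yaml")
    for path in sorted(glob.glob(pattern)):
        with open(path) as fh:
            data = yaml.safe_load(fh)
        releases = data.get("releases") or []
        if not releases:
            continue  # deliverable has nothing released yet this cycle
        latest = releases[-1]  # entries are appended chronologically, newest last
        deliverable = os.path.splitext(os.path.basename(path))[0]
        repos = ", ".join(p.get("repo", "?") for p in latest.get("projects", []))
        print(f"{deliverable} {latest.get('version')} ({repos})")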
*** gthiemon1e is now known as gthiemonge11:11
vsevolod_Hello. I have a job, tox-py37, which started to fail with retry/retry_limit sometime between 9 Aug and 10 Sep.11:23
vsevolod_https://zuul.opendev.org/t/openstack/builds?job_name=tox-py37&project=jjb/jenkins-job-builder11:23
vsevolod_No logs are shown for it11:23
vsevolod_How can I check what is wrong? Has something changed in the infrastructure in this period?11:23
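The build history behind that dashboard URL is also available from Zuul's REST API, which can make spotting when the failures started easier than clicking through the web UI. A hedged sketch (the endpoint and the 'result'/'log_url' field names match what the API returns today, but verify against the live output):

    import requests

    # REST endpoint backing https://zuul.opendev.org/t/openstack/builds
    api = "https://zuul.opendev.org/api/tenant/openstack/builds"
    params = {"job_name": "tox-py37", "project": "jjb/jenkins-job-builder", "limit": 20}

    for build in requests.get(api, params=params, timeout=30).json():
        # RETRY_LIMIT builds typically carry no log_url, matching the empty log links in the UI
        print(build.get("end_time"), build.get("result"), build.get("log_url") or "(no logs)")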
fungithe mirror.ubuntu-ports volume move took just shy of 30 hours, i've got mirror.ubuntu moving now and that should hopefully be the last of them, so i ought to be able to upgrade afs02.dfw to noble tomorrow or wednesday finally12:43
fungizigo: the openstack/releases repo also has tooling for parsing and filtering that data, so it may be just a few lines of code to make it output the exact information you're looking for12:48
fungido you want *all* the artifacts for the release, or only the most recent version of each of them in the release?12:49
zigofungi: I just wrote, with the help of AI, a tool that does what I needed! :)12:50
fungineat12:50
zigoIt's the 2nd time I'm writing something with AI, it's very fast to do quick and dirty code. :P12:50
zigoIn this case, I don't really care about code quality, so that's ok.12:51
fungiwell, if it ends up being a frequent task, we could probably add some feature to the existing openstack release tools, though that's more a conversation for the #openstack-release channel12:52
fungivsevolod_: the ansible default in the zuul tenant where that's running increased to a version that won't run on ubuntu 18.04 lts (bionic), the job either needs to be moved to a newer platform (i think ubuntu-focal also has python 3.7 packages?) or pin it back to `ansible-version: 9`13:04
fungihttps://lists.opendev.org/archives/list/service-announce@lists.opendev.org/thread/EUTEAOIUQZ6YJUQWD75QO34H5TDGBLKP/13:04
fungithe lack of logs is a quirk of how ansible dropped support for older platforms, its execution never gets far enough to be able to generate a log, and since it's insta-failing in the pre-run phase zuul retries it thinking it might be an intermittent setup problem or broken test node, hence the retry_limit results13:08
opendevreviewJeremy Stanley proposed opendev/bindep master: Switch from license classifier to SPDX expression  https://review.opendev.org/c/opendev/bindep/+/94541613:21
fungivsevolod_: digging deeper, ubuntu-focal doesn't have python3.7 packages (though does have 3.8), so setting `ansible-version: 9` on the tox-py37 job or dropping that job are your only options for now i think, and zuul will probably cease supporting ansible<11 soon so even that won't work for long13:51
clarkbyes, we intend on removing the bionic test platform as part of that process too14:45
clarkbmy internet connection is experiencing packet loss somewhere beyond my personal home network (looks like between my isp and their next hop in california maybe). 14:48
clarkbeven my mobile provider seems to be struggling this morning so maybe it is more widespread15:09
clarkband now things seem improved. Maybe I just needed to wait for someone to roll into work and reboot a router15:19
clarkbif things stay reliable then I'd like to consider rolling out gitea to the latest bugfix release: https://review.opendev.org/c/opendev/system-config/+/96067515:19
clarkbfungi: other than the afs moves being slow (expected) anything else I should catch up on this morning?15:22
fungii don't think so15:30
fungiand yeah, gitea upgrades next would be good15:30
fungihad some lengthy power outages this morning (high winds today), but seems stable enough now15:30
fungii'm still busy getting noncritical stuff powered back up and clocks reset but about done15:31
clarkbmy internets seem consistently happy at this point16:17
stephenfinfungi: I brought this up before (and clarkb explained why it was happening) but when you reply to a review with gertty, it doesn't use the (new'ish) comment system so there's no REPLY / QUOTE / ACK buttons. Probably one to fix at some point, assuming you ever contribute to that16:17
clarkbfungi: I think I'm happy for you to approve the gitea change (if you're happy with it) whenever you think your local network is up and happy16:17
* stephenfin has never used gertty16:17
stephenfinjfyi16:17
stephenfin(spotted on https://review.opendev.org/c/openstack/requirements/+/960843)16:18
clarkbstephenfin: you can manually reply/quote using email style quotes with > fwiw16:18
stephenfinthat's what I've done16:18
clarkbbut yes, some of fungi's comments also don't show up properly in the web ui comment thread views if you're looking for what changed since your last review, so it's not just on the reporting side of things16:18
clarkbI think corvus does have a patch for this but it isn't reliable so hasn't been merged yet16:19
fungiyeah, i'm using that patch, but i think it only does that for inline comments at the moment16:20
funginot for patchset level comments16:20
fungisorry about that16:20
opendevreviewIvan Anfimov proposed openstack/project-config master: Add translation-jobs to masakari-dashboard  https://review.opendev.org/c/openstack/project-config/+/96126116:22
opendevreviewIvan Anfimov proposed openstack/project-config master: Add translation-jobs to masakari-dashboard  https://review.opendev.org/c/openstack/project-config/+/96126116:23
clarkbthere is a fake file that the new style system leaves comments against for patchset level comments16:24
clarkbI wonder if it would be difficult to trick gertty into leaving comments on that "file"16:24
fungii'm not positive it does file-level comments in the new style yet either, may just be line-level so far 16:25
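For anyone wanting to experiment with this outside gertty, Gerrit's REST API files patchset-level comments under the magic path "/PATCHSET_LEVEL" on the set-review endpoint. A hedged sketch; the credentials and change number are placeholders:

    import requests

    gerrit = "https://review.opendev.org"
    auth = ("myuser", "my-http-password")  # hypothetical account and Gerrit HTTP password
    change = 961311  # example change number

    review = {
        "message": "Cover message shown with the review",
        "comments": {
            # "/PATCHSET_LEVEL" is the placeholder path the new comment system uses
            "/PATCHSET_LEVEL": [{"message": "A patchset-level comment in the new comment thread"}],
        },
    }
    resp = requests.post(
        f"{gerrit}/a/changes/{change}/revisions/current/review",
        json=review,
        auth=auth,
        timeout=30,
    )
    resp.raise_for_status()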
vsevolod_fungi: Back to tox-py37. I have tried all combinations of `nodeset: ubuntu-focal` and `ansible-version: 9`16:28
vsevolod_Results are the same: `No package matching 'python3.7-dev' is available`16:28
vsevolod_https://zuul.opendev.org/t/openstack/build/c3f4189b34d9431cb89f2d3e042aa85916:28
clarkbvsevolod_: correct focal does not have bionic16:29
clarkber sorry16:29
clarkbfocal does not have python 3.7 but bionic does16:29
clarkbbut that old version of python on bionic only works if you use ansible version 916:29
clarkbvsevolod_: is there a reason to keep python3.7 testing alive at this point or can it be removed?16:30
zero__Not sure if there is anyone using it.16:33
zero__It is the Jenkins Job Builder project.16:33
clarkbI have been removing python3.6 and 3.7 testing from a number of projects since the move, since it doesn't make sense in a lot of locations at this point due to age16:33
clarkbI think it is fair to tell people they need to run jjb with a newer version of python. Worst case they can run it in a container if they are on an older platform16:34
zero__I see. I will remove it then. Thank you.16:34
fungior run an older version of jjb that was tested with 3.716:38
clarkbfungi: I guess the other consideration for https://review.opendev.org/c/opendev/system-config/+/960675 is that we are deep into the openstack release process16:40
clarkbbut if you're good with it so am I16:41
fungiit's actually a bit of a lull, most stuff is frozen, projects are figuring out what might make it into an rc216:41
fungizuul isn't idle, but it's not particularly busy either16:42
fungiand this is a minor bugfix release for gitea16:44
fungii don't see it as all that different from our weekly zuul upgrades, which we haven't paused either16:46
fungibetween improvements to our testing and redundancy over the years, what "staying slushy" during releases means has changed for us, in my opinion16:47
clarkbwfm16:48
clarkbdo you want to approve it or should I?16:48
fungii can16:49
clarkbfungi: oh I meant to ask you if doing the vos releases after each rw move did end up addressing the disk space issues17:14
clarkband then I guess we don't need to stop the vos release ssh tooling because it uses afs01 and afs01 is all done, it's just afs02 left17:14
fungii think there may be somewhat of a delayed reaction for freeing space after a vos release, because the reason the mirror.ubuntu move had to be restarted is that it initially complained about insufficient space17:17
fungibut it was the only one this time17:17
clarkbcould be that the backend cleanups run periodically and not instantly I guess17:17
fungiyeah, seems like something along those lines17:18
clarkbthen if you want to weigh in on timing for https://review.opendev.org/c/opendev/system-config/+/957555 and https://review.opendev.org/c/opendev/system-config/+/958597 I think that would be helpful. I want to land the change to update the versions first, then the quay.io source change second, and then restart gerrit on the result after both are done17:21
clarkbthe reason for that is I don't want to build what is effectively a new 3.10.8 with the wrong plugin versions if we do the quay move first17:22
clarkbbetter to get the gerrit details right, then the docker stuff.17:22
clarkbBut with the openstack release I'm not sure what we're thinking about as far as timing goes17:22
fungiyeah, the bug fixes there look like stuff that probably doesn't affect any of our projects, except maybe the mobile ui one which is cosmetic17:28
fungiwe're still 2 weeks away from openstack release week, shouldn't be too hard to find a quiet moment for a gerrit restart or two, and this update also doesn't seem like it has any real risk of serious regressions17:30
fungi(openstack 2025.2/flamingo release is two weeks from wednesday)17:30
clarkbI think we only need one restart, we just need to land those changes in order in quick succession. Or I can squash them together if we're worried17:31
funginah, just approving them close together and then restarting after they're done is fine by me17:32
fungimaybe even later today if nothing goes sideways with the gitea upgrades17:32
fungiooh, i totally missed that https://review.opendev.org/c/zuul/zuul-jobs/+/958605 existed and merged, thanks clarkb!17:41
fungii just started trying to write that change myself and noticed it was not defaulting to what i remembered17:42
clarkbfungi: yup I announced it two weeks prior to merging that and then merged it on the announced date17:42
fungiawesome. i can focus on the cleanup in that case17:42
fungionce afs server upgrades are done17:42
clarkbso bullseye backports and stretch can be cleared out of that volume17:42
clarkbI'm still curious to know if we can run the cleanup steps in a noop fashion safely 99% of the time. I'd be on board with doing that if so17:43
fungiyeah, and openeuler can also go completely, and soon we can drop bionic from ubuntu and ubuntu-ports17:43
clarkbI think bionic may already be out of ubuntu-ports fwiw but yes17:43
fungioh, right you are, i did clean that one up in order to free space back at the start of the upgrades17:44
opendevreviewMerged opendev/system-config master: Update gitea to 1.24.6  https://review.opendev.org/c/opendev/system-config/+/96067518:36
clarkbhttps://gitea09.opendev.org:3081/opendev/system-config/ is updated18:40
clarkbgit clone works and browsing the web ui seems happy18:41
fungiyeah, working for me18:42
clarkball six backends look updated to me18:54
clarkband the job succeeded https://zuul.opendev.org/t/openstack/buildset/ff1a7330123441adbd67fc3d4d83662018:54
clarkbI'm going to mark this done on my todo list and start looking for lunch18:54
fungicool, when you get back we can decide whether to do the gerrit changes and restart today19:04
clarkbfungi: I do plan to be around for the rest of the day if we want to try and land those changes today. I guess worst case we can do the restart tomorrow if we get images updated today?19:34
clarkbI always feel bad with things like that later in the day as i know it is 3 hours later for you relative to me19:34
clarkbwe could also aim for tomorrow or wednesday if that fits into schedules better19:34
fungituesdays are tough for me with all the meetings, would prefer to just knock it out while we can19:35
fungii don't have anywhere to be tonight19:36
clarkbya I have meetings all day tomorrow too19:36
fungithough i'll probably take off thursday after my lunchtime meeting since it's christine's birthday19:36
clarkbok I think we want to approve https://review.opendev.org/c/opendev/system-config/+/957555 then https://review.opendev.org/c/opendev/system-config/+/95859719:36
fungishould i wait for the first one to merge before approving the second, since there's no change relationship between them?19:37
clarkbthey aren't stacked and don't have a depends on but I can keep an eye on them in the gate so that if the first one fails I can -W the second one to preserve the order. Then figure out strict order updates (I don't want to do that now because then we'll have to check at least one again)19:37
fungiotherwise the first might fail out of the gate19:37
fungioh, that also works19:37
clarkbya but if it does it should reset the second one giving plenty of time to -W it19:37
fungiokay, i'm approving them both in sequence now, it'll take them a while in the gate anyway19:38
clarkbsounds good thanks19:38
clarkbit's been a while since we did a gerrit restart. I think the process is stop the containers, move the replication queue stuff aside, delete h2 cache backing files, start gerrit19:41
clarkbthen once up and looking happy trigger full reindexing for changes19:41
fungithat matches my memory19:41
clarkboh and we need to pull the new image first19:44
fungiright19:50
clarkboh I meant to check gitea replication earlier. https://opendev.org/openstack/swift/commit/1a97c54766e9254ba4f1bf2e64e5a2d53e102ec6 was replicated from https://review.opendev.org/c/openstack/swift/+/961285 so I think it is working20:13
clarkbI did an initial edit to the meeting agenda, but will need to update things once the gerrit stuff is done20:26
fungilooking at the system-config-run-review-3.10 results from the second change, it did use a 3.10.8 image20:35
opendevreviewMerged zuul/zuul-jobs master: Make buildx builder image configurable  https://review.opendev.org/c/zuul/zuul-jobs/+/96084020:36
fungiso the dependent pipeline is working correctly there for image builds20:36
clarkbfungi: it should've rebuilt it entirely too fwiw20:37
fungiright20:37
fungii've got a root screen session going on review20:39
fungii let #openstack-release know too20:43
fungistatus notice The Gerrit service on review.opendev.org will be offline briefly for a quick patch update, but will return within a few minutes20:43
fungican send that ^ when the time comes20:44
opendevreviewMerged opendev/system-config master: Update Gerrit images to 3.10.8 and 3.11.5  https://review.opendev.org/c/opendev/system-config/+/95755520:44
clarkbfungi: `ls -lh /home/gerrit2/review_site/cache/` on review shows one of these is not like the others20:51
clarkbactually two of them. Those are the two caches and their lock files we want to delete when gerrit is shut down20:51
clarkbgerrit_file_diff.h2.db and git_file_diff.h2.db20:51
clarkbI'm worried its going to hit the shutdown timeout due to that...20:52
clarkbbut not sure what we can do at this point other than find out. removing the h2 resizing timeout increase did make things go faster last time but not instant so I wonder if it is based on how big those files are20:52
clarkb`quay.io/opendevorg/gerrit       3.10      2e7da5290a2d` appears to be the image we're currently running on20:54
opendevreviewMerged opendev/system-config master: Build gerrit image with python base from quay.io  https://review.opendev.org/c/opendev/system-config/+/95859720:59
clarkbthat is "deploying" now21:01
fungiyeah21:01
clarkbthe deployment reports success. I guess we can pull now if we're ready to do this21:05
fungistarting, i have the command queued in screen already21:05
fungiand done21:06
fungiquay.io/opendevorg/gerrit       3.10      16e3e6710a12   55 minutes ago   694MB21:06
clarkbI did a docker inspect on the new image and it seems to match the latest version on quay21:06
fungishall i send the notice?21:07
clarkb58cc86858076f8f5992aa4128a2b15078f557f51de75fe442fa93f033a564c94 from https://quay.io/repository/opendevorg/gerrit/manifest/sha256:58cc86858076f8f5992aa4128a2b15078f557f51de75fe442fa93f033a564c9421:07
clarkbyes I think we can send the notice then work out a command to do the restart while that is happening?21:07
fungi#status notice The Gerrit service on review.opendev.org will be offline briefly for a quick patch update, but will return within a few minutes21:07
opendevstatusfungi: sending notice21:07
-opendevstatus- NOTICE: The Gerrit service on review.opendev.org will be offline briefly for a quick patch update, but will return within a few minutes21:07
clarkblooks like the problematic cache files haven't changed names21:08
fungii've queued up an edited version of what we ran last time, but you noted some different files21:08
fungiokay, so it's the same names as last time21:08
clarkbfungi: ya sorry it's the same files, I just meant those two are so outrageously different in size from the other h2 cache files that they stand out21:08
clarkbnot that they were different from last time21:08
fungioh, right you are, they are rather huge21:09
clarkbabsurdly so imo21:09
clarkbI'm really surprised no one else in the gerrit community is complaining21:09
opendevstatusfungi: finished sending notice21:10
fungiokay, command look right? should i wait for jobs in opendev-prod-hourly to finish?21:11
clarkbI think the command looks right. The two caches match the absurdly large ones21:11
clarkband hourlies are just about to finish so may as well wait another few seconds?21:11
clarkbin the time it took me to type that they completed21:12
fungiheh21:12
clarkbI guess we are as ready as we can be21:12
fungithe mistral changes in the gate pipeline that say they have 2 minutes remaining?21:12
clarkbyes then they will run post jobs potentially too21:12
clarkbso could wait for those too I suppose21:12
fungi961162,1 and 961163,121:13
fungiodds are they would try to merge while gerrit was down21:13
fungione of them just did, the other has one job uploading logs now21:15
fungithey're just publishing branch tarballs and mirroring to github now, zuul should be able to do that with gerrit down yeah?21:16
clarkbas long as the merges have completed21:16
fungithey have, i guess time to run the restart then?21:17
clarkbwhich they won't have started for the second one yet21:17
clarkbsince the queue is supercedent21:17
fungioh, merges in the zuul merger sense21:17
clarkbyes21:17
fungiokay, the second one has started running jobs21:17
fungianything else or initiate the restart now?21:18
clarkbI think zuul would've completed all operations against gerrit for that ref now (in order to enqueue the jobs it has to do that work) but it may do the work again on the executor when an executor is assigned? I can't remember how that works21:18
clarkblooks like it started anyway so I think we're good21:19
fungiit's underway21:19
clarkbthe two large h2 files shrunk ever so slightly which is in line with our removal of the longer timeout there21:20
clarkbstrace then cross checking against fds in /proc show it doing stuff with /var/gerrit/cache/git_file_diff.h2.db so ya I'm worried this is going to timeout21:22
clarkbso frustrating21:22
fungimaybe we need to go back to having weekly gerrit restarts21:22
clarkbfungi: if we hit the timeout it should exit non zero so you'll need to edit your command to followup and do the other steps once gerrit actually stops21:24
fungiright, i figured21:24
fungiError response from daemon: given PID did not die within timeout21:24
clarkbfungi: hold on21:24
clarkbfungi: you should actually rerun the entire command, mariadb was not stopped21:25
fungirerunning21:25
fungistarting back up now21:25
clarkb[2025-09-15T21:25:43.989Z] [main] INFO  com.google.gerrit.pgm.Daemon : Gerrit Code Review 3.10.8-14-g12f33e503d-dirty ready21:26
clarkbstill waiting for diffs to load but the ui is "up"21:27
fungiyeah, it's rendering for me21:27
corvuslol i pushed a big stack that very second :)21:27
fungid'oh!21:27
clarkbyup I see diffs now21:27
corvusthey seem fine! :)21:27
fungithe matrix version of our statusbot can't come soon enough21:27
clarkbI'm really glad that the gerrit folks will be at the summit. I'm really hoping that sitting down with them and explaining that not being able to gracefully stop their software is a major flaw will help21:28
clarkbparticularly since 1) this is just for caches not even data we care about and 2) gerrit is full of data we do care about21:28
clarkbfungi: there are a few of the h2 pruner tasks running (that don't actually reduce the backing file size...) once those are done we can probably think about triggering a reindex?21:29
fungiyeah21:30
clarkbcorvus: and you might want to check your changes to make sure you didn't end up generating new changes for them. I think this is the specific race where people push updates that should land on existing changes while gerrit is stopping, but if things haven't updated in the index yet then boom21:30
clarkbI spot checked one and it looks ok so didn't hit the race21:31
fungi`gerrit index start changes --force` looks like what i've done before21:31
clarkbthat sounds right. the --force is needed because we're already on the latest version of the index so by default it won't reindex21:32
clarkbfungi: queues look empty now21:32
funginot any more! ;)21:32
clarkbhttps://opendev.org/zuul/zuul/commit/a7baeca65d92dad883b59ab05480c1b0c84bbfe5 seems to have replicated from https://review.opendev.org/c/zuul/zuul/+/960695 which is also the change I spot checked for misalignment on changeid lookups21:33
clarkbso I think replication is working21:33
clarkbfungi: confirmed the queue is now full of work to do :)21:33
fungii'll let #openstack-release know we're basically done21:34
clarkbsounds good. I already detached from the screen and will let you decide when it can be shut down21:34
corvusclarkb: yeah, lgtm so far.  i think i pushed right after the process started21:34
fungii terminated the screen session on review now21:35
clarkbcorvus: ack21:36
clarkbfungi: I do notice a few replication errors like those that we move the waiting queue out of the way to avoid. At first I worried that we didn't move things, but on closer inspection I think someone is simply doing edits via the web ui which generates those errors21:37
fungioh good21:37
clarkbfungi: so it is just coincidence that it happened around when we restarted, and using the web ui to edit things is one of the cases where the replication system fails and why we do the mv in the first place21:37
fungii mean, not good that using the webui causes errors, but whatev21:37
clarkbhttps://review.opendev.org/c/openstack/kolla-ansible/+/961313 is one of the changes in question which did get an edit published. Interestingly that seems to affect replication of the change ref itself (it didn't replicate)21:38
clarkbI don't think I ever tracked down if that was the case or not21:38
clarkbbut now we know? it shouldn't impact actual refs just the magical gerrit ones so probably not the end of the world. If we want we can request kolla-ansible get force replicated and see if that ref shows up21:39
clarkbhttps://review.opendev.org/c/openstack/freezer-web-ui/+/961311 is the other and the comment thread there seems to concur21:41
clarkbactually something as trivial as leaving a comment on the change may get it to replicate too. I think that is why 961311 is replicated (it was marked ready for review at the end)21:43
clarkbfungi: 961311 raises an error in the error_log if you open it because it can't find ps3 in the index. I think that is because its waiting for reindexing to complete before it can reindex that update21:44
clarkbso many fun and exciting behaviors to sort through here. I really don't like that gerrit restarts were once simple, easy, and reliable and now are full of gotchas and confusing behaviors21:44
clarkbreindexing is just about halfway done21:50
clarkbonce reindexing is done I'll check that 961311 seems happier when loaded, then I'll work on getting the meeting agenda updated and sent21:52
clarkbcorvus: I've just tried to delete autoholds 0000000230 and 0000000231 in the openstack tenant in order to recycle my autoholds due to the new images and the commands don't complain but nothing seems deleted22:01
clarkbI went ahead and put in two new autoholds and rechecked the change22:02
clarkbreindexing is complete. It complained about 3 changes which I believe is consistent with preexisting failures22:05
clarkband as expected (hoped really) opening https://review.opendev.org/c/openstack/freezer-web-ui/+/961311 no longer emits a traceback in error_log now that reindexing is complete22:06
fungioh good22:06
clarkbok my meeting agenda edits are in. Let me know if there are any other items to change/add/remove and I'll get that done before sending the email out. Otherwise expect the email to go out around 23:00 UTC22:09
clarkbfungi: zuul +1'd https://review.opendev.org/c/openstack/kolla-ansible/+/961313 but it isn't replicated. Do you think we should try a force replication for kolla-ansible to see if that fixes it?22:10
clarkbI suspect that this problem exists for any changes that are created via the gerrit web ui so not a new regression, but we may wish to do this in order to understand the issue better22:11
clarkbin particular that change was created via cherry pick from one branch to another then edited via the web ui. It is the web ui events in particular that I think break replication due to the events being filtered out at least partially22:12
clarkbhrm https://review.opendev.org/c/openstack/kolla-ansible/+/961312 was an edit done in a similar way to a different branch in the same repo and it looks ok replication wise22:14
clarkbso ya not sure why that particular one didn't replicate now22:14
clarkbthe replication log shows status=NOT_ATTEMPTED for all replication related to 961313, which also seems to be the case for 961312. Maybe something else updated that caused a side effect of replicating 961312 but that same side effect source didn't occur for 961313?22:17
clarkbjust to rule out a lucky backend load balancing situation for myself I checked all six backends and the one that is present is present on all six backends and the one that is missing is missing on all six backends which is good I guess22:21
clarkbhere is a theory: [2025-09-15 21:32:51,583] is the timestamp for the first entry for the missing item. That coincides roughly with the start of reindexing. What if that is a race condition and it couldn't replicate due to index state?22:22
corvusclarkb: i think that's due to a one-time zk cleanup for leaked lock nodes, i can fix22:22
clarkbcorvus: thanks22:22
clarkbre the replication situation, as far as I can tell replication generally works and I suspect that the problem is specific to this change due to a confluence of events (how it was created and when it was created). I'm not sure it is worth debugging much further but if we want I think we can trigger a full replication against openstack/kolla-ansible22:23
corvusdone.  i also deleted the old nodepool node records.22:29
clarkbend of an era22:30
corvusi'm going to delete some old autoholds from tonyb which aren't effective anymore22:31
corvusokay, i think that's all the surgery needed; what remains looks modern.  if we see any more issues with holds in the openstack tenant, lmk, those might be real new bugs.22:33
clarkbwill do thanks again!22:33
