Wednesday, 2025-12-10

clarkbcool screenshots for hound indicate it is working. I was worried that updating go and nodejs would break something in the codebase but it seems fine00:01
*** efoley_ is now known as efoley13:25
opendevreviewEnzo Candotti proposed openstack/project-config master: Update app-seaweedfs config  https://review.opendev.org/c/openstack/project-config/+/97040815:37
fungiinfra-root: for ^ i think we can do it as part of friday's project renames if I just add the group rename to https://review.opendev.org/970307 right?15:42
opendevreviewEnzo Candotti proposed openstack/project-config master: Update app-seaweedfs config  https://review.opendev.org/c/openstack/project-config/+/97040815:45
funginever mind, it looks like we already ended up with two groups because both the correct and incorrect spellings were included in the original acl15:45
clarkbfungi: I think there is a group deletion api now? So theoretically we could fix the acl then delete the unused group?15:52
clarkboh its a plugin15:52
clarkbwhich I didn't think we install. Meh, it's not a big deal15:52
fungiyeah, we have a ton of orphaned groups we've accumulated over the years, would probably be a good idea to clean them all up at some point15:52
fungiif we were going to go to that effort though, i'd want to script something to generate a list of all unused groups and then bulk delete them15:53
clarkbI think All-Users.git controls it, so the non-plugin api now is to push a commit that removes groups. Due to conflicts in the notedb user db that has to be done while gerrit is offline, so we can either first address those conflicts (a long standing todo) or do it offline and reindex groups before starting gerrit again15:57
opendevreviewMerged openstack/project-config master: Update app-seaweedfs config  https://review.opendev.org/c/openstack/project-config/+/97040816:00
clarkbinfra-root https://review.opendev.org/c/opendev/system-config/+/970322 https://review.opendev.org/c/opendev/system-config/+/970325 and https://review.opendev.org/c/opendev/system-config/+/970321 are three more trixie image updates. I'd like to get those in today as further sanity checking of trixie before we commit to it with Gerrit. I'm happy to approve these changes or have you16:04
clarkbapprove them as you review. I should be around all day and can monitor16:04
clarkblet me know if there are any concerns. The hound one (listed first) is probably the most interesting16:04
fungiclarkb: looks like ems replied "email addresses must be unique across users"16:05
fungiso at least we have the answer16:06
fungisort of odd that they've designed it with a one-account-per-person expectation, requiring every account to have a unique e-mail address16:07
fungii don't understand the reasoning behind these new account restrictions, especially for accounts on a dedicated homeserver16:07
clarkbI suspect it has to do with being able to track abuse to verified email addresses.16:09
clarkbWhich is maybe ok for humans but yes if I'm manually adding an account to my server for use by a bot I don't think that is important16:09
fungiyeah, but i don't see how having more than one account associated with an address breaks that ability16:10
fungiwith every new change it seems like they're further dumbing down matrix to make it more like proprietary/commercial chat systems16:12
clarkbgiven that answer I guess I'll do something like infra-root+matrixstatus@ for the email address?16:13
fungiwfm16:13
fungiheading out for lunch errands, back in a while16:21
opendevreviewHediberto Cavalcante da Silva proposed openstack/project-config master: Add LVM CSI App to StarlingX  https://review.opendev.org/c/openstack/project-config/+/97043917:03
mordredclarkb: all three image changes lgtm17:19
clarkbmordred: thanks!17:22
mordredclarkb: out of curiosity - I was looking at the gerrit stack. we have ensure-nodejs on there, which is ensuring 18, which is EOL. If we're building gerrit with java 21, perhaps node should get updated?17:23
mordredI can't find any information about explicit versions supported17:24
clarkbmordred: I looked that up but the polygerrit-ui content in the gerrit stable-3.11 branch says you need nodejs 18. It is possible that newer nodejs would work too and the upstream docs are wrong17:26
clarkbmordred: it may be worth a change on the end of my stack to bump that up and see if it affects 3.11 or 3.12 image builds negatively17:26
mordred++ I'll push one up for the fun of it, see what happens17:27
clarkbhttps://gerrit.googlesource.com/gerrit/+/refs/heads/stable-3.11/polygerrit-ui/#installing-node_js-and-npm-packages is the document I am referring to17:27
opendevreviewMonty Taylor proposed opendev/system-config master: Update node to 24 for Gerrit builds  https://review.opendev.org/c/opendev/system-config/+/97045117:33
clarkbcool looks like that triggered image rebuild jobs too17:37
fungiwill be interesting to see the result, yes17:37
mordredI also asked over in gerrit discord just to be complete17:42
clarkbfungi: if the hound, matrix-eavesdrop, and accessbot updates look good to you I'm happy to ensure they deploy as expected17:47
fungiyeah, they're on my list to go through in a few minutes17:47
clarkbthanks17:48
clarkbmordred: I think your jobs are failing because the parent change image builds are not available in the insecure ci registry anymore. They should be; we should only delete images after 30 days iirc, but I seem to recall hitting this before and not having time to debug it then18:03
clarkbI suspect that if we recheck the whole stack it will work because we'll have new images across the board18:03
clarkbcorvus: ^ fyi. I'm not aware of any zuul registry changes that would impact this other than potentially the work we did to make swift expirations work at all (but maybe it is buggy?)18:04
mordredoh - nod. fun. well, I imagine we'll naturally recheck that stack at some point before actually moving forward18:04
corvusclarkb: yeah, i don't expect any errors, so this is a good opportunity to debug, but unfortunately i don't have that time right now.18:04
clarkbone thought I had was that maybe we're deleting layers that are shared between new images and old images because the cleanup routine sees them as associated with the old stuff. However, I want to say we should account for that via some sort of reference counting iirc18:05
corvusyep, if that's not working, it's not because we forgot to think of that :)18:06
corvus    expiration: 15552000  # 180 days18:07
clarkbok more than 30 days even18:07
clarkb/var/log/registry-prune/prune.log should have pruning logs on insecure-ci-registry. So I guess step 0 is looking there to try and confirm the objects returning 404 are listed as pruned. I can look at that18:09
corvusi'm pretty sure the registry is dealing with seconds as a time unit and we're converting the swift info into seconds.  like, i don't think anything is using milliseconds.18:12
clarkb2025-12-10 00:00:08,209 DEBUG registry.storage: Keep _local/repos/quay.io/opendevorg/gerrit-base/manifests/6e220809a5224a74a19ffdb3f60e0303_latest18:13
clarkbhttps://zuul.opendev.org/t/openstack/build/8a3e07f57b0044bab6def17aced4a1a5/console#2/1/13/localhost which is one of the two things that failed here18:14
clarkbso maybe it isn't pruning that is the problem?18:15
clarkbI'm trying to find where the registry itself is logging the 404 now as maybe that has a clue18:16
clarkbthis service doesn't have its logs going to syslog then to the container specific log files so grepping out of docker logs is slow18:18
clarkbsomething we could fix up I guess18:18
corvusclarkb: you can use "-n 10000" to reduce the set18:19
mordredclarkb: gerrit-base should have been built in that changeset18:20
mordredso I'm not sure this is actually about pulling any images built by previous incarnations of these jobs18:21
clarkbmordred: yes but it's trying to get the gerrit-base from the parent change I think18:21
clarkbat 2025-12-08 21:10:31,523 I see registry logs check if the manifest exists and it doesn't, so then it proceeds with an upload to that manifest and things start returning 200 at that point18:22
clarkbbut then at 2025-12-10 17:59:45,135 we get our first 40418:22
mordredbut why? the gerrit-base image job shouldn't be referencing the gerrit-base images from the previous jobs. and the gerrit-3.11 job should be using the gerrit-base image that should have been uploaded by the gerrit-base job18:23
clarkbthe gets to swift return 200, then we return 404 for some reason (is the data getting corrupted? I probably need to expand my log grep context range)18:23
clarkbmordred: I think it's the default when you use the intermediate registry hooks. We don't know if each image is going to be used, but we grab them from the artifacts lists of any parents just in case18:23
clarkbmordred: I think you are correct that in this case we don't actually use that image at all18:24
mordredoh - got it. the job itself doesn't actually need them, but the machinery has no way of knowing that18:24
mordredthat makes much more sense18:24
corvushttps://etherpad.opendev.org/p/Vcwu0q2fSfRDo_bfevkW18:25
corvussome logs18:25
clarkbcorvus: oh is it some blob under the manifest?18:27
corvuslooks like it, and i think we pruned it18:28
corvus /var/log/registry-prune/prune.log.1:2025-12-09 01:19:13,498 DEBUG urllib3.connectionpool: https://storage101.dfw1.clouddrive.com:443 "DELETE /v1/MossoCloudFS_622b11a1-5dfa-43b4-9f58-4ad3c6dbc4a0/intermediate_registry/_local/blobs/sha256:2dca1d89c1ce32840623231f8569b38d9a55a7c15503bd7f2fcc3a6887af23bc/data HTTP/11" 204 018:28
clarkb2025-12-09 01:19:13,419 DEBUG registry.storage: Prune _local/blobs/sha256:2dca1d89c1ce32840623231f8569b38d9a55a7c15503bd7f2fcc3a6887af23bc/data18:29
clarkb2025-12-09 01:19:13,499 DEBUG registry.storage: Keep _local/blobs/sha256:2dca1d89c1ce32840623231f8569b38d9a55a7c15503bd7f2fcc3a6887af23bc/18:29
clarkbit's interesting that we decided to prune the /data but keep /. I wonder if that is a clue to where the bug may be18:29
corvusonly objects have a timestamp, directories always have the current time and so are never deleted18:30
corvusthey aren't real, so once the objects are gone, so are the dirs18:30
clarkbah18:30
corvusthe logic is to delete any blob older than our target expiration time as long as that blob isn't mentioned as one of the layers for the manifests we decided to keep (which we do the same way: manifests older than our target expiration time).18:33
corvusthat suggests either the times are off, or the most likely i think is that we did not correctly identify all of the necessary layers for that manifest18:34
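(A minimal sketch of the pruning rule corvus describes above, assuming hypothetical storage helpers; this is not the actual zuul-registry code:)

    import time

    EXPIRATION = 180 * 24 * 3600  # seconds, matching the config pasted above

    def prune(storage, now=None):
        # Hypothetical helpers: list_manifests(), layer_digests(), list_blobs()
        # and delete() stand in for whatever the real storage driver exposes.
        now = now or time.time()
        cutoff = now - EXPIRATION

        kept_digests = set()
        for manifest in storage.list_manifests():
            if manifest.mtime >= cutoff:
                # recent manifest: keep it and protect everything it references
                kept_digests.update(storage.layer_digests(manifest))
            else:
                storage.delete(manifest)

        for blob in storage.list_blobs():
            if blob.mtime < cutoff and blob.digest not in kept_digests:
                storage.delete(blob)

(The failure mode being debugged here is the first branch: if the layer extraction can't parse a manifest it decided to keep, none of its blobs land in the keep set and they get deleted anyway.)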
corvus2025-12-09 00:12:32,297 DEBUG registry.storage: Unknown manifest _local/repos/quay.io/opendevorg/gerrit-base/manifests/6e220809a5224a74a19ffdb3f60e0303_latest18:35
corvusi think maybe the registry doesn't know how to read that manifest format18:35
corvusso if our build process is now producing a manifest format other than 'application/vnd.docker.distribution.manifest.v2+json' we are probably effectively just pruning everything every 24 hours18:36
clarkboh, this might be related to quay.io also having the two arch entries. One for x86-64 and one for unknown18:37
mordredapplication/vnd.oci.image.manifest.v1+json18:37
mordredthat's what current docker purportedly produces18:37
clarkbthe big change we did semi recently is using buildx for everything18:37
clarkbto support speculative builds in docker18:38
corvushttps://opendev.org/zuul/zuul-registry/src/branch/master/zuul_registry/storage.py#L29618:38
corvusthen i think the task would be to add parsing of application/vnd.oci.image.manifest.v1+json to that method18:38
mordredcorvus: yeah. I agree18:38
clarkbok so maybe related to buildx change or maybe just related to docker updates18:39
clarkbeither way there is an additional format we need to be able to understand18:39
mordredYeah - correct thing is update zuul-registry18:39
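(Roughly the shape of the fix being discussed, with hypothetical names rather than the real storage.py internals: the prune path looks up the stored per-tag record that maps media type to the blob holding that manifest variant, and currently only recognizes the docker v2 type:)

    DOCKER_MANIFEST = 'application/vnd.docker.distribution.manifest.v2+json'
    OCI_MANIFEST = 'application/vnd.oci.image.manifest.v1+json'

    def manifest_blob_digest(stored_record):
        # stored_record is the record written on upload, mapping
        # media type -> digest of the blob with that manifest variant
        for media_type in (DOCKER_MANIFEST, OCI_MANIFEST):
            if media_type in stored_record:
                return stored_record[media_type]
        # previously an unrecognized type fell through, the manifest was
        # logged as "Unknown manifest", and none of its layers were protected
        return None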
mordredin the short term, we could add --output type=docker to the buildx commands it seems, to produce the old format18:40
clarkbI suspect fixing this is probably easy. Just need to get one of these manifests (the hound image I proposed yesterday should still be in there I suspect?)18:40
clarkbwell, and actually the manifest itself is still there; the blobs aren't18:41
clarkbhttps://github.com/opencontainers/image-spec/blob/main/manifest.md this appears to be the spec18:42
mordredaccording to the internet, the formats are very similar, mostly differing in media type identifiers18:43
clarkbvs github.com/openshift/docker-distribution/blob/master/docs/spec/manifest-v2-2.md18:44
clarkbthough I just noticed that is an openshift repo so probably not authoritative18:44
clarkbcomparing those I think the datastructures are basically equivalent for what we are doing18:48
mordredcorvus: in https://opendev.org/zuul/zuul-registry/src/branch/master/zuul_registry/main.py#L330-L333 - it seems like another place that, for completeness, we'd want to also validate the new default. Any chance you know why we'd only want to validate the docker v2?18:49
mordredwe also already list application/vnd.oci.image.manifest.v1+json here: https://opendev.org/zuul/zuul-registry/src/branch/master/zuul_registry/main.py#L178-L185 - but as the lowest priority18:51
clarkblooking at the format specs I'm not sure I understand what https://opendev.org/zuul/zuul-registry/src/branch/master/zuul_registry/storage.py#L300-L301 is getting us. I'm half wondering if this would break for the supported format too18:52
clarkbsince manifests don't seem to have keys off of their mediatype18:53
corvusmordred: i think we only added that validation to catch an error that was being produced; i think we presume it doesn't occur otherwise18:53
mordrednod18:53
clarkboh wait, we get the blob as a separate step, so I guess that isn't getting the data I think it is in the first half of the get manifests method18:54
corvusmordred: yeah, so seems like the prune system should be able to handle all of those formats18:54
mordred++ - want me to add the other three to the prune check while I'm at it?18:55
mordredit doesn't look like we have any tests to test manifest parsing for this18:55
corvusclarkb: yep, the manifest blob is an index which points to one or more other blobs which are the actual manifest content (ie, layer list and config) in different formats18:56
corvusmordred: yeah, if they all have the same layer list format (ie, lines 307-311 work for all of them) then that might be it18:57
clarkbcorvus: https://opendev.org/zuul/zuul-registry/src/branch/master/zuul_registry/main.py#L380-L382 yup it seems to originate from here18:57
corvusif one of them puts their layer list in some other json key, we'd need to handle it there18:57
clarkbI was expecting that to be from docker but it is our own accounting data18:57
corvusit does come from docker18:58
corvusthe content does at least18:59
clarkbcorvus: I mean we're writing it in as data to account for data that docker gave us. So the value itself is from docker but it's not a direct blob or manifest18:59
corvusbut yeah, if you're referring to the way we store the multiple versions, i see what you mean18:59
clarkbmordred: corvus manifest lists are slightly different https://github.com/openshift/docker-distribution/blob/main/docs/spec/manifest-v2-2.md#example-manifest-list18:59
clarkbI think for manifest lists we want for m in data[manifests]: m[digest]. And for manifests we want manifest[config][digest] and for l in manifest[layers]: l[digest]19:01
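(A small sketch of the extraction clarkb is describing, assuming the manifest JSON is already loaded into a dict; both the docker v2 and OCI single-image manifests carry config.digest plus layers[].digest, while a manifest list / OCI index carries manifests[].digest:)

    def referenced_digests(doc):
        digests = set()
        if 'manifests' in doc:
            # manifest list / image index: references other manifests
            for m in doc['manifests']:
                digests.add(m['digest'])
        else:
            # single-image manifest: references a config blob and layer blobs
            digests.add(doc['config']['digest'])
            for layer in doc.get('layers', []):
                digests.add(layer['digest'])
        return digests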
mordredmanifest lists are just references to other manifests. so for pruning, they really shouldn't come into play?19:01
mordredwe should be pruning at the manifest level19:02
clarkbmordred: I think pruning also prunes manifests so if the manifest list is newer than our prune timeout we want to add the digest for each manifest in the manifest list to the keep list19:02
clarkbmordred: to preserve similar behavior to https://opendev.org/zuul/zuul-registry/src/branch/master/zuul_registry/storage.py#L309 in the single manifest case ?19:03
clarkbmordred: I think otherwise we could make a new manifest list with old manifests in it that would get deleted breaking the manifest list ?19:03
mordredI think you're right in the pathological case, but in practice I don't know of any flow where we'd create and upload a manifest list using pre-existing manifests. we're super clever with buildx things, but we're not that clever :)19:04
clarkbthat is a good point. Those get uploaded together19:04
clarkband manifest lists aren't themselves artifacts within manifests19:05
clarkb(unlike layers)19:05
clarkbin any case it sounds like you're writing a change so I don't need to?19:05
mordredyeah. I'm on it19:06
corvusso we ignore manifest lists for pruning and assume that the manifest list and all the manifests it references will be pruned at the same time.19:06
corvusthat's probably worth a code comment :)19:06
clarkb++ and thanks19:06
corvus(i assume we won't have any collisions with manifests being used either alone or in multiple manifest lists, just because they should have build data that won't be identical in two builds.  if i'm wrong about that, then we would need to handle manifest lists as an extra reference-counting step)19:07
mordredremote:   https://review.opendev.org/c/zuul/zuul-registry/+/970480 Update manifest parsing to support oci manfiests [NEW]     19:09
clarkbre testing: it might be easy to do that at a functional level19:09
mordredthere's a first stab19:09
clarkbbuild image, upload it, prune, pull image19:09
clarkbthe 24 hour sanity check may need overriding though19:10
clarkbor we can fast forward our clocks before pruning19:10
corvusyes, but otoh, with so many ways of making images, maybe a synthetic test where we have test fixtures for each of the manifest formats, and then can mock out the time in the test might let us keep more stable test coverage over time19:13
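(A rough sketch of the kind of synthetic test corvus suggests: one fixture file per manifest media type, with time mocked so the 180-day cutoff can be exercised without a live build; the fixture paths and the storage test fixture below are hypothetical:)

    import json
    import time
    from unittest import mock

    FIXTURES = {
        'application/vnd.docker.distribution.manifest.v2+json': 'docker-v2.json',
        'application/vnd.oci.image.manifest.v1+json': 'oci-v1.json',
    }

    def test_prune_keeps_referenced_blobs(storage):
        # "storage" would be a pytest fixture wrapping a throwaway backend
        for media_type, filename in FIXTURES.items():
            with open('fixtures/%s' % filename) as f:
                manifest = json.load(f)
            storage.put_manifest('test/image', 'old', manifest, media_type)

            # jump past the expiration window, upload a fresh manifest that
            # shares blobs with the old one, then prune
            future = time.time() + 181 * 24 * 3600
            with mock.patch('time.time', return_value=future):
                storage.put_manifest('test/image', 'new', manifest, media_type)
                storage.prune()
                # blobs referenced by the kept manifest must survive pruning
                assert storage.get_manifest('test/image', 'new') is not None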
clarkbmordred: I left a note about manifest list handling on the change19:13
clarkbor wait are manifest lists never entering the code in the first place?19:14
clarkb(I'm trying to reconcile the "don't do anything with manifest lists" approach with having one in our acceptable content types)19:14
mordredohhhh ... yeah. hang on19:18
opendevreviewMerged opendev/system-config master: Update Hound Container to Debian Trixie  https://review.opendev.org/c/opendev/system-config/+/97032219:18
clarkbhound is in the process of starting back up now after ^ deployed19:23
mordredcorvus, clarkb: ok, updated. made it much simpler change19:24
clarkbmordred: you pushed a new change :)19:24
mordredwat?19:24
mordredoh - piddle. derp19:24
clarkbthe change id changed. Maybe you overdeleted the content in the commit message when editing it or something19:25
mordredyup. it was a commit message edit derp19:25
corvusso what's an application/vnd.oci.image.index.v1+json ?19:26
mordredthat's the manifest list version19:26
corvusi assumed that was application/vnd.docker.distribution.manifest.list.v2+json19:27
clarkbcorvus: https://github.com/opencontainers/image-spec/blob/main/image-index.md is the oci manifest list format I think19:27
corvusare both of those manifest lists then?19:27
mordredyeah. the docker non-oci version is that19:27
mordredyes19:27
mordredof the 4, two are manifest lists, and 2 are manifests19:27
clarkbthey look almost identical too. So basically they formalized the docker things as oci things and maybe sprinkled a bit of extra info in at the same time19:27
corvusok cool.  i mean... you know, "cool" because... cool.19:28
mordredyup. there are some metadata differences in areas we don't care about :)19:28
corvus+3 thanks mordred !19:31
fungiin unrelated news, deploy of the matrix-eavesdrop container trixie update failed on a "502 Bad Gateway" error from quay19:31
clarkbfungi: ya I was trying to confirm it was quay (it was pulling accessbot but the accessbot change was behind it in the gate so I think it was quay)19:32
clarkb(if accessbot was ahead it would use the image from the gate job ahead I think)19:32
clarkbhound seems to be working https://codesearch.opendev.org/?q=foobar&i=nope&literal=nope&files=&excludeFiles=&repos=19:35
clarkbthe accessbot eavesdrop job is finally running. If it looks ok I'll recheck the matrix-eavesdrop change19:51
opendevreviewMerged opendev/system-config master: Build accessbot on trixie  https://review.opendev.org/c/opendev/system-config/+/97032120:08
mordredclarkb, corvus well piddle. functional tests fail :( will dig a little more20:55
mordredhrm. it fails on a skopeo task20:57
clarkbhttps://zuul.opendev.org/t/zuul/build/3fbe42da4bbb475ca1baf7161ce804c5/log/docker/functional-test_registry_1.txt#7 is the log on the other side I think20:58
clarkblooks like ssl problems before we do anything docker protocol specific20:58
mordredwe haven't run that job since april20:59
mordredperhaps something has degraded/bitrotted20:59
clarkbit would not surprise me20:59
mordredor just changed (hi ssl)20:59
clarkbit is using cherrypy which zuul also uses. Maybe we need to update deps or pin versions or something21:00
clarkbmordred: some googling indicates that maybe its trying to do http on one side and https on another21:02
mordredthat could be a thing21:02
mordredafk for a sec, but I'll poke more in just a bit21:02
corvusthere is (was) a problem with cheroot, a cherrypy dep, causing python to hang instead of exiting after tests.  sounds like that isn't the problem here, but if that shows up, we can copy the pin from zuul.21:02
clarkbhttps://opendev.org/zuul/zuul-registry/src/branch/master/playbooks/functional-test/conf/registry.yaml this indicates to me that the server side is configured to do ssl21:04
clarkbhttps://opendev.org/zuul/zuul-registry/src/branch/master/playbooks/functional-test/docker.yaml#L3-L17 is where we're failing21:06
clarkbmaybe we need skopeo --tls-verify=false ?21:07
clarkbcorvus: that cheroot issue claims to have been fixed in november, but I'm not sure if there is a new release for it yet21:10
corvusoh nice, last i checked they hadn't merged a fix21:11
clarkbmordred: actually I misidentified the docker compose file I think it is this one: https://opendev.org/zuul/zuul-registry/src/branch/master/playbooks/functional-test/docker-compose.yaml21:13
clarkbanyway that docker-compose file loads that registry.yaml config into the registry so yes should still be listening on port 9000 with ssl certs21:14
clarkbthe error message implies the tcp connection made it to the process running in the container too21:14
clarkbI'm not seeing anything obviously wrong around that. We create a tls cert and it has a dns entry for localhost and an ip entry for 127.0.0.121:16
clarkband we run update-ca-certificates against it21:16
clarkbmordred: corvus: that skopeo command is doing `skopeo copy docker-archive:/tmp/registry-test/test.img docker-daemon:localhost:9000/test/image:latest` but localhost:9000 is a docker registry not a docker daemon.21:21
clarkbI wonder if that just worked in the past21:22
clarkbbut now we need to use docker:// ?21:22
corvusweird.  i do have a successful build result from a change from april of this year, so it worked at least that recently21:28
clarkbthe command name does say copy into local docker image storage. But localhost:9000 appears to be the zuul registry we start in the task just prior listening on port 900021:29
corvusbut maybe it's the other way around: maybe we need to be copying to the daemon.  maybe the "localhost:9000/" is the new thing21:29
opendevreviewMerged opendev/system-config master: Update matrix-eavesdrop container to build on Debian Trixie  https://review.opendev.org/c/opendev/system-config/+/97032521:29
corvuscan you link to the build console?21:29
clarkbcorvus: https://zuul.opendev.org/t/zuul/build/3fbe42da4bbb475ca1baf7161ce804c5/console this is the failing task21:30
corvusthe comment and structure of the task are correct:21:34
corvuswe want to copy the test image into the local docker daemon's local image storage, and once there, we want that image to have the name "localhost:9000/test/image:latest"21:35
corvusin other words, we're not trying to get skopeo to talk to port 900021:35
corvuswe're trying to create a local docker image with "localhost:9000" as part of its name21:35
corvusso that we can do a "docker push localhost:9000/test/image" in the next task21:36
corvus(which will cause the docker daemon to push the image named that to localhost:9000 -- that's the point at which we want something to talk to that port)21:36
clarkbcorvus: https://zuul.opendev.org/t/zuul/build/3fbe42da4bbb475ca1baf7161ce804c5/log/docker/functional-test_registry_1.txt#7 https://zuul.opendev.org/t/zuul/build/3fbe42da4bbb475ca1baf7161ce804c5/log/job-output.txt#3095 these are the two timestamps and they seem to line up almost?21:36
corvusmaybe skopeo changed in such a way that it's trying to be helpful?21:37
corvusyeah, i mean, i'm not saying that it isn't talking to port 900021:37
corvusi'm saying it's not supposed to :)21:37
clarkbthe timestamps are off by one second. It is enough to make me question that this is the issue but also close enough that it wouldn't surprise me if that is a logging artifact21:38
corvusi don't think we expect anything to contact port 9000 until the next task21:38
clarkbso ya one theory is that newer skopeo (on noble?) is interpreting that as "talk to the docker daemon at localhost:9000" and it doesn't work, rather than naming things that way21:38
corvusyep21:39
clarkbside note: the openstack zuul tenant is in an interesting state (lots of release approvals and other changes making my browser not render things quickly or at all)21:39
corvuswhich is a super weird thing for skopeo to do, since it entails not understanding that image names can have hostnames in them, which is kind of the rallying cry of the entire "containers lib" project...21:40
clarkboh I think release-approval is just noise. The pipeline manager has to consider each change but really its just a bunch of new changes pushed up?21:40
corvusyeah, just give it a sec to work through it21:40
mordredso maybe try plopping a docker:// in front of the localhost:9000 ?21:58
clarkbI was going to suggest maybe running skopeo locally against your docker daemon to see if it tries to hit some specific port, if you've got things installed (I don't think I have skopeo currently)22:01
clarkbthe job ran on ubuntu noble and I'm also trying to remember when we switched the default nodeset but I think it was earlier than april so any distro packaging change would've been caught earlier?22:01
mordredyeah ... also, according to docs, docker:// implies http transport22:03
mordredso whatever the answer is, it's almost certainly not that22:03
mordred(trying to reproduce locally)22:04
clarkbmatrix-eavesdrop updated on eavesdrop02.opendev.org and https://meetings.opendev.org/irclogs/%23zuul/%23zuul.2025-12-10.log is recording new logs22:04
mordredwell. that's unfortunate. the skopeo command TOTALLY worked as expected for me locally 22:06
mordredit did not try to hit localhost:9000 at all22:07
clarkbso maybe the timing being close there is just a distraction22:09
clarkbas I said its not exactly overlapping but within a second or so of one another22:09
mordredclarkb: gerrit channel confirms that things should work on node 2422:12
clarkbcool I just caught up on discord and see that is what paladox uses22:15
clarkbkolla-ansible got a 65-change stack for fixing ansible lint errors22:20
fungi65?!?22:20
clarkbyes sir https://review.opendev.org/c/openstack/kolla-ansible/+/970555/1 is a random one in the middle that I clicked on and the show all button says (65)22:21
clarkbI didn't count them to make sure gerrit's count is accurate22:21
fungiyikes22:22
tonyb#dedication!22:31
opendevreviewMonty Taylor proposed zuul/zuul-jobs master: Update the default skopeo upstream version  https://review.opendev.org/c/zuul/zuul-jobs/+/97056623:51

Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!