opendevreview | Tony Breeds proposed opendev/system-config master: Install ARA master in the ansible-devel job https://review.opendev.org/c/opendev/system-config/+/924012 | 00:07 |
---|---|---|
opendevreview | Tony Breeds proposed opendev/system-config master: Install ARA master in the ansible-devel job https://review.opendev.org/c/opendev/system-config/+/924012 | 00:15 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add ara and tzdata as installed requirements for the ansible-devel job https://review.opendev.org/c/opendev/system-config/+/934917 | 00:15 |
opendevreview | Merged zuul/zuul-jobs master: Switch logo color in docs pages to dark blue https://review.opendev.org/c/zuul/zuul-jobs/+/934453 | 00:37 |
clarkb | I've been semi regularly checking the gerrit queues for the last ~10 minutes or so and index interactive status:abandoned tasks keep showing up | 00:55 |
clarkb | I suspect these are tasks handling those queries? | 00:55 |
clarkb | interesting though as status:abandoned is a weird query to get consistently | 00:56 |
clarkb | gitea13 is having a bit of a sad (I noticed since some replication tasks were hanging around) | 01:00 |
clarkb | looks like a gitea process is consuming all of the memory. I half wonder if gitea has a memory leak now | 01:00 |
clarkb | we had attributed memory pressure to git processes but in this case git seems fine | 01:00 |
clarkb | I'm going to stop and start gitea on gitea13 | 01:00 |
clarkb | https://github.com/go-gitea/gitea/issues/31565 seems like a likely culprit though no known solution (just a workaround that may or may not help) | 01:06 |
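A minimal sketch of that stop/start, assuming gitea runs under docker-compose on the backend; the compose directory here is a hypothetical stand-in:

```shell
# Hypothetical compose directory; adjust to wherever the gitea service is defined.
cd /etc/gitea-docker
docker-compose down   # stop gitea and release the leaked memory
docker-compose up -d  # bring the containers back up
```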
clarkb | I haven't seen anything out of the ordinary around 0100 in gerrit queues fwiw | 01:06 |
clarkb | maybe it was coincidence or we need more stars to align | 01:06 |
clarkb | noonedeadpunk: ^ fyi I'm beginning to suspect that this may result in your request timeouts | 01:08 |
tonyb | What distros do we care to "support" in system-config? LTSs only? > Xenial? | 01:11 |
fungi | currently yes i think | 01:11 |
clarkb | yes I think we actually started to remove xenial when there were some recent problems? | 01:12 |
tonyb | This is in reference to https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/pip3/tasks/main.yaml#L17 and trying to get noble support moving along | 01:12 |
fungi | if pip3 only exists to handle <=xenial at this point, then happy to see it deleted | 01:12 |
clarkb | I guess that is an important distinction though. We do still run on a small number of xenial nodes | 01:12 |
tonyb | python3-distutils no longer exists on noble | 01:13 |
clarkb | so removing existing support like that is likely to break something | 01:13 |
clarkb | tonyb: I would probably rewrite that to include vars listing the packages to install or include tasks for each release that install the right packages | 01:14 |
clarkb | the default can be noble (so we're forward looking) then have special cases for the older stuff | 01:14 |
clarkb | fungi: that role exists to install pip from upstream I think | 01:14 |
tonyb | I'll take a gently-does-it approach | 01:16 |
clarkb | ya you can also just != xenial and != noble | 01:16 |
clarkb | that's probably the minimal change | 01:16 |
tonyb | Yeah | 01:16 |
fungi | ah, yes i see it does do different things on xenial, bionic, and newer | 01:17 |
clarkb | re status:abandoned tasks there appears to be some researcher requesting details for all abandoned changes | 01:19 |
clarkb | grep for abandoned in the apache logs and you'll see them. I don't think they are impacting things negatively, so they seem to be throttling well, and I'll leave it be | 01:19 |
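A sketch of that grep, assuming a typical Apache access log location (the path is illustrative):

```shell
# Count requests mentioning abandoned (queries may be URL-encoded, so match loosely).
grep -ci 'abandoned' /var/log/apache2/gerrit-access.log
# Break them down by client IP to confirm the requester is throttling well.
grep -i 'abandoned' /var/log/apache2/gerrit-access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head
```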
clarkb | and now I need to find dinner | 01:20 |
*** fungi is now known as Guest9249 | 01:33 |
*** kinrui is now known as fungi | 01:41 | |
opendevreview | Suzan Song proposed opendev/git-review master: Support running on systems setting GIT_SSH_COMMAND environment variable https://review.opendev.org/c/opendev/git-review/+/934745 | 02:25 |
opendevreview | Suzan Song proposed opendev/git-review master: Support running on systems setting GIT_SSH_COMMAND environment variable https://review.opendev.org/c/opendev/git-review/+/934745 | 02:29 |
tonyb | Gah this is way more complex than I'd have liked, as python is now 'externally-managed', so installing pip, and installing packages with that pip, needs additional args. | 03:10 |
tonyb | or perhaps setup. I suspect we don't really want to install pip and others into a venv | 03:11 |
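For context, this is the PEP 668 behavior tonyb is hitting on Noble; a sketch of the failure and the extra args involved (the package name is illustrative):

```shell
# On Noble the system python is marked externally-managed (PEP 668):
python3 -m pip install foo
# -> error: externally-managed-environment
# Opting out needs an extra flag per invocation...
python3 -m pip install --break-system-packages foo
# ...or the equivalent environment variable, e.g. for Ansible tasks:
PIP_BREAK_SYSTEM_PACKAGES=1 python3 -m pip install foo
```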
opendevreview | Suzan Song proposed opendev/git-review master: Support running on systems setting GIT_SSH_COMMAND environment variable https://review.opendev.org/c/opendev/git-review/+/934745 | 04:51 |
opendevreview | Merged openstack/project-config master: Fix openstack developer docs promote job https://review.opendev.org/c/openstack/project-config/+/934832 | 07:31 |
opendevreview | Karolina Kula proposed openstack/diskimage-builder master: WIP: Add support for CentOS Stream 10 https://review.opendev.org/c/openstack/diskimage-builder/+/934045 | 08:09 |
opendevreview | Karolina Kula proposed openstack/diskimage-builder master: WIP: Add support for CentOS Stream 10 https://review.opendev.org/c/openstack/diskimage-builder/+/934045 | 09:51 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add ara and tzdata as installed requirements for the ansible-devel job https://review.opendev.org/c/opendev/system-config/+/934917 | 09:52 |
opendevreview | Tony Breeds proposed opendev/system-config master: Install ARA master in the ansible-devel job https://review.opendev.org/c/opendev/system-config/+/924012 | 09:52 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add some debugging commands to the post job https://review.opendev.org/c/opendev/system-config/+/925667 | 09:52 |
opendevreview | Tony Breeds proposed opendev/system-config master: Update pip3 role to work on Ubuntu Noble https://review.opendev.org/c/opendev/system-config/+/934937 | 09:52 |
frickler | I reenqueued the api-site promote job after corvus' fix merged earlier and it worked, yay https://zuul.opendev.org/t/openstack/build/74d666f516834e14814cc83ea0688e59 | 10:59 |
opendevreview | Karolina Kula proposed openstack/diskimage-builder master: WIP: Add support for CentOS Stream 10 https://review.opendev.org/c/openstack/diskimage-builder/+/934045 | 11:17 |
opendevreview | Tony Breeds proposed opendev/system-config master: Update pip3 role to work on Ubuntu Noble https://review.opendev.org/c/opendev/system-config/+/934937 | 11:23 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add ara and tzdata as installed requirements for the ansible-devel job https://review.opendev.org/c/opendev/system-config/+/934917 | 11:23 |
opendevreview | Tony Breeds proposed opendev/system-config master: Install ARA master in the ansible-devel job https://review.opendev.org/c/opendev/system-config/+/924012 | 11:23 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add some debugging commands to the post job https://review.opendev.org/c/opendev/system-config/+/925667 | 11:23 |
tonyb | Why do we have docker and docker-compose on bridge01/bridge99 ? ( I admit I haven't unlocked my keys to look on the live systems ) | 11:58 |
jaltman | https://www.openafs.org/pages/security/#OPENAFS-SA-2024-001 | 13:39 |
jaltman | https://www.openafs.org/pages/security/#OPENAFS-SA-2024-002 | 13:39 |
jaltman | https://www.openafs.org/pages/security/#OPENAFS-SA-2024-003 | 13:39 |
opendevreview | Daniel Bengtsson proposed opendev/irc-meetings master: Oslo update meeting time. https://review.opendev.org/c/opendev/irc-meetings/+/934648 | 13:43 |
fungi | thanks jaltman! doesn't look like those have hit the oss-security ml yet | 13:52 |
opendevreview | Joel Capitao proposed openstack/diskimage-builder master: WIP: Add support for CentOS Stream 10 https://review.opendev.org/c/openstack/diskimage-builder/+/934045 | 14:16 |
slaweq | frickler fungi ianw clarkb hi, thx for reviews and help with the rtd webhook job, it is working fine again now: https://zuul.opendev.org/t/openstack/build/dd2ab6308a9748aab3386555264d894b | 14:28 |
fungi | thanks for confirming, slaweq! | 14:28 |
opendevreview | Karolina Kula proposed openstack/diskimage-builder master: WIP: Add support for CentOS Stream 10 https://review.opendev.org/c/openstack/diskimage-builder/+/934045 | 14:33 |
opendevreview | Joel Capitao proposed openstack/diskimage-builder master: WIP: Add support for CentOS Stream 10 https://review.opendev.org/c/openstack/diskimage-builder/+/934045 | 16:12 |
clarkb | I've rechecked https://review.opendev.org/c/zuul/zuul-jobs/+/680178 to see if rax keystone is happier now. Their status page is a bit ambiguous as today is marked green but the last incident doesn't have a resolved timestamp | 16:49 |
clarkb | tonyb: some tools are run from docker iirc. I can't remember which ones those as openstackclient isn't | 16:50 |
clarkb | s/those/though/ | 16:50 |
clarkb | tonyb: re pip and being externally managed this would be for noble I'm guessing? I think maybe we do need to shift our install paradigm and use python3 -m venv targets for the python stuff we install | 16:51 |
clarkb | tonyb: then I would expect the venv module to install pip for us in that particular venv. So less have a venv for pip and more just put what we want in venvs? | 16:51 |
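A sketch of the venv-per-tool paradigm clarkb describes; paths and the tool name are hypothetical:

```shell
# The venv module bootstraps its own pip inside the venv, sidestepping PEP 668.
python3 -m venv /usr/local/footool-venv
/usr/local/footool-venv/bin/pip install footool
# Expose the entry point on PATH without touching the system python.
ln -s /usr/local/footool-venv/bin/footool /usr/local/bin/footool
```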
fungi | pip can also be used outside a venv. these days there's a directly importable zip too, so "installing" pip is as simple as dropping that file somewhere python can find it | 16:58 |
fungi | (used outside the venv it's managing, so you could even install pip in one venv and tell it to manage another venv that has no pip in it at all) | 16:59 |
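A sketch of fungi's zipapp point; the pip.pyz download URL is the real PyPA bootstrap location, the target venv path is illustrative:

```shell
# pip ships a self-contained zipapp; "installing" it is just dropping the file somewhere.
wget https://bootstrap.pypa.io/pip/pip.pyz
# It can even manage a venv that contains no pip at all
# (the --python option needs a reasonably recent pip):
python3 -m venv --without-pip /opt/pipless-venv
python3 pip.pyz install --python /opt/pipless-venv/bin/python footool
```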
clarkb | #status log Manually deleted /nodepool/images/debian-bookworm-arm64/builds/08c5105a1f5f44a487ad691cc9a97147 in nodepool zk database to remove a corrupt entry preventing other record cleanup from occurring | 17:23 |
opendevstatus | clarkb: finished logging | 17:23 |
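For reference, that manual cleanup is roughly the following via the ZooKeeper CLI; the server address and invocation details are assumptions:

```shell
# deleteall removes the znode and all of its children recursively.
zkCli.sh -server zk01.opendev.org:2181 \
  deleteall /nodepool/images/debian-bookworm-arm64/builds/08c5105a1f5f44a487ad691cc9a97147
```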
opendevreview | Jay Faulkner proposed openstack/diskimage-builder master: Reapply "Make sure dnf won't autoremove packages that we explicitly installed" https://review.opendev.org/c/openstack/diskimage-builder/+/934992 | 17:27 |
clarkb | I have rechecked 680178 for additional data but the first pass was happy. If second pass is also happy I'll push up a revert | 17:44 |
clarkb | I'm going to send the gerrit upgrade announcement momentarily | 17:52 |
clarkb | and sent | 18:01 |
opendevreview | Clark Boylan proposed opendev/base-jobs master: Revert "Disable rax job log uploads" https://review.opendev.org/c/opendev/base-jobs/+/934995 | 18:04 |
clarkb | not super urgent but it looks like rax swift log uploads are happy again per https://review.opendev.org/c/zuul/zuul-jobs/+/680178 so I think we can proceed with the revert when ready | 18:04 |
corvus | and then also we're clear for the dry run registry prune | 18:06 |
clarkb | yup I'm currently checking the image version on insecure-ci-registry | 18:07 |
clarkb | seems to match the latest tag in quay.io | 18:08 |
clarkb | next up is figuring out how to run the command. I don't think we want to docker exec in the existing container but instead run a new container? | 18:08 |
clarkb | (mostly because this may take some time)? then do something like tee the output to a logfile and also have it emit to the console in a screen session? | 18:08 |
corvus | clarkb: are you concerned we may want to restart the container while it's running? | 18:09 |
corvus | the automatic upgrade will reboot the whole server though, so even if we started a new container, it would get killed for that | 18:10 |
corvus | so if that's the only concern -- i think you could just run it in the existing container with an exec | 18:10 |
clarkb | corvus: ya mostly worried that we'd do out of band container updates. Note I think zuul-registry just updates hourly and isn't part of the weekly upgrades and reboots | 18:11 |
clarkb | looks like `docker-compose run foo $cmd` should override the container command which I'm checking now | 18:11 |
clarkb | there is no entrypoint and only a zuul-registry CMD so I think we can run this using `docker-compose run registry /usr/local/bin/zuul-registry -c /conf/registry.yaml prune --dry-run` ? | 18:12 |
clarkb | alternatively replace run with exec if we think having a second container will cause problems | 18:12 |
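The two variants being weighed, spelled out (both use the command and service name from above):

```shell
# Option 1: one-off container alongside the running service (run overrides CMD):
docker-compose run --rm registry /usr/local/bin/zuul-registry -c /conf/registry.yaml prune --dry-run
# Option 2: exec inside the already-running container:
docker-compose exec registry /usr/local/bin/zuul-registry -c /conf/registry.yaml prune --dry-run
```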
corvus | we don't merge much to zuul-registry; i'd probably just do an exec. you can write the output to, say, the conf directory if you want to get it out of the container. | 18:13 |
clarkb | ack | 18:13 |
corvus | i don't think it'd cause problems, i just thought exec would be easier. either works i think. | 18:13 |
corvus | and yeah, that cmd lgtm | 18:13 |
clarkb | your note about logging paths has me wondering if tee would even work since it would be in the container context? I guess I could try running it in a subshell or something but keeping this simple is probably best | 18:14 |
clarkb | or actually no the parser in the parent shell should know everything after the | is in the host side | 18:15 |
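That is, the pipe is consumed by the host shell before docker ever runs, so tee executes host-side; a sketch (the log path is illustrative):

```shell
# Everything after | runs on the host, not in the container:
docker-compose exec registry /usr/local/bin/zuul-registry -c /conf/registry.yaml prune --dry-run \
  2>&1 | tee /var/log/registry-prune-dry-run.log
```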
clarkb | corvus: I've got a screen running on insecure-ci-registry now with a command queued up if you want to sanity check it there | 18:17 |
corvus | clarkb: that seems reasonable to me -- but i'm only like 51% confident those redirects are going to the right command. :) | 18:19 |
corvus | one way to find out | 18:19 |
clarkb | ya I can test it with ls or similar first actually | 18:20 |
clarkb | and now we're back to using exec :) | 18:21 |
clarkb | huh the failed docker compose run did tee as I expected but the exec didn't. Weird | 18:23 |
corvus | i think it did? | 18:23 |
corvus | wasn't the last line registry.yaml? | 18:23 |
clarkb | oh ya it did | 18:23 |
corvus | so yeah it got both | 18:23 |
clarkb | sorry, I'm just reading poorly. ok, I'll run this as an exec using the redirect and pipe then | 18:24 |
corvus | maybe rm and run again | 18:24 |
corvus | just to be sure :) | 18:24 |
clarkb | ack | 18:24 |
clarkb | ya that lgtm. | 18:25 |
clarkb | ready to run the queued up command? | 18:25 |
corvus | yep | 18:25 |
corvus | okay that's a lot of uploads :) | 18:26 |
clarkb | corvus: interesting that it seems to be saying it would prune the actual object files and the metadata but not the parent sha location | 18:26 |
corvus | the uploads are just a staging area | 18:27 |
corvus | so i think those are basically incomplete pushes | 18:27 |
clarkb | ah | 18:27 |
clarkb | should it prune the actual sha path too then? | 18:27 |
clarkb | (I'm just wondering if there is an underprune there based on that log output) | 18:28 |
corvus | oh like the containing directory? | 18:28 |
clarkb | but it might be best to let it run to completion then investigate the log | 18:28 |
clarkb | ya | 18:28 |
clarkb | those objects will be tiny but they accumulate as objects to track | 18:28 |
clarkb | not sure if that is a big deal or not | 18:28 |
corvus | i think with swift it doesn't matter because i don't think that containing dir is a real object... but it probably should for the file backends | 18:28 |
opendevreview | Merged opendev/base-jobs master: Revert "Disable rax job log uploads" https://review.opendev.org/c/opendev/base-jobs/+/934995 | 18:29 |
clarkb | checking free on the server shows we're not filling memory tracking all of these records (one of my concerns with doing a prune after so much time) | 18:30 |
corvus | oh i think i see what you mean | 18:30 |
corvus | that "Keep" line is weird isn't it | 18:30 |
clarkb | ya | 18:30 |
corvus | it should only say that if it has a child under it that was kept (should be no) or its ctime is > the target time. | 18:32 |
corvus | maybe in swift, since it's not a real object, it's the ctime that's tripping it? | 18:32 |
corvus | (if that's the case, then i think it's notabug) | 18:32 |
corvus | yeah, we set the ctime to now in our swift handler for subdirs | 18:34 |
corvus | so they all basically show up as immediately created; but i think they should "disappear" once they have no children | 18:34 |
clarkb | got it so multiple passes would take care of it | 18:36 |
corvus | i don't think multiple passes are necessary | 18:37 |
corvus | like, it doesn't actually need to be removed because it's swift | 18:37 |
corvus | it doesn't exist :) | 18:37 |
clarkb | oh I see what you are saying they are virtual paths | 18:37 |
corvus | yep | 18:37 |
clarkb | they only exist because they have children I get it now | 18:37 |
clarkb | not just that they exist within our pruning system because they have children, but that in swift itself they only exist while they have children | 18:37 |
corvus | and if we ran this with the file backend, it would say "Prune" instead of "Keep" because it would really exist and have a real ctime | 18:37 |
corvus | looking ahead at the other stuff -- zuul-operator is a good microcosm because it doesn't change much | 18:38 |
corvus | it looks like we're keeping a handful of manifests for the operator and deleting most of the others. that seems about right. | 18:38 |
corvus | actually zuul-registry may be more interesting... | 18:39 |
corvus | it looks like we're keeping no zuul-registry manifests? shouldn't we keep like... 2? for our recent changes? | 18:40 |
clarkb | corvus: yes I would've expected the two most recent commits (bugfix and --dryrun) to have manifests and objects that are kept | 18:42 |
corvus | maybe they didn't get pushed to the intermediate registry... | 18:43 |
clarkb | corvus: we're keeping some of the quay.io/zuul/zuul-registry manifests | 18:43 |
corvus | oh was i looking at the old name? i bet i was | 18:44 |
clarkb | I think what that is doing is pruning all of the old docker hub prefixed (or lack of prefix) manifests but then after we migrated to quay.io it is deciding to keep some? | 18:44 |
corvus | https://zuul.opendev.org/t/zuul/build/9895a96eab724dbeb7abd04e76e1cce3/ | 18:44 |
corvus | i'm going to spot check that build | 18:44 |
clarkb | ++ | 18:45 |
corvus | 2024-11-13 18:28:10,522 DEBUG registry.storage: Keep _local/repos/quay.io/zuul-ci/zuul-registry/manifests/9895a96eab724dbeb7abd04e76e1cce3_latest | 18:45 |
corvus | now to find the layers for it | 18:45 |
corvus | docker pull insecure-ci-registry.opendev.org:5000/quay.io/zuul-ci/zuul-registry:9895a96eab724dbeb7abd04e76e1cce3_latest | 18:47 |
corvus | i think we need to keep all of these objects; https://paste.opendev.org/show/bqTsOb6Mz95mgZRS62R4/ | 18:49 |
corvus | this is probably the part of the prune where we start to use a lot of ram | 18:50 |
clarkb | free seems to report things are still happy | 18:52 |
clarkb | python also overallocates for things like lists pretty aggressively so we may have expanded usable memory in the process earlier on, and now we're just actually going to use it for real records rather than space python preemptively set aside for records | 18:52 |
corvus | we get a list of manifests at the start, then we get the contents of those manifests. using that, it looks like we're maybe 40-50% through the list of manifests | 18:55 |
corvus | so maybe another 20-30m to finish that, then it's on to removing blobs | 18:56 |
clarkb | corvus: another thought which I didn't consider when reviewing the code is: are prune runs resumable if interrupted? I think the answer is yes, but maybe we need to not delete manifests until after we delete blobs? Is the only way to get the blobs with a manifest? | 18:58 |
clarkb | actually no we list all blobs and then compare to those in manifests we want to keep, removing the others right? | 18:58 |
clarkb | so it should be resumable without orphaning objects | 18:59 |
clarkb | (just thinking this will take a lot of time so being resumable is a good thing) | 18:59 |
corvus | yep, that's the idea | 19:00 |
corvus | it should be resumable | 19:00 |
clarkb | memory use is inching up but very slowly and we still have plenty of room | 19:01 |
corvus | biab | 19:03 |
corvus | this is going much slower than my initial estimate. | 19:36 |
corvus | oh actually maybe not that far off. we're at blobs now. | 19:37 |
clarkb | and memory has remained stable | 19:37 |
clarkb | oh it completed | 19:47 |
clarkb | corvus: I'm not seeing 362a321e8da1d7ce58567f73e4eef005677d6d8a8b0115612d7b08fba97b60c0 in the log (no keep and no prune entries nothing at all) | 19:49 |
clarkb | maybe the format is different and I have to add some /'s or something | 19:49 |
clarkb | I wonder if it stopped early for some reason | 19:50 |
corvus | hrm i'm looking too | 19:50 |
clarkb | rc code was 0 | 19:51 |
clarkb | we didn't get OOMKilled | 19:52 |
corvus | clarkb: i think we don't log anything about blobs we keep | 19:52 |
clarkb | ah that would also explain it | 19:52 |
corvus | so absence of delete lines for all of the shas in that paste == success | 19:53 |
clarkb | 2024-11-13 19:41:09,730 DEBUG registry.storage: Keep _local/blobs/sha256:02bf457d870ff6d4e274ea79f5ce5b7bfdf5041d6ac11c5be1ba23b1cec59977/ <- that is a keep line fwiw but maybe we don't do it in every case? | 19:53 |
corvus | we deleted that blob | 19:53 |
corvus | the keep is the swift virtual subdir again | 19:54 |
clarkb | aha | 19:54 |
corvus | okay i checked all the shas in the paste from earlier and it looks like we did not delete them | 19:55 |
clarkb | excellent | 19:55 |
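A sketch of that spot check; the sha list from the paste and the log filename are hypothetical stand-ins (per the earlier discussion, deletions are logged as "Prune" lines):

```shell
# Success == no Prune line mentions any sha we expect to keep.
while read -r sha; do
  grep "Prune .*${sha}" prune-dry-run.log && echo "UNEXPECTED: ${sha} would be pruned"
done < kept-shas.txt
```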
clarkb | the actual pruning will probably take much longer? I'm not sure if the delete calls are super slow but at the very least we would be adding a good chunk of them to the runtime we just had | 19:56 |
corvus | i think my inclination is to say this passes the sniff test, leave this log in place, then run the prune for real as scheduled and redirect it to a second log | 19:56 |
clarkb | and it took just over an hour and 15 minutes in dry run mode | 19:56 |
clarkb | ++ | 19:56 |
corvus | if something goes wrong with the real prune, we will have lots of info for debugging | 19:56 |
corvus | we would have deleted 184564 objects | 19:57 |
corvus | so... probably 185k by friday :) | 19:57 |
corvus | maybe fungi has a feeling for how long each individual delete call might take on average... | 19:58 |
corvus | if they take a second, that's an extra 51 hours. honestly not bad for this and i probably would just let it run; we're in no rush. :) | 19:59 |
fungi | i think when i was testing i was doing it through the rackspace dashboard, and it was timing out trying to do recursive operations, so i don't really have a good number | 20:01 |
clarkb | corvus: should I stop the screen now? | 20:05 |
Clark[m] | I guess we can leave it in place and use it on Friday | 20:33 |
opendevreview | Doug Goldstein proposed openstack/project-config master: Update more ironic project ACLs for editHashtags https://review.opendev.org/c/openstack/project-config/+/935022 | 20:47 |
clarkb | corvus: fwiw I double checked the playbook that manages the zuul cluster upgrades and reboots and it doesn't touch zuul registry as far as I can tell | 20:51 |
clarkb | so we should be in the clear for running this friday without interference from that | 20:51 |
gouthamr | o/ is there a document regarding multinode setups in a zuul context? I want to know if we have any other networking options beyond the public network.. | 21:00 |
gouthamr | the use case is to be able to bind a service on a "controller" node and expose it to OpenStack VMs on the compute node.. | 21:01 |
clarkb | gouthamr: https://docs.opendev.org/opendev/infra-manual/latest/testing.html is the closest thing we have to that document and no you don't get highly configurable networking because half the clouds don't let you configure it at all. Instead you have to build that within your test system | 21:01 |
clarkb | gouthamr: there are zuul roles (that devstack uses iirc) to build vxlan overlay networks that allow you to virtually create somewhat arbitrary network configurations | 21:01 |
clarkb | for example devstack multinode uses this to be able to ssh to VMs on the compute node(s) from the controller | 21:02 |
clarkb | thats very similar to your use case but instead of a client reaching a server on the VM its the other way around | 21:02 |
gouthamr | clarkb: ah yes, thank you; i can go look in that direction.. | 21:02 |
gouthamr | and i'm trying to build a devstack multinode job, so maybe i don't need to reinvent the wheel :) | 21:03 |
corvus | clarkb: i've detached; i don't think we need to keep it but i'll leave it up to you | 21:04 |
clarkb | corvus: ack I made note of the command I ran so I can just drop the --dry-run next time | 21:04 |
corvus | infra-root: i would like to examine the performance around the zuul time query, so i want to enable the slow query log on the production mariadb server that zuul uses. i don't think i'll need this in place for long (maybe 24h?), and the least disruptive way to enable it is to enable it at runtime with the cli. i'd like to do that today if there aren't any objections. | 21:10 |
clarkb | I don't have any. I guess set a reminder to turn it off so that we don't fill the disks with logs? | 21:11 |
corvus | ++ | 21:12 |
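A sketch of flipping that on at runtime; the query-time threshold is illustrative, both variables are standard MariaDB globals:

```shell
# Turn the slow query log on without a restart...
mysql -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"
# ...and remember to turn it off later so the logs don't fill the disk:
mysql -e "SET GLOBAL slow_query_log = 'OFF';"
```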
corvus | #status log enabled slow query log on zuul-db01 | 22:08 |
opendevstatus | corvus: finished logging | 22:08 |
*** tkajinam is now known as Guest9338 | 22:33 |
opendevreview | Jeremy Stanley proposed opendev/infra-openafs-deb jammy: Update Jammy to 1.8.13 https://review.opendev.org/c/opendev/infra-openafs-deb/+/935025 | 22:47 |
opendevreview | Jeremy Stanley proposed opendev/infra-openafs-deb jammy: Update Jammy to 1.8.13 https://review.opendev.org/c/opendev/infra-openafs-deb/+/935025 | 22:59 |
opendevreview | James E. Blair proposed opendev/system-config master: Add basic docs on updating the OpenAFS ppa https://review.opendev.org/c/opendev/system-config/+/935026 | 23:08 |
clarkb | corvus: small thing on ^ | 23:10 |
opendevreview | James E. Blair proposed opendev/system-config master: Add basic docs on updating the OpenAFS ppa https://review.opendev.org/c/opendev/system-config/+/935026 | 23:12 |
opendevreview | Jeremy Stanley proposed opendev/infra-openafs-deb jammy: Update Jammy to 1.8.13 https://review.opendev.org/c/opendev/infra-openafs-deb/+/935025 | 23:36 |
opendevreview | Jeremy Stanley proposed opendev/infra-openafs-deb jammy: Update Jammy to 1.8.13 https://review.opendev.org/c/opendev/infra-openafs-deb/+/935025 | 23:40 |