| @mnasiadka:matrix.org | #status log Replaced mirror01.iad3.openmetal.opendev.org (346dbb25-db6f-4459-9108-b5f2bc3d0b5c) with mirror02.iad3.openmetal.opendev.org | 05:05 |
|---|---|---|
| @status:opendev.org | @mnasiadka:matrix.org: finished logging | 05:05 |
| @noonedeadpunk:matrix.org | Hey folks. I know I've been going back and forth on this topic for the last... 5 years? But... do you know if publishing content to Ansible Galaxy was ever sorted out? | 07:54 |
| As of the last time I was on it, I think we agreed to have a role in zuul/zuul-jobs, but then there were other opinions and I got fed up with the process. Now I want to see if something has changed in the meantime, or whether the flow should pretty much start from the beginning? | ||
| @fungicide:matrix.org | Dmitriy Rabotyagov: i assume it works similarly to pypi? you have some file(s) locally and a client/command you run to upload to the site? | 13:13 |
| @noonedeadpunk:matrix.org | yes, pretty much | 13:13 |
| @fungicide:matrix.org | in theory we could model it on how pypi jobs work, in that case | 13:13 |
| @noonedeadpunk:matrix.org | `cd collection; ansible-galaxy collection build; ansible-galaxy collection publish *.tar.gz --api-key $ANSIBLE_GALAXY_TOKEN` | 13:14 |
| @fungicide:matrix.org | does `ansible-galaxy collection build` run any user-supplied code or does it just assemble/transform existing files? | 13:14 |
| @fungicide:matrix.org | just want to avoid having that expose the contents of the token | 13:15 |
| @noonedeadpunk:matrix.org | it packs the current folder and adds metadata based on the galaxy.yml it expects to see in the root folder | 13:15 |
| @noonedeadpunk:matrix.org | I believe this can be set as env var as well | 13:15 |
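The flow described here could be wrapped in Ansible tasks roughly like this (a hedged sketch; `collection_dir`, `tarball_path`, and `galaxy_token` are illustrative variable names, not taken from the actual role under review):

```yaml
- name: Build the collection tarball from the directory containing galaxy.yml
  ansible.builtin.command:
    cmd: ansible-galaxy collection build
    chdir: "{{ collection_dir }}"

- name: Publish the built tarball to Galaxy
  ansible.builtin.command:
    cmd: "ansible-galaxy collection publish {{ tarball_path }} --api-key {{ galaxy_token }}"
  no_log: true  # keep the token out of recorded job output
```

`no_log` addresses the token-exposure concern raised here; the real role may handle this differently.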
| @noonedeadpunk:matrix.org | eventually, ansible-collection-openstack does publish to galaxy | 13:15 |
| @noonedeadpunk:matrix.org | https://opendev.org/openstack/ansible-collections-openstack/src/branch/master/.zuul.yaml#L300-L305 | 13:16 |
| @noonedeadpunk:matrix.org | the problem was to streamline that | 13:17 |
| @noonedeadpunk:matrix.org | and re-encrypt the secret | 13:17 |
| @noonedeadpunk:matrix.org | and have a generic job in zuul to do that | 13:17 |
| @noonedeadpunk:matrix.org | Like https://review.opendev.org/c/zuul/zuul-jobs/+/899230 - is without any reviews for 2 years now | 13:18 |
| @noonedeadpunk:matrix.org | And I believe that was the last time I checked on the topic as well... and back then I think we agreed that the best thing to do would be to add a role to zuul-jobs, which apparently did not work out :( | 13:19 |
| @noonedeadpunk:matrix.org | (and I'm not sure if this role still has correct logic or not though, but I can test it out I guess) | 13:21 |
| @fungicide:matrix.org | looks like it just didn't get reviewed? | 13:21 |
| @noonedeadpunk:matrix.org | yup | 13:21 |
| @fungicide:matrix.org | it doesn't look especially complicated, but also i guess it's hard to include tests for it without a fake galaxy upload api | 13:25 |
| @noonedeadpunk:matrix.org | right... | 13:26 |
| @noonedeadpunk:matrix.org | I don't think Galaxy has an analog to TestPyPI | 13:26 |
| @noonedeadpunk:matrix.org | and then the second challenge would be to enroll ansible-collections-openstack to use the role, and move the secret from it to project-config | 13:28 |
| @noonedeadpunk:matrix.org | but it's a different story... | 13:29 |
| @fungicide:matrix.org | the commit message refers to a `ci/publish/publish_collection.yml` playbook that seems to be used in an ansible-collections-openstack-release zuul job? | 13:30 |
| @noonedeadpunk:matrix.org | yes, correct: https://opendev.org/openstack/ansible-collections-openstack/src/branch/master/.zuul.yaml#L300-L305 here is the job | 13:31 |
| @noonedeadpunk:matrix.org | and the playbook it uses: https://opendev.org/openstack/ansible-collections-openstack/src/branch/master/ci/publish/publish_collection.yml | 13:31 |
| @fungicide:matrix.org | i'm digging in the zuul builds list to see the last time it ran | 13:31 |
| @noonedeadpunk:matrix.org | eh, been a while since last tag | 13:31 |
| @fungicide:matrix.org | okay, so it's only run in a tag pipeline | 13:31 |
| @noonedeadpunk:matrix.org | Oct 21, 2025 | 13:32 |
| @noonedeadpunk:matrix.org | yes, sure | 13:32 |
| @noonedeadpunk:matrix.org | it's a release sequence basically | 13:32 |
| @noonedeadpunk:matrix.org | you do want to publish as part of the release process | 13:32 |
| @fungicide:matrix.org | got it, included in the tag pipeline at the bottom of the .zuul.yaml file there | 13:32 |
| @noonedeadpunk:matrix.org | so really very alike to pypi publish | 13:32 |
| @noonedeadpunk:matrix.org | except it's not mainstreamed and is locked down to only one collection | 13:33 |
| @fungicide:matrix.org | i don't see any reason not to pick this back up, and would be fine taking it forward and fixing any problems that arise from attempts to use it | 13:40 |
| @noonedeadpunk:matrix.org | I wanted to publish https://opendev.org/openstack/ansible-config_template to begin with, so I could play with this in a gate job there | 13:42 |
| @noonedeadpunk:matrix.org | though it's probably not gonna work outside of the tag pipeline | 13:43 |
| @noonedeadpunk:matrix.org | as | 13:44 |
| ``` | ||
| - name: Discover tag version | ||
| ansible.builtin.set_fact: | ||
| version_tag: "{{ zuul.tag | default('no_version', true) }}" | ||
| - name: Fail if no tag version found | ||
| ansible.builtin.fail: | ||
| msg: "No tag was found in Zuul vars!" | ||
| when: version_tag == 'no_version' | ||
| ``` | ||
| @fungicide:matrix.org | yeah, you'd need a fallback value for that to deal with lack of a git tag | 13:46 |
| @noonedeadpunk:matrix.org | let me add something quick to the defaults to be able to override it if needed | 13:48 |
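A minimal sketch of such a defaults override (the variable name is an illustrative assumption, not necessarily what the zuul-jobs role uses):

```yaml
# defaults/main.yaml -- illustrative knob so jobs outside the tag
# pipeline can supply a version explicitly instead of requiring zuul.tag
upload_galaxy_version: "{{ zuul.tag | default('no_version', true) }}"
```

The tasks would then consume `upload_galaxy_version` rather than reading `zuul.tag` directly, so a gate job can set it to a throwaway value for testing.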
| -@gerrit:opendev.org- Dmitriy Rabotyagov proposed: [zuul/zuul-jobs] 899230: Add role for uploading Ansible collections to Galaxy https://review.opendev.org/c/zuul/zuul-jobs/+/899230 | 13:53 | |
| -@gerrit:opendev.org- Dmitriy Rabotyagov proposed: [zuul/zuul-jobs] 899230: Add role for uploading Ansible collections to Galaxy https://review.opendev.org/c/zuul/zuul-jobs/+/899230 | 13:56 | |
| @noonedeadpunk:matrix.org | fungi: if you can re-vote, that would be appreciated :) | 14:10 |
| @fungicide:matrix.org | will do, just juggling meetings all morning | 14:11 |
| -@gerrit:opendev.org- Brian Haley proposed: [opendev/irc-meetings] 982430: Move neutron-drivers meeting earlier by 1 hour https://review.opendev.org/c/opendev/irc-meetings/+/982430 | 14:27 | |
| @mnasiadka:matrix.org | Clark: in OpenMetal we used a local flavor that was called opendev-mirror - for OVH/BHS1 should I use the same flavor as existing mirror02? | 14:55 |
| @fungicide:matrix.org | i'm going to be in and out for the next few hours, need to run some errands and have an appointment | 15:02 |
| @fungicide:matrix.org | i'll check back in when i can | 15:02 |
-@gerrit:opendev.org- James E. Blair proposed: [zuul/zuul-jobs] 982436: WIP: Make max_workers configurable for prepare-workspace-git https://review.opendev.org/c/zuul/zuul-jobs/+/982436 | 15:02 |
| -@gerrit:opendev.org- Michal Nasiadka proposed: [opendev/irc-meetings] 982437: Move kolla meeting earlier by 1 hour https://review.opendev.org/c/opendev/irc-meetings/+/982437 | 15:04 | |
| -@gerrit:opendev.org- Michal Nasiadka proposed: [opendev/irc-meetings] 982437: Move kolla meeting earlier by 1 hour https://review.opendev.org/c/opendev/irc-meetings/+/982437 | 15:05 | |
| @clarkb:matrix.org | mnasiadka: yes I think so. Give me a minute and I'll take a look | 15:20 |
| @clarkb:matrix.org | mnasiadka: yup I believe that the flavor ssd-osFoundation-3 in ovh affects scheduling things and puts our servers in the correct locations including for the mirror. Then we attach a 200GB volume to the instances to be mounted half for apache cache and half for openafs cache | 15:23 |
| @mnasiadka:matrix.org | Clark: Already created a new volume, so I should be fine - thanks :) | 15:23 |
| @clarkb:matrix.org | mnasiadka: https://opendev.org/opendev/system-config/src/branch/master/launch/src/opendev_launch/mirror_volumes.sh this is the script that sets that up on the mirror node once the node's initial boot and configuration is done (but before we add it to the system-config inventory) | 15:24 |
| @clarkb:matrix.org | mnasiadka: I think you boot the instance, then manually attach the volume, then download that script onto the node and run it on the mirror node locally. | 15:24 |
| @mnasiadka:matrix.org | Ah, ok, thanks for the heads up :) | 15:24 |
| @clarkb:matrix.org | I'm happy to walk through any of those steps if you like as well. Just let me know if questions or concerns come up | 15:27 |
| @mnasiadka:matrix.org | Now booting an instance, surely some questions will pop up :) | 15:27 |
| -@gerrit:opendev.org- Clark Boylan proposed: | 15:44 | |
| - [opendev/lodgeit] 982311: Uncap Werkzeug for python 3.14 compatibility https://review.opendev.org/c/opendev/lodgeit/+/982311 | ||
| - [opendev/lodgeit] 982307: Start testing python3.14 https://review.opendev.org/c/opendev/lodgeit/+/982307 | ||
| -@gerrit:opendev.org- Clark Boylan proposed: | 15:52 | |
| - [opendev/lodgeit] 982311: Uncap Werkzeug for python 3.14 compatibility https://review.opendev.org/c/opendev/lodgeit/+/982311 | ||
| - [opendev/lodgeit] 982307: Start testing python3.14 https://review.opendev.org/c/opendev/lodgeit/+/982307 | ||
| -@gerrit:opendev.org- Clark Boylan proposed: [opendev/system-config] 945143: DNM testing trixie python 3.14 build of lodgeit https://review.opendev.org/c/opendev/system-config/+/945143 | 16:10 | |
| @clarkb:matrix.org | I put an autohold in place for ^ so that we can check the big python3.14 and werkzeug update manually in addition to the testing | 16:12 |
| @tafkamax:matrix.org | Is there a way to meaningfully search in openstack docs without bumping into old releases? | 16:45 |
| @tafkamax:matrix.org | I cannot use the search bar for the life of me and tend to just go look in the repo itself via grep or something. | 16:45 |
| @clarkb:matrix.org | I typically use google and site: but that has the same problem. This is a solvable issue, but it requires that openstack provide hints to the indexers to indicate what should be preferred, e.g. that latest should be canonical or something along those lines. Zuul does this for its documentation; openstack does not | 16:46 |
| @tafkamax:matrix.org | aha okay | 16:47 |
| @clarkb:matrix.org | then because indexers prefer content that is linked, they end up preferring older things that show up in blog posts, and in general I think the older docs are more likely to be cross-linked | 16:47 |
| @tafkamax:matrix.org | Yeah... I wish this would work in google. kolla_base_distro site:https://docs.openstack.org/kolla-ansible/latest/ | 16:48 |
| @tafkamax:matrix.org | seems it just wants the name...kolla_base_distro site:https://docs.openstack.org | 16:48 |
| @tafkamax:matrix.org | oh well, at least i found what i was looking for | 16:49 |
| @clarkb:matrix.org | yes seems like lastmod and canonical tags on content are important hints | 16:50 |
| @clarkb:matrix.org | but that needs to be done in the content itself ideally. Not something that opendev should try to bandaid over at the server level | 16:50 |
| @clarkb:matrix.org | following up again on the log storage situation. We're still running with ovh disabled right? And I'm guessing we haven't filed a ticket yet? | 17:05 |
| @clarkb:matrix.org | I wonder if we should go ahead and test things with base-test against ovh only and see if it is working again. Then if not figure out how to file a ticket (possibly after some manual upload testing with openstack client as a simple reproducer would be good if we can figure that out) | 17:05 |
| @fungicide:matrix.org | i do not know of an ovh ticket filed yet | 17:15 |
| @fungicide:matrix.org | but also base-test should exercise ovh swift for log uploads yes | 17:15 |
| @clarkb:matrix.org | I have rechecked https://review.opendev.org/c/zuul/zuul-jobs/+/680178 to test uploads to ovh | 17:16 |
| @clarkb:matrix.org | my job for system-config-run-paste is running in gra1 and stuck on apt cache update: https://zuul.opendev.org/t/openstack/stream/1d61f381c4e0476e9e26192da5b2136f?logfile=console.log | 17:17 |
| @clarkb:matrix.org | `[Fri Mar 27 12:38:18 2026] afs: Waiting for busy volume 536870983 () in cell openstack.org` is the last entry from `dmesg -T` but that was almost 5 hours ago | 17:18 |
| @clarkb:matrix.org | and that mirror seems to be browsable | 17:18 |
| @fungicide:matrix.org | could it be stuck pulling from a third-party package repository, maybe a ppa? | 17:20 |
| @clarkb:matrix.org | the test node for the fake paste seems largely idle. I wonder if we're hitting dpkg locks or something like that | 17:20 |
| @fungicide:matrix.org | launchpad has been struggling in the past week | 17:21 |
| @clarkb:matrix.org | the job is about to timeout. I have a hold in place for it if we want to look more closely. I will need to create another hold and recheck to try and get a running lodgeit on python3.14 | 17:21 |
| @mnasiadka:matrix.org | fungi: got a minute to have a look in https://review.opendev.org/c/opendev/system-config/+/980851 ? Need a second +2 ;-) | 17:23 |
| @fungicide:matrix.org | sure! | 17:25 |
| -@gerrit:opendev.org- Michal Nasiadka proposed: [opendev/zone-opendev.org] 982472: Add mirror03.bhs1.ovh.opendev.org https://review.opendev.org/c/opendev/zone-opendev.org/+/982472 | 17:26 | |
| @clarkb:matrix.org | mnasiadka: I ssh'd into the IP address listed there and it looks like we have the volume and lvm is all set up as well as the fstab. But the new volumes are not mounted yet. You'll want to `mount -a` and/or reboot to ensure that things mount properly before we start running the ansible config against the server | 17:29 |
| @fungicide:matrix.org | i recommend testing with a reboot, just so we're not surprised by reboot-only mount behaviors in the future | 17:30 |
| @mnasiadka:matrix.org | rebooted, thanks :) | 17:30 |
| -@gerrit:opendev.org- Michal Nasiadka proposed: [opendev/system-config] 982473: Add new mirror in bhs1.ovh - mirror03 https://review.opendev.org/c/opendev/system-config/+/982473 | 17:32 | |
| @mnasiadka:matrix.org | ok, it's back up and looks fine (mounted) | 17:33 |
| @clarkb:matrix.org | fungi: https://zuul.opendev.org/t/zuul/build/141d61b65bf04e56a0242d3db587a6bf/logs is an example that uploaded to ovh from 680178. I suspect we can reenable that region | 17:33 |
| @clarkb:matrix.org | mnasiadka: agreed mount shows the two mounts and df shows the expected sizes so I think everything mapped through the kernel properly | 17:34 |
| @tafkamax:matrix.org | You have to be careful with ovh. I am in a non-profit whose data got destroyed in the 2021 fire in their datacenter 😅 | 17:37 |
| -@gerrit:opendev.org- Clark Boylan proposed: [opendev/base-jobs] 982477: Revert "Disable job log uploads to ovh swift" https://review.opendev.org/c/opendev/base-jobs/+/982477 | 17:37 | |
| @clarkb:matrix.org | Taavi Ansper: in this case we're using them for short term job log storage with auto expiration after 30 days. And the contents are spread over 5 different cloud regions and two different cloud providers | 17:38 |
| @fungicide:matrix.org | Taavi Ansper: in this case it's ci/cd logs that we set to expire and auto-delete in 30 days anyway, so not critical data | 17:38 |
| @fungicide:matrix.org | for critical data, we do remote cross-cloud backups to other providers anyway | 17:39 |
| @tafkamax:matrix.org | It was more of a joke or so 😁 | 17:42 |
| @tafkamax:matrix.org | From my part | 17:42 |
| -@gerrit:opendev.org- Michal Nasiadka proposed: [opendev/system-config] 982473: Add new mirror in bhs1.ovh - mirror03 https://review.opendev.org/c/opendev/system-config/+/982473 | 17:54 | |
| @clarkb:matrix.org | looks like my test nodes hit gra1 again. And it looks slow again. Probably deserves some proper debugging. Let me start with apt-get update on the older held node | 18:02 |
| @clarkb:matrix.org | ok that fails because the ansible process that runs apt-get update appears to still be running | 18:04 |
| @clarkb:matrix.org | I don't think that process had lock problems as it appears to be the one holding the lock | 18:04 |
| @clarkb:matrix.org | I went ahead and killed the python process which released the lock and I'm running apt-get update manually. It is very slow like under 50kB/s | 18:06 |
| @clarkb:matrix.org | because this is using our production config we actually aren't talking to our local mirrors but instead us.archive.ubuntu.com. This server is in France so that may be part of the problem. But also I'd expect us.archive.ubuntu.com to be faster than 24kB/s anyway | 18:07 |
| @clarkb:matrix.org | Now it says `[Connecting to ubuntu-mirror-2.ps6.canonical.com (91.189.91.82)` and seems to be stuck there | 18:08 |
| @clarkb:matrix.org | ok it completed and the end result is `Fetched 20.4 MB in 2min 42s (126 kB/s)` so I think the issue is upstream of us. I guess one question is whether or not we should be using a different mirror name for the server in France. Fixing that might be annoying and if this problem goes away in a day not sure if it is worth it | 18:08 |
| @clarkb:matrix.org | most test jobs shouldn't have this problem as they will hit our local mirror. This is due to us running our prod config on the test nodes | 18:09 |
| @clarkb:matrix.org | so I'm not super concerned about this affecting general usage | 18:09 |
| @clarkb:matrix.org | doing a quick cross the atlantic test fetching https://mirror.dfw.rax.opendev.org/wheel/debian-11-x86_64/ansible/ansible-2.9.27-py3-none-any.whl from mirror.gra1.ovh does 10.2MB/s according to wget | 18:12 |
| @clarkb:matrix.org | so I don't think this is a general cross atlantic connectivity problem. It could be specific to ubuntu's servers or where their connectivity peers I guess. But still no evidence that general jobs will be impacted greatly | 18:12 |
| -@gerrit:opendev.org- Zuul merged on behalf of Michal Nasiadka: [opendev/system-config] 980851: repos: Use sources.list.d on Ubuntu Noble https://review.opendev.org/c/opendev/system-config/+/980851 | 18:22 | |
| @fungicide:matrix.org | that's already deploying | 18:22 |
| @fungicide:matrix.org | and infra-prod-base is running now | 18:24 |
| @fungicide:matrix.org | infra-prod-base failed | 18:34 |
| @clarkb:matrix.org | Possibly on dpkg locks if ubuntu mirrors are sad? | 18:35 |
| @fungicide:matrix.org | mirror02.iad.rax.opendev.org ze08.opendev.org zk02.opendev.org zuul-lb02.opendev.org | 18:35 |
| @fungicide:matrix.org | those are the 4 hosts with task failures in /var/log/ansible/base.yaml.log | 18:35 |
| @fungicide:matrix.org | i need to sprint out the door to my appointment though, so can dig any deeper at the moment | 18:36 |
| @fungicide:matrix.org | bbiab | 18:36 |
| @fungicide:matrix.org | er, can't dig any deeper | 18:36 |
| @clarkb:matrix.org | `Failed to lock apt for exclusive operation: Failed to lock directory /var/lib/apt/lists/` | 18:37 |
| @clarkb:matrix.org | that is from zuul-lb02 | 18:37 |
| @clarkb:matrix.org | I suspect that this is fallout from whatever is going on with ubuntu's mirrors | 18:37 |
| @clarkb:matrix.org | and it should resolve itself when ubuntu is happier. In the meantime I'll spot check a node like ze01 and see that its sources lists look good and I can update them | 18:38 |
| @clarkb:matrix.org | yes that seems to be working and I don't see the old sources file so I think this is generally happy | 18:40 |
| @clarkb:matrix.org | I do note that we are using http and not https. Probably should update to https? | 18:40 |
| @clarkb:matrix.org | Looks like jammy may also use http (and we didn't update jammy) so maybe this is intentional for load balancer reasons? we're relying on gpg anyway for checking package contents | 18:42 |
| @clarkb:matrix.org | so ya I think this is good. | 18:42 |
| @clarkb:matrix.org | zuul-lb02 says its lock is ancient. I'm not sure I believe that and I'll probably let fungi take a look before I intervene | 18:43 |
| @clarkb:matrix.org | zk02's is from this month so less concerning | 18:43 |
| @clarkb:matrix.org | fungi: mnasiadka: the tl;dr is I think things largely applied cleanly and are fine. On those hosts we seem to have lock issues that may need intervention, but the lock is old enough on one of them that I'll hold off on doing so at the moment and give fungi a chance to look at it too | 18:44 |
| @clarkb:matrix.org | https://217.182.140.117/show/bQOIcjv579pgclVa6Tvl/ python3.14 lodgeit seems to work in a basic test | 19:01 |
| @fungicide:matrix.org | sounds good wrt the apt sources change | 19:38 |
| @fungicide:matrix.org | we can clean up stale locks if they persist but maybe we just caught them at a bad time | 19:39 |
| @fungicide:matrix.org | and yes, older ubuntu needed the apt-transport-https package installed, but regardless http vs https is irrelevant because the package indices contain strong checksums and are themselves pgp-signed | 19:39 |
| @fungicide:matrix.org | about the only thing https gets you in that case is a mild amount of added privacy (but not much, since traffic pattern analysis probably still reveals what packages you downloaded even over https) | 19:40 |
| @clarkb:matrix.org | fungi: the processes are days old and one is from 2025 supposedly. I don't think they will self-correct, though that makes me wonder how this job was succeeding previously | 19:44 |
| @clarkb:matrix.org | If you get a chance, taking a look to double check would be good | 19:44 |
| @fungicide:matrix.org | sure thing | 19:44 |
| @fungicide:matrix.org | yeah, on mirror02.iad.rax it was a hung apt.systemd.daily trying to apt-get update from yesterday. i killed the child process | 19:47 |
| @fungicide:matrix.org | i was able to apt update manually after it cleaned up | 19:47 |
| @fungicide:matrix.org | same on ze08 except it was only a few hours old | 19:48 |
| -@gerrit:opendev.org- Thales Elero Cervi proposed wip: [openstack/project-config] 982495: Add repo app-prometheus to StarlingX https://review.opendev.org/c/openstack/project-config/+/982495 | 19:49 | |
| @fungicide:matrix.org | and on zk02 except it's been running for almost 3 weeks | 19:49 |
| @fungicide:matrix.org | and on zuul-lb02 from some time last year | 19:50 |
| @fungicide:matrix.org | those two that had been hung for longer are taking longer to clean up | 19:52 |
| @clarkb:matrix.org | Ya that last one is old enough I would've expected issues previously | 19:52 |
| @fungicide:matrix.org | zuul-lb02 is in the inventory and not in the emergency disable list, so i don't know why this wasn't spotted previously | 19:53 |
| @clarkb:matrix.org | Is it possible the clock is wrong? | 19:54 |
| @fungicide:matrix.org | doesn't look like it now, at least | 19:54 |
| @fungicide:matrix.org | the resumed update on zuul-lb02 is still going, looks like, judging from the process list | 20:01 |
| @fungicide:matrix.org | okay, it's finally done | 20:08 |
| @fungicide:matrix.org | all 4 affected hosts are now able to update their package lists cleanly again | 20:08 |
| @clarkb:matrix.org | thanks! | 20:17 |
| @fungicide:matrix.org | no problem | 20:17 |
| @fungicide:matrix.org | just wish i knew how they sat so long unnoticed, especially the one that had been stuck for at least several months | 20:18 |
| @clarkb:matrix.org | ++ | 20:18 |
| @clarkb:matrix.org | I think that https://review.opendev.org/c/opendev/system-config/+/982180 should be good to go now that I've proven that python3.14 seems to work end to end with lodgeit. https://review.opendev.org/c/zuul/zuul-jobs/+/982306 should also be safe to add tox and nox jobs to zuul-jobs for python3.14. This does assume pyenv for now which we can convert to resolute later (or just wait for resolute if we prefer I guess) | 20:19 |
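For context, a python3.14 tox job in zuul-jobs would presumably follow the pattern of the existing tox-py3x jobs, roughly like this (a sketch only; the exact job name, description, and vars here are assumptions, not the content of the change under review):

```yaml
- job:
    name: tox-py314
    parent: tox
    description: Run tox with the py314 environment.
    vars:
      tox_envlist: py314
      python_version: "3.14"
```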
| @clarkb:matrix.org | https://groups.google.com/g/repo-discuss/c/4ib1RKnWOIg the Gerrit H2 problem is getting more attention | 20:52 |
| @fungicide:matrix.org | oh good | 20:52 |
| @clarkb:matrix.org | I responded and tried to call out what our needs are and the problems we've been having (basically it's nice if we don't have to run another service, and we'd like to see better disk management and quicker shutdowns) | 20:54 |
| @fungicide:matrix.org | yeah, good summary | 20:56 |
| @fungicide:matrix.org | i agree, obviously | 20:56 |
Generated by irclog2html.py 4.1.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!