Friday, 2025-09-26

*** liuxie is now known as liushy05:58
dtantsurHey folks, how do I figure out why a post job never ran on https://review.opendev.org/c/openstack/ironic-python-agent/+/960732?09:31
dtantsurAlso https://zuul.opendev.org/t/openstack/builds?project=*ironic-python-agent&branch=stable%2F2025.2&skip=0 just hangs even though we've had patches merged there..09:34
tonybdtantsur: Looking09:38
tonybThe hanging is something corvus will need to look at  https://zuul.opendev.org/t/openstack/builds?project=*ironic-python-agent*&branch=stable%2F2025.2&skip=0 works (note the additional * in the project name) but adding more filters causes the same lack of results09:40
tonybdtantsur: for the post pipeline we need a SHA rather than change number, do you have that to hand?09:42
tonybOh I see it09:42
tonybthe job ran: https://zuul.opendev.org/t/openstack/build/24b5390629844f95aca887886b9314f609:43
tonybdtantsur: what makes you think it didn't?09:43
dtantsurtonyb: https://zuul.opendev.org/t/openstack/builds?job_name=ironic-python-agent-build-image-dib-centos909:46
dtantsurnot a single mention of 2025.2 despite two patches merging there09:46
dtantsurit kept running on master after the merge date of https://review.opendev.org/c/openstack/ironic-python-agent/+/96073209:47
dtantsurso I'm quite puzzled09:47
tonybOkay so 960732 did merge and triggered the post pipeline, but that specific job, and perhaps others, didn't09:50
dtantsurCould it be caused by the fact that the job is defined in another project?09:51
tonybdtantsur: No I don't think so.  Zuul knows that job applies on that branch09:52
tonybhttps://zuul.opendev.org/t/openstack/job/ironic-python-agent-build-image-dib-centos909:52
tonybthat's the right job isn't it?09:53
dtantsurCould it be that stable/2025.2 on IPA-builder (where the job is defined) was created after the IPA change merged?09:53
dtantsuryes, that's the one09:53
tonybThat's possible09:54
tonybTrying to think how to verify that09:54
dtantsurWe could merge a dummy patch to 2025.2 :)09:55
tonybWell yes you could :)  I was trying to think how to verify the hypothesis at hand.09:56
tonybI *think* I can re-enqueue that job, but again that doesn't verify why it didn't run 09:57
dtantsurtonyb: could you re-enqueue it nonetheless please? Ironic 2025.2 is now broken because it expects the artifacts to be generated already.09:57
tonybsure09:58
tonybdtantsur: I haven't done this before gimme 510:03
tonybif I can't figure it out you may want to merge a dummy change10:04
dtantsurof course10:04
tonybhttps://zuul.opendev.org/t/openstack/status?change=960732%2C1&pipeline=post10:07
tonybRunning now10:08
tonybOh yeah the branch creation for IPA-Builder was a week after IPA, that's almost certainly the root cause.10:12
dtantsurI see, so unfortunate timing..10:21
dtantsursucceeded, nice! thank you again tonyb10:22
tonybI think so.  We can ask elodilles if there is any easy way to ensure that IPA-Builder is tagged before IPA (without looking at the associated release models)10:22
tonybdtantsur: glad I could help10:22
elodillestonyb dtantsur : i'm not aware of any easy way to force tagging order :/ both projects have type 'other' and the release patches are either created by the team at some stage around/after Milestone-3, or by the release team with 'missing-releases', so there isn't any automation for this11:44
tonybelodilles: That's more or less what I suspected.12:03
fungiwould adding an inline comment to the deliverable file in the release repo work as a reminder maybe?13:25
fungithough maybe that's only helpful if people are manually editing to propose releases, not if scripts are doing bulk release additions13:25
*** sfernand_ is now known as sfernand13:28
tonybfungi: Can't hurt.13:29
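
A rough sketch of what fungi's suggestion could look like in practice: an inline comment next to the ironic-python-agent-builder entry in its openstack/releases deliverable file. The path, field values, and version below are placeholders rather than the real 2025.2 data; only the "type: other" detail comes from the discussion above.

    # deliverables/<series>/ironic-python-agent-builder.yaml (illustrative
    # path and values, not the real file contents)
    team: ironic
    type: other
    repository-settings:
      openstack/ironic-python-agent-builder: {}
    # REMINDER: create the stable branch / tag for ironic-python-agent-builder
    # BEFORE ironic-python-agent, so the image-build post jobs defined there
    # already exist on the new branch when IPA changes start merging.
    releases:
      - version: 0.0.0
        projects:
          - repo: openstack/ironic-python-agent-builder
            hash: 0000000000000000000000000000000000000000
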
clarkbelodilles: you could use Depends-On then approve them one at a time. (I think if approved at the same time then the release tagging process can go in either order)14:46
fungiyeah, the challenge is remembering that doing so is necessary when bulk release candidate tagging is being done14:48
elodillesyepp14:48
fungii'm heading out to an early lunch, won't be long, back soon14:48
elodillesenjoy your food o/14:50
clarkbfungi: when you get back https://review.opendev.org/c/opendev/zuul-providers/+/962233 https://review.opendev.org/c/opendev/zuul-providers/+/962235 and https://review.opendev.org/c/opendev/system-config/+/962237 are the followups to the raxflex double nic saga15:02
clarkbI think if we want to land those today I should be able to check the instances that get booted afterwards15:02
opendevreviewClark Boylan proposed opendev/system-config master: Remove ze11 from our inventory and management  https://review.opendev.org/c/opendev/system-config/+/96237115:21
opendevreviewMichal Nasiadka proposed openstack/diskimage-builder master: almalinux-container: Add support for building 10  https://review.opendev.org/c/openstack/diskimage-builder/+/96033615:49
opendevreviewMerged opendev/zuul-providers master: Fix test sjc3 region flavors  https://review.opendev.org/c/opendev/zuul-providers/+/96223316:05
fungiall three of those lgtm16:05
clarkbthanks. the last one won't actually apply until the launchers get restarted which I may just let the weekly updates do in the next 24 hours16:06
clarkbbut I can monitor instance boots after the second lands to ensure they still have one nic16:06
opendevreviewMerged opendev/zuul-providers master: Revert "Remove raxflex networks config"  https://review.opendev.org/c/opendev/zuul-providers/+/96223516:06
clarkbok instances booted after ~now16:06
clarkbevery single node in sjc3 right now has a hostname that ends with 416:09
clarkbwonder what the odds are of that happening16:09
clarkbI have not seen any double nics yet in sjc3. I'll check the other regions now16:10
clarkbfungi: ok I think DFW3 has double nics again. I'm so confused16:11
clarkbfungi: I workflow -1'd https://review.opendev.org/c/opendev/system-config/+/962237 while I try to make sense of this16:11
fungiokay16:13
fungiwere the launchers restarted onto the fix yet?16:13
clarkbfungi: yes I did that wednesday16:14
fungithat's what i thought. so strange16:14
clarkbok some of sjc3's nodes are double nic now and some are not. Maybe one of the launchers didn't update and the other did (though I pulled new images on both and restarted both on wednesday)16:14
clarkbthey both claim to be running the same image16:15
fungieven stranger16:15
clarkband were restarted ~42 and ~41 hours ago16:15
clarkbI think the best thing here may be to revert the revert (ugh) and see if corvus has any further insight when he is able to look closer. I haven't been following the configuration inheritance changes super closely so not sure how all the bits may interact here16:17
fungithe components page shows them both running on the same version16:18
opendevreviewClark Boylan proposed opendev/zuul-providers master: Reapply "Remove raxflex networks config"  https://review.opendev.org/c/opendev/zuul-providers/+/96237916:18
fungiand on a newer version than everything else, as expected16:18
clarkbthe first change to fix flavors shouldn't be related to this and can stay16:18
fungiyep16:18
opendevreviewMerged opendev/zuul-providers master: Reapply "Remove raxflex networks config"  https://review.opendev.org/c/opendev/zuul-providers/+/96237916:19
clarkbI'm wondering if there is cached config in zookeeper maybe16:19
fungioh, and it's causing the fix not to take effect because the set of networks is a subset of what was cached?16:20
fungithough if that were the case, why did we stop getting the extra network when we dropped the configuration?16:20
fungisince the empty set is still a subset of the earlier set16:21
clarkbgood point. We did restart when we put the config back in clouds.yaml but I think we went straight to the "multiple networks found" error when removing it last time without config in clouds.yaml16:21
clarkbso ya I'm not sure. Maybe the fix was incomplete. It did get some new test case updates but it's possible that didn't fully capture what is happening?16:22
fungiyeah, something's not adding up... but also it's friday afternoon so lots of things are no longer adding up for me at this point in the week16:22
clarkbI'm starting to see single nic instances again so ya updating the config in this direction did work16:23
clarkbok I'm reasonably confident we're back to "normal" now and we can pick this up again when we're able to more closely debug why it isn't working16:35
opendevreviewMerged opendev/system-config master: Remove ze11 from our inventory and management  https://review.opendev.org/c/opendev/system-config/+/96237116:41
clarkboh cool once that is done deploying I guess I'll look into fully deleting ze11 (and I think it has a volume attached too that can be deleted)16:42
clarkbthen I can push up a dns cleanup change16:42
stephenfinWhat component is responsible for determining "are there sufficient resources in the underlying cloud to run this job"? Is it zuul or nodepool? My google-fu is failing me and I vaguely recall reading something recently about nodepool being merged into zuul (but I may have imagined that)16:43
clarkbstephenfin: in opendev it is 'zuul-launcher', but most (all?) other zuul deployments will still be using nodepool16:43
clarkbstephenfin: opendev is acting as the beta tester for zuul launcher16:43
stephenfinOkay, so likely still nodepool in the RDO CI. zuul-launcher gives me something new to google though16:44
fungistephenfin: i'll save google the work: https://zuul-ci.org/docs/zuul/latest/developer/specs/nodepool-in-zuul.html16:45
stephenfinand one other question: are there any jobs where resources are allocated from the underlying "CI cloud" (RAX or whatever) as part of the job itself, rather than by nodepool/zuul-launcher?16:45
stephenfinfungi++ thanks :)16:45
clarkbstephenfin: not in opendev, but if you had credentials in your jobs to do that I don't think anything zuul does would prevent it16:45
fungiscs has a zuul that does what you're describing, i think16:46
fungi(sovereign cloud stack)16:46
stephenfinDrat. I was hoping there would be, and I was going to ask how you were managing resources when Zuul wasn't the sole owner of them16:47
clarkbstephenfin: within opendev what typically happens is we deploy a nested cloud or kubernetes and test against that16:47
clarkba lot of jobs run devstack not to test openstack changes but to test integration with openstack APIs. Similar with k8s16:47
fungioh, zuul sharing resource pools/quota is orthogonal to jobs spinning up their own cloud resources, though i can see how they could be related16:48
stephenfinYeah, orthogonal but related. Something needs to be done to e.g. stop Zuul filling the cloud up and starving jobs of the resources they need16:51
clarkbnodepool will respect quotas16:51
stephenfinI figure projects + quota, host aggregates, or separate (potentially nested) clouds, but was looking for prior art16:51
stephenfinclarkb: The context is that we're running OpenShift jobs via Prow against a cloud that is also being used by RDO Zuul. We have quotas, but currently the sum of quotas for all projects exceeds what's actually available on the cloud16:52
clarkbah yes openshift... I have strong opinions and well the openshift 3 -> 4 transition made testing openshift extremely painful16:53
clarkbit was once possible to deploy openshift within your zuul job in a straightforward manner as you could run openshift 3 atop preallocated resources16:54
opendevreviewMichal Nasiadka proposed openstack/diskimage-builder master: almalinux-container: Add support for building 10  https://review.opendev.org/c/openstack/diskimage-builder/+/96033616:54
stephenfinPrivate clouds cost money and lowering quotas means the cloud could go underutilised if some of the projects are quiet. So I'm guessing something needs to mediate. But again, I was hoping there might be prior art16:54
clarkbbut then 4 switched to a more entire-rack management model and now it's very difficult to deploy unless you hand it cloud credentials or ipmi details16:54
clarkbstephenfin: one option may be to have nodes in your zuul nodesets that are only there to grab the quota/resources then have your openshift deployment take over from zuul for them16:54
stephenfinI will hold my tongue on testability of OpenShift in general, but needless to say, I miss devstack16:54
clarkbI don't know if that would work as I've never tried it. But you may be able to do something along those lines and that would solve the underutilization problem as zuul would mediate all the quota usage16:55
stephenfinHmm, that's not a bad idea16:55
clarkbyou might need to put the nodes in a group that zuul ignores or something like that so that the zuul setup and teardown steps don't trip over the external management16:56
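
For illustration, an untested sketch of the placeholder-node idea clarkb is floating here; the nodeset, label, and group names are invented and this is not something OpenDev actually runs:

    - nodeset:
        name: openshift-quota-placeholders
        nodes:
          # Nodes requested purely so Zuul accounts for the quota; the job's
          # own tooling (e.g. the OpenShift installer) would take them over.
          - name: placeholder-0
            label: ubuntu-noble
          - name: placeholder-1
            label: ubuntu-noble
        groups:
          # Grouping them lets the job's pre/post playbooks skip these hosts
          # so Zuul's setup/teardown doesn't fight the external management.
          - name: externally-managed
            nodes:
              - placeholder-0
              - placeholder-1
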
funginodepool (and zuul-launcher) can be told a maximum node count which you could manually set on the provider below the actual quota, if you don't want zuul using more than that amount of the capacity. as clarkb noted nodepool and zuul-launcher are capable of monitoring quota usage reported by the cloud even when it's not the only thing using it16:57
fungiso it shouldn't try to use more than what's remaining available even if it's sharing that resource pool16:58
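
As a point of reference, a minimal sketch of the kind of nodepool pool cap fungi describes; the provider, label, and flavor names plus the max-servers number are assumptions, not the RDO CI configuration:

    providers:
      - name: example-cloud
        cloud: example
        diskimages:
          - name: ubuntu-noble
        pools:
          - name: main
            # Keep nodepool's footprint below the tenant quota so other
            # consumers of the same cloud retain headroom; nodepool also
            # watches the quota the cloud reports and won't exceed it.
            max-servers: 40
            labels:
              - name: ubuntu-noble
                diskimage: ubuntu-noble
                flavor-name: v3-standard-4
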
clarkbfwiw complex systems should consider "how do we test this" as a major part of design input. That includes openstack fwiw16:59
clarkbI think trove mightily struggled with the whole 'run a nested virt database just to test the management pieces' pattern, none of which was actually necessary to test the management piece17:01
clarkband that pattern has been a common issue within openstack. Today it is less of an issue as nested virt is more reliable and available.17:01
fungia delicate balance between "test the software the way users will be running it" and making use of the available test resources17:05
stephenfinYeah, I wouldn't put any blame at the feet of zuul here. boskos (the resource manager used by k8s prow) doesn't have any insight into OpenStack resource utilisation so the best it can do is manage a static number of slices (one consumed per job)17:06
stephenfinEven if it did, there's no mechanism in OpenStack (at least without something like blazar) to reserve resources without consuming them. And we can't consume them early in the job since the job itself is testing the creation of those resources17:07
stephenfinSo yeah, I suspect I'll need some level of communication between prow and zuul so one of them can mediate17:08
stephenfinOr projects + quota, host aggregates, or separate (potentially nested) clouds17:08
fungimaybe prow can be improved upstream in order to become as smart as zuul/nodepool17:10
fungibecoming quota-aware is a means of communication, quota usage for various resources is essentially a communication channel for all systems using the available quota17:11
clarkbthe deploy buildset for 962371 failed, but not in a way that prevents me from removing the server I don't think17:13
clarkbhrm actually it didn't run the infra-prod-service-zuul job which is the one I wanted it to run to pick up firewall updates17:14
clarkbas soon as we delete the node its IP can be recycled so we want to ensure our firewall rules are updated before deleting the node17:14
clarkbthe hourlies do run infra-prod-service-zuul so this may be fine anyway17:15
clarkbhttps://zuul.opendev.org/t/openstack/build/6da769919f4a47c9b39eab8bec36462d I think this may have updated firewall rules17:15
clarkbya I think we are good. Zookeeper doesn't show firewall rules for ze11 anymore17:17
clarkbspecifically zk0117:17
clarkbfungi: ^ let me know if you have any concerns with clearing out that server at this point17:19
clarkbstephenfin: fungi: theoretically nothing prevents you from just running the entire CI system from zuul too17:20
clarkbI assume the biggest issue there is simply users not wanting to go into unfamiliar waters17:20
stephenfinclarkb: Except the fact that all of OpenShift uses Prow, and by diverging we lose the ability to integrate and report into their "pipelines"17:21
fungii think he means using zuul to run prow17:22
opendevreviewClark Boylan proposed opendev/zone-opendev.org master: Remove the ze11 DNS records  https://review.opendev.org/c/opendev/zone-opendev.org/+/96238517:22
fungiso prow could still report things17:22
stephenfinSame issue though, if it's a different prow17:22
fungiclarkb: clearing out as in deleting it from the cloud provider? seems fine, it's out of the components list in zuul which should be all that matters now17:22
clarkbfungi: yup deleting the server and its cinder volume from the cloud provider. Once I'm done doing that I can land 96238517:23
clarkbI'll start looking at cleaning it out shortly17:24
clarkbstephenfin: ya I guess these are administrative issues. Openshift could just use zuul is what i was trying to get at17:24
stephenfinIf only ❤️17:24
stephenfinThe NIH is real17:25
fungiwell, zuul could run *the same* prow for everything17:25
fungiand then it's not *a different* prow17:25
clarkbthere is no ze11 cinder volume. It uses the ephemeral disk17:26
clarkbso I would just be deleting the one server instance named ze11.opendev.org17:26
clarkbif sticking with Prow making it quota aware seems like the most correct thing17:28
clarkbthe two systems will compete at times similar to what it was like for us to run zuul-launcher and nodepool at the same time. but it is workable as we proved17:28
fungiyeah, from a simple mathematical perspective i don't know of any "cooperative fair use" algorithm they could employ to avoid that17:30
clarkbfungi: so this deployment issue has affected us since september 1217:31
clarkbthe borg-backup job is failing consistently. I think we need to dig into this17:31
clarkbas it effectively means our daily catch up/enforcement runs are not running for half the things17:31
fungifrom a xkcd.com/927 perspective, what is needed is a new scheduler that coordinates the work of both zuul and prow, if one isn't going to coordinate the other17:32
clarkbThe task includes an option with an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'borg_user'. 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'borg_user'17:32
clarkbThe error appears to be in '/home/zuul/src/opendev.org/opendev/system-config/playbooks/roles/borg-backup-server/tasks/main.yaml': line 57, column 3, but may be elsewhere in the file depending on the exact syntax problem.17:33
fungihuh, that's a new one on me, looking...17:33
fungidid we leave borg_user out of an inventory group somewhere?17:33
fungior host17:33
clarkbI wonder if we have a server in the emergency file list that prevents it from populating that borg_user variable17:33
clarkbfungi: we auto generate it from the hostname by default17:34
clarkbI wish a million times that ansible would log the list iteration items17:34
fungiclarkb: could it be the ze11 entry? otherwise there's nothing new in there17:34
clarkbI don't think ze11 does backups17:34
fungieverything else in the emergency list has been there for months, so i don't guess it's there17:35
clarkback17:36
clarkblooks like each backup host has processed users borg-etherpad02 borg-gitea09 borg-review03 borg-zuul01 and borg-zuul02 then whatever the next list item is breaks17:36
clarkbfungi: I think it is kdc0317:37
fungihuh...17:37
clarkbthat task loops over the borg-backup group of servers and if you look at https://opendev.org/opendev/system-config/src/branch/master/inventory/service/groups.yaml#L24-L45 the next one listed is kdc03 and it seems to go in order17:37
fungiit's been up 14 days, which matches the september 12 timeframe17:38
fungimaybe upgrades broke auth?17:38
clarkbfungi: https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/borg-backup/tasks/main.yaml#L1-L417:38
clarkbthis is where we should autogenerate the name17:39
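
A hedged sketch of the kind of hostname-derived default being described (the variable names are assumptions and the task is not copied from system-config; it only mirrors the borg-<shortname> pattern visible in the user list above):

    - name: Default the borg username from the short hostname
      ansible.builtin.set_fact:
        borg_username: "borg-{{ inventory_hostname.split('.')[0] }}"
      when: borg_username is not defined
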
clarkbfungi: well I think we're able to run the base playbook against kdc03. It's more like that specific fact isn't being set for some reason17:39
fungii did a `sudo rm /var/cache/ansible/facts/kdc0{3,4}.openstack.org` after upgrading those servers17:40
clarkbya which we thought we needed to clear out bad facts17:40
clarkbbut maybe that is the genesis if we're not regenerating it properly17:40
fungii also did the same to the afs server fact caches though17:41
clarkbfungi: https://opendev.org/opendev/system-config/src/branch/master/playbooks/service-borg-backup.yaml#L11-L24 this is where we should generate that info for each server then use it in that order17:41
clarkbfungi: I don't think we backup the afs servers17:41
clarkbfungi: I see it17:44
fricklerstephenfin: what we (osism, not scs) do for testing our deployment tooling is having static nodes each with their own cloud credentials. we still need core to set an extra flag to run these in order to avoid leaking credentials, but this way we can still run in check and are not restricted to post only like with zuul secrets17:44
clarkbfungi: in service-borg-backup we're failing to install borg backup before we get to the part where we set the borg_user tuple value from borg_username and the ssh key value17:45
clarkbfungi: I don't know why execution doesn't stop there and why it continues to the backup servers and then breaks harder. But ModuleNotFoundError: No module named 'pip' is the error17:45
stephenfinfrickler: Does that mean you have wholly separate clouds?17:45
stephenfini.e. nested clouds17:45
stephenfinor just different tenants on the same shared cloud?17:46
fricklerstephenfin: different tenants on the same cloud. the overall capacity is large enough, since it is not only used for our CI, so global resource count is not an issue17:46
fungithough worth noting that opendev has two tenants/projects in every cloud provider, in our case so that we can isolate zuul test node usage from other more permanent resources17:47
clarkbfungi: I think the issue is that it's trying to use the venv from the old deployment (it's python3.10, not python3.12, from what I can tell)17:48
clarkbfungi: I thought when you were trying to fix up kdc03 that was moved aside, but maybe I misremembered?17:48
fungioh that would do it, we should blow away the venv17:48
clarkbfungi: ya I think we need to clear out or move that venv then we can reenqueue the deploy job for my eearlier change to get ahead of the daily runs later today and check things are happy17:49
fungii can delete it, we don't keep any state in that tree right?17:49
clarkbfungi: I don't think there is any state in the borg venv (borg will keep that in a .dir in /root somewhere iirc)17:49
clarkbI think /root/.cache/borg is the state carrying dir17:49
clarkbwhich we may also need to clear out, I'm not sure17:49
clarkbbut we can take this one step at a time17:50
fungion august 20 i seem to have done a `mv /opt/borg{,.old}` which probably fixed it for jammy but didn't repeat it for noble17:50
fungimy bad17:50
clarkbaha that explains why I remembered this and why it's python3.10 and not python3.817:50
clarkbI think that root causes it17:50
fungidone17:51
clarkbdo we want to reenqueue deploy jobs for https://review.opendev.org/c/opendev/system-config/+/962371 now?17:51
fungisure, i can do that now17:51
clarkbI don't think anything has merged since that ran (in project-config or system-config but we want to be careful about not going backwards)17:51
clarkbfungi: thanks17:51
clarkbonce we're happy with that I'll look at ze11 again and probably just go ahead and delete it (can't think of any reason not to at this point)17:52
fungidone17:52
clarkbfungi: probably want to check on kdc03 backups once this is all settled too (just to be sure they start running successfully with the new venv)17:53
fungiyep, will do17:53
clarkbinfra-prod-service-borg-backup is running now18:04
clarkbit succeeded!18:08
clarkbhopefully this gets us back to happy daily runs and we just need to check kdc03 backups are happy after this18:08
opendevreviewMichal Nasiadka proposed openstack/diskimage-builder master: almalinux-container: Add support for building 10  https://review.opendev.org/c/openstack/diskimage-builder/+/96033618:26
clarkbdown to the last job in that deployment buildset. If that comes back green I'll proceed with ze11 deletion at that point18:30
clarkbhttps://zuul.opendev.org/t/openstack/buildset/92d1e4bf21014455b9d1612cb9829262 success18:39
clarkbnow to delete ze1118:39
clarkb#status log Deleted ze11.opendev.org (8b5220c9-e3cf-4f28-a2f9-5eded6f963be) as it had network issues cloning git repos. It can be replaced later if we need the extra executor.18:42
clarkbfungi: do you want to review https://review.opendev.org/c/opendev/zone-opendev.org/+/962385 before I approve it?18:42
clarkbI've removed ze11 from the emergency file on bridge too18:43
fungisorry, that one snuck past me, approved18:44
clarkbits ok I didn't want to approve it until I had deleted the server anyway so the timing works out18:45
clarkbthanks18:45
opendevreviewMerged opendev/zone-opendev.org master: Remove the ze11 DNS records  https://review.opendev.org/c/opendev/zone-opendev.org/+/96238518:50
clarkbgoing to find lunch now then I have some changes to review when I get back18:53
clarkbfungi: speaking of backups https://review.opendev.org/c/opendev/system-config/+/961752 is where I ended up after pruning the vexxhost backup server less than a month after the prior prune20:38
clarkbwe'll want to follow that up with some purges too but starting with the retirements made sense to me20:38
clarkbfungi: and then separately do we think we can safely land https://review.opendev.org/c/opendev/system-config/+/958666 now that your openafs cleanups have been completed?20:38
fungiyeah, thanks for the reminder, approved it now20:45
clarkbI'm just trying to clear things out of my backlog. Granted big items like gerrit upgrades and summit presentations are not getting done but the collection of small items I've accumulated is slowly decreasing in size20:49
fungiyep20:50
opendevreviewMerged opendev/system-config master: Revert "reprepro: temporarily ignore undefinedtarget"  https://review.opendev.org/c/opendev/system-config/+/95866621:17
opendevreviewMerged opendev/system-config master: Retire eavesdrop01 and refstack01 backups on the smaller backup server  https://review.opendev.org/c/opendev/system-config/+/96175221:17
clarkbthe backups appear to be retired, disk usage hasn't dropped yet as that occurs during the next pruning21:38
clarkbI think we can either wait for normal pruning to be required and prune then, or if we wish to start purging some of these retired backups we can do a prune pass sooner than necessary then start purging21:40
clarkbhttps://mailarchive.ietf.org/arch/msg/ietf/q6A_anL1u-Y9iXe-vboiOYamsl0/ this may be useful to refer to in discussion around mailing lists21:54
clarkbI also found https://mailarchive.ietf.org/arch/msg/ietf/3tNNSIOc0BOO64Sm7xQLYmTwoco/ interesting in that same thread22:00
fungiyeah, on game development and modding forums i constantly see people talking about which games are playable on their phone, or what they're looking forward to playing when they get a computer22:05
fungii think there are more and more people not using computers at all as phones have grown increasingly capable of performing many of the same tasks22:05
fungiand so the remaining tasks that phones can't do well become a hindrance22:06
fungialso the idea of being "offline" (and so any benefits of software that can deal with such an apocalyptic catastrophe) just makes no sense to them22:07
clarkbone of the things I need to do (which I should start shortly) is upgrade my router22:08
clarkbspeaking of being offline22:08
clarkbI should just get that done with now22:09
fungihere's hoping you survive the darkness22:10
clarkbthat was easy. I appreciate that opnsense's smaller, more frequent updates seem to result in quicker upgrades22:22
clarkbfungi: the 22:00 UTC update for the debian mirror is running now and I think it is the first check of the ignore missing removal22:23
clarkbI'm tailing the log now but I expect it to succeed22:24
clarkbreprepro just succeeded and it is working on the vos release22:26
clarkb2025-09-26 22:26:22  | Done.22:26
fungiperfect22:26
clarkbI think this is working happily now22:26
clarkbhttps://www.mail-archive.com/nginx@nginx.org/msg25485.html related to the why mailing lists exist discussion22:32
clarkbtldr is nginx is going to stop using theirs September 3022:32
fungithe python community semi-regularly gets long-timers complaining about how terrible discourse is compared to mailing lists/usenet22:34
fungimy personal experience is getting reprimanded by forum moderators for replying to a post that they had deleted. i was told that since i choose to follow the forum in "mailing list mode" i'm obligated to check the web interface for the forum before replying to make sure i don't violate the forum guidelines again22:36
fungii mostly stopped engaging after that and just treat it as a read-only information source22:38
clarkbseems like that is easily solvable by having discourse emit a "this message thread is closed" email22:39
clarkband then reject new posts with similar information22:39
fungiyeah, or reject replies where the in-reply-to header refers to a deleted post22:39
