| opendevreview | OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/957995 | 02:20 |
|---|---|---|
| *** | liuxie is now known as liushy | 02:43 |
| *** | mrunge_ is now known as mrunge | 05:55 |
| opendevreview | Merged opendev/glean master: Switch to new devstack based testing https://review.opendev.org/c/opendev/glean/+/953163 | 06:12 |
| bbezak | Hi, the mailing lists website has been very slow for a couple of days - https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/ | 07:14 |
| opendevreview | Merged openstack/project-config master: Switch devstack propose-updates job to noble https://review.opendev.org/c/openstack/project-config/+/959382 | 09:20 |
| dtantsur | "externally-managed-environment\n\n\u00d7 This environment is externally managed To install Python packages system-wide, try apt install" <-- is it us who did that to ourselves? | 10:33 |
| dtantsur | or debian playing a gatekeeper? | 10:33 |
| dtantsur | (this is from a zuul job I nearly regret trying to write) | 10:33 |
| mnasiadka | standard in recent python versions in Debian and Ubuntu IIRC - https://packaging.python.org/en/latest/specifications/externally-managed-environments/ | 10:34 |
| dtantsur | I miss the times when Linux did not try to be Windows.. | 10:34 |
| dtantsur | thanks | 10:34 |
| *** | dhill is now known as Guest25908 | 11:49 |
| fungi | dtantsur: that's why we do ~everything in venvs these days, `sudo pip install ...` and `pip install --user ...` haven't worked on lots of platforms for years now | 12:59 |
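A minimal sketch of the venv approach fungi is describing, for anyone hitting the same externally-managed-environment error; the paths and the package installed here are placeholders, not taken from any actual opendev job:

```python
# Hypothetical illustration: build a throwaway venv and install into it
# instead of the externally managed system Python. The path and the
# package name are placeholders, not from any real opendev job.
import subprocess
import venv

venv.create("/tmp/job-venv", with_pip=True)   # per-job virtualenv
subprocess.run(
    ["/tmp/job-venv/bin/pip", "install", "requests"],  # example package only
    check=True,
)
```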
| fungi | bbezak: known problem, i'm approving https://review.opendev.org/959576 now which should help | 13:01 |
| fungi | in other news, the mirror.ubuntu-ports rw volume move to afs02.dfw took just over 14 hours to complete. `vos listvldb` indicates that docs-old is the only volume with an rw site on afs01.dfw now, and we're ignoring that one, so i'm proceeding with upgrading the server momentarily | 13:05 |
| fungi | focal packages are up to date on it so doing the pre-upgrade reboot and then i'll start `do-release-upgrade` in a root screen session | 13:08 |
| fungi | had to remove /var/lib/docker/aufs again to get it to proceed | 13:14 |
| fungi | thankfully this should be the last server where i'll need to do that | 13:15 |
| fungi | security.ubuntu.com returned a few http/500 errors, hopefully this isn't an indication as to how the day is shaping up | 13:18 |
| *** | ralonsoh__ is now known as ralonsoh | 13:36 |
| bbezak | Thx fungi | 13:40 |
| fungi | bbezak: once those additional filters get deployed, i'll restart mailman/apache services to free up some ram on the server so it's not swapping so hard, but this is mainly overload from the horde of llm training crawlers that have been running rampant on the internet for months now | 13:43 |
| fungi | easily 99%+ of the web traffic to most of our systems these days is ai crawlers, not human-initiated client requests | 13:44 |
| corvus | if we could just get rid of that last 1% ;) | 13:45 |
| fungi | much longer and the web is going to become as useless as usenet was during the deluge | 13:45 |
| fungi | looks like i'm not the only one having trouble upgrading a system to jammy today: https://askubuntu.com/questions/1555531 https://discourse.ubuntu.com/t/66688 https://forums.linuxmint.com/viewtopic.php?t=452578 | 14:12 |
| Clark[m] | It must be failing early enough in the process to not be much of an issue other than needing to wait to restart later? | 14:22 |
| Clark[m] | Or is it intermittent and not failing upfront on list updates? | 14:23 |
| fungi | Clark[m]: i suspect it's also breaking unattended upgrades on our jammy servers today, though i haven't gone looking for evidence | 14:27 |
| fungi | basically there's a new firmware package listed in the index, but attempting to download that deb is failing with a 500 internal server error from security.u.c | 14:28 |
| fungi | since the upgrade from focal to jammy can't download at least one of the packages it expects to install, it refuses to proceed | 14:29 |
| fungi | since this seems to be a problem just in the past few hours or so, i suspect it'll get fixed or correct itself rsn | 14:35 |
| fungi | i'll keep retrying occasionally | 14:35 |
| clarkb | oh so it's a specific package not a general problem with the servers | 14:45 |
| fungi | yes | 14:45 |
| fungi | looks like it's a package they just added but maybe haven't actually made downloadable yet or something, maybe some backend file serving error | 14:45 |
| opendevreview | Clark Boylan proposed zuul/zuul-jobs master: Use network=host on docker buildx build https://review.opendev.org/c/zuul/zuul-jobs/+/959863 | 14:53 |
| *** | dhill is now known as Guest25917 | 15:07 |
| clarkb | fungi: I just checked the most recent borg backup on lists and the diskcache path isn't listed at all. I guess that if there are no contents in the dir then borg doesn't bother backing up the empty dir anyway? | 15:14 |
| clarkb | fungi: either way I think this is working in a way that works for us | 15:14 |
| clarkb | fungi: then for the UA update it runs the gitea job which isn't fast. If we want we could enqueue it straight to the gate | 15:15 |
| fungi | sounds great | 15:15 |
| fungi | i just went for the lowest-effort means of reenqueuing since i'm juggling a bunch of stuff at once | 15:16 |
| clarkb | ya no worries. Do you want me to enqueue it? | 15:16 |
| fungi | feel free | 15:16 |
| clarkb | done | 15:17 |
| fungi | linux-firmware all 20220329.git681281e4-0ubuntu3.39 is now appearing in the index for jammy-updates/main on http://us.archive.ubuntu.com/ubuntu but it's downloading super, super, super slowly | 15:18 |
| fungi | 5,332 B/s | 15:19 |
| fungi | estimates 17 more minutes to complete | 15:19 |
| fungi | (316 MB file) | 15:19 |
| clarkb | need a typing competition to see if we can type faster | 15:20 |
| fungi | it's waaaay faster than i can type, but it's slower than a 57.6k baud modem | 15:22 |
| clarkb | I guess ubuntu puts all of the firmware blobs into one package? tumbleweed splits them up by manufacturer which you'd think is smart so you can install only the packages that affect your hardware, but then it seems to install them all by default so the end result is likely the same | 15:24 |
| fungi | debian splits them up as well, there's a big(ish) one with all the open source firmware that's included in the mainline linux kernel, another for the non-freely-licensed firmware blobs from mainline linux, and then the other firmwares are in separate packages for each upstream project that supplies them | 15:27 |
| fungi | though debian also (as of trixie) has a separate package repository specifically for non-free firmware separate from all the other non-free software packages | 15:28 |
| fungi | so that you can easily enable installing/upgrading non-free firmware while not accidentally installing any non-free software packages | 15:29 |
| clarkb | looking on the mailman list the most recent email about people struggling with crawlers is from january and their solution was to update robots.txt to disallow /mailman3/ and set crawl delay to 10 | 15:31 |
| clarkb | essentially removing their archives from indexing. Not a solution that we would like to use I don't think | 15:32 |
| clarkb | that said I notice that a lot of traffic that I think is questionable is coming from SEO company crawlers. | 15:32 |
| fungi | also i don't think that will help, since a lot of the problem crawlers are entirely ignoring honor-system mechanisms like robots.txt disallows | 15:32 |
| clarkb | (this is based on the total counts of requests as well as spot checks of the types of requests they are making). It is possible we want to disable those specific crawlers as I'm not sure what value we get from the SEO companies | 15:32 |
| clarkb | fungi: yes, though all of the SEO bots I checked do claim to respect robots.txt | 15:33 |
| clarkb | ahrefs and semrush in particular. But there was a small set of others in the longer tail | 15:33 |
| fungi | oh, and i misread the download estimate from apt, it's saying 16 hours not minutes (though i don't know if that's because it's extrapolating the current rate to all the packages it still needs to download) | 15:34 |
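A quick back-of-the-envelope check of that corrected estimate, using the figures quoted above (roughly 316 MB at about 5,332 B/s):

```python
# back-of-the-envelope check of the download estimate quoted above
size_bytes = 316_000_000      # ~316 MB linux-firmware deb
rate_bps = 5_332              # observed ~5,332 B/s
hours = size_bytes / rate_bps / 3600
print(f"{hours:.1f} hours")   # ~16.5 hours, i.e. hours rather than minutes
```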
| clarkb | I think personally I want to hit the "bad" bots first which the UA change should do. See if that helps. If so then maybe we're done. if not then we can consider being a bit tougher on the "good" bots too | 15:34 |
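A rough sketch of how one might sanity-check whether the denylisted crawlers really dominate the traffic before and after such a change; the log path and the user-agent substrings are illustrative assumptions (ahrefs and semrush are named above, the others are common crawler names), not the actual filter rules in system-config:

```python
# Rough sketch: tally apache combined-log requests by whether the user agent
# matches a few known crawler substrings. The path and the substrings are
# illustrative assumptions, not the real opendev filter rules.
import re
from collections import Counter

UA_PATTERNS = ["AhrefsBot", "SemrushBot", "GPTBot", "ClaudeBot", "Bytespider"]
ua_re = re.compile(r'"([^"]*)"\s*$')  # last quoted field is the user agent

counts = Counter()
with open("/var/log/apache2/lists-access.log") as log:
    for line in log:
        match = ua_re.search(line)
        ua = match.group(1) if match else ""
        bucket = "crawler" if any(p in ua for p in UA_PATTERNS) else "other"
        counts[bucket] += 1

total = sum(counts.values()) or 1
print({k: f"{v} ({100 * v / total:.1f}%)" for k, v in counts.items()})
```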
| corvus | the real question is: is the throughput more or less than chatgpt's token output rate? maybe instead of downloading packages from ubuntu, we should just ask chatgpt to output them. wcpgw? | 15:50 |
| clarkb | I once wrote a kernel driver with a double free in it and couldn't reboot the system I loaded it on (the filesystem got trashed). I can only imagine the fun that would come out of hallucinated firmware blobs | 15:56 |
| opendevreview | Merged opendev/system-config master: Update UA filter rules https://review.opendev.org/c/opendev/system-config/+/959576 | 15:57 |
| clarkb | the deploy jobs for ^ are running ahead of the hourly jobs that will start soon | 15:59 |
| clarkb | fungi: the lists deployment is done and it should be running the updated UA filter set | 16:08 |
| clarkb | looks like iowait continues to be high so your plan of restarting things to get out of swap is a good one | 16:30 |
| clarkb | fungi: fwiw I'm noticing problems getting content from snapcraft.io. I wonder if there is a consistent backend issue with canonical storage | 16:36 |
| *** | dhill is now known as Guest25924 | 16:36 |
| fungi | quite likely, that seems like three different things we've seen problems fetching from canonical now | 16:46 |
| opendevreview | Clark Boylan proposed zuul/zuul-jobs master: Use network=host on docker buildx build https://review.opendev.org/c/zuul/zuul-jobs/+/959863 | 17:00 |
| opendevreview | Clark Boylan proposed zuul/zuul-jobs master: Make microk8s jobs non voting https://review.opendev.org/c/zuul/zuul-jobs/+/959884 | 17:00 |
| fungi | came across https://askubuntu.com/questions/1555546 which seems to have a reliable recount of the problem and what's going on | 17:07 |
| fungi | tl;dr outage hours ago which left a bunch of mirrors broken mid-sync and created a massive processing backlog that will take a while to clear | 17:08 |
| opendevreview | Clark Boylan proposed zuul/zuul-jobs master: Revert "Make microk8s jobs non voting" https://review.opendev.org/c/zuul/zuul-jobs/+/959887 | 17:11 |
| clarkb | noted this in the zuul room but I'm going to pop out shortly for a bike ride to get that done before it is hot today | 17:12 |
| clarkb | I'll be back in a bit. | 17:12 |
| fungi | cool, i'll probably disappear for an early-ish dinner but not for a few more hours still | 17:13 |
| fungi | we should be dropping our openeuler mirror from afs, right? doesn't seem like it's been used for anything for about a year now | 18:23 |
| fungi | and it's using 317gb of space | 18:23 |
| fungi | that's 8% of our afs data | 18:25 |
| fungi | (though only 6% of our quota) | 18:26 |
| opendevreview | Jeremy Stanley proposed opendev/system-config master: Stop mirroring OpenEuler packages https://review.opendev.org/c/opendev/system-config/+/959892 | 18:37 |
| opendevreview | Jeremy Stanley proposed openstack/project-config master: Remove mirror.openeuler utilization graph https://review.opendev.org/c/openstack/project-config/+/959893 | 18:41 |
| fungi | i'll go ahead and restart containers and apache on lists01, it's got a little over a gig paged out at the moment with python processes eating a lot of ram | 18:44 |
| fungi | oh yeah, apache took several minutes to stop | 18:46 |
| fungi | #status log Restarted mailing list services on lists01 in order to free up memory now that we've implemented some additional user agent filters to weed out more bad crawlers | 18:48 |
| opendevstatus | fungi: finished logging | 18:49 |
| fungi | hyperkitty is loading for me again now. swap use is nominal and about half the ram is free at this point | 18:52 |
| fungi | though lists.openstack.org is still displaying "rest api not available" errors at the moment | 18:55 |
| fungi | ah, that's postorius erroring though not hyperkitty, i'll give it a few more minutes before i assume something is wrong | 18:57 |
| fungi | granted it's been about 10 minutes already since the containers started | 18:58 |
| fungi | i'll try restarting the containers again, i wonder if doing that while apache was down might have caused it some grief | 19:00 |
| fungi | yeah, i think that was it because this time postorius loaded within a couple of minutes | 19:01 |
| fungi | everything's really snappy in the webui now too, even things that are normally a little slow like keyword searching archives and changing search result sort order | 19:03 |
| fungi | though now it's getting beaten senseless by something claiming to be "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36" crawling all the archives at a breakneck pace | 19:07 |
| fungi | from an ip address in google cloud | 19:10 |
| clarkb | fungi: another thing I noticed is that available memory was still fairly high considering how much we were swapping. I'm not sure if it was very low at one point then fell off, but maybe we should reduce the swappiness value there? | 19:14 |
| clarkb | fungi: re openeuler there was a time when people were trying to resurrect it, then I get the sense they gave up once they realized the scope of what was required | 19:15 |
| clarkb | (though I don't know that for sure) | 19:15 |
| fungi | clarkb: for mailman memory use, it was consumed by buffers/cache, causing other stuff to get paged out | 19:16 |
| fungi | i don't think changing swappiness necessarily affects that | 19:17 |
| fungi | maybe mariadb tuning could help? top claims mariadbd is currently using 381792 resident and 2638412 virtual memory | 19:19 |
| fungi | by far the largest virtual memory consumer, though two uwsgi processes are vying for second and third place | 19:20 |
| fungi | i saw another ppa download failure in https://zuul.opendev.org/t/openstack/build/1e5881aae35b4e1c8eceec8994c6294a | 19:21 |
| opendevreview | Merged zuul/zuul-jobs master: Make microk8s jobs non voting https://review.opendev.org/c/zuul/zuul-jobs/+/959884 | 19:24 |
| fungi | the crawler i mentioned earlier seems to be using multiple addresses in google cloud actually | 19:25 |
| fungi | hah, also these are spoofing the referrer as "https://indeed.com/" | 19:26 |
| fungi | i bet that skews a lot of web stats | 19:26 |
| fungi | and another crawler is spoofing a referrer of "https://barclays.co.uk/" | 19:28 |
| fungi | smarter ones claim "https://google.com/" as the referrer, some even including a search parameter | 19:30 |
| fungi | e.g. "https://www.google.com/search?q=lists" | 19:30 |
| fungi | i wonder if a referrer filter would be useful | 19:31 |
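Along the same lines, a small sketch for tallying referrers out of an access log to gauge whether a referrer filter would catch much; the log path is again an assumption:

```python
# Quick look at which referrers show up most often, to judge whether a
# referrer-based filter would catch much; log path is an assumption.
from collections import Counter

referrers = Counter()
with open("/var/log/apache2/lists-access.log") as log:
    for line in log:
        parts = line.split('"')
        if len(parts) >= 5:           # combined format: request, referrer, UA
            referrers[parts[3]] += 1  # referrer is the second quoted field
print(referrers.most_common(10))
```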
| clarkb | huh that is unexpected. I guess for a lot of sites they would let that through because they want the referrals | 19:33 |
| clarkb | but ya I don't think we care about traffic from indeed or barclays | 19:33 |
| opendevreview | Merged zuul/zuul-jobs master: Use network=host on docker buildx build https://review.opendev.org/c/zuul/zuul-jobs/+/959863 | 19:33 |
| fungi | well, it's pretty clear the splash page on those sites doesn't link to random messages in our mailing list archives. and certainly not thousands of them | 19:34 |
| clarkb | I'm going to recheck a few of my docker image build changes now just to sanity check ^ | 19:34 |
| clarkb | fungi: oh https://review.opendev.org/c/openstack/project-config/+/959396 is another one that you might want to look at. ruamel.yaml made another release and we're back to not merging the normalization results due to weird trailing spaces on line wrap fixups | 19:35 |
| fungi | i'm not even sure why we use it for normalization in the first place, vs just a regular pyyaml dumper. i mean, that file has no comments, and we want it alpha-ordered anyway | 19:41 |
| fungi | if the concern is outputting yaml that yamllint will be happy with, that's possible to do with pyyaml, just need to override the dumper to fix its non-corming list indenting choice | 19:43 |
| fungi | er, non-conforming | 19:43 |
| opendevreview | Merged openstack/project-config master: Disable ruamel automagic line wrapping https://review.opendev.org/c/openstack/project-config/+/959396 | 19:43 |
| clarkb | fungi: I know that the yaml that is produced for the old nodepool configuration by ansible via pyyaml is pretty atrocious for humans to deal with | 19:45 |
| fungi | yeah, i have a recipe that solves those issues | 19:45 |
| clarkb | also I think I'm realizing that the ci jobs on updates to the yaml stuff don't actually run the yaml stuff so we might find new and unexpected behaviors from that change | 19:45 |
| clarkb | it's simple enough to revert if we need to so not a big deal | 19:46 |
| clarkb | if the zuul-jobs update looks good and no one complains between now and early next week I think I'd like to start landing the move to quay changes that I pushed up | 19:47 |
| clarkb | I'm reasonably confident things are working the way opendev needs them to with all of the changes I've pushed and their results | 19:47 |
| clarkb | (so mostly just a matter of ensuring we don't need to roll back for others before committing at this point) | 19:48 |
| clarkb | as a side note none of these changes have failed due to docker hub rate limit errors since I pushed them up. Lots of other failures until I tracked down the problems but I think that is a really good sign we'll be in good shape once this is done | 19:49 |
| clarkb | hrm we may still have problems with intermediate registry content aging out more quickly than expected | 19:50 |
| clarkb | I'm not sure I have the energy to debug that on a Friday though | 19:50 |
| fungi | clarkb: https://paste.opendev.org/show/blxPttoIfvBZKMh4MZrc/ is how i've solved the yaml emitter/dumper issue | 19:51 |
| clarkb | I wouldn't be opposed to using something like that instead of ruamel if it produces readable results | 19:52 |
| clarkb | fungi: is that linux firmware package still downloading? Or are things improving as the backend synchronization gets further along? | 19:53 |
| fungi | oh and then yaml.dump(self.data, Dumper=_IBSDumper, allow_unicode=True, default_flow_style=False, explicit_start=True, indent=4, stream=file_descriptor) | 19:53 |
| fungi | or whatever | 19:53 |
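The paste itself isn't reproduced in this log; the usual shape of that kind of recipe is roughly the following, reusing the `_IBSDumper` name and dump arguments from fungi's message above, with the rest being the common pyyaml indentation workaround rather than his exact code:

```python
# Common pyyaml workaround for yamllint-friendly output: indent sequence
# items under their parent key, which pyyaml's default emitter does not do.
# This mirrors the shape of fungi's recipe but is not his actual paste.
import yaml

class _IBSDumper(yaml.SafeDumper):
    def increase_indent(self, flow=False, indentless=False):
        # never emit "indentless" block sequences
        return super().increase_indent(flow, False)

data = {"projects": [{"project": "openstack/project-config"}]}  # sample data
with open("projects-normalized.yaml", "w") as fd:
    yaml.dump(data, Dumper=_IBSDumper, allow_unicode=True,
              default_flow_style=False, explicit_start=True,
              indent=4, stream=fd)
```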
| fungi | clarkb: 31% downloaded so far, speed has varied a bit but current estimate is 9 hours remaining | 19:54 |
| clarkb | ack I guess it's running in screen so you can get back to it later? | 19:55 |
| fungi | yes | 19:55 |
| fungi | screen-within-screen actually, because i start a screen session to run do-release-upgrade under, and then it creates a screen session within that to do its various tasks | 20:03 |
| fungi | okay, knocking off early to go get dinner, but i'll check back in on the afs01.dfw upgrade later this evening/tomorrow and see if i can't still get it wrapped up and the rw volume moves back underway | 20:08 |
| clarkb | sounds good, enjoy dinner. I'll be around this afternoon but will have to do a school run in about an hour | 20:09 |
| clarkb | shouldn't be a big deal | 20:09 |
| opendevreview | Goutham Pacha Ravi proposed openstack/project-config master: Add toggleWipState permission to all OpenStack repos https://review.opendev.org/c/openstack/project-config/+/959904 | 20:31 |
| gouthamr | don't hate me :P i just think this is useful to everyone, ty for the suggestion fungi ^ | 20:32 |
| clarkb | gouthamr: I'm going to -1 it because you should only edit the meta project if you want it to be global for openstack | 20:33 |
| clarkb | unless you are scoping it to specific groups? | 20:33 |
| gouthamr | i thought so, but, how do i define a core group? | 20:33 |
| clarkb | ya nevermind you're scoping it to each project group | 20:33 |
| clarkb | I wonder if we could just let registered users toggleWipState and if that would create problems | 20:33 |
| gouthamr | yeah, just less disruptive, and in line with current expectations i think | 20:34 |
| clarkb | ya I'm mostly wondering if we need to be so restrictive | 20:34 |
| gouthamr | wouldn't want everyone with that ability in case there are disagreements | 20:34 |
| clarkb | what damage is done if someone unwips a change: basically none. You can land a change earlier than expected but that still requires core approval | 20:34 |
| clarkb | gouthamr: except that core reviewer code review is still the disagreement stopgap | 20:35 |
| JayF | Right now, with our ACLs as they are, I generally instruct new gerrit users to ignore the WIP feature entirely. It'd be nice to have that default changed so it's more generally useful. | 20:35 |
| clarkb | essentially this feels redundant to me if we're keeping core reviewers around | 20:35 |
| JayF | gouthamr: if one core is landing something when there's obvious disagreement, that's a social problem not a technical one | 20:36 |
| clarkb | JayF: yes I personally use workflow -1 which enables anyone to unset it by pushing a new patchset | 20:36 |
| clarkb | (which again goes back to my wondering if there is any value in restricting this) | 20:36 |
| JayF | clarkb: that is what I teach, CR-1/W-1 | 20:36 |
| JayF | clarkb: if it's not clear: I'm 100% on the loosen up the ACL side | 20:36 |
| clarkb | JayF: yup I got that. | 20:37 |
| * | gouthamr thinks | 20:37 |
| clarkb | I think my preference would be to update the openstack meta project acl to allow toggleWipState for Registered Users. And if that goes well we can probably even set it globally in the All-Projects ACL | 20:37 |
| clarkb | at least personally, the way I use workflow -1, I don't think I would lose any functionality that way, as anyone else can already unset my workflow -1 | 20:38 |
| clarkb | gouthamr: also the case that triggered this change was worked around using a system that any one can use too | 20:38 |
| gouthamr | ack, if you disagree, you can add it back: "Hey please don't unset the WIP, i'll do so when i think so" :P | 20:38 |
| clarkb | (they created a new change) | 20:38 |
| gouthamr | yeah, but, i didn't like that approach | 20:39 |
| gouthamr | all the prior comments are in a different place | 20:39 |
| clarkb | gouthamr: I agree the approach is bad but not because anyone could do it. But because it breaks the review context chain | 20:39 |
| gouthamr | yeah | 20:39 |
| gouthamr | i wish they'd chimed in here and gotten one of you to drop the WIP | 20:39 |
| gouthamr | anyway, i can revise this change to let anyone drop the WIP | 20:40 |
| gouthamr | ty for sharing your reasoning | 20:40 |
| clarkb | we can see what fungi thinks too. If general consensus is to keep it limited to core groups then we can land the big change. But I'd be happy to try the more global registered users acl change | 20:40 |
| clarkb | gouthamr: one reason I'm allergic to changes like this if we can avoid them is that the next project to be created has to remember to set these options too | 20:41 |
| gouthamr | true! | 20:41 |
| clarkb | the more we can standardize the less minor bits we're likely to get wrong later | 20:41 |
| clarkb | also TIL gerrit will list up to 200 files by default | 20:43 |
| opendevreview | Goutham Pacha Ravi proposed openstack/project-config master: Allow registered users to toggleWipState in OpenStack https://review.opendev.org/c/openstack/project-config/+/959905 | 20:44 |
| opendevreview | Goutham Pacha Ravi proposed openstack/project-config master: Add toggleWipState permission to all OpenStack repos https://review.opendev.org/c/openstack/project-config/+/959904 | 20:46 |
| opendevreview | Goutham Pacha Ravi proposed openstack/project-config master: Allow registered users to toggleWipState in OpenStack https://review.opendev.org/c/openstack/project-config/+/959904 | 20:46 |
| * | gouthamr needs another coffee | 20:47 |
| clarkb | as a side note you don't need claude to edit 202 ini files :) there are tools for this that don't introduce potential legal questions :P | 20:51 |
| gouthamr | 😆 I read the OIF AI policy again as I submitted that | 20:54 |
| clarkb | for X in `ls acls/openstack/` ; do python3 -c "import ConfigParser; c = ConfigParser.ConfigParser(); f = open('$X'); c.load(f); c['refs/heads/*']['toggleWipState'] = 'Registered Users'; f = open('$X', 'w'); c.write(f)" | 20:55 |
| clarkb | or something like that. I can fit it into one irc message | 20:55 |
| clarkb | oh it needs the ; done | 20:56 |
| clarkb | I'm not sure which I'd spend more time doing: prompting claude to do what I want it to do in a manner I'm confident in and then verifying the output or writing the oneliner and iterating on it a handful of times until I get it right | 20:59 |
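For reference, a cleaned-up sketch of that one-liner as a short python 3 script; the `acls/openstack/` path and the toggleWipState setting come from the conversation, while the glob pattern, section name, and configparser options are assumptions, and gerrit ACL files may not round-trip faithfully through configparser:

```python
# Rough, hedged cleanup of the one-liner above. Gerrit ACL files are
# git-config style, so configparser round-tripping is lossy for comments
# and whitespace and strict=False keeps only the last of any repeated
# key -- verify the resulting diff before pushing anything.
import configparser
import glob

for path in glob.glob("acls/openstack/*.config"):   # glob pattern assumed
    c = configparser.ConfigParser(strict=False, interpolation=None)
    c.optionxform = str  # gerrit permission names are case sensitive
    c.read(path)
    # assumes every acl already has an [access "refs/heads/*"] section
    c['access "refs/heads/*"']["toggleWipState"] = "group Registered Users"
    with open(path, "w") as f:
        c.write(f)
```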
| clarkb | the jobs I rechecked to exercise the zuul-jobs docker buildx build change all appear to be happy at this point. So I'm happy with it | 21:02 |
| clarkb | if no one else screams about broken docker image builds over the next few days I think we can proceed with the slow transition to quay for these images | 21:02 |
| clarkb | (also yay after the first attempt we may finally get this done) | 21:02 |
| clarkb | only like 2 or 3 years later | 21:03 |
| gouthamr | tbh, i didn't need claude to make the mistakes - those were entirely mine; the earlier incarnation of the change needed a repetitive/special task - identify one or more core groups used by each project, and have them be the ones the permissions are granted for.. and do this in the background when i'm resolving a CI issue elsewhere... | 21:28 |
| gouthamr | so somewhat useful to multitask/get more done atm :) | 21:29 |
| Clark[m] | Oh ya the group mattered. I keep forgetting that for some reason. Friday brain. And fwiw my concern was that I would have to check claude's rules etc to see if they align with the policy. My read is that it's ... ambiguous. Claude says you cannot use its output to compete with Claude. Does that invalidate it for all open source work? Or does it just risk banning your account. I don't know | 21:29 |
| Clark[m] | I think for this change it doesn't matter much given it's mechanical and not really novel. But if it was used elsewhere I would probably have to ask for clarification | 21:30 |
| gouthamr | ah true; in this case, it was grokking (heh) the existing config and repeating things, not novel at all.. that clause seems weird. | 21:32 |
| fungi | i don't have a problem with the idea of updating openstack's acls to give core review teams toggle wip, though it would be a good idea to do the same for abandon/restore perms for consistency | 21:41 |
| fungi | i also don't object to opening such things to all registered users, whatever the tc prefers is really fine by me | 21:42 |
| fungi | i agree opening it to all registered users can essentially be done in one place in the inherited config, which makes it a way simpler change than per-team perms | 21:44 |
| fungi | with the exception of acls that set exclusive permissions overriding the inherited ones | 21:45 |
| clarkb | fungi: looks like lists might be slow again. Top shows iowait is high, but we haven't really swapped like before | 21:59 |
| clarkb | I wonder if this is just more requests than the iops for our disks can handle (and swap makes that worse but isn't the culprit?) | 21:59 |
| fungi | yes, i think it's just crawlers re-re-re-requesting every single possible variant url for each and every message in all the list archives on all our mailman sites | 22:01 |
| clarkb | "Mozilla/5.0 (compatible; Thinkbot/0.5.8; +In_the_test_phase,_if_the_Thinkbot_brings_you_trouble,_please_block_its_IP_address._Thank_you.)" <- thats a new one | 22:28 |
| clarkb | that user agent is making requests from more than one IP address too | 22:34 |
| clarkb | I'm sure that's just a grammar error in the UA but still | 22:34 |
| clarkb | https://apnews.com/article/anthropic-copyright-authors-settlement-training-f294266bc79a16ec90d2ddccdf435164 | 23:21 |