opendevreview | ASHWIN A NAIR proposed openstack/swift master: Add X-Open-Expired to recover expired objects https://review.opendev.org/c/openstack/swift/+/874710 | 16:30 |
---|---|---|
timburke | paladox, i saw your question in -infra -- are the segments for the SLO in the same container as the manifest? if not, do the container ACLs match between the two containers? | 20:51 |
paladox | well there's a separate container it created e.g. miraheze-obeymewiki-dumps-backup_segments | 20:51 |
paladox | i noticed it doesn't copy perms so i set it so anons could access it as well but that didn't fix it | 20:52 |
paladox | https://www.irccloud.com/pastebin/IJ6WQXdj/ | 20:52 |
paladox | normal files work. It's just SLO objects that don't unless i use a swift command | 20:53 |
fungi | hah, i was about to suggest asking in here | 20:55 |
paladox | on the SLO i see: | 21:02 |
paladox | https://www.irccloud.com/pastebin/ua1UPcqZ/ | 21:02 |
paladox | accessing the segment directly worked | 21:05 |
timburke | paladox, looks like the object was uploaded as a DLO, not an SLO -- DLO also needs listing permissions on the segment container | 21:05 |
paladox | oh... | 21:05 |
* paladox tries fixing that | 21:05 |
timburke | i'd either re-upload as an SLO, or add .rlistings to the ACL on the segments container | 21:05 |
paladox | that worked!!! | 21:08 |
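For reference, a minimal sketch of the ACL change being suggested here, using the segment container named earlier; the exact ACL string is an example (`.r:*` allows anonymous reads, `.rlistings` allows the container listing a DLO needs):

```
swift post -r '.r:*,.rlistings' miraheze-obeymewiki-dumps-backup_segments
```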
paladox | timburke: which is recommended? slo or dlo? And i'm not sure how you would upload as a slo, i think we just used the swift upload command | 21:08 |
timburke | i'd recommend SLO -- it's the default (assuming the cluster supports it) in recent versions of python-swiftclient, but on old versions you should just need to add --use-slo to the command line | 21:10 |
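A hedged example of what that re-upload might look like with python-swiftclient; the container name, file name, and 1 GiB segment size are placeholders:

```
swift upload --use-slo --segment-size 1073741824 miraheze-obeymewiki-dumps-backup dump.tar.gz
```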
paladox | We use debian buster so whichever swift version that dist has | 21:11 |
paladox | *bullseye | 21:11 |
timburke | correction, recent *version* -- there's only been 4.3.0 with it | 21:12 |
paladox | hmm, looks like it should be default for us? we use version 2.26 | 21:14 |
paladox | and i see no --use-slo | 21:14 |
timburke | sounds like a server version, i was talking client. bullseye has swiftclient 3.10, even sid is 4.2.0 -- so no 4.3.0 on debian yet | 21:16 |
paladox | oh | 21:17 |
timburke | "i see no --use-slo" -- is that in the CLI help? you get different help text depending on whether you're running `swift --help` or `swift upload --help` | 21:18 |
paladox | oh | 21:25 |
paladox | i see it in the latter | 21:25 |
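For anyone following along, the two help invocations in question (output naturally varies by swiftclient version):

```
swift --help          # global options plus the list of subcommands
swift upload --help   # per-subcommand options such as --use-slo and --segment-size
```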
paladox | what's the difference between slo and dlo? | 21:25 |
paladox | is slo more performant downloading & uploading? | 21:25 |
timburke | (apologies for the delay) slo offers better consistency guarantees by putting the complete list of segments (and their expected MD5s) in the manifest object itself -- dlos rely on container listings, so if there are any delays/inconsistencies there, users may download incomplete data (and likely not realize it!) | 23:08 |
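To make the difference concrete, a rough sketch of the two manifest styles against the raw storage API; $STORAGE_URL, $TOKEN, the container/object names, etag, and size here are all placeholders, not values from this conversation:

```
# SLO: the manifest object itself carries the full segment list plus each
# segment's expected MD5 and size, uploaded with ?multipart-manifest=put
curl -X PUT -H "X-Auth-Token: $TOKEN" \
  "$STORAGE_URL/dumps/backup.tar.gz?multipart-manifest=put" \
  -d '[{"path": "dumps_segments/backup.tar.gz/000001",
        "etag": "d41d8cd98f00b204e9800998ecf8427e",
        "size_bytes": 1048576}]'

# DLO: the manifest is an empty object whose X-Object-Manifest header names a
# container/prefix; segments are discovered from the container listing at
# download time, which is where the consistency caveat comes from
curl -X PUT -H "X-Auth-Token: $TOKEN" \
  -H "X-Object-Manifest: dumps_segments/backup.tar.gz/" \
  -d '' "$STORAGE_URL/dumps/backup.tar.gz"
```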
paladox | timburke: do you know how i can get swift-replicator working with an out-of-storage node? I need to transfer data off it but it doesn't seem to be happening? Or am i missing something. I just see Error syncing partition: [Errno 28] No space left on device in the service status. | 23:13 |
paladox | i changed the weight | 23:13 |
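For context, draining a device by weight is normally done against the ring builder files along these lines; the builder file name and the `d3` device id are illustrative, and the regenerated object.ring.gz then has to be copied out to every node:

```
# Zero the device's weight so the rebalance assigns its partitions elsewhere
swift-ring-builder object.builder set_weight d3 0
swift-ring-builder object.builder rebalance
# then distribute the new object.ring.gz to all storage and proxy nodes
```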
paladox | Unable to read '/srv/node/sda31/objects/938/hashes.pkl'#012Traceback (most recent call last):#012 File "/usr/lib/python3/dist-packages/swift/obj/diskfile.py", line 1247, in __get_hashe> | 23:15 |
timburke | paladox, is that an rsync error? sounds like the destination drive may be full | 23:15 |
paladox | hmm | 23:15 |
paladox | Nah, i don't think so. | 23:15 |
timburke | ah -- no space when rehashing... ick | 23:15 |
timburke | i think you'll need to delete some data -- i'd pick some arbitrary diskfiles, run swift-object-info to find the current assignments for them, check that they've been replicated there, then delete them (via the filesystem, not the API! don't want to go deleting it cluster-wide ;-) | 23:18 |
timburke | it'd be nice if we could tolerate the ENOSPC better, though -- i think i remember seeing that bug once... | 23:19 |
timburke | https://bugs.launchpad.net/swift/+bug/1491676 | 23:20 |
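A rough shell sketch of the manual cleanup described above; the device and partition paths are illustrative, and the file should only be removed locally after confirming another copy exists:

```
# Pick a candidate diskfile on the full drive (paths are examples)
datafile=$(find /srv/node/sda31/objects/938 -name '*.data' | head -n 1)

# Ask swift-object-info where the ring currently assigns this object
swift-object-info "$datafile"

# After confirming a copy exists on the node(s) listed by the ring, free the
# space by deleting only the local file -- never DELETE through the API,
# which would remove the object cluster-wide
rm "$datafile"
```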
paladox | found a 5g rsyncd.log file | 23:21 |
paladox | i've emptied it | 23:21 |
paladox | hmm that still causes it to say out of storage... | 23:24 |
paladox | we don't use replicas (we only have one set of data, i know bad practice but cannot afford to have >=2 replicas) | 23:35 |
opendevreview | Tim Burke proposed openstack/swift master: Green GreenDBConnection.execute https://review.opendev.org/c/openstack/swift/+/866051 | 23:42 |
opendevreview | Tim Burke proposed openstack/swift master: tests: Fix replicator test for py311 https://review.opendev.org/c/openstack/swift/+/886538 | 23:42 |
opendevreview | Tim Burke proposed openstack/swift master: tests: Stop trying to mutate instantiated EntryPoints https://review.opendev.org/c/openstack/swift/+/886539 | 23:42 |
opendevreview | Tim Burke proposed openstack/swift master: fixup! Green GreenDBConnection.execute https://review.opendev.org/c/openstack/swift/+/886540 | 23:42 |
opendevreview | Tim Burke proposed openstack/swift master: CI: test under py311 https://review.opendev.org/c/openstack/swift/+/886541 | 23:42 |
timburke | the rsync log was probably on the root drive, not the data drive. the issue will be one of the disks mounted under /srv/node/* -- i'd start with a `df -h` to confirm which disk is full | 23:44 |
paladox | timburke: will it still try and rsync the data to other servers? Or is it now stuck? | 23:44 |
paladox | there's only one drive (we don't have a separate one for the data) | 23:44 |
paladox | https://www.irccloud.com/pastebin/MNPKHphu/ | 23:45 |
timburke | oh. hm. i would've thought the 5G would have helped :-/ | 23:45 |
timburke | did we just immediately fill it up again? i guess check for other logs we can throw away? | 23:45 |
paladox | seems that that was the only large file. | 23:47 |
paladox | for some reason using fallocate_reserve didn't stop it from filling up | 23:47 |
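As a hedged aside on that last point: fallocate_reserve is set in the object server config and, as the option is documented, it only makes Swift's own writes fail early once free space drops below the reserve -- it does not stop rsync replication traffic or unrelated files (such as a growing rsyncd.log) from consuming the rest of the disk. A quick way to check the setting (paths and value shown are illustrative):

```
# Look for the reserve in the Swift configs; the typical form in [DEFAULT] is
# something like: fallocate_reserve = 1%
grep -r fallocate_reserve /etc/swift/
```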