*** mvenesio has joined #openstack-swift | 00:10 | |
mvenesio | Hi guys, if i have a cluster with 3 zones nearly full, can adding datanodes in all 3 zones at the same time result in data unavailability or even data loss? | 00:11 |
mvenesio | do i have to add nodes in just one zone at a time | 00:12 |
mvenesio | or maybe two | 00:12 |
DHE | you can add them all | 00:12 |
DHE | as long as you don't yank anything out prematurely you should be fine. | 00:13 |
mvenesio | but suppose the cluster moves all 3 copies of the same object at the same time during the rebalance | 00:13 |
DHE | that's what the 'min_part_hours' parameter during ring creation is for | 00:14 |
mvenesio | so no two copies of the same object can be moved at the same time | 00:15 |
mvenesio | ? | 00:15 |
DHE | well it's by partition, not object... | 00:15 |
mvenesio | yes well partition | 00:15 |
DHE | and to the best of its ability the ring builder won't move more than 1 replica per partition for each $min_part_hours time period | 00:15 |
DHE | this is why not yanking anything is important | 00:16 |
DHE | well in your situation you wouldn't do that, so you should be fine | 00:16 |
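(A minimal sketch of the min_part_hours workflow discussed above, assuming a 3-replica ring; the zone/IP/device/weight values are made up for illustration. The third argument to 'create' is min_part_hours:)

    # create a builder: part_power=10, replicas=3, min_part_hours=24
    swift-ring-builder object.builder create 10 3 24
    # add one device per zone (region 1, zones 1-3; hypothetical values)
    swift-ring-builder object.builder add r1z1-10.0.0.1:6200/sdb1 100
    swift-ring-builder object.builder add r1z2-10.0.0.2:6200/sdb1 100
    swift-ring-builder object.builder add r1z3-10.0.0.3:6200/sdb1 100
    swift-ring-builder object.builder rebalance
    # min_part_hours can also be changed later on an existing builder
    swift-ring-builder object.builder set_min_part_hours 24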
mvenesio | great to hear, because right now i'm getting some 404s in the application on swift GETs | 00:20 |
DHE | these are unexpected? | 00:21 |
mvenesio | yes | 00:22 |
mvenesio | but since the cluster is rebalancing all zones at the same time, i don't know if they could be caused by these movements. | 00:23 |
mvenesio | or if there's a bigger issue | 00:23 |
DHE | hmmm... wondering if your full disks actually caused upload issues you are unaware of... | 00:24 |
DHE | (anyone else have other ideas?) | 00:24 |
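(One way to chase an unexpected 404 during a rebalance is to look up the object's primary and handoff nodes and check each replica directly; a sketch using swift-get-nodes with a hypothetical account/container/object:)

    # print the primary and handoff nodes for one object, plus ready-made
    # curl commands to HEAD each replica on its object server
    swift-get-nodes /etc/swift/object.ring.gz AUTH_myaccount mycontainer myobject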
mvenesio | Guys another question | 02:05 |
mvenesio | as far as i know the object replicator is the service that deletes objects from disk. i'm launching several deletes in batch to free space in the cluster, and although the deletes succeed, i'm not seeing that amount of free space on my disks | 02:08 |
mvenesio | is this something related to the replicator's default reclaim_age ? | 02:08 |
*** cshastri has joined #openstack-swift | 05:06 | |
*** silor has joined #openstack-swift | 05:57 | |
*** prasen has joined #openstack-swift | 06:00 | |
*** itlinux_ has joined #openstack-swift | 06:22 | |
*** silor has quit IRC | 06:24 | |
*** itlinux_ has quit IRC | 06:37 | |
*** armaan has joined #openstack-swift | 07:59 | |
*** cshastri has quit IRC | 08:07 | |
*** armaan has quit IRC | 08:15 | |
*** armaan has joined #openstack-swift | 08:16 | |
*** armaan has quit IRC | 08:21 | |
*** geaaru has joined #openstack-swift | 08:45 | |
*** armaan has joined #openstack-swift | 09:12 | |
*** armaan has quit IRC | 09:17 | |
*** armaan has joined #openstack-swift | 09:18 | |
*** armaan_ has joined #openstack-swift | 09:21 | |
*** armaan has quit IRC | 09:22 | |
*** armaan_ has quit IRC | 09:22 | |
*** armaan has joined #openstack-swift | 09:22 | |
*** armaan has quit IRC | 09:27 | |
*** geaaru_ has joined #openstack-swift | 09:36 | |
*** geaaru has quit IRC | 09:37 | |
*** hoonetorg has joined #openstack-swift | 11:39 | |
mvenesio | Hi guys good morning | 12:32 |
mvenesio | as far as i know the object replicator is the service that deletes objects from disk. i'm launching several deletes in batch to free space in the cluster, and although the deletes succeed, i'm not seeing that free space reflected on my disks | 12:33 |
mvenesio | is this something related to the replicator's default reclaim_age ? | 12:33 |
mvenesio | i didn't change it, it has the default value, but now i need to delete objects and reclaim the space as soon as possible | 12:33 |
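(For context: on DELETE, Swift replaces an object's .data file with a small .ts tombstone file, so most of the space should be released at delete time; reclaim_age, 604800 seconds / 7 days by default, only controls when the tombstones themselves are removed. A sketch of where it is set, in object-server.conf; note that lowering it below the time replication needs to converge risks deleted objects reappearing:)

    [object-replicator]
    # seconds to keep tombstones before reclaiming them (default 604800 = 7 days)
    reclaim_age = 604800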
*** SkyRocknRoll has joined #openstack-swift | 13:33 | |
*** mikecmpbll has joined #openstack-swift | 14:47 | |
*** prasen has quit IRC | 18:15 | |
*** SkyRocknRoll has quit IRC | 18:29 | |
*** geaaru__ has joined #openstack-swift | 18:29 | |
*** geaaru_ has quit IRC | 18:31 | |
*** silor has joined #openstack-swift | 18:46 | |
*** mvk_ has quit IRC | 18:57 | |
*** mvk_ has joined #openstack-swift | 19:02 | |
*** armaan has joined #openstack-swift | 19:32 | |
*** linkmark has quit IRC | 20:07 | |
*** silor has quit IRC | 20:11 | |
*** armaan has quit IRC | 20:11 | |
*** armaan has joined #openstack-swift | 20:11 | |
*** silor has joined #openstack-swift | 20:12 | |
*** linkmark has joined #openstack-swift | 20:12 | |
*** armaan has quit IRC | 20:16 | |
openstackgerrit | Tim Burke proposed openstack/swift master: Add support for multiple root encryption secrets https://review.openstack.org/577874 | 20:16 |
openstackgerrit | Tim Burke proposed openstack/swift master: Multi-key KMIP keymaster https://review.openstack.org/586455 | 20:16 |
openstackgerrit | Tim Burke proposed openstack/swift master: Define keymaster log routes on the class https://review.openstack.org/586900 | 20:16 |
openstackgerrit | Tim Burke proposed openstack/swift master: Move keymaster_config_path parsing out of _get_root_secret https://review.openstack.org/586901 | 20:16 |
openstackgerrit | Tim Burke proposed openstack/swift master: Allow multiple keymasters https://review.openstack.org/586902 | 20:16 |
openstackgerrit | Tim Burke proposed openstack/swift master: Display crypto data/metadata details in swift-object-info https://review.openstack.org/586903 | 20:16 |
*** silor has quit IRC | 20:49 | |
*** bigdogstl has joined #openstack-swift | 22:06 | |
*** rcernin has joined #openstack-swift | 22:14 | |
*** bigdogstl has quit IRC | 22:25 | |
*** bigdogstl has joined #openstack-swift | 22:27 | |
*** itlinux_ has joined #openstack-swift | 22:47 | |
*** kei_yama has joined #openstack-swift | 23:12 | |
*** mikecmpbll has quit IRC | 23:14 | |
*** bigdogstl has quit IRC | 23:14 | |
*** bigdogstl has joined #openstack-swift | 23:18 | |
mattoliverau | morning | 23:21 |
*** bigdogstl has quit IRC | 23:27 | |
*** itlinux_ has quit IRC | 23:29 | |
*** bigdogstl has joined #openstack-swift | 23:29 | |
*** bigdogstl has quit IRC | 23:34 | |
*** bigdogstl has joined #openstack-swift | 23:44 | |
*** bigdogstl has quit IRC | 23:53 |