opendevreview | Matthew Oliver proposed openstack/swift master: Sharding: root audit epoch reset warning https://review.opendev.org/c/openstack/swift/+/799554 | 04:50 |
*** redrobot4 is now known as redrobot | 04:52 |
opendevreview | Matthew Oliver proposed openstack/swift master: reconciler: PPI aware reconciler https://review.opendev.org/c/openstack/swift/+/799561 | 05:18 |
opendevreview | Matthew Oliver proposed openstack/swift master: sharder: If saving own_shard_range use no_default=True https://review.opendev.org/c/openstack/swift/+/799966 | 07:27 |
mattoliver | ^ That last one is really a belt-and-braces patch... I'm not even convinced we can get to either of those code paths without an own_shard_range being present in the broker, but in any case we probably shouldn't allow the possibility of a default own_shard_range, so we should definitely be using no_default=True | 07:29 |
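A minimal, self-contained sketch of the no_default pattern being discussed; the broker below is a stand-in class, not swift's real ContainerBroker, and only illustrates why asking for no_default=True avoids fabricating (and then persisting) a default own_shard_range:

```python
# Stand-in only: mimics the no_default behaviour of get_own_shard_range
# discussed above; not swift's actual ContainerBroker implementation.
class FakeBroker:
    def __init__(self, own_shard_range=None):
        self._own = own_shard_range

    def get_own_shard_range(self, no_default=False):
        if self._own is None and not no_default:
            # swift would construct a fresh default own shard range here
            return 'default-own-shard-range'
        return self._own


broker = FakeBroker()  # broker with no own_shard_range row yet

# With no_default=True the caller sees None and can simply skip saving anything.
assert broker.get_own_shard_range(no_default=True) is None

# Without it, a default is fabricated -- which is exactly what the patch
# wants to avoid accidentally writing back to the broker.
assert broker.get_own_shard_range() == 'default-own-shard-range'
```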
opendevreview | Alistair Coles proposed openstack/swift master: sharder: add more validation checks on config https://review.opendev.org/c/openstack/swift/+/797961 | 11:55 |
opendevreview | Ghanshyam proposed openstack/pyeclib master: Moving IRC network reference to OFTC https://review.opendev.org/c/openstack/pyeclib/+/800052 | 13:28 |
timss | Hi, I'm playing around with overload, and doing a `swift-ring-builder object.builder set_overload <factor>` seems to trigger the minimum part hours timer, so I can't rebalance immediately afterwards and push out the updated rings. Is the intended behavior to actually wait, do a write_ring first and then follow up with a rebalance later, or...? | 14:05 |
DHE | the idea is that after a rebalance you need to give the cluster time to actually move the data around and settle | 14:21 |
DHE | if you move too many copies of the data around too fast you can end up in a situation where swift can't actually find the data: it exists, but not where swift expects it to be | 14:22 |
timss | Aye, however set_overload doesn't seem to update the .ring.gz file itself, just the builder, and tells you that "the change will take effect after the next rebalance", i.e. it seems like you're waiting for nothing at that stage | 14:24 |
DHE | the overload value is just a setting for the rebalance operation: it tells it that it's okay to violate device weights in the name of better region/zone/host placement separation. | 14:26 |
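As a rough worked example of what that means for placement (made-up numbers, using the rule of thumb that overload lets a device take up to its fair share times 1 + overload when needed for dispersion):

```python
# Made-up numbers for illustration only.
replica_parts = 1024    # total partition-replicas to place across the ring
device_weight = 100.0
total_weight = 400.0    # sum of all device weights in the ring
overload = 0.1

fair_share = replica_parts * device_weight / total_weight  # 256.0
max_allowed = fair_share * (1 + overload)                   # 281.6
print(fair_share, max_allowed)
```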
DHE | but since doing a rebalance will restart the min_part_hours timer and there may be other changes you want to make, the rebalance command is its own thing. Apply all the updates you want one by one, then do the rebalance to actually update the cluster configuration | 14:26 |
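An illustrative version of that workflow (builder file name and overload value are placeholders), batching all changes so the min_part_hours cost is paid once:

```sh
# Illustrative sequence only; file name and values are placeholders.
swift-ring-builder object.builder set_overload 0.1  # staged in the builder; object.ring.gz is untouched
# ...stage any other pending changes here (add/remove devices, set_weight, ...)...
swift-ring-builder object.builder rebalance         # one rebalance applies everything, writes the new
                                                     # object.ring.gz, and restarts the min_part_hours timer
```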
timss | Hm ok, thanks. In this scenario the rings in question had already been rebalanced and in use for days (24h min_part_hours), and the set_overload triggered the timer. If however I made an entirely new ring, added some dummy devices, and then changed the overload, I could proceed with rebalancing afterwards. I guess I expected changing the overload for an existing ring and rebalancing it | 14:36 |
timss | to count as one "committed" change | 14:36 |
timss | I don't really need to change the overload on the existing ring, as I'll be redeploying this cluster and configuring the overload in the initial ring setup; it just made me unsure whether I had done it correctly in the first place :) | 14:38 |
opendevreview | Merged openstack/swift master: sharder: avoid small tail shards https://review.opendev.org/c/openstack/swift/+/794582 | 17:01 |
opendevreview | Merged openstack/swift master: Sharding: root audit epoch reset warning https://review.opendev.org/c/openstack/swift/+/799554 | 19:50 |