*** benj_ has joined #openstack-swift | 00:54 | |
*** benj_ has quit IRC | 01:01 | |
*** benj_ has joined #openstack-swift | 01:02 | |
*** rcernin has quit IRC | 01:12 | |
*** rcernin_ has joined #openstack-swift | 01:12 | |
*** benj_ has quit IRC | 01:17 | |
*** benj_ has joined #openstack-swift | 01:17 | |
*** benj_ has quit IRC | 02:06 | |
*** benj_ has joined #openstack-swift | 02:07 | |
*** rcernin_ has quit IRC | 03:14 | |
*** rcernin_ has joined #openstack-swift | 03:21 | |
*** psachin has joined #openstack-swift | 03:42 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #openstack-swift | 05:33 | |
*** m75abrams has joined #openstack-swift | 06:36 | |
*** timburke__ has quit IRC | 06:59 | |
*** rcernin_ has quit IRC | 07:14 | |
*** rpittau|afk is now known as rpittau | 08:23 | |
*** mugsie__ is now known as mugsie | 10:11 | |
openstackgerrit | Alistair Coles proposed openstack/swift master: Unify sharder and manage-shard-ranges options https://review.opendev.org/c/openstack/swift/+/778989 | 11:54 |
openstackgerrit | Alistair Coles proposed openstack/swift master: Add absolute values for shard shrinking config options https://review.opendev.org/c/openstack/swift/+/778990 | 11:54 |
*** psachin has quit IRC | 13:10 | |
*** jv_ has quit IRC | 14:30 | |
*** jv_ has joined #openstack-swift | 14:42 | |
*** m75abrams has quit IRC | 15:50 | |
*** gmann is now known as gmann_afk | 16:01 | |
*** klamath_atx has quit IRC | 16:49 | |
openstackgerrit | Alistair Coles proposed openstack/swift master: WIP relinker: make cleanup checks more robust https://review.opendev.org/c/openstack/swift/+/779308 | 17:22 |
*** rpittau is now known as rpittau|afk | 18:16 | |
*** timburke has joined #openstack-swift | 18:59 | |
*** ChanServ sets mode: +v timburke | 18:59 | |
*** stand has quit IRC | 19:15 | |
*** stand has joined #openstack-swift | 19:22 | |
*** gmann_afk is now known as gmann | 19:32 | |
openstackgerrit | Merged openstack/swift master: s3api: Allow lower-cased regions when making buckets https://review.opendev.org/c/openstack/swift/+/743425 | 20:48 |
*** gyee has joined #openstack-swift | 20:56 | |
*** rcernin has joined #openstack-swift | 21:54 | |
*** renich has joined #openstack-swift | 23:08 | |
renich | good day | 23:09 |
renich | So, I have a two node cluster; with one device each. I want to take the second node offline but I want to re-balance things so that the data in that node gets copied to the first one. Is it enough to set the weight of the device to 0 and start a rebalance? | 23:10 |
renich | https://paste.centos.org/view/f1ecabb4 | 23:11 |
renich | If so, is there a way to track progress? I see the network activity using iftop but I dunno how much remains | 23:11 |
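Setting the device weight to 0 and rebalancing is the usual way to drain a device; a minimal sketch of the commands follows. The builder file name (`object.builder`) and device search value (`d1`) are assumptions — substitute your own ring layout, and note that `min_part_hours` can prevent all partitions from moving in a single rebalance, so repeated rebalances may be needed.

```shell
# Drain the second node's device: zero its weight so partitions
# reassign to the remaining device, then rebalance the ring.
swift-ring-builder object.builder set_weight d1 0
swift-ring-builder object.builder rebalance

# Repeat for account.builder and container.builder, then distribute
# the regenerated *.ring.gz files to every node so the replicators
# start moving data off the drained device.
```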
*** renich_ has joined #openstack-swift | 23:51 | |
renich_ | Sorry, I think I got disconnected. If anyone answered or mentioned me, please, repeat. | 23:52 |
*** renich has quit IRC | 23:52 | |
DHE | if the disk is empty, /srv/node/$DISKNAME/object will be empty, or at least devoid of directories. similar for account, container, and object-1 etc. | 23:53 |
renich_ | DHE: OK. So, I should check the object directory in order to see if stuff is getting moved, right? I'll do that then. | 23:53 |
DHE | it's probably the simplest way, short of just running 'df' and watching the number decline over time | 23:54 |
DHE | but in terms of "when is it truly done?" I'd go with that | 23:54 |
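DHE's "watch the directory empty out" check can be automated: count the files remaining under the drained device's data directories and poll until it reaches zero. This is an illustrative helper, not part of Swift; the `/srv/node/...` path convention comes from the discussion above.

```python
import os

def remaining_files(datadir):
    """Count files left under a Swift device directory
    (e.g. /srv/node/disk1).

    A fully drained device holds no files; empty hash/partition
    directories may linger, so count files rather than directories.
    """
    total = 0
    for _root, _dirs, files in os.walk(datadir):
        total += len(files)
    return total

# Example: poll remaining_files('/srv/node/disk1') periodically and
# watch the count fall, much like DHE's 'df' suggestion.
```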
renich_ | DHE: does it matter if I have only 1 replica? | 23:54 |
DHE | oh my | 23:54 |
renich_ | yeah... :S | 23:54 |
DHE | you can have multiple replicas even with small clusters. my test lab was a single machine. it just had 30 disks in it. | 23:55
renich_ | DHE: In our case, we have like 80 disks, all exposed via ZFS (raidz2 vdevs) and... &lt;gulp&gt;... yeah, we have only one device exposed to swift. | 23:56
*** renich_ is now known as renich | 23:56 | |
DHE | oh I see... | 23:56 |
DHE | large objects? there's an interaction between ZFS and swift that tanked performance on my test system using it. but we use small, ~4 MB objects | 23:57
DHE | I don't necessarily mean large meaning > 5 GB, just substantially larger than 4 MB | 23:57 |
renich | Yeah, pretty large. Satellite data. ~600 TiB of it. And, yeah, performance sucks... | 23:57 |
DHE | I would recommend disabling tmpfile support. unfortunately it's not a setting and would require a code patch. it's as simple as setting something like 'self.tmpfile = False' or such | 23:58
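DHE hedges the exact attribute name, and Swift's internals vary by release, so rather than guess at them, here is a generic probe for the feature in question: Swift's object server prefers `O_TMPFILE` + `linkat` when the filesystem supports it, and a local patch to disable it would key off a check like this. The function name and approach are illustrative only, not Swift's actual code.

```python
import errno
import os

def o_tmpfile_works(dirpath):
    """Probe whether the filesystem under dirpath supports O_TMPFILE.

    Returns False on platforms without the flag (e.g. non-Linux) or on
    filesystems that reject it; DHE reports ZFS mishandles it, in which
    case forcing the ordinary temp-file fallback avoids the slowdown.
    """
    if not hasattr(os, 'O_TMPFILE'):
        return False
    try:
        fd = os.open(dirpath, os.O_WRONLY | os.O_TMPFILE)
    except OSError as e:
        # EOPNOTSUPP/EINVAL: filesystem rejects the flag;
        # EISDIR: older kernels that predate O_TMPFILE entirely.
        if e.errno in (errno.EOPNOTSUPP, errno.EINVAL, errno.EISDIR):
            return False
        raise
    os.close(fd)
    return True
```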
renich | OK. Thank you for the recommendation. | 23:59 |
renich | I just hope I did the rebalancing right. | 23:59 |
renich | Stuff is getting copied at ~8 Gbps according to iftop... but I dunno. And ZFS is super slow to update the stats. | 23:59