*** henriqueof2 has quit IRC | 01:20 | |
*** henriqueof2 has joined #openstack-swift | 01:21 | |
*** henriqueof2 has quit IRC | 01:52 | |
kota_ | good morning | 02:28 |
*** psachin has joined #openstack-swift | 02:52 | |
mattoliverau | kota_: o/ | 02:56 |
vbramhankar | mattoliverau: let's try the vagrant all-in-one installer.. thank you | 05:16 |
*** tkajinam_ has joined #openstack-swift | 06:36 | |
*** tkajinam has quit IRC | 06:38 | |
kota_ | mattoliverau: o/ | 06:56 |
*** hseipp has joined #openstack-swift | 07:49 | |
*** admin6 has joined #openstack-swift | 08:09 | |
*** pcaruana has joined #openstack-swift | 08:25 | |
*** e0ne has joined #openstack-swift | 08:34 | |
*** tkajinam_ has quit IRC | 08:48 | |
*** rchurch_ has joined #openstack-swift | 08:51 | |
*** rchurch has quit IRC | 08:53 | |
*** e0ne has quit IRC | 09:40 | |
*** jistr is now known as jistr|sick | 09:42 | |
*** gkadam has joined #openstack-swift | 09:50 | |
*** e0ne has joined #openstack-swift | 10:16 | |
*** e0ne has quit IRC | 11:10 | |
*** e0ne has joined #openstack-swift | 11:11 | |
*** e0ne has quit IRC | 12:00 | |
*** e0ne has joined #openstack-swift | 12:08 | |
*** jistr|sick is now known as jistr|sick|mtg | 13:01 | |
*** e0ne has quit IRC | 13:04 | |
*** pcaruana has quit IRC | 14:31 | |
*** gkadam has left #openstack-swift | 14:33 | |
*** henriqueof2 has joined #openstack-swift | 14:56 | |
*** e0ne has joined #openstack-swift | 14:59 | |
*** e0ne has quit IRC | 15:02 | |
*** jistr|sick|mtg is now known as jistr|sick | 15:02 | |
*** e0ne has joined #openstack-swift | 15:04 | |
*** pcaruana has joined #openstack-swift | 15:25 | |
*** [diablo]2 has quit IRC | 15:28 | |
*** [diablo] has joined #openstack-swift | 15:30 | |
*** e0ne has quit IRC | 16:06 | |
admin6 | Hi team, one of my object servers is about to be full: all of its 30 disks are at 99% usage. It is part of an erasure-coded ring (9+3). The other 7 object servers are filled to no more than 90%. I don't really understand why, because the disk weights are the same as those of many disks in other zones that are only 90% full. In fact, it looks like this server's disks are filling 5 times faster than the other disks. Strange. | 16:08 |
admin6 | I did a rebalance, reducing the weight of all disks on this server and increasing the weights of disks on another server in the same zone, but it didn't have the expected effect and the disks continue to fill up. | 16:08 |
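One way to sanity-check the "same weights" claim is to sum device weights per zone straight from the ring. A minimal sketch, assuming swift is importable and the stock /etc/swift/object.ring.gz path (both assumptions, not from the log):

```python
# Sum device weights per zone to compare each zone's declared share.
# Assumes the stock ring path; adjust for the actual deployment.
from collections import defaultdict

from swift.common.ring import Ring

ring = Ring("/etc/swift/object.ring.gz")
weight_by_zone = defaultdict(float)
for dev in ring.devs:
    if dev:  # removed devices leave None holes in the dev list
        weight_by_zone[dev["zone"]] += dev["weight"]

total = sum(weight_by_zone.values())
for zone, weight in sorted(weight_by_zone.items()):
    print(f"zone {zone}: weight {weight:.0f} ({weight / total:.1%} of total)")
```

If two zones really do show the same share yet fill at very different rates, the imbalance is coming from placement constraints rather than from the weights themselves (see the overload discussion later in the log).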
*** henriqueof3 has joined #openstack-swift | 16:10 | |
*** e0ne has joined #openstack-swift | 16:12 | |
*** henriqueof2 has quit IRC | 16:12 | |
*** henriqueof2 has joined #openstack-swift | 16:17 | |
*** henriqueof3 has quit IRC | 16:20 | |
*** e0ne has quit IRC | 16:20 | |
*** e0ne has joined #openstack-swift | 16:22 | |
*** e0ne has quit IRC | 16:23 | |
*** henriqueof2 has quit IRC | 16:29 | |
*** gyee has joined #openstack-swift | 16:32 | |
openstackgerrit | Thiago da Silva proposed openstack/python-swiftclient master: Update release to 3.7.0 https://review.openstack.org/640819 | 17:02 |
tdasilva | admin6: i'd suggest starting by double-checking that you have the same ring on all the nodes in the cluster: swift-recon --md5 can help with that | 17:03 |
tdasilva | admin6: but also remember to verify the proxy nodes, as swift-recon will not check those (if they are proxy-only nodes and not storage nodes) | 17:04 |
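What swift-recon --md5 compares is just the MD5 digest of each node's ring files, so the same check can be run by hand on proxy-only machines. A minimal sketch, assuming the stock /etc/swift ring locations:

```python
# Minimal sketch of the comparison swift-recon --md5 automates: compute the
# MD5 of each local ring file so digests can be diffed across nodes.
# The /etc/swift paths are the stock defaults (an assumption, not from the log).
import hashlib

RING_FILES = [
    "/etc/swift/account.ring.gz",
    "/etc/swift/container.ring.gz",
    "/etc/swift/object.ring.gz",
]

def ring_md5(path):
    """Return the hex MD5 digest of one ring file, read in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    for path in RING_FILES:
        print(path, ring_md5(path))
```

Running this on every machine, proxies included, and diffing the output covers the gap tdasilva mentions.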
*** psachin has quit IRC | 17:09 | |
admin6 | tdasilva: the rings are the same on all nodes (8/8 hosts matched, 0 error[s] while checking hosts) and also on my 4 proxies | 17:15 |
admin6 | tdasilva: my ring is not well balanced at all because I'm trying to remove some zones while growing others. However, zones 3 and 4 are declared the same way with about the same overall weights, but zone 3 has one server full while zone 4 is only filled to 90%. | 17:22 |
admin6 | tdasilva: would you say that a ring with a required overload of 358.386218% is a complete mess? | 17:25 |
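That "required overload" figure is plausible for a drain-in-progress EC ring: the 12 fragments (9+3) behave like 12 replicas that the builder still wants dispersed across zones, so a low-weight zone can be forced to hold far more than its weight-implied share. A back-of-envelope sketch with assumed numbers (the zone count and weight share are invented, picked only to land near the reported figure):

```python
# Back-of-envelope illustration (assumed numbers, not from the log) of why
# draining a zone in a 9+3 EC ring can demand a huge overload: dispersion
# wants fragments spread across zones no matter how low the weights are set.
replicas = 12              # 9 data + 3 parity fragments act as 12 "replicas"
zones = 4                  # assumed zone count
zone_weight_share = 0.055  # assumed: the drained zone holds 5.5% of total weight

# With even dispersion, each zone holds replicas/zones fragments of every
# partition, i.e. this share of all fragments:
required_share = (replicas / zones) / replicas          # 0.25

# Overload is the factor by which a device may exceed its weight-implied
# share; the builder reports the value needed to satisfy dispersion:
required_overload = required_share / zone_weight_share - 1
print(f"required overload ~ {required_overload:.1%}")   # ~ 354.5%
```

Under numbers anywhere near this shape, the full server is absorbing its zone's mandatory fragment share regardless of how the weights are nudged, which would also be consistent with disks filling roughly 5 times faster than their weights predict.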
*** _david_sohonet has joined #openstack-swift | 17:29 | |
*** zaitcev has joined #openstack-swift | 17:46 | |
*** ChanServ sets mode: +v zaitcev | 17:46 | |
*** mvkr has quit IRC | 17:48 | |
*** e0ne has joined #openstack-swift | 17:59 | |
*** e0ne has quit IRC | 18:17 | |
*** pcaruana has quit IRC | 18:56 | |
*** e0ne has joined #openstack-swift | 19:13 | |
*** openstackgerrit has quit IRC | 19:23 | |
clayg | timburke: can you point me at the versioning related patch with the etag/listing plumbing? I didn't see it on https://wiki.openstack.org/wiki/Swift/PriorityReviews | 19:29 |
timburke | clayg, https://review.openstack.org/#/c/633094/ | 19:34 |
patchbot | patch 633094 - swift - Allow "harder" symlinks - 4 patch sets | 19:34 |
timburke | then i've got https://review.openstack.org/#/c/633857/ to start trying to use that... but i'm going to need to work on it some more | 19:35 |
patchbot | patch 633857 - swift - WIP: symlink-backed versioned_writes - 1 patch set | 19:35 |
*** hseipp has quit IRC | 19:49 | |
*** e0ne has quit IRC | 20:22 | |
zaitcev | If no one raises an alarm, I'm going to land https://review.openstack.org/640518 | 21:19 |
patchbot | patch 640518 - swift - py3: fix copying unicode names - 1 patch set | 21:19 |
zaitcev | This changes py2 by making it a little stricter, so previously undefined behaviour will now raise a traceback. I _think_ it looks safe. | 21:20 |
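For readers following along, the round-trip at issue looks roughly like this. A hedged sketch, not the actual patch: on py3, WSGI carries paths and headers as "WSGI strings" (unicode whose code points stand for latin-1 bytes), so copying an object with a non-ASCII name means re-encoding carefully, and a strict decode tracebacks where the py2 behaviour was undefined. The helper names below are local to the sketch:

```python
# Hedged sketch (not the actual patch) of the py3 round-trip involved in
# copying an object with a non-ASCII name: WSGI strings are unicode whose
# code points stand for latin-1 bytes, so UTF-8 names need re-encoding.
def str_to_wsgi(native):
    """Native unicode string -> WSGI string (py3 semantics)."""
    return native.encode("utf-8").decode("latin-1")

def wsgi_to_str(wsgi):
    """WSGI string -> native unicode string; the strict decode means bad
    input raises UnicodeDecodeError instead of silently corrupting."""
    return wsgi.encode("latin-1").decode("utf-8")

name = "sch\u00f6nes/objekt"   # object name with a non-ASCII character
assert wsgi_to_str(str_to_wsgi(name)) == name
```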
*** mvkr has joined #openstack-swift | 21:29 | |
*** openstackgerrit has joined #openstack-swift | 21:31 | |
openstackgerrit | Pete Zaitcev proposed openstack/swift master: py3: port proxy account controller https://review.openstack.org/637653 | 21:31 |
*** timss- has quit IRC | 21:53 | |
*** timss- has joined #openstack-swift | 21:54 | |
*** kota_ has quit IRC | 21:54 | |
*** kota_ has joined #openstack-swift | 21:57 | |
*** ChanServ sets mode: +v kota_ | 21:57 | |
openstackgerrit | Tim Burke proposed openstack/swift master: Autovivify X-Versions-Location container https://review.openstack.org/265015 | 22:04 |
mattoliverau | morning | 22:05 |
*** tkajinam has joined #openstack-swift | 22:56 | |
*** openstackgerrit has quit IRC | 23:28 |