*** tosky has quit IRC | 01:11 | |
*** bkopilov has quit IRC | 03:10 | |
*** Supun has joined #openstack-swift | 04:01 | |
*** Supun has quit IRC | 04:02 | |
*** Supun has joined #openstack-swift | 04:03 | |
*** bkopilov has joined #openstack-swift | 04:29 | |
*** yashmurty has joined #openstack-swift | 04:56 | |
*** psachin has joined #openstack-swift | 05:24 | |
*** SkyRocknRoll has joined #openstack-swift | 05:30 | |
*** SkyRocknRoll has quit IRC | 05:52 | |
*** SkyRocknRoll has joined #openstack-swift | 05:53 | |
*** zaitcev has joined #openstack-swift | 05:53 | |
*** ChanServ sets mode: +v zaitcev | 05:53 | |
*** hoonetorg has quit IRC | 06:02 | |
*** hoonetorg has joined #openstack-swift | 06:15 | |
*** SkyRocknRoll has quit IRC | 06:27 | |
*** yashmurt_ has joined #openstack-swift | 06:36 | |
*** yashmurty has quit IRC | 06:39 | |
*** yashmur__ has joined #openstack-swift | 06:39 | |
*** yashmur__ has quit IRC | 06:40 | |
*** yashmurt_ has quit IRC | 06:42 | |
*** Supun has quit IRC | 07:00 | |
*** Supun has joined #openstack-swift | 07:01 | |
*** psachin has quit IRC | 07:05 | |
*** Supun has quit IRC | 07:08 | |
*** armaan has joined #openstack-swift | 07:39 | |
*** yashmurty has joined #openstack-swift | 08:17 | |
*** yashmurty has quit IRC | 08:21 | |
*** yashmurty has joined #openstack-swift | 08:36 | |
*** yashmurty has quit IRC | 08:41 | |
*** yashmurty has joined #openstack-swift | 08:44 | |
*** yashmurty has quit IRC | 08:49 | |
*** SkyRocknRoll has joined #openstack-swift | 08:49 | |
*** yashmurty has joined #openstack-swift | 09:00 | |
*** yashmurty has quit IRC | 09:05 | |
*** Supun has joined #openstack-swift | 09:28 | |
*** Supun has quit IRC | 09:48 | |
*** armaan has quit IRC | 10:10 | |
*** armaan has joined #openstack-swift | 10:17 | |
*** armaan has quit IRC | 10:18 | |
*** armaan has joined #openstack-swift | 10:48 | |
*** Supun has joined #openstack-swift | 10:50 | |
*** Supun has quit IRC | 11:07 | |
*** bkopilov has quit IRC | 11:28 | |
*** tosky has joined #openstack-swift | 12:41 | |
*** bkopilov has joined #openstack-swift | 13:12 | |
*** links has joined #openstack-swift | 13:42 | |
*** silor has joined #openstack-swift | 14:16 | |
*** Supun has joined #openstack-swift | 15:31 | |
*** yashmurty has joined #openstack-swift | 15:41 | |
DHE | from a design standpoint, swift is all about handling failures gracefully. do you think this could be extended to using systems known to be old and (theoretically) more likely to fall over from natural causes? | 15:42 |
*** Supun has quit IRC | 15:42 | |
*** yashmurty has quit IRC | 15:46 | |
*** Supun has joined #openstack-swift | 15:54 | |
-openstackstatus- NOTICE: Zuul has been restarted and queues were saved. However, patches uploaded after 14:40UTC may have been missed. Please recheck your patchsets where needed. | 15:56 | |
*** Supun has quit IRC | 16:05 | |
*** Supun has joined #openstack-swift | 16:06 | |
notmyname | DHE: I'm not sure what you mean, exactly. you're right that swift is all about handling failures. failed hardware is normal and expected. are you talking about putting swift "on top of" (whatever that means) other storage systems? | 16:21 |
*** Supun has quit IRC | 16:29 | |
*** Supun has joined #openstack-swift | 16:29 | |
DHE | notmyname: I have a number of ~8-10 year old servers I'm thinking of throwing into a swift stack. New hard drives though. while the machines have been fairly good, they are well past their intended lifespan | 16:42 |
notmyname | no worries. if they run python 2, they'll be fine | 16:44 |
notmyname | if they are 32 bit, you'll have to lower the max file size limit to at most (2**32)-1 | 16:44 |
notmyname | for the whole cluster, not just those machines | 16:44 |
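(For context: the cluster-wide cap referred to here is max_file_size in the [swift-constraints] section of swift.conf. A minimal sketch, assuming the 32-bit ceiling of (2**32)-1 bytes; the value shown is illustrative, not a recommendation:)

    [swift-constraints]
    # default is 5368709122 (~5 GiB); with any 32-bit object servers in the
    # cluster, cap object size at (2**32)-1 bytes for the whole cluster
    max_file_size = 4294967295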
DHE | they're 64 bit, but only just. Xeons based on the Core2 Duo CPU | 16:44 |
notmyname | also, slower machines could end up being a bottleneck in the cluster for throughput or replication cycles | 16:45 |
notmyname | but it'll be fine! :-) | 16:45 |
notmyname | I've got swift running at home on some 32bit ARM chips. I've got it at work on fancy xeon-D chips. I've put it on a raspberry pi before. it all works | 16:46 |
notmyname | (note: "works" doesn't imply any particular performance guarantee ;-) ) | 16:47 |
DHE | well, these machines have 1gig NICs and I'm not planning on changing that. as storage bricks I think that'll be okay. | 16:47 |
DHE | so I think that'll do | 16:47 |
* DHE is just building up his first lab now (not related to these machines) so I have much to do | 16:48 | |
*** silor has quit IRC | 16:49 | |
*** silor has joined #openstack-swift | 16:49 | |
*** Supun has quit IRC | 17:01 | |
*** links has quit IRC | 17:51 | |
*** armaan has quit IRC | 17:58 | |
*** links has joined #openstack-swift | 18:31 | |
*** SkyRocknRoll has quit IRC | 18:36 | |
*** armaan has joined #openstack-swift | 18:40 | |
*** yashmurty has joined #openstack-swift | 18:42 | |
*** yashmurty has quit IRC | 18:46 | |
*** links has quit IRC | 20:03 | |
*** silor has quit IRC | 20:04 | |
*** bkopilov has quit IRC | 20:59 | |
*** bkopilov has joined #openstack-swift | 21:12 | |
DHE | first lab network is up, but I think I've made a mistake somewhere. I have 2 regions with 2 availability zones each (and 6 hosts total) configured with 3-way replication, but a partition got an object put onto 2 devices on the same host. that seems wrong. | 21:21 |
*** nucleo has joined #openstack-swift | 21:25 | |
notmyname | yes it does. check the output of `swift-ring-builder` for that ring. make sure you have set things as you expect | 21:25 |
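(A sketch of the inspection being suggested; the builder file, ring file, and object path below are assumptions:)

    # list devices, weights, balance and dispersion for the object ring
    swift-ring-builder object.builder
    # show which nodes a particular object's partition maps to
    swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject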
DHE | slight correction... 5 hosts, 7 disks. might it be due to the weighting? one of the zones has low-weight devices. | 21:30 |
notmyname | oh, do you mean on the same server or the same drive? | 21:31 |
DHE | not the same drive, but the same server on different drives. with 3-way replication and 4 availability zones total I assumed this would not happen | 21:32 |
notmyname | should never be on the same drive (as reported by swift-get-nodes or anything; in reality it would be the same file on disk) | 21:32 |
notmyname | could be because of weighting, if they are not all equal | 21:32 |
DHE | I guess that makes sense | 21:32 |
notmyname | what's your balance number? is it zero? | 21:32 |
DHE | 0.00 balance, 12.64 dispersion | 21:33 |
notmyname | if you were adding capacity to an existing ring, it may take a few rebalances to get moved | 21:33 |
notmyname | ah | 21:33 |
notmyname | so you have different weights? | 21:33 |
DHE | yes, 5x100, 1x50, 1x30 | 21:33 |
DHE | where the 30 and 50 comprise a single availability zone | 21:33 |
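(Illustration only: weights like these are set when devices are added to the builder; the IPs, port, and device names below are made up:)

    # two low-weight devices forming the small availability zone (region 2, zone 2)
    swift-ring-builder object.builder add r2z2-10.0.0.6:6200/sdb1 50
    swift-ring-builder object.builder add r2z2-10.0.0.7:6200/sdb1 30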
notmyname | yeah, ok | 21:37 |
notmyname | if you are really worried about it, you can use the "overload" parameter to force balance over dispersion | 21:37 |
notmyname | balance will target even percentage used on each drive (ie every drive has the same % used). dispersion will target unique failure domains (but when the smallest drive is 100% full, you're done) | 21:38 |
DHE | I can work with that | 21:39 |
notmyname | overload is a parameter that lets you adjust the trade-off between the two | 21:39 |
notmyname | ie if you overload a drive, it will take more partitions (ie dispersion) at the expense of balance | 21:39 |
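(A minimal sketch of the overload knob described above; the 10% value and builder file name are assumptions, not recommendations:)

    # let each device take up to 10% more than its weight-fair share of
    # partitions, so dispersion across failure domains wins over balance
    swift-ring-builder object.builder set_overload 10%
    swift-ring-builder object.builder rebalance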
DHE | this is my lab, so pathological cases are business as usual | 21:39 |
*** rcernin has joined #openstack-swift | 21:43 | |
DHE | notmyname: thanks a lot | 21:46 |
notmyname | np | 21:46 |
*** zaitcev_ has joined #openstack-swift | 21:54 | |
*** ChanServ sets mode: +v zaitcev_ | 21:54 | |
*** zaitcev has quit IRC | 21:58 | |
*** threestrands has joined #openstack-swift | 22:17 | |
*** threestrands has quit IRC | 22:17 | |
*** threestrands has joined #openstack-swift | 22:17 | |
*** armaan has quit IRC | 22:38 | |
mattoliverau | morning | 22:52 |
*** nucleo has quit IRC | 23:11 | |
*** yashmurty has joined #openstack-swift | 23:13 | |
*** yashmurty has joined #openstack-swift | 23:14 | |
*** kei_yama has joined #openstack-swift | 23:15 | |
*** Lei_ has joined #openstack-swift | 23:22 | |
DHE | it's working, in spite of my stupidly disabling xattrs on one of the object servers... | 23:44 |
mattoliverau | \o/ | 23:58 |