*** kyles_ne has quit IRC | 00:31 | |
*** shri has quit IRC | 00:41 | |
*** openstackgerrit has quit IRC | 00:50 | |
*** kyles_ne has joined #openstack-swift | 00:50 | |
*** lorrie has joined #openstack-swift | 00:53 | |
lorrie | Hi, I have a question about using Swift. Currently I have a Swift server installed using SAIO. I split it into 4 nodes on the local storage; let's call it Hard_drive_1. Now Hard_drive_1 is almost full, so I added Hard_drive_2 and created one node on it. I added it to the ring and rebalanced, and it works fine. I have tested by uploading some files and see the used space of Hard disk 2 is increasing, which means there are really files being written to it | 00:57 |
---|---|---|
lorrie | But my question is, it looks like the uploads also keep writing to Hard Disk 1, which is almost full. | 00:58 |
lorrie | So I want to know: what will happen when Hard Disk 1 is really full? Will Swift automatically notice that and then only write data to Hard Disk 2? | 00:59 |
lorrie | Because if it still keeps writing to a full disk, it will cause write failures for sure. | 00:59 |
lorrie | thanks | 00:59 |
MooingLemur | lorrie: when you rebalance, it should start redistributing the data so that each drive is roughly equally full | 01:08 |
MooingLemur | that is, if you assigned the weights proportionally | 01:09 |
MooingLemur | how many devices total do you have configured now? | 01:10 |
lorrie | i have 2 devices, hard disk 1 has 4 nodes, hard disk 2 has 1 node | 01:11 |
lorrie | and all the nodes are equally weighted | 01:11 |
MooingLemur | I'm confused. nodes should have disks, disks should not have nodes :) | 01:11 |
zaitcev | should've moved 2 devices to "Hard_drive_2" instead of adding the 5th | 01:12 |
zaitcev | without any changes to the ring | 01:12 |
lorrie | sorry for my mistyping | 01:12 |
lorrie | i mean, disk 1 has 4 partitions | 01:13 |
*** kyles_ne has quit IRC | 01:13 | |
lorrie | disk 2 has 1 partition | 01:13 |
MooingLemur | SAIO is good for learning concepts, but for a real swift cluster, you'll obviously want to have one physical disk mapped to one device in the ring | 01:14 |
lorrie | i only have one node; at first it only had one device with 4 partitions, and they are almost full | 01:14 |
lorrie | so i added one new device with 1 partition to this node | 01:15 |
lorrie | so now it has two devices | 01:15 |
lorrie | and i do a rebalance | 01:16 |
zaitcev | You can still trick this around with symlinks from /srv/node. Or better yet, bind mounts. | 01:16 |
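A minimal sketch of the bind-mount approach zaitcev is suggesting, assuming the new drive is mounted at /mnt/hdd2 and that sdb3 and sdb4 are the two devices being relocated (all of these paths are assumptions, not taken from the log):

    # stop swift so nothing writes while the data is copied (hypothetical paths)
    swift-init all stop
    cp -a /srv/node/sdb3 /srv/node/sdb4 /mnt/hdd2/
    # bind the copies back over the original mount points; the ring is left untouched
    mount --bind /mnt/hdd2/sdb3 /srv/node/sdb3
    mount --bind /mnt/hdd2/sdb4 /srv/node/sdb4
    swift-init all start

The originals hidden under the bind mounts can then be deleted to free space on the first disk.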
MooingLemur | actually.. are you talking about fdisk partitions or swift partitions? | 01:16 |
zaitcev | Move 2 of the 4 to the new drive. Once you verify it works, remove the 5th device you added. | 01:16 |
lorrie | swift partition | 01:16 |
zaitcev | okay that escalated quickly | 01:17 |
MooingLemur | you don't directly specify how many swift partitions are on each drive, but you can specify the weight so that the rebalance fills them proportionally | 01:17 |
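A hedged sketch of what "specify the weight" looks like in practice; the device id (d4) and the weight value are placeholders, and weights are commonly set proportional to capacity:

    # e.g. give a 2 TB drive a weight of 2000, then rebalance
    swift-ring-builder object.builder set_weight d4 2000
    swift-ring-builder object.builder rebalance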
lorrie | but i just did a test and found the disk space of disk 1 is still increasing | 01:18 |
zaitcev | of course, duh | 01:18 |
MooingLemur | can you pastebin the output of: swift-ring-builder object.builder | 01:18 |
lorrie | sure | 01:18 |
lorrie | object.builder, build version 6 | 01:19 |
lorrie | 1024 partitions, 3.000000 replicas, 1 regions, 5 zones, 5 devices, 0.39 balance | 01:19 |
lorrie | The minimum number of hours before a partition can be reassigned is 1 | 01:19 |
lorrie | Devices: id region zone ip address port replication ip replication port name weight partitions balance meta | 01:19 |
lorrie | 0 1 1 127.0.0.1 6010 127.0.0.1 6010 sdb1 1.00 615 0.10 | 01:19 |
MooingLemur | run from whatever directory object.builder is in, usually /etc/swift | 01:19 |
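For reference, a minimal sketch of the inspection step, assuming the SAIO default layout:

    cd /etc/swift                       # builder and ring files usually live here
    swift-ring-builder object.builder   # prints regions, zones, devices, weights, partition counts and balance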
*** occup4nt has joined #openstack-swift | 01:20 | |
lorrie | sdb* is the old disk | 01:20 |
lorrie | sdc1 is the new add device | 01:20 |
MooingLemur | didn't see sdc1 in your output | 01:20 |
lorrie | 0 1 1 127.0.0.1 6010 127.0.0.1 6010 sdb1 1.00 615 0.10 | 01:21 |
lorrie | 1 1 2 127.0.0.1 6020 127.0.0.1 6020 sdb2 1.00 615 0.10 | 01:21 |
lorrie | 2 1 3 127.0.0.1 6030 127.0.0.1 6030 sdb3 1.00 615 0.10 | 01:21 |
*** btorch_ has joined #openstack-swift | 01:21 | |
*** occupant has quit IRC | 01:21 | |
*** kevinbenton has quit IRC | 01:21 | |
*** mordred has quit IRC | 01:21 | |
*** dosaboy has quit IRC | 01:21 | |
*** echevemaster has quit IRC | 01:21 | |
*** otherjon has quit IRC | 01:21 | |
*** btorch has quit IRC | 01:21 | |
lorrie | 3 1 4 127.0.0.1 6040 127.0.0.1 6040 sdb4 1.00 615 0.10 | 01:21 |
lorrie | 4 1 5 127.0.0.1 6050 127.0.0.1 6050 sdc1 1.00 612 -0.39 | 01:22 |
*** kevinbenton_ has joined #openstack-swift | 01:22 | |
*** mordred has joined #openstack-swift | 01:22 | |
MooingLemur | oh, so you do have that many "physical" disks.. | 01:22 |
*** kevinbenton_ is now known as kevinbenton | 01:22 | |
*** dosaboy has joined #openstack-swift | 01:22 | |
lorrie | yes | 01:22 |
MooingLemur | your weight is 1 on each, which means they will all be preferred equally | 01:22 |
lorrie | yes that's what i expect | 01:23 |
MooingLemur | so each partition is roughly the same size? | 01:23 |
lorrie | yes | 01:23 |
MooingLemur | each device, that is | 01:23 |
*** otherjon has joined #openstack-swift | 01:23 | |
lorrie | actually sdb* is partitioned by swift | 01:24 |
MooingLemur | so the used space on each device is not roughly equal? | 01:24 |
lorrie | under /dev/ it only has sdb1 | 01:24 |
MooingLemur | there's a problem there then | 01:24 |
*** echevemaster has joined #openstack-swift | 01:24 | |
MooingLemur | well, you have /srv/node/sdb1 /srv/node/sdb2, etc? | 01:25 |
lorrie | i configured 4 partitions for it in the /etc/swift/account-server/, container-server/ and object-server/ folders | 01:25 |
lorrie | yes | 01:25 |
MooingLemur | what devices are mounted on those? | 01:25 |
lorrie | d | 01:26 |
lorrie | do i disconnect | 01:26 |
MooingLemur | if you only have a real sdb1, I don't understand what you're doing having a mountpoint /srv/node/sdb2 etc | 01:26 |
zaitcev | it probably ends up right on the root filesystem :-) | 01:27 |
*** lorrie_ has joined #openstack-swift | 01:27 | |
zaitcev | and he probably has mount_check=false because SAIO docs say that | 01:27 |
lorrie_ | sorry, looks like i just lost my connection | 01:27 |
MooingLemur | 2014-11-07 18:25:39 < MooingLemur> what devices are mounted on those? | 01:27 |
MooingLemur | 2014-11-07 18:26:42 < lorrie> d | 01:27 |
MooingLemur | 2014-11-07 18:26:50 < lorrie> do i disconnect | 01:27 |
MooingLemur | 2014-11-07 18:26:53 < MooingLemur> if you only have a real sdb1, I don't understand what you're doing having a mountpoint /srv/node/sdb2 etc | 01:27 |
*** lorrie_ has quit IRC | 01:27 | |
MooingLemur | oh jeez | 01:28 |
MooingLemur | I give up | 01:28 |
zaitcev | MooingLemur: http://www.penny-arcade.com/comic/2014/09/19/take-it-from-my-hands | 01:28 |
lorrie | i m back | 01:28 |
lorrie | sorry, are u still there? | 01:28 |
lorrie | i just followed the SAIO instructions to do it; they create 4 mount points for one device | 01:29 |
zaitcev | I am still holding out hope that he didn't miss what I said about moving 2 of the original 4 to sdc1 and then dropping the 5th zone. | 01:29 |
zaitcev | But it's a very small hope | 01:30 |
lorrie | sure, i can just use one mount point for one device though, that's what i did for disk 2 | 01:30 |
MooingLemur | I don't have time to stick around, but I'm wondering if you simply don't have the rsyncds running properly, or the swift-object-replicator, etc | 01:30 |
zaitcev | okay, headdesk time | 01:30 |
MooingLemur | otherwise you're not understanding that all 5 of the mount points are going to receive the same amount of data each, which means your sdb device will be getting 4x as much data as sdc is. | 01:31 |
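Rough arithmetic behind the "4x" claim, assuming all five devices keep weight 1.00 and ignoring the per-zone replica constraints zaitcev mentions below:

    total weight  = 4 x 1.0 (sdb1-4) + 1 x 1.0 (sdc1) = 5.0
    share on sdb  = 4.0 / 5.0 = 80% of the partition-replicas
    share on sdc1 = 1.0 / 5.0 = 20%  (i.e. sdb gets 4x as much new data)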
lorrie | rsyncds runs ok, and i can see the additional disk works | 01:32 |
MooingLemur | and if you tweak that, you're probably still not going to be properly balanced until you add even more devices, because the 3-replica policy is going to supersede your weighting | 01:32 |
zaitcev | well even with 100 to 1 balance it's gonna happen anyway if replication is 3 | 01:32 |
lorrie | yes i understand what you mean | 01:32 |
lorrie | but i just don't know if sdb is full | 01:32 |
lorrie | but sdc is empty | 01:32 |
lorrie | will sdb still receive data? | 01:33 |
MooingLemur | shouldn't be empty. | 01:33 |
zaitcev | because of the unique-as-possible spread, some partitions end up in each zone no matter what weighting is ordered | 01:33 |
zaitcev | and he's got 5 zones | 01:33 |
MooingLemur | if absolutely empty, make sure permissions /srv/node/sdc1 are set properly | 01:34 |
MooingLemur | ownership | 01:34 |
MooingLemur | otherwise watch your syslog for clues :) | 01:34 |
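A minimal sketch of the ownership check MooingLemur means; the user/group and the log path are assumptions, since SAIO usually runs everything as your login user:

    sudo chown -R ${USER}:${USER} /srv/node/sdc1
    sudo tail -f /var/log/syslog     # or /var/log/messages, to watch for object-server errors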
zaitcev | naah, no need. "I have tested by uploading some files and see the used space of Hard disk 2 is increasing, which means there are really files being written to it" | 01:34 |
lorrie | or let's say, how can we expand the size of a swift server if the original disk space is almost full? | 01:34 |
MooingLemur | to expand the size, you add more devices | 01:35 |
MooingLemur | once you rebalance, the usage on the existing disks will go down | 01:35 |
zaitcev | lorrie: I told you twice, just move some of the old 4 devices to sdc, jeez. Don't touch the ring | 01:35 |
zaitcev | well yeah, but why let replicators struggle with that when you can just run cp -a once | 01:35 |
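For the "add more devices" route, a rough sketch with made-up zone, port and device values; the real ones have to match your *-server configs:

    swift-ring-builder object.builder add r1z6-127.0.0.1:6060/sdd1 1.0
    swift-ring-builder object.builder rebalance
    # repeat for account.builder and container.builder so all three rings stay in step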
lorrie | thanks a lot to MooingLemur and zaitcev, my swift is working fine after i added the new device to it | 01:36 |
lorrie | @MooingLemur, you mean i actually don't need to do anything and the rebalance will take care of that, right? | 01:37 |
zaitcev | look, a rebalance moves some of the partitions (of which you have 1024) | 01:38 |
zaitcev | so yeah | 01:39 |
zaitcev | But even so you cannot balance it so that 4/5ths ends up on 1 device (sdc1) | 01:39 |
lorrie | @zaitcev, when you say move some of the old 4 devices, what do you mean? | 01:39 |
lorrie | @zaitcev, if i divide the newly added sdc1 into 4 swift partitions like i did for sdb1, will it then be rebalanced? | 01:41 |
zaitcev | look, they are all mounted under /srv/node, right? so... just relocate some of that stuff with plain Linux commands. Although I'm having second thoughts about it; considering the trouble you're having already, this may easily end in a disaster. | 01:41 |
zaitcev | You don't even need to do that | 01:41 |
zaitcev | I'd say just 2 is fine | 01:42 |
zaitcev | Then make the weight 2.0 for each of the new 2... You already have 1.0 on each of the existing 4, right? | 01:42 |
lorrie | @zaitcev, this is just a test system so actually i can just rebuild it | 01:42 |
lorrie | so it is fine and i just need to know the right way to do it | 01:43 |
zaitcev | 3 1 4 127.0.0.1 6040 127.0.0.1 6040 sdb4 1.00 615 0.10 | 01:43 |
zaitcev | That 1.00 is weight | 01:43 |
lorrie | @zaitcev yes i have 1.0 for 4, i can use 2.0 for 2 more | 01:44 |
lorrie | oh i see, in this way the new files will be written to the new disk more | 01:45 |
zaitcev | yeah, you have "build version 6, 1024 partitions, 3.000000 replicas", so R=3 (well, it's a float number nowadays) | 01:46 |
zaitcev | so... if you have 6 zones, 4 of 1.0 and 2 of 2.0, it should spread itself | 01:46 |
zaitcev | once you've run "swift-ring-builder xxxxx rebalance", you push the ring into /etc/swift | 01:46 |
zaitcev | then replicators will copy stuff from overfilled zones to the new, underfilled ones | 01:47 |
lorrie | ok so then i don't have to copy files from one device to another right? | 01:47 |
zaitcev | they delete whatever does not match the new ring, so amount of stuff in old drives should decrease | 01:47 |
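Rough arithmetic for the layout zaitcev sketches above (4 old devices at weight 1.0, 2 new ones at 2.0), again ignoring the per-zone replica constraints:

    total weight   = 4 x 1.0 + 2 x 2.0 = 8.0
    old disk share = 4.0 / 8.0 = 50%
    new disk share = 4.0 / 8.0 = 50%, so roughly half the data should migrate over time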
lorrie | cool, got it! | 01:48 |
zaitcev | assuming all your replicators and auditors run | 01:48 |
lorrie | thank you very much for your help! | 01:48 |
zaitcev | e.g. don't just do "swift-init main start", you must have 'swift-init rest start' too | 01:48 |
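A minimal sketch of the two commands, assuming the stock swift-init groups:

    swift-init main start   # proxy, account, container and object servers
    swift-init rest start   # roughly: replicators, auditors, updaters and the other background daemons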
lorrie | currently i did: swift-init all stop | 01:49 |
lorrie | then: startmain | 01:49 |
zaitcev | oh, and make sure rsync works fine | 01:49 |
zaitcev | I usually look with tail -f /var/log/rsync.log | 01:50 |
lorrie | yes it works fine | 01:51 |
lorrie | so after rebalance i need to stop swift, start it, and do swift-init rest start? | 01:52 |
zaitcev | no | 01:52 |
zaitcev | swift is designed so you don't need to restart | 01:53 |
zaitcev | well, actually since you stopped already | 01:53 |
zaitcev | just do swift-init all start | 01:53 |
lorrie | so i don't need to stop, just do swift-init main start, and swift-init rest start? | 01:53 |
zaitcev | you can stop... we used to do this 2 years ago, a restart to pick up the new ring | 01:54 |
zaitcev | now the daemons check the timestamps and load new rings | 01:55 |
lorrie | ok, stop and then start. thank you so much | 01:55 |
*** lorrie has quit IRC | 02:19 | |
*** tsg has quit IRC | 03:03 | |
*** goodes has quit IRC | 03:12 | |
*** goodes has joined #openstack-swift | 03:14 | |
*** zhiyan has quit IRC | 03:18 | |
*** zhiyan has joined #openstack-swift | 03:21 | |
*** abhirc has quit IRC | 03:33 | |
*** echevemaster has quit IRC | 04:12 | |
*** foexle has quit IRC | 04:27 | |
*** zaitcev has quit IRC | 05:06 | |
*** haomaiwang has joined #openstack-swift | 05:12 | |
*** anticw has quit IRC | 05:19 | |
*** SkyRocknRoll has joined #openstack-swift | 06:43 | |
*** nottrobin has quit IRC | 06:44 | |
*** nottrobin has joined #openstack-swift | 06:47 | |
*** sfineberg has quit IRC | 08:08 | |
*** nellysmitt has joined #openstack-swift | 08:19 | |
*** kopparam has joined #openstack-swift | 11:17 | |
*** exploreshaifali has joined #openstack-swift | 11:21 | |
*** haomaiwang has quit IRC | 11:34 | |
*** haomaiwang has joined #openstack-swift | 11:35 | |
*** mkollaro has joined #openstack-swift | 11:40 | |
*** nellysmitt has quit IRC | 12:06 | |
*** kopparam has quit IRC | 12:25 | |
*** mkollaro has quit IRC | 12:29 | |
*** exploreshaifali has quit IRC | 12:30 | |
*** haomaiw__ has joined #openstack-swift | 13:06 | |
*** haomaiwang has quit IRC | 13:06 | |
*** exploreshaifali has joined #openstack-swift | 13:14 | |
*** haomaiwang has joined #openstack-swift | 13:20 | |
*** tsg_ has joined #openstack-swift | 13:23 | |
*** haomaiw__ has quit IRC | 13:24 | |
*** kopparam has joined #openstack-swift | 13:26 | |
*** kopparam has quit IRC | 13:31 | |
*** kopparam has joined #openstack-swift | 13:37 | |
*** kopparam has quit IRC | 13:43 | |
*** kopparam has joined #openstack-swift | 13:54 | |
*** kopparam has quit IRC | 13:58 | |
*** SkyRocknRoll has quit IRC | 14:02 | |
*** SkyRocknRoll has joined #openstack-swift | 14:03 | |
*** tab____ has joined #openstack-swift | 14:14 | |
*** kopparam has joined #openstack-swift | 14:21 | |
*** kopparam has quit IRC | 14:26 | |
*** nshaikh has joined #openstack-swift | 14:42 | |
*** tgohad has joined #openstack-swift | 14:57 | |
*** tsg_ has quit IRC | 15:00 | |
*** tgohad has quit IRC | 15:11 | |
*** kopparam has joined #openstack-swift | 15:18 | |
*** exploreshaifali has quit IRC | 15:25 | |
*** mkollaro has joined #openstack-swift | 15:33 | |
*** kopparam has quit IRC | 15:58 | |
*** haomaiwang has quit IRC | 16:02 | |
*** haomaiwang has joined #openstack-swift | 16:02 | |
*** nellysmitt has joined #openstack-swift | 16:11 | |
*** tsg_ has joined #openstack-swift | 16:22 | |
*** mkollaro has quit IRC | 16:29 | |
*** SkyRocknRoll has quit IRC | 16:34 | |
*** nellysmitt has quit IRC | 16:35 | |
*** nellysmitt has joined #openstack-swift | 16:35 | |
*** shakamunyi has joined #openstack-swift | 16:37 | |
*** nshaikh has quit IRC | 16:43 | |
*** 1JTAAXF1S has joined #openstack-swift | 16:48 | |
*** haomaiwang has quit IRC | 16:52 | |
*** nellysmitt has quit IRC | 16:53 | |
*** nellysmitt has joined #openstack-swift | 16:57 | |
*** exploreshaifali has joined #openstack-swift | 16:57 | |
*** mkollaro has joined #openstack-swift | 17:05 | |
*** kopparam has joined #openstack-swift | 17:20 | |
*** kopparam has quit IRC | 17:26 | |
*** tsg_ has quit IRC | 17:28 | |
*** EmilienM has quit IRC | 17:39 | |
*** EmilienM has joined #openstack-swift | 17:39 | |
*** EmilienM has quit IRC | 17:53 | |
*** EmilienM has joined #openstack-swift | 17:53 | |
*** EmilienM has quit IRC | 17:55 | |
*** EmilienM has joined #openstack-swift | 17:56 | |
*** shakamunyi_ has joined #openstack-swift | 18:20 | |
*** shakamunyi has quit IRC | 18:24 | |
*** brnelson has quit IRC | 18:42 | |
*** brnelson has joined #openstack-swift | 18:42 | |
*** Tyger_ has joined #openstack-swift | 19:14 | |
*** tsg_ has joined #openstack-swift | 19:14 | |
*** nshaikh has joined #openstack-swift | 19:27 | |
*** nshaikh has left #openstack-swift | 19:53 | |
*** infotection has quit IRC | 20:02 | |
*** infotection has joined #openstack-swift | 20:07 | |
*** slDabbler has joined #openstack-swift | 20:11 | |
*** slDabbler has quit IRC | 20:15 | |
*** Tyger_ has quit IRC | 20:17 | |
*** exploreshaifali has quit IRC | 20:19 | |
*** tsg_ has quit IRC | 20:28 | |
*** kopparam has joined #openstack-swift | 20:30 | |
*** tsg has joined #openstack-swift | 20:31 | |
*** kopparam has quit IRC | 21:22 | |
*** tsg has quit IRC | 21:36 | |
*** tab____ has quit IRC | 22:10 | |
*** nellysmitt has quit IRC | 23:02 | |
*** TaiSHi has quit IRC | 23:15 | |
*** TaiSHi has joined #openstack-swift | 23:16 | |
*** TaiSHi has joined #openstack-swift | 23:16 | |
*** portante has quit IRC | 23:58 |