*** gyee has quit IRC | 00:42 | |
tdasilva | anybody know/remember why we pop policy_type from info? https://github.com/openstack/swift/blob/master/swift/common/storage_policy.py#L287 | 00:47 |
*** two_tired2 has joined #openstack-swift | 00:51 | |
clayg | fatema_: yes, after that patch x-timestamp became relevant and useful on all proxy->object requests | 00:56 |
*** cloudnull has quit IRC | 01:00 | |
*** mathiasb has quit IRC | 01:00 | |
*** Guest58757 has joined #openstack-swift | 01:03 | |
notmyname | tdasilva: we've historically hidden the details of the durability from the api clients. I think the argument is that an operator can expose that via the storage policy name, if they choose. | 01:44 |
notmyname | in general, an API client cannot make a smart decision based on the value of that field. there's nothing they can do differently if it's "replicated", "ec", or "fancy_flash_optimized" or anything else we choose to have in the future | 01:45 |
notmyname | so therefore we hide it | 01:45 |
notmyname | (at least, that's why I think we hide it, and that argument makes sense to me) | 01:46 |
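The idea notmyname describes, sketched below in Python: the operator-side policy metadata carries a policy_type, but the copy exposed to API clients via /info has that key popped, so the durability scheme stays an operator detail. This is an illustrative sketch with made-up names, not the actual swift.common.storage_policy code.

```python
# Illustrative sketch only -- names and data are made up, not swift's real API.
EXAMPLE_POLICIES = [
    {'name': 'gold', 'aliases': 'gold, default', 'policy_type': 'replication'},
    {'name': 'archive', 'aliases': 'archive', 'policy_type': 'erasure_coding'},
]


def public_policy_info(policies):
    """Return policy info suitable for exposing to API clients.

    policy_type is popped because clients can't act on it; operators who
    want to advertise durability can encode it in the policy name instead.
    """
    public = []
    for policy in policies:
        info = dict(policy)           # copy, don't mutate the operator config
        info.pop('policy_type', None)
        public.append(info)
    return public


if __name__ == '__main__':
    print(public_policy_info(EXAMPLE_POLICIES))
    # -> [{'name': 'gold', 'aliases': 'gold, default'}, {'name': 'archive', ...}]
```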
*** hoonetorg has quit IRC | 02:04 | |
*** hoonetorg has joined #openstack-swift | 02:17 | |
*** AJaeger has left #openstack-swift | 03:17 | |
notmyname | mattoliverau: kota_: maybe one final test update follow-on from last week is https://review.openstack.org/#/c/603870/. I just left a comment there showing why the patch is good (ie lower-constraints no longer skip ~1400 tests) | 04:23 |
patchbot | patch 603870 - swift - set up a lower constraints job that uses an XFS tm... - 2 patch sets | 04:23 |
notmyname | zaitcev: ^ | 04:23 |
* notmyname is now off to bed | 04:24 | |
*** two_tired2 has quit IRC | 05:05 | |
*** Guest58757 is now known as cloudnull | 05:32 | |
*** gkadam has joined #openstack-swift | 06:46 | |
*** rcernin has quit IRC | 07:02 | |
*** e0ne has joined #openstack-swift | 07:16 | |
alecuyer | seongsoocho: We also have 64GB of memory on the object servers. I don't think we saw performance issues before hitting 200M+ files (over 36 disks), but maybe your workload is different, or you have finer monitoring :) I didn't ask how much degradation you're seeing? More RAM would probably help (an in-memory inode uses about 300 bytes). Also, we are working on a way to store small objects in large files on disk. It's not yet | 07:57 |
alecuyer | ready, but if you'd like to try it there is a dev environment available here: https://github.com/alecuyer/vagrant-swift-all-in-one/tree/losf-v2.18.0 | 07:57 |
seongsoocho | alecuyer: In my case, 100M+ files per disk (and there are 300 disks in the cluster). Most of the workload is very small objects (<100KB). Thank you for sharing your experience. I will try to get more memory. | 08:01 |
alecuyer | seongsoocho: ouch, I assumed per server, not per disk. We've never been over 70 million inodes per disk. I don't know how many disks per server you have, but yes, more RAM will help. Let us know how that goes | 08:47 |
seongsoocho | Ok. I have 11 disks per server and there are 29 object servers. Thanks! | 08:50 |
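Rough sizing math for the advice above, using alecuyer's ~300 bytes per cached inode as the (approximate) constant:

```python
# Back-of-the-envelope inode cache sizing from the numbers in this conversation.
# 300 bytes/inode is alecuyer's approximation, not an exact figure.
BYTES_PER_CACHED_INODE = 300


def inode_cache_gib(files_per_disk, disks_per_server):
    return files_per_disk * disks_per_server * BYTES_PER_CACHED_INODE / 2 ** 30


# seongsoocho: ~100M files per disk, 11 disks per server
print('%.0f GiB' % inode_cache_gib(100_000_000, 11))  # ~307 GiB just for inodes
# onovy: ~6M inodes per disk, 23 disks per store -> fits comfortably in 128G RAM
print('%.0f GiB' % inode_cache_gib(6_000_000, 23))    # ~39 GiB
```

which is why 64GB per server is tight at 100M files per disk, and why more RAM (or fewer inodes per server) helps.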
*** jlvillal has quit IRC | 09:54 | |
*** jlvillal has joined #openstack-swift | 09:54 | |
*** pcaruana has joined #openstack-swift | 10:04 | |
*** jlvillal has quit IRC | 10:50 | |
*** jlvillal has joined #openstack-swift | 10:53 | |
*** pcaruana has quit IRC | 11:15 | |
*** pcaruana has joined #openstack-swift | 11:20 | |
onovy | alecuyer: seongsoocho: fyi, we have 6m inodes / disk, 23 disks / store, 128G RAM | 11:23 |
*** pcaruana has quit IRC | 11:32 | |
*** pcaruana has joined #openstack-swift | 11:39 | |
*** pcaruana has quit IRC | 11:50 | |
*** gkadam has quit IRC | 12:46 | |
*** gkadam has joined #openstack-swift | 12:48 | |
*** gkadam has quit IRC | 12:48 | |
*** arete0xff has joined #openstack-swift | 12:56 | |
seongsoocho | onovy: oh thanks. Do you have any trouble with rebalancing? (when adding a new disk or new node, there is a problem that degrades the performance of the object servers) | 13:05 |
onovy | seongsoocho: no | 13:08 |
onovy | but we are adding weight by 10% per round | 13:08 |
onovy | or maybe 15% - 20% :) | 13:08 |
seongsoocho | what does 'round' mean? time? | 13:09 |
onovy | round = waiting for the rebalance to finish | 13:09 |
onovy | we are using swift-dispersion to check whether the cluster is rebalanced yet | 13:11 |
onovy | so if we want to add a new server, we add it with weight '1', then check it. Then we set the weight to ~10% of the final value, rebalance, and wait for the rebalance to end | 13:11 |
onovy | and then add another 10%, and so on | 13:11 |
onovy | the docs describe why it's a good idea to do it this way | 13:12 |
onovy | afk | 13:12 |
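A sketch of the round-by-round workflow onovy just described: print the swift-ring-builder commands for each ~10% weight step, and between rounds wait until swift-dispersion-report looks healthy again. The builder file name, device search value, and step size are illustrative assumptions; this is not a supported swift tool.

```python
# Sketch of an incremental rebalance schedule; builder/device/step are examples.
def rebalance_rounds(builder, device, final_weight, step=0.10):
    """Yield the commands to run for each weight-increase round."""
    # the device is assumed to already be in the ring with a tiny weight, e.g.
    #   swift-ring-builder object.builder add r1z1-10.0.0.1:6200/sdb 1
    weight = 0.0
    while weight < final_weight:
        weight = min(final_weight, weight + step * final_weight)
        yield [
            'swift-ring-builder %s set_weight %s %g' % (builder, device, weight),
            'swift-ring-builder %s rebalance' % builder,
            # ...then push the new ring to all nodes and wait until
            # swift-dispersion-report shows the cluster healthy again
        ]


for n, cmds in enumerate(
        rebalance_rounds('object.builder', 'r1z1-10.0.0.1:6200/sdb', 100.0), 1):
    print('# round %d' % n)
    print('\n'.join(cmds))
```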
seongsoocho | ah... I added weight in 5% steps with 11 disks, and there was a huge performance degradation. It frequently takes about 10~50 seconds to upload a 50KB file. | 13:13 |
seongsoocho | swift-dispersion.. I have never used it. thanks. | 13:14 |
seongsoocho | Is using that tool (swift-dispersion) the way to know whether the rebalance is finished or not? | 13:18 |
DHE | you'd have to do log parsing to be sure it's done, or run the replicator/reconstructor synchronously | 13:20 |
seongsoocho | ok. Actually, this is the first time I've added a new node. I built a huge cluster (1PB) at the start. | 13:22 |
DHE | raw storage or useful storage ? | 13:23 |
seongsoocho | For public storage (like aws s3) | 13:23 |
*** jistr is now known as jistr|call | 13:32 | |
*** arete0xff has left #openstack-swift | 13:36 | |
*** e0ne has quit IRC | 13:48 | |
onovy | seongsoocho: are you 'shaping' rsync+replicator? | 14:17 |
onovy | it's a good idea to limit the number of replicator threads and incoming rsync connections | 14:18 |
onovy | after this, you will not notice the replicator at all :) | 14:18 |
*** nguyenhai has quit IRC | 14:18 | |
*** nguyenhai has joined #openstack-swift | 14:19 | |
seongsoocho | Yes . | 14:19 |
onovy | we limit max connections=4 per disk | 14:19 |
onovy | in rsync | 14:19 |
seongsoocho | concurrency of replicator is 1 and bwlimit is 8192 | 14:20 |
onovy | bwlimit is useless :). you need to limit request concurrency | 14:20 |
seongsoocho | oh.. ok i will check it. .. | 14:20 |
onovy | https://github.com/openstack/swift/blob/master/etc/rsyncd.conf-sample#L25 | 14:21 |
onovy | are you using this? | 14:21 |
onovy | rsync_module per disk? | 14:21 |
seongsoocho | yes yes. | 14:21 |
seongsoocho | not the same, but similar to it | 14:22 |
onovy | cool | 14:22 |
onovy | 7.2k or 10k RPM disks? | 14:22 |
seongsoocho | 10k disk | 14:22 |
seongsoocho | and rsync per disk | 14:22 |
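For reference, a per-disk rsync module of the kind onovy linked looks roughly like this; the device name and paths are placeholders, and the linked rsyncd.conf-sample is the authoritative version:

```
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid

# one module per device, paired with
#   rsync_module = {replication_ip}::object_{device}
# in object-server.conf, so incoming replication is capped per disk
[object_sdb]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/object_sdb.lock
```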
onovy | same as us | 14:22 |
onovy | do you have dedicated con/acc servers and/or disk for them? | 14:22 |
seongsoocho | yes, I separate the nodes for con/acc. | 14:23 |
onovy | cool. we are using the same servers, with a dedicated disk (SSD) | 14:23 |
onovy | and concurrency: 2 for object-replicator | 14:24 |
seongsoocho | And I have a 10G network for replication, but the traffic is not that much.. | 14:24 |
onovy | we are using 1x 10G with trunk (two vlans). one for traffic and one for replicator | 14:25 |
onovy | and in old stores 2+2 1G | 14:25 |
onovy | did you tune vm.vfs_cache_pressure? | 14:26 |
onovy | net.ipv4.tcp_tw_recycle and net.ipv4.tcp_tw_reuse and net.ipv4.tcp_syncookies ? | 14:26 |
seongsoocho | cool. Do you use apache to run the object server? or just run it with swift-init? | 14:26 |
onovy | no apache, no swift-init :) | 14:26 |
onovy | using Debian packages, which run the daemons directly | 14:26 |
seongsoocho | vfs_cache_pressure is default. I think 100? | 14:27 |
onovy | so almost same as swift-init, but without swift-init | 14:27 |
onovy | try vfs_cache_pressure 75 | 14:27 |
onovy | works much better for us | 14:27 |
seongsoocho | ok I will try. | 14:27 |
onovy | it will keep more inodes in cache and less data | 14:27 |
onovy | which is what you want | 14:27 |
onovy | you really want/need to have all inodes in memory | 14:27 |
seongsoocho | Is there any good example of kernel configuration for the object servers? | 14:28 |
seongsoocho | Yes, I want to have all inodes in memory, so I will try 512GB RAM for all object servers | 14:29 |
onovy | vm.vfs_cache_pressure=75, net.ipv4.tcp_tw_recycle=1, net.ipv4.tcp_tw_reuse=1, net.ipv4.tcp_syncookies=0 | 14:29 |
onovy | this is our "good example" :) | 14:29 |
onovy | (for stores) | 14:29 |
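One way to make those settings persistent, assuming a distro that reads /etc/sysctl.d/ (the file name below is arbitrary). Note that net.ipv4.tcp_tw_recycle was removed in Linux 4.12, so on newer kernels only the other three apply:

```
# /etc/sysctl.d/99-swift-object.conf  (hypothetical file name)
vm.vfs_cache_pressure = 75
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 0

# apply without a reboot:
#   sysctl --system
```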
alecuyer | "you really want/need to have all inodes in memory" <- this :-) | 14:30 |
seongsoocho | :-) Cool | 14:31 |
seongsoocho | onovy: thank you for your help! It's time to go to bed here. I will tell you more about my swift cluster later. thanks. | 14:33 |
tdasilva | seongsoocho: please do share, when you have a chance, it's always good to know how people are using swift :) | 14:34 |
*** jistr|call is now known as jistr | 14:34 | |
seongsoocho | tdasilva: Ok. I really want to share my experience with the whole community. | 14:34 |
*** e0ne has joined #openstack-swift | 14:56 | |
*** silor has joined #openstack-swift | 15:48 | |
*** SkyRocknRoll has joined #openstack-swift | 15:58 | |
*** gyee has joined #openstack-swift | 15:58 | |
notmyname | good morning | 16:26 |
notmyname | check the ML! all* the openstack mailing lists are combining into one new list | 16:36 |
notmyname | *not really "all", but probably all the ones you care about | 16:36 |
notmyname | http://lists.openstack.org/pipermail/openstack/2018-September/047005.html | 16:37 |
notmyname | tdasilva: next outreachy round will kick off after the new year, so if you're still interested in being a mentor, there are a few months to figure out some project options | 16:39 |
*** silor has quit IRC | 16:40 | |
tdasilva | notmyname: thanks for the heads up, i'm assuming i'd need to sign up before that? | 16:41 |
*** e0ne has quit IRC | 16:41 | |
notmyname | tdasilva: no, not really. from what I remember, it's just important to write down the project(s) in the right place, let the organizers (ie mahatic) know you're interested, and then if someone signs up as an intern, it's only then that you need to get registered on the outreachy site | 16:42 |
notmyname | sharding state is dumped into recon files, right? so I can use swift-recon (or the /recon endpoint) to get that info from a particular storage node? | 16:49 |
openstackgerrit | Tim Burke proposed openstack/swift master: s3 secret caching https://review.openstack.org/603529 | 17:11 |
*** gkadam has joined #openstack-swift | 17:14 | |
timburke | notmyname: yeah, as i recall. i think when i was testing i just went straight to the json file | 17:40 |
notmyname | timburke: thanks. turns out I didn't have a new-enough swift, which is why I wasn't seeing anything :-) | 17:40 |
*** e0ne has joined #openstack-swift | 18:10 | |
*** SkyRocknRoll has quit IRC | 18:29 | |
*** e0ne has quit IRC | 19:03 | |
*** e0ne has joined #openstack-swift | 19:30 | |
*** e0ne has quit IRC | 19:34 | |
*** e0ne has joined #openstack-swift | 19:43 | |
*** gkadam has quit IRC | 20:10 | |
*** zaitcev has quit IRC | 20:28 | |
*** zaitcev has joined #openstack-swift | 20:40 | |
*** ChanServ sets mode: +v zaitcev | 20:40 | |
openstackgerrit | Tim Burke proposed openstack/swift master: s3api: Increase max body size for Delete Multiple Objects requests https://review.openstack.org/604208 | 20:41 |
*** mrjk has quit IRC | 21:05 | |
*** mrjk has joined #openstack-swift | 21:06 | |
*** e0ne has quit IRC | 21:36 | |
*** timss has quit IRC | 22:05 | |
*** timss has joined #openstack-swift | 22:33 | |
*** spsurya has quit IRC | 22:48 | |
*** spsurya has joined #openstack-swift | 22:50 | |
*** rcernin has joined #openstack-swift | 22:53 | |
openstackgerrit | Merged openstack/swift master: Use templates for cover and lower-constraints https://review.openstack.org/600732 | 23:19 |
openstackgerrit | Tim Burke proposed openstack/swift master: s3api: Increase max body size for Delete Multiple Objects requests https://review.openstack.org/604208 | 23:31 |
*** rcernin has quit IRC | 23:36 | |
*** rcernin has joined #openstack-swift | 23:36 | |
*** gyee has quit IRC | 23:41 |