| jaltman | fungi clark: regarding the openstack-helm performance question from days/weeks ago. fetching the directory entries will be fast because a directory object is a maximum of 2MB in size and the object data is always fetched in a single RPC. | 19:37 |
|---|---|---|
| jaltman | A directory entry consists of a name and a partial file id: vnode-id, unique. These are combined with the cell-id and the volume-id of the directory's file id to construct the full File ID. | 19:39 |
| jaltman | Any operation such as "ls -l" which requires the metadata (aka "stat" info) of each File ID must perform a separate FetchStatus RPC to fetch that information. | 19:41 |
| jaltman | The time to perform the operation is the number of directory entries multiplied by the round-trip time required to FetchStatus a single object. | 19:42 |
| jaltman | The afsd cache configuration will impact the performance if the total number of vnode cache (aka "vcache") entries is too small. | 19:44 |
| jaltman | There is a BulkFetchStatus RPC which can be used to fetch the stat info of up to 50 vnodes at a time. However, it's not an RPC which can be requested via the Linux syscall interface, since that interface fetches stat info one inode at a time. | 19:47 |
| *** | darmach7 is now known as darmach | 21:55 |
| Clark[m] | Thanks, that is helpful info. I think there is an easy way to shard and mitigate the problem: since helm uses its own index.yaml file, there is a lot of control. | 22:41 |
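A rough sketch of the directory-entry structure jaltman describes at 19:39: an entry carries only a name plus a partial file id (vnode-id, unique), and the full File ID is built by combining those with the directory's cell-id and volume-id. The type and field names below are illustrative assumptions, not the actual OpenAFS structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DirEntry:
    # A directory entry stores only a name and a partial file id.
    name: str
    vnode_id: int
    unique: int

@dataclass(frozen=True)
class FullFileId:
    cell_id: int
    volume_id: int
    vnode_id: int
    unique: int

def make_full_fid(cell_id: int, volume_id: int, entry: DirEntry) -> FullFileId:
    # Combine the directory's cell-id and volume-id with the entry's
    # partial id to construct the full File ID.
    return FullFileId(cell_id, volume_id, entry.vnode_id, entry.unique)

fid = make_full_fid(cell_id=1, volume_id=536870912,
                    entry=DirEntry("index.yaml", vnode_id=42, unique=7))
print(fid)
```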
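The cost model from 19:42 and 19:47 can be sketched numerically: an "ls -l" pays one FetchStatus round trip per directory entry, while BulkFetchStatus could amortize up to 50 vnodes per round trip (if the syscall interface allowed it). The function names and the example numbers (10,000 entries, 2 ms RTT) are illustrative assumptions.

```python
def ls_l_cost_ms(num_entries: int, rtt_ms: int) -> int:
    """One FetchStatus round trip per directory entry."""
    return num_entries * rtt_ms

def bulk_cost_ms(num_entries: int, rtt_ms: int, batch_size: int = 50) -> int:
    """BulkFetchStatus fetches up to 50 vnodes per round trip."""
    batches = -(-num_entries // batch_size)  # ceiling division
    return batches * rtt_ms

# 10,000 directory entries with a 2 ms round-trip time:
print(ls_l_cost_ms(10_000, 2))  # 20000 ms (~20 s) one RPC per entry
print(bulk_cost_ms(10_000, 2))  # 400 ms with 50-entry batches
```

The gap between the two numbers is why sharding the directory (as Clark[m] suggests via helm's index.yaml) helps: it shrinks `num_entries` per directory, which is the only factor the client can control when every stat is a separate round trip.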
Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!