by B. Welch et al., FAST 2008.
Abstract:
The Panasas file system uses parallel and redundant access to object storage devices (OSDs), per-file RAID, distributed metadata management, consistent client caching, file locking services, and internal cluster management to provide a scalable, fault tolerant, high performance distributed file system. The clustered design of the storage system and the use of client-driven RAID provide scalable performance to many concurrent file system clients through parallel access to file data that is striped across OSD storage nodes. RAID recovery is performed in parallel by the cluster of metadata managers, and declustered data placement yields scalable RAID rebuild rates as the storage system grows larger. This paper presents performance measures of I/O, metadata, and recovery operations for storage clusters that range in size from 10 to 120 storage nodes, 1 to 12 metadata nodes, and with file system client counts ranging from 1 to 100 compute nodes. Production installations are as large as 500 storage nodes, 50 metadata managers, and 5000 clients.
Link to the full paper:
http://www.cse.buffalo.edu/faculty/tkosar/cse710_spring13/papers/panasas.pdf
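The declustered data placement mentioned in the abstract is what makes RAID rebuild rates scale: each file is striped across a different subset of OSDs, so the stripes touching any one failed node collectively involve nearly every surviving node. A minimal sketch of that idea (the parameters and the pseudo-random placement rule are illustrative assumptions, not the actual Panasas algorithm):

```python
import random

def place_file(file_id, num_osds, stripe_width):
    """Stripe a file across a pseudo-random subset of OSDs.
    Seeding by file_id makes placement deterministic per file."""
    rng = random.Random(file_id)
    return rng.sample(range(num_osds), stripe_width)

# With many files placed this way, the stripes that include any one
# failed OSD together span almost every other OSD, so rebuild reads
# (and parity XOR work) are spread across the whole cluster.
placements = [place_file(f, num_osds=20, stripe_width=9) for f in range(1000)]
failed = 7
rebuild_peers = set()
for stripe in placements:
    if failed in stripe:
        rebuild_peers.update(osd for osd in stripe if osd != failed)
print(len(rebuild_peers))  # close to 19: nearly all survivors participate
```

Contrast this with fixed RAID groups, where only the handful of disks in the failed disk's group do rebuild work, so rebuild time stays flat (or worsens) as the system grows.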
How is file locking handled in the Panasas file system?
The paper mentions that a UPS is used for safe shutdown. What if the UPS backup power is not sufficient for a safe shutdown of the system?
What exactly is the contribution of the uniform random sample in the system?
As the system grows dynamically, newly added blades are given priority over existing blades. But what about reducing the load on blades that are already present? How is load balanced?
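On the load-balancing question: one common approach to filling new blades without migrating existing data is capacity-weighted placement, where new writes land on emptier (typically newly added) blades more often, so utilization converges over time. The policy below is an illustration of that general technique, not Panasas's documented algorithm:

```python
import random

def pick_osd(free_bytes, rng):
    """Choose an OSD with probability proportional to its free space.
    Emptier blades (e.g. newly added ones) absorb more new data, so
    load evens out as writes arrive; existing data is left in place."""
    total = sum(free_bytes.values())
    r = rng.random() * total
    for osd, free in free_bytes.items():
        r -= free
        if r <= 0:
            return osd
    return osd  # fallback for floating-point edge cases

rng = random.Random(0)
free = {"old-blade": 1_000, "new-blade": 9_000}  # hypothetical capacities
picks = [pick_osd(free, rng) for _ in range(10_000)]
print(picks.count("new-blade") / len(picks))  # roughly 0.9
```

Reducing load on already-full blades can then happen passively (they receive little new data) or actively, by rebalancing some existing stripes in the background.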
What is the XOR power of the system? How does it affect system performance?
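"XOR power" refers to parity-computation throughput. In RAID-5-style per-file RAID, the parity unit is the XOR of the data units, and a lost unit is rebuilt by XOR-ing the survivors, so how fast nodes can XOR bounds rebuild speed; the paper's parallel, declustered rebuild spreads that XOR work across many nodes. A minimal sketch of the parity math:

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together (RAID-5-style parity)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"unit0...", b"unit1...", b"unit2..."]  # equal-length stripe units
parity = xor_blocks(data)

# Lose any one unit; XOR of the remaining units plus parity recovers it.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Because XOR is associative and commutative, reconstruction of different stripes can proceed independently and in parallel on different nodes.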
The paper says, "In some workloads, recently created files may be hotter than files created several weeks or months ago." Does the time of a file's creation itself impact the workload, or does this just mean a recently created file may be accessed more often than a file created weeks ago?