
Serverless Network File Systems



Presentation Transcript


  1. Serverless Network File Systems

  2. Network File Systems • Allow transparent file sharing among independent machines • In NFS, clients mount a remote directory • Requests are sent with remote procedure calls • Traditional network file systems, like NFS, use a central server to provide the file system service. • This work presents an alternative, serverless network file system called xFS.
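
To make the RPC mechanism concrete, here is a minimal Python sketch (not from the paper) of a central-server file service: one machine registers a read handler and every client read becomes a remote procedure call to it. The read_block operation, the file contents, and the port number are hypothetical.

# Minimal sketch of a central-server file service accessed via remote procedure calls.
# The read_block operation, file contents, and port are hypothetical illustrations.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

FILES = {"/home/alice/notes.txt": "hello from the central server"}

def read_block(path, offset, length):
    """Server-side handler: every client read miss is serviced by this one machine."""
    return FILES.get(path, "")[offset:offset + length]

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(read_block)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the remote call looks just like a local file read.
proxy = ServerProxy("http://localhost:8000")
print(proxy.read_block("/home/alice/notes.txt", 0, 5))   # -> "hello"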

  3. NFS Requirements / Metrics • Performance / Speed • Availability • Scalability • Fault Tolerance / Recovery

  4. Limitations of Central Server Systems • All read misses and disk writes go to the server – performance bottleneck • Not scalable – too many clients hurt performance. • Expensive to upgrade server hardware or add servers. • Requires server replication for high availability – increases cost and complexity, and adds latency to duplicate the data.

  5. Serverless Network File Systems • Increased performance – distributes control processing and data storage among cooperating workstations. • Scales easily and simplifies system management. • Fault tolerance through distributed RAID and a log-structured file system. • Migrates the responsibilities of failed components to other workstations.

  6. Background • RAID – stripes data across multiple disks and stores parity for recovery. • High performance (parallel accesses) • Availability • xFS stripes files RAID-style across a stripe group of servers. • Small writes hurt performance – each one forces a parity update. • Log-structured File System (LFS) • Append-only file system – all writes go to the end of a log. • Leaves holes as data is overwritten – needs a cleaner. • Small writes can be buffered and written out as large segments, which helps RAID avoid the small-write penalty. • Helps recovery (checkpoints on disk)
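
A hedged sketch of the two background ideas on this slide: XOR parity over a stripe, the extra work a small in-place update forces, and how batching writes into a full fresh stripe (as a log-structured layout encourages) avoids that penalty. The toy block size and helper names are illustrative, not xFS code.

# Sketch: XOR parity over a RAID stripe, the small-write penalty, and LFS-style batching.
# The toy block size and in-memory lists stand in for real disks.
from functools import reduce

def xor_blocks(*blocks):
    """Bytewise XOR of equal-length blocks: the RAID parity function."""
    return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*blocks))

# One stripe: a data block per storage server plus a parity block.
data = [b"aaaa", b"bbbb", b"cccc"]
parity = xor_blocks(*data)

# Small in-place write: updating one block also forces a parity update
# (old parity XOR old block XOR new block) -- the small-write cost on this slide.
old = data[1]
data[1] = b"BBBB"
parity = xor_blocks(parity, old, data[1])

# Recovery: any single lost block is the XOR of the survivors and the parity.
assert xor_blocks(data[0], data[2], parity) == data[1]

# LFS-style batching: buffer small writes and emit them as one full fresh stripe,
# so parity is computed once over new data and no old blocks need to be read back.
log_buffer = [b"dddd", b"eeee", b"ffff"]
full_stripe = log_buffer + [xor_blocks(*log_buffer)]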

  7. Background (con’t) • Multiprocessor Cache Consistency • Physical memory is statically divided evenly among processors. • Each processor manages the cache consistency state for its own portion of physical memory. • xFS applies the same idea to files, with a manager node keeping each file’s consistency state. • In xFS the assignment is dynamic – a file’s manager can change over time.
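
A toy sketch (assumed node names and modulo policies, not xFS code) contrasting the multiprocessor approach, where the manager of a memory block is fixed by its address, with xFS, where a file's manager is looked up in a table that can be changed at run time.

# Toy contrast between static (multiprocessor) and dynamic (xFS) manager assignment.
# Node names and the modulo policies are assumptions for illustration.
NODES = ["node0", "node1", "node2", "node3"]

def static_manager(physical_address):
    """Multiprocessor style: the managing processor is fixed by the memory address."""
    return NODES[physical_address % len(NODES)]

# xFS style: a mutable manager map decides which node manages each file,
# so responsibility can be moved between nodes at run time.
manager_map = {file_index: NODES[file_index % len(NODES)] for file_index in range(8)}

def dynamic_manager(file_index):
    return manager_map[file_index]

# After a failure, or for load balancing, an entry can simply be reassigned.
manager_map[5] = "node0"
assert static_manager(5) == "node1" and dynamic_manager(5) == "node0"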

  8. Goals of xFS • Provide a scalable way to subset storage servers into groups to provide efficient storage. • Scalable, distributed metadata and cache consistency management. • Flexibility to dynamically reconfigure responsibilities after failures.

  9. System entities • Clients – want to access data in the system • Storage Servers – store the system’s files • Metadata Managers – hold cache consistency state and disk-location metadata • Cleaners – reclaim the holes the LFS leaves behind after writes • These entities may reside on the same machine or on different machines.

  10. Serverless File Service • “Anything, Anywhere” – all data and metadata can be located on, and can move to, any node in the system. • File access is faster because data and metadata are distributed across multiple workstations. • How does the system locate the data? • Four key maps: the manager map, the imap, file directories, and the stripe group map.
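
The read path below is a high-level sketch of how those four maps cooperate to locate a block; the data structures, addresses, and function names are simplified stand-ins for the ones the paper describes.

# High-level sketch of xFS's data-location path using the four key maps.
# Every structure here is a simplified stand-in, not the real on-disk format.
directory     = {"/home/paper.txt": 1}                      # file name -> index number
manager_map   = {0: "nodeA", 1: "nodeB"}                    # index number -> manager node
imap          = {1: ("group0", 4096)}                       # index number -> inode address
inodes        = {("group0", 4096): {0: ("group0", 8192)}}   # inode: block# -> block address
stripe_groups = {"group0": ["server1", "server2", "server3"]}  # group -> member servers
disks         = {("group0", 8192): b"block contents"}       # what the servers store

def read_block(path, block_number):
    index = directory[path]                        # 1. directory gives the index number
    manager = manager_map[index]                   # 2. manager map names the manager node
    inode_addr = imap[index]                       # 3. that manager's imap locates the inode
    block_addr = inodes[inode_addr][block_number]  # 4. the inode points at the data block
    servers = stripe_groups[block_addr[0]]         # 5. stripe group map names the servers
    return disks[block_addr], manager, servers

data, manager, servers = read_block("/home/paper.txt", 0)
print(data, "managed by", manager, "stored on", servers)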

  11. Manager Map • Table indicating which machines manage which file indices • File indices listed in the parent directory file. • Globally Replicated • Updated dynamically • On machine failure or reconfiguration of file managers • Can work as a load balancing mechanism. • Not yet implemented, but a possibility
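
A small sketch of the idea that every node holds an identical replica of the manager map and that a new version is installed on reconfiguration; the node names, bucket count, and modulo policy are assumptions for illustration.

# Sketch: a globally replicated manager map, reinstalled when managers change.
class Node:
    def __init__(self, name):
        self.name = name
        self.manager_map = []              # this node's replica of the global map

    def install_map(self, new_map):
        self.manager_map = list(new_map)   # every node stores an identical copy

    def manager_for(self, file_index):
        # Entries cover buckets of file indices rather than individual files,
        # so the table stays small enough to replicate on every node.
        return self.manager_map[file_index % len(self.manager_map)]

nodes = [Node("n0"), Node("n1"), Node("n2")]
current_map = ["n0", "n1", "n2", "n1"]     # bucket -> managing node
for node in nodes:
    node.install_map(current_map)

# If n2 fails (or load should be rebalanced), a new map is generated and
# redistributed to the live nodes, as the slide describes.
current_map = ["n0", "n1", "n0", "n1"]
for node in nodes[:2]:
    node.install_map(current_map)

assert nodes[0].manager_for(6) == nodes[1].manager_for(6) == "n0"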

  12. Imap • A file’s imap entries are held by its manager • Maps a file’s index number to the disk address of its index node (inode) • The index node holds pointers to the file’s data blocks, indexed by offset within the file • Similar to a standard OS inode implementation
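
A sketch of the two-level translation on this slide: the imap finds the inode, and the inode turns a byte offset into a data-block pointer. The 4 KB block size and the structure layout are assumptions, not xFS's actual on-disk format.

# Sketch of the imap -> inode -> data block translation described on this slide.
BLOCK_SIZE = 4096   # assumed block size

imap = {7: "log:120"}                                 # file index number -> inode address
inodes = {"log:120": {"size": 10000,
                      "blocks": ["log:200", "log:204", "log:310"]}}

def locate(file_index, byte_offset):
    """Translate a byte offset in a file into (data block address, offset within block)."""
    inode = inodes[imap[file_index]]                  # the manager's imap finds the inode
    if byte_offset >= inode["size"]:
        raise ValueError("offset past end of file")
    block_ptr = inode["blocks"][byte_offset // BLOCK_SIZE]
    return block_ptr, byte_offset % BLOCK_SIZE

print(locate(7, 5000))   # -> ('log:204', 904)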

  13. RAID Stripe Groups • Better to stripe each file over a group of servers instead of over all servers in the system. • Improves availability – each group stores its own parity, so the system can survive multiple failures as long as no single group loses more than one member. • Stripe Group Map – lists which nodes are members of each group. • Must be referenced before reading or writing data in the file system.
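
A sketch of how a writer consults the stripe group map before I/O and stripes a segment, plus its parity fragment, over one group's members. Group and server names, and the fragment size, are illustrative assumptions.

# Sketch: consult the stripe group map before I/O, then stripe a segment
# (plus a parity fragment) over that group's member servers.
from functools import reduce

stripe_group_map = {
    "group0": ["s1", "s2", "s3", "s4"],   # in this sketch the last member takes parity
    "group1": ["s5", "s6", "s7", "s8"],
}

def write_segment(group_id, segment, frag_size):
    servers = stripe_group_map[group_id]              # the map must be referenced first
    frags = [segment[i:i + frag_size]
             for i in range(0, frag_size * (len(servers) - 1), frag_size)]
    parity = bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*frags))
    return dict(zip(servers, frags + [parity]))       # server -> fragment it will store

placement = write_segment("group0", b"AAAABBBBCCCC", frag_size=4)
print(placement)
# Each group keeps its own parity, so one lost server per group is recoverable
# even if several groups lose a member at the same time.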

  14. Cache Consistency • Token-based scheme • Client must request and acquire write ownership from the file’s manager. • Manager invalidates other cached copies

  15. Cache Consistency (con’t) • Client keeps write ownership until another client requests it. It then must flush the changes to the disk. • xFS guarantees that the up-to-date copy is given to the node requesting the data. • Traditional network file systems do not always guarantee this.
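
A compact sketch of the token-style protocol on these two slides: the manager grants write ownership, invalidates other cached copies, and revokes ownership from the current owner (forcing it to give up its latest data) when another client asks. The class and method names are assumptions; real xFS also forwards data between caches and logs consistency state.

# Sketch of token-based write ownership as described on slides 14-15.
# Class and method names are assumptions, not xFS's actual interfaces.
class Manager:
    def __init__(self):
        self.owner = None        # client holding write ownership, if any
        self.readers = set()     # clients caching a read-only copy
        self.latest = b""        # last contents handed back to the manager

    def request_write(self, client):
        if self.owner and self.owner is not client:
            self.latest = self.owner.revoke()     # old owner gives up data and token
        for reader in self.readers - {client}:
            reader.invalidate()                   # stale cached copies are invalidated
        self.readers.clear()
        self.owner = client
        return self.latest                        # requester always sees the latest copy

class Client:
    def __init__(self, name):
        self.name, self.cache, self.dirty = name, b"", False

    def write(self, manager, data):
        self.cache = manager.request_write(self) + data
        self.dirty = True

    def revoke(self):
        self.dirty = False
        return self.cache                         # hand the up-to-date data back

    def invalidate(self):
        self.cache = b""

m, a, b = Manager(), Client("A"), Client("B")
a.write(m, b"hello ")
b.write(m, b"world")          # B is guaranteed to start from A's latest bytes
print(b.cache)                # -> b'hello world'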

  16. Management Distribution Policies • xFS tries to assign the files used by a client to a manager co-located on that client’s machine. • When a client creates a file, xFS assigns the manager on that machine to the file. • Improves locality • Reduces the network hops needed to satisfy requests by roughly 40%.
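
A tiny sketch of the co-location policy: when a client creates a file, the manager on the client's own machine is assigned to it, so later consistency traffic often needs no network hop. The structures are illustrative; in the real system the assignment goes through manager-map entries rather than per-file records.

# Sketch of the co-location policy: a new file is managed by its creator's machine.
manager_assignment = {}   # file index -> node chosen to manage that file
next_index = 0

def create_file(creating_client_node):
    """Assign management of a newly created file to the machine that created it."""
    global next_index
    index = next_index
    next_index += 1
    manager_assignment[index] = creating_client_node
    return index

idx = create_file("workstation7")
# Later consistency traffic from workstation7 for this file stays on the local machine.
assert manager_assignment[idx] == "workstation7"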

  17. Reconfiguration • Not yet implemented in this version… • When the system detects a configuration change, a global consensus algorithm is invoked. • A leader is chosen to run the algorithm over the list of active nodes. • The leader generates a new manager map and distributes it to the nodes.
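
A rough sketch of the step after consensus completes: the elected leader rebuilds the manager map from the list of active nodes and distributes it to every survivor. The consensus algorithm itself is elided, and all names here are illustrative assumptions.

# Sketch of the post-consensus step: the leader rebuilds the manager map from the
# active-node list and distributes it; the consensus protocol itself is elided.
def rebuild_manager_map(active_nodes, buckets=16):
    """Spread manager-map buckets round-robin over the nodes that survived."""
    return [active_nodes[i % len(active_nodes)] for i in range(buckets)]

def distribute(new_map, active_nodes, install):
    for node in active_nodes:
        install(node, new_map)             # every live node replaces its local replica

# Example: node "n2" was lost; the elected leader recomputes and redistributes.
active = ["n0", "n1", "n3"]
replicas = {}
new_map = rebuild_manager_map(active)
distribute(new_map, active, lambda node, m: replicas.__setitem__(node, list(m)))
assert replicas["n0"] == replicas["n3"] == new_map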

  18. Security in xFS • Only appropriate in a restricted environment • Machines cooperating over a fast network • Must trust one another’s kernels to enforce security
