
Distributed File Systems



  1. Distributed File Systems Mark Stanovich Operating Systems COP 4610

  2. Distributed File System • Provides transparent access to files stored on a remote disk • Recurrent themes of design issues • Failure handling • Performance optimizations • Cache consistency

  3. No Client Caching • Use RPC to forward every file system request to the remote server • open, seek, read, write • [Illustration: the server caches X; clients A and B read and write X directly on the server and keep no local copies]
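
A minimal sketch of this design in Python, using the standard xmlrpc client as the transport; the server-side methods (open, read, write) are hypothetical, not a real protocol:

```python
# Sketch: every file operation becomes an RPC, so only the server's
# cache ever holds file data. Server methods are hypothetical.
from xmlrpc.client import ServerProxy

class RemoteFile:
    def __init__(self, server_url, path):
        self.server = ServerProxy(server_url)  # e.g. "http://fileserver:8000/"
        self.fd = self.server.open(path)       # the server tracks the open file
        self.pos = 0

    def seek(self, offset):
        self.pos = offset

    def read(self, nbytes):
        # Even a read of data already cached on the server pays a round trip.
        data = self.server.read(self.fd, self.pos, nbytes)
        self.pos += len(data)
        return data

    def write(self, buf):
        written = self.server.write(self.fd, self.pos, buf)
        self.pos += written
        return written
```

Every single operation pays a full network round trip, which is the poor performance the next slide lists as the main drawback.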

  4. No Client Caching + Server always has a consistent view of the file system - Poor performance - Server is a single point of failure

  5. Network File System (NFS) • Uses client caching to reduce network load • Built on top of RPC • [Illustration: the server caches X, and clients A and B each hold their own cached copy of X]

  6. Network File System (NFS) + Performance better than no caching - More difficult to handle failures - Has to handle consistency

  7. Failure Modes • If the server crashes • Uncommitted data in memory are lost • Current file positions may be lost • The client may ask the server to perform unacknowledged operations again • If a client crashes • Modified data in the client cache may be lost

  8. NFS Failure Handling 1. Write-through caching 2. Stateless protocol: the server keeps no state about the client • Each request is self-contained: a read carries everything that open, seek, read, and close would otherwise establish • No server state to recover after a failure 3. Idempotent operations: repeated operations get the same result • No static variables on the server
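
A sketch of why statelessness and idempotence survive retries (hypothetical handlers, not the real NFS wire protocol): each request names the file and an absolute offset, so the server keeps no cursors and replaying a request yields the same result:

```python
# Sketch of stateless, idempotent read/write handlers. Each request
# carries the path and an absolute offset: no open-file table, no
# per-client state on the server.
import os

def nfs_read(path, offset, count):
    with open(path, "rb") as f:        # "open, seek, read, close" per request
        f.seek(offset)
        return f.read(count)

def nfs_write(path, offset, data):
    with open(path, "r+b") as f:       # file assumed to exist already
        f.seek(offset)
        f.write(data)
        f.flush()
        os.fsync(f.fileno())           # write-through: on disk before replying
    return len(data)

# Replaying a request whose acknowledgment was lost is harmless:
# calling nfs_write(p, 0, b"hi") twice leaves the file exactly as
# if it had been written once.
```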

  9. NFS Failure Handling 4. Transparent failures to clients • Two options • The client waits until the server comes back • The client can return an error to the user application • Do you check the return value of close?
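
The question is not rhetorical: on a remote file system, errors for writes that were only buffered locally may first be reported at close. A short sketch (the mount path is made up):

```python
import os

fd = os.open("/mnt/nfs/report.txt", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"results\n")   # may still be sitting in the local cache
try:
    os.close(fd)             # deferred write-back errors can surface here
except OSError as e:
    print("write-back to the server failed:", e)
```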

  10. NFS Weak Consistency Protocol • A write updates the server immediately • Other clients poll the server periodically for changes • No guarantees for multiple writers
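
A sketch of the client side of this protocol, assuming hypothetical read_all, write_all, and getattr RPCs; the poll interval is an arbitrary choice:

```python
import time

POLL_INTERVAL = 3.0   # seconds between freshness checks (arbitrary choice)

class CachedFile:
    def __init__(self, server, path):
        self.server, self.path = server, path
        self.data = server.read_all(path)            # hypothetical RPC
        self.mtime = server.getattr(path)["mtime"]   # hypothetical RPC
        self.checked = time.monotonic()

    def read(self):
        # Between polls this may return stale data -- that is the
        # "weak" part of the consistency protocol.
        if time.monotonic() - self.checked >= POLL_INTERVAL:
            attrs = self.server.getattr(self.path)
            if attrs["mtime"] != self.mtime:         # another client wrote
                self.data = self.server.read_all(self.path)
                self.mtime = attrs["mtime"]
            self.checked = time.monotonic()
        return self.data

    def write(self, data):
        self.data = data
        self.server.write_all(self.path, data)  # updates the server immediately
```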

  11. NFS Summary + Simple and highly portable - Clients may sometimes see stale (inconsistent) data • In practice this happens rarely

  12. Andrew File System (AFS) • Developed at CMU • Design principles • Files are cached on each client’s disk (NFS caches only in client memory) • Callbacks: the server records which clients have a copy of each file • Write-back cache on file close; the server then notifies all clients holding an old copy • Session semantics: updates become visible only on close
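
A sketch of the server side of callbacks and session semantics (all names hypothetical); slides 13–24 below step through the same sequence pictorially:

```python
# Sketch of AFS-style callbacks with session semantics.
from collections import defaultdict

class AfsServer:
    def __init__(self):
        self.files = {}                      # path -> contents
        self.callbacks = defaultdict(set)    # path -> clients caching it

    def fetch(self, client, path):
        self.callbacks[path].add(client)     # remember who holds a copy
        return self.files[path]

    def store_on_close(self, client, path, new_contents):
        # Session semantics: the update becomes visible only now, at close.
        self.files[path] = new_contents
        for other in self.callbacks[path] - {client}:
            other.break_callback(path)       # assumed client API: "copy is stale"
        self.callbacks[path] = {client}
```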

  13. AFS Illustrated • Initial state: the server caches X; clients A and B start with empty caches

  14. AFS Illustrated • Client A sends read X; the server adds client A to the callback list of X

  15. AFS Illustrated • The server returns X; client A caches X

  16. AFS Illustrated • Client A’s read completes; the callback list of X holds client A

  17. AFS Illustrated • Client B sends read X; the server adds client B to the callback list of X

  18. AFS Illustrated • The server returns X; client B caches X; the callback list of X holds clients A and B

  19. AFS Illustrated • Client A writes X locally, producing a modified copy X′; the server and client B still hold the old X

  20. AFS Illustrated • Client A closes X; the modified X′ is written back to the server

  21. AFS Illustrated • The server breaks the callback to client B, telling it that its cached copy of X is stale

  22. AFS Illustrated • The server now caches X′; client B has discarded its stale copy

  23. AFS Illustrated • Client B opens X and fetches the current copy from the server

  24. AFS Illustrated • Client B caches the fresh X′; the callback list of X again holds clients A and B

  25. AFS Failure Handling • If the server crashes, it asks all clients which files they have cached and rebuilds its callback state from their answers, as sketched below
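
A sketch of that recovery step, assuming each client can report the set of files it currently caches; the server rebuilds its callback table from the reports instead of persisting it:

```python
from collections import defaultdict

def recover_callbacks(clients):
    """After a crash, rebuild path -> {clients caching it} from the clients."""
    callbacks = defaultdict(set)
    for client in clients:
        for path in client.cached_paths():   # hypothetical client API
            callbacks[path].add(client)
    return callbacks
```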

  26. AFS vs. NFS • AFS • Less server load, thanks to the clients’ disk caches • The server is not involved at all for cached, read-only files • Both AFS and NFS • The server is a performance bottleneck • The server is a single point of failure

  27. Serverless Network File Service (xFS) • Idea: construct a file system as a parallel program and exploit the high-speed LAN • Four major pieces • Cooperative caching • Write-ownership cache coherence • Software RAID • Distributed control

  28. Cooperative Caching • Uses remote client memory to avoid going to disk • On a cache miss, check remote memory before checking the disk • Before discarding the last in-memory copy of a block, send its contents to another client’s memory if possible
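
A sketch of both paths, with hypothetical peer and disk interfaces; the point is simply the ordering: local memory, then remote memory, then disk:

```python
def lookup(block_id, local_cache, peers, disk):
    """Read path: local memory, then peers' memory, then disk."""
    if block_id in local_cache:
        return local_cache[block_id]
    for peer in peers:
        data = peer.probe(block_id)        # hypothetical: bytes or None
        if data is not None:               # remote memory beats a disk seek
            local_cache[block_id] = data
            return data
    data = disk.read(block_id)             # last resort: go to disk
    local_cache[block_id] = data
    return data

def evict(block_id, local_cache, peers):
    """Before dropping the last in-memory copy, try to hand it to a peer."""
    data = local_cache.pop(block_id)
    for peer in peers:
        if peer.accept(block_id, data):    # hypothetical: True if peer had room
            return
    # No peer had room; the block now lives only on disk.
```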

  29. Cooperative Caching • Initial state: client A caches X; clients B, C, and D have empty caches

  30. Cooperative Caching • Client C misses on X and sends a read request; the request is satisfied from client A’s memory rather than from disk

  31. Cooperative Caching • X arrives from client A’s cache; client C now caches X as well

  32. Write-Ownership Cache Coherence • Declares a client to be the owner of a file when it writes • While a file has an owner, no one else can have a copy
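
A sketch of the per-file ownership state machine (the Client class and its names are invented for illustration); slides 33–38 below walk through the same transitions:

```python
class Client:
    def __init__(self, name):
        self.name, self.mode = name, None

    def grant(self, mode):                  # "read-write" or "read-only"
        self.mode = mode

    def invalidate(self):
        self.mode = None                    # this client's copy is gone

class Coherence:
    """Per-file state: one read-write owner, or many read-only readers."""
    def __init__(self):
        self.owner = None
        self.readers = set()

    def read(self, client):
        if self.owner is not None:          # demote the current owner
            self.owner.grant("read-only")
            self.readers.add(self.owner)
            self.owner = None
        client.grant("read-only")
        self.readers.add(client)

    def write(self, client):
        if self.owner is not None and self.owner is not client:
            self.owner.invalidate()
        for other in self.readers - {client}:
            other.invalidate()              # no one else keeps a copy
        self.readers.clear()
        client.grant("read-write")
        self.owner = client

a, c = Client("A"), Client("C")
x = Coherence()
x.write(a)   # slide 33: A owns X read-write
x.read(c)    # slides 34-36: A downgraded to read-only, C reads a copy
x.write(c)   # slides 37-38: A invalidated, C becomes the owner
```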

  33. Write-Ownership Cache Coherence • Initial state: client A holds X as owner, read-write; clients B, C, and D have empty caches

  34. Write-Ownership Cache Coherence • Client C sends read X

  35. Write-Ownership Cache Coherence • Client A’s copy is downgraded to read-only and X is forwarded to client C

  36. Write-Ownership Cache Coherence • Clients A and C both hold read-only copies of X

  37. Write-Ownership Cache Coherence • Client C issues write X

  38. Write-Ownership Cache Coherence • Client A’s copy is invalidated; client C becomes the owner with a read-write copy of X

  39. Other components • Software RAID • Stripe data redundantly over multiple disks • Distributed control • File system managers are spread across all machines
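
A sketch of the redundant-striping idea using single-parity XOR, in the spirit of RAID-5 (the disk interface is hypothetical): losing any one disk leaves every stripe recoverable:

```python
def xor_blocks(blocks):
    """XOR equal-length blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def write_stripe(disks, stripe_no, data_blocks):
    """Write len(disks)-1 data blocks plus one XOR parity block."""
    parity = xor_blocks(data_blocks)
    for disk, block in zip(disks, data_blocks + [parity]):
        disk.write(stripe_no, block)       # hypothetical disk interface

def recover_block(disks, stripe_no, failed):
    """Rebuild one lost block by XOR-ing all the surviving blocks."""
    survivors = [d.read(stripe_no) for i, d in enumerate(disks) if i != failed]
    return xor_blocks(survivors)
```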

  40. xFS Summary • Built on small, unreliable components • Data, metadata, and control can live on any machine • If one machine goes down, everything else continues to work • When machines are added, xFS starts to use their resources

  41. xFS Summary - Complexity and associated performance degradation - Hard to upgrade software while keeping everything running
