
Disconnected Operation in the Coda File System


Presentation Transcript


  1. Disconnected Operation in the Coda File System James J. Kistler and M. Satyanarayanan Carnegie Mellon University Presented by Cong

  2. Content • I Motivation • II Coda Overview • III Implementation • IV Evaluation • V Conclusion

  3. I Motivation • In the 1980s, AFS served about 1,000 clients at CMU • The network was slow and unreliable

  4. In the 1990s, people had • "Powerful" clients: • 33MHz CPU, 16MB RAM, 100MB hard drive • Mobile users appeared: • 1st IBM ThinkPad in 1992 (ThinkPad 700C) We can do something at the client without the network!

  5. Birth of Coda: Disconnected Operation

  6. II Coda Overview (I) – Purpose & Features • Successor of the Andrew File System (AFS), the first DFS aimed at a campus-sized user community • Features • open-to-close consistency • callbacks

  7. Coda Overview (II) – How it works • Clients view Coda as a single location-transparent shared Unix file system • Coda namespace is mapped to individual file servers at the granularity of subtrees called volumes • Each client has a cache manager (Venus)

  8. Coda Overview (III) – How it works • All clients see "/coda", e.g. # cat /coda/tmp/foo • [Diagram: a read/write system call from a client program enters the kernel through the VFS layer and the Coda FS module, which passes it up to Venus at user level; Venus services it from the cache or contacts the Coda server over RPC across the network, and the result is returned from the syscall]

  9. Continue critical work when the network, server, or repository is inaccessible • Key idea: caching data • Performance • Availability

  10. High availability is achieved through • Server replication • Client replication (Cache) • Disconnected Operations

  11. Client replication vs. server replication • Server replication: the set of replicas of a volume is the VSG (Volume Storage Group); at any time, a client can access the AVSG (Available Volume Storage Group); (+) persistent and physically secure, (-) expensive • Client replication (cache): (+) cheap, (-) relatively low quality

  12. Design Rationale – Replica Control • Pessimistic • Disable all partitioned writes • Require a client to acquire control of a cached object prior to disconnection + Acceptable for voluntary disconnections • Optimistic • Assumes no one else is touching the file • Needs sophisticated conflict detection + Fact: write-sharing is rare in Unix + High availability: access anything in range

  13. III Implementation • Venus states • Hoarding • Hoard walking • Prioritized algorithm • Emulation • Reintegration • Conflict handling

  14. Client Structure

  15. Venus States (I) • Hoarding: normal operation mode • Emulating: disconnected operation mode • Reintegrating: propagates changes and detects inconsistencies

  16. Venus States (II)

  17. Hoarding • Hoard useful data for disconnection • How useful is the data? • Prioritized algorithm: cache management • How to keep data up to date? • Hoard walking: reevaluate objects • Balance the needs of connected and disconnected operation • Cache size is restricted • Disconnections are unpredictable

  18. Prioritized algorithm • User-defined hoard priority p: how interesting is the object? • Recent usage q • Object priority = f(p, q) • Evict the object with the lowest priority + Fully tunable: everything can be customized
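
A minimal sketch of the prioritized cache management idea on this slide, assuming f(p, q) is a simple weighted sum; the struct, weight, and function names below are illustrative, not Venus's actual implementation:

```c
#include <stdio.h>

#define HOARD_WEIGHT 0.75   /* assumed relative weight of hoard priority vs. recency */

struct cache_obj {
    const char *name;
    int hoard_pri;      /* p: user-defined hoard priority */
    int recent_usage;   /* q: recency-of-use metric maintained by the cache */
};

/* Total priority f(p, q): a weighted combination of hoard priority and usage. */
static double object_priority(const struct cache_obj *o)
{
    return HOARD_WEIGHT * o->hoard_pri + (1.0 - HOARD_WEIGHT) * o->recent_usage;
}

/* When the cache is full, evict the object with the lowest total priority. */
static int pick_victim(const struct cache_obj *cache, int n)
{
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (object_priority(&cache[i]) < object_priority(&cache[victim]))
            victim = i;
    return victim;
}

int main(void)
{
    struct cache_obj cache[] = {
        { "/coda/usr/alice/thesis.tex", 900, 10 },
        { "/coda/tmp/scratch.o",          0, 80 },
        { "/coda/project/Makefile",     600,  5 },
    };
    printf("evict: %s\n", cache[pick_victim(cache, 3)].name);
    return 0;
}
```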

  19. Hoard Walking • Equilibrium: no uncached object has a higher priority than a cached object • Walking: restore equilibrium • Reload the HDB (hoard database) if it has been changed by others • Reevaluate priorities in the HDB and the cache

  20. Emulation • Act like a server • Record modified objects and replay the update activity later • Preparation: a log per volume • Persistence: metadata is kept in recoverable virtual memory (RVM) • Log exhaustion: compress?
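
A rough sketch of the per-volume update log kept while emulating; the record layout and names are assumptions for illustration only (Venus keeps the real log, along with other metadata, in RVM so it survives client crashes during disconnection):

```c
#include <stdio.h>
#include <time.h>

enum op_kind { OP_STORE, OP_CREATE, OP_REMOVE, OP_RENAME };

/* One record describing a mutating operation performed while disconnected. */
struct log_record {
    enum op_kind kind;
    char path[256];
    time_t when;
};

/* A per-volume log of pending updates, replayed at reintegration time. */
struct volume_log {
    struct log_record recs[1024];
    int n;
};

static int log_append(struct volume_log *log, enum op_kind kind, const char *path)
{
    if (log->n == 1024)
        return -1;                  /* log exhaustion: no space left to record updates */
    struct log_record *r = &log->recs[log->n++];
    r->kind = kind;
    snprintf(r->path, sizeof r->path, "%s", path);
    r->when = time(NULL);
    return 0;
}

int main(void)
{
    static struct volume_log log;   /* zero-initialized */
    log_append(&log, OP_CREATE, "/coda/usr/alice/notes.txt");
    log_append(&log, OP_STORE,  "/coda/usr/alice/notes.txt");
    printf("%d records queued for reintegration\n", log.n);
    return 0;
}
```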

  21. Reintegration • Replay algorithm • Executed in parallel at all AVSG members • Transaction-based • Succeeded? • Yes: free the logs, reset priorities • No: save the logs to a tar file, ask the user for help
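
A sketch of the reintegration control flow, assuming a placeholder replay_log_at_server() stub for the RPC; the real replay algorithm also back-fetches file data and runs against the AVSG members in parallel rather than in this sequential loop:

```c
#include <stdio.h>
#include <stdbool.h>

/* Placeholder for the RPC that ships a volume's log to one server and asks it
 * to replay the operations as a single transaction (a stub, not a real Coda call). */
static bool replay_log_at_server(const char *server, const char *volume)
{
    printf("replaying log of volume %s at %s\n", volume, server);
    return true;
}

/* Reintegrate one volume against every member of the AVSG. */
static bool reintegrate(const char *volume, const char **avsg, int n)
{
    for (int i = 0; i < n; i++)
        if (!replay_log_at_server(avsg[i], volume))
            return false;   /* failure: save the log to a tar file, ask the user for help */
    return true;            /* success: free the log, reset cache priorities */
}

int main(void)
{
    const char *avsg[] = { "server1", "server2" };
    if (reintegrate("u.alice", avsg, 2))
        printf("reintegration succeeded\n");
    return 0;
}
```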

  22. Conflict Handling • Only write/write conflicts are handled • File vs. directory • File: halt the entire reintegration process • Directory: investigate further • Manual repair
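
A toy sketch of detecting a write/write conflict at replay time, assuming a single version number per object; in actual Coda each replayed operation is certified by comparing the version state recorded when the object was cached against the server's current state:

```c
#include <stdio.h>
#include <stdbool.h>

struct object_version {
    long cached;    /* version the client saw when it cached the object */
    long current;   /* version currently held by the server */
};

/* A disconnected update is in write/write conflict if someone else updated
 * the object at the server after this client cached it. */
static bool write_write_conflict(const struct object_version *v)
{
    return v->current != v->cached;
}

int main(void)
{
    struct object_version v = { .cached = 7, .current = 9 };
    if (write_write_conflict(&v))
        printf("conflict detected: halt reintegration for this file\n");
    return 0;
}
```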

  23. Coda Evaluation • Hardware • 386 laptop, DECstation 3100s • 350MB disk • How …? • How long does reintegration take? • How large a local disk does one need? • How likely are conflicts?

  24. Answers • Duration of reintegration • A few hours of disconnection reintegrate in about 1 minute, unless very large data transfers are required • Cache size • 100MB at the client is enough for a "typical" workday • Conflicts • Over 99% of modifications were made by the previous writer • Probability of two users modifying the same object within a day: < 0.75%

  25. Conclusion • Disconnected operation is a simple idea, but each stage is hard to implement • An extended form of write-back caching • Feasible, efficient, and usable

  26. Q1 Can Coda be easily extended to become a code repository? Q2 They do not handle any conflict resolution between simultaneously modified files and state that, even on collaborative projects, most files were modified by the same person. Wouldn't this break down with rigorously change-logged code, as in many projects today? Also, is aborting the entire reintegration because of a single file a good idea?

  27. Q3 Coda's conflict resolution mechanism seems complex and time-consuming even though it provides a tool for this purpose. What kinds of conflicts might be difficult for Coda to resolve without user intervention?

  28. Q4 Do we need to address logical extremes like this in file systems? The system is worth little during disconnection if a user cannot work with the set of files he already has. E.g.: a developer may continue development with his local copy of the code, but to compile it he may need shared libraries and other statically linked libraries that live on the server. In that situation he cannot continue, because part of the source code and the libraries are on the server. (I am assuming the libraries and source files are large in both size and number.)

  29. Q5 Why is back-fetching special? Why is it done in a separate stage? If the metadata is immediately updated to point at the shadow file (as the text suggests), why can't writes be pushed to final storage immediately?

  30. Thanks!
