Frangipani: A Scalable Distributed File System




  1. Frangipani: A Scalable Distributed File System C. A. Thekkath, T. Mann, and E. K. Lee Systems Research Center Digital Equipment Corporation Presented by: Zhiyong (Ricky) Cheng

  2. fran·gi·pani (fran′ji pan′ē, pän′ē) From yourdictionary.com: any of a genus (Plumeria) of tropical American shrubs and trees of the dogbane family, with large, funnel-shaped flowers and milky sap; specif., a small tree with fragrant, reddish flowers that are used, in Hawaii, to make leis.

  3. Outline • Background • Introduction • System Structure • Disk Layout • Logging and Recovery • The Lock Service • Easy Administration • Performance • Conclusions • Questions

  4. Background • About the main author Personal website: http://research.microsoft.com/users/thekkath/ Chandu Thekkath is the Director of the Platforms and Distributed Systems Group in ISRC within Microsoft Research. Before that he was a Principal Researcher at Microsoft Research in Silicon Valley. At DEC, Thekkath's most influential work was the Petal/Frangipani project, done jointly with E. Lee and T. Mann. This system supported a scalable, distributed virtual disk and file system. It was completed (and made public) in 1997, influenced the design of Compaq's VersaStore products (a self-developed storage virtualization strategy; HP: StorageApps), and predates many of the storage and NAS (network-attached storage) appliances in the industry today.

  5. Background (cont'd) • Original slides: http://ftp.digital.com/pub/Digital/SRC/publications/thekkath/talk/frangipani-sosp.ppt • This paper builds on two related papers: Edward K. Lee and Chandramohan A. Thekkath, Petal: Distributed Virtual Disks, Proceedings of the Seventh International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 84-92, October 1996, Cambridge, Massachusetts, United States. Leslie Lamport, The Part-Time Parliament, Technical Report 49, Digital Equipment Corporation, Systems Research Center, 130 Lytton Ave., Palo Alto, CA 94301-1044, September 1989.

  6. Introduction – What's the problem? • Many distributed file systems already exist: • the VMS Cluster file system, Echo, Calypso, etc. • Generally, large-scale distributed file systems are hard to manage: much file system administration work requires human intervention and must be done manually. • The administration problem is caused by • Growing computer installations. • More disks attached to more machines (more components).

  7. Introduction – Solution • Frangipani • A new scalable distributed file system. • Two-layered model: built on top of Petal, a distributed storage system. • Can also be viewed as a cluster file system. • It solves the administration problem by • Giving all users a consistent view of files. • Letting Frangipani servers be added easily to an existing installation to improve performance. • Adding users without manual configuration. • Dynamic/hot backup support. • Fault tolerance (machine, network, and disk failures).

  8. Introduction – Layered structure [diagram: Frangipani file servers layered over a distributed lock service and the Petal virtual disk, which sits on the physical disks]

  9. System Structure – Common case [diagram: workstations running Frangipani above the Petal virtual disk]

  10. System Structure – Components • User programs access Frangipani through the standard operating system call interface (the Digital Unix vnode interface). • The Frangipani file server module runs within the OS kernel. • Changes to file contents are staged through the local kernel buffer pool and may be volatile until the next fsync/sync system call. • Metadata changes are logged to Petal and guaranteed non-volatile (write-ahead redo log, discussed later).

  11. Components (cont'd) • Reads/writes Petal virtual disks through the local Petal device driver. • Exploits Petal's large virtual address space. • More details in a separate paper. • The lock service • Multiple-reader/single-writer locks • Locks with leases (discussed later)

  12. Client/Server configuration • Security issues: • Any Frangipani machine can read/write any block of the shared Petal virtual disk. • Eavesdropping on the network interconnecting the Petal and Frangipani machines. • Solution: run Frangipani, Petal, and the lock servers on trusted networks, machines, and OSs. • Client/Server configuration: • All the servers are interconnected by a private network. • Remote, untrusted clients talk to Frangipani servers through a separate network (and have no access to Petal). • Bonus: clients can use Frangipani without modifying their OS kernel.

  13. Client/Server configuration (cont'd)

  14. System Structure – Design issues • Why not use an existing file system on Petal? • Petal works with existing file systems. • But traditional file systems such as UFS and AdvFS (the comparison target in the performance section) cannot share a block device. • The single machine running the file system can become a bottleneck. • Why choose a two-layer structure? • The two-layer structure is not unique, e.g., the Universal File Server. • Modularity: Frangipani machines can be added and deleted transparently. • Consistent backup without halting the system. • Whether it is the right choice depends on the design goals of the file system.

  15. Design issues (cont'd) • Three aspects of the Frangipani design can be problematic: • Duplicated logging: data is sometimes logged both by Petal and by Frangipani. • Frangipani doesn't use disk location information in placing data. • Frangipani locks entire files and directories rather than individual blocks.

  16. Disk Layout • 2^64 bytes of address space provided by Petal. • Space is committed/decommitted in large chunks – 64 KB. • Six regions in the address space: • 1st region stores shared configuration parameters and housekeeping information – 1 TB. • 2nd region stores logs; each Frangipani server has one. 1 TB is reserved, partitioned into 256 logs. • 3rd region holds allocation bitmaps, describing which blocks in the remaining regions are free – 3 TB. • 4th region holds inodes: 1 TB of inode space, each inode 512 bytes long.

  17. Disk Layout (cont'd) • 5th region holds small data blocks, each 4 KB in size – 7 TB allocated. • The remainder holds large data blocks, 1 TB for each large file, giving a 2^24 large-file limit. • Frangipani takes advantage of Petal's large, sparse disk address space to simplify its data structures. (A sketch of the address arithmetic follows.)
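To make the region arithmetic concrete, here is a minimal sketch (not from the paper) of how the 2^64-byte Petal address space could be carved up using the sizes on these two slides. The base offsets and helper names are illustrative assumptions.

```python
# Region sizes from the slides; base offsets and helper names are illustrative.
TB = 2 ** 40

PARAMS_BASE  = 0 * TB   # region 1: shared config parameters, housekeeping (1 TB)
LOGS_BASE    = 1 * TB   # region 2: 1 TB reserved, partitioned into 256 logs
BITMAPS_BASE = 2 * TB   # region 3: allocation bitmaps (3 TB)
INODES_BASE  = 5 * TB   # region 4: inodes, 512 bytes each (1 TB)
SMALL_BASE   = 6 * TB   # region 5: 4 KB small data blocks (7 TB)
LARGE_BASE   = 13 * TB  # region 6: one sparse 1 TB slice per large file

INODE_SIZE  = 512
SMALL_BLOCK = 4 * 1024
LOG_SLICE   = TB // 256          # one log slice per Frangipani server

def log_base(server_no: int) -> int:
    return LOGS_BASE + server_no * LOG_SLICE

def inode_addr(inode_no: int) -> int:
    return INODES_BASE + inode_no * INODE_SIZE

def small_block_addr(block_no: int) -> int:
    return SMALL_BASE + block_no * SMALL_BLOCK

def large_file_base(file_no: int) -> int:
    # dividing the remaining space into 1 TB slices yields the ~2^24 file limit
    return LARGE_BASE + file_no * TB
```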

  18. Logging and Recovery • Frangipani uses a write-ahead redo log for metadata (sketched below). • Metadata: any on-disk data structure other than the contents of an ordinary file. • Log records are kept on Petal. • Logs are bounded in size – 128 KB. • Data is written to Petal • on fsync/sync system calls, or every 30 seconds; • on lock revocation, or when the log wraps. • Each Frangipani machine has a separate log. • Reduces contention. • Enables independent recovery.
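As a rough illustration of the write-ahead discipline described above, the sketch below appends a redo record to the bounded per-server log before applying the metadata update in place. The petal object and its write/sync_metadata/apply methods are hypothetical stand-ins, and the record format is invented.

```python
import struct

LOG_SIZE = 128 * 1024  # each per-server log is bounded at 128 KB

def log_metadata_update(petal, log_base, state, update: bytes):
    """Append a redo record, then apply the change in place (write-ahead rule)."""
    record = struct.pack("<QI", state["seq"], len(update)) + update
    if state["head"] + len(record) > LOG_SIZE:
        petal.sync_metadata()            # before wrapping, push logged updates out
        state["head"] = 0                # then the oldest log space can be reused
    petal.write(log_base + state["head"], record)  # 1. record reaches Petal first
    state["head"] += len(record)
    state["seq"] += 1
    petal.apply(update)                  # 2. only now update the metadata itself
```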

  19. Logging and Recovery (cont'd) • Frangipani server crashes can be detected in two ways: • by a client of the failed server; or • when the lock service asks the failed server to return a lock it is holding. • Generally, recovery is initiated by the lock service. • A recovery daemon takes ownership of the failed server's log and locks. • After replaying the log, it releases all the locks and frees the log. • Recovery can be carried out on any machine. • The log is distributed and available via Petal.
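A hedged sketch of that recovery path: any machine can take over the failed server's log and locks, replay the records, then release everything. The per-block version check follows the paper's description of how stale log records are skipped on replay; all names here are illustrative.

```python
def recover(petal, lock_service, failed_log):
    """Illustrative recovery pass; record and version fields are assumptions."""
    lock_service.take_ownership(failed_log.locks)   # fence off the dead server
    for rec in failed_log.records():                # scan records in log order
        block = petal.read_metadata(rec.block_addr)
        if block.version < rec.version:             # skip already-applied updates
            petal.write(rec.block_addr, rec.new_bytes)
    lock_service.release(failed_log.locks)          # then free the locks...
    failed_log.mark_free()                          # ...and the log itself
```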

  20. Lock Service • Multiple-reader/single-writer lock mechanism • A read lock allows a server to read data and cache it. • A write lock allows a server to read or write data. • When a write lock is downgraded or released, the server must flush its dirty data to disk. • Locks are moderately coarse-grained • One lock per logical segment. • Each file, directory, or symbolic link is one segment. • A lock protects an entire file or directory.
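The cache-coherence rule on this slide, in sketch form: before a write lock leaves a server (by release or downgrade), dirty cached data for that segment must reach Petal. The cache, petal, and lock objects are hypothetical.

```python
def yield_write_lock(cache, petal, lock, downgrade_to_read: bool):
    """Flush dirty data before a write lock leaves this server (illustrative)."""
    for block in cache.dirty_blocks(lock.segment):
        petal.write(block.addr, block.data)   # dirty data must reach Petal first
    cache.mark_clean(lock.segment)
    if downgrade_to_read:
        lock.downgrade()                      # keep reading: cache remains valid
    else:
        cache.invalidate(lock.segment)        # without any lock, the cache is stale
        lock.release()
```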

  21. Lock Service (cont'd) • Deadlock is avoided by globally ordering the locks • and acquiring them in two phases: • Phase one: a server determines what locks it needs. Which files or directories? Read locks or write locks? • Phase two: the server sorts the locks by inode address and acquires each lock in turn. • It then checks whether any objects identified in phase one were modified while their locks were released. If so, the server releases the locks and loops back to phase one. (See the sketch below.)
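A compact sketch of that two-phase loop, with invented names; it assumes each object exposes an inode address (the global sort key) and a version that changes whenever the object is modified.

```python
def acquire_locks(server, needed_objects):
    """Sketch of the slide's two-phase, deadlock-free locking loop."""
    while True:
        # Phase one: decide which locks are needed and remember object versions.
        plan = sorted(needed_objects, key=lambda obj: obj.inode_addr)
        versions = {obj: obj.version for obj in plan}
        # Phase two: acquire in global inode-address order, so no cycles form.
        for obj in plan:
            server.lock_service.acquire(obj.lock, mode=obj.needed_mode)
        # Validate: if anything changed while unlocked, release and retry.
        if all(obj.version == versions[obj] for obj in plan):
            return plan                    # success: caller holds all the locks
        for obj in plan:
            server.lock_service.release(obj.lock)
```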

  22. Lock Service (cont'd) • The lock service deals with client failure using leases (sketched below) • A client obtains a lease together with a lock. When the lease expires, the client must renew it or the lock becomes invalid. • Three different implementations (key problem: where to store the lock state?): • 1st: a single, centralized server. All lock state is kept in the server's volatile memory. • 2nd: primary/backup servers. The lock state is stored on a Petal virtual disk, so in case of a server crash the lock state can be recovered. Poor performance.
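A minimal client-side sketch of a leased lock, assuming a 30-second lease (the figure the paper uses) and a hypothetical lock_service with acquire/renew calls; the lock is only usable while its lease is fresh.

```python
import threading, time

LEASE_SECS = 30.0  # the paper sets the lease expiration time to about 30 seconds

class LeasedLock:
    """Client-side view of a lock with a lease; lock_service is hypothetical."""
    def __init__(self, lock_service, name):
        self.service, self.name = lock_service, name
        self.expires = 0.0

    def acquire(self, mode="write"):
        self.service.acquire(self.name, mode)   # the lock comes with a lease
        self._granted()

    def _granted(self):
        self.expires = time.time() + LEASE_SECS
        t = threading.Timer(LEASE_SECS / 2, self._renew)  # renew at half-life
        t.daemon = True
        t.start()

    def _renew(self):
        if self.service.renew(self.name):       # lock server extends the lease
            self._granted()
        # if renewal fails, the lease runs out and the lock becomes invalid

    def usable(self) -> bool:
        return time.time() < self.expires       # expired lease => invalid lock
```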

  23. Lock Service (cont'd) • 3rd and final: a set of mutually cooperating lock servers, plus a clerk module linked into each Frangipani server. Result: fully distributed, for fault tolerance and scalable performance. • Highlights of the final implementation: • The lock servers maintain a lock table for each Frangipani server. The clerk module handles communication (via asynchronous messages). • A small amount of global state information is replicated across all lock servers using Lamport's Paxos algorithm. (Also used in Google's Chubby lock service: http://labs.google.com/papers/chubby.html)

  24. Easy Administration (adding/removing servers) • Adding another Frangipani server requires a minimal amount of administrative work: • which Petal virtual disk to use, • and where to find the lock service. • Removing a Frangipani server is even easier: • simply shut the server off. The lock servers invalidate the locks held by the server after its lease expires and initiate the recovery service to run its redo log.

  25. Easy Administration – backup • Petal's snapshot feature provides a convenient way to make a consistent full dump of a Frangipani file system • Uses copy-on-write techniques (illustrated below). • Crash-consistent: a snapshot reflects a coherent state. • To back up a Frangipani file system: • take a Petal snapshot, • and copy it to tape.
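A toy illustration of why a copy-on-write snapshot is cheap and crash-consistent: taking the snapshot just freezes the address map, and no data blocks are copied. This is not Petal's actual epoch-number scheme, only the general idea.

```python
class CowDisk:
    """Toy copy-on-write disk, illustrating Petal-style snapshots."""
    def __init__(self):
        self.blocks = {}        # physical storage: block id -> bytes
        self.map = {}           # live mapping: virtual addr -> block id
        self.next_id = 0

    def snapshot(self) -> dict:
        return dict(self.map)   # freeze the mapping; no data is copied

    def write(self, addr: int, data: bytes):
        bid, self.next_id = self.next_id, self.next_id + 1
        self.blocks[bid] = data  # write to a fresh block...
        self.map[addr] = bid     # ...and remap; snapshots still see old blocks

    def read(self, addr: int, snap: dict | None = None) -> bytes:
        mapping = snap if snap is not None else self.map
        return self.blocks[mapping[addr]]
```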

  26. Performance – Experimental setup • Non-volatile memory (NVRAM) • Solves Frangipani server latency problems. • Placed between the physical disks and the Petal servers. • Ideal testbed: • 100 Petal nodes (small array controllers). • 50 Frangipani servers (typical workstations). • Reality: • Seven 333 MHz DEC Alpha 500 5/333 machines as Petal servers. • Each with 9 DIGITAL RZ29 disks, 4.3 GB each. • Connected to a 24-port ATM switch over 155 Mbit/s links.

  27. Single Machine Performance • Why compare against AdvFS? • Significantly faster than the BSD-derived UFS file system. • Can stripe files across multiple disks. • Uses a write-ahead log, like Frangipani. • Caveat: the Frangipani file system uses no local disks, while AdvFS uses locally attached disks. • For the Modified Andrew Benchmark (MAB), the file system is unmounted at the end of each phase – the same methodology as in tests of log-based file systems.

  28. Single Machine Performance [Table 1: Modified Andrew Benchmark with unmount operations. Table 2: Frangipani throughput and CPU utilization.]

  29. Scaling [chart: Frangipani scaling on the Modified Andrew Benchmark – elapsed time (secs) vs. number of Frangipani machines]

  30. Scaling (cont'd) [chart: Frangipani scaling on uncached read – throughput (MB/s) vs. number of Frangipani machines]

  31. Scaling (cont'd) [chart: Frangipani scaling on write – throughput (MB/s) vs. number of Frangipani machines]

  32. Conclusions • Frangipani was feasible to build because of its two-layer structure: • all shared state is on a Petal disk, making it easy to add, delete, and recover servers; • Frangipani servers do not communicate with each other, keeping the system simple to design, implement, debug, and test. • Frangipani's performance is comparable to a production DIGITAL Unix file system (AdvFS). • Still at an early prototype stage; more experience is needed to improve scalability, add finer-grained locking, etc. • Applications: • Influenced the design of Compaq's VersaStore products. • Predates many of the storage and NAS appliances in the industry today.

  33. Questions – Two-layer structure • What are the advantages of using a two-level Frangipani/Petal approach? It seems that the authors developed Petal first and then designed a file system to work with the storage abstraction Petal provides. The two-level approach has some limitations, e.g., the inability to use disk location information in placing data, as mentioned at the end of Section 2. Is it possible to combine the two layers? • Why is there even a need for Frangipani? By the looks of Figure 2, it seems like Petal exports all the disks as one giant virtual disk. Couldn't a normal file system (like ext3) be put on top of Petal (much like LVM, but distributed)?

  34. Two-layer structure (cont'd) • Maybe I'm missing the point of Frangipani... but in Section 7 the authors talk about adding additional servers. It sounds like Frangipani is a file system interface you use to access a Petal virtual disk, so why would you need more than one such interface running on a machine? • Why did the authors stop at two layers rather than more? Perhaps splitting parts of a layer further could simplify the concepts of the file system. Or would more layers increase the complexity of the system? • Do you believe the two-layered approach is the ideal way to design a distributed file system?

  35. Security • In Section 2.2 the authors discuss security and say that "it would not be sufficient for a Frangipani machine to authenticate itself to Petal as acting on behalf of a particular user." Why not? Is this because Petal has no knowledge of users and just acts as a disk, or is it something else? • The authors state that even though Frangipani is designed to work well in a "cluster of workstations within a single administrative domain," it could be exported to "untrusted machines outside an administrative domain." How would this affect the administration of the system? Can the current design cope with this? What about security with untrusted machines now part of the network? • The authors list some security measures they could implement (but haven't) and state that doing so would "reach roughly the NFS level of security." What does "roughly" mean?

  36. Security (cont'd) • Security seems to be a fairly significant limitation of the presented system, as the authors state that they haven't implemented any security measures. Is there any follow-up work that solves this problem? • The authors claim that Frangipani can be "exported to untrusted machines using ordinary network file access protocols," but wouldn't the network's file storage be compromised? • There seems to be a lot (maybe too much) trust in the Frangipani system [i.e., "Frangipani servers trust one another, the Petal servers, and the lock service" (p. 2) and "Any Frangipani machine can read or write any block of the shared Petal virtual disk, so Frangipani must run only on machines with trusted operating systems" (p. 3)]. Is this very secure?

  37. Logging/Recovery • For recovery, how is the recovery daemon assigned? Does the fact that only one recovery daemon is active mean that there is no partitioning? • The paper states that "only metadata is logged, not user data, so a user has no guarantee that the file system state is consistent from his point of view after a failure. We do not claim these semantics to be ideal, but they are the same as what standard local Unix file systems provide." (p. 5) Is it reasonable for users' data to be inconsistent after a failure? I don't think it is, and I don't believe 'standard local file systems do that' is a good excuse. Would it be beneficial to combine this with systems such as the Elephant file system to provide more data security?

  38. Performance • How can the authors claim that their system is scalable when most of their performance tests are done on a single server? They did look at the "average time taken by one Frangipani machine" on the Modified Andrew Benchmark with up to six machines, but how can one claim a system is scalable with only six machines? • The authors promote the reliability of their system (reliably detecting the end of the log, a method that reliably rejects writes with expired leases, etc.), yet they fail to report supporting test data. Is this an oversight, or perhaps an overly optimistic prediction on their part? • The performance of Frangipani was about the same as (but slightly worse than) that of AdvFS, even though Frangipani has five times the bandwidth. How does it compare to other distributed file systems like AFS?

  39. Scalability / Consistency • Is Frangipani scalable to large network environments? Can it handle the attendant communication issues? • The authors state that Petal "optionally replicates data for high availability." If data is replicated, how can the system guarantee that "changes made to a file or directory on one machine are immediately visible on all others"? What mechanisms does Petal employ to ensure this level of consistency?

  40. Contributions / Later work • What are the significant contributions of the Frangipani distributed file system (i.e., what was the new concept)? The authors seem to rely heavily on Petal for many of the details. They repeatedly stress the simplicity of their idea (which isn't a bad thing at all), but... what did they do? (A locking system for shared file management vs. a log-based system?) • Frangipani is not very portable. Have there been attempts to develop Frangipani for other systems or to make it more portable?

  41. THANK YOU!
