
Chapter 5: Distributed File Systems



  1. Chapter 5: Distributed File Systems

  2. Introduction • A distributed file system enables programs to store and access remote files from any computer on a network. • The performance and reliability experienced for access to files stored at a server should be comparable to that of files stored on a local disk. • Basic distributed file service implementations: • Sun Network File System (NFS) • Andrew File System (AFS)

  3. Introduction (Cont.) Storage systems and their properties • In the first generation of distributed systems (1974-95), file systems (e.g. NFS) were the only networked storage systems. • With the advent of distributed object systems (CORBA, Java) and the web, the range of storage systems has become more complex.

  4. Fig. 8.1: Storage systems and their properties

                                   Sharing  Persistence  Distributed      Consistency  Example
                                                         cache/replicas   maintenance
    Main memory                       X          X            X               1        RAM
    File system                       X          √            X               1        UNIX file system
    Distributed file system           √          √            √               √        Sun NFS
    Web                               √          √            √               X        Web server
    Distributed shared memory         √          X            √               √        Ivy (DSM, Ch. 18)
    Remote objects (RMI/ORB)          √          X            X               1        CORBA
    Persistent object store           √          √            X               1        CORBA Persistent Object Service
    Peer-to-peer storage system       √          √            √               2        OceanStore (Ch. 10)

    Types of consistency between copies:
    1 - strict one-copy consistency
    √ - approximate consistency, slightly weaker guarantees
    X - no automatic consistency
    2 - considerably weaker guarantees

  5. Introduction (Cont.) What is a file system? • Persistent stored data sets • A hierarchic name space visible to all processes • An API with the following characteristics: - access and update operations on persistently stored data sets - a sequential access model (with additional random-access facilities) - sharing of data between users, with access control • Concurrent access: certainly for read-only access • Other features: mountable file stores

  6. Fig. 8.2: File system modules

  7. What is a file system? Fig. 8.3: File attribute record structure
    Updated by the system: File length, Creation timestamp, Read timestamp, Write timestamp, Attribute timestamp, Reference count
    Updated by the owner: Owner, File type, Access control list (e.g. for UNIX: rw-rw-r--)

  8. Distributed File System Requirements • Transparency • Concurrency • Replication • Heterogeneity • Fault tolerance • Consistency • Security • Efficiency

    Transparencies
    • Access: same operations for local and remote files
    • Location: same name space after relocation of files or processes
    • Mobility: automatic relocation of files is possible
    • Performance: satisfactory performance across a specified range of system loads
    • Scaling: service can be expanded to meet additional loads

    Concurrency properties
    • Isolation
    • File-level or record-level locking
    • Other forms of concurrency control to minimise contention

    Replication properties
    • File service maintains multiple identical copies of files
    • Load-sharing between servers makes the service more scalable
    • Local access has better response (lower latency)
    • Fault tolerance
    • Full replication is difficult to implement; caching (of all or part of a file) gives most of the benefits (except fault tolerance)

    Heterogeneity properties
    • Service can be accessed by clients running on (almost) any OS or hardware platform
    • Design must be compatible with the file systems of different OSes
    • Service interfaces must be open - precise specifications of APIs are published

    Fault tolerance
    • Service must continue to operate even when clients make errors or crash
    • at-most-once semantics, or at-least-once semantics (which requires idempotent operations)
    • Service must resume after a server machine crashes
    • If the service is replicated, it can continue to operate even during a server crash

    Consistency
    • UNIX offers one-copy update semantics for operations on local files - caching is completely transparent
    • Difficult to achieve the same for distributed file systems while maintaining good performance and scalability

    Security
    • Must maintain access control and privacy as for local files: based on the identity of the user making the request; identities of remote users must be authenticated; privacy requires secure communication
    • Service interfaces are open to all processes not excluded by a firewall - vulnerable to impersonation and other attacks

    Efficiency
    • The goal for distributed file systems is usually performance comparable to a local file system

  9. Distributed File System Requirements • The file service is the most heavily loaded service in an intranet, so its functionality and performance are critical. • The design should support many transparencies: • Access transparency: clients should be unaware of the distribution of files, i.e. a single set of operations should be used to access local and remote files. • Location transparency: clients should see the same name space for a file irrespective of its location. • Mobility transparency: a change in the location of a file should not affect the user; the change should be detected automatically by administration tables. • Performance transparency: the performance seen by the client should remain satisfactory as the load on the service varies. • Scaling transparency: the service should be extendable to deal with a wide range of loads.

  10. File Service Architecture • A file service architecture has three divisions of responsibility: • Flat file service: concerned with implementing operations on the contents of files. UFIDs (Unique File Identifiers) are used to refer to files; UFIDs also differentiate between directories and files. • Directory service: maps between text names of files and their UFIDs. • Client module: a client module runs in each client computer. It integrates and extends the operations of the flat file service and the directory service under a single interface.

  11. Fig. 8.5: Model file service architecture. The client computer runs the application programs and the client module; the server computer provides the directory service (Lookup, AddName, UnName, GetNames) and the flat file service (Read, Write, Create, Delete, GetAttributes, SetAttributes).

  12. Server operations for the model file service (Figs. 8.6 and 8.7)

    Flat file service (Fig. 8.6):
    Read(FileId, i, n) -> Data (i: position of first byte)
    Write(FileId, i, Data) (i: position of first byte)
    Create() -> FileId
    Delete(FileId)
    GetAttributes(FileId) -> Attr
    SetAttributes(FileId, Attr)

    Directory service (Fig. 8.7):
    Lookup(Dir, Name) -> FileId
    AddName(Dir, Name, File)
    UnName(Dir, Name)
    GetNames(Dir, Pattern) -> NameSeq

    FileId: a unique identifier for files anywhere in the network.
    Pathname lookup: pathnames such as '/usr/bin/tar' are resolved by iterative calls to Lookup(), one call for each component of the path, starting with the ID of the root directory '/', which is known in every client.
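To make the pathname-lookup step concrete, here is a minimal Python sketch (illustrative only; DirectoryService, ROOT_DIR_ID and resolve() are assumed names, not part of the model service definition) that resolves a pathname with one Lookup() call per path component:

    # Illustrative sketch of iterative pathname resolution in the model
    # file service. DirectoryService and ROOT_DIR_ID are hypothetical names.

    ROOT_DIR_ID = "ufid-root"   # the UFID of '/', known to every client

    class DirectoryService:
        """Stub standing in for the directory service interface."""
        def __init__(self, entries):
            # entries maps (directory UFID, name) -> UFID
            self.entries = entries

        def lookup(self, dir_id, name):
            try:
                return self.entries[(dir_id, name)]
            except KeyError:
                raise FileNotFoundError(name)

    def resolve(path, directory_service):
        """Resolve a pathname such as '/usr/bin/tar', one lookup per component."""
        file_id = ROOT_DIR_ID
        for component in path.strip("/").split("/"):
            file_id = directory_service.lookup(file_id, component)
        return file_id

    # Example: a tiny in-memory name space
    ds = DirectoryService({
        (ROOT_DIR_ID, "usr"): "ufid-usr",
        ("ufid-usr", "bin"): "ufid-usr-bin",
        ("ufid-usr-bin", "tar"): "ufid-tar",
    })
    assert resolve("/usr/bin/tar", ds) == "ufid-tar"

Each path component costs one call to the directory service, which is why caching the results of recent lookups matters in practice.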

  13. File Group • A collection of files that can be located on any server or moved between servers while maintaining the same names. A file cannot change its group. • Similar to a UNIX filesystem. • Helps with distributing the load of file serving between several servers. • File groups have identifiers that are unique throughout the system (and hence, for an open system, they must be globally unique). They are used to refer to file groups and files.

  14. File Group (Cont.) To construct a globally unique ID we use some unique attribute of the machine on which it is created, e.g. its IP address, even though the file group may move subsequently. File Group ID: a 32-bit IP address followed by a 16-bit date.
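As a rough illustration of that 48-bit layout (a sketch under the assumption that the ID is simply the 32-bit IPv4 address concatenated with a 16-bit date field; the function name is hypothetical):

    import ipaddress

    def make_file_group_id(ipv4_addr: str, date16: int) -> int:
        """Pack a 32-bit IPv4 address and a 16-bit date value into a 48-bit
        file group identifier (illustrative layout, not a standard format)."""
        ip_bits = int(ipaddress.IPv4Address(ipv4_addr))   # 32 bits
        if not 0 <= date16 < 2 ** 16:
            raise ValueError("date field must fit in 16 bits")
        return (ip_bits << 16) | date16

    # Example: group created on host 192.168.1.7, date encoded as a day count
    gid = make_file_group_id("192.168.1.7", 12345)
    print(hex(gid))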

  15. Case Study: Sun NFS • An industry standard for file sharing on local networks since the 1980s. • An open standard with clear and simple interfaces. • Supports many of the design requirements already mentioned: transparency, heterogeneity, efficiency, fault tolerance. • Limited achievement of: concurrency, replication, consistency, security.

  16. Fig. 8.8: NFS architecture. On the client computer, application programs make UNIX system calls to the UNIX kernel; the virtual file system directs operations on local files to the local file system (the UNIX file system or another file system) and operations on remote files to the NFS client module, which communicates with the NFS server module in the server computer's kernel using the NFS protocol (remote operations).

  17. NFS architecture: does the implementation have to be in the system kernel? No: • there are examples of NFS clients and servers that run at application level as libraries or processes (e.g. early Windows and MacOS implementations, current PocketPC, etc.) Continues on next slide ...

  18. But, for a Unix implementation there are advantages: • Binary code compatible - no need to recompile applications • Standard system calls that access remote files can be routed through the NFS client module by the kernel • Shared cache of recently-used blocks at client • Kernel-level server can access i-nodes and file blocks directly • but a privileged (root) application program could do almost the same. • Security of the encryption key used for authentication.

  19. Fig. 8.9: NFS server operations (simplified) – 1
    lookup(dirfh, name) -> fh, attr: returns the file handle and attributes for the file name in the directory dirfh.
    create(dirfh, name, attr) -> newfh, attr: creates a new file name in directory dirfh with attributes attr and returns the new file handle and attributes.
    remove(dirfh, name) -> status: removes file name from directory dirfh.
    getattr(fh) -> attr: returns the file attributes of file fh. (Similar to the UNIX stat system call.)
    setattr(fh, attr) -> attr: sets the attributes (mode, user id, group id, size, access time and modify time) of a file. Setting the size to 0 truncates the file.
    read(fh, offset, count) -> attr, data: returns up to count bytes of data from a file starting at offset. Also returns the latest attributes of the file.
    write(fh, offset, count, data) -> attr: writes count bytes of data to a file starting at offset. Returns the attributes of the file after the write has taken place.
    rename(dirfh, name, todirfh, toname) -> status: changes the name of file name in directory dirfh to toname in directory todirfh.
    link(newdirfh, newname, dirfh, name) -> status: creates an entry newname in the directory newdirfh which refers to file name in the directory dirfh.
    Continues on next slide ...

  20. Fig. 8.9: NFS server operations (simplified) – 2
    symlink(newdirfh, newname, string) -> status: creates an entry newname in the directory newdirfh of type symbolic link with the value string. The server does not interpret the string but makes a symbolic link file to hold it.
    readlink(fh) -> string: returns the string that is associated with the symbolic link file identified by fh.
    mkdir(dirfh, name, attr) -> newfh, attr: creates a new directory name with attributes attr and returns the new file handle and attributes.
    rmdir(dirfh, name) -> status: removes the empty directory name from the parent directory dirfh. Fails if the directory is not empty.
    readdir(dirfh, cookie, count) -> entries: returns up to count bytes of directory entries from the directory dirfh. Each entry contains a file name, a file handle, and an opaque pointer to the next directory entry, called a cookie. The cookie is used in subsequent readdir calls to start reading from the following entry. If the value of cookie is 0, reads from the first entry in the directory.
    statfs(fh) -> fsstats: returns file system information (such as block size, number of free blocks and so on) for the file system containing a file fh.

  21. NFS Access Control & Authentication • Stateless server: the user's identity and access rights must be checked by the server on each request. (In a local file system they are checked only at open().) • Every client request is accompanied by the userID and groupID. • The server is exposed to impostor attacks unless the userID and groupID are protected by encryption. • Kerberos has been integrated with NFS to provide a stronger and more comprehensive security solution.
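Conceptually, the per-request check looks like the sketch below (illustrative Python; the Credentials class and authorize_read() are assumed names, not the actual Sun RPC AUTH_UNIX wire format):

    from dataclasses import dataclass

    @dataclass
    class Credentials:
        user_id: int    # userID accompanying every NFS request
        group_id: int   # groupID accompanying every NFS request

    def authorize_read(cred: Credentials, file_owner: int, file_group: int,
                       mode: str) -> bool:
        """Stateless check performed by the server on each request,
        using UNIX-style permission bits such as 'rw-rw-r--'."""
        if cred.user_id == file_owner:
            return "r" in mode[0:3]      # owner permission bits
        if cred.group_id == file_group:
            return "r" in mode[3:6]      # group permission bits
        return "r" in mode[6:9]          # other permission bits

    print(authorize_read(Credentials(1000, 100), 1000, 100, "rw-rw-r--"))

Because the server keeps no per-client state, the same check is repeated on every read or write, unlike a local file system where rights are checked once at open().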

  22. Mount Service • Mount operation: mount(remotehost, remotedirectory, localdirectory) • The server maintains a table of clients that have mounted filesystems at that server. • Each client maintains a table of mounted file systems holding: <IP address, port number, file handle>.
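A client-side mount table of the kind described above might be modelled as follows (a minimal sketch; the MountEntry class and field names are assumptions):

    from dataclasses import dataclass

    @dataclass
    class MountEntry:
        # one entry per remote file system mounted at this client
        server_ip: str            # IP address of the NFS server
        port: int                 # port number of the server's service
        root_file_handle: bytes   # file handle of the remote mounted directory

    # e.g. the result of: mount(remotehost, remotedirectory, localdirectory)
    mount_table = {
        "/usr/students": MountEntry("10.0.0.1", 2049, b"\x01\x02"),
        "/usr/staff":    MountEntry("10.0.0.2", 2049, b"\x03\x04"),
    }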

  23. Fig. 8.10: Local & remote file systems accessible on NFS client Note: The file system mounted at /usr/students in the client is actually the sub-tree located at /export/people in Server 1; The file system mounted at /usr/staff in the client is actually the sub-tree located at /nfs/users in Server 2.

  24. NFS Optimization - Server Caching • Similar to UNIX file caching for local files: pages (blocks) from disk are held in a main-memory buffer cache until the space is required for newer pages; read-ahead and delayed-write optimizations are used. • For local files, writes are deferred to the next sync event (30-second intervals). • This works well in the local context, where files are always accessed through the local cache, but in the remote case it doesn't offer the necessary synchronization guarantees to clients.

  25. NFS Optimization - Server Caching (Cont.) • NFS v3 servers offer two strategies for updating the disk: • write-through: altered pages are written to disk as soon as they are received at the server. When a write() RPC returns, the NFS client knows that the page is on the disk. • delayed commit: pages are held in the cache until a commit() call is received for the relevant file; this is the default mode used by NFS v3 clients. A commit() is issued by the client whenever a file is closed.
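The contrast between the two strategies can be sketched as follows (illustrative server-side Python; the function names and the disk object are assumptions, not NFS protocol definitions):

    cache = {}   # (file_handle, page_no) -> page data held in server memory

    def handle_write_through(fh, page_no, data, disk):
        """write-through: the page reaches the disk before the write RPC returns."""
        disk.write(fh, page_no, data)
        return "ok"          # the client now knows the page is on disk

    def handle_write_delayed(fh, page_no, data):
        """delayed commit: the page is only cached until commit() arrives."""
        cache[(fh, page_no)] = data
        return "ok"          # data could be lost if the server crashes now

    def handle_commit(fh, disk):
        """commit(): flush all cached pages of the file; the client issues
        this whenever it closes the file."""
        for (f, page_no), data in list(cache.items()):
            if f == fh:
                disk.write(f, page_no, data)
                del cache[(f, page_no)]
        return "committed"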

  26. NFS Optimization - Client Caching • Server caching does nothing to reduce RPC traffic between client and server; further optimization is essential to reduce server load in large networks. • The NFS client module caches the results of read, write, getattr, lookup and readdir operations. • Synchronization of file contents (one-copy semantics) is not guaranteed when two or more clients are sharing the same file.

  27. NFS Optimization - Client Caching (Cont.) • Timestamp-based validity check: reduces inconsistency but doesn't eliminate it. • Validity condition for cache entries at the client: (T - Tc < t) ∨ (Tm_client = Tm_server), where:
    t: freshness guarantee (interval)
    Tc: time when the cache entry was last validated
    Tm: time when the block was last updated at the server
    T: current time
    • t is configurable (per file) but is typically set to 3 seconds for files and 30 seconds for directories. • It remains difficult to write distributed applications that share files with NFS.
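Written as code, the validity check looks roughly like this (a sketch following the slide's notation; obtaining Tm from the server stands in for a getattr() RPC, and the function name is hypothetical):

    import time

    def cache_entry_is_valid(t, Tc, Tm_client, get_server_Tm):
        """Timestamp-based validity check for an NFS client cache entry.

        t             - freshness interval (e.g. 3 s for files, 30 s for dirs)
        Tc            - time when the cache entry was last validated
        Tm_client     - last-modification time recorded with the cached block
        get_server_Tm - callable that fetches Tm from the server (a getattr RPC)
        """
        T = time.time()                 # current time
        if T - Tc < t:
            return True                 # recently validated: assume fresh
        # freshness interval expired: ask the server for the current Tm
        return Tm_client == get_server_Tm()

If the first test succeeds, no network traffic is needed; only when the freshness interval has expired does the client contact the server, and even then the cached block is reused if the modification times still match.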

  28. Other NFS Optimizations • Sun RPC runs over UDP by default (TCP can be used if required). • Uses the UNIX BSD Fast File System with 8-kbyte blocks. • read() and write() calls can be of any size (negotiated between client and server). • The guaranteed freshness interval t is set adaptively for individual files to reduce the getattr() calls needed to update Tm. • File attribute information (including Tm) is piggybacked in replies to all file requests.

  29. NFS Summary • An excellent example of a simple, robust, high-performance distributed service. • Achievement of transparencies: Access: Excellent; the API is the UNIX system call interface for both local and remote files. Location: Not guaranteed but normally achieved; naming of filesystems is controlled by client mount operations, but transparency can be ensured by an appropriate system configuration.

  30. Achievement of transparencies (continued): Concurrency: Limited but adequate for most purposes; when read-write files are shared concurrently between clients, consistency is not perfect. Replication: Limited to read-only file systems; for writable files, SUN Network Information Service (NIS) runs over NFS & is used to replicate essential system files. Failure: Limited but effective; service is suspended if a server fails. Recovery from failures is aided by the simple stateless design.

  31. Achievement of transparencies (continued): Mobility: Hardly achieved; relocation of files is not possible, relocation of filesystems is possible, but requires updates to client configurations. Performance: Good; multiprocessor servers achieve very high performance, but for a single filesystem it's not possible to go beyond the throughput of a multiprocessor server. Scaling: Good; filesystems (file groups) may be subdivided and allocated to separate servers. Ultimately, the performance limit is determined by the load on the server holding the most heavily-used filesystem (file group).

  32. Case Study: Andrew File System • Provides transparent access to remote shared files. • AFS is compatible with NFS but differs markedly from it in design and implementation. • Designed to perform well with larger numbers of active users than other distributed file systems. • Two unusual design characteristics: • Whole-file serving: the entire contents of directories and files are transmitted to client computers by AFS servers. • Whole-file caching: once a copy of a file or its chunk has been transferred to a client computer, it is stored in a cache on the local disk.

  33. Andrew File System Operation • When a client computer issues an open() and there is no current copy of the file in the local cache, the server holding the file is located and a request for a copy is sent. • The copy is stored in the local UNIX file system on the client computer. • Subsequent read, write and other operations are applied to the local copy. • When the client issues a close(), the updated contents and timestamps are sent back to the server. The copy on the local disk is retained for other users on the same workstation.
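A Venus-side sketch of this whole-file open/close flow (illustrative Python; fetch_from_server and store_to_server stand in for the Fetch and Store calls to Vice, and all names are assumptions):

    local_cache = {}   # path -> file contents cached on the workstation disk

    def afs_open(path, fetch_from_server):
        """On open: fetch the whole file from the server if it is not cached."""
        if path not in local_cache:
            local_cache[path] = fetch_from_server(path)   # whole-file transfer
        return bytearray(local_cache[path])               # work on a local copy

    def afs_close(path, working_copy, store_to_server):
        """On close: if the copy was updated, send the contents back to the
        server; the local copy is retained for other users on the workstation."""
        if bytes(working_copy) != local_cache[path]:
            store_to_server(path, bytes(working_copy))
            local_cache[path] = bytes(working_copy)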

  34. AFS Implementation • How does AFS gain control when an open or close system call referring to a file in the shared file space is issued by a client? • How is the server holding the required file located? • What space is allocated for cached files in workstations? • How does AFS ensure that the cached copies are up to date when files are updated by several clients?

  35. Fig. 8.11: Distribution of processes in the Andrew File System. Vice: the server software, which runs as a user-level process in each server computer. Venus: the client software, which runs in each client computer.

  36. Fig. 8.12: File name space seen by clients of AFS in Unix

  37. Fig. 8.13: System call interception in AFS

  38. Fig. 8.14: Implementation of file system calls in AFS

  39. Cache Consistency • When Vice supplies a copy of a file to Venus, it also provides a callback promise. • The callback promise is stored with the cached file on the workstation and has two states: valid and cancelled. • When Venus handles an open() on behalf of a client, it checks the cache. • If the file is found in the cache, its callback-promise token is checked. • If the value is cancelled, a fresh copy of the file must be fetched from the Vice server. • If valid, the cached copy can be opened and used without reference to Vice. • BUT, what if the workstation is restarted after a failure or shutdown?
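A minimal sketch of the open-time check (illustrative Python; the cache structure and function names are assumptions; the break_callback handler mirrors the BreakCallback call listed on the next slide):

    VALID, CANCELLED = "valid", "cancelled"

    cache = {}   # fid -> {"data": ..., "promise": VALID or CANCELLED}

    def venus_open(fid, fetch_from_vice):
        """Handle open() on behalf of a client process (sketch)."""
        entry = cache.get(fid)
        if entry is None or entry["promise"] == CANCELLED:
            # no usable copy: fetch a fresh one; Vice records a new promise
            data = fetch_from_vice(fid)
            entry = {"data": data, "promise": VALID}
            cache[fid] = entry
        # promise is valid: open the cached copy without contacting Vice
        return entry["data"]

    def break_callback(fid):
        """Invoked when Vice sends BreakCallback(fid): cancel the promise."""
        if fid in cache:
            cache[fid]["promise"] = CANCELLED

After a workstation restart, Venus cannot simply trust the promises stored with its cached files, which is the issue raised in the last bullet above.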

  40. Fig. 8.15: Main components of the Vice service interface
    Fetch(fid) -> attr, data: returns the attributes (status) and, optionally, the contents of the file identified by fid, and records a callback promise on it.
    Store(fid, attr, data): updates the attributes and (optionally) the contents of the specified file.
    Create() -> fid: creates a new file and records a callback promise on it.
    Remove(fid): deletes the specified file.
    SetLock(fid, mode): sets a lock on the specified file or directory. The mode of the lock may be shared or exclusive. Locks that are not removed expire after 30 minutes.
    ReleaseLock(fid): unlocks the specified file or directory.
    RemoveCallback(fid): informs the server that a Venus process has flushed a file from its cache.
    BreakCallback(fid): made by a Vice server to a Venus process; cancels the callback promise on the relevant file.

  41. Enhancements and Further Developments • NFS enhancements • AFS enhancements • Improvements in storage organization • New design approaches

  42. Recent Advances in File Services NFS enhancements • WebNFS: the advent of Java and applets led to WebNFS. The NFS server implements a web-like service on a well-known port. Requests use a 'public file handle' and a pathname-capable variant of lookup(). This enables applications to access NFS servers directly, e.g. to read a portion of a large file. • One-copy update semantics (Spritely NFS, NQNFS): these include an open() operation and maintain tables of open files at servers, which are used to prevent multiple writers and to generate callbacks to clients notifying them of updates. Performance was improved by a reduction in getattr() traffic.

  43. Recent Advances in File Services (Cont.) • AFS enhancements: the design of the DCE/DFS distributed file system goes beyond AFS. • Improvements in disk storage organisation: • RAID: improves performance and reliability by striping data redundantly across several disk drives. • Log-structured file storage: updated pages are stored contiguously in memory and committed to disk in large contiguous blocks (~1 Mbyte); file maps are modified whenever an update occurs; garbage collection recovers disk space.

  44. New Design Approaches • Distribute file data across several servers • Exploits high-speed networks (ATM, Gigabit Ethernet) • Layered approach: the lowest level is like a 'distributed virtual disk' • Achieves scalability even for a single heavily used file • 'Serverless' architecture • Exploits processing and disk resources in all available network nodes • Service is distributed at the level of individual files

  45. New Design Approaches (Cont.) • Examples: • xFS is serverless: an experimental implementation demonstrated a substantial performance gain over NFS and AFS. • Frangipani is a highly scalable distributed file system: performance similar to local UNIX file access. • Tiger Video File System. • Peer-to-peer systems: Napster, OceanStore (UCB), Farsite (MSR), Publius (AT&T research).

  46. New Design Approaches (Cont. 2) • Replicated read-write files • High availability • Disconnected working: re-integration after disconnection is a major problem if conflicting updates have occurred • Examples: the Bayou system, the Coda system

  47. Summary • Sun NFS is an excellent example of a distributed service designed to meet many important design requirements • Effective client caching can produce file service performance equal to or better than local file systems • Consistency versus update semantics versus fault tolerance remains an issue • Most client and server failures can be masked • Superior scalability can be achieved with whole-file serving (Andrew FS) or the distributed virtual disk approach

  48. Exercise • Discuss the EIGHT (8) requirements of a distributed file system in detail. • Discuss the cache consistency issue that arises if a workstation is restarted after a failure or shutdown.
