
Outline for Today’s Lecture

  1. Outline for Today’s Lecture
  Administrative:
  Objective:
  • NTFS – continued
  • Journaling FS
  • Distributed File Systems
  • Disconnected File Access
  • Energy Management

  2. NTFS - continued

  3. File Compression
  (a) An example of a 48-block file being compressed to 32 blocks
  (b) The MFT record for the file after compression

  4. File Encryption
  [Diagram: operation of the encrypting file system; the file key K is encrypted with the user's public key and later retrieved by decryption]

  5. Comparisons

  6. Journaling for Meta-data Ops

  7. Metadata Operations
  • Metadata operations modify the structure of the file system
  • Creating, deleting, or renaming files, directories, or special files
  • Data must be written to disk in such a way that the file system can be recovered to a consistent state after a system crash

  8. General Rules of Ordering
  • Never point to a structure before it has been initialized (the i-node must be written before the directory entry)
  • Never re-use a resource before nullifying all previous pointers to it
  • Never reset the old pointer to a live resource before the new pointer has been set (renaming)

  9. Metadata Integrity
  • FFS uses synchronous writes to guarantee the integrity of metadata
  • Any operation modifying multiple pieces of metadata will write its data to disk in a specific order
  • These writes will be blocking
  • Guarantees integrity and durability of metadata updates

  10. Deleting a file
  [Diagram: a directory with entries "abc" -> i-node-1, "def" -> i-node-2, "ghi" -> i-node-3]
  Assume we want to delete file “def”

  11. Deleting a file
  [Diagram: the same directory after i-node-2 has been freed first; entry "def" now points at a deleted i-node ("?")]
  Cannot delete the i-node before the directory entry “def”

  12. Deleting a file
  • Correct sequence is:
  1. Write to disk the directory block containing the deleted directory entry “def”
  2. Write to disk the i-node block containing the deleted i-node
  • Leaves the file system in a consistent state

  13. Creating a file
  [Diagram: a directory with entries "abc" -> i-node-1 and "ghi" -> i-node-3]
  Assume we want to create new file “tuv”

  14. Creating a file
  [Diagram: entry "tuv" added to the directory, but pointing at an i-node that does not yet exist ("?")]
  Cannot write directory entry “tuv” before the i-node

  15. Creating a file
  • Correct sequence is:
  1. Write to disk the i-node block containing the new i-node
  2. Write to disk the directory block containing the new directory entry
  • Leaves the file system in a consistent state
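
  Both orderings fit in a few lines of C. This is a minimal sketch of the ordering rules only, assuming a hypothetical write_block_sync() helper that blocks until the block is durably on disk; it is not the actual FFS code.

    #include <stdio.h>

    /* Hypothetical synchronous write: returns only once the block is on disk. */
    static void write_block_sync(const char *what) {
        printf("blocking write: %s\n", what);
    }

    /* Create: initialize the i-node before any directory entry points to it. */
    static void create_file(void) {
        write_block_sync("i-node block containing the new i-node");
        write_block_sync("directory block containing the new entry");
    }

    /* Delete: remove the directory entry before freeing the i-node,
     * so no entry ever points to a dead i-node. */
    static void delete_file(void) {
        write_block_sync("directory block with entry \"def\" removed");
        write_block_sync("i-node block with i-node-2 freed");
    }

    int main(void) { create_file(); delete_file(); return 0; }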

  16. Synchronous Updates
  • Used by FFS to guarantee consistency of metadata:
  • All metadata updates are done through blocking writes
  • Increases the cost of metadata updates
  • Can significantly impact the performance of the whole file system

  17. Journaling
  • Journaling systems maintain an auxiliary log that records all metadata operations
  • Write-ahead logging ensures that the log is written to disk before any blocks containing data modified by the corresponding operations
  • After a crash, the system can replay the log to bring the file system to a consistent state
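
  As a concrete illustration, here is a minimal sketch of the write-ahead rule; the record format and helper names are assumptions for exposition, not any particular journaling implementation.

    #include <stdio.h>

    struct log_record { int op_id; const char *desc; };

    static void append_log_record(struct log_record r) {
        printf("log: record %d (%s) appended\n", r.op_id, r.desc);
    }
    static void flush_log(void) {
        printf("log: forced to disk\n");   /* e.g., fsync on the log file */
    }
    static void write_metadata_block(const char *b) {
        printf("fs: metadata block(s) %s written in place\n", b);
    }

    int main(void) {
        struct log_record r = { 1, "create file \"tuv\"" };
        append_log_record(r);
        flush_log();                 /* the log reaches disk first...      */
        write_metadata_block("i-node + directory for \"tuv\""); /* ...then the blocks */
        return 0;
    }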

  18. Journaling
  • Log writes are performed in addition to the regular writes
  • Journaling systems incur log write overhead, but:
  • Log writes can be performed efficiently because they are sequential
  • Metadata blocks do not need to be written back after each update

  19. Journaling
  • Journaling systems can provide:
  • the same durability semantics as FFS, if the log is forced to disk after each metadata operation
  • laxer semantics, if log writes are buffered until entire buffers are full

  20. Implementation with log as file
  • Maintains a circular log in a pre-allocated file in the FFS (about 1% of file system size)
  • The buffer manager uses a write-ahead logging protocol to ensure proper synchronization between regular file data and the log
  • The buffer header of each modified block in the cache identifies the first and last log entries describing an update to the block

  21. Implementation with log as file
  • The system uses:
  • the first log entry to decide which log entries can be purged from the log
  • the last log entry to ensure that all relevant log entries are written to disk before the block is flushed from the cache
  • Maintains its log asynchronously
  • Maintains file system integrity, but does not guarantee durability of updates
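
  A minimal sketch of this bookkeeping, with assumed field names and a log sequence number (LSN) type; the real implementation's types will differ.

    #include <stdint.h>

    typedef uint64_t lsn_t;   /* position of a record in the circular log */

    struct buf_header {
        void *block;        /* the cached metadata block                     */
        lsn_t first_lsn;    /* oldest log entry describing an update to this
                               block: the log tail can only be purged up to
                               the minimum first_lsn over all dirty buffers  */
        lsn_t last_lsn;     /* newest such entry: the log must be flushed at
                               least this far before the block may be
                               written back (the write-ahead rule)           */
    };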

  22. Data structures for log
  [Diagram: a circular log file, plus cached buffer headers each holding first/last log-entry pointers into it; the superblock records the log start]

  23. Recovery
  • The superblock has the address of the last checkpoint
  • First recover the log itself
  • Then read the log from its logical end (backward pass) and undo all aborted operations
  • Do a forward pass and reapply all updates that have not yet been written to disk
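
  The two passes might look like the following sketch, run over an in-memory copy of the recovered log; the entry format and the undo/redo actions are hypothetical stand-ins.

    #include <stdio.h>
    #include <stdbool.h>

    struct log_entry {
        const char *desc;
        bool committed;   /* did the operation complete in the log?   */
        bool on_disk;     /* was the update already written in place? */
    };

    void recover(struct log_entry *log, int n) {
        /* backward pass: from the logical end, undo aborted operations */
        for (int i = n - 1; i >= 0; i--)
            if (!log[i].committed)
                printf("undo: %s\n", log[i].desc);
        /* forward pass: reapply committed updates not yet on disk */
        for (int i = 0; i < n; i++)
            if (log[i].committed && !log[i].on_disk)
                printf("redo: %s\n", log[i].desc);
    }

    int main(void) {
        struct log_entry log[] = {
            { "create file tuv", true,  false },
            { "delete file def", false, false },  /* aborted by the crash */
        };
        recover(log, 2);
        return 0;
    }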

  24. Other Approaches
  • Using a non-volatile cache (Network Appliance)
  • Ultimate solution: can keep data in cache forever
  • Additional cost of NVRAM
  • Simulating NVRAM with:
  • uninterruptible power supplies
  • hardware-protected RAM (Rio): the cache is marked read-only most of the time

  25. Other Approaches
  • Log-structured file systems
  • It is not always possible to write all related metadata in a single disk transfer
  • Sprite-LFS adds small log entries to the beginning of segments
  • BSD-LFS makes segments temporary until all metadata necessary to ensure the recoverability of the file system are on disk

  26. Distributed File Systems

  27. Distributed File Systems
  [Diagram: several clients and servers connected by a network]
  • Naming
  • Location transparency/independence
  • Caching
  • Consistency
  • Replication
  • Availability and updates

  28. Naming
  [Diagram: her local directory tree and his local directory tree; after an NFS mount, a remote subtree (his for_export directory) appears under the local mount point /usr/m_pt]
  • \\His\d\pictures\castle.jpg
  • Not location transparent: both the machine and the drive are embedded in the name
  • NFS mounting:
  • a remote directory is mounted over a local directory in the local naming hierarchy
  • /usr/m_pt/A
  • No global view

  29. Global Name Space
  Example: Andrew File System
  [Diagram: the root directory / contains afs, tmp, bin, lib; tmp, bin, and lib hold local files, while the shared files under /afs look identical to all clients]

  30. Hints
  • A valuable distributed systems design technique that can be illustrated in naming.
  • Definition: information that is not guaranteed to be correct. If it is, it can improve performance; if not, things will still work OK. Must be able to validate the information.
  • Example: Sprite prefix tables

  31. Prefix Tables
  [Diagram: a directory tree spanning two servers, with prefix table entries:
  /A/m_pt1 -> blue server
  /A/m_pt1/usr/B -> pink server
  /A/m_pt1/usr/m_pt2 -> pink server
  A lookup of /A/m_pt1/usr/m_pt2/stuff.below goes to the server with the longest matching prefix]
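
  A minimal sketch of a Sprite-style prefix-table lookup under these assumptions (placeholder server names, and ignoring the component-boundary check and hint revalidation a real client would do):

    #include <stdio.h>
    #include <string.h>

    struct prefix_entry { const char *prefix; const char *server; };

    /* table contents mirror the diagram above */
    static struct prefix_entry table[] = {
        { "/A/m_pt1",           "blue" },
        { "/A/m_pt1/usr/B",     "pink" },
        { "/A/m_pt1/usr/m_pt2", "pink" },
    };

    /* Longest-prefix match; since the table is only a hint, a real
     * client would validate the answer with the server and fall back
     * to a slower lookup if the hint turned out to be stale. */
    static const char *lookup_server(const char *path) {
        const char *best = "unknown";
        size_t best_len = 0;
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
            size_t len = strlen(table[i].prefix);
            if (strncmp(path, table[i].prefix, len) == 0 && len > best_len) {
                best = table[i].server;
                best_len = len;
            }
        }
        return best;
    }

    int main(void) {
        /* matches the longest prefix, so prints "pink" */
        printf("%s\n", lookup_server("/A/m_pt1/usr/m_pt2/stuff.below"));
        return 0;
    }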

  32. VFS: the Filesystem Switch
  [Diagram: user space sits above the syscall layer (file, uio, etc.); the Virtual File System (VFS) dispatches to FFS, LFS, NFS, *FS, etc., which sit above the device drivers and the network protocol stack (TCP/IP)]
  Sun Microsystems introduced the virtual file system framework in 1985 to accommodate the Network File System cleanly.
  • VFS allows diverse specific file systems to coexist in a file tree, isolating all FS dependencies in pluggable filesystem modules.
  • VFS was an internal kernel restructuring with no effect on the syscall interface.
  • It incorporates object-oriented concepts: a generic procedural interface with multiple implementations.
  • Other abstract interfaces in the kernel: device drivers, file objects, executable files, memory objects.

  33. Vnodes
  [Diagram: the syscall layer above a pool of vnodes (including a free list), with NFS and UFS below supplying the filesystem-specific structs]
  In the VFS framework, every file or directory in active use is represented by a vnode object in kernel memory.
  • Each vnode has a standard file attributes struct.
  • The generic vnode points at a filesystem-specific struct (e.g., inode, rnode), seen only by the filesystem.
  • Active vnodes are reference-counted by the structures that hold pointers to them, e.g., the system open file table.
  • Vnode operations are macros that vector to filesystem-specific procedures.
  • Each specific file system maintains a hash of its resident vnodes.
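
  The core idea can be sketched in a few lines of C; these names are illustrative, not the actual BSD or Solaris declarations:

    struct vattr { int type, mode, nlink; long size; /* ... */ };

    struct vnodeops;  /* forward declaration */

    struct vnode {
        int refcount;                /* held by the open file table, caches, ... */
        struct vattr attr;           /* standard attributes                      */
        const struct vnodeops *ops;  /* per-filesystem operation vector          */
        void *fs_data;               /* FS-specific struct (inode, rnode, ...)   */
    };

    struct vnodeops {
        int (*vop_lookup)(struct vnode *dvp, struct vnode **vpp, const char *name);
        int (*vop_getattr)(struct vnode *vp, struct vattr *va);
        /* ... create, remove, readdir, etc. */
    };

    /* the "macro that vectors to a filesystem-specific procedure" pattern */
    #define VOP_LOOKUP(dvp, vpp, name) \
        ((dvp)->ops->vop_lookup((dvp), (vpp), (name)))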

  34. Vnode Operations and Attributes
  vnode/file attributes (vattr or fattr):
  • type (VREG, VDIR, VLNK, etc.)
  • mode (9+ bits of permissions)
  • nlink (hard link count)
  • owner user ID
  • owner group ID
  • filesystem ID
  • unique file ID
  • file size (bytes and blocks)
  • access time
  • modify time
  • generation number
  directories only:
  • vop_lookup (OUT vpp, name)
  • vop_create (OUT vpp, name, vattr)
  • vop_remove (vp, name)
  • vop_link (vp, name)
  • vop_rename (vp, name, tdvp, tvp, name)
  • vop_mkdir (OUT vpp, name, vattr)
  • vop_rmdir (vp, name)
  • vop_readdir (uio, cookie)
  • vop_symlink (OUT vpp, name, vattr, contents)
  • vop_readlink (uio)
  files only:
  • vop_getpages (page**, count, offset)
  • vop_putpages (page**, count, sync, offset)
  • vop_fsync ()
  generic operations:
  • vop_getattr (vattr)
  • vop_setattr (vattr)
  • vhold()
  • vholdrele()

  35. Pathname Traversal
  • When a pathname is passed as an argument to a system call, the syscall layer must “convert it to a vnode”.
  • Pathname traversal is a sequence of vop_lookup calls to descend the tree to the named file or directory.
  Example: open(“/tmp/zot”)
  vp = get vnode for / (rootdir);
  vp->vop_lookup(&cvp, “tmp”);
  vp = cvp;
  vp->vop_lookup(&cvp, “zot”);
  Issues:
  1. crossing mount points
  2. obtaining the root vnode (or current directory)
  3. finding resident vnodes in memory
  4. caching name -> vnode translations
  5. symbolic (soft) links
  6. disk implementation of directories
  7. locking/referencing to handle races with name create and delete operations

  36. Example: Network File System (NFS)
  [Diagram: on the client, user programs call into the syscall layer; VFS dispatches to the NFS client (or to a local UFS); requests cross the network to the NFS server, whose syscall layer/VFS dispatches to its UFS]

  37. NFS Protocol
  NFS is a network protocol layered above TCP/IP.
  • Original implementations (and most today) use UDP datagram transport for low overhead.
  • The maximum IP datagram size was increased to match the FS block size, to allow send/receive of entire file blocks.
  • Some newer implementations use TCP as a transport.
  The NFS protocol is a set of message formats and types.
  • The client issues a request message for a service operation.
  • The server performs the requested operation and returns a reply message with status and (perhaps) the requested data.
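
  As a rough illustration of the request/reply shape only; real NFS messages are defined in XDR (RFC 1094 for NFSv2), so these C structs are assumptions for exposition, not the wire format.

    #include <stdint.h>

    enum nfs_op { OP_LOOKUP, OP_READ, OP_WRITE, OP_GETATTR /* ... */ };

    struct nfs_request {
        uint32_t xid;          /* transaction ID, matches reply to request */
        enum nfs_op op;        /* which service operation                  */
        uint8_t  fhandle[32];  /* which file/directory (see next slide)    */
        uint64_t offset;       /* operation arguments, e.g. for READ       */
        uint32_t count;
    };

    struct nfs_reply {
        uint32_t xid;          /* echoed transaction ID  */
        uint32_t status;       /* OK or an error code    */
        uint8_t  data[];       /* requested data, if any */
    };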

  38. File Handles
  Question: how does the client tell the server which file or directory the operation applies to?
  • Similarly, how does the server return the result of a lookup?
  • More generally, how do we pass a pointer or an object reference as an argument/result of an RPC call?
  In NFS, the reference is a file handle or fhandle, a 32-byte token/ticket whose value is determined by the server.
  • It includes all the information needed to identify the file/object on the server and get a pointer to it quickly.
  [Diagram: fhandle contents: volume ID, inode #, generation #]
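
  One plausible layout for the opaque 32 bytes, purely as a sketch; the actual layout is server-private, and clients must treat the handle as an uninterpreted token.

    #include <stdint.h>

    struct fhandle {
        uint32_t volume_id;   /* which exported volume/filesystem          */
        uint32_t inode_num;   /* which file within it                      */
        uint32_t generation;  /* detects a reused inode number: a stale
                                 handle fails if the generation changed    */
        uint8_t  opaque[20];  /* server-private data/padding to 32 bytes   */
    };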

  39. NFS: From Concept to Implementation
  Now that we understand the basics, how do we make it work in a real system?
  • How do we make it fast?
  • Answer: caching, read-ahead, and write-behind.
  • How do we make it reliable? What if a message is dropped? What if the server crashes?
  • Answer: the client retransmits the request until it receives a response.
  • How do we preserve file system semantics in the presence of failures and/or sharing by multiple clients?
  • Answer: well, we don’t, at least not completely.
  • What about security and access control?

  40. Distributed File Systems
  [Diagram: several clients and servers connected by a network]
  • Naming
  • Location transparency/independence
  • Caching
  • Consistency
  • Replication
  • Availability and updates

  41. Caching was “The Answer”
  [Diagram: a process above the memory-resident file cache, which sits in front of the disk]
  • Avoid the disk for as many file operations as possible.
  • The cache acts as a filter for the requests seen by the disk; reads are served best.
  • Delayed writeback will avoid going to disk at all for temp files.

  42. Caching in Distributed F.S.
  [Diagram: several clients and servers connected by a network]
  • Location of the cache on the client: disk or memory
  • Update policy:
  • write-through
  • delayed writeback
  • write-on-close
  • Consistency:
  • the client does a validity check, contacting the server
  • the server issues call-backs to clients

  43. File Cache Consistency
  Caching is a key technique in distributed systems.
  The cache consistency problem: cached data may become stale if the data is updated elsewhere in the network.
  Solutions:
  • Timestamp invalidation (NFS): timestamp each cache entry, and periodically query the server: “has this file changed since time t?”; invalidate the cache entry if stale.
  • Callback invalidation (AFS): request notification (a callback) from the server if the file changes; invalidate the cache entry on callback.
  • Leases (NQ-NFS) [Gray & Cheriton 89]

  44. Sun NFS Cache Consistency
  [Diagram: on open, a client compares its cached timestamp ti with the server's timestamp tj (ti == tj?); writes are flushed to the server by close]
  • The server is stateless
  • Requests are self-contained.
  • Blocks are transferred and cached in memory.
  • The timestamp of the last known modification is kept with the cached file and compared with the “true” timestamp at the server on Open. (Good for an interval.)
  • Updates are delayed, but flushed before Close ends.
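
  A minimal sketch of this open-time check; the helper names, the cache-entry layout, and the 30-second revalidation interval are assumptions, not the Sun implementation.

    #include <time.h>
    #include <stdbool.h>

    struct cache_entry {
        time_t mtime;          /* last known modification time (ti)    */
        time_t validated_at;   /* when we last checked with the server */
        /* cached blocks elided */
    };

    time_t server_get_mtime(const char *path);  /* hypothetical RPC returning tj */

    #define TTL_SECONDS 30   /* "good for an interval": assumed value */

    /* On open (or once the interval expires), compare timestamps and
     * invalidate the cached copy if the server's file is newer. */
    bool cache_still_valid(struct cache_entry *e, const char *path) {
        if (time(NULL) - e->validated_at < TTL_SECONDS)
            return true;                     /* trust it for an interval */
        time_t tj = server_get_mtime(path);  /* ask the server           */
        e->validated_at = time(NULL);
        if (tj != e->mtime) {
            e->mtime = tj;
            return false;                    /* stale: refetch the blocks */
        }
        return true;
    }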

  45. Cache Consistency for the Web
  [Diagram: clients on a LAN behind a proxy cache, which reaches the Web server across the network]
  • Time-to-Live (TTL) fields: the HTTP “Expires” header
  • Client polling: HTTP “If-Modified-Since” request headers
  • Polling frequency? Possibly adaptive (e.g., based on the age of the object and its assumed stability)

  46. AFS Cache Consistency
  [Diagram: the server records the copy set {c0, c1}; when c1 closes a modified file, the server sends a callback to c0 over the network]
  • The server keeps state on all clients holding copies (the copy set)
  • Callbacks are sent when cached data are about to become stale
  • Large units (whole files or 64K portions)
  • Updates are propagated upon close
  • Cache on local disk & memory
  • If a client crashes, revalidation on recovery (possibility of a lost callback)

  47. NQ-NFS Leases
  In NQ-NFS, a client obtains a lease on the file that permits the client’s desired read/write activity. “A lease is a ticket permitting an activity; the lease is valid until some expiration time.”
  • A read-caching lease allows the client to cache clean data. Guarantee: no other client is modifying the file.
  • A write-caching lease allows the client to buffer modified data for the file. Guarantee: no other client has the file cached.
  • Leases may be revoked by the server if another client requests a conflicting operation (the server sends an eviction notice).
  • Since leases expire, losing the “state” of leases at the server is OK.
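
  A minimal sketch of server-side lease conflict checking under these rules; the types and policy details are illustrative, not the NQ-NFS code.

    #include <time.h>
    #include <stdbool.h>

    enum lease_type { LEASE_READ, LEASE_WRITE };

    struct lease {
        int client_id;
        enum lease_type type;
        time_t expires;   /* the key property: every lease times out */
    };

    /* Two leases conflict unless both are read-caching leases:
     * a write lease means no other client may even cache the file. */
    static bool conflicts(const struct lease *held, int client, enum lease_type t) {
        if (held->client_id == client) return false;   /* same client    */
        if (time(NULL) >= held->expires) return false; /* already expired */
        return (held->type == LEASE_WRITE || t == LEASE_WRITE);
    }

    /* On a conflict the server would send an eviction notice and wait
     * for the holder to flush, or simply wait for expiration. This is
     * why losing lease state in a crash is safe: after one maximum
     * lease term, no outstanding lease can still be valid. */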

  48. Coda – Using Caching to Handle Disconnected Access
  • A single location-transparent UNIX FS.
  • Scalability: coarse granularity (whole-file caching, volume management)
  • First-class (server) replication and client caching (second-class replication)
  • Optimistic replication & consistency maintenance.
  • Designed for disconnected operation by mobile computing clients

  49. Explicit First-class Replication
  • A file name maps to a set of replicas, one of which will be used to satisfy a request
  • Goal: availability
  • Update strategy:
  • atomic updates: all or none
  • primary copy approach
  • voting schemes
  • optimistic updates, then detection of conflicts

  50. High availability
  • Conflicting updates are the potential problem, requiring detection and resolution.
  • Conflicts can be avoided by holding shared or exclusive locks; but how do we arrange that when disconnection is involuntary?
  • Leases [Gray, SOSP 89] put a time bound on locks, but what happens at expiration?
  • Optimistic vs. pessimistic
