
NANDFS: A Flexible Flash File System for RAM-Constrained Systems



  1. NANDFS: A Flexible Flash File System for RAM-Constrained Systems Aviad Zuck, Ohad Barzilay and Sivan Toledo

  2. Overview • Introduction + motivation • Flash properties • Big Ideas • Going into details • Software engineering, tests and experiments • General flash issues

  3. Flash is Everywhere

  4. Resilient to vibrations and extreme conditions • Up to 100 times faster than rotating disks for random access

  5. What’s missing?

  6. Sequential access performance • And price: “Today, consumer-grade SSD costs from $2 to $3.45 per gigabyte, hard drives about $0.38 per gigabyte…” Computerworld.com, 27.8.2008* *http://www.computerworld.com/s/article/print/9112065/Solid_state_disk_lackluster_for_laptops_PCs

  7. Two Ways of Flash Management • Conventional file systems over a flash translation layer (FTL): NTFS, FAT, ext3, … • Native flash file systems: JFFS, YAFFS, NANDFS, …

  8. So Why NANDFS?

  9. NANDFS Also Has: • File locking • Transactions • Competitive performance and graceful degradation

  10. How is it Done, in a Nutshell? (The explanation does not fit in a nutshell) • Complex data structures • A new garbage collection mechanism • And much more… Let’s elaborate

  11. Flash Properties

  12. Flash memory is divided into pages – 0.5KB, 2KB, 4KB • A page consists of data and metadata areas – 16B of metadata for every 512B of data • Pages are arranged in units – 32/64/128 pages per unit • Metadata contains a unit validity indicator, an ECC, and file system metadata
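
A minimal C sketch of this layout, assuming a 2KB page with a 64B spare area; field names and sizes are illustrative, not NANDFS's actual on-flash format:

    #include <stdint.h>

    #define DATA_BYTES 2048

    struct nand_spare {                 /* one 16B spare region per 512B */
        uint8_t bad_unit_marker;        /* unit validity indicator */
        uint8_t ecc[6];                 /* error-correcting code   */
        uint8_t fs_metadata[9];         /* file system metadata    */
    };

    struct nand_page {
        uint8_t           data[DATA_BYTES];
        struct nand_spare spare[DATA_BYTES / 512];  /* 4 x 16B = 64B */
    };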

  13. [Figure]

  14. Erasures & Programming • Page bits are initialized to 1’s • Writing clears bits (1 to 0) • Bits are set back to 1 only by erasing an entire unit (an “erase unit”) • An erase unit has limited endurance
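
A hedged simulation of these rules (hypothetical helpers, not real driver code): programming can only clear bits, so any write that would need a 0-to-1 transition must fail until the whole erase unit is erased.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Programming can only clear bits (1 -> 0). */
    int nand_program(uint8_t *page, const uint8_t *new_data, size_t n) {
        for (size_t i = 0; i < n; i++) {
            if (new_data[i] & ~page[i])   /* would need 0 -> 1: illegal */
                return -1;
            page[i] &= new_data[i];
        }
        return 0;
    }

    /* Erasing the whole unit sets every bit back to 1. */
    void nand_erase_unit(uint8_t *unit, size_t unit_bytes) {
        memset(unit, 0xFF, unit_bytes);
    }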

  15. The Design of NANDFS – The “Big” Ideas

  16. Log-structured design • Overwrite-in-place is not permitted in flash • Caching avoids the rippling effect (each data update would otherwise force rewriting the pointer pages above it)

  17. Modular Flash File System • Modularity is good. But… • We need a block device API designed for flash • We call our “block device” the sequencing layer

  18. High-level Design • A 2-layer structure: • File System Layer – a transactional file system with a Unix-like file structure • Sequencing Layer – manages the allocation of immutable page-sized chunks of data; assists in crash recovery and atomicity

  19. The Sequencing Layer

  20. Divides flash into fixed-size physical units called slots • Slots are assigned to segments – logical units of the same size • Each segment maps to one matching physical slot, except one “active segment” which is mapped to two slots.

  21. Block access • Segment ~> slot mapping table in RAM • A block is referenced by a logical handle <segment_id, offset_in_segment> • Address translation • Example: logical address <0,2> ~> physical address 8
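
A minimal C sketch of the translation (illustrative names, not NANDFS's actual code), assuming 3 pages per slot so that mapping segment 0 to slot 2 reproduces the slide's example:

    #include <stdint.h>

    #define PAGES_PER_SLOT 3            /* assumed for the example */
    #define NSEGMENTS      8            /* assumed */

    static uint16_t seg_to_slot[NSEGMENTS];   /* RAM mapping table */

    uint32_t logical_to_physical(uint16_t segment_id, uint16_t offset) {
        return (uint32_t)seg_to_slot[segment_id] * PAGES_PER_SLOT + offset;
    }

    /* seg_to_slot[0] = 2;  logical_to_physical(0, 2) == 2*3 + 2 == 8 */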

  22. Where’s the innovation? • Logical address mapping is not a new idea: • Logical Disk (1993), YAFFS, JFFS, and more • Many FTLs use some logical address mapping • Full mapping ~> expensive • Coarse-grained mapping ~> fragmentation, performance degradation, costly merges

  23. * DFTL: A Flash Translation Layer Employing Demand-based Selective Caching of Page-level Address Mappings (2009)

  24. The difference in NANDFS • NANDFS uses coarse-grained mapping, not full mapping • Less RAM for page mapping (more RAM flexibility) • Collect garbage while preserving validity of pointers to non-obsolete blocks • Appropriate for flash, not for magnetic disks

  25. Block allocation • NANDFS is log-structured • New blocks are allocated sequentially from the active segment • In a log-structured system blocks are never re-written in place • File pointer structures need to be updated to reflect the new location of the data

  26. Garbage collection • TRIM – pages with obsolete data are marked with a special “obsolete flag” • The sequencing layer maintains counters of obsolete pages in every segment • Problem: EUs contain a mixture of valid and obsolete pages, so we can’t simply collect entire EUs • Solution: garbage collection is performed together with allocation

  27. Reclamation unit = segment • The sequencing layer chooses a segment to reclaim and allocates a second (fresh) slot to it • Obsolete pages are reclaimed while non-obsolete pages are copied • NOTICE – logical addresses are preserved, although the physical translation has changed

  28. Finally, when the new slot is full, the old slot is erased • It can now be used to reclaim another segment • We choose the segment with the highest obsolete counter as the new “active segment” • This would not go down well on rotating disks – too many seek operations (see the sketch below)
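
A sketch of allocation interleaved with reclamation, continuing the earlier translation sketch's names; the helpers below are assumptions, not NANDFS's actual API. Live pages are copied to the same offset in the fresh slot, which is how logical <segment, offset> addresses survive reclamation:

    /* assumed helpers */
    int      is_obsolete(uint16_t slot, uint16_t off);
    void     copy_page(uint16_t from_slot, uint16_t off, uint16_t to_slot);
    void     erase_slot(uint16_t slot);
    void     pick_next_active_segment(void);   /* max obsolete counter */
    uint32_t page_addr(uint16_t slot, uint16_t off);

    static uint16_t old_slot, new_slot, cursor;   /* active-segment state */

    uint32_t alloc_page(void) {
        for (;;) {
            uint16_t off = cursor++;
            if (off >= PAGES_PER_SLOT) {          /* fresh slot is full   */
                erase_slot(old_slot);             /* reclaim the old slot */
                pick_next_active_segment();
                continue;
            }
            if (is_obsolete(old_slot, off))       /* reclaimed space is   */
                return page_addr(new_slot, off);  /* handed to new data   */
            copy_page(old_slot, off, new_slot);   /* same offset, so      */
        }                                         /* pointers stay valid  */
    }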

  29. Sequencing Layer Recovery • When a new slot is allocated to a segment, a segment header is written in the slot’s first page • The header contains: • An incremented segment sequence number • Segment number • Segment type • Checkpoint (further details later)
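
An illustrative header layout (hypothetical field names and sizes):

    #include <stdint.h>

    struct segment_header {
        uint32_t sequence_num;   /* incremented on every slot allocation  */
        uint16_t segment_num;    /* which logical segment this slot holds */
        uint8_t  segment_type;
        /* checkpoint data follows on flash (detailed later) */
    };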

  30. On mounting, the header of every slot is read • The segment-to-slot map can be reconstructed using only the data from the headers • Other systems (with complete mapping) need to scan the entire flash
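
A sketch of the reconstruction, reusing the earlier names (read_header is an assumed helper): each segment is mapped to the slot carrying its highest sequence number, at one page read per slot.

    void read_header(uint16_t slot, struct segment_header *h);  /* assumed */

    void rebuild_map(uint16_t nslots) {
        uint32_t newest[NSEGMENTS] = {0};
        for (uint16_t slot = 0; slot < nslots; slot++) {
            struct segment_header h;
            read_header(slot, &h);            /* first page of the slot */
            if (h.sequence_num >= newest[h.segment_num]) {
                newest[h.segment_num] = h.sequence_num;
                seg_to_slot[h.segment_num] = slot;
            }
        }
    }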

  31. Bad EU Management • Each flash memory chip contains some bad EUs • Some slots contain more valid EUs than others • Solution – some slots are set aside as a bank of reserve EUs

  32. Brief Summary

  33. The Design of NANDFS – More Ideas

  34. Wear Leveling • Writes and erases should be spread evenly over all EUs • Problem: some slots may be reclaimed rarely • Solution: perform a periodic random wear-leveling process • Choose a random slot and copy it to a fresh slot • Incurs only a low overhead • Guarantees near-optimal expected endurance (Ben-Aroya and Toledo, 2006) • Technique widely used (YAFFS, JFFS)
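
A sketch of one wear-leveling step, reusing the earlier names; random_segment, take_free_slot, and copy_live_pages are assumed helpers:

    uint16_t random_segment(void);                     /* assumed helpers */
    uint16_t take_free_slot(void);
    void     copy_live_pages(uint16_t from, uint16_t to);

    void wear_level_once(void) {
        uint16_t seg   = random_segment();  /* even a rarely reclaimed one */
        uint16_t old   = seg_to_slot[seg];
        uint16_t fresh = take_free_slot();
        copy_live_pages(old, fresh);        /* offsets preserved           */
        seg_to_slot[seg] = fresh;           /* logical addresses intact    */
        erase_slot(old);                    /* the cold slot finally wears */
    }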

  35. Transactions • File system operations are atomic and transactional • Marking pages as obsolete is not straightforward • Simple transaction – a block re-write • After the rewrite, the old data block should be marked obsolete • If we mark it and the transaction aborts before completing, the old data must remain valid • But once a page is marked obsolete, the mark cannot be undone

  36. Solution: perform the valid-to-obsolete transition (VOT) AFTER the transaction commits • Write VOT records to flash in dedicated pages • On commit, use the VOT records to mark pages as obsolete • Maintain an on-flash linked list of all pages written in a specific transaction • Keep in RAM a pointer to the last page written in each transaction • On abort, mark all pages written by the transaction as obsolete
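
A sketch of the commit and abort paths; the record layout and helper names are assumptions:

    #include <stdint.h>

    #define NIL_ADDR 0xFFFFFFFFu

    struct vot_record {          /* one valid-to-obsolete transition  */
        uint16_t segment_id;     /* logical address of the page whose */
        uint16_t offset;         /* old copy is now obsolete          */
    };

    void     mark_obsolete(uint16_t seg, uint16_t off);   /* assumed */
    void     mark_obsolete_addr(uint32_t addr);           /* assumed */
    uint32_t prev_in_tx_list(uint32_t addr);              /* assumed */

    /* Commit: only now is it safe to apply the irreversible marks. */
    void tx_commit(const struct vot_record *recs, int n) {
        for (int i = 0; i < n; i++)
            mark_obsolete(recs[i].segment_id, recs[i].offset);
    }

    /* Abort: old data was never marked, so it stays valid; instead walk
     * the on-flash list of pages this transaction wrote (starting from
     * the RAM pointer to its last page) and mark those obsolete. */
    void tx_abort(uint32_t last_written) {
        for (uint32_t a = last_written; a != NIL_ADDR; a = prev_in_tx_list(a))
            mark_obsolete_addr(a);
    }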

  37. [Figure]

  38. Checkpoints • A snapshot of the system state • Ensures returning to a stable state following a crash • A checkpoint is written: • As part of a segment header • Whenever a transaction commits • Structure: • Obsolete counters array • Pointer to the last-written block of the committed transaction • Pointers to the last-written blocks of all on-going transactions • Pointer to the root inode
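
An illustrative checkpoint layout, reusing the earlier NSEGMENTS define; the field names and the MAX_TX bound are assumptions:

    #define MAX_TX 4     /* assumed bound on concurrent transactions */

    struct checkpoint {
        uint32_t obsolete_count[NSEGMENTS]; /* obsolete counters array      */
        uint32_t committed_last;            /* last block of committed tx   */
        uint32_t ongoing_last[MAX_TX];      /* last blocks of on-going txs  */
        uint32_t root_inode_addr;           /* logical addr of root inode   */
    };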

  39. Simple Example

  40. Finding the Last Checkpoint • At any given time there is only one valid checkpoint in flash • On mounting: • Locate the last allocated slot (using its sequence #) • Perform a binary search to see whether a later checkpoint exists in the slot • Abort all other transactions • Truncate all pages written after the checkpoint • Finish the transaction that was committed
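
A sketch of the search; it works because pages within a slot are written sequentially, so the written pages form a prefix and the still-erased pages a suffix. Helper names are assumptions:

    uint16_t slot_with_max_sequence(void);                  /* assumed */
    int      page_is_written(uint16_t slot, uint16_t off);  /* assumed */
    uint32_t latest_checkpoint_before(uint16_t slot, uint16_t off);

    uint32_t find_last_checkpoint(void) {
        uint16_t slot = slot_with_max_sequence();  /* last allocated slot */
        uint16_t lo = 0, hi = PAGES_PER_SLOT;
        while (lo < hi) {                  /* binary search for the end */
            uint16_t mid = (lo + hi) / 2;  /* of the written prefix     */
            if (page_is_written(slot, mid)) lo = mid + 1;
            else                            hi = mid;
        }
        return latest_checkpoint_before(slot, lo);
    }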

  41. File System Layer

  42. Files are represented by inode trees • File metadata • Direct pointers to data pages • Indirect pointers, etc. • All pointers are logical pointers • Regular files are not permitted to be sparse

  43. Root file and directory inodes may be sparse • A hole is indicated by a special flag
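
An illustrative inode layout (hypothetical field names and counts); all pointers hold logical <segment, offset> addresses:

    #include <stdint.h>

    #define N_DIRECT 8    /* assumed */

    struct nandfs_inode {
        uint32_t size;                /* file metadata                  */
        uint16_t mode, nlinks;
        uint32_t direct[N_DIRECT];    /* logical pointers to data pages */
        uint32_t indirect;            /* points to a page of pointers   */
        uint32_t double_indirect;     /* etc.                           */
    };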

  44. The Root File • Array of inodes

  45. When a file is deleted, a page-size hole is created • When creating a file, a hole can easily be located • If no hole exists, allocate a new inode by extending the root file

  46. Directory Structure • A directory = an array of directory entries: • inode number • Length • UTF-8 file name • Direntry length <= 256 bytes • Direntries are packed into chunks without gaps

  47. chunk size < (page size – direntry size) ~> the directory contains a “hole” • Allocating a new direntry requires finding a hole • Direntry lookup is sequential
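
An illustrative direntry layout (hypothetical field names); entries are variable-length, which is why they pack into chunks without gaps:

    #include <stdint.h>

    struct direntry {
        uint32_t inode_num;
        uint16_t len;        /* total entry length, <= 256 bytes */
        char     name[];     /* UTF-8 file name                  */
    };

    /* The next entry in a chunk starts len bytes after this one. */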

  48. System Calls • Most system calls (creat, unlink, mkdir…) are atomic transactions • A transaction that handles a write() commits only on close() • System calls that modify a single file can be bundled into a single transaction • 5 consecutive calls to write() + close() on a single file are treated as a single transaction • Overhead of transaction commit = (actual physical page writes) / (minimum possible page writes) ≈ 1
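
For example, this POSIX-style sequence (a hypothetical usage sketch, assuming the file lives on a NANDFS volume) would be bundled into one transaction that commits at close():

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        char buf[64] = {0};
        int fd = open("log.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        for (int i = 0; i < 5; i++)
            write(fd, buf, sizeof buf);  /* all five writes join one tx  */
        close(fd);                       /* the transaction commits here */
        return 0;
    }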
