
Lecture 18 Mass Storage Devices: RAID, Data Library, MSS


Presentation Transcript


  1. Lecture 18: Mass Storage Devices: RAID, Data Library, MSS

  2. Review: Improving Bandwidth of Secondary Storage
  • Processor performance growth has been phenomenal. I/O?
  • "I/O certainly has been lagging in the last decade" (Seymour Cray, Public Lecture, 1976)
  • "Also, I/O needs a lot of work" (David Kuck, Keynote Address, 1988)

  3. Network Attached Storage
  • Decreasing disk diameters: 14" » 10" » 8" » 5.25" » 3.5" » 2.5" » 1.8" » 1.3" » . . .
    high-bandwidth disk systems based on arrays of disks
  • Increasing network bandwidth: 3 Mb/s » 10 Mb/s » 50 Mb/s » 100 Mb/s » 1 Gb/s » 10 Gb/s
    networks capable of sustaining high-bandwidth transfers
  • Network file services: OS structures supporting remote file access
  • High-performance storage service on a high-speed network
  • The network provides well-defined physical and logical interfaces: separate the CPU and the storage system!

  4. RAID

  5. Manufacturing Advantages of Disk Arrays
  • Disk product families
  • Conventional: 4 disk designs (14", 10", 5.25", 3.5"), spanning high end to low end
  • Disk array: 1 disk design (3.5")

  6. Replace a Small Number of Large Disks with a Large Number of Small Disks
                    IBM 3390 (K)    IBM 3.5" 0061    x70
  Data Capacity     20 GBytes       320 MBytes       23 GBytes
  Volume            97 cu. ft.      0.1 cu. ft.      11 cu. ft.
  Power             3 KW            11 W             1 KW
  Data Rate         15 MB/s         1.5 MB/s         120 MB/s
  I/O Rate          600 I/Os/s      55 I/Os/s        3900 I/Os/s
  MTTF              250 KHrs        50 KHrs          ??? Hrs
  Cost              $250K           $2K              $150K
  Disk arrays have the potential for large data and I/O rates, high MB per cu. ft., and high MB per KW, but awful reliability.
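The "x70" column is roughly 70 copies of the small disk; a minimal Python sketch (my own illustration, with the numbers taken from the slide) makes the scaling explicit:

```python
# Scale the IBM 3.5" 0061 figures from the slide by 70 drives and compare
# with the "x70" column; packaging overhead explains why the slide's volume
# and power figures are a bit higher than these raw products.
small_disk = {"capacity_MB": 320, "volume_cuft": 0.1, "power_W": 11,
              "data_rate_MBps": 1.5, "io_rate_IOps": 55}

array = {spec: value * 70 for spec, value in small_disk.items()}
print(array)
# -> ~22,400 MB, ~7 cu. ft., ~770 W, ~105 MB/s, ~3,850 I/Os/s,
#    in the same ballpark as the 23 GB, 120 MB/s and 3900 I/Os/s quoted above.
```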

  7. Redundant Arrays of Disks
  • Files are "striped" across multiple spindles to gain throughput
  • Increasing the number of disks reduces reliability: disks will fail
  • Redundancy yields high data availability: contents are reconstructed from data redundantly stored in the array
    - Capacity penalty to store it
    - Bandwidth penalty to update it
  • Techniques:
    - Mirroring/shadowing (high capacity cost)
    - Horizontal Hamming codes (overkill)
    - Parity & Reed-Solomon codes
    - Failure prediction (no capacity overhead!): VaxSimPlus; this technique is controversial

  8. Array Reliability
  • Reliability of N disks = reliability of 1 disk ÷ N
  • 50,000 hours ÷ 70 disks = ~700 hours
  • Disk system MTTF drops from 6 years to 1 month!
  • Arrays without redundancy are too unreliable to be useful!
  • Hot spares support reconstruction in parallel with access: very high media availability can be achieved
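A quick back-of-the-envelope check of the slide's arithmetic, as a sketch assuming independent, identically failing disks:

```python
# With no redundancy, any single failure takes the array down, so
# array MTTF ~ single-disk MTTF / N for independent failures.
mttf_single_disk_hours = 50_000
n_disks = 70

mttf_array_hours = mttf_single_disk_hours / n_disks
print(f"{mttf_array_hours:.0f} hours, ~{mttf_array_hours / (24 * 30):.1f} months")
# -> ~714 hours, i.e. roughly one month, versus ~6 years for a single disk.
```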

  9. Redundant Arrays of Disks (RAID)
  • Disk mirroring/shadowing
    - Each disk is fully duplicated onto its "shadow"
    - Logical write = two physical writes
    - 100% capacity overhead
  • Parity data bandwidth array
    - Parity computed horizontally, for recovery rather than fault detection
    - Logically a single high-data-bandwidth disk
  • High I/O rate parity array
    - Interleaved parity blocks
    - Independent reads and writes
    - Logical write = 2 reads + 2 writes
    - Parity + Reed-Solomon codes
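A minimal sketch of the horizontal parity idea (illustrative only, with made-up one-byte "blocks"): the parity block is the bytewise XOR of the data blocks, and XOR-ing the survivors reconstructs a lost block.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks (the 'horizontal' parity)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

d0, d1, d2, d3 = b"\x96", b"\x64", b"\xbd", b"\xb3"   # made-up data blocks
parity = xor_blocks([d0, d1, d2, d3])

# The disk holding d2 fails: rebuild it from the surviving data plus parity.
assert xor_blocks([d0, d1, d3, parity]) == d2
```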

  10. Problems of Disk Arrays: Small Writes
  • RAID-5 small-write algorithm: 1 logical write = 2 physical reads + 2 physical writes
  • (Figure: stripe D0 D1 D2 D3 P, with D0 being overwritten by new data D0')
    1. Read old data D0
    2. Read old parity P
    3. Write new data D0'
    4. Write new parity P' = P XOR D0 XOR D0'
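A sketch of the parity update in the figure (hypothetical helper names, not from the slides): because parity is a plain XOR, the new parity can be computed from the old data, the new data, and the old parity alone, which is why only two reads and two writes are needed.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    # Steps 1-2 of the figure: read old data and old parity (the arguments here).
    new_parity = xor(xor(old_parity, old_data), new_data)
    # Steps 3-4: write new_data to the data disk and new_parity to the parity disk.
    return new_parity
```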

  11. Redundant Arrays of Disks: RAID 1: Disk Mirroring/Shadowing
  • Each disk is fully duplicated onto its "shadow" (forming a recovery group)
  • Very high availability can be achieved
  • Bandwidth sacrifice on write: logical write = two physical writes
  • Reads may be optimized
  • Most expensive solution: 100% capacity overhead
  • Targeted for high I/O rate, high availability environments
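A toy sketch of RAID 1 semantics (my own illustration, with in-memory dictionaries standing in for the disk and its shadow):

```python
import random

disk, shadow = {}, {}

def mirrored_write(block: int, data: bytes) -> None:
    disk[block] = data     # one logical write ...
    shadow[block] = data   # ... becomes two physical writes

def mirrored_read(block: int) -> bytes:
    # Reads may be optimized, e.g. sent to the copy with the shorter queue;
    # here we simply pick one of the two replicas at random.
    return random.choice((disk, shadow))[block]
```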

  12. Redundant Arrays of Disks: RAID 3: Parity Disk
  • (Figure: a logical record, e.g. 10010011 11001101 ..., is striped as physical records across the data disks, with one dedicated parity disk P)
  • Parity is computed across the recovery group to protect against hard disk failures
  • 33% capacity cost for parity in this configuration; wider arrays reduce the capacity cost but decrease expected availability and increase reconstruction time
  • Arms logically synchronized, spindles rotationally synchronized
  • Logically a single high-capacity, high-transfer-rate disk
  • Targeted for high-bandwidth applications: scientific computing, image processing

  13. Redundant Arrays of Disks: RAID 5+: High I/O Rate Parity
  • Layout across 5 disk columns (each column is a disk, each row is a stripe, each cell is a stripe unit):
      D0   D1   D2   D3   P0
      D4   D5   D6   P1   D7
      D8   D9   P2   D10  D11
      D12  P3   D13  D14  D15
      P4   D16  D17  D18  D19
      D20  D21  D22  D23  P5
      ...  ...  ...  ...  ...
  • Independent accesses occur in parallel
  • A logical write is 4 physical I/Os: 2 reads and 2 writes
  • Independent writes (1 data and 1 parity) are possible because of the interleaved parity
  • Reed-Solomon codes ("Q") for protection during reconstruction
  • Targeted for mixed applications
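The rotated parity placement in the layout above can be captured in a few lines; a sketch (my own helper, assuming the 5-disk layout drawn here, with parity shifting one column to the left on each successive stripe):

```python
def raid5_block_location(block: int, n_disks: int = 5):
    """Map a logical data block number to (stripe, data disk, parity disk)."""
    stripe, offset = divmod(block, n_disks - 1)        # n_disks - 1 data blocks per stripe
    parity_disk = (n_disks - 1) - (stripe % n_disks)   # parity rotates left each stripe
    data_disk = offset if offset < parity_disk else offset + 1
    return stripe, data_disk, parity_disk

# D7 sits on disk 4 of stripe 1, whose parity P1 is on disk 3, as in the layout.
print(raid5_block_location(7))    # -> (1, 4, 3)
```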

  14. Subsystem Organization
  • (Figure: host -> host adapter -> array controller -> single-board disk controllers)
    - Host adapter: manages the interface to the host, DMA
    - Array controller: control, buffering, parity logic
    - Single-board disk controllers: physical device control; often piggy-backed in small-formfactor devices
  • Striping software is off-loaded from the host to the array controller
    - No application modifications
    - No reduction of host performance

  15. System Availability: Orthogonal RAIDs
  • (Figure: one array controller drives several string controllers, each with its own string of disks)
  • Redundant support components: fans, power supplies, controller, cables
  • Data recovery group: unit of data redundancy
  • End-to-end data integrity: internal parity-protected data paths

  16. System-Level Availability
  • (Figure: host connected to two I/O controllers, each feeding its own array controller, with recovery groups spanning both: fully dual redundant)
  • Goal: no single point of failure
  • With duplicated paths, higher performance can be obtained when there are no failures

  17. Magnetic Tapes

  18. Memory Hierarchies
  • General-purpose computing environment memory hierarchy
  • (Figure: file cache -> hard disk -> tapes, plotted against cost per bit, access time, and capacity)

  19. Memory Hierarchies
  • General-purpose computing environment memory hierarchy
  • Memory hierarchy 1980: file cache -> hard disk -> tapes
  • Memory hierarchy 1995: file cache -> on-line storage (SSD; high I/O rate disks, low $/actuator; disk arrays; high data rate disks, low $/MB) -> near-line storage (automated tape libraries, optical jukebox) -> off-line storage and remote archive

  20. Storage Trends: Distributed Storage
  • Storage hierarchy 1980: file cache -> magnetic disk -> magnetic tape (access time and capacity increase, $/MByte declines, going down the hierarchy)
  • Storage hierarchy 1990: workstation (file client cache, local magnetic disk) -> local area network -> file server (server cache, "remote" magnetic disk, server magnetic tape)

  21. Storage Trends: Wide-Area Storage
  • 1995 typical storage hierarchy: client cache -> local area network -> server cache -> on-line storage (disk array) -> wide-area Internet network -> near-line storage (optical disk jukebox, magnetic or optical tape library) -> off-line storage (shelved magnetic or optical tape)
  • Conventional disks are replaced by disk arrays
  • Near-line storage emerges between disk and tape

  22. What's All This About Tape?
  Tape is used for:
  • Backup storage for hard disk data: written once, very infrequently (hopefully never!) read
  • Software distribution: written once, read once
  • Data interchange: written once, read once
  • File retrieval: written/rewritten, files occasionally read
  • Near-line archive and electronic image management: relatively new applications for tape

  23. Alternative Data Storage Technologies
  Technology               Cap (MB)    BPI     TPI    BPI*TPI (M)   Xfer (KB/s)   Access Time
  Conventional tape:
    Reel-to-Reel (.5")        140      6250      18       0.11          549        minutes
    Cartridge (.25")          150     12000     104       1.25           92        minutes
  Helical-scan tape:
    VHS (.5")                2500     17435     650      11.33          120        minutes
    Video (8mm)*             2300     43200     819      35.28          246        minutes
    DAT (4mm)**              1300     61000    1870     114.07          183        20 seconds
  Disk:
    Hard disk (5.25")         760     30552    1667      50.94         1373        20 ms
    Floppy disk (3.5")          2     17434     135       2.35           92        1 second
    CD-ROM (3.5")             540     27600   15875     438.15          183        1 second
  * Second-generation 8mm: 5000 MB, 500 KB/s
  ** Second-generation 4mm: 10000 MB

  24. R-DAT Technology: Two Competing Standards
  • DDS (HP, Sony)
    * 22 frames/group
    * 1870 tpi
    * Optimized for serial writes
  • DataDAT (Hitachi, Matsushita, Sharp)
    * Two modes: streaming (like DDS) and update in place
    * Update in place sacrifices transfer rate and capacity: spare data groups, inter-group gaps, preformatted tapes

  25. R-DAT Technology
  Advantages:
  * Small formfactor, easy handling/loading
  * 200x speed search on index fields (40 sec. max, 20 sec. avg.)
  * 1000x speed physical positioning (8 sec. max, 4 sec. avg.)
  * Inexpensive media ($10/GByte)
  * Volumetric efficiency: 1 GB in 2.5 cu. in.; 1 TB in 1 cu. ft.
  Disadvantages:
  * Two incompatible standards (DDS, DataDAT)
  * Slow transfer rate
  * Lower capacity vs. 8mm tape
  * Small bit size (13 x 0.4 sq. micron): effect on archive stability

  26. R-DAT Technical Challenges
  • Tape capacity
    * Data compression is key
  • Tape bandwidth
    * Data compression
    * Striped tape

  27. MSS Tape: No Perfect Tape Drive?
  • (Figure: trade-off triangle of speed, capacity, and cost)
  • Best 2 out of 3: cost, size, speed
  • Expensive: fast & big
  • Cheap: slow & big

  28. Data Compression Issues
  • Peripheral manufacturer approach: host -> SCSI HBA -> embedded controller -> transport; compression is done in the embedded controller, yielding about 2-3:1
  • System approach: video, audio, image, text, ... compression on the host, with hints from the host passed to the embedded controller and transport, yielding up to 20:1 data-specific compression
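To see why the compression ratio matters so much, a rough sketch using the DAT figures from slide 23 (native 1300 MB capacity, 183 KB/s transfer) and the ratios quoted above; the exact numbers are illustrative only:

```python
native_capacity_MB = 1300   # DAT cartridge capacity (slide 23)
native_rate_KBps = 183      # DAT transfer rate (slide 23)

for ratio in (1.0, 2.5, 20.0):   # none, generic ~2-3:1, data-specific ~20:1
    capacity_GB = native_capacity_MB * ratio / 1000
    rate_MBps = native_rate_KBps * ratio / 1000
    print(f"{ratio:>4}:1 -> {capacity_GB:5.1f} GB, {rate_MBps:5.2f} MB/s effective")
```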

  29. Striped Tape
  • (Figure: data to/from the host passes through speed-matching buffers to four embedded controller + transport pairs, each running at 180 KB/s)
  • Challenges:
    * Difficult to logically synchronize the tape drives
    * Unpredictable write times: read-after-write verify, error correction schemes, N-group writing, etc.
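The bandwidth motivation for striping is simple aggregation; a sketch (the per-transport rate comes from the figure, and it assumes the speed-matching buffers keep every transport streaming):

```python
transport_rate_KBps = 180   # per-transport rate from the figure

for n_transports in (1, 2, 4, 8):
    aggregate = n_transports * transport_rate_KBps
    print(f"{n_transports} transports -> {aggregate} KB/s to/from the host")
# The four transports drawn above give 720 KB/s, provided the drives stay in step.
```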

  30. Automated Media Handling
  • (Figure: tape carousels; a 19" gravity-feed carousel serving a 3.5" formfactor 4mm tape reader)

  31. Automated Media Handling
  • (Figure: side and front views of tape readers and a tape cassette)
  • Tape pack: unit of archive

  32. MSS: Automated Tape Library
  • (Figure: EXB-120 cabinet, about 5 feet by 3 feet, with cartridge holders, tape readers, and an entry/exit port)
  • 116 x 5 GB 8mm tapes = 0.6 TBytes (1991)
  • 4 tape readers in 1991, 8 half-height readers now
  • 4 x 0.5 MByte/s = 2 MBytes/s
  • $40,000 O.E.M. price
  • Prediction: 1995: 3 TBytes; 2000: 9 TBytes
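Checking the slide's EXB-120 arithmetic (a quick sketch; the numbers come straight from the bullets above):

```python
tapes, tape_capacity_GB = 116, 5
readers, reader_rate_MBps = 4, 0.5

print(tapes * tape_capacity_GB / 1000, "TB total")    # 0.58 TB, i.e. the "0.6 TBytes"
print(readers * reader_rate_MBps, "MB/s aggregate")   # 2.0 MB/s
```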

  33. Open Research Issues
  • Hardware/software attack on very large storage systems
    - File system extensions to handle terabyte-sized file systems
    - Storage controllers able to meet bandwidth and capacity demands
  • Compression/decompression between secondary and tertiary storage
    - Hardware assist for on-the-fly compression
    - Application hints for data-specific compression
    - More effective compression over large buffered data
    - DB indices over compressed data
    - Striped tape: is a large buffer enough?
  • Applications: where are the terabytes going to come from?
    - Image storage systems
    - Personal communications network multimedia file server

  34. MSS: Applications of Technology: Robo-Line Library
  • Books/Bancroft x pages/book x bytes/page = Bancroft:
    372,910 x 400 x 4,000 = 0.54 TB
  • Full-text Bancroft near-line = ~0.5 TB; page images ~ 20 TB
  • Prediction: "RLB" (Robo-Line Bancroft) = $250,000
  • Bancroft costs:
    - Catalogue a book: $20/book
    - Reshelve a book: $1/book
    - New books purchased per year that are never checked out: 20%
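The Bancroft estimate works out as follows (a quick check of the slide's arithmetic; the 0.54 TB figure assumes binary terabytes):

```python
books, pages_per_book, bytes_per_page = 372_910, 400, 4_000

total_bytes = books * pages_per_book * bytes_per_page
print(total_bytes)                # 596,656,000,000 bytes of full text
print(total_bytes / 2**40, "TB")  # ~0.54 TB for the whole library
```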

  35. MSS: Summary
  • (Figure: access time (ms) vs. $/MB on log scales; DRAM, magnetic disk, and robo-line tape form three clusters, with Access Gap #1 between DRAM and disk and Access Gap #2 between disk and tape)
