
Chapter 7 Storage Systems


Presentation Transcript


  1. Chapter 7 Storage Systems

  2. Outline • Introduction • Types of Storage Devices • RAID: Redundant Arrays of Inexpensive Disks • Errors and Failures in Real Systems • Benchmarks of Storage Performance and Availability • Design An I/O System

  3. Introduction

  4. Motivation: Who Cares About I/O? • CPU Performance: 2 times every 18 months • I/O performance limited by mechanical delays (disk I/O): < 10% improvement per year (I/O per sec or MB per sec) • Amdahl's Law: system speed-up limited by the slowest part! • 10% I/O & 10x CPU → ~5x Performance (lose 50%) • 10% I/O & 100x CPU → ~10x Performance (lose 90%) • I/O bottleneck: • Diminishing fraction of time in CPU • Diminishing value of faster CPUs
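A minimal sketch of the arithmetic behind the two Amdahl's Law bullets above, assuming a workload that spends 10% of its time in I/O and that only the CPU portion is sped up; the function name is ours, not from the slides.

```python
# Amdahl's Law for the slide's 10%-I/O workload: only the CPU portion
# (90% of the time) is accelerated, the I/O time is unchanged.

def overall_speedup(io_fraction, cpu_speedup):
    """Overall speedup when only the non-I/O fraction is accelerated."""
    return 1.0 / (io_fraction + (1.0 - io_fraction) / cpu_speedup)

for cpu in (10, 100):
    print(f"CPU {cpu:>3}x faster -> overall {overall_speedup(0.10, cpu):.1f}x")
# CPU  10x faster -> overall 5.3x   (the slide rounds this to ~5x, "lose 50%")
# CPU 100x faster -> overall 9.2x   (the slide rounds this to ~10x, "lose 90%")
```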

  5. Position of I/O in Computer Architecture – Past • An orphan in the architecture domain • I/O meant the non-processor and memory stuff • Disk, tape, LAN, WAN, etc. • Performance was not a major concern • Devices characterized as: • Extraneous, non-priority, infrequently used, slow • Exception is swap area of disk • Part of the memory hierarchy • Hence part of system performance but you’re hosed if you use it often

  6. Position of I/O in Computer Architecture – Now • Trends – I/O is the bottleneck • Communication is frequent • Voice response & transaction systems, real-time video • Multimedia expectations • Even standard networks come in gigabit/sec flavors • For multi-computers • Result • Significant focus on system bus performance • Common bridge to the memory system and the I/O systems • Critical performance component for the SMP server platforms

  7. System vs. CPU Performance • Care about speed at which user jobs get done • Throughput - how many jobs/time (system view) • Latency - how quick for a single job (user view) • Response time – time between when a command is issued and results appear (user view) • CPU performance main factor when: • Job mix fits in the memory → there are very few page faults • I/O performance main factor when: • The job is too big for the memory - paging is dominant • When the job reads/writes/creates a lot of unexpected files • OLTP – Decision support – Database • And then there is graphics & specialty I/O devices

  8. System Performance • Depends on many factors in the worst case • CPU • Compiler • Operating System • Cache • Main Memory • Memory-IO bus • I/O controller or channel • I/O drivers and interrupt handlers • I/O devices: there are many types • Level of autonomous behavior • Amount of internal buffer capacity • Device specific parameters for latency and throughput

  9. I/O Systems – diagram: a processor with its cache connects over a memory–I/O bus (the two may be the same bus or separate buses) to main memory and to several I/O controllers (graphics, disks, network); the controllers signal the processor via interrupts.

  10. Keys to a Balanced System • It’s all about overlap - I/O vs CPU • Time_workload = Time_CPU + Time_I/O - Time_overlap • Consider the benefit of just speeding up one • Amdahl’s Law (see P4 as well) • Latency vs. Throughput
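To make the identity on this slide concrete, a tiny numeric check; all of the times below are made up purely for illustration.

```python
# Time_workload = Time_CPU + Time_I/O - Time_overlap (illustrative numbers only)
time_cpu, time_io, time_overlap = 60.0, 50.0, 30.0   # seconds

print(f"actual:       {time_cpu + time_io - time_overlap:.0f} s")          # 80 s
print(f"no overlap:   {time_cpu + time_io:.0f} s")                         # 110 s
print(f"full overlap: {max(time_cpu, time_io):.0f} s (I/O fully hidden)")  # 60 s
```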

  11. I/O System Design Considerations • Depends on type of I/O device • Size, bandwidth, and type of transaction • Frequency of transaction • Defer vs. do now • Appropriate memory bus utilization • What should the controller do • Programmed I/O • Interrupt vs. polled • Priority or not • DMA • Buffering issues - what happens on over-run • Protection • Validation

  12. Types of I/O Devices • Behavior • Read, Write, Both • Once, multiple • Size of average transaction • Bandwidth • Latency • Partner - the speed of the slowest link theory • Human operated (interactive or not) • Machine operated (local or remote)

  13. Is I/O Important? • Depends on your application • Business - disks for file system I/O • Graphics - graphics cards or special co-processors • Parallelism - the communications fabric • Our focus = mainline uniprocessing • Storage subsystems (Chapter 7) • Networks (Chapter 8) • Noteworthy Point • The traditional orphan • But now often viewed more as a front line topic

  14. Types of Storage Devices

  15. Magnetic Disks • 2 important Roles • Long term, non-volatile storage – file system and OS • Lowest level of the memory hierarchy • Most of the virtual memory is physically resident on the disk • Long viewed as a bottleneck • Mechanical system → slow • Hence they seem to be an easy target for improved technology • Disk improvements w.r.t. density have done better than Moore’s law

  16. Disks are organized into platters (1–12, each recorded on 2 sides), tracks (5,000–30,000 per surface), and sectors (100–500 per track). A sector is the smallest unit that can be read or written.

  17. Physical Organization Options • Platters – one or many • Density - fixed or variable (do all tracks have the same no. of sectors?) • Organization - sectors, cylinders, and tracks • Actuators - 1 or more • Heads - 1 per track or 1 per actuator • Access - seek time vs. rotational latency • Seek time related to distance, but not linearly • Typical rotation: 3,600 to 15,000 RPM • Diameter – 1.0 to 3.5 inches

  18. Typical Physical Organization • Multiple platters • Metal disks covered with magnetic recording material on both sides • Single actuator (since they are expensive) • Single R/W head per arm • One arm per surface • All heads therefore over same cylinder • Fixed sector size • Variable density encoding • Disk controller – usually built in processor + buffering

  19. Anatomy of a Read Access • Steps • Memory mapped I/O over bus to controller • Controller starts access • Seek + rotational latency wait • Sector is read and buffered (validity checked) • Controller says ready or DMA’s to main memory and then says ready

  20. Access Time • Access Time • Seek time: time to move the arm over the proper track • Very non-linear: acceleration and deceleration times complicate it • Rotation latency (delay): time for the requested sector to rotate under the head (on average, half a rotation: 0.5 × 60/RPM seconds) • Transfer time: time to transfer a block of bits (typically a sector) under the read-write head • Controller overhead: the overhead the controller imposes in performing an I/O access • Queuing delay: time spent waiting for a disk to become free

  21. Access Time Example • Assumption: average seek time – 5ms; transfer rate – 40MB/sec; 10,000 RPM; controller overhead – 0.1ms; no queuing delay • What is the average time to r/w a 512-byte sector? • Answer: about 8.11 ms – worked out in the sketch below
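A worked version of the answer, under exactly the stated assumptions (5 ms seek, 10,000 RPM, 40 MB/sec transfer, 0.1 ms controller overhead, no queuing delay):

```python
# Average time to read/write a 512-byte sector with the slide's assumptions.
seek_ms       = 5.0                       # average seek time
rotation_ms   = 0.5 * 60_000 / 10_000     # half a rotation at 10,000 RPM = 3.0 ms
transfer_ms   = 512 / 40e6 * 1000         # 512 bytes at 40 MB/sec ≈ 0.013 ms
controller_ms = 0.1

print(f"{seek_ms + rotation_ms + transfer_ms + controller_ms:.2f} ms")  # ≈ 8.11 ms
```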

  22. Cost VS Performance • Large-diameter drives have more data over which to amortize the cost of the electronics → lowest cost per GB • Higher sales volume → lower manufacturing cost • The 3.5-inch drive, the largest surviving form factor in 2001, also has the highest sales volume, so it unquestionably has the best price per GB

  23. Future of Magnetic Disks • Areal density: bits/unit area is the common improvement metric • Trends • Until 1988: 29% improvement per year • 1988 – 1996: 60% per year • 1997 – 2001: 100% per year • 2001 • 20 billion bits per square inch • 60 billion bits per square inch demonstrated in labs

  24. Disk Price Trends by Capacity

  25. Disk Price Trends – Dollars Per MB

  26. Cost VS Access Time for SRAM, DRAM, and Magnetic Disk

  27. Disk Alternatives • Optical Disks • Optical compact disks (CD) – 0.65GB • Digital video discs, digital versatile disks (DVD) – 4.7GB * 2 sides • Rewritable CD (CD-RW) and write-once CD (CD-R) • Rewritable DVD (DVD-RAM) and write-once DVD (DVD-R) • Robotic Tape Storage • Optical Juke Boxes • Tapes – DAT, DLT • Flash memory • Good for embedded systems • Nonvolatile storage and rewritable ROM

  28. Bus – Connecting I/O Devices to CPU/Memory

  29. I/O Connection Issues Connecting the CPU to the I/O device world • Shared communication link between subsystems • Typical choice is a bus • Advantages • Shares a common set of wires and protocols → low cost • Often based on a standard (PCI, SCSI, etc.) → portability and versatility • Disadvantages • Poor performance • Multiple devices imply arbitration and therefore contention • Can be a bottleneck

  30. I/O Connection Issues – Multiple Buses • I/O bus: lengthy; many types of connected devices; wide range in device bandwidth; follows a bus standard; accepts devices varying in latency and bandwidth capabilities • CPU-memory bus: short; high speed; matched to the memory system to maximize CPU-memory bandwidth; knows all the types of devices that must connect together

  31. Typical Bus Synchronous Read Transaction

  32. Bus Design Decisions • Other things to standardize as well • Connectors • Voltage and current levels • Physical encoding of control signals • Protocols for good citizenship

  33. Bus Design Decisions (Cont.) • Bus master: a device that can initiate a R/W transaction • Multiple masters: multiple CPUs and I/O devices can initiate bus transactions • Multiple bus masters need arbitration (fixed priority or random) • Split transactions for multiple masters • Use packets for the full transaction (do not hold the bus) • A read transaction is broken into read-request and memory-reply transactions • Make the bus available for other masters while the data is read/written from/to the specified address • Transactions must be tagged • Higher bandwidth, but also higher latency

  34. Split Transaction Bus
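A toy software model of the tagged read-request / memory-reply protocol described on slide 33; the class and method names are invented for illustration and do not correspond to any real bus standard.

```python
from collections import deque

class SplitTransactionBus:
    """Toy model: a read is split into a tagged read-request and a later
    memory-reply, so the bus is free for other masters in between."""
    def __init__(self):
        self.pending = {}          # tag -> requesting master
        self.to_memory = deque()   # (tag, address) waiting to be serviced
        self.next_tag = 0

    def read_request(self, master, addr):
        tag = self.next_tag
        self.next_tag += 1
        self.pending[tag] = master
        self.to_memory.append((tag, addr))
        return tag                 # the bus is released immediately

    def memory_reply(self, data):
        tag, addr = self.to_memory.popleft()
        master = self.pending.pop(tag)
        print(f"reply tag={tag}: deliver mem[{addr:#x}]={data:#x} to {master}")

bus = SplitTransactionBus()
bus.read_request("CPU0", 0x1000)        # tag 0 issued...
bus.read_request("DMA engine", 0x2000)  # ...tag 1 issued before reply 0 arrives
bus.memory_reply(0xAB)                  # replies are matched to requests by tag
bus.memory_reply(0xCD)
```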

  35. Bus Design Decisions (Cont.) • Clocking: Synchronous vs. Asynchronous • Synchronous • Include a clock in the control lines, and a fixed protocol for address and data relative to the clock • Fast and inexpensive (little or no logic to determine what's next) • Everything on the bus must run at the same clock rate • Short length (due to clock skew) • CPU-memory buses • Asynchronous • Easier to connect a wide variety of devices, and lengthen the bus • Scale better with technological changes • I/O buses

  36. Synchronous or Asynchronous?

  37. Standards • The Good • Let the computer and I/O-device designers work independently • Provide a path for second-party (e.g. cheaper) competition • The Bad • Become major performance anchors • Inhibit change • How to create a standard • Bottom-up • A company tries to get a standards committee to approve its latest philosophy in hopes that it will get the jump on the others (e.g. S bus, PC-AT bus, ...) • De facto standards • Top-down • Design by committee (PCI, SCSI, ...)

  38. Connecting the I/O Bus • To main memory • I/O bus and CPU-memory bus may be the same • I/O commands on the bus could interfere with the CPU's accesses to memory • Since cache misses are rare, this does not tend to stall the CPU • Problem is lack of coherency • Currently, we consider this case • To cache • Access • Memory-mapped I/O or distinct instructions (I/O opcodes) • Interrupt vs. Polling • DMA or not • Autonomous control allows overlap and latency hiding • However there is a cost impact

  39. A typical interface of I/O devices and an I/O bus to the CPU-memory bus

  40. Processor Interface Issues • Processor interface • Interrupts • Memory mapped I/O • I/O Control Structures • Polling • Interrupts • DMA • I/O Controllers • I/O Processors • Capacity, Access Time, Bandwidth • Interconnections • Busses

  41. I/O Controller – diagram: the CPU sends an I/O address and a command to the controller; the controller reports status back (ready, done, error, …) and raises interrupts.

  42. Memory-Mapped I/O – diagram: a CPU (with L1/L2 caches) on a single memory & I/O bus, with no separate I/O instructions, reaching ROM, RAM, and peripheral interfaces; an alternative organization uses a bus adaptor between the memory bus and a separate I/O bus. Some portions of the memory address space are assigned to I/O devices; reads and writes to these addresses cause data transfers.
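A toy illustration of the idea above: loads and stores to a reserved address range are routed to a device's registers instead of RAM. The base address, register layout, and the device itself are all invented for the example.

```python
# Toy address space: addresses below IO_BASE are ordinary RAM; addresses at or
# above IO_BASE are routed to a device's registers (memory-mapped I/O).
IO_BASE = 0xFFFF0000

ram = {}
device_regs = {0x0: 0, 0x4: 0}     # hypothetical device: data reg, status reg

def store(addr, value):
    if addr >= IO_BASE:
        device_regs[addr - IO_BASE] = value       # this store is a device command
        print(f"device write: reg {addr - IO_BASE:#x} <- {value:#x}")
    else:
        ram[addr] = value                         # ordinary memory write

def load(addr):
    if addr >= IO_BASE:
        return device_regs[addr - IO_BASE]        # this load reads device state
    return ram.get(addr, 0)

store(0x1000, 42)            # plain memory traffic
store(IO_BASE + 0x0, 0x41)   # same kind of instruction, but it talks to the device
print(load(IO_BASE + 0x4))   # read the device's status register
```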

  43. Programmed I/O • Polling • I/O module performs the action, on behalf of the processor • But I/O module does not interrupt CPU when I/O is done • Processor is kept busy checking status of I/O module • not an efficient way to use the CPU unless the device is very fast! • Byte by Byte…
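A sketch of the busy-wait loop this slide describes; StubDevice and its register names are invented stand-ins for real hardware.

```python
class StubDevice:
    """Stand-in for a device's status/data registers (illustration only)."""
    def __init__(self, data):
        self.data = list(data)
    def ready(self):
        return bool(self.data)
    def read_data_register(self):
        return self.data.pop(0)

def programmed_io_read(device, nbytes):
    """Programmed I/O: the CPU polls the status register and moves each byte
    itself -- no interrupts, no DMA, and no useful work while it waits."""
    buf = bytearray()
    for _ in range(nbytes):
        while not device.ready():                 # busy-wait burns CPU cycles
            pass
        buf.append(device.read_data_register())   # byte-by-byte transfer
    return bytes(buf)

print(programmed_io_read(StubDevice(b"disk sector"), 11))
```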

  44. Interrupt-Driven I/O • Processor is interrupted when I/O module ready to exchange data • Processor is free to do other work • No needless waiting • Consumes a lot of processor time because every word read or written passes through the processor and requires an interrupt • Interrupt per byte

  45. Direct Memory Access (DMA) • CPU issues request to a DMA module (separate module or incorporated into I/O module) • DMA module transfers a block of data directly to or from memory (without going through CPU) • An interrupt is sent when the task is complete • Only one interrupt per block, rather than one interrupt per byte • The CPU is only involved at the beginning and end of the transfer • The CPU is free to perform other tasks during data transfer
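A hedged software analogy (not real driver code) for the sequence above: the CPU programs the transfer once, a "DMA engine" copies the whole block into memory on its own, and a single completion callback stands in for the one-per-block interrupt.

```python
import threading

memory = bytearray(64)                     # stand-in for main memory

def dma_transfer(device_data, dest_offset, on_complete):
    """Analogy for DMA: copy a whole block without CPU involvement, then
    signal completion once (one 'interrupt' per block, not per byte)."""
    def engine():
        memory[dest_offset:dest_offset + len(device_data)] = device_data
        on_complete(len(device_data))      # the completion interrupt
    threading.Thread(target=engine).start()

done = threading.Event()
dma_transfer(b"one block of disk data", dest_offset=0,
             on_complete=lambda n: (print(f"DMA done: {n} bytes"), done.set()))
# ...the CPU is free to do other work here while the copy proceeds...
done.wait()
print(bytes(memory[:22]))
```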

  46. Input/Output Processors – diagram: (1) the CPU issues an instruction to the IOP (op, device, address); (2) the IOP looks in main memory for its commands; (3) the command block (OP, Addr, Cnt, other special requests) tells the IOP what to do, where to put the data, and how much; device-to/from-memory transfers are controlled by the IOP directly over the I/O bus to the target device, stealing memory cycles; (4) the IOP interrupts the CPU when done.

  47. RAID: Redundant Arrays of Inexpensive Disks

  48. 3 Important Aspects of File Systems • Reliability – is anything broken? • Redundancy is the main hack used to increase reliability • Availability – is the system still available to the user? • When a single point of failure occurs, is the rest of the system still usable? • ECC and various correction schemes help (but cannot improve reliability) • Data Integrity • You must know exactly what is lost when something goes wrong

  49. Disk Arrays • Multiple arms improve throughput, but do not necessarily improve latency • Striping • Spreading data over multiple disks • Reliability • The general metric: N devices have 1/N the reliability of one device • Rule of thumb: MTTF of a disk is about 5 years • Hence need to add redundant disks to compensate • MTTR ::= mean time to repair (or replace) (hours for disks) • If MTTR is small then the array’s MTTF can be pushed out significantly with a fairly small redundancy factor
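A back-of-the-envelope calculation of the points above, using the slide's 5-year rule of thumb and an assumed one-day MTTR; the mirrored-pair formula is a commonly used approximation, not a number from the slides.

```python
HOURS_PER_YEAR = 8760
mttf_disk = 5 * HOURS_PER_YEAR     # rule of thumb: ~5 years per disk
mttr      = 24                     # assume a failed disk is replaced within a day
n_disks   = 100

# No redundancy: N devices have roughly 1/N the reliability of one device.
mttf_array = mttf_disk / n_disks
print(f"{n_disks} disks, no redundancy: MTTF ~ {mttf_array:.0f} hours "
      f"(~{12 * mttf_array / HOURS_PER_YEAR:.1f} months)")

# Mirrored pair: data is lost only if the second copy fails during the repair
# window, so MTTF_pair ~ MTTF_disk^2 / (2 * MTTR) -- a common approximation.
mttf_pair = mttf_disk ** 2 / (2 * mttr)
print(f"{n_disks // 2} mirrored pairs: MTTF ~ "
      f"{mttf_pair / (n_disks // 2) / HOURS_PER_YEAR:.0f} years")
```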

  50. Data Striping • Bit-level striping: split the bits of each byte across multiple disks • No. of disks can be a multiple of 8 or a divisor of 8 • Block-level striping: blocks of a file are striped across multiple disks; with n disks, block i goes to disk (i mod n)+1 • Every disk participates in every access • Number of I/Os per second is the same as a single disk • Amount of data per second is improved • Provides high data-transfer rates, but does not improve reliability
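The block-to-disk mapping rule from this slide in a couple of lines, with disks numbered 1..n as in the slide's (i mod n)+1 convention.

```python
def disk_for_block(block_index, n_disks):
    """Block-level striping: block i of a file goes to disk (i mod n) + 1."""
    return (block_index % n_disks) + 1

n = 4
print([disk_for_block(i, n) for i in range(8)])
# [1, 2, 3, 4, 1, 2, 3, 4] -- consecutive blocks land on consecutive disks
```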
