
Lecture 17








  1. Lecture 17 I/O Optimization

  2. Disk Organization
  • Tracks: concentric rings around the disk surface
  • Sectors: arcs of a track; the minimum unit of transfer
  • Cylinder: the set of corresponding tracks on each surface

  3. Disk Performance
  • Seek: position the heads over a cylinder (~10 ms to move across the disk)
  • Rotational delay: wait for the sector to rotate underneath the head (~8 ms per rotation)
  • Transfer rate: ~4 MB/s
  • Tradeoff: with small sectors, seek time dominates; large sectors transfer at full disk bandwidth, but waste space if the file is small
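
The figures above can be plugged into a quick back-of-the-envelope access-time calculation; a minimal sketch in Python, using the slide's illustrative numbers:

```python
# Illustrative parameters from the slide.
SEEK_MS = 10.0          # average seek time
ROTATION_MS = 8.0       # time for one full rotation
TRANSFER_MBPS = 4.0     # sustained transfer rate, MB/s

def access_time_ms(bytes_requested: int) -> float:
    """Average time to service one random request:
    seek + half a rotation (average rotational delay) + transfer."""
    rotational_delay = ROTATION_MS / 2
    transfer_ms = bytes_requested / (TRANSFER_MBPS * 1e6) * 1000
    return SEEK_MS + rotational_delay + transfer_ms

# For a 4 KB request the transfer takes ~1 ms, dwarfed by
# the ~14 ms of positioning time.
print(round(access_time_ms(4096), 2))   # 15.02
```

This is why the tradeoff on the slide matters: positioning cost is fixed per request, so larger transfers amortize it better.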

  4. Block Size Optimization: Tradeoffs for Small Blocks
  • Pro: less internal fragmentation; each individual transfer completes quickly
  • Con: many transfers for a large file, and potentially more seeks; high overhead of disk space (inter-record gaps between sectors); more inodes
  • Berkeley FFS uses 4K blocks, and also uses fragments (¼ of a block) to save space
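
The internal-fragmentation cost of a given block size can be computed directly; a small sketch (block sizes chosen to match the FFS example):

```python
def wasted_bytes(file_size: int, block_size: int) -> int:
    """Internal fragmentation: unused space in the file's last block."""
    remainder = file_size % block_size
    return 0 if remainder == 0 else block_size - remainder

# A 100-byte file wastes most of a 4 KB block; FFS's 1 KB
# fragments (1/4 of a 4 KB block) cut that waste considerably.
print(wasted_bytes(100, 4096))   # 3996
print(wasted_bytes(100, 1024))   # 924
```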

  5. Disk Arm Scheduling How to choose the next request to serve:
  • FIFO: fair, but may result in long seeks
  • SSTF: shortest seek time first; reduces seeks, but prone to starvation
  • SCAN: like an elevator, move the arm in one direction, taking the closest request, until no requests remain in that direction. Fairer than SSTF but does not perform as well; favors cylinders in the center of the disk

  6. More Disk Scheduling
  • CSCAN: like SCAN, but serves requests in only one direction, skipping any requests while moving the head back. Fairer than SCAN, but performance is a little worse
  • In practice, if locality is good, few seeks are required, so any algorithm works well
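
The arm-scheduling policies above can be sketched as ordering functions over pending cylinder numbers; a minimal illustration of SSTF and SCAN (the request numbers are made up):

```python
def sstf(requests, head):
    """Shortest Seek Time First: repeatedly serve the closest cylinder."""
    pending, order = list(requests), []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def scan(requests, head, direction=1):
    """SCAN (elevator): sweep in one direction, then reverse for the rest."""
    ahead = sorted((c for c in requests if (c - head) * direction >= 0),
                   key=lambda c: abs(c - head))
    behind = sorted((c for c in requests if (c - head) * direction < 0),
                    key=lambda c: abs(c - head))
    return ahead + behind

print(sstf([98, 183, 37, 122, 14], head=53))   # [37, 14, 98, 122, 183]
print(scan([98, 183, 37, 122, 14], head=53))   # [98, 122, 183, 37, 14]
```

Note how SSTF immediately jumps to the closest cylinder (37) while SCAN sweeps upward first; a request far behind the head under SSTF can starve if closer requests keep arriving.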

  7. Rotational Scheduling
  • SRLTF (shortest rotational latency time first) works well
  • Skip-sector: allocate a sequential file to interleaved sectors, so the controller has time to process between consecutive transfers
  • Track offset for head switching: since switching heads takes time, offset the start position of each track by a few sectors so the next head can be selected without missing a rotation
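
Skip-sector allocation can be illustrated by computing the logical-to-physical sector order for a given interleave factor; a small sketch (the `interleave_order` helper and its parameters are hypothetical):

```python
def interleave_order(sectors_per_track: int, factor: int):
    """Physical sector assigned to each logical sector under skip-sector
    interleaving: consecutive logical sectors sit `factor` physical
    sectors apart, leaving processing time between transfers."""
    order, pos = [], 0
    for _ in range(sectors_per_track):
        while pos in order:                      # slot taken: take next free
            pos = (pos + 1) % sectors_per_track
        order.append(pos)
        pos = (pos + factor) % sectors_per_track
    return order

# 8 sectors per track with 2:1 interleave: logical sectors 0..7
# land on physical sectors 0, 2, 4, 6, 1, 3, 5, 7.
print(interleave_order(8, 2))
```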

  8. File Placement
  • Locality of reference: placing files that are frequently accessed together in the same cylinder minimizes seek time
  • E.g., inodes of files in the same directory are placed in the same cylinder group (helps commands like ls)
  • Seek frequency can also be reduced by placing commonly used files on different disks

  9. Disk Caching
  • Exploit locality by caching recently used blocks in memory; use LRU for replacement
  • Works well as long as memory is big enough to hold the working set of files; the hit ratio is then high (70–90%)
  • The problem, as with any LRU scheme, is thrashing: the working set size exceeds the file-system cache
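
An LRU block cache of the kind described can be sketched in a few lines; the `BlockCache` class and its interface are illustrative, not an actual kernel API:

```python
from collections import OrderedDict

class BlockCache:
    """Minimal LRU cache of disk blocks (illustrative sketch)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()              # block number -> data

    def get(self, block_no):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)    # mark most recently used
            return self.blocks[block_no]
        return None                              # miss: caller reads the disk

    def put(self, block_no, data):
        self.blocks[block_no] = data
        self.blocks.move_to_end(block_no)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)      # evict least recently used

cache = BlockCache(2)
cache.put(1, b"a"); cache.put(2, b"b")
cache.get(1)                # touch block 1
cache.put(3, b"c")          # evicts block 2, the least recently used
print(cache.get(2))         # None
```

A tiny capacity makes the thrashing point visible: with a working set of three blocks and a two-block cache, every access evicts something that is about to be needed.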

  10. Prefetching
  • Since the most common access pattern is sequential, prefetch disk blocks ahead of the current read request
  • Typically the entire track is loaded into the cache on every read
  • Problem with prefetching too many blocks: it delays concurrent disk requests from other processes
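
Sequential readahead can be sketched as a cache-miss handler that fetches extra blocks; `read_block` and the `readahead` depth are assumptions for illustration:

```python
def read_block(disk, cache, block_no, readahead=2):
    """On a miss, fetch the requested block plus the next `readahead`
    blocks, betting that the access pattern is sequential."""
    if block_no not in cache:
        last = min(block_no + 1 + readahead, len(disk))
        for b in range(block_no, last):
            cache[b] = disk[b]                  # one combined disk transfer
    return cache[block_no]

disk = [bytes([i]) for i in range(10)]          # fake 10-block disk
cache = {}
read_block(disk, cache, 0)
print(sorted(cache))    # [0, 1, 2]: blocks 1 and 2 were prefetched
```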

  11. Write Behind
  • Batch writes so the disk scheduler can efficiently order many requests at once
  • Avoid some writes entirely: many temporary files never need to be written to disk
  • UNIX has a 30-second write-behind policy: writes go to kernel buffers, and the buffers are flushed every 30 seconds
  • Problem: a system crash can lose data. This can be mitigated by using NVRAM (non-volatile RAM) as the write cache
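
Write behind can be sketched as a dirty-block buffer that coalesces overwrites and flushes in sorted order; the class below is illustrative (the 30-second interval mirrors the UNIX policy mentioned above):

```python
import time

class WriteBehindBuffer:
    """Illustrative write-behind buffer: writes accumulate in memory
    and are flushed together so the disk scheduler can order them."""
    def __init__(self, flush_interval=30.0):
        self.dirty = {}                          # block number -> latest data
        self.flush_interval = flush_interval
        self.last_flush = time.monotonic()

    def write(self, block_no, data):
        self.dirty[block_no] = data              # overwrites coalesce here
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        # Hand dirty blocks to the disk in sorted order (fewer seeks).
        batch = [(b, self.dirty[b]) for b in sorted(self.dirty)]
        self.dirty.clear()
        self.last_flush = time.monotonic()
        return len(batch)

buf = WriteBehindBuffer()
buf.write(7, b"x"); buf.write(7, b"y")   # two writes coalesce into one
print(buf.flush())                        # 1
```

The crash-loss problem on the slide is visible here too: anything still in `dirty` when the machine dies is gone, which is what NVRAM write caches guard against.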

  12. RAID
  • Idea: improve performance through parallel reads, and improve reliability through redundancy
  • Keep an array of disks and distribute data over them, so reads can proceed in parallel
  • Keep a check disk storing the parity of every bit to recover from failure. This works only if at most one disk fails at a time

  13. RAID, continued
  • Problem: the parity disk can become a bottleneck, since it must be updated on every write
  • Solution: distribute the parity information uniformly over all disks, eliminating the single-disk bottleneck
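
The parity scheme can be demonstrated with XOR across the data blocks of one stripe; a minimal sketch (stripe contents are made up):

```python
def parity(blocks):
    """XOR parity across the data blocks of one stripe."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def recover(surviving_blocks, parity_block):
    """Rebuild the single failed block: XOR of parity and survivors."""
    return parity(surviving_blocks + [parity_block])

stripe = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = parity(stripe)

# Disk 1 fails; its block is reconstructed from the survivors plus parity.
print(recover([stripe[0], stripe[2]], p) == stripe[1])   # True
```

Because XOR is its own inverse, recovery works only while exactly one block per stripe is missing, which is the single-failure limit stated on the previous slide.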
