
Swap-Space Management


Presentation Transcript


  1. Swap-Space Management • Swap-space — Virtual memory uses disk space as an extension of main memory • Swap space can be carved out of the normal file system or, more commonly, it can be in a separate disk partition • Swap-space management • Allocate swap space when the process starts; it holds the text segment (the program) and the data segment • The kernel uses swap maps to track swap-space use
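As a rough illustration of the swap-map idea, here is a toy Python sketch of a per-slot use counter. The real kernel structures are more involved; the class and field names here are invented for the example.

    class SwapMap:
        """Toy swap map: one use counter per swap slot (0 = free)."""
        def __init__(self, num_slots):
            self.counts = [0] * num_slots

        def allocate(self):
            # Find a free slot and mark it in use; return its index.
            for slot, count in enumerate(self.counts):
                if count == 0:
                    self.counts[slot] = 1
                    return slot
            raise MemoryError("swap space exhausted")

        def free(self, slot):
            # Drop one reference; the slot becomes reusable at zero.
            assert self.counts[slot] > 0
            self.counts[slot] -= 1

    swap = SwapMap(num_slots=8)
    s = swap.allocate()   # a page is written out to swap slot s
    swap.free(s)          # the page is brought back in; the slot is reusable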

  2. Data Structures for Swapping on Linux Systems

  3. Mass-Storage Systems UCSB CS170 Tao Yang

  4. Mass-Storage Systems: What to Learn • Structure of mass-storage devices and the resulting effects on the uses of the devices • Hard Disk Drive • SSD • Hybrid Disk • Performance characteristics and management of mass-storage devices • Disk Scheduling in HDD • RAID – improving performance/reliability • Textbook Chapters 12 and 14.2.

  5. Mass Storage: HDD and SSD • Most popular: Magnetic hard disk drives • Solid state drives: (SSD)

  6. Magnetic Tape • Relatively permanent and holds large quantities of data • Random access ~1000 times slower than disk • Mainly used for backup, storage of infrequently used data, and as a transfer medium between systems • 20 GB to 1.5 TB typical capacity • Common technologies are 4mm, 8mm, 19mm, LTO-2 and SDLT

  7. Disk Attachment • Drive attached to computer via I/O bus • USB • SATA (replacing ATA, PATA, EIDE) • SCSI • itself is a bus, up to 16 devices on one cable; a SCSI initiator requests an operation and SCSI targets perform the tasks • FC (Fibre Channel) is a high-speed serial architecture • Can be a switched fabric with a 24-bit address space – the basis of storage-area networks (SANs), in which many hosts attach to many storage units • Can be an arbitrated loop (FC-AL) of 126 devices

  8. SATA connectors • SCSI • FC with SAN-switch

  9. Network-Attached Storage • Network-attached storage (NAS) is storage made available over a network rather than over a local connection (such as a bus) • NFS and CIFS are common protocols • Implemented via remote procedure calls (RPCs) between host and storage • The newer iSCSI protocol uses an IP network to carry the SCSI protocol

  10. Storage Area Network (SAN) • Special/dedicated network for accessing block level data storage • Multiple hosts attached to multiple storage arrays - flexible

  11. Performance characteristics of disks • Drives rotate at 60 to 200 times per second • Positioning time is • time to move the disk arm to the desired cylinder (seek time) • plus time for the desired sector to rotate under the disk head (rotational latency) • Effective bandwidth: “average data transfer rate during a transfer – that is, the number of bytes transferred divided by the transfer time” • this rate includes positioning overhead

  12. Moving-head Disk Mechanism

  13. Disk Performance • Disk Latency = Seek Time + Rotation Time + Transfer Time • Seek Time: time to move the disk arm over the track (1-20 ms) • Fine-grained position adjustment is necessary for the head to “settle” • Head switch time ~ track switch time (on modern disks) • Rotation Time: time to wait for the disk to rotate under the disk head • Disk rotation: 4-15 ms (depending on the price of the disk) • On average, only need to wait half a rotation • Transfer Time: time to transfer data onto/off of the disk • Disk head transfer rate: 50-100 MB/s (5-10 µs/sector) • Host transfer rate dependent on the I/O connector (USB, SATA, …)
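A minimal sketch of the latency model on this slide; the numbers plugged in are illustrative mid-range values from the figures that follow, not a specific drive's specification.

    def disk_latency_ms(seek_ms, rotation_ms, num_bytes, rate_mb_per_s):
        # Disk latency = seek time + rotational latency + transfer time.
        transfer_ms = num_bytes / (rate_mb_per_s * 1e6) * 1000
        return seek_ms + rotation_ms + transfer_ms

    # One 512-byte read with illustrative values:
    # ~10 ms seek, ~4 ms rotational latency, 54 MB/s media transfer rate.
    print(disk_latency_ms(10, 4, 512, 54))   # ~14.01 ms, dominated by positioning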

  14. Toshiba Disk (2008)

  15. Moving-head Disk Mechanism: transfer rate ranges from 54 MB/s (inner track) to 128 MB/s (outer track)

  16. Question • How long to complete 500 random disk reads, in FIFO order? Each reads one sector (512 bytes).

  17. Question • How long to complete 500 random disk reads, in FIFO order? Each reads one sector (512 bytes). • Seek: average 10.5 ms • Rotation: average 4.15 ms • The disk spins 120 times per second (7200 RPM / 60) • Average rotational cost is the time to travel half a track: (1/120 s) × 50% ≈ 4.15 ms • Transfer: 5-10 µs • 54 MB/s to transfer 512 bytes per sector • 0.5 KB / (54 MB/s) ≈ 0.01 ms • Total: 500 × (10.5 + 4.15 + 0.01) / 1000 ≈ 7.3 seconds • Effective bandwidth: • 500 sectors × 512 bytes / 7.3 s ≈ 0.034 MB/s • Copying 1 GB of data would take 8.37 hours
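The arithmetic above can be checked with a short script; the values are taken directly from this slide, and small differences are rounding.

    # 500 random single-sector reads, using the figures on this slide.
    seek_ms     = 10.5                  # average seek
    rotation_ms = 0.5 * 1000 / 120      # half a rotation at 7200 RPM (~4.17 ms)
    xfer_ms     = 512 / 54e6 * 1000     # 512 bytes at 54 MB/s (~0.01 ms)

    total_s = 500 * (seek_ms + rotation_ms + xfer_ms) / 1000
    bw_mb_s = 500 * 512 / total_s / 1e6

    print(round(total_s, 1), "seconds")          # ~7.3 s
    print(round(bw_mb_s, 3), "MB/s effective")   # ~0.035 MB/s (the slide rounds to 0.034)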

  18. Question • How long to complete 500 sequential disk reads?

  19. Question • How long to complete 500 sequential disk reads? • Seek Time: 10.5 ms (to reach the first sector) • Rotation Time: 4.15 ms (to reach the first sector) • Transfer Time (outer track): 500 sectors × 512 bytes / 128 MB/s ≈ 2 ms • Total: 10.5 + 4.15 + 2 = 16.7 ms • Effective bandwidth: • 500 sectors × 512 bytes / 16.7 ms ≈ 14.97 MB/s • This is only 11.7% of the maximum transfer rate, even though 250 KB of data is transferred.
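The same check for the sequential case, again using the slide's figures; one seek and one rotational delay are paid once, then the transfer streams at the outer-track rate.

    # 500 sequential sector reads: one seek, one rotational delay, then a
    # single streaming transfer at the outer-track rate of 128 MB/s.
    seek_ms, rotation_ms = 10.5, 4.15
    xfer_ms  = 500 * 512 / 128e6 * 1000            # ~2 ms
    total_ms = seek_ms + rotation_ms + xfer_ms

    bw_mb_s = 500 * 512 / (total_ms / 1000) / 1e6
    print(f"{total_ms:.2f} ms total")              # ~16.7 ms
    print(f"{bw_mb_s:.1f} MB/s effective")         # ~15 MB/s, about 12% of the 128 MB/s max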

  20. Question • How large a transfer is needed to achieve 80% of the max disk transfer rate?

  21. Question • How large a transfer is needed to achieve 80% of the max disk transfer rate? • Assume x rotations are needed, then solve for x: 0.8 × (10.5 ms + (1 ms + 8.5 ms) · x) = 8.5 ms · x • Total: x ≈ 9.1 rotations, about 9.8 MB (with 2100 sectors/track) • A simplified approximation is to compute the effective bandwidth first, counting both the seek and the average rotational latency as positioning time: x / ((10.5 + 4.15) ms + x / (128 KB/ms)) = 0.8 × 128 KB/ms ⇒ x ≈ 7.5 MB • Copying 1 GB of data now takes only 10 seconds!
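A sketch of the simplified approximation, under the assumption that the positioning overhead is the 10.5 ms seek plus the 4.15 ms average rotational latency; that assumption is what reproduces the 7.5 MB answer.

    # Solve  x / (t_pos + x / rate) = 0.8 * rate  for the transfer size x,
    # with t_pos = 10.5 ms seek + 4.15 ms rotation and rate = 128 MB/s.
    t_pos_s = (10.5 + 4.15) / 1000
    rate    = 128e6                          # bytes per second

    # x = 0.8*rate*t_pos + 0.8*x  =>  x = 0.8*rate*t_pos / 0.2
    x = 0.8 * rate * t_pos_s / (1 - 0.8)
    print(f"{x / 1e6:.1f} MB")               # ~7.5 MB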

  22. Disk Scheduling: Objective • Given a set of I/O requests • Coordinate the disk accesses of multiple I/O requests for faster performance and reduced seek time • Seek time ∝ seek distance • Measured by total head movement, in cylinders, from one request to the next

  23. FCFS (First Come First Served) • Total head movement: 640 cylinders to service all requests • (Figure: request queue plotted against cylinders 0-199, with the disk head position marked)
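The request queue itself appears only in the figure. Assuming the classic textbook example (requests 98, 183, 37, 122, 14, 124, 65, 67 with the head at cylinder 53), which reproduces the 640-cylinder total quoted above, the FCFS movement can be computed as follows.

    # FCFS head movement for the assumed textbook queue (head at cylinder 53).
    head  = 53
    queue = [98, 183, 37, 122, 14, 124, 65, 67]

    def fcfs_movement(head, queue):
        total = 0
        for cyl in queue:            # service strictly in arrival order
            total += abs(cyl - head)
            head = cyl
        return total

    print(fcfs_movement(head, queue))    # 640 cylinders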

  24. SSTF (Shortest Seek Time First) • Selects the request with the minimum seek time from the current head position • Total head movement: 236 cylinders

  25. Question • Consider the following sequence of requests (2, 4, 1, 8), and assume the head position is on track 9. Then, the order in which SSTF services the requests is _________ Anthony D. Joseph UCB CS162

  26. Question • Q5: Consider the following sequence of requests (2, 4, 1, 8), and assume the head position is on track 9. Then, the order in which SSTF services the requests is _________ (8, 4, 2, 1)
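A short sketch of SSTF that reproduces the answer above and, under the same assumed textbook queue as before, the 236-cylinder total from slide 24.

    def sstf_order(head, requests):
        # Repeatedly pick the pending request closest to the current head.
        pending, order = list(requests), []
        while pending:
            nxt = min(pending, key=lambda c: abs(c - head))
            order.append(nxt)
            pending.remove(nxt)
            head = nxt
        return order

    def movement(head, order):
        return sum(abs(c - h) for h, c in zip([head] + order, order))

    print(sstf_order(9, [2, 4, 1, 8]))       # [8, 4, 2, 1], as above
    # Under the assumed textbook queue, this also reproduces slide 24's total:
    order = sstf_order(53, [98, 183, 37, 122, 14, 124, 65, 67])
    print(movement(53, order))               # 236 cylinders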

  27. SCAN Algorithm for Disk Scheduling • SCAN: move disk arm in one direction, until all requests satisfied, then reverse direction • Also called “elevator scheduling”

  28. SCAN: Elevator algorithm • Total head movement: 208 cylinders

  29. CSCAN for Disk Scheduling • C-SCAN: move the disk arm in one direction until all requests are satisfied, then start again from the farthest request • Provides a more uniform wait time than SCAN by treating the cylinders as a circular list • The head moves from one end of the disk to the other, servicing requests as it goes; when it reaches the other end, it immediately returns to the beginning of the disk without servicing any requests on the return trip
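A sketch of both sweeps, again assuming the textbook queue. Note that the 208-cylinder SCAN figure on slide 28 corresponds to the arm reversing at the last request in its direction of travel (LOOK-style); the deck quotes no C-SCAN total, and the value below counts the return sweep as movement, which is only one convention.

    def scan_movement(head, requests):
        # SCAN (elevator), sweeping toward lower cylinders first and reversing
        # at the last request in that direction; this reproduces the 208 figure.
        lower = [c for c in requests if c <= head]
        upper = [c for c in requests if c > head]
        turn  = min(lower) if lower else head
        top   = max(upper) if upper else turn
        return (head - turn) + (top - turn)

    def cscan_movement(head, requests, max_cyl=199):
        # C-SCAN: sweep up to the last cylinder, jump back to cylinder 0
        # (counted as movement here), then continue upward.
        lower = [c for c in requests if c < head]
        return (max_cyl - head) + max_cyl + (max(lower) if lower else 0)

    queue = [98, 183, 37, 122, 14, 124, 65, 67]     # assumed textbook queue
    print(scan_movement(53, queue))                  # 208 cylinders
    print(cscan_movement(53, queue))                 # 382 with this convention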

  30. C-SCAN (Circular-SCAN)

  31. Scheduling Algorithms

  32. Selecting a Disk-Scheduling Algorithm • SSTF is common and has natural appeal (but it may lead to starvation) • C-LOOK is fair and efficient • SCAN and C-SCAN perform better for systems that place a heavy load on the disk • Performance depends on the number and types of requests

  33. Solid State Disks (SSDs) • Use NAND Multi-Level Cell (2-bit/cell) flash memory • Non-volatile storage technology • Sector (4 KB page) addressable, but stores 4-64 “pages” per memory block • No moving parts (no rotate/seek motors) • Very low power and lightweight

  34. SSD Logic Components • Transfer time: transfer a 4KB page • Limited by controller and disk interface (SATA: 300-600MB/s) • Latency = Queuing Time + Controller time + Xfer Time

  35. SSD Architecture – Writes (I) • Writing data is complex! (~200 µs to 1.7 ms) • Can only write empty pages in a block • Erasing a block takes ~1.5 ms • Controller maintains a pool of empty blocks by coalescing used pages (read, erase, write), and also reserves some % of capacity https://en.wikipedia.org/wiki/Solid-state_drive Anthony D. Joseph UCB CS162

  36. SSD Architecture – Writes (II) • Write A, B, C, D https://en.wikipedia.org/wiki/Solid-state_drive Anthony D. Joseph UCB CS162

  37. SSD Architecture – Writes (II) • Write A, B, C, D • Write E, F, G, H and A’, B’, C’, D’ • Record A, B, C, D as obsolete https://en.wikipedia.org/wiki/Solid-state_drive Anthony D. Joseph UCB CS162

  38. SSD Architecture – Writes (II) • Write A, B, C, D • Write E, F, G, H and A’, B’, C’, D’ • Record A, B, C, D as obsolete • Controller garbage collects obsolete pages by copying valid pages to a new (erased) block, as sketched below • Typical steady-state behavior when the SSD is almost full • One erase every 64 or 128 writes Anthony D. Joseph UCB CS162
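A toy flash-translation-layer sketch of the write / mark-obsolete / garbage-collect cycle described on slides 35-38. It is purely illustrative; no real controller works exactly like this, and all names and sizes are invented.

    PAGES_PER_BLOCK = 4

    class ToyFTL:
        def __init__(self, num_blocks):
            self.blocks = [[] for _ in range(num_blocks)]  # written pages per block
            self.map = {}                                  # logical page -> (block, slot)

        def write(self, lpage, data):
            if lpage in self.map:                # overwrite: old copy becomes obsolete
                blk, slot = self.map[lpage]
                self.blocks[blk][slot] = None
            blk = self._block_with_free_page()
            self.blocks[blk].append((lpage, data))
            self.map[lpage] = (blk, len(self.blocks[blk]) - 1)

        def _block_with_free_page(self):
            for b, pages in enumerate(self.blocks):
                if len(pages) < PAGES_PER_BLOCK:
                    return b
            b = self._garbage_collect()          # no empty page anywhere: reclaim a block
            if len(self.blocks[b]) == PAGES_PER_BLOCK:
                raise RuntimeError("device full: nothing obsolete to reclaim")
            return b

        def _garbage_collect(self):
            # Erase the block with the most obsolete pages, keeping its live pages.
            victim = max(range(len(self.blocks)),
                         key=lambda b: sum(p is None for p in self.blocks[b]))
            live = [p for p in self.blocks[victim] if p is not None]
            self.blocks[victim] = list(live)               # erase, then rewrite live pages
            for slot, (lpage, _) in enumerate(live):
                self.map[lpage] = (victim, slot)
            return victim

    ftl = ToyFTL(num_blocks=2)
    for name in "ABCD":
        ftl.write(name, name.lower())        # A, B, C, D fill the first block
    for name in "ABCD":
        ftl.write(name, name.lower() * 2)    # A'-D' go to the second block; old pages obsolete
    ftl.write("E", "e")                      # no empty page left: garbage collect the first block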

  39. SSD Architecture – Writes (III) • Write and erase cycles require “high” voltage • Damages memory cells, limits SSD lifespan • Controller uses ECC, performs wear leveling • The result is very workload-dependent performance • Latency = Queuing Time + Controller time (Find Free Block) + Xfer Time • Highest BW: sequential OR random writes (limited by empty pages) • Rule of thumb: writes are 10x more expensive than reads, and erases are 10x more expensive than writes

  40. Flash Drive (2011)

  41. Storage Performance & Price • Sources: [1] http://www.fastestssd.com/featured/ssd-rankings-the-fastest-solid-state-drives/ [2] http://www.extremetech.com/computing/164677-storage-pricewatch-hard-drive-and-ssd-prices-drop-making-for-a-good-time-to-buy • BW: SSD up to 10x higher than HDD; DRAM more than 10x higher than SSD • Price: HDD about 20x cheaper than SSD; SSD about 5x cheaper than DRAM Anthony D. Joseph UCB CS162

  42. SSD Summary • Pros (vs. hard disk drives): • Low latency, high throughput (eliminates seek/rotational delay) • No moving parts: • Very lightweight, low power, silent, very shock insensitive • Reads at memory speeds (limited by controller and I/O bus) • Cons • Small storage (0.1-0.5x disk), very expensive (20x disk) • Hybrid alternative: combine a small SSD with a large HDD • Asymmetric block write performance: read page / erase / write page • Limited drive lifetime • Average time to failure is ~6 years; life expectancy is 9-11 years Anthony D. Joseph UCB CS162

  43. Questions: HDDs and SSDs • Q1: True _ False _ The block is the smallest addressable unit on a disk • Q2: True _ False _ An SSD has zero seek time • Q3: True _ False _ For an HDD, the read and write latencies are similar • Q4: True _ False _ For an SSD, the read and write latencies are similar Anthony D. Joseph UCB CS162

  44. Questions: HDDs and SSDs • Q1: False (the sector, not the block, is the smallest addressable unit on a disk) • Q2: True (an SSD has no mechanical arm, so there is no seek) • Q3: True (HDD reads and writes are both dominated by seek and rotation) • Q4: False (SSD writes, and the erases behind them, are far slower than reads)

  45. Hybrid Disk Drive • A hybrid disk adds a non-volatile cache: a small SSD used as a buffer for a larger drive • All dirty blocks can be flushed to the actual hard drive based on: time, threshold, or loss of power / computer shutdown • (Figure: DRAM cache, NV cache, and ATA interface inside the drive)

  46. Hybrid Disk Drive Benefits • Up to 90% power saving when powered down • Read and write instantly while the spindle is stopped • (Figure: DRAM cache, NV cache, and ATA interface)

  47. How often do disk drives fail? • Schroeder and Gibson, “Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?”, USENIX FAST 2007 • Typical drive replacement rate is 2-4% annually • In 2011, spinning disks advertised failure rates as low as 0.5% (MTTF = 1.7×10^6 hours) • 1000 drives • 2% × 1000 means 20 failed drives per year • A failure every 2-3 weeks! • 1000 machines, each with 4 drives • 2% × 4000 = 80 drive failures per year • A failure every 4-5 days!

  48. Fault Tolerance: Measurement • Mean time to failure (MTTF) • Inverse of the annual failure rate • In 2011, advertised failure rates of spinning disks: • 0.5% (MTTF = 1.7×10^6 hours) • 0.9% (MTTF = 10^6 hours) • Actual failure rates are often reported as 2-4% • Mean Time To Repair (MTTR) is a basic measure of the maintainability of repairable items; it represents the average time required to repair a failed component or device • Typically hours to days
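Converting an annual failure rate into an MTTF in hours reproduces the advertised figures above.

    # Converting an annual failure rate (AFR) into MTTF in hours.
    HOURS_PER_YEAR = 24 * 365

    def mttf_hours(afr):
        return HOURS_PER_YEAR / afr

    print(f"{mttf_hours(0.005):.2e} hours")   # 0.5% AFR -> ~1.75e6 hours (slide: 1.7e6)
    print(f"{mttf_hours(0.009):.2e} hours")   # 0.9% AFR -> ~9.7e5 hours (slide: ~1e6)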

  49. High Availability System Classes • Gmail and Hosted Exchange target three nines of unscheduled availability • 2010: Gmail reached 99.984%, Exchange >99.9% • Unavailability ≈ MTTR / MTBF • It can be cut by reducing MTTR or increasing MTBF
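A small illustration of the Unavailability ≈ MTTR/MTBF rule, with hypothetical numbers (one failure a month and a 30-minute repair; these values are not from the slide).

    # Unavailability ~ MTTR / MTBF, so availability ~ MTBF / (MTBF + MTTR).
    def availability(mtbf_hours, mttr_hours):
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # Hypothetical service: one failure a month (MTBF ~ 720 h), 30-minute repair.
    a = availability(720, 0.5)
    print(f"{a:.5f}")                                       # ~0.99931: roughly three nines
    print(f"{(1 - a) * 365 * 24:.1f} hours down per year")  # ~6 hours/year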

  50. RAID (Redundant Array of Inexpensive Disks) • Multiple disk drives provide reliability via redundancy. Increases the mean time to failure • Hardware RAID with RAID controller vs software RAID
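As a rough, hedged back-of-the-envelope for why redundancy increases the time to data loss, the standard independent-failure approximation for a mirrored pair is MTTDL ≈ MTTF² / (2 × MTTR); below it is evaluated with the advertised 10^6-hour MTTF from slide 48 and a purely hypothetical 24-hour repair window.

    # Mean time to data loss for a mirrored pair, assuming independent failures:
    # MTTDL ~ MTTF^2 / (2 * MTTR).
    mttf_hours = 1e6      # advertised single-drive MTTF (slide 48)
    mttr_hours = 24       # hypothetical: a day to replace and resync the disk

    mttdl = mttf_hours ** 2 / (2 * mttr_hours)
    print(f"{mttdl:.1e} hours = {mttdl / (24 * 365):.2e} years")   # ~2.1e10 h, ~2.4e6 years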
