
The Memory Hierarchy: Cache, Main Memory, and Virtual Memory (Part 2)



  1. The Memory Hierarchy: Cache, Main Memory, and Virtual Memory (Part 2) Lecture for CPSC 5155 Edward Bosworth, Ph.D. Computer Science Department Columbus State University

  2. Cache Line Replacement • The cache memory is always smaller than the main memory (else why have a cache?). • For this reason, it is often the case that a memory block being placed into the cache must replace a memory block already there. • The process is called “cache replacement” and the method to choose the block to replace is the “cache replacement policy”.

  3. Example of a Cache Line • Consider a cache memory with 256 (2^8) cache lines, each holding 16 (2^4) bytes. • A 24-bit address would be divided as follows: a 4-bit offset into the cache line and a 20-bit memory block tag. • The memory block tag is required because a number of different memory blocks can be mapped into the same cache line.

  4. Direct-Mapped and Set Associative • In each of these, the count of cache lines is always a power of 2; in our example 256 = 2^8. • For 2^L cache lines, the lower L bits of the memory tag specify the cache line. • Suppose an N-bit address with a K-bit offset into the cache line; the memory tag then has N – K bits and the cache tag has N – L – K bits. • For our example, N = 24, K = 4, L = 8: the memory tag has 20 bits and the cache tag has 12 bits.

  5. Memory Tag 0xAB712 • In our example that assumes a 24-bit memory address and a 16-byte cache line, the 20-bit memory tag 0xAB712 references a block containing addresses 0xAB7120 – 0xAB712F. • For 256 (2^8) cache lines and a direct-mapped cache, this block would go to line 0x12. • The cache tag would be 0xAB7. • We can reconstruct the memory tag from the cache tag and the cache line number.
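As a concrete check of the field division above, the following minimal C sketch splits the example address 0xAB7129 into its offset, cache line number, cache tag, and memory tag. The field widths (4-bit offset, 8-bit line number, 12-bit cache tag) come from the slides; the variable names are only illustrative.

```c
#include <stdio.h>

int main(void) {
    unsigned addr = 0xAB7129;                /* 24-bit byte address               */
    unsigned offset    = addr & 0xF;         /* low 4 bits: byte within the line  */
    unsigned mem_tag   = addr >> 4;          /* 20-bit memory block tag           */
    unsigned line      = mem_tag & 0xFF;     /* low 8 bits of the tag: cache line */
    unsigned cache_tag = mem_tag >> 8;       /* remaining 12 bits: cache tag      */

    printf("memory tag 0x%05X, line 0x%02X, cache tag 0x%03X, offset 0x%X\n",
           mem_tag, line, cache_tag, offset);
    /* prints: memory tag 0xAB712, line 0x12, cache tag 0xAB7, offset 0x9 */
    return 0;
}
```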

  6. Replacement Policy • Direct mapped: no choice • Set associative • Prefer non-valid entry, if there is one • Otherwise, choose among entries in the set • Least-recently used (LRU) • Choose the one unused for the longest time • Simple for 2-way, manageable for 4-way, too hard beyond that • Random • Gives approximately the same performance as LRU for high associativity Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 6

  7. The Dirty Bit and Replacement • Consider a cache line. If the valid bit V = 0, no data has ever been placed in the cache line. This is a great place to put a new block. (This does not apply to direct-mapped caches.) • In some cache organizations, the dirty bit can be used to select the cache line to replace if all cache lines have V = 1. • If a cache line has D = 0 (is not “dirty”), it is not necessary to write its contents back to main memory in order to avoid data loss.

  8. Replacement for Cache Line 0x12 • Suppose memory block 0xAB712 is located in cache line 0x12 of a cache with 256 cache lines. • Now the CPU references memory block 0xCD112. This must also go into line 0x12. • A direct-mapped cache contains only one block per cache line, so block 0xAB712 is replaced. If the block is “dirty”, it must be written back to main memory before being overwritten.

  9. Replacement for Set Associativity • In an N-way set associative cache, each cache line (that is, each set) can hold N memory blocks with their tags. • Suppose memory block 0xCD112 must be read into cache line 0x12 and is not already there. • Check whether any of the N ways (four, in a 4-way cache) has V = 0. If so, it holds no data; read the block into that way. • The next choice is a way with D = 0: a valid block that is not dirty can simply be overwritten. • Otherwise, use some other policy (e.g., LRU or random) to pick the way.
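The selection order on this slide can be sketched in C as follows. The struct, the 4-way constant, and the function name are illustrative assumptions rather than code from the lecture; the final fallback here is random, though LRU would be an equally valid choice.

```c
#include <stdlib.h>
#include <stdbool.h>

#define WAYS 4                       /* 4-way set associative, as in the example */

struct way {                         /* per-block state within one set */
    bool     valid;                  /* V bit */
    bool     dirty;                  /* D bit */
    unsigned tag;                    /* cache tag */
};

/* Return the index of the way to replace within one set. */
int choose_victim(const struct way set[WAYS]) {
    for (int i = 0; i < WAYS; i++)   /* 1st choice: an invalid way (V = 0) */
        if (!set[i].valid) return i;
    for (int i = 0; i < WAYS; i++)   /* 2nd choice: a clean way (D = 0), no write-back needed */
        if (!set[i].dirty) return i;
    return rand() % WAYS;            /* otherwise: some other policy (random here; LRU is common) */
}
```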

  10. Writing to a Cache • Suppose that the CPU writes to memory. The data written will be sent to the cache. • What happens next depends on whether or not the target memory block is present in the cache. If the block is present, there is a hit. • On a hit, the dirty bit for the block is set: D = 1. • If the block is not present, a block is chosen for replacement, and the target block is read into the cache. The write proceeds and D = 1.

  11. Write Policy • Write-through • Update both upper and lower levels • Simplifies replacement, but may require write buffer • Write-back • Update upper level only • Update lower level when block is replaced • Need to keep more state • Virtual memory • Only write-back is feasible, given disk write latency Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 11
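A minimal sketch of the bookkeeping behind the two write policies on a write hit, assuming a toy direct-mapped cache with 256 lines of 16 bytes (the running example); the arrays and function names are illustrative only, not the textbook's code.

```c
#include <stdbool.h>
#include <string.h>

#define LINES      256
#define LINE_BYTES 16

static unsigned char cache[LINES][LINE_BYTES];   /* cached data             */
static unsigned char memory[1 << 24];            /* 16 MB "main memory"     */
static bool          dirty[LINES];               /* one dirty bit per line  */

/* Write-through: on a write hit, update both the cache and memory. */
void write_hit_through(unsigned addr, unsigned line, unsigned offset, unsigned char value) {
    cache[line][offset] = value;
    memory[addr] = value;            /* in practice this goes through a write buffer */
}

/* Write-back: on a write hit, update only the cache and set the dirty bit;
   memory is updated later, when the dirty line is replaced. */
void write_hit_back(unsigned line, unsigned offset, unsigned char value) {
    cache[line][offset] = value;
    dirty[line] = true;              /* memory is now stale for this line */
}

/* Write-back eviction: copy a dirty line back to memory before reusing it.
   block_addr is the address of the line's first byte, reconstructed from its tag. */
void evict_line(unsigned line, unsigned block_addr) {
    if (dirty[line]) {
        memcpy(&memory[block_addr], cache[line], LINE_BYTES);
        dirty[line] = false;
    }
}
```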

  12. Cache Misses • On cache hit, CPU proceeds normally • On cache miss • Stall the CPU pipeline • Fetch block from next level of hierarchy • Instruction cache miss • Restart instruction fetch • Data cache miss • Complete data access Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 12

  13. Sources of Misses • Compulsory misses (aka cold start misses) • First access to a block • Capacity misses • Due to finite cache size • A replaced block is later accessed again • Conflict misses (aka collision misses) • In a non-fully associative cache • Due to competition for entries in a set • Would not occur in a fully associative cache of the same total size Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 13

  14. Write-Through • On data-write hit, could just update the block in cache • But then cache and memory would be inconsistent • Write through: also update memory • But makes writes take longer • e.g., if base CPI = 1, 10% of instructions are stores, write to memory takes 100 cycles • Effective CPI = 1 + 0.1×100 = 11 • Solution: write buffer • Holds data waiting to be written to memory • CPU continues immediately • Only stalls on write if write buffer is already full Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 14

  15. Write-Back • Alternative: On data-write hit, just update the block in cache • Keep track of whether each block is dirty • When a dirty block is replaced • Write it back to memory • Can use a write buffer to allow replacing block to be read first Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 15

  16. The Write Buffer • This buffer exists between the L2 cache and main memory. It is used for memory writes.

  17. What is Written to the Buffer? • Consider our previous example, with the L2 cache organized into 16-byte cache lines. • Suppose the byte at address 0xAB7129 is written. The value is put into the write buffer. • If only the byte itself is put into the buffer, then its address 0xAB7129 is also put there, in order to identify where to put the byte. • If the entire cache line is written to the buffer, then only the memory tag 0xAB712 is needed.
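The two alternatives described on this slide can be sketched as C structs; the names are illustrative, and a real write buffer would hold several such entries in a small queue.

```c
#include <stdint.h>

/* Alternative 1: buffer individual bytes; each entry needs the full address. */
struct wb_byte_entry {
    uint32_t addr;          /* e.g., 0xAB7129 */
    uint8_t  value;
};

/* Alternative 2: buffer whole cache lines; the 20-bit memory tag
   (e.g., 0xAB712) is enough to identify the 16-byte block being written. */
struct wb_line_entry {
    uint32_t mem_tag;       /* block starts at address (mem_tag << 4) */
    uint8_t  data[16];
};
```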

  18. Write Allocation • What should happen on a write miss? • Alternatives for write-through • Allocate on miss: fetch the block • Write around: don’t fetch the block • Since programs often write a whole block before reading it (e.g., initialization) • For write-back • Usually fetch the block Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 18

  19. Cache Design Trade-offs Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 19

  20. Block Size Considerations • Larger blocks should reduce miss rate • Due to spatial locality • But in a fixed-sized cache • Larger blocks → fewer of them • More competition → increased miss rate • Larger blocks → pollution • Larger miss penalty • Can override benefit of reduced miss rate • Early restart and critical-word-first can help Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 20

  21. Multilevel Caches • Primary cache attached to CPU • Small, but fast • Level-2 cache services misses from primary cache • Larger, slower, but still faster than main memory • Main memory services L-2 cache misses • Some high-end systems include L-3 cache Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 21

  22. Multilevel Cache Example • Given • CPU base CPI = 1, clock rate = 4GHz • Miss rate/instruction = 2% • Main memory access time = 100ns • With just primary cache • Miss penalty = 100ns/0.25ns = 400 cycles • Effective CPI = 1 + 0.02 × 400 = 9 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 22

  23. Example (cont.) • Now add L-2 cache • Access time = 5ns • Global miss rate to main memory = 0.5% • Primary miss with L-2 hit • Penalty = 5ns/0.25ns = 20 cycles • Primary miss with L-2 miss • Extra penalty = 400 cycles • CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4 • Performance ratio = 9/3.4 = 2.6 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 23
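The arithmetic in the last two slides can be rechecked with a short C program; the variable names are mine, the parameter values are taken directly from the slides.

```c
#include <stdio.h>

int main(void) {
    double cycle_ns     = 0.25;                 /* 4 GHz clock                     */
    double base_cpi     = 1.0;
    double l1_miss_rate = 0.02;                 /* misses per instruction          */
    double mem_penalty  = 100.0 / cycle_ns;     /* main memory: 400 cycles         */
    double l2_penalty   = 5.0 / cycle_ns;       /* L-2 hit after L-1 miss: 20 cycles */
    double global_miss  = 0.005;                /* both L-1 and L-2 miss           */

    double cpi_l1_only = base_cpi + l1_miss_rate * mem_penalty;                            /* 9.0 */
    double cpi_l1_l2   = base_cpi + l1_miss_rate * l2_penalty + global_miss * mem_penalty; /* 3.4 */

    printf("L1 only: CPI = %.1f\n", cpi_l1_only);
    printf("L1 + L2: CPI = %.1f (performance ratio = %.1f)\n",
           cpi_l1_l2, cpi_l1_only / cpi_l1_l2);
    return 0;
}
```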

  24. Multilevel Cache Considerations • Primary cache • Focus on minimal hit time • L-2 cache • Focus on low miss rate to avoid main memory access • Hit time has less overall impact • Results • L-1 cache usually smaller than a single cache • L-1 block size smaller than L-2 block size Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 24

  25. Interactions with Advanced CPUs • Out-of-order CPUs can execute instructions during cache miss • Pending store stays in load/store unit • Dependent instructions wait in reservation stations • Independent instructions continue • Effect of miss depends on program data flow • Much harder to analyse • Use system simulation Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 25

  26. Main Memory Supporting Caches • Use DRAMs for main memory • Fixed width (e.g., 1 word) • Connected by fixed-width clocked bus • Bus clock is typically slower than CPU clock • Example cache block read • 1 bus cycle for address transfer • 15 bus cycles per DRAM access • 1 bus cycle per data transfer • For 4-word block, 1-word-wide DRAM • Miss penalty = 1 + 4×15 + 4×1 = 65 bus cycles • Bandwidth = 16 bytes / 65 cycles = 0.25 B/cycle Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 26

  27. Increasing Memory Bandwidth • 4-word wide memory • Miss penalty = 1 + 15 + 1 = 17 bus cycles • Bandwidth = 16 bytes / 17 cycles = 0.94 B/cycle • 4-bank interleaved memory • Miss penalty = 1 + 15 + 4×1 = 20 bus cycles • Bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 27
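A short C program recomputing the miss penalties and bandwidths of slides 26 and 27 from the stated bus-cycle costs (1 cycle for the address, 15 cycles per DRAM access, 1 cycle per bus data transfer, 16-byte block); names are illustrative.

```c
#include <stdio.h>

int main(void) {
    int narrow      = 1 + 4 * 15 + 4 * 1;   /* 1-word-wide DRAM: 65 bus cycles          */
    int wide        = 1 + 15 + 1;           /* 4-word-wide memory: 17 bus cycles        */
    int interleaved = 1 + 15 + 4 * 1;       /* 4 banks, overlapped accesses: 20 cycles  */

    printf("1-word-wide:  %2d cycles, %.2f B/cycle\n", narrow,      16.0 / narrow);
    printf("4-word-wide:  %2d cycles, %.2f B/cycle\n", wide,        16.0 / wide);
    printf("4-bank:       %2d cycles, %.2f B/cycle\n", interleaved, 16.0 / interleaved);
    return 0;
}
```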

  28. Advanced DRAM Organization • Bits in a DRAM are organized as a rectangular array • DRAM accesses an entire row • Burst mode: supply successive words from a row with reduced latency • Double data rate (DDR) DRAM • Transfer on rising and falling clock edges • Quad data rate (QDR) DRAM • Separate DDR inputs and outputs Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 28

  29. DRAM Generations Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 29

  30. Measuring Cache Performance • Components of CPU time • Program execution cycles • Includes cache hit time • Memory stall cycles • Mainly from cache misses • With simplifying assumptions: Memory stall cycles = (Memory accesses/Program) × Miss rate × Miss penalty = (Instructions/Program) × (Misses/Instruction) × Miss penalty §5.3 Measuring and Improving Cache Performance Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 30

  31. Cache Performance Example • Given • I-cache miss rate = 2% • D-cache miss rate = 4% • Miss penalty = 100 cycles • Base CPI (ideal cache) = 2 • Load & stores are 36% of instructions • Miss cycles per instruction • I-cache: 0.02 × 100 = 2 • D-cache: 0.36 × 0.04 × 100 = 1.44 • Actual CPI = 2 + 2 + 1.44 = 5.44 • Ideal CPU is 5.44/2 = 2.72 times faster Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 31

  32. Average Access Time • Hit time is also important for performance • Average memory access time (AMAT) • AMAT = Hit time + Miss rate × Miss penalty • Example • CPU with 1ns clock, hit time = 1 cycle, miss penalty = 20 cycles, I-cache miss rate = 5% • AMAT = 1 + 0.05 × 20 = 2ns • i.e., 2 cycles per access Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 32
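The two worked examples on slides 31 and 32 can be restated as a small C program; the variable names are mine, the values come from the slides.

```c
#include <stdio.h>

int main(void) {
    /* Slide 31: CPI including I-cache and D-cache miss stalls. */
    double base_cpi   = 2.0;
    double i_miss     = 0.02, d_miss = 0.04;
    double penalty    = 100.0;            /* cycles                        */
    double ld_st_frac = 0.36;             /* loads/stores per instruction  */
    double cpi = base_cpi + i_miss * penalty + ld_st_frac * d_miss * penalty;
    printf("Actual CPI = %.2f (ideal cache would be %.2fx faster)\n",
           cpi, cpi / base_cpi);          /* 5.44, 2.72x */

    /* Slide 32: average memory access time. */
    double hit_time = 1.0, miss_rate = 0.05, miss_penalty = 20.0;   /* cycles */
    double amat = hit_time + miss_rate * miss_penalty;
    printf("AMAT = %.0f cycles = %.0f ns with a 1 ns clock\n", amat, amat);
    return 0;
}
```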

  33. Performance Summary • When CPU performance increases • Miss penalty becomes more significant • Decreasing base CPI • Greater proportion of time spent on memory stalls • Increasing clock rate • Memory stalls account for more CPU cycles • Can’t neglect cache behavior when evaluating system performance Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 33

  34. Cache Control • Example cache characteristics • Direct-mapped, write-back, write allocate • Block size: 4 words (16 bytes) • Cache size: 16 KB (1024 blocks) • 32-bit byte addresses • Address fields: 18-bit Tag (bits 31–14), 10-bit Index (bits 13–4), 4-bit Offset (bits 3–0) • Valid bit and dirty bit per block • Blocking cache • CPU waits until access is complete §5.7 Using a Finite State Machine to Control A Simple Cache Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 34
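A hedged C sketch of the per-block state and address decomposition for this cache (direct-mapped, 1024 blocks of 16 bytes, 32-bit addresses); the struct and function names are illustrative, not the finite-state controller from the textbook.

```c
#include <stdint.h>
#include <stdbool.h>

#define BLOCKS      1024              /* 16 KB of 16-byte blocks        */
#define OFFSET_BITS 4
#define INDEX_BITS  10

struct cache_block {
    bool     valid;                   /* valid bit                      */
    bool     dirty;                   /* dirty bit (write-back cache)   */
    uint32_t tag;                     /* 18-bit tag                     */
    uint8_t  data[16];                /* one 4-word block               */
};

static struct cache_block cache[BLOCKS];

/* Decompose a 32-bit byte address and test for a hit. */
bool is_hit(uint32_t addr, uint32_t *index_out) {
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);                  /* bits 3..0   */
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);  /* bits 13..4  */
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);                /* bits 31..14 */
    (void)offset;                     /* the offset selects the byte within the block */
    *index_out = index;
    return cache[index].valid && cache[index].tag == tag;
}
```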

  35. Interface Signals • CPU ↔ Cache: Read/Write, Valid, Address (32 bits), Write Data (32 bits), Read Data (32 bits), Ready • Cache ↔ Memory: Read/Write, Valid, Address (32 bits), Write Data (128 bits), Read Data (128 bits), Ready • Multiple cycles per access Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 35

  36. Interactions with Software • Misses depend on memory access patterns • Algorithm behavior • Compiler optimization for memory access Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 36

  37. Multilevel On-Chip Caches • Example: Intel Nehalem 4-core processor • Per core: 32KB L1 I-cache, 32KB L1 D-cache, 512KB L2 cache §5.10 Real Stuff: The AMD Opteron X4 and Intel Nehalem Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 37

  38. 3-Level Cache Organization Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 38

  39. Virtual Memory and Cache Memory • Any modern computer supports both virtual memory and cache memory. Consider the following example, based on results in previous lectures. • Byte-addressable memory • A 32-bit logical address, giving a logical address space of 2^32 bytes. • 2^24 bytes of physical memory, requiring 24 bits to address. • Virtual memory implemented using page sizes of 2^12 = 4096 bytes. • Cache memory implemented using a fully associative cache with cache line size of 16 bytes.

  40. The Two Address Spaces • The logical address is divided as follows: a 20-bit page number and a 12-bit offset within the 4096-byte page. • The physical address is divided as follows: a 12-bit frame number and the same 12-bit offset; for the fully associative cache, the 24-bit physical address is viewed as a 20-bit memory tag and a 4-bit offset into the 16-byte cache line.
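A minimal C sketch of the virtual-to-physical translation split described above, assuming a flat, hypothetical page_table array that maps a 20-bit page number to a 12-bit frame number; a real page table also carries valid, dirty, and protection bits.

```c
#include <stdint.h>

#define PAGE_BITS 12                               /* 4096-byte pages          */

/* Hypothetical flat page table: one 12-bit frame number per 20-bit page number. */
static uint16_t page_table[1u << 20];

/* Translate a 32-bit logical address into a 24-bit physical address. */
uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_BITS;              /* upper 20 bits: page number  */
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1); /* lower 12 bits: page offset  */
    uint32_t frame  = page_table[vpn] & 0xFFF;         /* 12-bit frame number         */
    return (frame << PAGE_BITS) | offset;              /* 24-bit physical address     */
}
```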

  41. VM and Cache: The Complete Process

  42. The Virtually Mapped Cache

  43. Problems with Virtually Mapped Caches • Cache misses have to invoke the virtual memory system (more on that later). This is not a problem. • One problem with virtual mapping is that the translation from virtual addresses to physical addresses varies between processes. This is called the “aliasing problem”. • A solution is to extend the virtual address by a process id.
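A small C sketch of the suggested fix: the cache tag is extended with an address-space (process) identifier, so a match requires both the virtual tag and the process id to agree. The struct and names are illustrative assumptions, not code from the lecture.

```c
#include <stdint.h>
#include <stdbool.h>

/* A virtually mapped cache line whose tag is extended with a process
   (address-space) identifier, so the same virtual address used by two
   different processes cannot match the wrong entry.                   */
struct vcache_line {
    bool     valid;
    uint16_t asid;        /* process identifier assigned by the OS     */
    uint32_t vtag;        /* tag taken from the virtual address        */
    uint8_t  data[16];
};

/* Hit only if the line is valid and both the process id and tag match. */
bool vcache_hit(const struct vcache_line *line, uint16_t asid, uint32_t vtag) {
    return line->valid && line->asid == asid && line->vtag == vtag;
}
```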
