
Cache Innovations: Basics and DRAM (Lecture 13)

This lecture covers the basics of cache access, common cache innovations, and DRAM main memory. Topics include associativity, types of cache misses, techniques for reducing miss rate and miss penalty, cache replacement and write policies, multi-level caches, victim caches, and DRAM organization and bandwidth.


Presentation Transcript


  1. Lecture 13: Cache Innovations
     • Today: cache access basics and innovations, DRAM (Sections 5.1-5.3)

  2. Associativity
     • Set associativity → fewer conflicts; wasted power because multiple data blocks and tags are read in parallel
     • [Figure: two-way set-associative lookup – the byte address (e.g., 10100000) splits into tag and set index; the index selects one set in the tag and data arrays of Way-1 and Way-2, and the stored tags are compared against the address tag to pick the matching way]
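  A minimal C sketch of the two-way lookup, assuming 32-byte blocks and 64 sets; the line_t/cache_lookup names and the geometry are illustrative, not from the lecture:

      #include <stdbool.h>
      #include <stdint.h>

      #define NUM_SETS   64        /* assumed geometry: 64 sets           */
      #define NUM_WAYS   2         /* 2-way set associative               */
      #define BLOCK_BITS 5         /* 32-byte blocks -> 5 offset bits     */
      #define INDEX_BITS 6         /* log2(64) = 6 index bits             */

      typedef struct {
          bool     valid;
          uint32_t tag;
          uint8_t  data[1 << BLOCK_BITS];
      } line_t;

      static line_t cache[NUM_SETS][NUM_WAYS];

      /* Returns a pointer to the matching block, or NULL on a miss.
       * Both ways of the indexed set are probed, mirroring the parallel
       * tag/data reads (and the extra power) mentioned on the slide.    */
      uint8_t *cache_lookup(uint32_t addr)
      {
          uint32_t index = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1);
          uint32_t tag   = addr >> (BLOCK_BITS + INDEX_BITS);

          for (int way = 0; way < NUM_WAYS; way++) {
              line_t *line = &cache[index][way];
              if (line->valid && line->tag == tag)
                  return line->data;      /* hit in this way   */
          }
          return 0;                       /* miss in both ways */
      }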

  3. Types of Cache Misses
     • Compulsory misses: happen the first time a memory word is accessed – the misses for an infinite cache
     • Capacity misses: happen because the program touched many other words before re-touching the same word – the misses for a fully-associative cache
     • Conflict misses: happen because two words map to the same location in the cache – the misses generated while moving from a fully-associative to a direct-mapped cache
     • Sidenote: can a fully-associative cache have more misses than a direct-mapped cache of the same size?
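  The answer to the sidenote is yes: with LRU replacement, a fully-associative cache can thrash on a cyclic access pattern that a direct-mapped cache partially survives. A small simulation sketch (the two-block caches and the 0, 1, 2 access pattern are made up for illustration):

      #include <stdio.h>

      /* Hypothetical setup: two 1-word blocks per cache, and the cyclic
       * access pattern 0, 1, 2, 0, 1, 2, ... (blocks 0 and 2 conflict in
       * the direct-mapped cache; addresses are made up for illustration). */
      int main(void)
      {
          int pattern[] = {0, 1, 2};
          int dm[2] = {-1, -1};          /* direct-mapped: addr % 2 picks the slot  */
          int fa[2] = {-1, -1};          /* fully associative with LRU: fa[0] = LRU */
          int dm_miss = 0, fa_miss = 0;

          for (int i = 0; i < 30; i++) {
              int a = pattern[i % 3];

              /* direct-mapped lookup */
              if (dm[a % 2] != a) { dm[a % 2] = a; dm_miss++; }

              /* fully-associative LRU lookup */
              if (fa[0] == a) { fa[0] = fa[1]; fa[1] = a; }     /* hit on LRU entry */
              else if (fa[1] == a) { /* hit on MRU entry */ }
              else { fa[0] = fa[1]; fa[1] = a; fa_miss++; }     /* miss: evict LRU  */
          }
          printf("direct-mapped misses: %d, fully-associative LRU misses: %d\n",
                 dm_miss, fa_miss);      /* prints 21 vs 30 for this pattern */
          return 0;
      }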

  4. What Influences Cache Misses?

  5. Reducing Miss Rate
     • Large block size – reduces compulsory misses, reduces miss penalty in case of spatial locality – increases traffic between different levels, space wastage, and conflict misses
     • Large caches – reduces capacity/conflict misses – access time penalty
     • High associativity – reduces conflict misses – rule of thumb: a 2-way cache of capacity N/2 has the same miss rate as a 1-way cache of capacity N – access time penalty
     • Way prediction – by predicting the way, the access time is effectively like a direct-mapped cache – can also reduce power consumption
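  A small worked example (not from the slides) of how capacity, block size, and associativity set the offset, index, and tag widths; the 32 KB / 64-byte figures are assumptions:

      #include <stdio.h>

      /* Number of bits needed to encode n (n assumed to be a power of two). */
      static int log2_int(unsigned n)
      {
          int bits = 0;
          while (n > 1) { n >>= 1; bits++; }
          return bits;
      }

      int main(void)
      {
          unsigned capacity  = 32 * 1024;   /* illustrative: 32 KB cache    */
          unsigned block     = 64;          /* illustrative: 64-byte blocks */
          unsigned addr_bits = 32;

          for (unsigned ways = 1; ways <= 8; ways *= 2) {
              unsigned sets   = capacity / (block * ways);
              unsigned offset = log2_int(block);
              unsigned index  = log2_int(sets);
              unsigned tag    = addr_bits - index - offset;
              printf("%u-way: %4u sets, offset=%u index=%u tag=%u bits\n",
                     ways, sets, offset, index, tag);
          }
          return 0;
      }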

  6. Cache Misses
     • On a write miss, you may either choose to bring the block into the cache (write-allocate) or not (write-no-allocate)
     • On a read miss, you always bring the block in (spatial and temporal locality) – but which block do you replace?
        • no choice for a direct-mapped cache
        • randomly pick one of the ways to replace
        • replace the way that was least-recently used (LRU)
        • FIFO replacement (round-robin)
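  A minimal sketch of the LRU option above, assuming each way keeps a last-use timestamp; the structure and field names are invented for the example:

      #include <stdint.h>

      #define NUM_WAYS 4

      typedef struct {
          int      valid;
          uint32_t tag;
          uint64_t last_used;      /* timestamp of the most recent access */
      } way_t;

      /* Pick a victim way within one set: prefer an invalid way, otherwise
       * the way with the oldest (smallest) last_used timestamp.           */
      int choose_lru_victim(const way_t set[NUM_WAYS])
      {
          int victim = 0;
          for (int w = 0; w < NUM_WAYS; w++) {
              if (!set[w].valid)
                  return w;                  /* free way: no eviction needed */
              if (set[w].last_used < set[victim].last_used)
                  victim = w;
          }
          return victim;
      }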

  7. Writes
     • When you write into a block, do you also update the copy in L2?
        • write-through: every write to L1 → write to L2
        • write-back: mark the block as dirty; when the block gets replaced from L1, write it to L2
     • Write-back coalesces multiple writes to an L1 block into one L2 write
     • Write-through simplifies coherency protocols in a multiprocessor system as the L2 always has a current copy of the data
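  A rough sketch contrasting the two policies; write_l2() stands in for the next level's write path and the 64-byte block layout is assumed:

      #include <stdint.h>

      typedef struct {
          int      valid, dirty;
          uint32_t tag;
          uint8_t  data[64];
      } block_t;

      void write_l2(uint32_t addr, const uint8_t *bytes, int n);  /* next level (assumed) */

      /* Write-through: the L1 and L2 are updated on every store. */
      void store_write_through(block_t *b, uint32_t addr, uint8_t byte)
      {
          b->data[addr & 63] = byte;
          write_l2(addr, &byte, 1);            /* every L1 write goes to L2 */
      }

      /* Write-back: only the L1 copy is updated; the block is flushed to L2
       * once, when it is evicted, coalescing multiple writes.              */
      void store_write_back(block_t *b, uint32_t addr, uint8_t byte)
      {
          b->data[addr & 63] = byte;
          b->dirty = 1;
      }

      void evict_write_back(block_t *b, uint32_t block_addr)
      {
          if (b->valid && b->dirty)
              write_l2(block_addr, b->data, 64);  /* one L2 write for many L1 writes */
          b->valid = b->dirty = 0;
      }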

  8. Reducing Cache Miss Penalty
     • Multi-level caches
     • Critical word first
     • Priority for reads
     • Victim caches

  9. Multi-Level Caches
     • The L2 and L3 have properties that are different from the L1
        • access time is not as critical for L2 as it is for L1 (every load/store/instruction accesses the L1)
        • the L2 is much larger and can consume more power per access
     • Hence, they can adopt alternative design choices
        • serial tag and data access
        • high associativity

  10. Read/Write Priority
     • For write-back/write-through caches, writes to lower levels are placed in write buffers
     • When we have a read miss, we must look up the write buffer before checking the lower level
     • When we have a write miss, the write can merge with another entry in the write buffer or it creates a new entry
     • Reads are more urgent than writes (the probability of an instruction waiting for the result of a read is 100%, while the probability of an instruction waiting for the result of a write is much smaller) – hence, reads get priority unless the write buffer is full
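  A simplified sketch of the write-buffer check on a read miss, assuming a small array of pending writes keyed by block address; names and sizes are illustrative:

      #include <stdbool.h>
      #include <stdint.h>

      #define WB_ENTRIES 8

      typedef struct {
          bool     valid;
          uint32_t block_addr;     /* block-aligned address of the pending write */
          uint8_t  data[64];
      } wb_entry_t;

      static wb_entry_t write_buffer[WB_ENTRIES];

      /* On a read miss, the write buffer must be searched before the lower
       * level is accessed, or the read could return stale data.           */
      wb_entry_t *write_buffer_lookup(uint32_t block_addr)
      {
          for (int i = 0; i < WB_ENTRIES; i++)
              if (write_buffer[i].valid && write_buffer[i].block_addr == block_addr)
                  return &write_buffer[i];
          return 0;        /* not buffered: safe to read from the lower level */
      }

      /* Reads are serviced first; buffered writes drain only when the bus is
       * idle or the buffer is full, reflecting the priority rule above.     */
      bool must_drain_writes(int occupied)
      {
          return occupied == WB_ENTRIES;
      }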

  11. Victim Caches
     • A direct-mapped cache suffers from misses because multiple pieces of data map to the same location
     • The processor often tries to access data that it recently discarded – all discards are placed in a small victim cache (4 or 8 entries) – the victim cache is checked before going to L2
     • Can be viewed as additional associativity for a few sets that tend to have the most conflicts
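  A rough sketch of an assumed 4-entry victim cache, filled with L1 discards and probed on an L1 miss before going to the L2; the names and FIFO allocation are illustrative:

      #include <stdbool.h>
      #include <stdint.h>

      #define VICTIM_ENTRIES 4

      typedef struct {
          bool     valid;
          uint32_t block_addr;
          uint8_t  data[64];
      } victim_entry_t;

      static victim_entry_t victim_cache[VICTIM_ENTRIES];
      static int next_victim_slot;             /* simple FIFO allocation */

      /* Every block discarded by the L1 is placed here ... */
      void victim_insert(uint32_t block_addr, const uint8_t *data)
      {
          victim_entry_t *e = &victim_cache[next_victim_slot];
          next_victim_slot = (next_victim_slot + 1) % VICTIM_ENTRIES;
          e->valid = true;
          e->block_addr = block_addr;
          for (int i = 0; i < 64; i++) e->data[i] = data[i];
      }

      /* ... and checked on an L1 miss before going to the L2. */
      victim_entry_t *victim_lookup(uint32_t block_addr)
      {
          for (int i = 0; i < VICTIM_ENTRIES; i++)
              if (victim_cache[i].valid && victim_cache[i].block_addr == block_addr)
                  return &victim_cache[i];
          return 0;                            /* miss: fall through to L2 */
      }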

  12. Tolerating Miss Penalty
     • Out-of-order execution: can do other useful work while waiting for the miss – can have multiple cache misses – the cache controller has to keep track of multiple outstanding misses (non-blocking cache)
     • Hardware and software prefetching into prefetch buffers – aggressive prefetching can increase contention for buses
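  One standard way to keep track of multiple outstanding misses is a table of miss status holding registers (MSHRs); the sketch below is a simplified, assumed version of that bookkeeping, not the exact structure from the lecture:

      #include <stdbool.h>
      #include <stdint.h>

      #define NUM_MSHRS 8

      typedef struct {
          bool     valid;          /* an outstanding miss is using this entry  */
          uint32_t block_addr;     /* which block is being fetched             */
          int      waiting_loads;  /* how many instructions wait on this block */
      } mshr_t;

      static mshr_t mshr[NUM_MSHRS];

      /* Record a new miss. Returns false if every MSHR is busy, in which case
       * the cache must stall (it can no longer stay non-blocking).           */
      bool mshr_allocate(uint32_t block_addr)
      {
          for (int i = 0; i < NUM_MSHRS; i++) {
              if (mshr[i].valid && mshr[i].block_addr == block_addr) {
                  mshr[i].waiting_loads++;      /* merge with an existing miss */
                  return true;
              }
          }
          for (int i = 0; i < NUM_MSHRS; i++) {
              if (!mshr[i].valid) {
                  mshr[i].valid = true;
                  mshr[i].block_addr = block_addr;
                  mshr[i].waiting_loads = 1;
                  return true;
              }
          }
          return false;                         /* all MSHRs busy: stall */
      }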

  13. DRAM Access
     • [Figure: a 1M DRAM organized as a 1024 x 1024 array of bits]
     • The 10 row address bits arrive first (Row Access Strobe, RAS) and the selected row of 1024 bits is read out
     • The 10 column address bits arrive next (Column Access Strobe, CAS) and the column decoder selects the subset of bits returned to the CPU
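  For the 1024 x 1024 organization above, the 20-bit address splits into 10 row bits (sent with RAS) and 10 column bits (sent with CAS); a small illustrative calculation:

      #include <stdio.h>
      #include <stdint.h>

      /* 1M x 1 DRAM: 2^20 bits arranged as a 1024 x 1024 array.
       * The 20 address bits are multiplexed over 10 shared pins:
       * the row half arrives with RAS, the column half with CAS. */
      int main(void)
      {
          uint32_t bit_addr = 0x5A3C7;              /* illustrative 20-bit address */
          uint32_t row = (bit_addr >> 10) & 0x3FF;  /* 10 row bits, sent first     */
          uint32_t col = bit_addr & 0x3FF;          /* 10 column bits, sent next   */

          printf("address 0x%05X -> row %u (RAS), column %u (CAS)\n",
                 bit_addr, row, col);
          return 0;
      }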

  14. DRAM Properties
     • The RAS and CAS bits share the same pins on the chip
     • Each bit loses its value after a while – hence, each bit has to be refreshed periodically – this is done by reading each row and writing the value back (hence, dynamic random access memory) – causes variability in memory access time
     • Dual Inline Memory Modules (DIMMs) contain 4-16 DRAM chips and usually feed eight bytes to the processor

  15. Technology Trends
     • Improvements in technology (smaller devices) → DRAM capacities double every two years
     • Time to read data out of the array improves by only 5% every year → high memory latency (the memory wall!)
     • Time to read data out of the column decoder improves by 10% every year → influences bandwidth

  16. Increasing Bandwidth
     • The column decoder has access to many bits of data – many sequential bits can be forwarded to the CPU without additional row accesses (fast page mode)
     • Each word is sent asynchronously to the CPU – every transfer entails overhead to synchronize with the controller – by introducing a clock, more than one word can be sent without increasing the overhead – synchronous DRAM

  17. Increasing Bandwidth
     • By increasing the memory width (number of memory chips and the connecting bus), more bytes can be transferred together – increases cost
     • Interleaved memory – since the memory is composed of many chips, multiple operations can happen at the same time – a single address is fed to multiple chips, allowing us to read sequential words in parallel
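  A minimal sketch of low-order interleaving, assuming 4 banks and 8-byte words (both figures are assumptions): consecutive words land in different banks and can be read in parallel.

      #include <stdio.h>
      #include <stdint.h>

      #define NUM_BANKS  4          /* assumed number of interleaved banks */
      #define WORD_BYTES 8          /* assumed word size fed to the CPU    */

      int main(void)
      {
          /* Low-order interleaving: word i lives in bank i % NUM_BANKS, so a
           * run of sequential words keeps all banks busy at the same time. */
          for (uint32_t addr = 0; addr < 8 * WORD_BYTES; addr += WORD_BYTES) {
              uint32_t word = addr / WORD_BYTES;
              uint32_t bank = word % NUM_BANKS;
              uint32_t offset_in_bank = word / NUM_BANKS;
              printf("addr %3u -> bank %u, offset %u\n", addr, bank, offset_in_bank);
          }
          return 0;
      }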
