
Memory Hierarchy


Presentation Transcript


  1. Memory Hierarchy

  2. The Processor-Memory Gap: Since 1980, CPU performance has outpaced DRAM, and the gap has grown about 50% per year. [Chart: performance (1/latency) vs. year, 1980-2000, log scale: CPU improves ~60% per year (2x in 1.5 years); DRAM improves ~9% per year (2x in 10 years).] Q. How do architects address this gap? A. Put smaller, faster “cache” memories between the CPU and DRAM, creating a “memory hierarchy”.

  3. 1977: DRAM faster than microprocessors. On the Apple ][ (1977, Steve Wozniak and Steve Jobs), the CPU cycle time was 1000 ns while DRAM access time was 400 ns.

  4. Levels of the Memory Hierarchy (upper levels are smaller, faster, and costlier per bit; lower levels are larger, slower, and cheaper):
  • Registers: 100s of bytes, <10s of ns; staged by the program/compiler in 1-8 byte units (instruction operands)
  • Cache: KBytes, 10-100 ns, 1-0.1 cents/bit; staged by the cache controller in 8-128 byte units (blocks)
  • Main Memory: MBytes, 200-500 ns, 0.0001-0.00001 cents/bit; staged by the OS in 512 B-4 KB units (pages)
  • Disk: GBytes, 10 ms (10,000,000 ns), 10^-5 to 10^-6 cents/bit; staged by the user/operator in MByte units (files)
  • Tape: effectively infinite capacity, sec-min access time, 10^-8 cents/bit

  5. Memory Hierarchy: Apple iMac G5 (1.6 GHz). Goal: the illusion of a large, fast, cheap memory. Let programs address a memory space that scales to the disk size, at a speed that is usually as fast as register access. Registers are managed by the compiler; the caches are managed by hardware; main memory and disk are managed by the OS, hardware, and the application.

  6. iMac’s PowerPC 970: all caches on-chip. Registers (1K), L1 (64K instruction), L1 (32K data), 512K L2.

  7. The Principle of Locality
  • The Principle of Locality: programs access a relatively small portion of the address space at any instant of time. (This is rather like real life: we all have many friends, but at any given time most of us can keep in touch with only a small group of them.)
  • Two different types of locality:
  • Temporal Locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse)
  • Spatial Locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access)
  • For the last 15 years, hardware has relied on locality for speed. Locality is a property of programs that is exploited in machine design. (See the sketch below.)
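  A hedged illustration in C (generic code, not from the slides): the loop below shows both kinds of locality at once. The scalar sum and the loop counter are reused on every iteration (temporal locality), while the array elements are touched at consecutive addresses (spatial locality).

      #include <stdio.h>

      int main(void) {
          int a[1024];
          for (int i = 0; i < 1024; i++)
              a[i] = i;

          long sum = 0;
          for (int i = 0; i < 1024; i++)  /* straight-line loop body */
              sum += a[i];                /* temporal: sum and i reused every pass;
                                             spatial: a[i] and a[i+1] are adjacent */

          printf("sum = %ld\n", sum);
          return 0;
      }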

  8. Memory Hierarchy: Terminology
  [Diagram: the processor exchanges blocks with an upper-level memory holding block X, which in turn exchanges blocks with a lower-level memory holding block Y.]
  • Hit: the data appears in some block in the upper level (example: block X)
  • Hit Rate: the fraction of memory accesses found in the upper level
  • Hit Time: time to access the upper level, which consists of the RAM access time + the time to determine hit/miss
  • Miss: the data must be retrieved from a block in the lower level (block Y)
  • Miss Rate = 1 - Hit Rate
  • Miss Penalty: time to replace a block in the upper level + time to deliver the block to the processor
  • Hit Time << Miss Penalty

  9. Cache Measures
  • Hit rate: fraction of accesses found in that level; usually so high that we talk about the miss rate instead
  • Miss rate fallacy: miss rate is as misleading a proxy for memory performance as MIPS is for CPU performance; the better metric is average memory access time
  • Average memory access time = Hit time + Miss rate × Miss penalty (in ns or clocks)
  • Miss penalty: time to replace a block from the lower level, including the time to deliver it to the CPU
  • access time: time to reach the lower level = f(latency to lower level)
  • transfer time: time to transfer the block = f(bandwidth between upper and lower levels)

  10. Performance: Cache Memory System
  • Te: effective memory access time of the cache memory system
  • Tc: cache access time
  • Tm: main memory access time
  • Te = Tc + (1 - h) × Tm, where h is the hit ratio
  • Example: Tc = 0.4 ns, Tm = 1.2 ns, h = 0.85 (an 85% hit ratio)
  • Te = 0.4 + (1 - 0.85) × 1.2 = 0.58 ns (a sketch of this calculation appears below)
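  A minimal sketch of the calculation above in C (the function name and parameters are illustrative, not from the slides):

      #include <stdio.h>

      /* Effective access time: Te = Tc + (1 - h) * Tm */
      static double effective_access_time(double tc_ns, double tm_ns, double h) {
          return tc_ns + (1.0 - h) * tm_ns;
      }

      int main(void) {
          /* Values from the slide: Tc = 0.4 ns, Tm = 1.2 ns, h = 0.85 */
          printf("Te = %.2f ns\n", effective_access_time(0.4, 1.2, 0.85)); /* 0.58 ns */
          return 0;
      }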

  11. Associative Memory
  • The time required to find an item in memory can be reduced considerably if stored data can be identified for access by the content of the data itself rather than by an address
  • A memory unit accessed by content is called an associative memory or content-addressable memory (CAM)
  • When a word is to be read from an associative memory, the content of the word, or part of the word, is specified
  • It is more expensive than RAM because each cell must have storage capability as well as logic circuits for matching its content against an external argument
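  In hardware, every CAM cell compares against the search argument in parallel; a software model can only approximate this with a scan. The C sketch below (all names are illustrative assumptions) shows the interface idea: lookup by content rather than by address.

      #include <stdbool.h>
      #include <stdio.h>

      #define CAM_SIZE 4

      /* Each entry stores a key (the content matched against) and a value. */
      struct cam_entry { unsigned key; unsigned value; bool valid; };

      /* Real CAM hardware compares all entries in parallel; this sequential
         scan only models the behavior, not the speed. */
      static bool cam_lookup(const struct cam_entry *cam, unsigned key, unsigned *value) {
          for (int i = 0; i < CAM_SIZE; i++) {
              if (cam[i].valid && cam[i].key == key) {
                  *value = cam[i].value;
                  return true;
              }
          }
          return false;
      }

      int main(void) {
          struct cam_entry cam[CAM_SIZE] = {
              { 0x12, 0xAAA, true }, { 0x34, 0xBBB, true },
          };
          unsigned v = 0;
          if (cam_lookup(cam, 0x34, &v))
              printf("match: value = 0x%X\n", v);  /* prints 0xBBB */
          return 0;
      }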

  12. 4 Questions for Memory Hierarchy • Q1: Where can a block be placed in the upper level? (Block placement) • Q2: How is a block found if it is in the upper level? (Block identification) • Q3: Which block should be replaced on a miss? (Block replacement) • Q4: What happens on a write? (Write strategy)

  13. Direct Mapping
  • Each memory block has only one place it can be loaded in the cache
  • The mapping table is made of RAM instead of CAM
  • An n-bit memory address consists of two parts: k bits of index field and n - k bits of tag field
  • The full n-bit address is used to access main memory, and the k-bit index is used to access the cache

  14. Direct Mapping (contd.)
  • When the CPU generates a memory request, the index field is used as the address to access the cache, and the tag field is compared with the tag in the word read from the cache
  • If the two tags match, there is a hit and the desired data word is in the cache
  • If there is no match, there is a miss: the required word is read from main memory and stored in the cache together with the new tag, replacing the old value
  • The disadvantage is that the hit ratio drops if two or more words with the same index but different tags are accessed repeatedly (mitigated by the fact that such words are far apart in the address space, 512 locations in this example; see the sketch below)
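  A sketch of that lookup in C, assuming a 512-word cache (9-bit index) and treating the remaining high-order address bits as the tag; field widths and names are assumptions for illustration.

      #include <stdbool.h>
      #include <stdio.h>

      #define INDEX_BITS 9
      #define CACHE_WORDS (1u << INDEX_BITS)  /* 512 cache locations */

      struct line { unsigned tag; unsigned data; bool valid; };
      static struct line cache[CACHE_WORDS];

      /* Direct-mapped read: the index selects the one possible line,
         then the tags are compared to decide hit or miss. */
      static unsigned read_word(unsigned addr, unsigned (*mem_read)(unsigned)) {
          unsigned index = addr & (CACHE_WORDS - 1);  /* low 9 bits */
          unsigned tag   = addr >> INDEX_BITS;        /* high bits  */

          if (cache[index].valid && cache[index].tag == tag)
              return cache[index].data;               /* hit */

          /* Miss: read main memory, store word + new tag, replacing the old value. */
          unsigned data = mem_read(addr);
          cache[index] = (struct line){ tag, data, true };
          return data;
      }

      static unsigned fake_mem(unsigned addr) { return addr * 3u; }  /* stand-in memory */

      int main(void) {
          printf("%u\n", read_word(0x1234, fake_mem));  /* miss, fills the cache */
          printf("%u\n", read_word(0x1234, fake_mem));  /* hit */
          return 0;
      }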

  15. Direct Mapping with Blocks
  • The index field is now divided into two parts: the block field and the word field
  • In a 512-word cache we have 64 blocks of 8 words each; the tag field stored within the cache is common to all eight words of the same block
  • Every time a miss occurs, an entire block of eight words is transferred from main memory to cache memory; this takes extra time, but the hit ratio will most likely improve with a larger block size because of the sequential nature of programs
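  For that configuration (64 blocks of 8 words), the 9-bit index splits into a 6-bit block field and a 3-bit word field. A small C sketch of the field extraction, assuming 15-bit addresses so that 6 tag bits remain:

      #include <stdio.h>

      /* Assumed 15-bit address layout: | tag:6 | block:6 | word:3 | */
      #define WORD_BITS  3   /* 8 words per block */
      #define BLOCK_BITS 6   /* 64 blocks         */

      int main(void) {
          unsigned addr  = 0x5B27;  /* example 15-bit address */
          unsigned word  = addr & ((1u << WORD_BITS) - 1);
          unsigned block = (addr >> WORD_BITS) & ((1u << BLOCK_BITS) - 1);
          unsigned tag   = addr >> (WORD_BITS + BLOCK_BITS);

          /* On a miss, all 8 words of this block are fetched and share one tag. */
          printf("tag=%u block=%u word=%u\n", tag, block, word);
          return 0;
      }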

  16. Associative Mapping
  • Any block location in the cache can store any block in memory: the most flexible organization, fast, but very expensive
  • The mapping table is implemented in an associative memory and stores both the address and the content of the memory word
  • If the address is found, the 12-bit data word is read and sent to the CPU; if not, main memory is accessed for the word, and the address-data pair is transferred to the associative cache memory
  • If the cache is full, an address-data pair is displaced to make room for the pair needed, according to a replacement policy (e.g., round-robin, FIFO, LRU, OPT, random)

  17. Set-Associative Mapping
  • Each word of the cache can store two or more words of memory under the same index address; the tag-data items in one word of cache are said to form a set
  • In this two-way example, each cache word holds two tag-data pairs, so the cache word length is 2 × (6 + 12) = 36 bits, the index address is 9 bits, and the cache can accommodate 1024 words of main memory
  • The index value of the address is used to access the cache, and the tag field of the CPU address is compared with both tags; the comparison is done by an associative search of the tags in the set, similar to an associative memory (thus the name set-associative; see the sketch below)
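  A C sketch of the two-way probe: the 9-bit index selects a set, and both tags in the set are then compared (in parallel in hardware; sequentially in this model). Structure and names are assumptions.

      #include <stdbool.h>
      #include <stdio.h>

      #define INDEX_BITS 9
      #define NUM_SETS (1u << INDEX_BITS)
      #define WAYS 2

      struct way { unsigned tag; unsigned data; bool valid; };
      static struct way sets[NUM_SETS][WAYS];

      /* Set-associative probe: the index picks one set, then every
         way's tag in that set is checked against the address tag. */
      static bool lookup(unsigned addr, unsigned *data) {
          unsigned index = addr & (NUM_SETS - 1);
          unsigned tag   = addr >> INDEX_BITS;
          for (int w = 0; w < WAYS; w++) {
              if (sets[index][w].valid && sets[index][w].tag == tag) {
                  *data = sets[index][w].data;
                  return true;   /* hit in way w */
              }
          }
          return false;          /* miss: a replacement policy must pick a victim way */
      }

      int main(void) {
          unsigned addr = 0x1ABC;
          sets[addr & (NUM_SETS - 1)][1] = (struct way){ addr >> INDEX_BITS, 42, true };
          unsigned d = 0;
          if (lookup(addr, &d))
              printf("hit: %u\n", d);  /* prints 42 */
          return 0;
      }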

  18. Q3: Which block should be replaced on a miss?
  • Easy for direct mapped: there is no choice
  • Set associative or fully associative: random, or LRU (least recently used)
  Miss rates, LRU vs. random:

  Cache size | 2-way LRU | 2-way Ran | 4-way LRU | 4-way Ran | 8-way LRU | 8-way Ran
  16 KB      | 5.2%      | 5.7%      | 4.7%      | 5.3%      | 4.4%      | 5.0%
  64 KB      | 1.9%      | 2.0%      | 1.5%      | 1.7%      | 1.4%      | 1.5%
  256 KB     | 1.15%     | 1.17%     | 1.13%     | 1.13%     | 1.12%     | 1.12%

  19. Q3 (contd.): After a cache read miss, if there are no empty cache blocks, which block should be removed from the cache?
  • A randomly chosen block? Easy to implement; how well does it work? (See the 2-way set-associative miss rates above.)
  • The least recently used (LRU) block? Appealing, but hard to implement for high associativity
  • Other LRU approximations are also worth trying (a minimal 2-way LRU sketch follows)
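  For two-way associativity, exact LRU needs only one bit per set, which is why it is easy at low associativity and hard at high associativity. A minimal C sketch (names assumed):

      #include <stdio.h>

      /* For a 2-way set, one LRU bit per set suffices: it names the
         way that was NOT used most recently. */
      struct set_lru { unsigned char lru_way; };

      static void touch(struct set_lru *s, unsigned way_used) {
          s->lru_way = 1u - way_used;  /* the other way is now least recent */
      }

      static unsigned victim(const struct set_lru *s) {
          return s->lru_way;           /* replace the least recently used way */
      }

      int main(void) {
          struct set_lru s = { 0 };
          touch(&s, 0);                            /* way 0 used: way 1 is LRU */
          printf("victim = way %u\n", victim(&s)); /* prints 1 */
          touch(&s, 1);                            /* way 1 used: way 0 is LRU */
          printf("victim = way %u\n", victim(&s)); /* prints 0 */
          return 0;
      }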

  20. Q4: What happens on a write? The two basic policies are write-through (update both the cache and lower-level memory) and write-back (update the cache only, writing the block back to memory when it is replaced). An additional option: let writes to an un-cached address allocate a new cache line (“write-allocate”).

  21. Write Buffers for Write-Through Caches
  [Diagram: the processor writes to the cache and to a write buffer; the write buffer holds data awaiting write-through to lower-level memory.]
  Q. Why a write buffer? A. So the CPU doesn’t stall on every write.
  Q. Why a buffer, why not just one register? A. Bursts of writes are common.
  Q. Are read-after-write (RAW) hazards an issue for the write buffer? A. Yes! Either drain the buffer before the next read, or check the write buffers and send the read first. (A sketch follows.)
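  A C sketch of the behavior on the slide, including the RAW check: before a read goes to the lower level, the buffer is searched for a pending write to the same address. The structure and names are assumptions.

      #include <stdbool.h>
      #include <stdio.h>

      #define WB_SLOTS 4

      struct wb_entry { unsigned addr; unsigned data; bool valid; };
      static struct wb_entry wbuf[WB_SLOTS];

      /* Write-through: the CPU deposits the write here and continues without
         stalling; the buffer drains to lower-level memory in the background. */
      static bool wb_write(unsigned addr, unsigned data) {
          for (int i = 0; i < WB_SLOTS; i++) {
              if (!wbuf[i].valid) {
                  wbuf[i] = (struct wb_entry){ addr, data, true };
                  return true;
              }
          }
          return false;  /* buffer full: the CPU would have to stall */
      }

      /* RAW check: a read must see pending writes, so search the buffer first. */
      static bool wb_check_read(unsigned addr, unsigned *data) {
          for (int i = 0; i < WB_SLOTS; i++) {
              if (wbuf[i].valid && wbuf[i].addr == addr) {
                  *data = wbuf[i].data;  /* forward the buffered value */
                  return true;
              }
          }
          return false;  /* not buffered: safe to read lower-level memory */
      }

      int main(void) {
          wb_write(0x100, 7);
          unsigned d = 0;
          if (wb_check_read(0x100, &d))
              printf("RAW hit in write buffer: %u\n", d);  /* prints 7 */
          return 0;
      }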

  22. 5 Basic Cache Optimizations
  • Reducing miss rate:
  • Larger block size (reduces compulsory misses)
  • Larger cache size (reduces capacity misses)
  • Higher associativity (reduces conflict misses)
  • Reducing miss penalty:
  • Multilevel caches
  • Giving reads priority over writes (e.g., a read miss completes before earlier writes still sitting in the write buffer)

  23. The Limits of Physical Addressing
  [Diagram: CPU connected directly to memory by address lines A0-A31 and data lines D0-D31.]
  “Physical addresses” of memory locations: all programs share one address space, the physical address space. Machine language programs must be aware of the machine organization, and there is no way to prevent a program from accessing any machine resource.

  24. Solution: Add a Layer of Indirection
  [Diagram: the CPU issues “virtual addresses”; address-translation hardware maps them to “physical addresses” on the way to memory (A0-A31, D0-D31).]
  User programs run in a standardized virtual address space. Address translation hardware, managed by the operating system (OS), maps virtual addresses to physical memory. This hardware supports “modern” OS features: protection, translation, and sharing.

  25. Three Advantages of Virtual Memory
  • Translation:
  • A program can be given a consistent view of memory, even though physical memory is scrambled
  • Makes multithreading reasonable (now used a lot!)
  • Only the most important part of a program (the “working set”) must be in physical memory
  • Contiguous structures (like stacks) use only as much physical memory as necessary, yet can still grow later
  • Protection:
  • Different threads (or processes) are protected from each other
  • Different pages can be given special behavior (read only, invisible to user programs, etc.)
  • Kernel data is protected from user programs; very important for protection from malicious programs
  • Sharing:
  • The same physical page can be mapped to multiple users (“shared memory”)

  26. Page Tables Encode Virtual Address Spaces
  [Diagram: pages of a virtual address space mapped onto frames of the physical address space.]
  A virtual address space is divided into blocks of memory called pages. A machine usually supports pages of a few sizes (e.g., MIPS R4000). A valid page table entry codes the physical memory “frame” address for the page.

  27. Page Tables (contd.)
  [Diagram: a virtual address indexes the page table, whose entries point to frames in the physical memory space.]
  A page table is indexed by a virtual address, and the OS manages the page table for each ASID (address space identifier). As before, a valid page table entry codes the physical memory “frame” address for the page.

  28. Details of the Page Table
  [Diagram: a virtual address = virtual page number + 12-bit offset. The page table base register and the virtual page number index into the page table, located in physical memory; each entry holds a valid bit (V), access rights, and a physical address (PA). Physical address = physical page number + the same 12-bit offset.]
  • The page table maps virtual page numbers to physical frames (“PTE” = page table entry)
  • Virtual memory => treat main memory as a cache for disk (a translation sketch follows)
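  A C sketch of the translation on this slide: split the virtual address into a virtual page number and a 12-bit offset, index the page table, check the valid bit, and join the frame number to the unchanged offset. The table layout and names are assumptions.

      #include <stdbool.h>
      #include <stdio.h>

      #define OFFSET_BITS 12               /* 4 KB pages, as on the slide */
      #define NUM_PAGES 16                 /* toy virtual address space   */

      /* A real PTE also carries access rights, as shown on the slide. */
      struct pte { unsigned frame; bool valid; };
      static struct pte page_table[NUM_PAGES];

      /* Virtual -> physical: index the page table with the virtual page
         number, then concatenate the frame number with the page offset. */
      static bool translate(unsigned vaddr, unsigned *paddr) {
          unsigned vpn    = vaddr >> OFFSET_BITS;
          unsigned offset = vaddr & ((1u << OFFSET_BITS) - 1);
          if (vpn >= NUM_PAGES || !page_table[vpn].valid)
              return false;                /* page fault: the OS must intervene */
          *paddr = (page_table[vpn].frame << OFFSET_BITS) | offset;
          return true;
      }

      int main(void) {
          page_table[3] = (struct pte){ .frame = 9, .valid = true };
          unsigned pa = 0;
          if (translate((3u << OFFSET_BITS) | 0x2A, &pa))
              printf("paddr = 0x%X\n", pa);  /* frame 9 + offset 0x2A -> 0x902A */
          return 0;
      }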

  29. Summary #1/2: The Cache Design Space • Several interacting dimensions • cache size • block size • associativity • replacement policy • write-through vs write-back • write allocation • The optimal choice is a compromise • depends on access characteristics • workload • depends on technology / cost • Simplicity often wins

  30. Summary #2/2: Caches
  • The Principle of Locality: programs access a relatively small portion of the address space at any instant of time
  • Temporal locality: locality in time
  • Spatial locality: locality in space
  • Write policy: write-through vs. write-back
