
Professor Alvin R. Lebeck Computer Science 220 ECE 252 Fall 2008




  1. Memory Hierarchy— Motivation, Definitions, Four Questions about Memory Hierarchy, Improving Performance Professor Alvin R. Lebeck Computer Science 220 ECE 252 Fall 2008

  2. Admin • Some stuff will be review…some will not • Projects… • Reading • H&P Appendix C & Chapter 5 • There will be three papers to read

  3. Outline of Today’s Lecture • Most of today should be review…later stuff will not • The Memory Hierarchy • Direct Mapped Cache • Two-Way Set Associative Cache • Fully Associative Cache • Replacement Policies • Write Strategies • Memory Hierarchy Performance • Improving Performance

  4. Cache • What is a cache? • What is the motivation for a cache? • Why do caches work? • How do caches work?

  5. The Motivation for Caches • Motivation: • Large memories (DRAM) are slow • Small memories (SRAM) are fast • Make the average access time small by: • Servicing most accesses from a small, fast memory • Reducing the bandwidth required of the large memory (Figure: Processor connected to DRAM through a Cache)

  6. Levels of the Memory Hierarchy (upper level to lower level) • Registers: 100s of bytes, <1 ns; staged by program/compiler; transfer unit 1-8 bytes (instruction operands) • Cache: KB-MB, 1-30 ns; staged by the cache controller; 8-128 byte blocks • Main Memory: GB, 100 ns-1 us; ~$250 for 2 GB (10^-8 cents/bit); staged by the OS; 512 B-4 KB pages • Disk: 40-200 GB, 3-15 ms; ~$110 for 80 GB (10^-10 cents/bit); staged by user/operator; files of MBs • Tape: ~400 GB, sec-min; ~$80 for 400 GB (10^-11 cents/bit) • Moving toward the lower level, capacity grows and cost per bit falls; the upper levels are faster

  7. The Principle of Locality • The Principle of Locality: • Programs access a relatively small portion of the address space at any instant of time • Example: 90% of time in 10% of the code • Two different types of locality: • Temporal Locality (locality in time): if an item is referenced, it will tend to be referenced again soon • Spatial Locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (Figure: probability of reference across the address space from 0 to 2^n, peaked in a few small regions)

  8. Memory Hierarchy: Principles of Operation • At any given time, data is copied between only 2 adjacent levels: • Upper Level (Cache): the one closer to the processor • Smaller, faster, and uses more expensive technology • Lower Level (Memory): the one further away from the processor • Bigger, slower, and uses less expensive technology • Block: • The minimum unit of information that can either be present or not present in the two-level hierarchy (Figure: Blk X in the upper level (cache) next to the processor; Blk Y in the lower level (memory))

  9. Memory Hierarchy: Terminology • Hit: data appears in some block in the upper level (example: Block X) • Hit Rate: the fraction of memory accesses found in the upper level • Hit Time: time to access the upper level, which consists of RAM access time + time to determine hit/miss • Miss: data must be retrieved from a block in the lower level (Block Y) • Miss Rate = 1 - (Hit Rate) • Miss Penalty = time to replace a block in the upper level + time to deliver the block to the processor • Hit Time << Miss Penalty (Figure: Blk X in the upper level (cache); Blk Y in the lower level (memory))

  10. Four Questions for Memory Hierarchy Designers • Q1: Where can a block be placed in the upper level? (Block placement) • Q2: How is a block found if it is in the upper level? (Block identification) • Q3: Which block should be replaced on a miss? (Block replacement) • Q4: What happens on a write? (Write strategy)

  11. Direct Mapped Cache • A Direct Mapped cache is an array of fixed-size frames. • Each frame holds consecutive bytes (a cache block) of main memory data. • The Tag Array holds the block memory address. • A valid bit associated with each cache block tells if the data is valid. • Cache Index: the location of a block (and its tag) in the cache. • Block Offset: the byte location in the cache block. Cache-Index = (<Address> Mod (Cache_Size)) / Block_Size Block-Offset = <Address> Mod (Block_Size) Tag = <Address> / (Cache_Size)
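The three formulas above can be sketched directly. This is a minimal Python illustration, not from the slides; the function name and example values are hypothetical.

```python
# Direct-mapped address breakdown, following the slide's formulas.
# cache_size and block_size are powers of two, in bytes.

def split_address(addr, cache_size, block_size):
    """Return (tag, cache_index, block_offset) for a direct-mapped cache."""
    block_offset = addr % block_size
    cache_index = (addr % cache_size) // block_size
    tag = addr // cache_size
    return tag, cache_index, block_offset

# Hypothetical address, using the 1 KB cache / 32 B block sizes
# that appear in the later slides
tag, idx, off = split_address(0x1234, cache_size=1024, block_size=32)
print(tag, idx, off)  # 4 17 20
```

In hardware these are just bit-field extractions, since both sizes are powers of two; the modulo and division here compute the same fields.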

  12. The Simplest Cache: Direct Mapped Cache • Location 0 can be occupied by data from: • Memory location 0, 4, 8, ... etc. • In general: any memory location whose 2 LSBs of the address are 0s • Address<1:0> => cache index • Which one should we place in the cache? • How can we tell which one is in the cache? (Figure: a 16-location memory, addresses 0-F, mapping into a 4-byte direct mapped cache with cache indices 0-3)

  13. Direct Mapped Cache (Cont.) For a cache of 2^M bytes with block size of 2^L bytes: • There are 2^(M-L) cache blocks • The lowest L bits of the address are the Block-Offset bits • The next (M - L) bits are the Cache-Index • The remaining (32 - M) bits are the Tag bits (Address layout: Tag (32-M bits) | Cache Index (M-L bits) | Block Offset (L bits))

  14. Direct Mapped Cache Example: 1-KB Cache with 32 B blocks • 32 cache blocks; each entry holds a valid bit, a 22-bit tag, and a 32-byte block (Byte 0 ... Byte 31) • Address layout: Tag (22 bits) | Cache Index (5 bits) | Block Offset (5 bits) • Cache Index = (<Address> Mod (1024)) / 32 • Block Offset = <Address> Mod (32) • Tag = <Address> / (1024) (1 K = 2^10 = 1024; 2^5 = 32)

  15. Example: 1 KB Direct Mapped Cache with 32 B Blocks • For a 1024 (2^10) byte cache with 32-byte blocks: • The uppermost 22 = (32 - 10) address bits are the Cache Tag • The lowest 5 address bits are the Byte Select (Block Size = 2^5) • The next 5 address bits (bit 5 - bit 9) are the Cache Index (Figure: address bits <31:10> Cache Tag, example 0x50; <9:5> Cache Index, ex. 0x01; <4:0> Byte Select, ex. 0x00; the tag is stored as part of the cache "state" alongside each valid bit and 32-byte data block)

  16. Example: 1K Direct Mapped Cache (Cache Miss) • Address: Cache Tag = 0x0002fe, Cache Index = 0x00, Byte Select = 0x00 • Entry 0 has valid bit 0 (tag 0xxxxxxx), so the lookup fails: cache miss • Other entries: entry 1 is valid with tag 0x000050, entry 2 is valid with tag 0x004440

  17. Example: 1K Direct Mapped Cache (Miss Handling) • The missing block is fetched from memory • Entry 0 becomes valid, its Cache Tag is set to 0x0002fe, and the new block of data is filled in

  18. Example: 1K Direct Mapped Cache (Cache Hit) • Address: Cache Tag = 0x000050, Cache Index = 0x01, Byte Select = 0x08 • Entry 1 is valid and its tag (0x000050) matches the address tag: cache hit • The Byte Select picks byte 0x08 out of the 32-byte block

  19. Example: 1K Direct Mapped Cache (Cache Miss) • Address: Cache Tag = 0x002450, Cache Index = 0x02, Byte Select = 0x04 • Entry 2 is valid but its tag (0x004440) does not match the address tag: cache miss

  20. Example: 1K Direct Mapped Cache (Block Replacement) • The old block at index 2 is evicted • Entry 2 now holds Cache Tag 0x002450 and the new block of data

  21. Block Size Tradeoff • In general, a larger block size takes advantage of spatial locality, BUT: • Larger block size means larger miss penalty: it takes longer to fill up the block • If block size is too big relative to cache size, miss rate will go up: too few cache blocks • Average Memory Access Time: • Hit Time x (1 - Miss Rate) + Miss Penalty x Miss Rate (Figures vs. block size: Miss Penalty grows with block size; Miss Rate first falls as spatial locality is exploited, then rises when fewer blocks compromise temporal locality; Average Access Time falls, then climbs with the increased miss penalty and miss rate)
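The average memory access time formula above can be written out as a tiny helper. This is a sketch, not from the slides, and the example numbers are hypothetical.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in the slide's form: hit time is
    counted on hits, the full miss penalty on misses."""
    return hit_time * (1 - miss_rate) + miss_penalty * miss_rate

# Hypothetical numbers: 1-cycle hit, 5% miss rate, 50-cycle miss penalty
print(amat(1, 0.05, 50))  # average cycles per access
```

Sweeping `miss_rate` and `miss_penalty` as functions of block size reproduces the U-shaped average access time curve the slide describes.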

  22. An N-way Set Associative Cache • N-way set associative: N entries for each Cache Index • N direct mapped caches operating in parallel • Example: two-way set associative cache • The Cache Index selects a "set" from the cache • The two tags in the set are compared in parallel • Data is selected based on the tag comparison result (Figure: the Cache Index selects one set; each way holds a valid bit, cache tag, and cache data; two comparators check the address tag against both ways, an OR of the compare results drives Hit, and SEL0/SEL1 steer a mux that outputs the matching way's Cache Block)
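A minimal sketch of the two-way lookup just described; the data structure and function name are hypothetical, not from the slides. Hardware compares both tags in parallel; here a loop does it sequentially.

```python
# Each set holds two ways of (valid, tag, data) tuples.

def lookup(sets, index, tag):
    """Return the matching way's data on a hit, or None on a miss."""
    for valid, way_tag, data in sets[index]:
        if valid and way_tag == tag:
            return data
    return None

# One set at index 0 with two resident blocks
sets = {0: [(True, 0x50, "block A"), (True, 0x2FE, "block B")]}
print(lookup(sets, 0, 0x2FE))  # block B (hit in way 1)
print(lookup(sets, 0, 0x123))  # None (miss in both ways)
```

A direct mapped cache is the N = 1 case: one way per set, one comparison.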

  23. Advantages of Set Associative Cache • Higher hit rate for the same cache size • Fewer conflict misses • Can have a larger cache but keep the index smaller (same size as the virtual page index)

  24. Disadvantage of Set Associative Cache • N-way Set Associative Cache versus Direct Mapped Cache: • N comparators vs. 1 (power penalty) • Extra MUX delay for the data (delay penalty) • Data comes AFTER the Hit/Miss decision and set selection • In a direct mapped cache, the Cache Block is available BEFORE Hit/Miss: • Possible to assume a hit and continue, recovering later if it is a miss (Figure: the same two-way structure as before: the Cache Index selects a set, two comparators check the address tag, an OR produces Hit, and SEL0/SEL1 drive the mux that selects the Cache Block)

  25. And Yet Another Extreme Example: Fully Associative Cache • Fully Associative Cache: push the set associative idea to its limit! • Forget about the Cache Index • Compare the Cache Tags of all cache entries in parallel • Example: with 32 B blocks, we need N 27-bit comparators • By definition: Conflict Miss = 0 for a fully associative cache (Figure: address bits <31:5> are the 27-bit Cache Tag, <4:0> the Byte Select, ex. 0x01; every entry's valid bit and tag are compared against the address tag at once)

  26. Sources of Cache Misses • Compulsory (cold start or process migration, first reference): first access to a block • Misses even in an infinite cache • “Cold” fact of life: not a whole lot you can do about it • Prefetching helps; a larger block gives implicit prefetch • Capacity: • Miss in a Fully Associative cache with LRU replacement • Cache cannot contain all blocks accessed by the program • Solution: increase cache size • Conflict (collision): • Hit in Fully Associative, miss in Set-Associative • Multiple memory locations mapped to the same cache location • Solution 1: increase cache size • Solution 2: increase associativity • Invalidation: another process (e.g., I/O) updates memory

  27. Sources of Cache Misses

                      Direct Mapped    N-way Set Associative    Fully Associative
  Cache Size          Big              Medium                   Small
  Compulsory Miss     Same             Same                     Same
  Conflict Miss       High             Medium                   Zero
  Capacity Miss       Low(er)          Medium                   High
  Invalidation Miss   Same             Same/lower               Same/lower

  Note: If you are going to run “billions” of instructions, compulsory misses are insignificant.

  28. The Need to Make a Decision! • Direct Mapped Cache: • Each memory location can map to only 1 cache location (frame) • No need to make any decision :-) • The current item replaces the previous item in that cache location • N-way Set Associative Cache: • Each memory location has a choice of N cache frames • Fully Associative Cache: • Each memory location can be placed in ANY cache frame • Cache miss in an N-way Set Associative or Fully Associative Cache: • Bring in the new block from memory • Throw out a cache block to make room for the new block • We need to make a decision on which block to throw out!

  29. Cache Block Replacement Policy • Random Replacement: • Hardware randomly selects a cache entry and throws it out • Least Recently Used: • Hardware keeps track of the access history • Replace the entry that has not been used for the longest time • A two-way set associative cache needs only one bit per set for LRU replacement • Example of a simple “pseudo” least recently used implementation: • Assume 64 fully associative entries • A hardware replacement pointer points to one cache entry • Whenever an access is made to the entry the pointer points to: move the pointer to the next entry • Otherwise: do not move the pointer (Figure: replacement pointer selecting one of entries 0-63)
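The pointer scheme above can be sketched as a toy Python model; the class and method names are my own, not from the slides. The pointer advances past any entry that gets used, so it tends to rest on an entry that has not been accessed recently.

```python
class PseudoLRU:
    """Pointer-based pseudo-LRU for a fully associative cache."""

    def __init__(self, n_entries=64):
        self.n = n_entries
        self.pointer = 0  # hardware replacement pointer

    def access(self, entry):
        # Only an access to the pointed-at entry moves the pointer
        if entry == self.pointer:
            self.pointer = (self.pointer + 1) % self.n

    def victim(self):
        # Entry chosen for replacement on a miss
        return self.pointer

p = PseudoLRU(4)
p.access(0)        # pointer moves off the just-used entry 0
print(p.victim())  # 1
p.access(2)        # pointer stays: entry 2 is not the pointed-at entry
print(p.victim())  # 1
```

This is far cheaper than true LRU (one pointer instead of full access-order tracking), at the cost of only approximating the least recently used choice.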

  30. Cache Write Policy: Write Through versus Write Back • Cache reads are much easier to handle than cache writes: • An instruction cache is much easier to design than a data cache • Cache write: • How do we keep data in the cache and memory consistent? • Two options (decision time again :-) • Write Back: write to cache only. Write the cache block to memory when that cache block is being replaced on a cache miss. • Needs a “dirty bit” for each cache block • Greatly reduces the memory bandwidth requirement • Can buffer the data in a write-back buffer • Control can be complex • Write Through: write to cache and memory at the same time. • What!!! How can this be? Isn’t memory too slow for this?

  31. Write Buffer for Write Through • A Write Buffer is needed between the Cache and Memory • Processor: writes data into the cache and the write buffer • Memory controller: writes contents of the buffer to memory • The write buffer is just a FIFO: • Typical number of entries: 4 • Works fine if: store frequency (w.r.t. time) << 1 / DRAM write cycle • Memory system designer’s nightmare: • Store frequency (w.r.t. time) > 1 / DRAM write cycle • Write buffer saturation (Figure: Processor and Cache, with a Write Buffer between the Cache and DRAM)
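The FIFO condition above can be illustrated with a toy model. Assumptions: a 4-entry buffer as on the slide, one DRAM write in flight at a time, and all timing values hypothetical; the function is my own sketch, not the slides' design.

```python
from collections import deque

def simulate(store_times, dram_write_cycle, capacity=4):
    """Count stores that stall because the write buffer is full.
    store_times: sorted arrival times of processor stores."""
    buffer = deque()  # completion times of writes pending in the buffer
    stalls = 0
    for t in store_times:
        # Drain writes that DRAM has already completed by time t
        while buffer and buffer[0] <= t:
            buffer.popleft()
        if len(buffer) >= capacity:
            stalls += 1           # processor must wait: buffer saturated
            t = buffer.popleft()  # stall until the oldest write finishes
        # DRAM handles one write at a time, dram_write_cycle each
        start = max(t, buffer[-1]) if buffer else t
        buffer.append(start + dram_write_cycle)
    return stalls

print(simulate([0, 100, 200], dram_write_cycle=10))       # 0: stores are rare
print(simulate(list(range(10)), dram_write_cycle=100))    # 6: buffer saturates
```

When stores arrive slower than 1 / DRAM write cycle the buffer never fills; when they arrive faster, every store beyond the first `capacity` stalls, no matter how large the buffer is made.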

  32. Write Buffer Saturation • Store frequency (w.r.t. time) -> 1 / DRAM write cycle • If this condition exists for a long period of time (CPU cycle time too quick and/or too many store instructions in a row): • The store buffer will overflow no matter how big you make it • The CPU Cycle Time << DRAM Write Cycle Time • Solutions for write buffer saturation: • Use a write back cache • Install a second level (L2) cache (Figure: same structure as before, with the L2 cache inserted between the L1 cache and the write buffer in front of DRAM)

  33. Write Policies • We know about write-through vs. write-back • Assume: a 16-bit write to memory location 0x00 causes a cache miss. • Do we change the cache tag and update data in the block? Yes: Write Allocate. No: Write No-Allocate • Do we fetch the other data in the block? Yes: Fetch-on-Write (usually paired with write-allocate). No: No-Fetch-on-Write • Write-around cache = write-through with no-write-allocate

  34. Sub-block Cache (Sectored) • Sub-block: • Share one cache tag between all sub-blocks in a block • Each sub-block within a block has its own valid bit • Example: 1 KB Direct Mapped Cache, 32-B Block, 8-B Sub-block • Each cache entry will have: 32/8 = 4 valid bits • Miss: only the bytes in that sub-block are brought in • Reduces cache fill bandwidth (miss penalty) (Figure: each entry holds one cache tag plus valid bits SB0-SB3, one per 8-byte sub-block of the 32-byte data)

  35. Review: Four Questions for Memory Hierarchy Designers • Q1: Where can a block be placed in the upper level? (Block placement) • Fully Associative, Set Associative, Direct Mapped • Q2: How is a block found if it is in the upper level? (Block identification) • Tag/Block • Q3: Which block should be replaced on a miss? (Block replacement) • Random, LRU • Q4: What happens on a write? (Write strategy) • Write Back or Write Through (with Write Buffer) CPS 220

  36. Cache Performance
  CPU time = (CPU execution clock cycles + Memory stall clock cycles) x Clock cycle time
  Memory stall clock cycles = Reads x Read miss rate x Read miss penalty + Writes x Write miss rate x Write miss penalty
  Combining reads and writes: Memory stall clock cycles = Memory accesses x Miss rate x Miss penalty

  37. Cache Performance
  CPUtime = IC x (CPIexecution + Mem accesses per instruction x Miss rate x Miss penalty) x Clock cycle time
  (hits are included in CPIexecution)
  Misses per instruction = Memory accesses per instruction x Miss rate
  CPUtime = IC x (CPIexecution + Misses per instruction x Miss penalty) x Clock cycle time

  38. Example • Miss penalty 50 clocks • Miss rate 2% • Base CPI 2.0 • 1.33 references per instruction • Compute the CPUtime • CPUtime = IC x (2.0 + (1.33 x 0.02 x 50)) x Clock • CPUtime = IC x 3.33 x Clock • So CPI increased from 2.0 to 3.33 with a 2% miss rate
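The example's arithmetic can be re-derived with the CPUtime formula from the previous slide; this is a small check, with the slide's numbers plugged into named variables of my own choosing.

```python
# Slide's inputs: base CPI 2.0, 1.33 memory references per instruction,
# 2% miss rate, 50-clock miss penalty.
base_cpi = 2.0
refs_per_instr = 1.33
miss_rate = 0.02
miss_penalty = 50

# Effective CPI = CPIexecution + memory stall cycles per instruction
effective_cpi = base_cpi + refs_per_instr * miss_rate * miss_penalty
print(round(effective_cpi, 2))  # 3.33
```

The stall term alone contributes 1.33 cycles per instruction, so a 2% miss rate raises CPI by two thirds of the base CPI.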

  39. Example 2 • Two caches: both 64KB, 32 byte blocks, miss penalty 70ns, 1.3 references per instruction, CPI 2.0 w/ perfect cache • direct mapped • Cycle time 2ns • Miss rate 1.4% • 2-way associative • Cycle time increases by 10% • Miss rate 1.0% • Which is better? • Compute average memory access time • Compute CPU time

  40. Example 2 Continued • Ave Mem Acc Time = Hit time + (miss rate x miss penalty) • 1-way: 2.0 + (0.014 x 70) = 2.98 ns • 2-way: 2.2 + (0.010 x 70) = 2.90 ns • CPUtime = IC x CPIexec x Cycle • CPIexec = CPIbase + ((mem acc / inst) x miss rate x miss penalty) • Note: miss penalty (in cycles) x cycle time = 70 ns, so the stall term is computed directly in ns • 1-way: IC x ((2.0 x 2.0) + (1.3 x 0.014 x 70)) = 5.27 x IC • 2-way: IC x ((2.0 x 2.2) + (1.3 x 0.010 x 70)) = 5.31 x IC • The direct mapped cache gives the lower total CPU time, even though its average memory access time is worse: the 10% slower cycle of the 2-way cache is paid on every instruction
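Example 2's numbers can also be checked in code; the function names are hypothetical, and the inputs come from the slides (70 ns miss penalty, 1.3 references per instruction, base CPI 2.0).

```python
def avg_mem_access_time(hit_time_ns, miss_rate, miss_penalty_ns=70):
    """AMAT in ns: hit time plus expected miss stall."""
    return hit_time_ns + miss_rate * miss_penalty_ns

def time_per_instr_ns(cycle_ns, miss_rate, base_cpi=2.0, refs=1.3,
                      miss_penalty_ns=70):
    """CPU time per instruction in ns: base work plus memory stalls.
    Miss penalty (cycles) x cycle time = 70 ns, so stalls stay in ns."""
    return base_cpi * cycle_ns + refs * miss_rate * miss_penalty_ns

print(avg_mem_access_time(2.0, 0.014))  # direct mapped AMAT, ~2.98 ns
print(avg_mem_access_time(2.2, 0.010))  # 2-way AMAT,         ~2.90 ns
print(time_per_instr_ns(2.0, 0.014))    # direct mapped,      ~5.27 ns/instr
print(time_per_instr_ns(2.2, 0.010))    # 2-way,              ~5.31 ns/instr
```

The assertions mirror the slide's arithmetic: the 2-way cache wins on AMAT but loses on CPU time.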
