
CENG 450 Computer Systems and Architecture Lecture 16



  1. CENG 450 Computer Systems and Architecture, Lecture 16. Amirali Baniasadi, amirali@ece.uvic.ca

  2. The Motivation for Caches
[Figure: memory system with the Processor served by a Cache in front of DRAM]
• Motivation:
• Large memories (DRAM) are slow
• Small memories (SRAM) are fast
• Make the average access time small by:
• Servicing most accesses from a small, fast memory
• Reducing the bandwidth required of the large memory

  3. The Principle of Locality
[Figure: probability of reference plotted across the address space]
• The Principle of Locality: programs access a relatively small portion of the address space at any instant of time.
• Example: 90% of time in 10% of the code
• Two Different Types of Locality:
• Temporal Locality (Locality in Time): if an item is referenced, it will tend to be referenced again soon.
• Spatial Locality (Locality in Space): if an item is referenced, items whose addresses are close by tend to be referenced soon.

  4. The Simplest Cache: Direct Mapped Cache
[Figure: a 16-location memory (addresses 0 through F) mapping into a 4-byte direct mapped cache (cache indices 0 through 3)]
• Cache index = (Block Address) MOD (# of blocks in cache)
• Location 0 can be occupied by data from:
• Memory location 0, 4, 8, ... etc.
• In general: any memory location whose 2 LSBs of the address are 0s
• Address<1:0> => cache index
• Which one should we place in the cache?
• How can we tell which one is in the cache?
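The mapping rule above can be sketched in a few lines of Python; this is an illustrative model of the slide's 4-byte cache, not code from the lecture:

```python
# Direct mapped placement: index = (block address) MOD (# of blocks in cache).
NUM_BLOCKS = 4  # the slide's tiny 4-byte cache with 1-byte blocks

def cache_index(address: int, num_blocks: int = NUM_BLOCKS) -> int:
    """Return the direct mapped cache index for a memory address."""
    return address % num_blocks

# Memory locations 0, 4, 8, C all land in cache index 0:
# exactly the addresses whose 2 LSBs are 0s.
print([cache_index(a) for a in (0x0, 0x4, 0x8, 0xC)])  # all 0
print(cache_index(0x5))  # address 5 -> index 1
```

Because many memory locations share one index, the cache must also record which of them is currently resident; that is the job of the tag on the next slide.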

  5. Cache Tag and Cache Index
• Assume a 32-bit memory (byte) address:
• A 2**N byte direct mapped cache:
• Cache Index: the lower N bits of the memory address (example: 0x03)
• Cache Tag: the upper (32 - N) bits of the memory address (example: 0x50), stored as part of the cache "state"
[Figure: a 2**N-byte direct mapped cache with a Valid Bit and Cache Tag per entry, holding Byte 0 through Byte 2**N - 1]

  6. Example: 1 KB Direct Mapped Cache with 32 B Blocks
• For a 2**N byte cache:
• The uppermost (32 - N) bits are always the Cache Tag
• The lowest M bits are the Byte Select (Block Size = 2**M)
• Here the address splits at bits 31..10 (Cache Tag, e.g. 0x50), bits 9..5 (Cache Index, e.g. 0x01), and bits 4..0 (Byte Select, e.g. 0x00); the tag is stored as part of the cache "state"
[Figure: 32 cache entries, each with a Valid Bit, Cache Tag, and 32 bytes of Cache Data, covering Byte 0 through Byte 1023]
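For this 1 KB / 32 B configuration the tag, index, and byte-select fields can be computed directly. A minimal sketch (field widths follow the slide: 5 byte-select bits, 5 index bits, 22 tag bits):

```python
BLOCK_SIZE = 32    # bytes -> M = 5 byte-select bits
CACHE_SIZE = 1024  # bytes -> N = 10, so 5 index bits and 32 - 10 = 22 tag bits

def split_address(addr: int):
    """Split a 32-bit byte address into (tag, index, byte_select)."""
    byte_select = addr & (BLOCK_SIZE - 1)                       # bits 4..0
    index = (addr // BLOCK_SIZE) % (CACHE_SIZE // BLOCK_SIZE)   # bits 9..5
    tag = addr >> 10                                            # bits 31..10
    return tag, index, byte_select

# Rebuild the slide's example address: tag 0x50, index 0x01, byte select 0x00.
addr = (0x50 << 10) | (0x01 << 5) | 0x00
print([hex(f) for f in split_address(addr)])  # ['0x50', '0x1', '0x0']
```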

  7. Block Size Tradeoff
• In general, larger block sizes take advantage of spatial locality, BUT:
• A larger block size means a larger miss penalty: it takes longer to fill up the block
• If the block size is too big relative to the cache size, the miss rate will go up: fewer blocks compromises temporal locality
• Average Access Time = Hit Time x (1 - Miss Rate) + Miss Penalty x Miss Rate
[Figure: Miss Penalty, Miss Rate, and Average Access Time plotted against Block Size; miss penalty rises with block size, miss rate first falls (exploits spatial locality) then rises (fewer blocks compromises temporal locality), so average access time has a minimum]
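The slide's average access time formula is easy to evaluate numerically to see the tradeoff. The miss rates and penalties below are made-up illustrative values, not numbers from the lecture:

```python
def avg_access_time(hit_time, miss_rate, miss_penalty):
    """Average Access Time = Hit Time x (1 - Miss Rate) + Miss Penalty x Miss Rate
    (the form used on this slide)."""
    return hit_time * (1 - miss_rate) + miss_penalty * miss_rate

# Hypothetical numbers: a larger block lowers the miss rate (spatial locality)
# but raises the miss penalty (longer refill), and the penalty can win.
small_blocks = avg_access_time(hit_time=1, miss_rate=0.10, miss_penalty=20)
large_blocks = avg_access_time(hit_time=1, miss_rate=0.05, miss_penalty=60)
print(small_blocks, large_blocks)  # here the larger block is slower overall
```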

  8. A Summary on Sources of Cache Misses
• Compulsory (cold start, first reference): first access to a block
• "Cold" fact of life: not a whole lot you can do about it
• Conflict (collision): multiple memory locations mapped to the same cache location
• Solution 1: increase cache size
• Solution 2: increase associativity
• Capacity: the cache cannot contain all blocks accessed by the program
• Solution: increase cache size
• Invalidation: another process (e.g., I/O) updates memory

  9. Conflict Misses
• True: if an item is accessed, it is likely to be accessed again soon
• But it is unlikely that it will be accessed again immediately!!!
• The next access will likely be a miss again
• Continually loading data into the cache but discarding (forcing out) it before it is used again
• Worst nightmare of a cache designer: the Ping Pong Effect
• Conflict Misses are misses caused by different memory locations mapped to the same cache index
• Solution 1: make the cache size bigger
• Solution 2: multiple entries for the same Cache Index
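The ping pong effect can be demonstrated with a toy direct mapped cache model (an illustrative sketch, not from the slides): two addresses that share a cache index evict each other on every access, so the hit rate is zero.

```python
def simulate_direct_mapped(accesses, num_blocks):
    """Tiny direct mapped cache model: each slot holds one block address."""
    cache = [None] * num_blocks
    misses = 0
    for addr in accesses:
        idx = addr % num_blocks
        if cache[idx] != addr:   # cold miss or conflict miss
            misses += 1
            cache[idx] = addr    # evict whatever was there
    return misses

# Addresses 0 and 4 both map to index 0 in a 4-block cache:
# alternating between them misses every single time (ping pong).
trace = [0, 4] * 8
print(simulate_direct_mapped(trace, num_blocks=4))  # 16 misses out of 16
```

With a second entry per index (the set associative design on the next slides), both blocks could stay resident and every access after the first two would hit.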

  10. A Two-way Set Associative Cache
• N-way set associative: N entries for each Cache Index
• N direct mapped caches operate in parallel
• Example: two-way set associative cache
• Cache Index selects a "set" from the cache
• The two tags in the set are compared in parallel
• Data is selected based on the tag comparison result
[Figure: two ways, each with Valid, Cache Tag, and Cache Data arrays; the address tag is compared against both ways, the per-way hit signals (Sel1, Sel0) are ORed into Hit, and a mux selects the hitting way's Cache Block]
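A two-way lookup can be modeled in software (an illustrative sketch; in hardware the two tag comparisons happen simultaneously): the index selects a set, both tags are checked, and a hit in either way supplies the data.

```python
class TwoWaySet:
    """One set of a two-way set associative cache: two (valid, tag, data) ways."""
    def __init__(self):
        self.ways = [{"valid": False, "tag": None, "data": None},
                     {"valid": False, "tag": None, "data": None}]

    def lookup(self, tag):
        # Hardware compares both tags in parallel; the OR of the
        # per-way hit signals is the overall Hit, and a mux picks the data.
        for way in self.ways:
            if way["valid"] and way["tag"] == tag:
                return True, way["data"]   # hit: this way's block is selected
        return False, None                 # miss in both ways

s = TwoWaySet()
s.ways[1] = {"valid": True, "tag": 0x50, "data": "block A"}
print(s.lookup(0x50))  # (True, 'block A')
print(s.lookup(0x33))  # (False, None)
```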

  11. Disadvantage of Set Associative Cache
[Figure: the same two-way datapath as the previous slide: parallel tag compares feeding a mux that selects the Cache Block]
• N-way Set Associative Cache versus Direct Mapped Cache:
• N comparators vs. 1
• Extra MUX delay for the data
• Data comes AFTER Hit/Miss
• In a direct mapped cache, the Cache Block is available BEFORE Hit/Miss:
• Possible to assume a hit and continue; recover later if it was a miss.

  12. And Yet Another Extreme Example: Fully Associative
• Fully Associative Cache: push the set associative idea to its limit!
• Forget about the Cache Index
• Compare the Cache Tags of all cache entries in parallel
• Example: with 32 B blocks, we need N 27-bit comparators (bits 31..5 are the Cache Tag, bits 4..0 the Byte Select)
• By definition: Conflict Misses = 0 for a fully associative cache
[Figure: cache array with a 27-bit Cache Tag, Valid Bit, and Cache Data per entry; every entry's tag is compared in parallel against the address]
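Dropping the index entirely means a lookup must consider every entry. In software that becomes a search over the whole cache (hardware does all N tag compares at once); a hypothetical sketch:

```python
def fully_associative_lookup(cache, tag):
    """cache: list of (valid, tag, data) entries; compare every entry's tag."""
    for valid, entry_tag, data in cache:
        if valid and entry_tag == tag:
            return data
    return None  # miss: any entry may be chosen for replacement

# Two resident blocks with 27-bit tags; no index restricts where they live.
cache = [(True, 0x7F1A2B3, "block X"), (True, 0x0000001, "block Y")]
print(fully_associative_lookup(cache, 0x0000001))  # 'block Y'
```

Since any block can occupy any entry, no two blocks ever contend for a particular slot, which is why conflict misses are zero by definition.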

  13. Cache Performance
• Miss-oriented approach to memory access:
CPU time = IC x (CPI_Execution + MemAccess/Inst x Miss Rate x Miss Penalty) x Cycle Time
• CPI_Execution includes ALU and memory instructions
• Separating out the memory component entirely:
AMAT = Average Memory Access Time = Hit Time + Miss Rate x Miss Penalty
CPU time = IC x (AluOps/Inst x CPI_AluOps + MemAccess/Inst x AMAT) x Cycle Time
• CPI_AluOps does not include memory instructions

  14. Impact on Performance
• Suppose a processor executes at:
• Clock Rate = 1 GHz (1 ns per cycle), ideal (no misses) CPI = 1.1
• 50% arith/logic, 30% ld/st, 20% control
• Suppose that 10% of memory operations incur a 100-cycle miss penalty
• Suppose that 1% of instructions incur the same miss penalty
• Then CPI = 1.1 + (0.30 x 10% x 100) + (1% x 100) = 5.1, and 4.0 of those 5.1 cycles are stalls: 78% of the time the processor is stalled waiting for memory!
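The 78% figure follows from adding memory stall cycles to the base CPI; a sketch of the arithmetic using the slide's values:

```python
base_cpi = 1.1
ldst_frac = 0.30        # 30% of instructions are loads/stores
mem_miss_rate = 0.10    # 10% of memory operations miss
inst_miss_rate = 0.01   # 1% of instructions miss on fetch
miss_penalty = 100      # cycles per miss

# Data stalls: 0.30 x 0.10 x 100 = 3.0; instruction stalls: 0.01 x 100 = 1.0.
stall_cpi = ldst_frac * mem_miss_rate * miss_penalty + inst_miss_rate * miss_penalty
total_cpi = base_cpi + stall_cpi       # 1.1 + 3.0 + 1.0 = 5.1
print(total_cpi)
print(stall_cpi / total_cpi)           # ~0.78: fraction of time stalled on memory
```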

  15. Example: Harvard Architecture
[Figure: a Harvard organization (separate I-Cache-1 and D-Cache-1 in front of a Unified Cache-2) versus a unified organization (a single Unified Cache-1 in front of a Unified Cache-2)]
• Unified vs. Separate I&D (Harvard)
• 16 KB each I & D: instruction miss rate = 0.64%, data miss rate = 6.47%
• 32 KB unified: aggregate miss rate = 1.99%
• Which is better (ignore the L2 cache)?
• Assume 33% data ops => 75% of accesses are instruction fetches (1.0/1.33)
• hit time = 1, miss time = 50
• Note that a data hit has 1 extra stall cycle in the unified cache (only one port)
AMAT_Harvard = 75% x (1 + 0.64% x 50) + 25% x (1 + 6.47% x 50) = 2.05
AMAT_Unified = 75% x (1 + 1.99% x 50) + 25% x (1 + 1 + 1.99% x 50) = 2.24
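The two AMAT numbers on this slide can be reproduced directly (values from the slide; the extra +1 on the unified data path models the single-port stall):

```python
inst_frac, data_frac = 0.75, 0.25   # 75% instruction fetches, 25% data ops
hit, penalty = 1, 50

# Separate I & D caches (Harvard): 0.64% instruction / 6.47% data miss rates.
amat_harvard = inst_frac * (hit + 0.0064 * penalty) + \
               data_frac * (hit + 0.0647 * penalty)

# Unified 32 KB cache: 1.99% aggregate miss rate, plus one extra
# stall cycle on data accesses because there is only one port.
amat_unified = inst_frac * (hit + 0.0199 * penalty) + \
               data_frac * (hit + 1 + 0.0199 * penalty)

print(amat_harvard, amat_unified)  # ~2.05 vs ~2.24: Harvard wins here
```

Despite the unified cache's lower aggregate miss rate, the port conflict on data accesses makes the split (Harvard) organization faster in this example.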

  16. IBM POWER4 Memory Hierarchy
• L1 (Instr.): 64 KB, direct mapped
• L1 (Data): 32 KB, 2-way, FIFO, write allocate
• L1: 128-byte blocks divided into 32-byte sectors; 4 cycles to load to a floating-point register
• L2 (Instr. + Data): 1440 KB, 3-way, pseudo-LRU (shared by two processors); 128-byte blocks; 14 cycles to load to a floating-point register
• L3 (Instr. + Data): 128 MB, 8-way (shared by two processors); 512-byte blocks divided into 128-byte sectors; ~340 cycles

  17. Intel Itanium Processor
• L1 (Instr.): 16 KB, 4-way; 32-byte blocks; 2 cycles
• L1 (Data): 16 KB, 4-way, dual-ported, write through; 32-byte blocks; 2 cycles
• L2 (Instr. + Data): 96 KB, 6-way; 64-byte blocks; write allocate; 12 cycles
• L3: 4 MB (on package, off chip); 64-byte blocks; 128-bit bus at 800 MHz (12.8 GB/s); 20 cycles

  18. 3rd Generation Itanium • 1.5 GHz • 410 million transistors • 6MB 24-way set associative L3 cache • 6-level copper interconnect, 0.13 micron • 130W (i.e. lasts 17s on an AA NiCd)

  19. Summary
• The Principle of Locality: programs access a relatively small portion of the address space at any instant of time.
• Temporal Locality: locality in time
• Spatial Locality: locality in space
• Three Major Categories of Cache Misses:
• Compulsory Misses: sad facts of life. Example: cold start misses.
• Conflict Misses: increase cache size and/or associativity. Nightmare scenario: the ping pong effect!
• Capacity Misses: increase cache size
• Write Policy:
• Write Through: needs a write buffer. Nightmare: write buffer (WB) saturation
• Write Back: control can be complex
• Cache Performance
