
Graduate Computer Architecture I






Presentation Transcript


  1. Graduate Computer Architecture I Lecture 5: Memory Hierarchy Young Cho

  2. Branch Prediction • Not having branch prediction is expensive as pipelines get deeper and more complex and issue and execute multiple instructions • Branch History Table: N bits per entry for loop accuracy • Correlation: a local N-bit history table combined with global branch history • Tournament Predictor: two prediction methods, choosing the better one (for example, one local branch predictor and one global one) • No branch prediction: predicated execution, the ability to execute or NOP at runtime; hardware support needed

  3. Five-Stage Pipelined Processor (block diagram: PC, Instruction Memory, Register File, Control, ALU, and Data Memory connected through the IF/ID, ID/EX, EX/MEM, and MEM/WB pipeline latches)

  4. Why Memory Hierarchy (plot of processor vs. DRAM performance, 1980-2000): the processor-DRAM memory gap (latency). µProc performance grows ~60%/yr (2x per 1.5 yr, “Moore’s Law”) while DRAM grows ~9%/yr (2x per 10 yrs), so the processor-memory performance gap grows about 50% per year.

  5. Levels of the Memory Hierarchy (capacity, access time, cost; who manages staging; transfer unit; upper levels are smaller and faster, lower levels larger and slower)
  • Registers: 100s of bytes, << 1 ns; staged by program/compiler; transfer unit 1-8 bytes (instruction operands)
  • Cache: 10s-100s of KBytes, ~1 ns, $1s/MByte; staged by the cache controller; transfer unit 8-128 bytes (blocks)
  • Main Memory: MBytes, 100-300 ns, < $1/MByte; staged by the OS; transfer unit 512 bytes-4 KB (pages)
  • Disk: 10s of GBytes, ~10 ms (10,000,000 ns), $0.001/MByte; staged by the user/operator; transfer unit MBytes (files)
  • Tape: effectively infinite capacity, sec-min access time, $0.0014/MByte

  6. Locality • The Principle of Locality: programs access a relatively small portion of the address space at any instant of time • Two different types of locality: • Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse) • Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access) • For the last 30 years, hardware has relied on locality for speed

  7. Role of Memory • By die area, modern microprocessors are almost all cache

  8. Terminology (data moves between an upper-level memory block X and a lower-level memory block Y, to and from the processor) • Hit: the data appears in some block in the upper level (example: block X) • Hit rate: the fraction of memory accesses found in the upper level • Hit time: time to access the upper level, consisting of RAM access time + time to determine hit/miss • Miss: the data must be retrieved from a block in the lower level (block Y) • Miss rate: 1 - (hit rate) • Miss penalty: time to replace an upper-level block + time to deliver the block to the processor • Hit time << miss penalty (roughly 500 instructions' worth on the Alpha 21264!)

  9. Cache Measures • Hit rate: fraction of accesses found in that level; usually so high that we talk about the miss rate instead • Miss-rate fallacy: just as MIPS is a poor proxy for CPU performance, miss rate is a poor proxy for average memory access time • Average memory access time (AMAT) = hit time + miss rate x miss penalty (in ns or clocks) • Miss penalty: time to replace a block from the lower level • Access time: time to get to the lower level • Transfer time: time to transfer the block (depends on bandwidth)

  10. Cache Design • Several interacting dimensions: cache size, block size, associativity, replacement policy, write-through vs. write-back • The optimal choice is a compromise • depends on access characteristics: workload and use (I-cache, D-cache, TLB) • depends on technology / cost • Simplicity often wins (the design space is often drawn as a cache size x associativity x block size cube)

  11. Simplest Cache: Direct Mapped (example: a 4-byte direct-mapped cache with cache indices 0-3, backed by a 16-location memory 0x0-0xF) • Cache location 0 can be occupied by data from memory location 0, 4, 8, ... etc. • In general: any memory location whose 2 LSBs of the address are 0s • Address<1:0> => cache index • Which one should we place in the cache? • How can we tell which one is in the cache?

  12. 1 KB Direct Mapped Cache, 32B Blocks • For a 2^N byte direct-mapped cache: the uppermost (32 - N) address bits are always the cache tag; the lowest M bits are the byte select (block size = 2^M) • Here N = 10 and M = 5, so the address splits into cache tag <31:10> (example: 0x50), cache index <9:5> (example: 0x01), and byte select <4:0> (example: 0x00) • Each of the 32 entries stores a valid bit and the cache tag as part of the cache “state”, alongside its 32-byte data block (bytes 0-31 in entry 0, bytes 32-63 in entry 1, ..., bytes 992-1023 in entry 31)
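
A minimal C sketch of this address split, using the 1 KB / 32-byte-block geometry above (N = 10, M = 5, so bits <31:10> are the tag, <9:5> the index, and <4:0> the byte select); the example address is chosen to reproduce the slide's tag 0x50, index 0x01, byte 0x00:

#include <stdint.h>
#include <stdio.h>

/* 1 KB direct-mapped cache with 32-byte blocks:
   2^10 bytes total, 2^5 bytes per block -> 32 cache entries (5 index bits). */
#define BLOCK_BITS 5   /* byte select: address bits 4..0 */
#define INDEX_BITS 5   /* cache index: address bits 9..5 */

static void split_address(uint32_t addr)
{
    uint32_t byte_sel = addr & ((1u << BLOCK_BITS) - 1);
    uint32_t index    = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag      = addr >> (BLOCK_BITS + INDEX_BITS);
    printf("addr 0x%08x -> tag 0x%x, index %u, byte %u\n",
           addr, tag, index, byte_sel);
}

int main(void)
{
    split_address(0x00014020);   /* prints: tag 0x50, index 1, byte 0 */
    return 0;
}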

  13. N-way Set Associative Cache • N entries for each cache index; N direct-mapped caches operate in parallel (N is typically 2 to 4) • Example: two-way set-associative cache • The cache index selects a “set” from the cache • The two tags in the set are compared in parallel • Data is selected based on the tag comparison result (diagram: two banks of valid/tag/data entries, an address-tag comparator per way, and a data mux whose select lines and the hit signal come from OR-ing the per-way compare results)
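
To make the lookup concrete, here is a minimal software model of a two-way set-associative lookup (the sizes and the struct/function names are illustrative, not from the slides): both ways of the selected set are tag-checked, and the hitting way supplies the data, mirroring the per-way comparators and the output mux described above.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_SETS   64
#define BLOCK_SIZE 32
#define WAYS       2

struct cache_line {
    bool     valid;
    uint32_t tag;
    uint8_t  data[BLOCK_SIZE];
};

static struct cache_line cache[NUM_SETS][WAYS];   /* [set][way] */

/* Returns a pointer to the block's data on a hit, NULL on a miss. */
uint8_t *lookup(uint32_t addr)
{
    uint32_t index = (addr / BLOCK_SIZE) % NUM_SETS;   /* selects the set  */
    uint32_t tag   = addr / BLOCK_SIZE / NUM_SETS;     /* compared per way */
    for (int way = 0; way < WAYS; way++) {
        struct cache_line *line = &cache[index][way];
        if (line->valid && line->tag == tag)
            return line->data;   /* hit: this way drives the output mux */
    }
    return NULL;                 /* miss in every way */
}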

  14. Disadvantage of Set Associative • N comparators vs. 1 • Extra MUX delay for the data • The data is available only AFTER the hit/miss determination • In a direct-mapped cache, the cache block is available BEFORE hit/miss: it is possible to assume a hit and continue, recovering later on a miss

  15. Finding a Block (the block address is divided into tag and index; the block offset selects within the block) • The index identifies the set of possible locations • A tag is stored with each block; there is no need to store or check the index or block-offset bits • Increasing associativity shrinks the index and expands the tag • Cache size = associativity x 2^(index bits) x 2^(offset bits)
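
A small sketch that applies this relation to derive the address-field widths for a hypothetical cache (the 32 KB / 64-byte-block / 4-way parameters below are just an example, not from the slide):

#include <stdio.h>

/* Derive field widths from: cache size = associativity * 2^index_bits * 2^offset_bits. */
static int log2i(unsigned x)   /* x is assumed to be a power of two */
{
    int n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void)
{
    unsigned cache_size = 32 * 1024;   /* example: 32 KB cache   */
    unsigned block_size = 64;          /* 64-byte blocks         */
    unsigned assoc      = 4;           /* 4-way set associative  */

    int offset_bits = log2i(block_size);
    int index_bits  = log2i(cache_size / (assoc * block_size));
    int tag_bits    = 32 - index_bits - offset_bits;   /* 32-bit addresses */

    /* Doubling the associativity at a fixed cache size removes one index bit
       and adds one tag bit, as the slide notes. */
    printf("offset = %d bits, index = %d bits, tag = %d bits\n",
           offset_bits, index_bits, tag_bits);
    return 0;
}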

  16. Replacing Block • Direct Mapped • No option as to which block to replace • Set Associative or Fully Associative: • Least Recently Used • Random • FIFO
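
As a concrete illustration of the LRU policy, here is a minimal sketch of per-set LRU bookkeeping for a 4-way set (the associativity, struct, and function names are assumptions for the example): each way carries an age, the way that is accessed becomes the youngest, and the victim chosen for replacement is the oldest way.

#include <stdint.h>

#define WAYS 4

struct set_lru {
    uint8_t age[WAYS];   /* 0 = most recently used, WAYS-1 = least recently used */
};

/* Call on every access (hit or fill) to the given way of this set. */
void lru_touch(struct set_lru *s, int way)
{
    uint8_t old = s->age[way];
    for (int w = 0; w < WAYS; w++)
        if (s->age[w] < old)
            s->age[w]++;   /* every way younger than the touched one ages by 1 */
    s->age[way] = 0;       /* the touched way is now the youngest */
}

/* Pick the way to evict: the one with the largest age. */
int lru_victim(const struct set_lru *s)
{
    int victim = 0;
    for (int w = 1; w < WAYS; w++)
        if (s->age[w] > s->age[victim])
            victim = w;
    return victim;
}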

  17. Storing Back to Memory • Write through: data is written to both the cache block and the lower-level memory; a write buffer is usually provided so the processor gets an immediate response • Write back: data is written only to the block in the cache; when a block is replaced, only “dirty” blocks are written back to main memory • Cache write miss policies: • Write allocate: read the block from memory into the cache, then write (usually associated with write-back) • Write no-allocate: write directly to main memory (usually associated with write-through)

  18. Write Buffer for a Write-Through Cache (Processor -> Cache and Write Buffer -> DRAM) • A write buffer is needed between the cache and memory • Processor: writes data into the cache and the write buffer • Memory controller: writes the contents of the buffer to memory • The write buffer is just a FIFO • Typical number of entries: 4 • Works fine if the store frequency (w.r.t. time) << 1 / DRAM write cycle time
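
A minimal software model of such a 4-entry FIFO write buffer (the struct and function names are illustrative): the processor side enqueues a store and continues unless the buffer is full, while the memory controller drains one entry per DRAM write cycle.

#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4   /* typical depth from the slide */

struct wb_entry { uint32_t addr; uint32_t data; };

struct write_buffer {
    struct wb_entry entry[WB_ENTRIES];
    int head, tail, count;
};

/* Processor side: returns false (processor must stall) if the buffer is full. */
bool wb_enqueue(struct write_buffer *wb, uint32_t addr, uint32_t data)
{
    if (wb->count == WB_ENTRIES) return false;
    wb->entry[wb->tail] = (struct wb_entry){ addr, data };
    wb->tail = (wb->tail + 1) % WB_ENTRIES;
    wb->count++;
    return true;
}

/* Memory-controller side: drains one entry per DRAM write cycle. */
bool wb_dequeue(struct write_buffer *wb, struct wb_entry *out)
{
    if (wb->count == 0) return false;
    *out = wb->entry[wb->head];
    wb->head = (wb->head + 1) % WB_ENTRIES;
    wb->count--;
    return true;
}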

  19. Cache Performance
  • Ideal processor (no misses): clock rate = 200 MHz (5 ns per cycle), CPI = 1.1, with 50% arith/logic, 30% load/store, 20% control instructions
  • Cache misses: suppose a miss penalty of 50 cycles, 10% of loads/stores miss, and 1% of instruction fetches miss
  • Cycles per instruction: CPI = ideal CPI + average stalls per instruction = 1.1 + [0.30 (data mem ops/ins) x 0.10 (misses/data mem op) x 50 (cycles/miss)] + [1 (inst mem op/ins) x 0.01 (misses/inst mem op) x 50 (cycles/miss)] = (1.1 + 1.5 + 0.5) cycles/ins = 3.1
  • So about 65% of the time (2.0 of 3.1 cycles) the CPU is stalled waiting for memory
  • Average memory access time (AMAT, in cycles/access) measures the memory system separately from CPI: AMAT = inst access fraction x (hit latency + inst miss rate x penalty) + data access fraction x (hit latency + data miss rate x penalty) = 0.77 x (1 + 0.01 x 50) + 0.23 x (1 + 0.1 x 50) = 2.54 cycles/access
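
A short sketch that reproduces the arithmetic above (all parameters are the ones given on this slide):

#include <stdio.h>

int main(void)
{
    /* Parameters from the slide */
    double ideal_cpi    = 1.1;
    double ldst_per_ins = 0.30, ldst_miss_rate = 0.10;
    double inst_per_ins = 1.00, inst_miss_rate = 0.01;
    double miss_penalty = 50.0;   /* cycles */
    double hit_latency  = 1.0;    /* cycles */

    /* CPI = ideal CPI + memory stall cycles per instruction */
    double cpi = ideal_cpi
               + ldst_per_ins * ldst_miss_rate * miss_penalty   /* 1.5 */
               + inst_per_ins * inst_miss_rate * miss_penalty;  /* 0.5 */

    /* AMAT weighted by the fraction of instruction vs. data accesses */
    double accesses  = inst_per_ins + ldst_per_ins;   /* 1.3 accesses/instruction */
    double inst_frac = inst_per_ins / accesses;       /* ~0.77 */
    double data_frac = ldst_per_ins / accesses;       /* ~0.23 */
    double amat = inst_frac * (hit_latency + inst_miss_rate * miss_penalty)
                + data_frac * (hit_latency + ldst_miss_rate * miss_penalty);

    printf("CPI = %.1f, AMAT = %.2f cycles/access\n", cpi, amat);   /* 3.1 and 2.54 */
    return 0;
}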

  20. Harvard Architecture (separate first-level I-Cache and D-Cache vs. a unified first-level cache, each backed by a unified second-level cache)
  • Unified vs. separate I&D (Harvard) first-level caches
  • Statistics (given in H&P): 16KB I and 16KB D: instruction miss rate = 0.64%, data miss rate = 6.47%; 32KB unified: aggregate miss rate = 1.99%
  • Which is better? Assume 33% data operations, so 75% of accesses are instruction fetches (1.0/1.33); hit time = 1, miss time = 50
  • In the Harvard organization the instruction and data accesses are separate, and the I and D caches usually don't fight over the unified second-level cache
  • AMAT(Harvard) = 75% x (1 + 0.64% x 50) + 25% x (1 + 6.47% x 50) = 2.05
  • AMAT(Unified) = 75% x (1 + 1.99% x 50) + 25% x (1 + 1 + 1.99% x 50) = 2.24 (a data hit incurs 1 extra stall cycle because the unified cache has only one port)
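
The same comparison as a quick calculation with the slide's miss rates (a sketch for checking the numbers, not part of the original slide):

#include <stdio.h>

int main(void)
{
    /* Access mix and miss penalty from the slide */
    double inst_frac = 0.75, data_frac = 0.25, penalty = 50.0;

    /* Separate I & D caches: every access has a 1-cycle hit time */
    double amat_harvard = inst_frac * (1 + 0.0064 * penalty)
                        + data_frac * (1 + 0.0647 * penalty);

    /* Unified cache: a data access stalls 1 extra cycle (single port) */
    double amat_unified = inst_frac * (1 + 0.0199 * penalty)
                        + data_frac * (1 + 1 + 0.0199 * penalty);

    /* Prints about 2.049 and 2.245, i.e. the slide's 2.05 and 2.24 */
    printf("Harvard %.3f vs unified %.3f cycles/access\n",
           amat_harvard, amat_unified);
    return 0;
}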

  21. Improving Cache Performance • Reduce the miss rate, • Reduce the miss penalty, or • Reduce the time to hit in the cache.

  22. Reducing Misses • Classifying misses: the 3 Cs • Compulsory: the first access to a block cannot be in the cache, so the block must be brought into the cache; also called cold-start misses or first-reference misses (the misses that occur even in an infinite cache) • Capacity: if the cache cannot contain all the blocks needed during execution of a program, capacity misses occur as blocks are discarded and later retrieved (the misses in a fully associative cache of size X) • Conflict: if the block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory and capacity misses) occur because a block can be discarded and later retrieved when too many blocks map to its set; also called collision or interference misses (the additional misses in an N-way set-associative cache of size X) • A more recent 4th “C”: Coherence, misses caused by cache coherence

  23. 3Cs Absolute Miss Rate (SPEC92) (plot of absolute miss rate vs. cache size broken down into the three components: conflict misses shrink with size and associativity, and compulsory misses are vanishingly small)

  24. 2:1 Cache Rule • The miss rate of a 1-way (direct-mapped) cache of size X is approximately the miss rate of a 2-way set-associative cache of size X/2; the difference is conflict misses

  25. 3Cs Relative Miss Rate (the same breakdown as the previous plot, shown as relative miss rates) • Caveat: fixed block size

  26. How Can We Reduce Misses? • 3 Cs: Compulsory, Capacity, Conflict • In all cases, assume the total cache size is unchanged • What happens if we: 1) change the block size: which of the 3 Cs is obviously affected? 2) change the associativity: which of the 3 Cs is obviously affected? 3) change the algorithm / compiler: which of the 3 Cs is obviously affected?

  27. 1. Reduce Misses via Larger Block Size (plot of miss rate vs. block size for several cache sizes) • Larger blocks reduce compulsory misses but increase conflict misses (and capacity misses in small caches) and increase the miss penalty

  28. 2. Higher Associativity • 2:1 cache rule: miss rate of a direct-mapped cache of size N ~= miss rate of a 2-way set-associative cache of size N/2 (holds for caches < 128KB) • 8-way is about as effective as fully associative • Execution time is the only final measure! • Will the clock cycle time increase? Roughly +10% for an external cache, +2% for an internal one

  29. 3. “Victim Cache” • Combines the fast hit time of direct mapped while avoiding conflict misses • Add a small fully associative buffer (the victim cache) to hold data discarded from the cache • Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of the conflict misses of a 4 KB direct-mapped data cache • Used in Alpha and HP machines (diagram: a direct-mapped array of tags and data backed by four victim entries, each one cache line of data with its own tag and comparator, sitting in front of the next lower level in the hierarchy)

  30. 4. “Pseudo-Associativity” • Combines the fast hit time of direct mapped with the lower conflict misses of a 2-way set-associative cache • Divide the cache: on a miss, check the other half of the cache; if the block is there, it is a pseudo-hit (slow hit) • Drawback: the CPU pipeline is hard to design if a hit can take 1 or 2 cycles • Better for caches not tied directly to the processor (L2) • Used in the MIPS R10000 L2 cache; similar in UltraSPARC (timeline: hit time < pseudo-hit time < miss penalty)

  31. 5. Hardware Prefetching • E.g., instruction prefetching: the Alpha 21064 fetches 2 blocks on a miss • The extra block is placed in a “stream buffer”; on a miss, the stream buffer is checked • Works with data blocks too: • Jouppi [1990]: 1 data stream buffer caught 25% of the misses from a 4KB cache; 4 streams caught 43% • Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64KB 4-way set-associative caches • Prefetching relies on having extra memory bandwidth that can be used without penalty

  32. 6. Software Prefetching • Data prefetch: load data into a register (HP PA-RISC loads) • Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC v9) • Special prefetching instructions cannot cause faults; a form of speculative execution • Issuing prefetch instructions takes time • Is the cost of issuing prefetches < the savings in reduced misses? • Wider superscalar issue reduces the difficulty of finding issue bandwidth for prefetches
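
As an illustration, here is a minimal sketch of software prefetching in C using the GCC/Clang __builtin_prefetch intrinsic (a compiler builtin standing in for the ISA-specific prefetch instructions named on the slide; the look-ahead distance AHEAD is an assumed tuning parameter): the prefetch of a[i + AHEAD] overlaps memory latency with the work on a[i], at the cost of the extra issued instructions.

#include <stddef.h>

#define AHEAD 16   /* how far ahead to prefetch; workload- and machine-dependent */

double sum_with_prefetch(const double *a, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + AHEAD < n)
            __builtin_prefetch(&a[i + AHEAD], 0 /* read */, 3 /* keep in cache */);
        sum += a[i];
    }
    return sum;
}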

  33. 7. Compiler Optimizations • McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks, in software • Instructions: • Reorder procedures in memory so as to reduce conflict misses • Profiling to look at conflicts (using tools they developed) • Data: • Merging arrays: improve spatial locality by using a single array of compound elements instead of 2 arrays • Loop interchange: change the nesting of loops to access data in the order it is stored in memory • Loop fusion: combine 2 independent loops that have the same loop bounds and overlapping variables • Blocking: improve temporal locality by accessing “blocks” of data repeatedly instead of walking down whole columns or rows

  34. Merging Arrays Example

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
  int val;
  int key;
};
struct merge merged_array[SIZE];

Reduces conflicts between val & key; improves spatial locality

  35. Loop Interchange Example

/* Before */
for (k = 0; k < 100; k = k+1)
  for (j = 0; j < 100; j = j+1)
    for (i = 0; i < 5000; i = i+1)
      x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
  for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
      x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improved spatial locality

  36. Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    a[i][j] = 1/b[i][j] * c[i][j];
    d[i][j] = a[i][j] + c[i][j];
  }

Two misses per access to a & c vs. one miss per access; the fused loop reuses a[i][j] and c[i][j] while they are still in the cache (temporal locality)

  37. Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    r = 0;
    for (k = 0; k < N; k = k+1)
      r = r + y[i][k]*z[k][j];
    x[i][j] = r;
  }

• The two inner loops read all NxN elements of z[], read N elements of one row of y[] repeatedly, and write N elements of one row of x[]
• Capacity misses are a function of N and the cache size: 2N^3 + N^2 words accessed (assuming no conflicts; otherwise more)
• Idea: compute on a BxB submatrix that fits in the cache

  38. Blocking Example

/* After */
for (jj = 0; jj < N; jj = jj+B)
  for (kk = 0; kk < N; kk = kk+B)
    for (i = 0; i < N; i = i+1)
      for (j = jj; j < min(jj+B-1,N); j = j+1) {
        r = 0;
        for (k = kk; k < min(kk+B-1,N); k = k+1)
          r = r + y[i][k]*z[k][j];
        x[i][j] = x[i][j] + r;
      }

• B is called the blocking factor
• Capacity misses drop from 2N^3 + N^2 to 2N^3/B + N^2
• Conflict misses, too?

  39. Compiler Optimizations

  40. Impact of Memory Hierarchy on Algorithms • Today CPU time is a function of (ops, cache misses) rather than just f(ops): what does this mean for compilers, data structures, and algorithms? • “The Influence of Caches on the Performance of Sorting” by A. LaMarca and R.E. Ladner, Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, January 1997, 370-379 • Quicksort: the fastest comparison-based sorting algorithm when all keys fit in memory • Radix sort: also called a “linear time” sort because, for keys of fixed length and fixed radix, a constant number of passes over the data is sufficient, independent of the number of keys

  41. Summary • 3 Cs: Compulsory, Capacity, Conflict 1. Reduce Misses via Larger Block Size 2. Reduce Misses via Higher Associativity 3. Reducing Misses via Victim Cache 4. Reducing Misses via Pseudo-Associativity 5. Reducing Misses by HW Prefetching Instr, Data 6. Reducing Misses by SW Prefetching Data 7. Reducing Misses by Compiler Optimizations • Remember danger of concentrating on just one parameter when evaluating performance
