
Efficient Runahead Execution Processors: A Power-Efficient Processing Paradigm for Tolerating Long Main Memory Latencies


Presentation Transcript


  1. Efficient Runahead Execution Processors: A Power-Efficient Processing Paradigm for Tolerating Long Main Memory Latencies. Onur Mutlu, PhD Defense, 4/28/2006

  2. Talk Outline • Motivation: The Memory Latency Problem • Runahead Execution • Evaluation • Limitations of the Baseline Runahead Mechanism • Efficient Runahead Execution • Address-Value Delta (AVD) Prediction • Summary of Contributions • Future Work

  3. Motivation • Memory latency is very long in today’s processors • CDC 6600: 10 cycles [Thornton, 1970] • Alpha 21264: 120+ cycles [Wilkes, 2001] • Intel Pentium 4: 300+ cycles [Sprangle & Carmean, 2002] • And, it continues to increase (in terms of processor cycles) • DRAM latency is not reducing as fast as processor cycle time • Conventional techniques to tolerate memory latency do not work well enough.

  4. Conventional Latency Tolerance Techniques • Caching [initially by Wilkes, 1965] • Widely used, simple, effective, but inefficient, passive • Not all applications/phases exhibit temporal or spatial locality • Prefetching [initially in IBM 360/91, 1967] • Works well for regular memory access patterns • Prefetching irregular access patterns is difficult, inaccurate, and hardware-intensive • Multithreading [initially in CDC 6600, 1964] • Works well if there are multiple threads • Improving single thread performance using multithreading hardware is an ongoing research effort • Out-of-order execution [initially by Tomasulo, 1967] • Tolerates cache misses that cannot be prefetched • Requires extensive hardware resources for tolerating long latencies

  5. Out-of-order Execution • Instructions are executed out of sequential program order to tolerate latency. • Instructions are retired in program order to support precise exceptions/interrupts. • Not-yet-retired instructions and their results are buffered in hardware structures, called the instruction window. • The size of the instruction window determines how much latency the processor can tolerate.

  6. Small Windows: Full-window Stalls • When a long-latency instruction is not complete, it blocks retirement. • Incoming instructions fill the instruction window. • Once the window is full, the processor cannot place new instructions into the window. • This is called a full-window stall. • A full-window stall prevents the processor from making progress in the execution of the program.

  7. Small Windows: Full-window Stalls • L2 cache misses are responsible for most full-window stalls. Example, with an 8-entry instruction window:

    LOAD R1 <- mem[R5]     (oldest) L2 miss! Takes 100s of cycles.
    BEQ  R1, R0, target
    ADD  R2 <- R2, 8
    LOAD R3 <- mem[R2]     independent of the L2 miss; executed out of
    MUL  R4 <- R4, R3        program order, but cannot be retired
    ADD  R4 <- R4, R5
    STOR mem[R2] <- R4
    ADD  R2 <- R2, 64
    LOAD R3 <- mem[R2]     cannot be executed: no space in the window

The processor stalls until the L2 miss is serviced.

  8. Impact of L2 Cache Misses [Chart. Configuration: 512 KB L2 cache, 500-cycle DRAM latency, aggressive stream-based prefetcher. Data averaged over 147 memory-intensive benchmarks on a high-end x86 processor model.]

  9. Impact of L2 Cache Misses [Chart. Configuration: 500-cycle DRAM latency, aggressive stream-based prefetcher. Data averaged over 147 memory-intensive benchmarks on a high-end x86 processor model.]

  10. The Problem • Out-of-order execution requires large instruction windows to tolerate today’s main memory latencies. • As main memory latency increases, instruction window size should also increase to fully tolerate the memory latency. • Building a large instruction window is a challenging task if we would like to achieve • Low power/energy consumption • Short cycle time • Low design and verification complexity

  11. Talk Outline • Motivation: The Memory Latency Problem • Runahead Execution • Evaluation • Limitations of the Baseline Runahead Mechanism • Efficient Runahead Execution • Address-Value Delta (AVD) Prediction • Summary of Contributions • Future Work

  12. Overview of Runahead Execution [HPCA’03] • A technique to obtain the memory-level parallelism benefits of a large instruction window (without having to build it!) • When the oldest instruction is an L2 miss: • Checkpoint architectural state and enter runahead mode • In runahead mode: • Instructions are speculatively pre-executed • The purpose of pre-execution is to discover other L2 misses • The processor does not stall due to L2 misses • Runahead mode ends when the original L2 miss returns • Checkpoint is restored and normal execution resumes
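
To make the mode transitions concrete, here is a minimal C sketch of the entry/exit protocol described above; the types and function names (cpu_t, enter_runahead, and so on) are illustrative, not from the talk. The only effects that survive a runahead period are the prefetches it generated and the predictor tables it trained.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_REGS 16

    /* Illustrative checkpoint; a real core also saves the PC and
       branch-history state in hardware. */
    typedef struct {
        uint64_t regs[NUM_REGS];
        uint64_t pc;
    } checkpoint_t;

    typedef struct {
        uint64_t regs[NUM_REGS];
        uint64_t pc;
        bool in_runahead;
        checkpoint_t ckpt;
    } cpu_t;

    /* Oldest instruction is an L2 miss: checkpoint state and keep going. */
    static void enter_runahead(cpu_t *cpu) {
        for (int i = 0; i < NUM_REGS; i++) cpu->ckpt.regs[i] = cpu->regs[i];
        cpu->ckpt.pc = cpu->pc;
        cpu->in_runahead = true;   /* everything from here is speculative */
    }

    /* The triggering miss returns: discard speculative state and resume. */
    static void exit_runahead(cpu_t *cpu) {
        for (int i = 0; i < NUM_REGS; i++) cpu->regs[i] = cpu->ckpt.regs[i];
        cpu->pc = cpu->ckpt.pc;
        cpu->in_runahead = false;  /* prefetches issued in runahead remain */
    }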

  13. Runahead Example
Perfect caches: Compute, Load 1 hit, Compute, Load 2 hit, Compute.
Small window: Compute, Load 1 miss, stall until Miss 1 returns, Compute, Load 2 miss, stall until Miss 2 returns, Compute.
Runahead: Compute, Load 1 miss, runahead (Load 2 misses during pre-execution, so Miss 2 is serviced in parallel with Miss 1), Load 1 hit, Compute, Load 2 hit, Compute. Overlapping the two misses is the source of the saved cycles.

  14. Benefits of Runahead Execution Instead of stalling during an L2 cache miss: • Pre-executed loads and stores independent of L2-miss instructions generate very accurate data prefetches: • For both regular and irregular access patterns • Instructions on the predicted program path are prefetched into the instruction/trace cache and L2. • Hardware prefetcher and branch predictor tables are trained using future access information.

  15. Runahead Execution Mechanism • Entry into runahead mode • Checkpoint architectural register state • Instruction processing in runahead mode • Exit from runahead mode • Restore architectural register state from checkpoint

  16. Instruction Processing in Runahead Mode • Runahead mode processing is the same as normal instruction processing, EXCEPT: • It is purely speculative: architectural (software-visible) register/memory state is NOT updated in runahead mode. • L2-miss dependent instructions are identified and treated specially. • They are quickly removed from the instruction window. • Their results are not trusted.

  17. L2-Miss Dependent Instructions • Two types of results are produced: INV and VALID • INV = dependent on an L2 miss • INV results are marked using INV bits in the register file and store buffer. • INV values are not used for prefetching/branch resolution.
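
A minimal C sketch of how INV status propagates through the dataflow, assuming one INV bit per register as on the slide; the names here (reg_t, runahead_add) are mine.

    #include <stdbool.h>
    #include <stdint.h>

    /* Each register carries an INV bit in runahead mode. */
    typedef struct {
        uint64_t value;
        bool inv;      /* true if value depends on an L2 miss */
    } reg_t;

    /* An L2-miss load marks its destination INV instead of waiting. */
    static void runahead_load_l2_miss(reg_t *dst) {
        dst->inv = true;          /* bogus value: do not trust or use */
    }

    /* Any instruction with an INV source produces an INV result. */
    static void runahead_add(reg_t *dst, const reg_t *a, const reg_t *b) {
        dst->inv = a->inv || b->inv;
        dst->value = a->value + b->value;   /* meaningless if dst->inv */
    }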

  18. Removal of Instructions from Window • The oldest instruction is examined for pseudo-retirement • An INV instruction is removed from the window immediately. • A VALID instruction is removed when it completes execution. • Pseudo-retired instructions free their allocated resources. • This allows the processing of later instructions. • Pseudo-retired stores communicate their data to dependent loads.
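
The pseudo-retirement rule for the oldest window entry reduces to a few lines of C; instr_t and its fields are assumptions for illustration.

    #include <stdbool.h>

    typedef struct {
        bool inv;        /* result depends on an L2 miss */
        bool completed;  /* has finished executing       */
    } instr_t;

    /* Oldest instruction in the window: INV instructions leave
       immediately, VALID ones wait until their execution completes. */
    static bool can_pseudo_retire(const instr_t *oldest) {
        return oldest->inv || oldest->completed;
    }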

  19. Store/Load Handling in Runahead Mode • A pseudo-retired store writes its data and INV status to a dedicated memory, called the runahead cache. • Purpose: data communication through memory in runahead mode. • A dependent load reads its data from the runahead cache. • This communication does not always need to be correct, so the runahead cache can be very small.
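
A toy runahead cache in C, direct-mapped for brevity; the real structure is a small cache (512 bytes, per the hardware-cost slide later), and the sizes and layout here are assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    #define RA_LINES 64

    typedef struct {
        bool     valid;
        bool     inv;      /* store data was INV */
        uint64_t addr;
        uint8_t  data;
    } ra_entry_t;

    static ra_entry_t ra_cache[RA_LINES];

    /* Pseudo-retired store: record data and INV status for later loads. */
    static void ra_store(uint64_t addr, uint8_t data, bool inv) {
        ra_entry_t *e = &ra_cache[addr % RA_LINES];
        e->valid = true; e->inv = inv; e->addr = addr; e->data = data;
    }

    /* Dependent load: forward store data if present. A miss here just
       means the load proceeds normally; correctness is not required
       in runahead mode, which is why the cache can stay tiny. */
    static bool ra_load(uint64_t addr, uint8_t *data, bool *inv) {
        ra_entry_t *e = &ra_cache[addr % RA_LINES];
        if (!e->valid || e->addr != addr) return false;
        *data = e->data; *inv = e->inv;
        return true;
    }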

  20. Branch Handling in Runahead Mode • INV branches cannot be resolved. • A mispredicted INV branch causes the processor to stay on the wrong program path until the end of runahead execution. • VALID branches are resolved and initiate recovery if mispredicted.

  21. Hardware Cost of Runahead Execution • Checkpoint of the architectural register state • Already exists in current processors • INV bits per register and store buffer entry • Runahead cache (512 bytes): <0.05% area overhead

  22. Talk Outline • Motivation: The Memory Latency Problem • Runahead Execution • Evaluation • Limitations of the Baseline Runahead Mechanism • Efficient Runahead Execution • Address-Value Delta (AVD) Prediction • Summary of Contributions • Future Work

  23. Baseline Processor • 3-wide fetch, 29-stage pipeline x86 processor • 128-entry instruction window • 512 KB, 8-way, 16-cycle unified L2 cache • Approximately 500-cycle L2 miss latency • Bandwidth, contention, conflicts modeled in detail • Aggressive streaming data prefetcher (16 streams) • Next-two-lines instruction prefetcher

  24. Evaluated Benchmarks • 147 Intel x86 benchmarks simulated for 30 million instructions • Benchmark Suites • SPEC CPU 95 (S95): Mostly scientific FP applications • SPEC FP 2000 (FP00) • SPEC INT 2000 (INT00) • Internet (WEB): SPECjbb, Webmark2001 • Multimedia (MM): MPEG, speech recognition, games • Productivity (PROD): PowerPoint, Excel, Photoshop • Server (SERV): Transaction processing, e-commerce • Workstation (WS): Engineering/CAD applications

  25. Performance of Runahead Execution [Chart]

  26. Runahead Execution vs. Large Windows [Chart]

  27. Talk Outline • Motivation: The Memory Latency Problem • Runahead Execution • Evaluation • Limitations of the Baseline Runahead Mechanism • Efficient Runahead Execution • Address-Value Delta (AVD) Prediction • Summary of Contributions • Future Work

  28. Limitations of the Baseline Runahead Mechanism • Energy Inefficiency • A large number of instructions are speculatively executed • Efficient Runahead Execution [ISCA’05, IEEE Micro Top Picks’06] • Ineffectiveness for pointer-intensive applications • Runahead cannot parallelize dependent L2 cache misses • Address-Value Delta (AVD) Prediction [MICRO’05] • Irresolvable branch mispredictions in runahead mode • Cannot recover from a mispredicted L2-miss dependent branch • Wrong Path Events [MICRO’04]

  29. Talk Outline • Motivation: The Memory Latency Problem • Runahead Execution • Evaluation • Limitations of the Baseline Runahead Mechanism • Efficient Runahead Execution • Address-Value Delta (AVD) Prediction • Summary of Contributions • Future Work

  30. The Efficiency Problem [ISCA’05] • A runahead processor pre-executes some instructions speculatively • Each pre-executed instruction consumes energy • Runahead execution significantly increases the number of executed instructions, sometimes without providing performance improvement

  31. The Efficiency Problem [Chart: on average, runahead execution increases the number of executed instructions by 27% while increasing IPC by 22%.]

  32. Efficiency of Runahead Execution
Efficiency = (% Increase in IPC) / (% Increase in Executed Instructions)
• Goals: • Reduce the number of executed instructions without reducing the IPC improvement • Increase the IPC improvement without increasing the number of executed instructions
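
Plugging in the averages reported later in this talk (slides 41 and 42) shows how the metric behaves; a tiny C check:

    #include <stdio.h>

    /* Efficiency = (% increase in IPC) / (% increase in executed
       instructions), using the averages from slides 41 and 42. */
    int main(void) {
        printf("baseline runahead:  %.2f\n", 22.6 / 26.5);  /* ~0.85 */
        printf("efficient runahead: %.2f\n", 22.1 / 6.2);   /* ~3.56 */
        return 0;
    }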

  33. Causes of Inefficiency • Short runahead periods • Overlapping runahead periods • Useless runahead periods

  34. Short Runahead Periods • The processor can initiate runahead mode due to an already in-flight L2 miss generated by the prefetcher, a wrong-path instruction, or a previous runahead period • Short periods are less likely to generate useful L2 misses and have high overhead due to the flush penalty at runahead exit

  35. Eliminating Short Runahead Periods • Mechanism to eliminate short periods: • Record the number of cycles C an L2 miss has been in flight • If C is greater than a threshold T for an L2 miss, disable entry into runahead mode due to that miss • T can be determined statically (at design time) or dynamically • T = 400 for a minimum main memory latency of 500 cycles works well
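
As a sketch, the entry check with the static threshold from the slide (T = 400 for a 500-cycle minimum memory latency); the function and variable names are mine.

    #include <stdbool.h>

    #define T_THRESHOLD 400  /* from the slide, for >=500-cycle latency */

    /* C = cycles the triggering L2 miss has already been in flight.
       If the miss is about to return, the runahead period would be too
       short to uncover new L2 misses and would still pay the exit flush. */
    static bool short_period_filter_allows(int cycles_in_flight_C) {
        return cycles_in_flight_C <= T_THRESHOLD;
    }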

  36. Overlapping Runahead Periods • Two runahead periods that execute the same instructions • The second period is inefficient [Timeline: Load 2 is INV under Miss 1; after runahead exits, Load 2 misses and triggers a second period that re-executes the overlapping instructions.]

  37. Overlapping Runahead Periods (cont.) • Overlapping periods are not necessarily useless • The availability of a new data value can result in the generation of useful L2 misses • But this does not happen often enough • Mechanism to eliminate overlapping periods: • Keep track of the number of pseudo-retired instructions R during a runahead period • Keep track of the number of fetched instructions N since the exit from the last runahead period • If N < R, do not enter runahead mode
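
The overlap filter reduces to a single comparison; a C sketch with assumed names:

    #include <stdbool.h>

    /* R = instructions pseudo-retired during the last runahead period,
       N = instructions fetched since exiting it. If N < R, a new period
       would mostly re-execute instructions the last period already saw. */
    static bool overlap_filter_allows(unsigned fetched_since_exit_N,
                                      unsigned pseudo_retired_R) {
        return fetched_since_exit_N >= pseudo_retired_R;
    }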

  38. Useless Runahead Periods • Periods that do not result in prefetches for normal mode • They exist due to the lack of memory-level parallelism • Mechanism to eliminate useless periods: • Predict whether a period will generate useful L2 misses • Estimate a period to be useful if it generated an L2 miss that cannot be captured by the instruction window • Useless period predictors are trained based on this estimation

  39. Predicting Useless Runahead Periods • Prediction based on the past usefulness of runahead periods caused by the same static load instruction • A 2-bit state machine records the past usefulness of a load • Prediction based on too many INV loads • If the fraction of INV loads in a runahead period is greater than T, exit runahead mode • Sampling (phase) based prediction • If the last N runahead periods generated fewer than T L2 misses, do not enter runahead for the next M runahead opportunities • Compile-time profile-based prediction • If runahead periods caused by a load were not useful in the profiling run, mark the load as a non-runahead load
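
A sketch of the first predictor in the list, the per-load 2-bit usefulness state machine; the table size and PC indexing are assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    #define UP_ENTRIES 1024

    /* One 2-bit saturating counter (0..3) per static load, indexed by PC. */
    static uint8_t useful_ctr[UP_ENTRIES];

    static bool predict_period_useful(uint64_t load_pc) {
        return useful_ctr[load_pc % UP_ENTRIES] >= 2;
    }

    /* Train when a runahead period caused by this load ends: count up
       if it generated useful L2 misses, down otherwise. */
    static void train_usefulness(uint64_t load_pc, bool was_useful) {
        uint8_t *c = &useful_ctr[load_pc % UP_ENTRIES];
        if (was_useful) { if (*c < 3) (*c)++; }
        else            { if (*c > 0) (*c)--; }
    }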

  40. Performance Optimizations for Efficiency • Both efficiency AND performance can be increased by increasing the usefulness of runahead periods • Three major optimizations: • Turning off the Floating Point Unit (FPU) in runahead mode • FP instructions do not contribute to the generation of load addresses • Optimizing the update policy of the hardware prefetcher (HWP) in runahead mode • Improves the positive interaction between runahead and the HWP • Early wake-up of INV instructions • Enables the faster removal of INV instructions

  41. Overall Impact on Executed Instructions [Chart: baseline runahead increases executed instructions by 26.5% on average; with all efficiency mechanisms combined, the increase drops to 6.2%.]

  42. Overall Impact on IPC [Chart: baseline runahead increases IPC by 22.6% on average; the efficient mechanisms preserve an increase of 22.1%.]

  43. Talk Outline • Motivation: The Memory Latency Problem • Runahead Execution • Evaluation • Limitations of the Baseline Runahead Mechanism • Efficient Runahead Execution • Address-Value Delta (AVD) Prediction • Summary of Contributions • Future Work

  44. The Problem: Dependent Cache Misses • Runahead execution cannot parallelize dependent misses: Load 2 depends on Load 1, so during runahead Load 2 cannot compute its address and becomes INV; Miss 2 is only serviced after the period ends • This is a wasted opportunity to improve performance and wasted energy (useless pre-execution) • Runahead performance would improve by 25% if this limitation were ideally overcome

  45. The Goal of AVD Prediction • Enable the parallelization of dependent L2 cache misses in runahead mode with a low-cost mechanism • How: predict the values of L2-miss address (pointer) loads • An address load loads an address into its destination register, which is later used to calculate the address of another load (as opposed to a data load, whose result is consumed as data); see the fragment below
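
The distinction in C terms, for a linked-list walk; this example is mine, not from the talk:

    struct node { int key; struct node *next; };

    static int walk(const struct node *p) {
        int sum = 0;
        while (p) {
            sum += p->key;  /* data load: the value is consumed as data  */
            p = p->next;    /* address load: the value becomes the
                               address of the next iteration's loads     */
        }
        return sum;
    }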

  46. Parallelizing Dependent Cache Misses [Timelines: without prediction, Load 2 cannot compute its address in runahead (INV under Miss 1) and misses again after the period ends. With the value of Load 1 predicted, Load 2 can compute its address, so Miss 2 is serviced in parallel with Miss 1, saving both cycles and speculative instructions.]

  47. AVD Prediction [MICRO'05] • The address-value delta (AVD) of a load instruction is defined as: AVD = Effective Address of Load - Data Value of Load • For some address loads, the AVD is stable • An AVD predictor keeps track of the AVDs of address loads • When a load is an L2 miss in runahead mode, the AVD predictor is consulted • If the predictor returns a stable (confident) AVD for that load, the value of the load is predicted: Predicted Value = Effective Address - Predicted AVD
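
A minimal C model of the prediction and training steps; the table organization, indexing, and confidence thresholds here are assumptions (the actual design is in the MICRO'05 paper, which also restricts which loads are tracked).

    #include <stdbool.h>
    #include <stdint.h>

    #define AVD_ENTRIES 16

    typedef struct {
        uint64_t pc;    /* static load, identified by PC */
        int64_t  avd;   /* last observed addr - value    */
        int      conf;  /* saturating confidence, 0..3   */
    } avd_entry_t;

    static avd_entry_t avd_tab[AVD_ENTRIES];

    /* Train on a completed address load. */
    static void avd_train(uint64_t pc, uint64_t eff_addr, uint64_t value) {
        avd_entry_t *e = &avd_tab[pc % AVD_ENTRIES];
        int64_t avd = (int64_t)(eff_addr - value);
        if (e->pc == pc && e->avd == avd) { if (e->conf < 3) e->conf++; }
        else { e->pc = pc; e->avd = avd; e->conf = 0; }
    }

    /* On an L2-miss load in runahead: predict its value if the AVD
       is stable enough, so dependents can compute their addresses. */
    static bool avd_predict(uint64_t pc, uint64_t eff_addr, uint64_t *pred) {
        avd_entry_t *e = &avd_tab[pc % AVD_ENTRIES];
        if (e->pc != pc || e->conf < 2) return false;
        *pred = eff_addr - (uint64_t)e->avd;
        return true;
    }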

  48. Why Do Stable AVDs Occur? • Regularity in the way data structures are allocated in memory AND traversed • Two types of loads can have stable AVDs: • Traversal address loads: produce addresses consumed by address loads • Leaf address loads: produce addresses consumed by data loads

  49. Traversal Address Loads
Regularly-allocated linked list: nodes reside at addresses A, A+k, A+2k, A+3k, ... A traversal address load loads the pointer to the next node: node = node->next.
AVD = Effective Addr - Data Value:

    Effective Addr   Data Value   AVD
    A                A+k          -k
    A+k              A+2k         -k
    A+2k             A+3k         -k

The data value strides by k, so the AVD is stable.
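
The stability is easy to demonstrate: if the nodes come from a contiguous pool (stride k = sizeof(struct node)), the traversal load's AVD is the constant -k. A self-contained C check (the program is mine, not from the talk):

    #include <stdio.h>
    #include <stdint.h>

    struct node { struct node *next; };

    int main(void) {
        struct node pool[4];                        /* regular allocation */
        for (int i = 0; i < 3; i++) pool[i].next = &pool[i + 1];
        pool[3].next = 0;

        for (struct node *n = pool; n->next; n = n->next) {
            intptr_t eff_addr = (intptr_t)&n->next; /* address the load uses */
            intptr_t value    = (intptr_t)n->next;  /* pointer it loads      */
            printf("AVD = %ld\n", (long)(eff_addr - value)); /* always -k */
        }
        return 0;
    }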

  50. Leaf Address Loads
Sorted dictionary in parser: nodes point to strings (words), and each string and its node are allocated consecutively (string at A, node at A+k; string at C, node at C+k; and so on through the tree). The dictionary is looked up for an input word. A leaf address load loads the pointer to the string of each node:

    lookup (node, input) {
        // ...
        ptr_str = node->string;
        m = check_match(ptr_str, input);
        // ...
    }

AVD = Effective Addr - Data Value:

    Effective Addr   Data Value   AVD
    A+k              A            k
    C+k              C            k
    F+k              F            k

There is no stride in the data values, but the AVD is stable.
