CPE 631: ILP, Dynamic Exploitation

Electrical and Computer EngineeringUniversity of Alabama in Huntsville

Aleksandar Milenković

milenka@ece.uah.edu

http://www.ece.uah.edu/~milenka

Outline
  • Instruction Level Parallelism (ILP)
  • Recap: Data Dependencies
  • Extended MIPS Pipeline and Hazards
  • Dynamic scheduling with a scoreboard
ILP: Concepts and Challenges
  • ILP (Instruction Level Parallelism) – overlap execution of unrelated instructions
  • Techniques that increase amount of parallelism exploited among instructions
    • reduce impact of data and control hazards
    • increase processor ability to exploit parallelism
  • Pipeline CPI = Ideal pipeline CPI + Structural stalls + RAW stalls + WAR stalls + WAW stalls + Control stalls
    • Reducing each of the terms on the right-hand side minimizes CPI and thus increases instruction throughput
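The CPI decomposition above can be checked with a quick back-of-the-envelope calculation. The stall counts below are hypothetical, purely to illustrate how the terms add up:

```python
# Hypothetical per-instruction stall contributions -- illustrative only.
stalls = {
    "structural": 0.05,
    "raw": 0.20,
    "war": 0.02,
    "waw": 0.01,
    "control": 0.12,
}

ideal_cpi = 1.0
pipeline_cpi = ideal_cpi + sum(stalls.values())
throughput_gain = pipeline_cpi / ideal_cpi  # speedup if all stalls were removed

print(f"Pipeline CPI = {pipeline_cpi:.2f}")
print(f"Removing all stalls would improve throughput {throughput_gain:.2f}x")
```

Each ILP technique in this lecture attacks one or more of these terms.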
Two approaches to exploit parallelism
  • Dynamic techniques
    • largely depend on hardware to locate the parallelism
  • Static techniques
    • rely on software
Where to look for ILP?
  • Amount of parallelism available within a basic block
    • BB: straight line code sequence of instructions with no branches in except to the entry, and no branches out except at the exit
    • Example: gcc (GNU C Compiler): 17% of instructions are control transfers
      • 5 or 6 instructions + 1 branch
      • Dependencies => the amount of parallelism within a basic block is likely to be much less than 5 => look beyond a single block to get more instruction-level parallelism
  • The simplest and most common way to increase the amount of parallelism among instructions is to exploit parallelism among iterations of a loop => Loop-Level Parallelism
  • Vector Processing: see Appendix G
  • for (i = 1; i <= 1000; i++) x[i] = x[i] + s;
Definition: Data Dependencies
  • Data dependence: instruction j is data dependent on instruction i if either of the following holds
    • Instruction i produces a result used by instruction j, or
    • Instruction j is data dependent on instruction k, and instruction k is data dependent on instruction i
  • If dependent, cannot execute in parallel
  • Try to schedule to avoid hazards
  • Easy to determine for registers (fixed names)
  • Hard for memory (“memory disambiguation”):
    • Does 100(R4) = 20(R6)?
    • From different loop iterations, does 20(R6) = 20(R6)?
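The disambiguation question above can be phrased as a tiny decision procedure. This is an illustrative sketch, not real hardware; the conservative default (assume aliasing when the base register values are unknown) is the point:

```python
# Minimal sketch of memory disambiguation for base+offset addresses.
def may_alias(offset_a, base_a, offset_b, base_b, known_values):
    """Return True unless we can PROVE the two accesses differ.

    known_values maps register names to known run-time values; an
    unknown base register forces the conservative answer."""
    if base_a in known_values and base_b in known_values:
        return offset_a + known_values[base_a] == offset_b + known_values[base_b]
    return True  # unknown base value => must conservatively assume aliasing

# Does 100(R4) equal 20(R6)?  Without register values we must say "maybe":
assert may_alias(100, "R4", 20, "R6", {}) is True
# With concrete values the question can actually be decided:
assert may_alias(100, "R4", 20, "R6", {"R4": 0, "R6": 80}) is True   # 100 == 100
assert may_alias(100, "R4", 20, "R6", {"R4": 0, "R6": 0}) is False   # 100 != 20
```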
Examples of Data Dependencies
  • Loop: L.D F0, 0(R1) ; F0 = array element
  • ADD.D F4, F0, F2 ; add scalar in F2
  • S.D 0(R1), F4 ; store result
  • DADDUI R1, R1, #-8 ; decrement pointer
  • BNE R1, R2, Loop ; branch if R1 != R2
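A quick way to see the true dependences in this loop body is to scan each instruction's destination register against later instructions' source registers. A minimal sketch; the register lists are transcribed from the code above, and the tuple encoding is an assumption for illustration:

```python
# Each instruction is (text, destination regs, source regs).
insns = [
    ("L.D F0,0(R1)",      ["F0"], ["R1"]),
    ("ADD.D F4,F0,F2",    ["F4"], ["F0", "F2"]),
    ("S.D 0(R1),F4",      [],     ["R1", "F4"]),
    ("DADDUI R1,R1,#-8",  ["R1"], ["R1"]),
    ("BNE R1,R2,Loop",    [],     ["R1", "R2"]),
]

# Naive scan: instruction j is RAW-dependent on i if it reads a
# register that i writes (intervening redefinitions ignored here).
raw = []
for i, (_, dests_i, _) in enumerate(insns):
    for j in range(i + 1, len(insns)):
        _, _, srcs_j = insns[j]
        if any(d in srcs_j for d in dests_i):
            raw.append((i, j))

print(raw)  # pairs (producer index, consumer index)
```

The chain L.D -> ADD.D -> S.D and DADDUI -> BNE matches the dependences named on this slide.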
Definition: Name Dependencies
  • Two instructions use same name (register or memory location) but don’t exchange data
    • Antidependence (WAR if a hazard for HW): instruction j writes a register or memory location that instruction i reads; instruction i is executed first
    • Output dependence (WAW if a hazard for HW): instruction i and instruction j write the same register or memory location; ordering between the instructions must be preserved. If dependent, can't execute in parallel
  • Renaming can remove name dependencies
  • Again, name dependencies are hard to detect for memory accesses
    • Does 100(R4) = 20(R6)?
    • From different loop iterations, does 20(R6) = 20(R6)?
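Register renaming can be sketched in a few lines: give every write a fresh physical name and rewrite later reads to use the current name, which removes WAR/WAW name dependences while preserving true (RAW) dependences. An illustrative model, not the actual CDC 6600 or IBM 360/91 mechanism; the P-register names are invented:

```python
# Minimal register-renaming sketch. Instructions are (dest, [sources]);
# dest is None for instructions that write no register (e.g. stores).
def rename(insns):
    mapping = {}                                  # architectural -> physical
    fresh = iter(f"P{n}" for n in range(1000))    # unbounded fresh names
    out = []
    for dest, srcs in insns:
        new_srcs = [mapping.get(s, s) for s in srcs]  # read current names
        if dest is not None:
            mapping[dest] = next(fresh)               # fresh name per write
        out.append((mapping.get(dest, dest), new_srcs))
    return out

# Two loop iterations reusing F0/F4 (name dependences only):
code = [("F0", ["R1"]), ("F4", ["F0", "F2"]), (None, ["R1", "F4"]),
        ("F0", ["R1"]), ("F4", ["F0", "F2"]), (None, ["R1", "F4"])]
renamed = rename(code)
# The second iteration now writes different physical registers,
# so the iterations can overlap:
assert renamed[0][0] != renamed[3][0]
assert renamed[1][0] != renamed[4][0]
```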
Where are the name dependencies?

1  Loop: L.D F0,0(R1)
2   ADD.D F4,F0,F2
3   S.D 0(R1),F4 ;drop DSUBUI & BNEZ
4   L.D F0,-8(R1)
5   ADD.D F4,F0,F2
6   S.D -8(R1),F4 ;drop DSUBUI & BNEZ
7   L.D F0,-16(R1)
8   ADD.D F4,F0,F2
9   S.D -16(R1),F4 ;drop DSUBUI & BNEZ
10  L.D F0,-24(R1)
11  ADD.D F4,F0,F2
12  S.D -24(R1),F4
13  DSUBUI R1,R1,#32 ;alter to 4*8
14  BNEZ R1,LOOP
15  NOP

How can we remove them?

Where are the name dependencies?

1  Loop: L.D F0,0(R1)
2   ADD.D F4,F0,F2
3   S.D 0(R1),F4 ;drop DSUBUI & BNEZ
4   L.D F6,-8(R1)
5   ADD.D F8,F6,F2
6   S.D -8(R1),F8 ;drop DSUBUI & BNEZ
7   L.D F10,-16(R1)
8   ADD.D F12,F10,F2
9   S.D -16(R1),F12 ;drop DSUBUI & BNEZ
10  L.D F14,-24(R1)
11  ADD.D F16,F14,F2
12  S.D -24(R1),F16
13  DSUBUI R1,R1,#32 ;alter to 4*8
14  BNEZ R1,LOOP
15  NOP

The original "register renaming"

Definition: Control Dependencies
  • Example: if p1 {S1;}; if p2 {S2;};S1 is control dependent on p1 and S2 is control dependent on p2 but not on p1
  • Two constraints on control dependences:
    • An instruction that is control dep. on a branch cannot be moved before the branch, so that its execution is no longer controlled by the branch
    • An instruction that is not control dep. on a branch cannot be moved to after the branch so that its execution is controlled by the branch
  • DADDU R5, R6, R7
  • ADD R1, R2, R3
  • BEQZ R4, L
  • SUB R1, R5, R6
  • L: OR R7, R1, R8
Overcoming Data Hazards with Dynamic Scheduling
  • Why in HW at run time?
    • Works when can’t know real dependence at compile time
    • Simpler compiler
    • Code for one machine runs well on another
  • Example
  • Key idea: Allow instructions behind stall to proceed

SUB.D cannot execute because the dependence of ADD.D on DIV.D causes the pipeline to stall; yet SUB.D is not data dependent on anything!

  • DIV.D F0,F2,F4
  • ADD.D F10,F0,F8
  • SUB.D F12,F8,F12
Overcoming Data Hazards with Dynamic Scheduling (cont’d)
  • Enables out-of-order execution => out-of-order completion
  • Out-of-order execution divides ID stage:
    • 1. Issue—decode instructions, check for structural hazards
    • 2. Read operands—wait until no data hazards, then read operands
  • Scoreboarding – technique for allowing instructions to execute out of order when there are sufficient resources and no data dependencies (CDC 6600, 1963)
Scoreboarding Implications
  • Out-of-order completion => WAR, WAW hazards?
  • Solutions for WAR
    • Queue both the operation and copies of its operands
    • Read registers only during Read Operands stage
  • For WAW, must detect hazard: stall until other completes
  • Need to have multiple instructions in execution phase => multiple execution units or pipelined execution units
  • Scoreboard keeps track of dependencies and the state of operations
  • Scoreboard replaces ID, EX, WB with 4 stages
  • DIV.D F0,F2,F4
  • ADD.D F10,F0,F8
  • SUB.D F10,F8,F12
  • DIV.D F0,F2,F4
  • ADD.D F10,F0,F8
  • SUB.D F8,F8,F12
Four Stages of Scoreboard Control
  • ID1: Issue — decode instructions & check for structural hazards
  • ID2: Read operands — wait until no data hazards, then read operands
  • EX: Execute — operate on operands; when the result is ready, it notifies the scoreboard that it has completed execution
  • WB: Write results — finish execution; the scoreboard checks for WAR hazards. If none, it writes results. If WAR, then it stalls the instruction
  • DIV.D F0,F2,F4
  • ADD.D F10,F0,F8
  • SUB.D F8,F8,F12

Scoreboarding stalls the SUB.D in its Write Result stage until ADD.D reads its operands

Four Stages of Scoreboard Control
  • 1. Issue—decode instructions & check for structural hazards (ID1)
    • If a functional unit for the instruction is free and no other active instruction has the same destination register (WAW), the scoreboard issues the instruction to the functional unit and updates its internal data structure. If a structural or WAW hazard exists, then the instruction issue stalls, and no further instructions will issue until these hazards are cleared.
  • 2. Read operands—wait until no data hazards, then read operands (ID2)
    • A source operand is available if no earlier issued active instruction is going to write it, or if the register containing the operand is being written by a currently active functional unit. When the source operands are available, the scoreboard tells the functional unit to proceed to read the operands from the registers and begin execution. The scoreboard resolves RAW hazards dynamically in this step, and instructions may be sent into execution out of order.
Four Stages of Scoreboard Control
  • 3. Execution—operate on operands (EX)
    • The functional unit begins execution upon receiving operands. When the result is ready, it notifies the scoreboard that it has completed execution.
  • 4. Write result—finish execution (WB)
    • Once the scoreboard is aware that the functional unit has completed execution, the scoreboard checks for WAR hazards. If none, it writes results. If WAR, then it stalls the instruction.
    • Example:
    • The CDC 6600 scoreboard would stall SUB.D until ADD.D reads its operands

DIV.D F0,F2,F4

ADD.D F10,F0,F8

SUB.D F8,F8,F14
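The Issue-stage rule (stall on a busy functional unit or on a WAW conflict) can be sketched as a predicate. The dictionaries standing in for the scoreboard's Busy and Register-result tables are an assumed encoding, for illustration only:

```python
# Sketch of the scoreboard Issue check: stall on a structural hazard
# (functional unit busy) or a WAW hazard (another active instruction
# already targets the same destination register).
def can_issue(fu, dest, busy, result):
    """busy: FU -> bool; result: dest reg -> FU that will write it."""
    structural = busy.get(fu, False)
    waw = result.get(dest) is not None
    return not structural and not waw

busy = {"add": False, "div": True}
result = {"F0": "div"}          # DIV.D F0,... is in flight

assert can_issue("add", "F10", busy, result)        # ADD.D F10,F0,F8 issues
assert not can_issue("div", "F2", busy, result)     # divider busy: structural
assert not can_issue("add", "F0", busy, result)     # same dest F0: WAW stall
```

Note that the RAW hazard between DIV.D and ADD.D does not block issue; it is resolved later, in the Read Operands stage.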

Three Parts of the Scoreboard
  • 1. Instruction status—which of 4 steps the instruction is in (Capacity = window size)
  • 2. Functional unit status—Indicates the state of the functional unit (FU). 9 fields for each functional unit
    • Busy—Indicates whether the unit is busy or not
    • Op—Operation to perform in the unit (e.g., + or –)
    • Fi—Destination register
    • Fj, Fk—Source-register numbers
    • Qj, Qk—Functional units producing source registers Fj, Fk
    • Rj, Rk—Flags indicating when Fj, Fk are ready
  • 3. Register result status—Indicates which functional unit will write each register, if one exists. Blank when no pending instructions will write that register
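The nine functional-unit status fields can be captured directly in a small record type. A sketch, with field names taken from the list above:

```python
# One scoreboard functional-unit status entry (the 9 fields above).
from dataclasses import dataclass
from typing import Optional

@dataclass
class FUStatus:
    busy: bool = False            # is the unit in use?
    op: Optional[str] = None      # operation to perform (e.g. "+" or "-")
    fi: Optional[str] = None      # destination register
    fj: Optional[str] = None      # source register 1
    fk: Optional[str] = None      # source register 2
    qj: Optional[str] = None      # FU that will produce Fj (None if none)
    qk: Optional[str] = None      # FU that will produce Fk
    rj: bool = False              # is Fj ready to be read?
    rk: bool = False              # is Fk ready to be read?

# Example: an add unit whose second operand is still being produced.
add1 = FUStatus(busy=True, op="+", fi="F6", fj="F2", fk="F4",
                qj=None, qk="Mult1", rj=True, rk=False)
# Read Operands must wait: Fk is not ready until Mult1 writes F4.
assert not (add1.rj and add1.rk)
```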
MIPS with a Scoreboard

[Figure: MIPS with a scoreboard. The scoreboard's control/status signals connect the register file to the functional units (FP multipliers, FP dividers, and adders Add1-Add3).]
Detailed Scoreboard Pipeline Control

Instruction status | Wait until | Bookkeeping
Issue | Not Busy(FU) and not Result(D) | Busy(FU)←Yes; Op(FU)←op; Fi(FU)←D; Fj(FU)←S1; Fk(FU)←S2; Qj←Result(S1); Qk←Result(S2); Rj←not Qj; Rk←not Qk; Result(D)←FU
Read operands | Rj and Rk | Rj←No; Rk←No
Execution complete | Functional unit done | (none)
Write result | ∀f ((Fj(f)≠Fi(FU) or Rj(f)=No) and (Fk(f)≠Fi(FU) or Rk(f)=No)) | ∀f (if Qj(f)=FU then Rj(f)←Yes); ∀f (if Qk(f)=FU then Rk(f)←Yes); Result(Fi(FU))←0; Busy(FU)←No
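The Write Result wait condition above (the WAR check) can be sketched as a predicate over the other units' Fj/Fk/Rj/Rk fields; the tuple encoding is an assumption for illustration. Recall that Rj=Yes means the operand has not yet been read:

```python
# Sketch of the Write Result wait condition: a unit may write its
# destination Fi only if no other unit f still needs to READ that
# register (for every f: Fj(f)/Fk(f) is not Fi, or f already read it).
def can_write(fi, others):
    """others: list of (fj, rj, fk, rk) for every other functional unit."""
    return all((fj != fi or not rj) and (fk != fi or not rk)
               for fj, rj, fk, rk in others)

# ADD.D has not yet read F8 (rj=True = still waiting to read), so a unit
# wanting to write F8 must stall -- this is exactly a WAR hazard:
assert not can_write("F8", [("F8", True, "F12", True)])
# Once ADD.D has read its operands (flags cleared), the write proceeds:
assert can_write("F8", [("F8", False, "F12", False)])
```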

Scoreboard Example: Cycle 2

Structural hazard! No further instructions will issue!

Issue 2nd L.D?

Scoreboard Example: Cycle 4

Check for WAR hazards!

If none, write result!

Scoreboard Example: Cycle 9

Read operands for MUL.D and SUB.D! Assume we can feed Mult1 and Add units in the same clock cycle.

Issue ADD.D? Structural Hazard (unit is busy)!

Scoreboard Example: Cycle 11

Last cycle of SUB.D execution.

Scoreboard Example: Cycle 12

Check WAR on F8. Write F8.

Scoreboard Example: Cycle 14

Read operands for ADD.D!

Scoreboard Example: Cycle 17

Why can't we write F6?

Scoreboard Results
  • For the CDC 6600
    • 70% improvement for Fortran
    • 150% improvement for hand coded assembly language
    • cost was similar to one of the functional units
      • surprisingly low
      • bulk of cost was in the extra busses
  • Still, this was in ancient times
    • no caches & no main semiconductor memory
    • no software pipelining
    • compilers?
  • So why is it coming back?
    • performance via ILP
Scoreboard Limitations
  • Amount of parallelism among instructions
    • can we find independent instructions to execute
  • Number of scoreboard entries
    • how far ahead the pipeline can look for independent instructions (we assume a window does not extend beyond a branch)
  • Number and types of functional units
    • avoid structural hazards
  • Presence of antidependences and output dependences
    • WAR and WAW stalls become more important
Things to Remember
  • Pipeline CPI = Ideal pipeline CPI + Structural stalls + RAW stalls + WAR stalls + WAW stalls + Control stalls
  • Data dependencies
  • Dynamic scheduling to minimise stalls
  • Dynamic scheduling with a scoreboard
Tomasulo’s Algorithm
  • Used in IBM 360/91 FPU (before caches)
  • Goal: high FP performance without special compilers
  • Conditions:
    • Small number of floating point registers (4 in 360) prevented interesting compiler scheduling of operations
    • Long memory accesses and long FP delays
    • This led Tomasulo to try to figure out how to get more effective registers — renaming in hardware!
  • Why Study 1966 Computer?
  • The descendants of this have flourished!
    • Alpha 21264, HP 8000, MIPS 10000, Pentium III, PowerPC 604, …
Tomasulo’s Algorithm (cont’d)
  • Control & buffers distributed with Function Units (FU)
    • FU buffers called “reservation stations” => buffer the operands of instructions waiting to issue;
  • Registers in instructions replaced by values or pointers to reservation stations (RS) => register renaming
    • avoids WAR, WAW hazards
    • More reservation stations than registers, so can do optimizations compilers can’t
  • Results to FU from RS, not through registers, over Common Data Bus that broadcasts results to all FUs
  • Load and Stores treated as FUs with RSs as well
  • Integer instructions can go past branches, allowing FP ops beyond basic block in FP queue
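The renaming step above can be sketched as follows: at issue, each source operand is recorded either as a value (Vj/Vk) or as the tag of the reservation station that will produce it (Qj/Qk), and the destination register's status entry is overwritten with this station's tag. A simplified model, not the 360/91 hardware; the Mult1/Add1 names follow the figure on the next slide:

```python
# Sketch of Tomasulo issue-time renaming.
def issue(srcs, dest, rs_name, reg_status, reg_file):
    """reg_status: register -> producing RS tag (absent/None if none).
    Returns (V fields with values, Q fields with tags)."""
    v, q = {}, {}
    for name, r in zip(("j", "k"), srcs):
        if reg_status.get(r):            # another RS will write r
            q[name] = reg_status[r]      # wait for that tag on the CDB
        else:
            v[name] = reg_file[r]        # value is available now
    reg_status[dest] = rs_name           # rename: dest now comes from this RS
    return v, q

reg_file = {"F2": 2.0, "F4": 4.0, "F6": 6.0}
reg_status = {}
# MUL.D F0,F2,F4 issues to Mult1: both operands available as values.
v, q = issue(("F2", "F4"), "F0", "Mult1", reg_status, reg_file)
assert v == {"j": 2.0, "k": 4.0} and q == {}
# ADD.D F8,F0,F6 now waits on the TAG Mult1, not on register F0:
v2, q2 = issue(("F0", "F6"), "F8", "Add1", reg_status, reg_file)
assert q2 == {"j": "Mult1"} and v2 == {"k": 6.0}
```

Because later instructions name the producing station rather than the register, WAR and WAW hazards on the register names disappear.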
Tomasulo-based FPU for MIPS

[Figure: Tomasulo-based FPU for MIPS. The FP op queue (from the instruction unit) issues to reservation stations (Add1-Add3 feeding the FP adders, Mult1-Mult2 feeding the FP multipliers); six load buffers and three store buffers connect to memory; results return over the Common Data Bus (CDB) to the FP registers, reservation stations, and store buffers.]
Reservation Station Components
  • Op: Operation to perform in the unit (e.g., + or –)
  • Vj, Vk: Value of Source operands
    • Store buffers have a V field, the result to be stored
  • Qj, Qk: Reservation stations producing source registers (value to be written)
    • Note: Qj/Qk=0 => source operand is already available in Vj/Vk
    • Store buffers only have Qi, for the RS producing the result
  • Busy: Indicates reservation station or FU is busy

Register result status—Indicates which functional unit will write each register, if one exists. Blank when no pending instructions that will write that register.

Three Stages of Tomasulo Algorithm
  • 1. Issue—get instruction from FP Op Queue
    • If reservation station free (no structural hazard), control issues instr & sends operands (renames registers)
  • 2. Execute—operate on operands (EX)
    • When both operands ready then execute;if not ready, watch Common Data Bus for result
  • 3. Write result—finish execution (WB)
    • Write it on Common Data Bus to all awaiting units; mark reservation station available
  • Normal data bus: data + destination (“go to” bus)
  • Common data bus: data + source (“come from” bus)
    • 64 bits of data + 4 bits of Functional Unit source address
    • Write if matches expected Functional Unit (produces result)
    • Does the broadcast
  • Example speed: 2 clocks for FP +/-; 10 clocks for *; 40 clocks for /
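The CDB's "come from" broadcast can be sketched as a tag match against every waiting reservation station; dictionaries stand in for the stations' Vj/Vk/Qj/Qk fields (an assumed encoding):

```python
# Sketch of a Common Data Bus broadcast: (tag, value) goes to every
# waiting reservation station; any station whose Qj/Qk matches the tag
# captures the value and clears that Q field.
def broadcast(tag, value, stations):
    for rs in stations:
        for name in ("j", "k"):
            if rs.get("q" + name) == tag:
                rs["v" + name] = value
                rs["q" + name] = None    # operand is now available

add1 = {"qj": "Load1", "vj": None, "qk": None, "vk": 2.0}
add2 = {"qj": "Load1", "vj": None, "qk": "Add1", "vk": None}
broadcast("Load1", 34.0, [add1, add2])

# One broadcast wakes up every consumer of Load1's result at once:
assert add1["vj"] == 34.0 and add1["qj"] is None
assert add2["vj"] == 34.0 and add2["qk"] == "Add1"   # still waiting on Add1
```

This is why a single CDB both enables simultaneous release of waiting instructions and becomes the performance bottleneck discussed later.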
Tomasulo Example

[Figure: Tomasulo example table. It tracks the instruction stream, 3 load buffers, 3 FP adder reservation stations, 2 FP multiplier reservation stations, per-FU count-down timers, and a clock cycle counter.]

Tomasulo Example Cycle 2

Note: Can have multiple loads outstanding

Tomasulo Example Cycle 3
  • Note: register names are removed ("renamed") in the reservation stations; MULT issued
  • Load1 completing; what is waiting for Load1?
Tomasulo Example Cycle 4
  • Load2 completing; what is waiting for Load2?
Tomasulo Example Cycle 5
  • Timer starts down for Add1, Mult1
Tomasulo Example Cycle 6
  • Issue ADDD here despite name dependency on F6?
Tomasulo Example Cycle 7
  • Add1 (SUBD) completing; what is waiting for it?
Tomasulo Example Cycle 10
  • Add2 (ADDD) completing; what is waiting for it?
Tomasulo Example Cycle 11
  • Write result of ADDD here?
  • All quick instructions complete in this cycle!
Tomasulo Example Cycle 15
  • Mult1 (MULTD) completing; what is waiting for it?
Tomasulo Example Cycle 16
  • Just waiting for Mult2 (DIVD) to complete
Tomasulo Example Cycle 56
  • Mult2 (DIVD) is completing; what is waiting for it?
Tomasulo Example Cycle 57
  • Once again: In-order issue, out-of-order execution and out-of-order completion.
Tomasulo Drawbacks
  • Complexity
    • delays of 360/91, MIPS 10000, Alpha 21264, IBM PPC 620 in CA:AQA 2/e, but not in silicon!
  • Many associative stores (CDB) at high speed
  • Performance limited by Common Data Bus
    • Each CDB must go to multiple functional units => high capacitance, high wiring density
    • Number of functional units that can complete per cycle limited to one!
      • Multiple CDBs => more FU logic for parallel associative stores
  • Non-precise interrupts!
    • We will address this later
Tomasulo Loop Example

Loop: LD    F0, 0(R1)
      MULTD F4, F0, F2
      SD    F4, 0(R1)
      SUBI  R1, R1, #8
      BNEZ  R1, Loop

  • This time assume Multiply takes 4 clocks
  • Assume 1st load takes 8 clocks (L1 cache miss), 2nd load takes 1 clock (hit)
  • To be clear, will show clocks for SUBI, BNEZ
    • Reality: integer instructions run ahead of FP instructions
  • Show 2 iterations
Loop Example

[Figure: loop example table. Columns track the iteration count, the instruction loop, the added store buffers, and the value of the register used for the address and iteration control.]
Loop Example Cycle 3

Implicit renaming sets up data flow graph

Loop Example Cycle 20
  • Once again: In-order issue, out-of-order execution and out-of-order completion.
Why can Tomasulo overlap iterations of loops?
  • Register renaming
    • Multiple iterations use different physical destinations for registers (dynamic loop unrolling)
  • Reservation stations
    • Permit instruction issue to advance past integer control flow operations
    • Also buffer old values of registers - totally avoiding the WAR stall that we saw in the scoreboard
  • Other perspective: Tomasulo building data flow dependency graph on the fly
Tomasulo’s scheme offers 2 major advantages
  • (1) the distribution of the hazard detection logic
    • distributed reservation stations and the CDB
    • If multiple instructions waiting on single result, & each instruction has other operand, then instructions can be released simultaneously by broadcast on CDB
    • If a centralized register file were used, the units would have to read their results from the registers when register buses are available.
  • (2) the elimination of stalls for WAW and WAR hazards
Multiple Issue
  • Allow multiple instructions to issue in a single clock cycle (CPI < 1)
  • Two flavors
    • Superscalar
      • Issue a varying number of instructions per clock
      • Can be statically (compiler tech.) or dynamically (Tomasulo) scheduled
    • VLIW (Very Long Instruction Word)
      • Issue a fixed number of instructions formatted as a single long instruction or as a fixed instruction packet
Multiple Issue with Dynamic Scheduling

[Figure: dual-issue Tomasulo pipeline, issuing 2 instructions per clock cycle. The datapath is the same as the Tomasulo-based FPU: FP op queue, reservation stations (Add1-Add3, Mult1-Mult2), six load buffers, three store buffers, FP registers, FP adders and multipliers, connected by the CDB.]
Multiple Issue with Dynamic Scheduling: An Example

Loop: L.D    F0, 0(R1)
      ADD.D  F4, F0, F2
      S.D    0(R1), F4
      DADDIU R1, R1, #-8
      BNE    R1, R2, Loop

Assumptions:

2-issue processor: can issue any pair of instructions if reservation stations are available

Resources: ALU (int + effective address), a separate pipelined FP unit for each operation type, branch prediction hardware, 1 CDB

2 cc for loads, 3 cc for FP Add

Branches single issue, branch prediction is perfect

Multiple Issue with Dynamic Scheduling
  • DADDIU waits for ALU used by S.D
    • Add one ALU dedicated to effective address calculation
    • Use 2 CDBs
  • Draw table for the dual-issue version of Tomasulo’s pipeline
What about Precise Interrupts?
  • Tomasulo had:In-order issue, out-of-order execution, and out-of-order completion
  • Need to “fix” the out-of-order completion aspect so that we can find precise breakpoint in instruction stream
Hardware-based Speculation
  • With wide-issue processors, control dependences become a burden, even with sophisticated branch predictors
  • Speculation: speculate on the outcome of branches and execute the program as if our guesses were correct => need a mechanism to handle situations when the speculations were incorrect
Relationship between precise interrupts and speculation
  • Speculation is a form of guessing
  • Important for branch prediction:
    • Need to “take our best shot” at predicting branch direction
  • If we speculate and are wrong, need to back up and restart execution to point at which we predicted incorrectly:
    • This is exactly the same as precise exceptions!
  • Technique for both precise interrupts/exceptions and speculation: in-order completion or commit
HW support for precise interrupts
  • Need HW buffer for results of uncommitted instructions: reorder buffer (ROB)
    • 4 fields: instr. type, destination, value, ready
    • Use reorder buffer number instead of reservation station when execution completes
    • Supplies operands between execution complete & commit
    • (Reorder buffer can be operand source => more registers like RS)
    • Instructions commit in order
    • Once an instruction commits, its result is put into the register
    • As a result, easy to undo speculated instructions on mispredicted branches or exceptions

[Figure: Tomasulo pipeline extended with a reorder buffer. The FP op queue issues to reservation stations feeding the FP adders; results go to the reorder buffer, which updates the FP registers at commit.]

Four Steps of Speculative Tomasulo Algorithm
  • 1. Issue—get instruction from FP Op Queue
    • If reservation station and reorder buffer slot free, issue instr & send operands & reorder buffer no. for destination (this stage sometimes called “dispatch”)
  • 2. Execution—operate on operands (EX)
    • When both operands ready then execute; if not ready, watch CDB for result; when both in reservation station, execute; checks RAW (sometimes called “issue”)
  • 3. Write result—finish execution (WB)
    • Write on Common Data Bus to all awaiting FUs & reorder buffer; mark reservation station available.
  • 4. Commit—update register with reorder result
    • When instr. at head of reorder buffer & result present, update register with result (or store to memory) and remove instr from reorder buffer. Mispredicted branch flushes reorder buffer (sometimes called “graduation”)
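The commit step, including the mispredicted-branch flush, can be sketched with a queue for the ROB. The entry fields are a simplified version of the four listed earlier (instruction type, destination, value, ready), with a "mispredicted" flag added for the branch case; all names are illustrative:

```python
# Sketch of in-order commit from a reorder buffer: only the head entry
# may commit, and a mispredicted branch at the head flushes everything
# (speculative) behind it.
from collections import deque

def commit(rob, reg_file):
    committed = []
    while rob and rob[0]["ready"]:
        entry = rob.popleft()
        if entry["type"] == "branch" and entry["mispredicted"]:
            rob.clear()                  # flush speculative instructions
            committed.append(entry["dest"])
            break
        reg_file[entry["dest"]] = entry["value"]
        committed.append(entry["dest"])
    return committed

regs = {}
rob = deque([
    {"type": "alu", "dest": "F0", "value": 1.0, "ready": True},
    {"type": "branch", "dest": "BNE", "value": None, "ready": True,
     "mispredicted": True},
    {"type": "alu", "dest": "F4", "value": 9.0, "ready": True},  # speculative
])
assert commit(rob, regs) == ["F0", "BNE"]
assert regs == {"F0": 1.0} and len(rob) == 0  # F4 never reached the registers
```

Because the speculative ADD result sat only in the ROB, undoing the misprediction is just a flush; the register file was never corrupted.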
What are the hardware complexities with reorder buffer (ROB)?

[Figure: reorder buffer hardware. Each reorder-table entry holds a program counter, destination register, result, valid bit, and exception information; an associative comparison network locates the latest producer of a register; the FP op queue, FP registers, reservation stations, and FP adders surround the buffer.]

  • How do you find the latest version of a register?
    • (As specified in the Smith paper) need an associative comparison network
    • Could use future file or just use the register result status buffer to track which specific reorder buffer has received the value
  • Need as many ports on ROB as register file
Summary
  • Reservation stations: implicit register renaming to a larger set of registers + buffering of source operands
    • Prevents registers as bottleneck
    • Avoids WAR, WAW hazards of Scoreboard
    • Allows loop unrolling in HW
  • Not limited to basic blocks (the integer unit gets ahead, beyond branches)
  • Today, helps cache misses as well
    • Don’t stall for L1 Data cache miss (insufficient ILP for L2 miss?)
  • Lasting Contributions
    • Dynamic scheduling
    • Register renaming
    • Load/store disambiguation
  • 360/91 descendants are Pentium III; PowerPC 604; MIPS R10000; HP-PA 8000; Alpha 21264