
Review: Pipelining




Presentation Transcript


  1. Review: Pipelining

  2. Pipelining: Laundry Example
  • Ann, Brian, Cathy, Dave each have one load of clothes to wash, dry, and fold
  • Washer takes 30 minutes
  • Dryer takes 40 minutes
  • "Folder" takes 20 minutes

  3. Sequential Laundry
  • Sequential laundry takes 6 hours for 4 loads
  [Figure: task order A–D vs. time, 6 PM to midnight; each load occupies 30 + 40 + 20 minutes, back-to-back]

  4. Pipelined Laundry: Start Work ASAP
  • Pipelined laundry takes 3.5 hours for 4 loads
  [Figure: task order A–D vs. time from 6 PM; stages overlap, so segments of 30, 40, 40, 40, 40, and 20 minutes span the whole job]

  5. Pipelining: Observations
  • Multiple tasks operate simultaneously
  • Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload
  • Pipeline rate is limited by the slowest pipeline stage
  • Potential speedup = number of pipe stages
  • Unbalanced lengths of pipe stages reduce speedup
  • Time to "fill" the pipeline and time to "drain" it reduces speedup
  [Figure: the pipelined laundry timeline (6 PM onward) from the previous slide]
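  As a quick check of the numbers on the preceding slides (a worked example using the stage times given above; the dryer, at 40 minutes, is the slowest stage):

  $$T_{\text{sequential}} = 4 \times (30 + 40 + 20) = 360 \text{ min} = 6 \text{ hours}$$

  $$T_{\text{pipelined}} = (30 + 40 + 20) + (4 - 1) \times 40 = 210 \text{ min} = 3.5 \text{ hours}$$

  The achieved speedup, 360/210 ≈ 1.7, falls short of the 3-stage potential exactly because the stages are unbalanced and fill/drain time looms large over only four loads.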

  6. 5 Steps of DLX Datapath (Figure 3.1)
  Instruction Fetch → Instr. Decode/Reg. Fetch → Execute/Addr. Calc. → Memory Access → Write Back
  [Figure: the unpipelined datapath: PC, Inst. Mem., IR, Regs, Sign Ext. (16→32-bit Imm.), A/B ALU inputs with MUXes, Zero?/Cond., NPC adder (+4), Data Mem., LMD, ALU output]

  7. Pipelined DLX Datapath (Figure 3.4)
  The same five stages (Instruction Fetch, Instr. Decode/Reg. Fetch, Execute/Addr. Calc., Memory Access, Write Back), now separated by pipeline registers: IF/ID, ID/EX, EX/MEM, MEM/WB
  [Figure: the datapath of Figure 3.1 with the four pipeline latches inserted between stages]

  8. Visualizing Pipelining (Figure 3.3)
  [Figure: pipeline diagram with instruction order down the vertical axis and time (clock cycles) across the horizontal axis]

  9. Limits to Pipelining
  • Hazards prevent the next instruction from executing during its designated clock cycle
    • Structural hazards: HW cannot support this combination of instructions
    • Data hazards: an instruction depends on the result of a prior instruction still in the pipeline
    • Control hazards: pipelining of branches and other instructions that change the PC
  • The common solution is to stall the pipeline until the hazard is resolved, inserting one or more "bubbles" into the pipeline

  10. One Memory Port/Structural Hazards (Figure 3.6)
  [Figure: pipeline diagram of a Load followed by Instr 1–4; with one memory port, the Load's memory access and a later instruction's fetch need the port in the same cycle]

  11. One Memory Port/Structural Hazards (Figure 3.7)
  [Figure: the same sequence with a stall inserted before Instr 3's fetch to resolve the memory-port conflict]

  12. Speed Up Equation for Pipelining

  $$\text{Speedup} = \frac{\text{Avg instr time}_{\text{unpipelined}}}{\text{Avg instr time}_{\text{pipelined}}} = \frac{\text{CPI}_{\text{unpipelined}} \times \text{Clock cycle}_{\text{unpipelined}}}{\text{CPI}_{\text{pipelined}} \times \text{Clock cycle}_{\text{pipelined}}}$$

  $$\text{Ideal CPI} = \text{CPI}_{\text{unpipelined}} / \text{Pipeline depth}$$

  $$\text{Speedup} = \frac{\text{Ideal CPI} \times \text{Pipeline depth}}{\text{CPI}_{\text{pipelined}}} \times \frac{\text{Clock cycle}_{\text{unpipelined}}}{\text{Clock cycle}_{\text{pipelined}}}$$

  13. Speed Up Equation for Pipelining (continued)

  $$\text{CPI}_{\text{pipelined}} = \text{Ideal CPI} + \text{Pipeline stall clock cycles per instruction}$$

  $$\text{Speedup} = \frac{\text{Ideal CPI} \times \text{Pipeline depth}}{\text{Ideal CPI} + \text{Pipeline stall CPI}} \times \frac{\text{Clock cycle}_{\text{unpipelined}}}{\text{Clock cycle}_{\text{pipelined}}}$$

  What is the maximum possible speedup? With Ideal CPI = 1:

  $$\text{Speedup} = \frac{\text{Pipeline depth}}{1 + \text{Pipeline stall CPI}} \times \frac{\text{Clock cycle}_{\text{unpipelined}}}{\text{Clock cycle}_{\text{pipelined}}}$$

  so with zero stall cycles the speedup is bounded by the pipeline depth times the clock-cycle ratio.

  14. Example: Dual-port vs. Single-port
  • Machine A: dual-ported memory
  • Machine B: single-ported memory, but its pipelined implementation has a 1.05× faster clock rate
  • Ideal CPI = 1 for both
  • Loads are 40% of instructions executed (on Machine B, each load costs one stall cycle for the memory-port conflict)

  $$\text{Speedup}_A = \frac{\text{Pipeline depth}}{1 + 0} \times \frac{\text{clock}_{\text{unpipe}}}{\text{clock}_{\text{pipe}}} = \text{Pipeline depth}$$

  (taking the clock-cycle ratio as the common baseline)

  $$\text{Speedup}_B = \frac{\text{Pipeline depth}}{1 + 0.4} \times \frac{\text{clock}_{\text{unpipe}}}{\text{clock}_{\text{pipe}} / 1.05} = \frac{\text{Pipeline depth}}{1.4} \times 1.05 = 0.75 \times \text{Pipeline depth}$$

  $$\frac{\text{Speedup}_A}{\text{Speedup}_B} = \frac{\text{Pipeline depth}}{0.75 \times \text{Pipeline depth}} = 1.33$$

  • Machine A is 1.33 times faster

  15. Data Hazard on R1 (Figure 3.9)

  16. Three Generic Data Hazards
  InstrI followed by InstrJ:
  • Read After Write (RAW): InstrJ tries to read an operand before InstrI writes it
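  A minimal illustration in DLX-style code (the particular registers are hypothetical, not from the slide):

      ADD R1,R2,R3   ; InstrI writes R1 in stage 5 (WB)
      SUB R4,R1,R5   ; InstrJ wants to read R1 in stage 2 (ID) — a RAW hazard on R1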

  17. Three Generic Data Hazards
  InstrI followed by InstrJ:
  • Write After Read (WAR): InstrJ tries to write an operand before InstrI reads it
  • Can't happen in the DLX 5-stage pipeline because:
    • All instructions take 5 stages,
    • Reads are always in stage 2, and
    • Writes are always in stage 5
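  The same kind of illustrative DLX-style pair (hypothetical registers):

      SUB R4,R1,R5   ; InstrI reads R1 in stage 2
      ADD R1,R2,R3   ; InstrJ writes R1 in stage 5 — a WAR hazard only if this write could overtake the read, which the fixed stage ordering rules out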

  18. Three Generic Data Hazards
  InstrI followed by InstrJ:
  • Write After Write (WAW): InstrJ tries to write an operand before InstrI writes it
  • Leaves the wrong result behind (InstrI's value, not InstrJ's)
  • Can't happen in the DLX 5-stage pipeline because:
    • All instructions take 5 stages, and
    • Writes are always in stage 5
  • We will see WAR and WAW in later, more complicated pipes
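  And the corresponding illustrative pair (hypothetical registers):

      LW  R1,0(R2)   ; InstrI writes R1
      ADD R1,R3,R4   ; InstrJ also writes R1 — a WAW hazard only if this write could finish first; impossible here, since all writes occur in stage 5, in order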

  19. Forwarding to Avoid Data Hazard (Figure 3.10)

  20. HW Change for Forwarding (Figure 3.20)

  21. Data Hazard Even with Forwarding (Figure 3.12)

  22. Data Hazard Even with Forwarding (Figure 3.13)

  23. Software Scheduling to Avoid Load Hazards
  Try producing fast code for
      a = b + c;
      d = e - f;
  assuming a, b, c, d, e, and f are in memory.

  Slow code:
      LW  Rb,b
      LW  Rc,c
      ADD Ra,Rb,Rc
      SW  a,Ra
      LW  Re,e
      LW  Rf,f
      SUB Rd,Re,Rf
      SW  d,Rd

  Fast code:
      LW  Rb,b
      LW  Rc,c
      LW  Re,e
      ADD Ra,Rb,Rc
      LW  Rf,f
      SW  a,Ra
      SUB Rd,Re,Rf
      SW  d,Rd
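  Counting stalls under the one-cycle load-use delay shown on slides 21–22 (annotations added here; they are not on the original slide):

      LW  Rc,c
      ADD Ra,Rb,Rc   ; needs Rc in the cycle right after its load → 1 stall
      ...
      LW  Rf,f
      SUB Rd,Re,Rf   ; needs Rf in the cycle right after its load → 1 stall

  The slow code therefore takes two stalls; the fast code interposes an independent load between each load and its use, eliminating both.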

  24. Compiler Avoiding Load Stalls
  Percentage of loads that stall the pipeline, unscheduled vs. scheduled code:

      Benchmark    Unscheduled    Scheduled
      gcc          54%            31%
      spice        42%            14%
      tex          65%            25%

  25. Control Hazard on Branches: Three-Stage Stall

  26. Branch Stall Impact
  • If CPI = 1 and 30% of instructions are branches, stalling 3 cycles per branch pushes the new CPI to 1.9!
  • Two-part solution:
    • Determine whether the branch is taken or not sooner, AND
    • Compute the taken-branch address earlier
  • DLX branches test whether a register = 0 or ≠ 0
  • DLX solution:
    • Move the zero test to the ID/RF stage
    • Add an adder to calculate the new PC in the ID/RF stage
    • Result: a 1-clock-cycle branch penalty instead of 3
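  The arithmetic behind both penalties (a worked check implied by the slide):

  $$\text{CPI} = 1 + 0.3 \times 3 = 1.9 \qquad \text{vs.} \qquad \text{CPI} = 1 + 0.3 \times 1 = 1.3$$

  so moving the zero test and target calculation into ID/RF recovers most of the lost throughput.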

  27. Pipelined DLX Datapath (Figure 3.22)

  28. Four Branch Hazard Alternatives
  #1: Stall until branch direction is clear
  #2: Predict Branch Not Taken
    • Execute successor instructions in sequence
    • "Squash" instructions in the pipeline if the branch is actually taken
    • Advantage of late pipeline state update
    • 47% of DLX branches are not taken on average
    • PC+4 is already calculated, so use it to fetch the next instruction
  #3: Predict Branch Taken
    • 53% of DLX branches are taken on average
    • But the branch target address hasn't been calculated yet in DLX, so DLX still incurs a 1-cycle branch penalty

  29. Four Branch Hazard Alternatives
  #4: Delayed Branch
    • Define the branch to take place AFTER a following instruction:

        branch instruction
        sequential successor_1
        sequential successor_2
        ........
        sequential successor_n    ← branch delay of length n
        branch target if taken

    • A 1-slot delay allows a proper decision and branch-target address calculation in the 5-stage pipeline
    • DLX uses this

  30. Delayed Branch
  • Where to get instructions to fill the branch delay slot? (See the sketch after this list.)
    • From before the branch instruction
    • From the target address: only valuable when the branch is taken
    • From fall-through: only valuable when the branch is not taken
  • Compiler effectiveness for a single branch delay slot:
    • Fills about 60% of branch delay slots
    • About 80% of instructions executed in branch delay slots are useful in computation
    • So about 50% (60% × 80%) of slots are usefully filled
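  A sketch of the "from before the branch" case in DLX-style code (the instructions and label are hypothetical, not from the slide; the ADD is safe to move because it does not touch R2, the register the branch tests):

  Before scheduling:
      ADD  R1,R2,R3
      BEQZ R2,L1      ; the delay slot after the branch would otherwise hold a NOP

  After scheduling:
      BEQZ R2,L1
      ADD  R1,R2,R3   ; fills the delay slot; executes whether or not the branch is taken, and is useful either way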

  31. Pipelining Summary
  • Just overlap tasks; easy if tasks are independent
  • Speedup vs. pipeline depth:

  $$\text{Speedup} = \frac{\text{Pipeline depth}}{1 + \text{Pipeline stall CPI}} \times \frac{\text{Clock cycle}_{\text{unpipelined}}}{\text{Clock cycle}_{\text{pipelined}}}$$

  • Hazards limit performance on computers:
    • Structural: need more HW resources
    • Data: need forwarding, compiler scheduling
    • Control: discuss next time
