Chapter One Introduction to Pipelined Processors


Principle of Designing Pipeline Processors

(Design Problems of Pipeline Processors)

Instruction Prefetch and Branch Handling
  • The instructions in computer programs can be classified into four types:
    • Arithmetic/Load Operations (60%)
    • Store Type Instructions (15%)
    • Branch Type Instructions (5%)
    • Conditional Branch Type (20% in total: Yes – 12%, No – 8%)
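The instruction mix above is what the later branching analysis builds on; a quick sketch (names are mine) checks that the fractions are consistent and derives the branch parameters p and q used later:

```python
# Instruction mix from the slide (fractions of all instructions).
mix = {
    "arithmetic/load": 0.60,
    "store": 0.15,
    "unconditional branch": 0.05,
    "conditional branch taken": 0.12,      # the "Yes" path
    "conditional branch not taken": 0.08,  # the "No" path
}

# The five fractions should cover every instruction exactly once.
assert abs(sum(mix.values()) - 1.0) < 1e-9

# p: probability that an instruction is a conditional branch.
p = mix["conditional branch taken"] + mix["conditional branch not taken"]

# q: probability that a conditional branch is taken ("successful").
q = mix["conditional branch taken"] / p

print(round(p, 2), round(q, 2))  # 0.2 0.6
```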
Instruction Prefetch and Branch Handling
  • Arithmetic/Load Operations (60%):
    • These operations require one or two operand fetches.
    • The execution of different operations requires different numbers of pipeline cycles.
Instruction Prefetch and Branch Handling
  • Store Type Instructions (15%):
    • These require a memory access to store the data.
  • Branch Type Instructions (5%):
    • These correspond to unconditional jumps.
Instruction Prefetch and Branch Handling
  • Conditional Branch Type (Yes – 12% and No – 8%):
    • The Yes path requires calculating the branch target address.
    • The No path proceeds to the next sequential instruction.
Instruction Prefetch and Branch Handling
  • Arithmetic/load and store instructions do not alter the execution order of the program.
  • Branch instructions and interrupts, however, degrade the performance of pipelined computers.
Interrupts
  • When instruction I is being executed, the occurrence of an interrupt postpones instruction I+1 until the interrupt service routine (ISR) has been serviced.
  • There are two types of interrupt:
    • Precise: caused by illegal operation codes; can be detected at the decoding stage.
    • Imprecise: caused by faults from the storage, address and execution functions.
Handling Interrupts
  • Precise: Since decoding is the first stage, instruction I prohibits I+1 from entering the pipeline, and all preceding instructions are executed before the ISR.
  • Imprecise: No new instructions are allowed to enter, and all incomplete instructions, whether they precede or follow I, are executed before the ISR.
Cray-1 System
  • The interrupt system is built around an exchange package.
  • When an interrupt occurs, the Cray-1 saves 8 scalar registers, 8 address registers, the program counter and the monitor flags.
  • These are packed into 16 words and swapped with a block whose address is specified by a hardware exchange address register.
  • Since the exchange package does not hold all state information, the software interrupt handler has to save the remaining state.
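The exchange idea can be sketched as a swap of a fixed-size state block with a block in memory addressed by the exchange address register. This is a toy model only: the field names, the dictionary layout and the `exchange` function are my own illustration, not the Cray-1's actual word format.

```python
def exchange(state, memory, xa):
    """Toy model of an exchange package: swap the saved state block at
    memory[xa] with the current processor state, returning the new state."""
    saved = memory[xa]          # previously saved state becomes current
    memory[xa] = {
        "S": state["S"][:],     # 8 scalar registers
        "A": state["A"][:],     # 8 address registers
        "PC": state["PC"],      # program counter
        "flags": state["flags"] # monitor flags
    }
    return saved

# Interrupt arrives while running at PC=100; the handler's state lives at 0x40.
state = {"S": [0] * 8, "A": [0] * 8, "PC": 100, "flags": 0}
memory = {0x40: {"S": [1] * 8, "A": [2] * 8, "PC": 200, "flags": 1}}
new_state = exchange(state, memory, 0x40)
print(new_state["PC"], memory[0x40]["PC"])  # 200 100
```

The software handler would then save whatever state the package does not cover, as the slide notes.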
Instruction Prefetch and Branch Handling
  • In general, the higher the percentage of branch-type instructions in a program, the slower the program will run on a pipelined processor.
Effect of Branching on Pipeline Performance
  • Consider a linear pipeline of 5 stages:

Fetch Instruction → Decode → Fetch Operands → Execute → Store Results
Estimation of the effect of branching
  • Consider an instruction cycle with n pipeline clock periods.
  • Let
    • p – probability that an instruction is a conditional branch (p = 0.20)
    • q – probability that a conditional branch is successful, i.e. taken (q = 12/20 = 0.6)
Estimation of the effect of branching
  • Suppose there are m instructions.
  • Then the number of successful branches = m × p × q (= m × 0.2 × 0.6).
  • A delay of (n-1)/n of an instruction cycle is required for each successful branch to flush the pipeline.
Estimation of the effect of branching
  • Thus, the total number of instruction cycles required for m instructions is
    1 + (m-1)/n + pqm(n-1)/n
    (the first instruction takes one full instruction cycle, each of the remaining m-1 instructions adds 1/n of a cycle, and each of the pqm successful branches adds a flush delay of (n-1)/n).
Estimation of the effect of branching
  • As m becomes large, the average number of instructions per instruction cycle is given as
    m / [1 + (m-1)/n + pqm(n-1)/n] → n / [1 + pq(n-1)]   as m → ∞
Estimation of the effect of branching
  • When p = 0, the above measure reduces to n, which is the ideal case.
  • In reality, it is always less than n.
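Putting this model into a short script (a sketch; function and variable names are mine) confirms that the average approaches n / [1 + pq(n-1)] for large m, and approaches the ideal value n when p = 0:

```python
def avg_instructions_per_cycle(n, p, q, m):
    """Average number of instructions completed per instruction cycle for
    m instructions on an n-stage pipeline, with conditional-branch
    probability p and taken-branch probability q (the slide's model)."""
    # Total instruction cycles: 1 for the first instruction, 1/n for each
    # of the remaining m-1, plus a (n-1)/n flush delay per taken branch.
    total_cycles = 1 + (m - 1) / n + p * q * m * (n - 1) / n
    return m / total_cycles

n, p, q = 5, 0.20, 0.60
limit = n / (1 + p * q * (n - 1))               # asymptotic value, ~3.38
print(avg_instructions_per_cycle(n, p, q, m=10**6))  # close to the limit
print(avg_instructions_per_cycle(n, 0.0, q, m=10**6))  # p = 0: close to n
```

With the slide's numbers (n = 5, p = 0.2, q = 0.6), branching drags throughput down from the ideal 5 instructions per cycle to about 3.38.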
Multiple Prefetch Buffers
  • Buffers can be used to match the instruction fetch rate to the pipeline consumption rate.
  • Sequential buffers: hold instructions fetched in sequence (for in-sequence pipelining).
  • Target buffers: hold instructions fetched from a branch target (for out-of-sequence pipelining).
Multiple Prefetch Buffers
  • A conditional branch causes both the sequential and the target buffers to fill; once the condition is resolved, one buffer is selected and the other is discarded.
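A minimal sketch of that fill-then-select behavior (the function name, buffer depth and instruction labels are made up for illustration):

```python
def fetch_through_branch(program, branch_pc, target_pc, taken, depth=4):
    """Toy model of dual prefetch buffers around one conditional branch.

    While the branch at branch_pc is unresolved, instructions are fetched
    into BOTH buffers: the sequential buffer from branch_pc + 1 onward and
    the target buffer from target_pc onward. Once the outcome is known,
    one buffer feeds the pipeline and the other is discarded."""
    sequential = program[branch_pc + 1 : branch_pc + 1 + depth]
    target = program[target_pc : target_pc + depth]
    return target if taken else sequential  # the losing buffer is dropped

program = [f"I{i}" for i in range(12)]
print(fetch_through_branch(program, branch_pc=2, target_pc=8, taken=True))
# ['I8', 'I9', 'I10', 'I11']
print(fetch_through_branch(program, branch_pc=2, target_pc=8, taken=False))
# ['I3', 'I4', 'I5', 'I6']
```

Either way the pipeline is fed without waiting for a fresh fetch after the branch resolves, which is the point of buffering both paths.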
Speeding up of pipeline segments
  • The processing speeds of pipeline segments are usually unequal.
  • Consider the example below: three segments in series, with delays T1, T2 and T3:

S1 (T1) → S2 (T2) → S3 (T3)
Speeding up of pipeline segments
  • If T1 = T3 = T and T2 = 3T, then S2 becomes the bottleneck and we need to remove it.
  • How?
  • One method is to subdivide the bottleneck.
  • Two subdivisions are possible:
Speeding up of pipeline segments
  • First Method: subdivide S2 into two subsegments, with delays T and 2T:

S1 (T) → S2a (T) → S2b (2T) → S3 (T)
Speeding up of pipeline segments
  • Second Method: subdivide S2 into three subsegments of delay T each:

S1 (T) → S2a (T) → S2b (T) → S2c (T) → S3 (T)
Speeding up of pipeline segments
  • If the bottleneck is not sub-divisible, we can duplicate S2 in parallel:

          ┌ S2 (3T) ┐
S1 (T) →  ├ S2 (3T) ┤ → S3 (T)
          └ S2 (3T) ┘
Speeding up of pipeline segments
  • Control and synchronization are more complex with parallel segments.
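The throughput gain from both remedies can be checked with a small sketch (function names are mine), modeling a linear pipeline's steady-state rate as one result per slowest-stage delay, and replication as dividing the bottleneck's effective service time:

```python
def pipeline_throughput(stage_times):
    """Steady-state throughput of a linear pipeline: the clock period is
    the slowest stage's delay, so results emerge once per that delay."""
    return 1.0 / max(stage_times)

def duplicated_throughput(stage_times, dup_stage, copies):
    """Throughput when one stage is replicated in parallel: the replicated
    stage's effective service time drops by the replication factor."""
    times = list(stage_times)
    times[dup_stage] /= copies
    return 1.0 / max(times)

T = 1.0
original   = pipeline_throughput([T, 3 * T, T])          # bottleneck: 3T
subdivided = pipeline_throughput([T, T, T, T, T])        # second method
triplicate = duplicated_throughput([T, 3 * T, T], 1, 3)  # parallel copies
print(original, subdivided, triplicate)  # ~0.33, then 1.0 and 1.0
```

Both the three-way subdivision and the three-way duplication restore a result every T, tripling the original rate of one result every 3T, at the cost of the extra control logic the slide mentions.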
Data Buffering
  • Instruction and data buffering provides a continuous flow of work to the pipeline units.
  • Example: 4X TI ASC
Example: 4X TI ASC
  • This system uses a memory buffer unit (MBU), which:
    • supplies the arithmetic unit with a continuous stream of operands
    • stores results back into memory
  • The MBU has three double buffers X, Y and Z (one octet per buffer):
    • X and Y for input, Z for output
Example: 4X TI ASC
  • This provides pipeline processing at a high rate and alleviates the bandwidth mismatch between memory and the arithmetic pipeline.
Busing Structures
  • Problem: Ideally, the subfunctions in a pipeline should be independent; otherwise the pipeline must be halted until the dependency is removed.
  • Solution: an efficient internal busing structure.
  • Example: TI ASC
Example: TI ASC
  • In the TI ASC, once an instruction dependency is recognized, an update capability is provided by transferring the contents of the Z buffer to the X or Y buffer.