
High-Level Design Strategies for Circuit Design

These lecture notes outline different solution approaches for circuit design problems, including truth-table vs. computational/algorithmic approaches and divide-and-conquer strategies. They also include a specific 8-bit comparator design problem and its solutions.



  1. ECE 465 High Level Design Strategies Lecture Notes # 9 Shantanu Dutt Electrical & Computer Engineering University of Illinois at Chicago

  2. Outline • Circuit Design Problem • Solution Approaches: • Truth Table (TT) vs. Computational/Algorithmic – Yes, hardware, just like software can implement any algorithm (after all software runs on hardware)! • Flat vs. Divide-&-Conquer • Divide-&-Conquer: • Associative operations/functions • General operations/functions • Other Design Strategies for fast circuits: • Design for all cases • Speculative computation • Best of both worlds (best average and best worst-case) • Pipelining • Summary

  3. Circuit Design Problem • Design an 8-bit comparator that compares two 8-bit #s available in two registers A[7..0] and B[7..0], and that o/ps F = 1 if A > B and F = 0 if A <= B. • Approach 1: The TT approach -- Write down a 16-input TT, derive a logic expression from it, minimize it, obtain a gate-based realization, etc.!
  A        B        F
  00000000 00000000 0
  00000000 00000001 0
  -        -        -
  00000001 00000000 1
  -        -        -
  11111111 11111111 0
  • Too cumbersome and time-consuming • Fraught with possibility of human error • Difficult to formally prove correctness (i.e., proof w/o exhaustive testing) • Will generally have high hardware cost (including wiring, which can be unstructured and messy) and delay

  4. Circuit Design Problem (contd) • Approach 2: Think computationally/algorithmically about what the ckt is supposed to compute: • Approach 2(a): Flat computational/programming approach: • Note: A TT can be expressed as a sequence of “if-then-else’s” • If A = 00000000 and B = 00000000 then F = 0 else if A = 00000000 and B = 00000001 then F=0 ………. else if A = 00000001 and B = 00000000 then F=1 ………. • Essentially a re-hashing of the TT – same problems as the TT approach

  5. Stitch-up of solns to A1 and A2 to form the complete soln to A Root problem A Subprob. A1 Subprob. A2 A2,2 A1,1 A1,2 A2,1 Circuit Design Problem: Strategy 1: Divide-&-Conquer • Approach 2(b): Structured algorithmic approach: • Be more innovative, think of the structure/properties of the problem that can be used to solve it in a hierarchical or divide-&-conquer (D&C) manner: Data dependency? Legend: : D&C breakup arrows : data/signal flow to solve a higher-level problem : possible data-flow betw. sub-problems • D&C approach: See if the problem can be: • “broken up” into 2 or more smaller subproblems of the same or different type(s): two kinds of breaks possible • by # of operands: partition set of n operands into 2 or more subsets of operands (e.g., adding n numbers) • by operand size: breaking a constant # of n-bit operands into smaller-size operands (this mainly applies when the # of operands is a constant, e.g., add/mult of 2 #s) • whose solns can be “stitched up” (by a stitch-up function) to give a soln. to the parent problem • also, consider if there is dependency between the sub-probs (results of some required to solve the other(s)) • Do this recursively for each subprob until subprobs are small enough (the leaf problem) for TT solutions • If the subproblems are of a similar kind (but of smaller size) to the root problem then the breakup and stitching will also be similar, but if not, they have to be broken up differently Do recursively until subprob-size is s.t. TT-based design is doable

  6. Circuit Design Problem: Strategy 1: Divide-&-Conquer • Especially for D&C breakups in which: a) the subproblems are the same problem type as the root problem, and b) there is no data dependency between subproblems, the final circuit will be a “tree” of stitch-up functions (of either the same size or different sizes at different levels depending on the problem) with leaf functions at the bottom of the tree, as shown in the figure below for a 2-way breakup of each problem/sub-problem. Level 1 Note: breaking an n-bit/n-operand problem into a 2-bit/2-operand problem ⇒ (log n)-1 levels of breakups and (log n) levels of logic nodes: leaf functions (1 level) and stitch-ups ((log n)-1 levels). Level 2 Stitch-up functions Level (log n), n = # of leaf nodes 2-i/ps 2-i/ps 2-i/ps • A tree is an interconnection structure with nodes and edges/arcs connecting the nodes, so that the nodes can be arranged in a levelized manner such that each node is connected to a unique node called its parent at a higher level (generally a lower #’ed level, where the top level is numbered level 1, and the bottom or leaf level has the highest level #). A binary tree is one in which each node has at most two children (leaf nodes have none). • Solving a problem using D&C generally yields a fast, low-cost and streamlined design (wiring required is structured and not all jumbled up and messy). 2-i/ps Leaf functions

  7. (a) A linearly-connected circuit of 2-i/p XORs x(0) x(1) x(2) x(3) x(15) f Shift Gears: Design of a Parity Detection Circuit—An n-input XOR • No concurrency in design (a)---the actual problem has available concurrency, though, and it is not exploited well in the above “linear” design • Complete sequentialization leading to a delay that is linear in the # of bits n (delay = (n-1)*td), td = delay of 1 gate • All the available concurrency is exploited in design (b)---a parity tree (see next slide). • Question: When can we have a circuit for an operation/function on multiple operands built of “gates” performing the same operation for fewer (generally a small number betw. 2-5) operands? • Answer: When the operation is associative. An oper. “x” is said to be associative if: a x b x c = (a x b) x c = a x (b x c) OR stated as a function f(a, b, c) = f(f(a,b), c) = f(a, f(b,c)) • Note: An operation/function that is not associative (e.g., NAND of n bits/operands), can still be broken up into smaller operations, just not the same type as the original operation • Associativity implies that, for example, if we have 4 operands a x b x c x d = f(a,b,c,d), we can either perform this as: • a x (b x (c x d)) [getting a linear delay of 3 units or in general n-1 units for n operands] i.e., in terms of function notation: f(a, f(b, f(c,d))) • or as (a x b) x (c x d) [getting a logarithmic (base 2) delay of 2 units and exploiting the available concurrency due to the fact that “x” is associative] i.e., in terms of function notation: f(f(a,b), f(c,d)) • Is XOR associative? Yes. • The parenthesisation corresp. to the above ckt is: • (…..(((x(0) xor x(1))xor x(2))xor x(3))xor …. xor x(15)) • All these Qs can be answered “automatically” by the D&C approach
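The two parenthesizations above can be mimicked in software; the following Python sketch (ours, not from the slides) shows that the linear chain and the balanced tree compute the same parity, while the tree needs only log2(n) gate levels:

```python
from functools import reduce

def parity_linear(bits):
    """Left-to-right fold x(0) xor x(1) xor ... -- models the (n-1)-gate chain."""
    return reduce(lambda a, b: a ^ b, bits)

def parity_tree(bits):
    """Balanced D&C: xor the parities of the two halves -- log2(n) gate levels."""
    if len(bits) == 1:
        return bits[0]
    mid = len(bits) // 2
    return parity_tree(bits[:mid]) ^ parity_tree(bits[mid:])

def tree_levels(n):
    """Number of 2-i/p XOR levels in the balanced tree for n inputs."""
    levels = 0
    while n > 1:
        n = (n + 1) // 2
        levels += 1
    return levels
```

For 16 inputs the chain costs 15 gate delays, while `tree_levels(16)` gives 4 levels; both versions use the same n-1 two-input gates.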

  8. x(14) x(1) x(0) x(15) w(3,1) w(3,5) w(3,7) w(3,3) w(3,0) w(3,6) w(3,4) w(3,2) w(2,3) w(2,1) w(2,0) w(2,2) w(1,1) w(1,0) w(0,0) = f Shift Gears: Design of a Parity Detection Circuit—A Series of XORs • if we have 4 operands a x b x c x d, we can either perform this as a x (b x (c x d)) [getting a linear delay of 3 units] or as (a x b) x (c x d) [getting a logarithmic (base 2) delay of 2 units and exploiting the available concurrency due to the fact that “x” is associative]. • We can extend this idea to n operands (and n-1 operations) to perform as many of the pairwise operations as possible in parallel (and do this recursively for every level of remaining operations), similar to design (b) for the parity detector [xor is an associative operation!] and thus get a (log2 n) delay. • In fact, any parenthesisation of operands is correct for an associative operation/function, but the above one is fastest. Surprisingly, any parenthesisation leads to the same h/w cost: n-1 2-i/p gates, i.e., 2(n-1) gate i/ps. Why? Analyze. Delay = (# of levels in XOR tree) * td = log2 (n) *td An example of simple designer ingenuity. A bad design would have resulted in a linear delay, an ingenious (simple enough though) & well-informed design results in a log delay, and both have the same gate i/p cost (b) 16-bit parity tree Parenthesisation of tree-circuit: (((x(15) xor x(14)) xor (x(13) xor x(12))) xor ((x(11) xor x(10)) xor (x(9) xor x(8)))) xor (((x(7) xor x(6)) xor (x(5) xor x(4))) xor ((x(3) xor x(2)) xor (x(1) xor x(0))))

  9. D&C for Associative Operations • Let f(x(n-1), ….., x(0)) be an associative function. Note that this is a many-operand function—associativity only makes sense for many (> 2) operands. • Can the D&C approach be used to yield an efficient, streamlined n-bit xor/parity function w/o having to go through an involved process as we saw for the parity detector? Can it lead automatically to a tree-based ckt? • What is the D&C principle involved here? f(x(n-1), .., x(0)) Stitch-up function---same as the original function for 2 inputs, i.e., f(x(n-1), .., x(0)) = f(f(x(n-1), .., x(n/2)), f(x(n/2-1), .., x(0))) f(a,b) a b f(x(n-1), .., x(n/2)) f(x(n/2-1), .., x(0)) • Using the D&C approach for an associative operation results in a breakup by # of operands and the stitch up function being the same as the original function (this is not the case for non-assoc. operations), but w/ a constant # of operands (2, if the original problem is broken into 2 subproblems); see the formulation in the above figure. • Also, there are no dependencies between sub-problems • If the two sub-problems of the D&C approach are balanced (of the same size or as close to it as possible), then unfolding the D&C results in a balanced operation tree of the type for the xor/parity function seen earlier of (log n) delay
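The formulation in the figure, f(x(n-1), .., x(0)) = f(f(x(n-1), .., x(n/2)), f(x(n/2-1), .., x(0))), is the same for every associative operation, so the whole pattern can be captured once; a generic sketch (our naming, not from the slides):

```python
def balanced_reduce(op, operands):
    """D&C for an associative 2-input operation: split the operand set in
    half, solve each half, and stitch up with the same op on 2 inputs."""
    if len(operands) == 1:
        return operands[0]
    mid = len(operands) // 2
    return op(balanced_reduce(op, operands[:mid]),
              balanced_reduce(op, operands[mid:]))
```

Any associative `op` (xor, add, max, AND, OR) can be plugged in; the unfolded recursion is exactly the balanced operation tree of (log n) delay described above.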

  10. x(14) x(1) x(0) x(15) w(3,1) w(3,7) w(3,3) w(3,5) w(3,0) w(3,2) w(3,4) w(3,6) w(2,3) w(2,1) w(2,0) w(2,2) w(1,1) w(1,0) w(0,0) = f D&C for Associative Operations (cont’d) • Parity detector example stitch-up function = 2-bit parity/xor Breakup by operands 8-bit parity 8-bit parity Delay = (# of levels in XOR tree) * td = log2 (n) *td 16-bit parity

  11. Divide-&-Conquer: More on Breaking by Operand Size Legend: : D&C breakup arrows : Operand breakup arrows Root Function F(n) A(n bits) B(n bits) Operand breakup A(MSB n/2) A(LSB n/2) B(MSB n/2) B(LSB n/2) Problem breakup possibilities Subprob Function 3 F(n/2) Subprob Function 2 F(n/2) Subprob Function 1 F(n/2) Subprob Function 4 F(n/2) A(MSB n/2) A(LSB n/2) A(MSB n/2) A(LSB n/2) B(LSB n/2) B(LSB n/2) B(MSB n/2) B(MSB n/2) Possible subproblems: 2-4 of them will be actual subproblems of F depending on the functionality of F • The subproblems can be anywhere between 2-4 of the above possible 4 subproblems • This will depend on the problem F being solved: need to think through the problem, and determine its properties to analytically arrive at what the problem breakup needs to be

  12. If A1,1 result is > or < take A1,1 result else take A1,2 result If A1,1,1 result is > or < take A1,1,1 result else take A1,1,2 result If A1 result is > or < take A1 result else take A2 result D&C Approach for a 2-Operand Function: n-bit > Comparator • O/P = 1 if A > B, else 0 • Is this associative? Issue of associativity mainly applies for n operands, not on the n-bits of 2 operands • For a non-associative function, determine its properties that allow determining a break-up & a correct stitch-up function • Useful property: At any level, comp. of MS (most significant) half determines o/p if result is > or < else comp. of LS ½ determines o/p • Can thus break up problem at any level into MS ½ and LS ½ comparisons & based on their results determine which o/p to choose for the higher-level (parent) result • However, need to solve an extended version of the root problem in the sub-probs to be able to realize the stitch-up function: need to think through the problem almost from scratch—no one-size-fits-all recipe! • No sub-problem dependency A Comp. A[7..0],B[7..0] Stitch-up of solns to A1 and A2 to form the complete soln to A A1 A2 Comp A[7..4],B[7..4] Comp A[3..0],B[3..0] A1,2 A1,1 Comp A[5..4],B[5..4] Comp A[7..6],B[7..6] Breakup by size/bits A1,1,1 A1,1,2 Comp A[6],B[6] Comp A[7],B[7] Small enough to be designed using a TT

  13. The TT may be derived directly or by first thinking of and expressing its computation in a high-level programming language and then converting it to a TT. If A1,1 result is > or < take A1,1 result else take A1,2 result If A1,1,1 result is > or < take A1,1,1 result else take A1,1,2 result If A1 result is > or < take A1 result else take A2 result
If A[i] = B[i] then { f1(i) = 0; f2(i) = 1 } /* f2(i) o/p is an i/p to the stitch logic */ /* f2(i) = 1 means the f1( ), f2( ) o/ps of the LS ½ of this subtree should be selected by the stitch logic as the parent’s o/ps */
else if A[i] < B[i] then { f1(i) = 0; /* indicates < */ f2(i) = 0 } /* indicates f1(i), f2(i) o/ps should be selected as parent’s o/p */
else if A[i] > B[i] then { f1(i) = 1; /* indicates > */ f2(i) = 0 } /* indicates f1(i), f2(i) o/ps should be selected as parent’s o/p */
A[i] B[i] | f1(i) f2(i)
 0    0   |   0     1
 0    1   |   0     0
 1    0   |   1     0
 1    1   |   0     1
D&C Approach for a 2-Operand Function: n-bit > Comparator (cont’d) A Comp. A[7..0],B[7..0] Stitch-up of solns to A1 and A2 to form the complete soln to A A2 A1 Comp A[7..4],B[7..4] Comp A[3..0],B[3..0] A1,2 A1,1 Comp A[5..4],B[5..4] Comp A[7..6],B[7..6] Breakup by size/bits A1,1,1 A1,1,2 Comp A[6],B[6] Comp A[7],B[7] Small enough to be designed using a TT (2-bit 2-o/p comparator)
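The breakup and the (f1, f2) encoding can be exercised in software before committing to gates; a Python sketch of the recursion (function names are ours, not from the slides):

```python
def comp1(a, b):
    """Leaf 1-bit comparator (the TT above): f1 = 'A > B', f2 = 'equal, defer'."""
    return (1 if a > b else 0, 1 if a == b else 0)

def comp(A, B):
    """D&C comparator on equal-length bit lists, MSB first.
    Stitch-up: take the MS-half result unless it reports equal (f2 = 1),
    in which case the LS-half result decides."""
    if len(A) == 1:
        return comp1(A[0], B[0])
    mid = len(A) // 2
    ms = comp(A[:mid], B[:mid])
    ls = comp(A[mid:], B[mid:])
    return ls if ms[1] else ms

def bits(x, n):
    """n-bit big-endian expansion; a test helper."""
    return [(x >> i) & 1 for i in range(n - 1, -1, -1)]
```

At the root, F is simply the f1 output; f2 = 1 at the root means A = B, so F = 0 as required.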

  14. Stitch up logic details for subprobs i & i-1: If f2(i) = 0 then { my_op1=f1(i); my_op2=f2(i) } /* select MS ½ comp o/ps */ else /* select LS ½ comp. o/ps */ { my_op1=f1(i-1); my_op2=f2(i-1) } my_op1 my_op2 If A1,1 result is > or < take A1,1 result else take A1,2 result If A1 result is > or < take A1 result else take A2 result If A1,1,1 result is > or < take A1,1,1 result else take A1,1,2 result my_op 2 Stitch-up logic 2-bit 2:1 Mux f2(i)
A[i] B[i] | f1(i) f2(i)
 0    0   |   0     1
 0    1   |   0     0
 1    0   |   1     0
 1    1   |   0     1
f1(i) f2(i) f1(i-1) f2(i-1) | my_op1 my_op2
  X     0     X       X     | f1(i)   f2(i)
  X     1     X       X     | f1(i-1) f2(i-1)
I1 I0 2 2 f(i)=f1(i),f2(i) f(i-1) f1(i-1) f2(i-1) f1(i) f2(i) (Direct design) Comparator Circuit Design Using D&C (contd.) • Once the D&C tree is formulated it is easy to get the low-level & stitch-up designs • Stitch-up design shown here A Comp. A[7..0],B[7..0] Stitch-up of solns to A1 and A2 to form the complete soln to A A1 A2 Comp A[7..4],B[7..4] Comp A[3..0],B[3..0] A1,2 A1,1 Comp A[5..4],B[5..4] Comp A[7..6],B[7..6] A1,1,1 A1,1,2 Comp A[6],B[6] Comp A[7],B[7] OR (Compact TT)

  15. F= my1(6) 1-bit 2:1 Mux I1 I0 my(1) my(0) my(2) my(3) 2 2 2 2 2-bit 2:1 Mux 2-bit 2:1 Mux 2-bit 2:1 Mux 2-bit 2:1 Mux f2(7) = f(7)(2) f(3)(2) f(1)(2) f(5)(2) I1 I1 I1 I1 I0 I0 I0 I0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 f(7) f(6) f(5) f(4) f(1) f(2) f(0) f(3) 1-bit comparator 1-bit comparator 1-bit comparator 1-bit comparator 1-bit comparator 1-bit comparator 1-bit comparator 1-bit comparator A[1] B[1] A[6] B[6] A[0] B[0] A[5] B[5] A[2] B[2] A[4] B[4] A[3] B[3] A[7] B[7] Comparator Circuit Design Using D&C – Final Design • Delay(8-bit comp.) = 3*(delay of 2:1 Mux) + delay of 1-bit comp. • Note parallelism at work – multiple logic blocks are processing simult. • Delay(n-bit comp.) = (log n)*(delay of 2:1 Mux) + delay of 1-bit comp. • H/W_cost(8-bit comp.) = 7*(H/W_cost(2:1 Muxes)) + 8*(H/W_cost(1-bit comp.)) • H/W_cost(n-bit comp.) = (n-1)*(H/W_cost(2:1 Muxes)) + n*(H/W_cost(1-bit comp.)) my(5)(2) Critical path (all paths in this ckt are critical) my(4)(1) my(5)(1) my(4) my(5) 2 2 Log n levels of Muxes 2-bit 2:1 Mux 2-bit 2:1 Mux my(1)(2) my(3)(2) I1 I1 I0 I0 2 2
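The delay and cost formulas above follow directly from the tree shape (log2(n) mux levels over n leaves, n-1 internal nodes); a small sketch that evaluates them for arbitrary unit delays/costs (parameter names are ours):

```python
import math

def comp_delay(n, t_mux=1, t_leaf=1):
    """Delay of the n-bit D&C comparator: log2(n) stitch-up mux levels
    after the leaf comparators, which all operate in parallel."""
    return int(math.log2(n)) * t_mux + t_leaf

def comp_cost(n, c_mux=1, c_leaf=1):
    """Hardware cost: a binary tree of n-1 stitch-up muxes over n leaves."""
    return (n - 1) * c_mux + n * c_leaf
```

With unit delays this reproduces the slide's figures: 3 mux levels plus one leaf delay for n = 8, and 7 muxes plus 8 one-bit comparators of hardware.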

  16. n-1 2:1 Mux Sn-1 I n-1 n 2 2^(n-1):1 MUX Sn-2 S0 Mux Design Using D&C 2^n:1 mux problem: When control inputs = j, 0 <= j <= 2^n - 1, input Ij is connected to the output. I0 I0 All bits except msb should have different combinations; msb should be at a constant value (here 0) Stitch-up 2^n:1 MUX Breakup by operands (data) Simultaneous breakup by bits (select) MSB value should differ among these 2 groups All bits except msb should have different combinations; msb should be at a constant value (here 1) 2^(n-1):1 MUX Two sets of operands: Data operands (2^n) and control/select operand (n bits) Sn-2 Sn-1 S0 S0 (a) Top-Down design (D&C)
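The top-down breakup (MSB of the select chooses between two half-size muxes) can be sketched as a recursion (our illustration, not from the slides):

```python
def mux(inputs, sel):
    """2^n:1 mux by top-down D&C: the MSB of the select picks between two
    2^(n-1):1 sub-muxes; a 1-input mux is just a wire."""
    if len(inputs) == 1:
        return inputs[0]
    half = len(inputs) // 2
    # sel is a bit list, MSB (Sn-1) first; inputs are I0 .. I(2^n - 1)
    chosen = inputs[half:] if sel[0] else inputs[:half]
    return mux(chosen, sel[1:])
```

For select value j (read as a binary number, MSB first), input Ij reaches the output, matching the problem statement above.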

  17. I0 I1 I2 I3 I4 I5 I6 I7 Z 8:1 MUX 2:1 MUX 2:1 MUX 2:1 MUX 2:1 MUX S2 S1 S0 S2 S1 S0 Opening up the 8:1 MUX’s hierarchical design and a top-down view All bits except msb should have different combinations; msb should be at a constant value (here 0) 4:1 Mux I0 I0 2:1 MUX Selected when S0 = 0, S1 = 1, S2=1 I1 MSB value should differ among these 2 groups S0 I2 I2 I2 2:1 MUX I6 I3 S1 Z I6 S0 I4 I4 I5 S0 Selected when S0 = 0, S1 = 1. These i/ps should differ in S2 I6 I6 2:1 MUX I7 4:1 Mux All bits except msb should have different combinations; msb should be at a constant value (here 1) • Cost: Number of 2:1 muxes? • Delay in number of 2:1 mux delay unit?

  18. 2:1 S0 2:1 2^(n-1) 2:1 MUXes S0 2:1 2^(n-1):1 MUX S0 (b) Bottom-Up (“Reduce-and-Accumulate”) Sn-1 S1 Top-Down vs Bottom-Up: Mux Design I0 I1 I2 I3 2 • Generally better to try top-down (D&C) first. For example, it will be much more difficult to solve the comparator problem bottom-up.

  19. Selected when S0 = 0 Z Z An 8:1 MUX example (bottom-up) I0 I0 2:1 MUX I1 S0 I1 I2 I2 2:1 MUX I3 4:1 MUX I0 I1 I2 I3 I4 I5 I6 I7 I3 I5 8:1 MUX S0 I4 I4 2:1 MUX I5 S2 S1 S0 I6 I6 S2 S1 S0 2:1 MUX I7 I7 These inputs should have different lsb or S0 values, since their sel. is based on S0 (all other remaining, i.e., unselected bit values should be the same). Similarly for other i/p pairs at 2:1 Muxes at this level. S0 Selected when S0 = 1

  20. Multiplier D&C + + PL(n2n) PM2(n2n) PM1(n2n) PH(n2n) AXB: n-bit mult Breakup by bits (operand size) Stitch up: Align and Add = 2^n*W + 2^(n/2)*X + 2^(n/2)*Y + Z n n n n X Y W Z AhXBh: (n/2)-bit mult AhXBl: (n/2)-bit mult AlXBh: (n/2)-bit mult AlXBl: (n/2)-bit mult • Multiplication D&C idea: • A x B = (2^(n/2)*Ah + Al)(2^(n/2)*Bh + Bl), where Ah is the higher n/2 bits of A, and Al the lower n/2 bits = 2^n*Ah*Bh + 2^(n/2)*Ah*Bl + 2^(n/2)*Al*Bh + Al*Bl = PH + PM1 + PM2 + PL • Example: • 10111001 = 185 X 00100111 = 39 = 0001110000101111 = 7215 • D&C breakup: (10111001) X (00100111) = (2^4(1011) + 1001) X (2^4(0010) + 0111) = 2^8(1011 X 0010) + 2^4(1011 X 0111 + 1001 X 0010) + 1001 X 0111 = 2^8(00010110) + 2^4(01001101 + 00010010) + 00111111 • = bbbbbbbb00111111 = PL + bbbb01001101bbbb = PM1 + bbbb00010010bbbb = PM2 + 00010110bbbbbbbb = PH _____________________ 0001110000101111 = 7215 Cost = 3 2n-bit adders = 6n FAs (full adders) for RCAs (ripple-carry adders) Stitch-Up Design 1 (inefficient) 2n-bit adders + Critical path: Delay (using RCAs) = (a) too high-level analysis: 2*((2n)-bit adder delay) = 4n*(FA delay) (b) More exact considering overall critical path: (i+2n-i+1) = 2n+1 FA delays What is the delay of the n-bit multiplier using such a stitch up (# 1)? 2n
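The operand-size breakup A*B = 2^n*Ah*Bh + 2^(n/2)*(Ah*Bl + Al*Bh) + Al*Bl can be run directly as a recursive program; a sketch (our code, with the 1-bit multiplier as the AND-gate leaf described in the slides):

```python
def mult_dc(A, B, n):
    """n-bit multiply by operand-size D&C (n a power of 2):
    four (n/2)-bit sub-multiplies, stitched up by align-and-add."""
    if n == 1:
        return A & B                       # 1-bit multiplier = 2-i/p AND gate
    h = n // 2
    Ah, Al = A >> h, A & ((1 << h) - 1)
    Bh, Bl = B >> h, B & ((1 << h) - 1)
    PH  = mult_dc(Ah, Bh, h)
    PM1 = mult_dc(Ah, Bl, h)
    PM2 = mult_dc(Al, Bh, h)
    PL  = mult_dc(Al, Bl, h)
    return (PH << n) + ((PM1 + PM2) << h) + PL   # stitch-up: align and add
```

Running it on the slide's example, `mult_dc(185, 39, 8)` reproduces 7215.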

  21. FA7 carry o/p c(i+1) = a(i)b(i) + a(i)c(i) + b(i)c(i) is 5n i/p delay units FA7 z7 z6 z5 z4 z3 z2 z1 z0 FA7 Critical paths (3 of n) going through (n+1) FAs Delay for adding 3 numbers X, Y, Z using two RCAs? Ans: (n+1) FA delay units or 5(n+1) i/p delay units

  22. Multiplier D&C (cont’d) Stitch-Up Design 2 (efficient) • Ex: 10111001 = 185 X 00100111 = 39 = 0001110000101111 = 7215 D&C breakup: (10111001) X (00100111) = (24(1011) + 1001) X (24(0010) + 0111) = 28(1011 X 0010) + 24(1011 x 0111 + 1001 X 0010) + 1001 X 0111 = 28(00010110) + 24(01001101 + 00010010) + 00111111 = bbbbbbbb00111111 = PL + bbbb01001101bbbb = PM1 + bbbb00010010bbbb = PM2 + 00010110bbbbbbbb = PH _____________________ 0001110000101111 = 7215 n + PL + PM1 @ del=n/2 (Arrows in adds on the left show Couts of lower-order adds propagating as Cin to next higher-order adds) cin + PM2 Cout 000Cin (n/2)-bit adders @ del=n/2+1 Cost = 5 (n/2)-bit Adders = 2.5 n FAs for RCAs cin @ del=2[n/2] + Intermediate Sums cin Critical path: Delay = 3*((n/2)+1)-bit adder delay) = (1.5n+1)*(FA delay) for RCAs + PH 00 ….0 Cin Cin @ del=2[n/2] +1 lsb @ del=n/2 +2 lsb of MS half @ del=n/2+2 @ del=3[n/2] +1 n/2 n/2 n/2 n/2

  23. Multiplier D&C (cont’d) Stitch-Up Design 2 (efficient) • The (1.5n + 1) FA delay units is the delay assuming PL … PH have been computed. • What is the delay of the entire multiplier? • Does the stitch-up need to wait for all bits of PL … PH to be available before it “starts”? • The stitch up of a level can start when the lsb of the msb half of the product bits of each of the 4 products PL … PH are available: for the top level this is at n/4 + 2 after the previous level’s such input is avail (= when lsb of msb half of i/p n-bit prod. avail; see analysis for 2n-bit product in figure) • Using RCAs: (n-1) [this is delay after lsb of msb half avail. at top level o/p] + { (n/2 +2) + (n/4 +2) + … + (2+2) (stopping at 4-bit mult) + 2 [this is boundary-case 2-bit mult delay at bit 3—lsb of msb half of 4-bit product] + 1/3 [this is the delay of a 2-i/p AND gate translated in terms of FA delay units which, using 2-i/p gates, is 3 2-i/p gate delays] } • = (n-1) + {(1/2)[Σ_{i=0 to log n} 2^i] + 2 log n – 1.17} [corrective term: -[½(2^1+2^0) – 1/3] for taking prev. summation up to i=1,0] = n-1 + (1/2)[2n-1] + 2 log n - 1.17 ~ 2(n+log n) ~ Θ(2n) FA delays—similar to the well-known array multiplier that uses carry-save adders • Why do we need 2 FA delay units for a 2-bit mult after 4 1-bit prods of 1-bit mults avail? n + PL + PM1 @ del=n/2 cin + PM2 lsb @ del =n/2+1 (n/2)-bit adders cin @ del= n/2+1 @ del=2[n/2] + Intermediate Sums cin + PH 00 ….0 Cin Cin @ del=2[n/2] +1 lsb @ del=n/2 +2 lsb of MS half @ del=n/2+2 @ del=3[n/2] +1 n/2 n/2 n/2 n/2 • We were able to obtain this similar-to-array-multiplier design using D&C & using basic D&C guidelines. It did not require extensive ingenuity as it might have for the designers of the array multiplier • But, needed some ingenuity in efficient stitchup and skillful analysis • We can obtain an even faster multiplier (Θ(log n) delay) using D&C and carry-save adders for stitch-up; see appendix

  24. SU2(n) SU2 = Stitch up design 2 for multiplication n/2 2n n n n n SU2(n/2) SU2(n/2) SU2(n/2) SU2(n/2) SU2(n/4) SU2(n/4) SU2(n/4) SU2(n/4) • What is its cost in terms of # of FAs? • The level below the root (root = 1st level) has 4 (n/2)-bit multiplies to generate the PL …. PH of the root, 16 (n/4)-bit multiplies in the next level, up to 2-bit mults. at level log n. • Thus FAs used = 2.5[n + 4(n/2) + 16(n/4) + … + 4^(log n – 2)*(n/2^(log n – 2))] + 4^(log n – 1)*(2) + 4^(log n)*(1/8) [the last two terms are for the boundary cases of 2-bit and 1-bit multipliers that each require 2 and 1/8 FAs, resp.; see Qs below] = 2.5n(Σ_{i=0 to log n – 2} 2^i) + 2(n/2)^2 + (1/8)n^2 = 2.5[n(n/2 – 1)/(2 – 1)] + 0.625n^2 = 1.25n^2 - 2.5n + 0.625n^2 ~ 1.875n^2 = Θ(n^2) • Why do we need 2 FA cost units for a 2-bit multiplication (with 4 1-bit products of 1-bit mults available)? Can we count an even lower cost for 2-bit multiplication (when 4 1-bit prods. avail)? • Assuming we use only 2-input gates, why do we add (1/8) FA cost units for each 1-bit multiplier (which is a 2-i/p AND gate)? Hint: Cost of 2-i/p xor/xnor gates is twice that of 2-i/p and/or/nand/nor gates (why?—look at transistor-level design) • Using carry-save adders or CSvA’s [see appendix], the cost is similar (quadratic in n, i.e., Θ(n^2)).

  25. D&C Example Where a “Straightforward” Breakup Does Not Work • Problem: n-bit Majority Function (MF): Output f = 1 when a majority of bits (> n/2) is 1, else f = 0 Root problem A: n-bit MF [MF(n)] f St. Up (SU) Subprob. A2 MF(MS n/2 bits) Subprob. A1 MF(LS n/2 bits) f2 f1 • Need to ask (general Qs for any problem): Is the stitch-up function SU required in the above straightforward breakup of MF(n) into two MF’s for the MS and LS n/2 bits: • Computable? • Efficient in both hardware and speed? • Try all 4 combinations of f1, f2 values and check if it is possible for any function w/ i/ps f1, f2 to determine the correct f value: • f1 = 0, f2 = 0 ⇒ # of 1’s in minority (<= n/4) in both halves, so totally # of 1’s <= n/2 ⇒ f = 0 • f1 = 1, f2 = 1 ⇒ # of 1’s in majority (> n/4) in both halves, so totally # of 1’s > n/2 ⇒ f = 1 • f1 = 0, f2 = 1 ⇒ # of 1’s <= n/4 in LS n/2 and > n/4 in MS n/2, but this does not imply if total # of 1’s is <= n/2 or > n/2. So no function can determine the correct f value (it will need more info, like exact count of 1’s) • f1 = 1, f2 = 0: same situation as the f1 = 0, f2 = 1 case. • Thus the stitch-up function is not even computable in the above breakup of MF(n).
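The non-computability argument can be made concrete with a counterexample pair: two inputs whose halves produce the same (f1, f2) but whose overall majorities differ, so no stitch-up of f1 and f2 alone can be correct. A sketch (our example values, n = 4):

```python
def mf(bits):
    """Majority function: 1 iff more than half the bits are 1."""
    return 1 if 2 * sum(bits) > len(bits) else 0

def sub_results(bits):
    """(f1, f2) = majority of the LS half and of the MS half (MSB-first list)."""
    mid = len(bits) // 2
    return mf(bits[mid:]), mf(bits[:mid])

a = [1, 1, 0, 0]   # (f1, f2) = (0, 1); two 1's in total  -> f = 0
b = [1, 1, 1, 0]   # (f1, f2) = (0, 1); three 1's in total -> f = 1
```

Since `sub_results(a) == sub_results(b)` while `mf(a) != mf(b)`, the slide's f1 = 0, f2 = 1 case is indeed ambiguous.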

  26. D&C Example Where a “Straightforward” Breakup Does Not Work (contd.) • Try another breakup, this time of MF(n) into functions that are different from MF. Root problem A: n-bit MF [MF(n)] f • Subprob. A2: • (> compare of A1 o/p • and floor(n/2)) • Subprob. A1: • Count # of 1’s • in the n bits f1 (log n)+1 D&C tree for A1 D&C tree for A2 • Have seen (log n) delay for > comparator for two n-bit #s using D&C • Can we do 1-counting using D&C? How much time will this take?
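The working breakup (count the 1's, then compare the count against floor(n/2)) is itself two D&C problems; a sketch of both pieces in software (our naming):

```python
def count_ones(bits):
    """Subproblem A1: D&C 1-counting -- an adder tree with log2(n) levels,
    adding the counts of the two halves at each level."""
    if len(bits) == 1:
        return bits[0]
    mid = len(bits) // 2
    return count_ones(bits[:mid]) + count_ones(bits[mid:])

def majority(bits):
    """Subproblem A2: > compare of the 1-count against floor(n/2)."""
    return 1 if count_ones(bits) > len(bits) // 2 else 0
```

Note the data dependency: A2 cannot start until A1's ((log n)+1)-bit count is available, unlike the dependency-free breakups seen earlier.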

  27. Root problem A Subprob. A2 Subprob. A1 Data flow Dependency Resolution in D&C:(1) The Wait Strategy • So far we have seen D&C breakups in which there is no data dependency between the two (or more) subproblems of the breakup • Data dependency leads to increased delays • We now look at various ways of speeding up designs that have subproblem dependencies in their D&C breakups • Strategy 1: Wait for required o/p of A1 and then perform A2, e.g., as in a ripple-carry adder: A = n-bit addition, A1 = (n/2)-bit addition of the L.S. n/2 bits, A2 = (n/2)-bit addition of the M.S. n/2 bits • No concurrency between A1 and A2: t(A) = t(A1) + t(A2) + t(stitch-up)= 2*t(A1) + t(stitch-up) if A1 and A2 are the same problems of the same size • Note that w/ no dependency, the delay expression is: t(A) = max{t(A1), t(A2)} + t(stitch-up) = t(A1) + t(stitch-up) if A1 and A2 are the same problems of the same size

  28. Add n-bit #s X, Y Add MS n/2 bits of X,Y Add LS n/2 bits of X,Y FA FA FA FA (a) D&C for Ripple-Carry Adder Adder Design using D&C • Example: Ripple-Carry Adder (RCA) • Stitching up: Carry from LS n/2 bits is input to carry-in of MS n/2 bits at each level of the D&C tree. • Leaf subproblem: Full Adder (FA)
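The Wait strategy in the RCA is visible when the D&C tree is unrolled: each full adder must wait for the carry of the position below it. A behavioral sketch (ours, not a gate-level model):

```python
def full_adder(a, b, cin):
    """Leaf subproblem: 1-bit sum and carry-out."""
    return a ^ b ^ cin, (a & b) | (a & cin) | (b & cin)

def rca(A, B, n):
    """Ripple-carry adder: the sequential carry chain is the Wait strategy --
    bit i cannot start until bit i-1's carry has arrived."""
    s, c = 0, 0
    for i in range(n):
        bit, c = full_adder((A >> i) & 1, (B >> i) & 1, c)
        s |= bit << i
    return s, c   # (n-bit sum, carry-out)
```

The loop's strict left-to-right carry dependence is exactly why the delay is linear in n: t(A) = 2*t(A1) + t(stitch-up) at every level.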

  29. Example of the Wait Strategy in Adder Design FA7 • Note: Gate delay is proportional to # of inputs (since, generally, there is a series connection of transistors in either the up or down network = # of inputs ⇒ R’s of the transistors in series add up and is prop to # of inputs ⇒ delay ~ RC (C is capacitive load) is prop. to # of inputs) • The 5-i/p gate delay stated above for a FA is correct if we have 2-3 i/p gates available (why?), otherwise, if only 2-i/p gates are available, then the delay will be 6-i/p gate delays (why?). • Assume each gate i/p contributes 2 ps of delay • For a 16-bit adder the delay will be 160 ps • For a 64-bit adder the delay will be 640 ps

  30. Add n-bit #s X, Y Add n-bit #s X, Y Add 3rd n/4 bits Add 3rd n/4 bits Add ls n/4 bits Add ls n/4 bits Add ms n/4 bits Add ms n/4 bits Add 2nd n/4 bits Add 2nd n/4 bits (a) D&C for Carry-Lookahead Adder w/ Linear Global P, G Ckt (b) D&C for Carry-Lookahead Adder w/ a Tree-like Global P, G Ckt Adder Design using D&C—Lookahead Wait (not in syllabus) • Example: Carry-Lookahead Adder (CLA) • Division: 4 subproblems per level • Stitching up: A more complex stitching up process (generation of global “super” P,G’s to connect up the subproblems) • Leaf subproblem: 4-bit basic CLA with small p, g bits. • More intricate techniques (like P,G generation in CLA) for complex stitching up for fast designs may need to be devised that is not directly suggested by D&C. But D&C is a good starting point. P, G P, G P, G P, G P, G P, G P, G P, G Linear connection of local P, G’s from each unit to determine global or super P, G for each unit. But linear delay, so not much better than RCA But, the global (P,G)for each unit is an associative function. So can be done in max log (n/4) time. Carry-ins to the last 3 (n/4)-bit adds is determined in constant time using the combined (P,G)’s, and it takes another log (n/4) time for all carry-ins to each bit add to be determined. Tree connection of local P, G’s from each unit to determine global P, G for each unit (P is associative) to do a prefix computation

  31. Root problem A Subprob. A2 Subprob. A2 Subprob. A2 Subprob. A2 Subprob. A1 00 I/p00 4-to-1 Mux 01 I/p01 I/p10 10 Select i/p I/p11 11 Dependency Resolution in D&C: (2) The “Design-for-all-cases-&-select (DAC)” Strategy • Strategy 2: DAC: For a k-bit i/p from A1 to A2, design 2^k copies of A2 each with a different hardwired k-bit i/p to replace the one from A1. • Select the correct o/p from all the copies of A2 via a (2^k)-to-1 Mux that is selected by the k-bit o/p from A1 when it becomes available (e.g., carry-select adder) • t(A) = max(t(A1), t(A2)) + t(Mux) + t(stitch-up) = t(A1) + t(Mux) + t(stitch-up) if A1 and A2 are the same problems
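The carry-select adder cited above is the classic DAC instance, with k = 1 (the inter-half carry): both MS-half candidates are computed in parallel with the LS half, and a 2:1 mux picks one. A behavioral sketch (ours):

```python
def carry_select_add(A, B, n):
    """DAC on the 1-bit carry between the halves of an n-bit add (n even):
    two hardwired copies of the MS-half adder (cin = 0 and cin = 1) run
    concurrently with the LS half; the LS carry-out drives the mux."""
    h = n // 2
    mask = (1 << h) - 1
    low = (A & mask) + (B & mask)     # LS half; bit h of 'low' is the select
    hi0 = (A >> h) + (B >> h)         # copy of A2 hardwired with cin = 0
    hi1 = (A >> h) + (B >> h) + 1     # copy of A2 hardwired with cin = 1
    hi = hi1 if (low >> h) else hi0   # the 2:1 (i.e., 2^k:1 with k=1) mux
    return (hi << h) | (low & mask)   # (n+1)-bit result incl. carry-out
```

Per the delay formula above, the MS half no longer waits for the LS carry; only the mux delay is added to max(t(A1), t(A2)).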

  32. Subprob. A2 Subprob. A1 Subprob. A22 Subprob. A12 Subprob. A12 Subprob. A12 Subprob. A12 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A11 Subprob. A21 00 00 I/p00 I/p00 4-to-1 Mux 4-to-1 Mux 01 01 I/p01 I/p01 I/p10 I/p10 10 10 Select i/p Select i/p I/p11 I/p11 11 11 (2) The “Design-for-all-cases-&-select (DAC)” Strategy: How this looks across 2 levels Root problem Data dependency Data dependency resolved via DAC at the 2nd level breakup. Two options for the 1st level breakup: Wait or DAC

  33. Subprob. A12 Subprob. A12 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A12 Subprob. A12 (2) The “Design-for-all-cases-&-select (DAC)” Strategy: How this looks across 2 levels (cont’d) Root problem Wait Subprob. A1 Subprob. A2 Subprob. A21 Subprob. A11 00 00 I/p00 I/p00 4-to-1 Mux 4-to-1 Mux 01 01 I/p01 I/p01 I/p10 I/p10 10 10 Select i/p Select i/p I/p11 I/p11 11 11 Data dependency resolved via DAC at the 2nd level breakup. Choosing option Wait at the 1st level breakup,

  34. Root problem A Subprob. A2 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A2 Subprob. A2 Subprob. A2 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A1 00 I/p00 4-to-1 Mux 01 I/p01 I/p10 10 Select i/p I/p11 11 (2) The “Design-for-all-cases-&-select (DAC)” Strategy: How this looks across 2 levels (cont’d) Root problem DAC Subprob. A1 Subprob. A2 Subprob. A21 Subprob. A21 Subprob. A21 Subprob. A21 Subprob. A21 00 4-to-1 Mux 01 Subprob. A12 Subprob. A11 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 00 00 00 00 00 00 Select i/p I/p00 I/p00 I/p00 I/p00 I/p00 I/p00 Subprob. A12 4-to-1 Mux 4-to-1 Mux 4-to-1 Mux 4-to-1 Mux 4-to-1 Mux 4-to-1 Mux 01 I/p01 01 01 01 01 01 I/p01 I/p01 I/p01 I/p01 I/p01 10 Subprob. A12 Works. But do we need to replicate so much? I/p10 I/p10 I/p10 I/p10 I/p10 I/p10 10 10 10 10 10 10 I/p11 I/p11 I/p11 I/p11 I/p11 I/p11 Subprob. A12 Select i/p Select i/p Select i/p Select i/p Select i/p Select i/p 11 11 11 11 11 11 11 Data dependency resolved via DAC at the 2nd level breakup. Choosing option DAC at the 1st level breakup,

  35. Root problem A Subprob. A22 Subprob. A21 Subprob. A12 Subprob. A12 Subprob. A12 Subprob. A22 Subprob. A2 Subprob. A2 Subprob. A2 Subprob. A2 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A22 Subprob. A21 Subprob. A21 Subprob. A22 Subprob. A21 Subprob. A1 00 I/p00 4-to-1 Mux 01 I/p01 I/p10 10 Select i/p I/p11 11 (2) The “Design-for-all-cases-&-select (DAC)” Strategy: How this looks across 2 levels (cont’d) Root problem DAC Note: The DAC based replication will apply to the smallest subproblem (subcircuit) of A2 that directly depends on A1’s o/ps, but does not use the DAC strategy, and not to all of A2. Similarly for lower-level DACs. Thus the non-dependent parts (A22 for A1) need not be replicated for the input values from the dependency-causing (to sibling subproblem (A2)) subproblem (A1) Subprob. A1 Subprob. A2 Subprob. A21 A much more streamlined design where A22 replication vis-à-vis A1 is not needed (only A21—directly dependent on A1—replication needed) 00 00 Subprob. A22 Subprob. A12 Subprob. A11 I/p00 I/p00 00 00 I/p00 I/p00 4-to-1 Mux 4-to-1 Mux 01 01 I/p01 I/p01 4-to-1 Mux 4-to-1 Mux 01 01 I/p01 I/p01 I/p10 I/p10 10 10 I/p10 I/p10 10 10 I/p11 I/p11 I/p11 I/p11 Select i/p Select i/p Select i/p 11 11 11 11 Data dependency resolved via DAC at the 2nd level breakup. Choosing option DAC at the 1st level breakup,

36. (2) The “Design-for-all-cases-&-select (DAC)” Strategy (cont’d) Note: The DAC-based replication applies only to the smallest subproblem (subcircuit) of A2 that directly depends on A1’s o/ps and does not itself use the DAC strategy, not to all of A2; similarly for lower-level DACs. [Figure: A D&C tree w/ a mix of DAC and Wait strategies for dependency resolution between subproblems—DAC at the upper levels (root A into A1, A2) and Wait at the lower levels (A1,1, A1,2, A2,1, A2,2); generally, the Wait strategy will be used at all lower levels after the 1st Wait level.] • The DAC strategy has a MUX delay involved, and at small subproblems, the delay of a subproblem may be smaller than a MUX delay or may not be large enough to warrant the extra replication or mux cost. • Thus a mix of DAC and Wait strategies, as shown in the figure, may be faster, w/ DAC used at higher levels and Wait at lower levels.

37. Example of the DAC Strategy in Adder Design [Figure: carry-select-style adder blocks with a simplified mux on Cout.] • For a 16-bit adder, the delay is (9*4 – 4)*2 = 64 ps (2 ps is the delay for a single i/p); a 60% ((160-64)*100/160) improvement (or 2.5 times faster) over an RCA (5*16*2 = 160 ps) • For a 64-bit adder, the delay is (9*8 – 4)*2 = 136 ps; a 79% improvement (or about 5 times faster) over an RCA (5*64*2 = 640 ps)
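The delay arithmetic above can be reproduced in a short sketch. Note one assumption: the slide gives only two data points ((9·4 − 4) i/ps for n = 16 and (9·8 − 4) i/ps for n = 64), which are consistent with b = √n-bit blocks; that generalization is inferred here, not stated in the slide.

```python
import math

PS_PER_INPUT = 2  # slide: "2 ps is the delay for a single i/p"

def rca_delay(n_bits):
    """Worst-case ripple-carry delay: 5 gate i/ps per full-adder stage."""
    return 5 * n_bits * PS_PER_INPUT

def dac_adder_delay(n_bits):
    """Slide's DAC-adder delay, (9*b - 4) i/ps with b-bit blocks; b = sqrt(n)
    is an assumption inferred from the slide's n = 16 and n = 64 data points."""
    b = math.isqrt(n_bits)
    return (9 * b - 4) * PS_PER_INPUT

for n in (16, 64):
    d, r = dac_adder_delay(n), rca_delay(n)
    print(n, d, r, round((r - d) * 100 / r))  # 16: 64 vs 160 (60%); 64: 136 vs 640 (79%)
```

Running this reproduces the slide's numbers: 64 ps vs. 160 ps (60% better) and 136 ps vs. 640 ps (79% better).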

38. Dependency Resolution in D&C: (3) Speculative Strategy • Strategy 3: Have a single copy of A2, but choose a highly likely value of the k-bit i/p from A1 and perform A1, A2 concurrently. When the k-bit i/p from A1 becomes available, if the speculated value was incorrect, re-do A2 w/ the correct value. [Figure: root problem A with subproblems A1, A2; a 2-to-1 mux feeds A2 either an estimate (based on analysis or stats) or A1’s actual o/p, under an FSM controller that generates the completion signal for the higher-level problem.] • t(A) = p(correct-choice)*max(t(A1), t(A2)) + (1-p(correct-choice))*(t(A1) + t(A2)) + t(stitch-up), where p(correct-choice) is the probability that our choice of the k-bit i/p for A2 is correct.
• For t(A1) = t(A2), this becomes: t(A) = p(correct-choice)*t(A1) + (1-p(correct-choice))*2t(A1) + t(stitch-up) = t(A1) + (1-p(correct-choice))*t(A1) + t(stitch-up) • We need to consider the average-case delay, not the worst-case delay, here (otherwise there is no point in speculating to speed up processing when the speculation is correct); assuming the worst-case time (when the choice is incorrect) is meaningless in such designs. • Need a completion signal to indicate when the final o/p of A is available. This is accomplished via an FSM controller that monitors the correctness of the speculation and knows the worst-case delays of the non-speculative (regular) logic modules: on getting a completion signal from A2, if o/p(A1→A2) = estimate (compared when A1 generates its completion signal), it generates a completion signal after the stitch-up (worst-case) delay; else it sets the i/p of A2 to A1’s actual o/p and generates the completion signal after the (worst-case) delay of A2 + stitch-up.
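The expected-delay formula above can be sketched as a function; the numeric values in the example call (delays of 10 units, zero stitch-up cost, 90% prediction accuracy) are illustrative choices, not from the slide.

```python
def speculative_delay(t_a1, t_a2, t_stitch, p_correct):
    """Slide's expected delay under speculation on A2's i/p from A1:
    correct guess -> A1 and A2 overlap; wrong guess -> A2 reruns after A1."""
    return (p_correct * max(t_a1, t_a2)
            + (1 - p_correct) * (t_a1 + t_a2)
            + t_stitch)

# With t(A1) = t(A2) = 10, t(stitch-up) = 0, p = 0.9:
# t(A) = 0.9*10 + 0.1*20 = 11, vs. 20 for the Wait strategy.
print(round(speculative_delay(10, 10, 0, 0.9), 6))  # 11.0
```

Note how the formula degrades gracefully: at p = 1 the delay is just max(t(A1), t(A2)) + stitch-up, and at p = 0 it reverts to the Wait strategy's serial delay.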

39. Dependency Resolution in D&C: (4) The “Independent Pre-Computation” Strategy • Strategy 4: Reconfigure the design of A2 so that it can do as much processing as possible that is independent of the i/p from A1 (A2_indep). This “independent” computation prepares for the final computation of A2 (A2_dep), which can start once A2_indep and A1 are done. • t(A) = max(t(A1), t(A2_indep)) + t(A2_dep) + t(stitch-up) • E.g., let a1 be the i/p from A1 to A2, w/ A2’s logic: a2 = v’x’ + uvx + w’xy + wz’a1 + u’xa1. If this were implemented using 2-i/p AND/OR gates, the delay would be 8 delay units (1 unit = delay for 1 i/p) after a1 is available. • If the logic is re-structured as a2 = (v’x’ + uvx + w’xy) + (wz’ + u’x)a1, and the logic in the 2 brackets (A2_indep) is performed before a1 is available, then the delay is only 4 delay units after a1 is available. [Figure: the two gate-level implementations of a2, showing the critical path after a1 is available shrinking from 8 units to 4 units.] • Such a strategy requires factoring the external i/p a1 out of the logic for a2, grouping & implementing all the non-a1 logic (A2_indep), and then adding logic (A2_dep) to “connect” up the non-a1 logic to a1 as the last stage. • May not always work very well or at all (e.g., for addition, we need the carry out of A1 to start A2; it has an A2_indep, but it helps only a little; how much?)
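The factored form of a2 must compute the same function as the original; a quick exhaustive check over all 2^7 input combinations confirms this (treating u, v, w, x, y, z, a1 as the seven boolean inputs of the slide's expression):

```python
from itertools import product

def a2_original(u, v, w, x, y, z, a1):
    # a2 = v'x' + uvx + w'xy + wz'a1 + u'xa1  (slide's original SOP form)
    return ((not v and not x) or (u and v and x) or (not w and x and y)
            or (w and not z and a1) or (not u and x and a1))

def a2_factored(u, v, w, x, y, z, a1):
    # a2 = (v'x' + uvx + w'xy) + (wz' + u'x)a1: both parenthesized groups
    # (A2_indep) can be evaluated before a1 arrives; only the final AND/OR
    # (A2_dep) waits for a1.
    indep = (not v and not x) or (u and v and x) or (not w and x and y)
    dep_coeff = (w and not z) or (not u and x)
    return indep or (dep_coeff and a1)

assert all(a2_original(*bits) == a2_factored(*bits)
           for bits in product([False, True], repeat=7))
print("factored form matches on all 128 input combinations")
```

The factoring is just the distributive law (wz’a1 + u’xa1 = (wz’ + u’x)a1), but the exhaustive check is a useful habit whenever logic is restructured for timing.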

40. D&C Summary • For complex digital design, we need to think of the “computation” underlying the design in a structured manner—are there properties of this computation that can be exploited for the D&C approach? Think of: • Breakup into >= 2 subproblems via breakup of (# of operands) or (operand sizes [bits]) • Stitch-up: is it computable? If not, we may need to break up the problem into subproblems w/ different functionalities and then perform D&C on the latter. • Leaf functions (when to stop D&C) • Dependencies between sub-problems and how to resolve them • The design is then developed in a structured manner & the corresponding circuit may be synthesized by hand or described compactly using an HDL (e.g., structural VHDL) • For an operation/func f on n operands f(an-1, an-2, …, a0), if f is associative, the D&C approach gives an “easy” stitch-up function, which is f on 2 operands (the o/ps of f on each half sub-problem). This means a tree-structured circuit with Θ(log n) delay, instead of a linearly-connected circuit with Θ(n) delay, can be synthesized. • If f is non-associative or has only a small # of operands (e.g., 2), more ingenuity and determination of properties of f is needed to determine the breakup and the stitch-up function. The resulting design may or may not be tree-structured. • If there is dependency between the 2 subproblems, then we saw strategies for addressing these dependencies: • Wait (slowest, least hardware cost) • Design-for-all-cases (high speed, high hardware cost) • Speculative (medium speed, medium hardware cost) • Independent pre-computation (medium-slow speed, low hardware cost)
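The Θ(log n) vs. Θ(n) claim for associative f can be made concrete by counting levels of 2-operand f-units in a balanced D&C tree (a minimal sketch; the operand counts in the example are arbitrary):

```python
def tree_depth(n_operands):
    """Levels of 2-operand f-units in a balanced D&C tree over n operands:
    each level halves (rounding up) the number of partial results."""
    depth = 0
    while n_operands > 1:
        n_operands = (n_operands + 1) // 2
        depth += 1
    return depth

# A linear chain of 2-operand f-units over n operands needs n-1 levels;
# the balanced tree needs only ceil(log2 n):
print(tree_depth(8))   # 3 levels (vs. 7 for a chain)
print(tree_depth(64))  # 6 levels (vs. 63 for a chain)
```

This is exactly the structure of the 8-bit comparator tree used earlier in these notes: 1-bit comparators at the leaves, a 2-operand stitch-up at each internal node.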

41. Strategy 2: A general view of DAC computations (w/ or w/o D&C) [Figure: (a) original design, stage A feeding stage B through signals x, y; (b) DAC version, with copies B(0,0), B(0,1), B(1,0), B(1,1) of B—one per (x, y) value—and a 4:1 mux selecting o/p z using A’s o/p.] • If there is a data dependency between two or more portions of a computation (which may be obtained w/ or w/o using D&C), don’t wait for the “previous” computation to finish before starting the next one. • Assume all possible input values for the next computation/stage B (e.g., if it has 2 inputs from the prev. stage, there will be 4 possible input value combinations) and perform it using a copy of the design for each possible input value. • All the different o/ps of the different copies of B are mux’ed using prev. stage A’s o/p. • E.g. design: Carry-Select Adder (each stage performs two additions, one for a carry-in of 0 and another for a carry-in of 1 from the previous stage). (a) Original design: Time = T(A)+T(B). (b) DAC computation: Time = max(T(A),T(B)) + T(Mux). Works well when T(A) approx = T(B) and T(A) >> T(Mux).
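A behavioral sketch of the carry-select example may help; this models the functionality, not the hardware timing. Bit lists are LSB-first and the 4-bit block width is an illustrative choice:

```python
def add_block(a_bits, b_bits, carry_in):
    """Ripple addition of two equal-width LSB-first bit lists."""
    out, c = [], carry_in
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ c)        # sum bit of a full adder
        c = (a & b) | (c & (a ^ b))  # carry out of a full adder
    return out, c

def carry_select_add(a_bits, b_bits, block=4):
    """DAC idea: each block is computed for BOTH carry-in = 0 and carry-in = 1
    (in hardware, concurrently); the actual carry from the block below then
    acts as the mux select between the two precomputed results."""
    out, c = [], 0
    for i in range(0, len(a_bits), block):
        a, b = a_bits[i:i + block], b_bits[i:i + block]
        s0, c0 = add_block(a, b, 0)  # speculative copy for carry-in 0
        s1, c1 = add_block(a, b, 1)  # speculative copy for carry-in 1
        out += s1 if c else s0       # the "mux"
        c = c1 if c else c0
    return out, c

# 11 + 6 = 17 = 10001b: LSB-first sum bits [1, 0, 0, 0] with carry-out 1
print(carry_select_add([1, 1, 0, 1], [0, 1, 1, 0]))  # ([1, 0, 0, 0], 1)
```

In software the two `add_block` calls run sequentially, but in the circuit both copies evaluate in parallel, which is the entire point of the DAC strategy.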

42. Strategy 3: Get the Best of Both Worlds (Average and Worst Case Delays)! [Figure: the inputs feed both a Unary Division Ckt (good ave case: Θ(n/2.8) subs; bad worst case: Θ(2^n) subs) and a Non-Restoring Div. Ckt (bad ave case: Θ(n) subs; good worst case: Θ(n) subs); an external FSM monitors completion signals done1 and done2, and a mux selects the first available output into a register.] • Use 2 circuits with different worst-case and average-case behaviors • Use the first available output • Get the best of both (ave-case, worst-case) worlds • In the above schematic, we get the good ave-case performance of unary division (assuming uniformly distributed inputs) w/o the disadvantage of its bad worst-case performance: best case = Θ(1) subs, ave case = Θ(n/2.8) subs, worst case = Θ(n) subs. • Approximate analysis: Avg. dividend D value = 2^(n-1). For divisor V values in the “lower half range” [1, 2^(n-1)], the average quotient Q value is given by the harmonic series (1 + 1/2 + 1/3 + … + 1/2^(n-1)) — this comes from a Q value of x for V in the range 2^(n-1)/x to (2^(n-1)/(x-1)) - 1, i.e., for approx. 2^(n-1)/x^2 #s, which have a probability of 1/x^2, giving a probabilistic contribution of x·(1/x^2) = 1/x to the average-Q calculation. The above summation is ~ ln(2^(n-1)) ~ (n-1)/1.4 (integration of 1/k from k = 1 to 2^(n-1)). Q for divisors in the upper half range [2^(n-1)+1, 2^n] is ~0 ⇒ overall avg. quotient = (n-1)/2.8 ⇒ avg. subtractions needed = 1 + (n-1)/2.8 = Θ(n/2.8)
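The (n-1)/2.8 estimate is approximate, so a Monte Carlo check is a reasonable sanity test. This sketch assumes, as the slide does, uniformly distributed n-bit dividend and divisor, and models unary (repeated-subtraction) division as performing quotient + 1 subtractions:

```python
import random

def unary_div_subs(dividend, divisor):
    """Subtractions performed by unary (repeated-subtraction) division:
    quotient + 1, since the final failing subtraction signals completion."""
    return dividend // divisor + 1

# Empirical average-subtraction count for n = 16 vs. the slide's 1 + (n-1)/2.8.
n = 16
random.seed(0)
trials = [unary_div_subs(random.randrange(1, 2**n), random.randrange(1, 2**n))
          for _ in range(200_000)]
print(sum(trials) / len(trials), 1 + (n - 1) / 2.8)  # empirical vs. estimate
```

The two numbers land in the same range (a handful of subtractions for n = 16), which is all the approximate derivation claims; the worst case (divisor 1) still needs 2^n subtractions, which is why the non-restoring circuit runs alongside it.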

43. Strategy 4a: Pipeline It! (Synchronous Pipeline) [Figure: an original ckt or datapath converted to a simple level-partitioned pipeline of k stages separated by clocked registers (all modules/gates at the same level from the i/ps belong to the same stage). Note that a level partition may not always be possible, but other pipeline-able partitions may be.] • Throughput is defined as # of outputs / sec • Non-pipelined throughput = 1/D, where D = delay of the original ckt’s datapath • Pipeline throughput = 1/(max stage delay + register delay) • Special case: If the original ckt’s datapath is divided into k stages, each of equal delay, and dr is the delay of a register, then pipeline throughput = 1/((D/k)+dr). • If dr is negligible compared to D/k, then pipeline throughput = k/D, k times that of the original ckt • In general, the registers can be clocked w/ clock period Tclk = max stage delay + register delay
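The throughput formulas above can be written directly as functions; the numbers in the example (D = 100, k = 5 equal stages, register delay 1) are illustrative, not from the slide:

```python
def throughput_nonpipelined(total_delay):
    """One output per full datapath traversal: 1/D."""
    return 1.0 / total_delay

def throughput_pipelined(stage_delays, reg_delay):
    """Clock period = max stage delay + register delay, so
    throughput = 1 / (max stage delay + register delay)."""
    return 1.0 / (max(stage_delays) + reg_delay)

D, k, dr = 100.0, 5, 1.0
stages = [D / k] * k                     # k equal-delay stages
print(throughput_nonpipelined(D))        # 0.01 outputs per time unit
print(throughput_pipelined(stages, dr))  # 1/21, i.e. ~4.76x the above
```

The example also shows why the ideal k× speedup erodes: with dr = 1 against a 20-unit stage, the speedup is 100/21 ≈ 4.76 rather than 5, and it degrades further as stages shrink toward dr.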

44. Strategy 4a: (Synchronous) Pipeline It! (contd.) [Figure: the 8-bit comparator tree—eight 1-bit comparators on A[7..0], B[7..0] feeding log n levels of 2:1 muxes up to F—pipelined by inserting registers between levels; i/ps I1..I6 stream in one per time unit, with each stage processing a new i/p every unit. We assume the delay of each basic module is 1 unit.] • A comparator o/p is produced every 1 unit of time, instead of every (log n)+1 units of time w/o pipelining, where 1 time unit here = delay of a mux or a 1-bit comparator (both will have the same or similar delay) • We can reduce register cost by inserting registers at every 2 levels; throughput then decreases to 1 o/p per every 2 units of time

45. Strategy 4a: (Synchronous) Pipeline It! (contd.) Pipelined Ripple Carry Adder • Problem: The i/p and o/p data direction is not the same as the computation direction—they are perpendicular! In other words, each stage has i/ps & o/ps, rather than i/ps and o/ps appearing only at the beginning and end, resp., of the pipeline. I/ps need to stream in with delay = single stage delay. • Thus at the i/p of each stage, we need regs to hold new i/ps until earlier i/ps have been processed by that stage. More regs are needed for later stages, as they will process their current i/ps later, by which time more i/ps will have streamed in. • Similarly, at the o/p of each stage, we need to hold o/ps in regs until the last stage’s o/p appears for the earliest i/p still being processed. Thus more regs are needed in earlier stages, as they will have produced more o/ps by the time the o/p corresponding to the earliest i/p being processed appears. [Figure: pipelined RCA with 2 FAs per stage; input registers hold the next operand bits, and intermediate/output registers hold the sum-bit pairs (S1,S0), (S3,S2), (S5,S4), (S7,S6) for i/ps received 4 cc back. Assume 1 cc = 2 FA + register delay.] • An o/p is produced every 2 units of FA delay instead of every n units of FA delay in an n-bit RCA

46. Strategy 4b: Pipeline It!—Wave (or Asynchronous) Pipelining • Wave pipelining is essentially pipelining of combinational circuits w/o the use of registers and clocking, which are expensive and consume a significant amount of power. • The min. safe input period (MSIP: the interval at which subsequent inputs are given to the wave-pipelined circuit) is determined by the difference between the max and min delays at various module or gate outputs and inputs, respectively, and is chosen so that a later input’s processing, however fast it might be, will never “overwrite” the data at the input of any gate or module that is still processing the previous input. • Consider two modules m1 and m2 in sequence (the modules could be subcircuits like a full adder, something more complex, or as simple as a single gate). Let tmin(i/p:mj) be the min delay (i.e., min arrival time) over all i/ps to mj, and let tmax(o/p:mj) be the max delay at the o/p(s) of mj. • Certainly a safe i/p period (SIP) for wave-pipelining this configuration is tmax(o/p:m2). But can we do it safely at a higher rate (lower period)? • The min safe i/p period (MSIP) at the circuit i/ps, for m2, corresponds to a situation in which any new i/p appears at m2 only after the processing of its current i/ps is over, so that while m2 is still processing, its current i/ps are held stable. Thus if the 1st i/p appears at time 0 at the ckt i/p, the 2nd i/p should appear at m2 no earlier than 0+tmax(o/p:m2) ⇒ the 2nd i/p to the circuit itself, i.e., at m1, should not appear before tmax(o/p:m2) - tmin(i/p:m2). • Similarly, for safe 1st-i/p operation of m1, the 2nd i/p should not appear before 0+tmax(o/p:m1) - tmin(i/p:m1) = tmax(o/p:m1), as tmin(i/p:m1) = 0. • Thus for safe operation of the 1st i/ps of both m1 and m2, the 2nd i/p should not appear at the ckt i/ps before max(tmax(o/p:m1), tmax(o/p:m2) - tmin(i/p:m2)), the MSIP after the 1st i/p.

47. Strategy 4b: Pipeline It!—Wave Pipelining (contd.) [Fig. 1: modules m1 → m2 w/ tmin(i/p:m2) and tmax(o/p:m2) marked. Fig. 2: a chain m1 → … → mj → …; safe for mj if the fastest i’th i/p, (i-1)tsafe + tmin(i/p:mj), >= the slowest (i-1)’th o/p, (i-2)tsafe + tmax(o/p:mj), i.e., if tsafe >= tmax(o/p:mj) - tmin(i/p:mj). Thus safe.] • The Q. now is whether tsafe(1) (the min safe i/p period after the 1st i/p) = max(tmax(o/p:m1), tmax(o/p:m2) - tmin(i/p:m2)) will also be a safe period after the ith i/p for i > 1. • The 2nd o/p from m2 appears at time tsafe(1) + tmax(o/p:m2). Consider that the 3rd i/p appears at the ckt i/p tsafe(1) time after the 2nd i/p. Then the 3rd i/p appears at m2 at 2·tsafe(1) + tmin(i/p:m2) >= tsafe(1) + tmax(o/p:m2) (since tsafe(1) = max(tmax(o/p:m1), tmax(o/p:m2) - tmin(i/p:m2)) >= tmax(o/p:m2) - tmin(i/p:m2)), i.e., no earlier than when the 2nd o/p of m2 appears, and is thus safe. • A similar safety analysis shows that tsafe(1) is a safe i/p period for any ith i/p for both m2 and m1, for any i. Since it is also the min. such period (at the module level) for the 1st i/p (and in fact for any ith i/p), as we established earlier, tsafe(1) is the min. safe i/p period for any ith i/p for both m2 and m1. We term this min. i/p period tsafe (= tsafe(1)). • If there are k modules m1, …, mk, a simple extension of the above analysis gives the MSIP tsafe = max_{i=1 to k} {tmax(o/p:mi) - tmin(i/p:mi)}; see Fig. 2 above.
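The closed-form MSIP at the end of the slide is a one-liner; the two example calls below use the (tmin(i/p), tmax(o/p)) pairs from the deck's own wave-pipelining examples (slides 49 and 50):

```python
def msip(modules):
    """Minimum safe input period for a wave-pipelined chain of modules:
    max over modules of (max o/p delay - min i/p arrival time).
    `modules` is a list of (t_in_min, t_out_max) pairs, in ps."""
    return max(t_out_max - t_in_min for t_in_min, t_out_max in modules)

# Example 1 (slide 49): m1 = (0, 4 ps), m2 = (1, 10 ps)
print(msip([(0, 4), (1, 10)]))   # 9 -> barely better than the 10 ps o/p delay
# Example 2 (slide 50): m1 = (0, 6 ps), m2 = (4, 12 ps)
print(msip([(0, 6), (4, 12)]))   # 8 -> 33% below the 12 ps o/p delay
```

The formula makes the balance argument visible: a module with a large gap between its slowest output and fastest input dominates the max and caps how fast waves can be launched.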

48. Strategy 4b: Pipeline It!—Wave Pipelining (contd.) • Interesting Qs: • Is tsafe a safe period not just at the module level but for every gate gj in the circuit (e.g., will tsafe be a safe period for every gate gj in m2)? It needs to be in order for this period to work. • Can we get a better (smaller) MSIP by performing the above analysis at the level of individual gates rather than at the level of modules? [Fig. 1: m1 → m2 w/ tmin(i/p:m2) and tmax(o/p:m2) marked. Fig. 2: finer-granularity analysis by splitting m2 into m2a ∪ m2b, w/ tmin(i/p:m2) = tmin(i/p:m2a) and tmax(o/p:m2b) = tmax(o/p:m2). Does the MSIP increase, decrease, or remain unchanged?]

49. Strategy 4b: Pipeline It!—Wave Pipelining: Example 1 [Figure: the a2 circuit (i/ps v’, u, v, x’, x, w’, y, w, z’, a1, u’) split into modules m1 and m2, w/ tmax(o/p:m1) = 4 ps, tmin(i/p:m2) = 1 ps, tmax(o/p:m2) = 10 ps.] • Let the max delay of a 2-i/p gate be 2 ps and the min delay be 1 ps. • What is tsafe for this ckt for the two modules shown? • tsafe = max(4 ps, 10-1 = 9 ps) = 9 ps • Thus MSIP = 9 ps, only a little better than the 10 ps corresponding to the max o/p delay. So this is not a circuit that can be effectively wave-pipelined. • Can the circuit be modified in a simple way to achieve effective wave pipelining? • Generally, a ckt that has more balanced max o/p and min i/p delays for each module and gate is one that can be effectively wave-pipelined, i.e., one whose MSIP is much lower than the max o/p delay of the circuit.

50. Strategy 4b: Pipeline It!—Wave Pipelining: Example 2 [Figure: the 8-bit comparator tree—eight 1-bit comparators feeding log n levels of 2:1 muxes—split into modules m1 (first two levels, tmax(o/p:m1) = 6 ps) and m2 (last two levels, tmin(i/p:m2) = 4 ps, tmax(o/p:m2) = 12 ps).] • Let the max delay of a basic unit (1-bit comp., 2:1 mux) be 3 ps and the min delay be 2 ps. • What is tsafe for this ckt for the two modules shown? • tsafe = max(6-0 ps, 12-4 = 8 ps) = 8 ps • Thus MSIP = 8 ps, 33% lower than the 12 ps corresponding to the max o/p delay. • So this is a circuit that can be reasonably effectively wave-pipelined. This is due to the balanced nature of the circuit, where all i/p → o/p paths are of the same length (the diff. betw. max and min delays comes from the max and min delays of the components or gates themselves). • What if we divide the circuit into 4 modules, each corresponding to a level of the circuit, and do the analysis for that? See next slide.
