
L12: Low Power High Level Synthesis (3)



  1. L12: Low Power High Level Synthesis (3). August 1999. Professor Jun-Dong Cho, Sungkyunkwan University. http://vada.skku.ac.kr

  2. Matrix-vector product algorithm
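The slide's figure is not reproduced in the transcript, so as a stand-in here is a minimal Python sketch of a plain row-by-row matrix-vector product, the kind of kernel such slides analyze for operation count and memory traffic; all names are illustrative:

```python
def matvec(A, x):
    """y = A x for a matrix A (list of rows) and a vector x."""
    y = []
    for row in A:                    # one inner product per output element
        acc = 0.0
        for a, xj in zip(row, x):
            acc += a * xj            # n*m multiply-accumulate operations
        y.append(acc)
    return y

print(matvec([[1, 2], [3, 4]], [5, 6]))  # [17.0, 39.0]
```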

  3. Retiming • Flip-flop insertion to minimize hazard activity by moving flip-flops within a circuit.

  4. Exploiting spatial locality for interconnect power reduction (figure: global vs. local interconnect between Adder1 and Adder2).

  5. Balancing maximal time-sharing and fully-parallel implementation • A fourth-order parallel-form IIR filter: (a) local assignment (2 global transfers); (b) non-local assignment (20 global transfers).

  6. Retiming/pipelining for the critical path

  7. Effective Resource Utilization

  8. Hazard propagation elimination by clocked sampling • By sampling the steady-state signal at a register input, no further glitches propagate into the following combinational logic.

  9. Regularity • Common patterns enable a less complex architecture and therefore a simpler interconnect structure (muxes, buffers, and buses). Regular designs often have less control hardware.

  10. Module Selection • Select the clock period, choose proper hardware modules for all operations (e.g., a Wallace or Booth multiplier), and determine where to pipeline (i.e., where to place registers), such that minimal hardware cost is obtained under the given timing and throughput constraints. • Full pipelining can be ineffective when the clock period mismatches the execution times of the operators; chaining operations in sequence without intermediate registers can instead shorten the critical path. • When clustering operations into non-pipelined hardware modules, the reusability of these modules over the complete computational graph should be maximized. • During clustering, more expensive but faster hardware may be swapped in for operations on the critical path if the clustering violates timing constraints.
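As a rough illustration of the selection step, the sketch below picks, per operation type, the cheapest library module that fits the chosen clock period, and falls back to the fastest one otherwise (a candidate for pipelining or critical-path swapping). The module names and area/delay figures are invented for the example:

```python
# hypothetical module library: name -> (area, delay in ns)
LIBRARY = {
    "mul": {"booth": (900, 40.0), "wallace": (1500, 25.0)},
    "add": {"ripple": (100, 10.0), "cla": (250, 4.0)},
}

def select_modules(clock_ns):
    """Cheapest module meeting the clock period, per operation type;
    if none fits, take the fastest (to be pipelined or swapped later)."""
    choice = {}
    for op, impls in LIBRARY.items():
        fits = [n for n, (area, delay) in impls.items() if delay <= clock_ns]
        if fits:
            choice[op] = min(fits, key=lambda n: impls[n][0])   # least area
        else:
            choice[op] = min(impls, key=lambda n: impls[n][1])  # least delay
    return choice

print(select_modules(30.0))  # {'mul': 'wallace', 'add': 'ripple'}
```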

  11. Estimation • Estimate min and max bounds on the required resources in order to: • delimit the design space (the min bounds serve as an initial solution); • serve as entries in a resource utilization table which guides the transformation, assignment and scheduling operations. • The max bound on execution time, $t_{max}$, follows from a topological ordering of the DFG using ASAP and ALAP. • The minimum bound on the number of resources for each resource class is $N_{R_i} = \lceil d_{R_i} \cdot O_{R_i} / t_{max} \rceil$, where $N_{R_i}$ is the number of resources of class $R_i$, $d_{R_i}$ the duration of a single operation, and $O_{R_i}$ the number of operations of that class.
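In code, this lower bound is a single ceiling division per resource class; a minimal sketch using the slide's $N_{R_i}$, $d_{R_i}$, $O_{R_i}$ and $t_{max}$ quantities:

```python
from math import ceil

def min_resource_bound(num_ops, op_duration, t_max):
    """N_Ri = ceil(d_Ri * O_Ri / t_max): the total busy time of one
    resource class spread over the schedule length t_max."""
    return ceil(num_ops * op_duration / t_max)

# e.g. 6 multiplications of 2 cycles each within an 8-cycle schedule
# require at least ceil(12 / 8) = 2 multipliers
print(min_resource_bound(num_ops=6, op_duration=2, t_max=8))  # -> 2
```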

  12. Exploring the Design Space • Find the minimal-area solution subject to the timing constraints. • By checking the critical paths, determine whether the proposed graph violates the timing constraints; if so, retiming, pipelining and tree-height reduction can be applied. • Once an acceptable graph is obtained, the resource allocation process is initiated: • change the available hardware (FUs, registers, busses); • redistribute the time allocation over the sub-graphs; • transform the graph to reduce the hardware requirements. • Use a rejectionless probabilistic iterative search technique (a variant of Simulated Annealing) in which moves are always accepted; this reduces computational complexity and gives faster convergence.
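A sketch of that rejectionless idea, with caller-supplied cost and move-generation functions and illustrative parameters: every candidate move is weighted by its Metropolis acceptance factor and one is always applied, so no iteration is wasted on rejections:

```python
import math
import random

def rejectionless_search(init, moves, cost, temp=1.0, cooling=0.95, iters=200):
    """Weight each neighbouring solution by its Metropolis acceptance
    factor min(1, exp(-delta/T)) and always apply one of them."""
    state, best = init, init
    for _ in range(iters):
        cands = moves(state)                 # neighbouring solutions
        base = cost(state)
        weights = [math.exp(min(0.0, (base - cost(c)) / temp)) + 1e-12
                   for c in cands]
        state = random.choices(cands, weights=weights, k=1)[0]
        if cost(state) < cost(best):
            best = state
        temp *= cooling                      # geometric cooling schedule
    return best
```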

  13. Data Path Synthesis

  14. Scheduling and Binding • The scheduling task selects the control step in which a given operation will happen, i.e., it assigns each operation to an execution cycle. • Sharing: bind a resource to more than one operation; the shared operations must not execute concurrently. • The graph is scheduled hierarchically in a bottom-up fashion. • Power tradeoffs: • shorter schedules enable supply voltage (Vdd) scaling; • the schedule directly impacts resource sharing; • energy consumption depends on what the previous instruction was, so reordering can minimize switching on the control path. • Clock selection: • eliminate slacks; • choose the optimal system clock period.

  15. ASAP Scheduling Algorithm (HAL example)

  16. ALAP Scheduling Algorithm (HAL example)
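A compact sketch of both schedulers (the HAL DFG itself lives in the figures and is not reproduced): the DFG is given as a map from each node to its predecessors, delays are in control steps, and an operation's mobility is the gap between its ALAP and ASAP start times.

```python
def asap(dfg, delay):
    """Start each operation as soon as all of its predecessors finish.
    dfg maps node -> list of predecessors; delay maps node -> cycles."""
    start = {}
    def t(v):
        if v not in start:
            start[v] = max((t(p) + delay[p] for p in dfg[v]), default=0)
        return start[v]
    for v in dfg:
        t(v)
    return start

def alap(dfg, delay, t_max):
    """Start each operation as late as the deadline t_max allows."""
    succs = {v: [] for v in dfg}
    for v, preds in dfg.items():
        for p in preds:
            succs[p].append(v)
    start = {}
    def t(v):
        if v not in start:
            start[v] = min((t(s) for s in succs[v]), default=t_max) - delay[v]
        return start[v]
    for v in dfg:
        t(v)
    return start
```

Mobility is then `alap(...)[v] - asap(...)[v]` for each operation v; zero mobility marks the critical path, and the force-directed and list schedulers below both consume it.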

  17. Force-Directed Scheduling • Used as a priority function. • Force is related to concurrency; sort operations for least force. • Mechanical analogy: force = constant × displacement, where the constant is the operation-type distribution and the displacement is the change in scheduling probability.
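Under that analogy, a sketch of the force computation, assuming `probs` maps each operation to its scheduling-probability distribution over control steps (uniform across its mobility range): the distribution graph plays the spring constant and the probability change the displacement:

```python
def distribution_graph(probs, steps):
    """DG(s): expected number of same-type operations in control step s."""
    return {s: sum(p.get(s, 0.0) for p in probs.values()) for s in steps}

def self_force(op, target, probs, steps):
    """Force of fixing `op` into step `target`:
    sum over steps of DG(s) * (new probability - old probability)."""
    dg = distribution_graph(probs, steps)
    new = {s: 1.0 if s == target else 0.0 for s in steps}
    return sum(dg[s] * (new[s] - probs[op].get(s, 0.0)) for s in steps)

# m2 is already fixed in step 1; m1 could go in step 1 or 2
probs = {"m1": {1: 0.5, 2: 0.5}, "m2": {1: 1.0}}
print(self_force("m1", 1, probs, (1, 2)))  # +0.5: raises concurrency
print(self_force("m1", 2, probs, (1, 2)))  # -0.5: the lower-force choice
```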

  18. Force-Directed Scheduling

  19. Example: Operation V6

  20. Force-Directed Scheduling Algorithm (Paulin)

  21. Force-Directed Scheduling Example (figures): probability of scheduling operations into control steps; the same probabilities after operation o3 is scheduled into step s2; operator cost for the multiplications in (a); operator cost for the multiplications in (c).

  22. List Scheduling (figures): DFG with mobility labeling (inside <>); ready-operation list under the resource constraint; the scheduled DFG.
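A sketch of the resource-constrained list scheduler the figures walk through, assuming a single class of `n_units` identical units and using the ASAP/ALAP mobility as the priority, least mobility (most urgent) first:

```python
def list_schedule(dfg, delay, mobility, n_units):
    """At each cycle, start as many ready operations as free units allow,
    least mobility first. dfg maps node -> predecessor list."""
    done, running, schedule, cycle = set(), {}, {}, 0
    while len(done) < len(dfg):
        for v, t_end in list(running.items()):   # retire finished operations
            if t_end <= cycle:
                done.add(v)
                del running[v]
        ready = sorted((v for v in dfg
                        if v not in done and v not in running
                        and all(p in done for p in dfg[v])),
                       key=lambda v: mobility[v])
        for v in ready[:max(0, n_units - len(running))]:
            schedule[v] = cycle                  # start v this cycle
            running[v] = cycle + delay[v]
        cycle += 1
    return schedule
```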

  23. Static-List Scheduling (figures): DFG; priority list; partial schedule of five nodes; the final schedule.

  24. Divide-and-Conquer to Minimize Power Consumption • Decompose the computation into strongly connected components (SCCs); • Merge adjacent trivial SCCs into sub-parts; • Use pipelining to isolate the sub-parts; • For each sub-part: • minimize the number of delays using retiming; • if the sub-part is linear, apply optimal unfolding; • otherwise, apply unfolding after isolating the nonlinear operations; • Merge linear sub-parts to optimize further; • Schedule the merged sub-parts to minimize memory usage.

  25. Choosing Optimal Clock Period

  26. SCC decomposition step • Uses the standard depth-first-search-based algorithm [Tarjan, 1972], which has low-order polynomial-time complexity. • For any pair of operations A and B within an SCC, there exist both a path from A to B and a path from B to A. • The graph formed by the SCCs is acyclic, so the SCCs can be isolated from each other using pipeline delays, which lets each SCC be optimized separately.
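For reference, a sketch of the DFS-based algorithm cited above [Tarjan, 1972], for a graph given as a successor map; the recursive form shown here is fine for small DFGs:

```python
def tarjan_scc(graph):
    """Strongly connected components in one depth-first pass,
    linear in nodes + edges. graph maps node -> list of successors."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def dfs(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            dfs(v)
    return sccs
```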

  27. Identifying SCCs • The first step of the approach is to identify the computation's strongly connected components.

  28. Choosing Optimal Clock Period

  29. Supply Voltage Scaling • Lowering Vdd reduces energy but increases delay.
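First-order models make the trade-off concrete (parameter values here are illustrative, not from the slides): switching energy scales as C*Vdd^2, while gate delay grows roughly as Vdd/(Vdd - Vt)^2, so lowering Vdd saves energy quadratically at the cost of speed:

```python
def energy_per_op(c_load, vdd):
    """Switching energy: quadratic in the supply voltage."""
    return c_load * vdd ** 2

def gate_delay(vdd, v_t=0.7, k=1.0):
    """First-order delay model: slows sharply as Vdd nears V_t."""
    return k * vdd / (vdd - v_t) ** 2

for vdd in (5.0, 3.3, 2.5):
    print(f"Vdd={vdd}: energy={energy_per_op(1.0, vdd):.1f}, "
          f"delay={gate_delay(vdd):.3f}")
```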

  30. Multiple Supply Voltages: Filter Example

  31. Scheduling using shut-down: |a-b|
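Conceptually, the example schedules the comparison first and then activates only the subtracter whose result is needed; in hardware the idle unit's inputs would be latched or its clock gated. A minimal sketch:

```python
def abs_diff(a, b):
    """|a - b| with shut-down-style scheduling: compare first, then
    fire only one of the two subtracters; the other stays idle."""
    if a >= b:
        return a - b   # the (b - a) unit is shut down this cycle
    return b - a       # the (a - b) unit is shut down this cycle
```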

  32. Loop Scheduling • Sequential execution • Partial loop unrolling • Loop folding
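A sketch of the partial-unrolling case (unroll factor 2, names illustrative): the two accumulators are independent, so a synthesis tool can bind them to two adders and halve the iteration count at the cost of one extra resource:

```python
def sum_unrolled(xs):
    """Sum xs with the loop body unrolled twice."""
    acc0 = acc1 = 0
    n = len(xs) - len(xs) % 2
    for i in range(0, n, 2):
        acc0 += xs[i]        # the two accumulations are independent,
        acc1 += xs[i + 1]    # so they can run on two adders in parallel
    if n < len(xs):          # epilogue for an odd element count
        acc0 += xs[-1]
    return acc0 + acc1
```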

  33. Loop Folding • Reduces the execution delay of a loop. • Pipelines operations inside the loop, overlapping the execution of operations across iterations. • Needs a prologue and an epilogue. • Uses pipeline scheduling on the loop graph model.
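A conceptual sketch of folding a two-stage body (load, then compute): the prologue issues the first load, the steady-state body overlaps the compute of iteration i-1 with the load of iteration i (concurrent in hardware, merely reordered here), and the epilogue drains the last compute. Assumes a non-empty input; all names are illustrative:

```python
def folded_loop(xs):
    """Original body per iteration: load x, then compute 2*x.
    Folded: the compute of iteration i-1 overlaps the load of i."""
    ys = []
    loaded = xs[0]                 # prologue: first load only
    for i in range(1, len(xs)):
        computed = loaded * 2      # stage 2 of iteration i-1 ...
        loaded = xs[i]             # ... overlaps stage 1 of iteration i
        ys.append(computed)
    ys.append(loaded * 2)          # epilogue: drain the final compute
    return ys

print(folded_loop([1, 2, 3]))  # [2, 4, 6]
```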

  34. DFG Restructuring (figures): DFG2; DFG2 after redundant-operation insertion.

  35. Minimizing bit transitions for constants during scheduling
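The slide's figure is not reproduced; one common technique consistent with the title is to order the constants driven onto a shared bus or register so that successive values differ in few bits. A greedy nearest-neighbour sketch over Hamming distance, with invented coefficient values:

```python
def transitions(consts):
    """Total Hamming distance between successive constants on a bus."""
    return sum(bin(a ^ b).count("1") for a, b in zip(consts, consts[1:]))

def order_constants(consts):
    """Greedy nearest-neighbour ordering by Hamming distance."""
    remaining = list(consts)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda c: bin(order[-1] ^ c).count("1"))
        remaining.remove(nxt)
        order.append(nxt)
    return order

coeffs = [0x0F, 0xF0, 0x0E, 0xF1]
print(transitions(coeffs), transitions(order_constants(coeffs)))  # 23 9
```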

  36. Control Synthesis • Synthesize a circuit that: • executes the scheduled operations; • provides synchronization; • supports iteration, branching, hierarchy, and interfaces.

  37. Allocation • Bind a resource to more than one operation.

  38. Optimum binding

  39. Example
