
High-Level Synthesis for Reconfigurable Systems



  1. High-Level Synthesis for Reconfigurable Systems

  2. High-Level Synthesis
  • New computing platforms: their success depends on the ease of programming (microprocessors, DSPs).
  • Parallel computing: why is it not as successful as sequential programming? A parallel implementation of a given application requires two steps:
    • The application must be partitioned to identify the dependencies among its parts (to find the parts that can execute in parallel). This requires a good knowledge of the structure of the application.
    • Mapping must allocate the independent blocks of the application to the computing hardware resources. This requires a good understanding of the underlying hardware structure.
  • Reconfigurable computing faces a similar problem: code must be written in an HDL, and it is not easy to decouple the implementation process from the hardware.
  • High-level descriptions increase the acceptance of a platform.

  3. High-Level Synthesis in FPGAs
  • Vivado HLS (UG910, UG871):
    • Accepts C/C++/SystemC and produces RTL VHDL/Verilog.
    • Provides data types for bit-wise operations: not part of standard C/C++, but they can produce more efficient results.
    • Supports only a subset of high-level language constructs: most system calls are not supported (printf, fprintf, getc, time, sleep, scanf, alloc/malloc, free, …).
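
  A minimal sketch of the kind of input such a tool accepts, assuming Vivado HLS's ap_int.h header for bit-accurate types (the kernel name and the chosen width are hypothetical):

    // scale_and_add.cpp -- hypothetical HLS kernel sketch.
    #include "ap_int.h"            // Vivado HLS arbitrary-precision types

    typedef ap_int<12> sample_t;   // 12-bit signed type: bit-accurate widths
                                   // can save FPGA area versus a plain "int"

    // Synthesizable in principle: no printf/malloc/system calls inside.
    sample_t scale_and_add(sample_t a, sample_t b) {
        return static_cast<sample_t>(a + (b >> 1));
    }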

  4. Modeling
  • Modeling is a key aspect in the design of a system described at a high level.
  • The models used must be powerful enough to capture all of the user's needs, yet easy to understand and manipulate.
  • Powerful models: dataflow graphs, sequencing graphs, finite state machines with datapath (FSMD).

  5. Dataflow Graph
  • DFG: given a set of tasks {T1, …, Tk}, a DFG is a DAG G(V, E), where:
    • V (= T): the set of nodes (operators),
    • E: the set of edges (data dependencies between tasks).
  • [Figure: DFG for the quadratic root computation]
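
  As a small illustration (not from the slides), a DFG can be stored as a plain adjacency-list DAG; all names below are my own:

    // Minimal DFG sketch: nodes are operators, edges are data dependencies.
    #include <string>
    #include <vector>

    struct DfgNode {
        std::string op;              // operator label, e.g. "*", "+", "sqrt"
        std::vector<int> successors; // indices of nodes that consume the result
    };

    struct Dfg {
        std::vector<DfgNode> nodes;  // must remain acyclic (a DAG)
        void addEdge(int from, int to) { nodes[from].successors.push_back(to); }
    };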

  6. Definitions
  • Each node vi ∈ V has a hardware implementation Hvi of length li and height hi, occupying area ai = li × hi.
  • Latency ti of vi: the time it takes to compute the function of vi using Hvi.
  • Weight wij of eij ∈ E: the width of the bus connecting the two components Hvi and Hvj.
  • Latency tij of eij: the time needed to transmit data from Hvi to Hvj.
  • Simplification: vi is used to denote both a node and its hardware implementation Hvi.

  7. DFG
  • DFG limitation: any high-level program can be compiled into a DFG, provided that it contains:
    • no loops, and
    • no branching instructions.
  • Therefore, a DFG is used only for parts of a program.
  • Extensions to other models: sequencing graphs, FSMD, CDFG.

  8. Finite State Machine with Datapath (FSMD)
  • FSMD: an extension of a DFG with an FSM; a 7-tuple <S, I, O, V, F, H, s0>:
    • S = {s0, …, sl}: states; s0: the initial state,
    • I = {i0, …, im}: inputs,
    • O = {o0, …, on}: outputs,
    • V = {v0, …, vn}: variables,
    • F: S × I × V → S: a transition function that maps a tuple (si, ij, vk) to a state,
    • H: S → O ∪ V: an action function that maps the current state to outputs and variable updates.
  • FSMD vs. FSM:
    • An FSMD operates on arbitrarily complex data types, not just Boolean variables.
    • F and H may include arithmetic operations, not just Boolean operations.

  9. FSMD
  • Modeling with FSMD: a program is transformed into an FSMD by transforming the statements of the program into FSMD states. The statements fall into three categories:
    • assignment statements,
    • branch statements,
    • loop statements.
  • Assignment statement: a single state is created that executes the assignment action, and an arc is created connecting that state with the state of the next statement.

  10. FSMD: Loop
  • Loop statement. Added nodes (with no action):
    • condition state C,
    • join state J.
  • Added arcs:
    • labeled with the loop condition, connecting C to the state of the first statement in the loop body;
    • labeled with the complement of the loop condition, connecting C to the first statement after the loop body;
    • connecting the state of the last statement in the loop body to the join state;
    • connecting the join state back to the condition state C.

  11. FSMD: Branch
  • Branch statement. Added nodes (with no action):
    • condition state C,
    • join state J.
  • Added arcs:
    • labeled with the first branch condition, connecting C to the state of the first statement of that branch;
    • labeled with the complement of the first condition ANDed with the second branch condition, connecting C to the first statement of the second branch;
    • and so on for the remaining branches.
  • Each state corresponding to the last statement of a branch is connected to the join state J.
  • The join state is connected to the state corresponding to the first statement after the branch.

  12. FSMD: Example

  int gcd(int x, int y) {
      bool fin = false;   // declaration added; the slide leaves fin undeclared
      while (!fin) {
          if (x > y)
              x = x - y;
          else if (x < y)
              y = y - x;
          else
              fin = true;
      }
      return x;
  }
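
  To make the loop and branch transformations concrete, here is a hypothetical software rendering of the same gcd as an FSMD: each case is one state, the while loop contributes a condition state, and the if/else chain contributes a branch state (the state names are my own):

    #include <cstdio>

    // gcd as an explicit FSMD: one state per statement, plus condition
    // states for the loop and the branch (slides 9-11).
    int gcd_fsmd(int x, int y) {
        enum State { INIT, LOOP_COND, BRANCH, SUB_XY, SUB_YX, SET_FIN, RETURN_X };
        State s = INIT;
        bool fin = false;
        while (true) {
            switch (s) {
                case INIT:      fin = false;                 s = LOOP_COND; break;
                case LOOP_COND: s = fin ? RETURN_X : BRANCH;                break;
                case BRANCH:    s = (x > y) ? SUB_XY
                                  : (x < y) ? SUB_YX : SET_FIN;             break;
                case SUB_XY:    x = x - y;                   s = LOOP_COND; break;
                case SUB_YX:    y = y - x;                   s = LOOP_COND; break;
                case SET_FIN:   fin = true;                  s = LOOP_COND; break;
                case RETURN_X:  return x;
            }
        }
    }

    int main() { std::printf("%d\n", gcd_fsmd(24, 36)); }  // prints 12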

  13. FSMD: Datapath Synthesis
  • Datapath synthesis steps:
    • a register for each variable,
    • a functional unit (FU) for each arithmetic operation,
    • connecting FU ports to registers,
    • creating a unique identifier for each control input/output of the FUs,
    • resource sharing via MUXes.

  14. Controller Synthesis

  15. High-Level Synthesis
  • High-level synthesis (architectural synthesis): transforming an abstract model of circuit behavior into a datapath and a control unit. Steps:
    • Allocation: defines the resource types required by the design and, for each type, the number of instances.
    • Binding: maps each operation to an instance of a given resource.
    • Scheduling: sets the temporal assignment of resources to operators, i.e., decides which operator owns a resource at a given time.

  16. Allocation
  • Allocation (formal definition): for a given specification with a set of operators or tasks T = {t1, t2, …, tn} to be implemented on a set of resource types R = {r1, r2, …, rt}, allocation is a function α : R → Z+, where α(ri) = zi is the number of available instances of resource type ri.

  17. Allocation
  • Example:
    • T = {*, *, *, -, -, *, *, *, +, +, <}
    • R = {Adder, MUL} = {1, 2}; the Adder performs add, sub, and compare.
    • α(1) = 5, α(2) = 6 (one instance per operation in this DFG).

  18. Allocation
  • T = {*, *, *, -, -, *, *, *, +, +, <}
  • R = {Adder, MUL} = {1, 2}
  • Another definition: α : T → Z+, mapping each operator in the specification to a resource type:
    • α(t1) = 2, α(t2) = 2, …, α(t5) = 1, …, α(t11) = 1.
  • More than one resource type may exist for an operator, e.g. {CLA, RipAdd, BoothMul, SerMul, …}:
    • resource selection is then needed (α is one-to-many).

  19. Binding
  • Binding (formal definition): for T = {t1, t2, …, tn} and R = {r1, r2, …, rt}, binding is a function β : T → R × Z+, where β(ti) = (ri, bi), with 1 ≤ bi ≤ α(ri), gives the instance of the resource type ri onto which ti is mapped.

  20. Binding
  • Example:
    • T = {*, *, *, -, -, *, *, *, +, +, <}
    • R = {Adder, MUL} = {1, 2}; the Adder performs add, sub, and compare.
    • β(t1) = (2,1), β(t2) = (2,2), β(t3) = (2,3), β(t4) = (1,1), β(t5) = (1,2), β(t6) = (2,4), β(t7) = (2,5), β(t8) = (2,6), …, β(t11) = (1,5).

  21. Scheduling
  • Scheduling (formal definition): for T = {t1, t2, …, tn}, scheduling is a function ς : T → Z+, where ς(ti) denotes the starting time of task ti.
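
  Taken together, α, β, and ς are just lookup tables. The sketch below (my own encoding, shown only for a few tasks of the adder/multiplier example of slides 17-20) makes the three definitions concrete:

    #include <cassert>
    #include <map>
    #include <utility>

    using Task = int;      // t1..t11
    using ResType = int;   // 1 = Adder (add/sub/compare), 2 = MUL

    int main() {
        // Allocation alpha : R -> Z+, instances available per resource type.
        std::map<ResType, int> alpha = {{1, 5}, {2, 6}};

        // Binding beta : T -> R x Z+, task -> (type, instance).
        std::map<Task, std::pair<ResType, int>> beta = {
            {1, {2, 1}}, {2, {2, 2}}, {3, {2, 3}}, {4, {1, 1}}, {5, {1, 2}},
        };

        // Scheduling sigma : T -> Z+, task -> starting time (control step).
        std::map<Task, int> sigma = {{1, 0}, {2, 0}, {4, 1}};

        // Sanity check: every bound instance exists, i.e. 1 <= b_i <= alpha(r_i).
        for (const auto& [t, rb] : beta)
            assert(rb.second >= 1 && rb.second <= alpha[rb.first]);
        (void)sigma;
        return 0;
    }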

  22. High-Level Synthesis
  • Fundamental difference in RCS: uniform resources.
  • It is possible to implement any task on any given part of a device (provided that the available resources are sufficient).

  23. General vs. RCS High-Level Synthesis
  • Example. Assumptions on a "fixed-resource" device:
    • Multiplication needs 100 units of area; the adder and the subtractor need 50 units each.
    • Allocation selects one instance of each resource type.
  • Hence two subtractors cannot be used in the first level, and the adder cannot be used in the first step due to a data dependency.
  • Minimum execution time: 4 steps.
  • [Figure: DFG with add, sub, and mul nodes scheduled on the fixed resources]

  24. General vs. RCS High-Level Synthesis
  • Assumptions on a reconfigurable device:
    • Multiplication needs 100 LUTs; an adder/subtractor needs 50 LUTs.
    • Total available amount of resources: 200 LUTs.
  • The two subtractors can be assigned in the first step.
  • Minimum execution time: 3 steps.

  25. Temporal Partitioning
  • Resources on the device are not allocated to a single operator but to a set of operators that must be placed on the device at the same time and later removed together.
  • An application must therefore be partitioned into sets of operators.
  • The partitions are then implemented successively, at different times, on the device.

  26. Configuration
  • Configuration: given a reconfigurable processing unit H and a set of tasks T = {t1, …, tn} available as cores C = {c1, …, cn}, we define the configuration ζi of the RPU at time si to be the set of cores ζi = {ci1, …, cik} ⊆ C running on H at time si.
  • The library contains a core (module) ci for each ti, as a hard, soft, or firm module.

  27. Firm vs. Soft Modules
  • [Figures: placement of soft modules vs. placement of firm modules]

  28. Schedule
  • Schedule: a function ς : V → Z+, where ς(vi) denotes the starting time of the node vi that implements a task ti.
  • Feasible schedule: ς is feasible if for every edge eij = (vi, vj) ∈ E: ς(tj) ≥ ς(ti) + T(ti) + tij, where
    • eij defines a data dependency between tasks ti and tj,
    • tij is the latency of the edge eij,
    • T(ti) is the time it takes the node vi to complete execution.
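
  The feasibility condition transcribes directly into code; the structure and names below are hypothetical:

    #include <vector>

    struct Edge { int from, to, latency; };  // latency = t_ij of edge e_ij

    // A schedule is feasible iff for every edge (v_i, v_j):
    //   start[j] >= start[i] + exec[i] + t_ij.
    bool isFeasible(const std::vector<int>& start,   // schedule: start time per node
                    const std::vector<int>& exec,    // T(t_i): execution time per node
                    const std::vector<Edge>& edges) {
        for (const Edge& e : edges)
            if (start[e.to] < start[e.from] + exec[e.from] + e.latency)
                return false;
        return true;
    }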

  29. Ordering Relation
  • Ordering relation ≤: vi ≤ vj if and only if, for every schedule ς, ς(vi) ≤ ς(vj).
  • ≤ is a partial ordering, as it is not defined for all pairs of nodes in G.

  30. Partition
  • Partition: a partition P of the graph G = (V, E) is its division into disjoint subsets P1, …, Pm such that ∪k=1,…,m Pk = V.
  • Feasible partition: a partition is feasible for a reconfigurable device H (area a(H), pin count p(H)) if:
    • for every Pk ∈ P: a(Pk) = Σvi∈Pk ai ≤ a(H), and
    • (1/2) Σ wij ≤ p(H), the sum taken over the crossing edges eij (those with eij ∩ Pk ≠ Ø and eij − Pk ≠ Ø).
  • Crossing edge: an edge that connects a component inside a partition with a component outside that partition.
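
  A sketch of the two feasibility checks, with the partition encoded as a block id per node (my own encoding; the slide's 1/2 factor disappears here because each crossing edge is counted only once):

    #include <vector>

    struct WEdge { int u, v, width; };  // width = w_ij of edge e_ij

    // Area condition: a(P_k) = sum of a_i over the nodes in block k <= a(H).
    bool areaFits(const std::vector<int>& area, const std::vector<int>& blockOf,
                  int k, int deviceArea) {
        int sum = 0;
        for (std::size_t i = 0; i < area.size(); ++i)
            if (blockOf[i] == k) sum += area[i];
        return sum <= deviceArea;
    }

    // Pin condition: total width of edges crossing block k's boundary <= p(H).
    bool pinsFit(const std::vector<WEdge>& edges, const std::vector<int>& blockOf,
                 int k, int pinCount) {
        int w = 0;
        for (const WEdge& e : edges)
            if ((blockOf[e.u] == k) != (blockOf[e.v] == k)) w += e.width;  // crossing
        return w <= pinCount;
    }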

  31. Run Time
  • Run time r(Pi) of a partition: the maximum time from the input of the data to the output of the result.

  32. Ordering Relation
  • Ordering relation for partitions: Pi ≤ Pj if and only if, for all vi ∈ Pi and vj ∈ Pj, either vi ≤ vj or vi and vj are not in relation.
  • Ordered partitions: a partitioning P is ordered if and only if an ordering relation ≤ exists on P.

  33. Temporal Partitioning • Temporal partitioning: • Given a DFG G = (V,E) and a reconfigurable device H, a temporal partitioning of G on H is an ordered partitioning P of G with respect to H.

  34. Temporal Partitioning
  • Cycles are not allowed in the DFG; otherwise, the resulting partitions may not be schedulable on the device.
  • [Figure: a cycle between two partitions]

  35. Configuration Graph
  • Goal: computation and scheduling of a configuration graph.
  • A configuration graph is a graph in which nodes are partitions (or bitstreams) and edges reflect the precedence constraints in a given DFG.
  • Formal definition: given a DFG G = (V, E) and a temporal partitioning P = {P1, …, Pn} of G, the configuration graph of G relative to P, written Γ(G/P) = (P, EP), has the partitions in P as nodes; an edge eP = (Pi, Pj) ∈ EP exists if and only if there is an edge e = (vi, vj) ∈ E with vi ∈ Pi and vj ∈ Pj.
  • Configuration: for a given partitioning P, each node Pi ∈ P has an associated configuration ζi, which is the implementation of Pi on the given device H.
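
  Deriving the edge set EP of Γ(G/P) from the DFG and a block assignment takes a single pass over the DFG edges; a sketch with hypothetical names:

    #include <set>
    #include <utility>
    #include <vector>

    // (P_i, P_j) is an edge of the configuration graph iff some DFG edge
    // (v_i, v_j) has v_i in P_i and v_j in P_j, with P_i != P_j.
    std::set<std::pair<int, int>> configGraphEdges(
            const std::vector<std::pair<int, int>>& dfgEdges, // (v_i, v_j)
            const std::vector<int>& blockOf) {                // node -> partition id
        std::set<std::pair<int, int>> ep;
        for (const auto& [u, v] : dfgEdges)
            if (blockOf[u] != blockOf[v])
                ep.insert({blockOf[u], blockOf[v]});
        return ep;
    }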

  36. Temporal Partitioning
  • Whenever a new partition is downloaded, the partition that was running is destroyed.
  • Communication is done through inter-configuration registers (or a communication memory), which may sit in main memory or at the boundary of the device to hold the input and output values.
  • The configuration sequence is controlled by the host processor.
  • [Figure: the device's IO registers mapped into the processor's address space over the bus]

  37. Temporal Partitioning
  • Steps (for Pi and Pj with Pi ≤ Pj):
    • 1. The configuration for Pi is downloaded into the device.
    • 2. Pi executes.
    • 3. Pi copies all the data it needs to send to other partitions into the communication memory.
    • 4. The device is reconfigured to implement the partition Pj.
    • 5. Pj accesses the communication memory and collects the data.
  • A sketch of this host-side control loop is shown below.
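
  A minimal sketch, where configure, run, and copyResults are stand-in stubs for a platform-specific reconfiguration and bus API, not a real vendor library:

    #include <vector>

    // Stand-ins for a platform-specific device API.
    void configure(int bitstreamId) { /* download the partition's bitstream */ }
    void run(int bitstreamId)       { /* execute until the partition finishes */ }
    void copyResults(int bitstreamId) { /* stage outputs in the comm memory */ }

    // Execute an ordered temporal partitioning P1 <= ... <= Pn one by one.
    void executePartitions(const std::vector<int>& orderedBitstreams) {
        for (int id : orderedBitstreams) {
            configure(id);    // reconfiguring destroys the previous partition
            run(id);          // inputs are read from the communication memory
            copyResults(id);  // outputs must survive the next reconfiguration
        }
    }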

  38. Temporal Partitioning
  • Objectives for optimization:
    • Overall computation delay: depends on the partition run times, the reconfiguration time, and the speed of data exchange with the communication memory.
    • Number of interconnections: very important, since minimizing it minimizes the amount of exchanged data and the amount of memory needed to temporarily store that data.
    • Number of produced blocks (partitions): fewer blocks means fewer reconfigurations (and less total reconfiguration time).
    • Overall amount of wasted resources on the chip: when components with shorter run times are placed in the same partition as components with longer run times, the shorter-running components remain idle for long periods.

  39. Wasted Resources
  • Wasted resource wr(vi) of a node vi: the unused area occupied by the node vi during the computation of a partition:
    • wr(vi) = (t(Pi) − T(ti)) × ai, where
    • t(Pi): run time of the partition Pi,
    • T(ti): run time of the component vi,
    • ai: area of vi.
  • Wasted resource of a partition Pi = {vi1, …, vin}: wr(Pi) = Σj=1,…,n wr(vij).
  • Wasted resource of a partitioning P = {P1, …, Pk}: wr(P) = Σj=1,…,k wr(Pj).
  • [Figure: components T1, T2, T6, …, TN occupying the device area over the partition's run time]
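
  Under the assumption that a partition's run time t(Pi) is set by its slowest component, wr(Pi) can be computed as follows (names are my own):

    #include <algorithm>
    #include <vector>

    struct Comp { int runTime, area; };  // T(t_i) and a_i of one component

    // wr(P_i) = sum over components of (t(P_i) - T(t_i)) * a_i,
    // taking t(P_i) as the run time of the slowest component (an assumption).
    int wastedResource(const std::vector<Comp>& partition) {
        int tPi = 0;
        for (const Comp& c : partition) tPi = std::max(tPi, c.runTime);
        int wr = 0;
        for (const Comp& c : partition) wr += (tPi - c.runTime) * c.area;
        return wr;
    }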

  40. Communication Overhead
  • Communication cost is modelled as graph connectivity.
  • Connectivity of a graph G = (V, E): con(G) = 2|E| / (|V|² − |V|).
  • (|V|² − |V|)/2 is the number of all edges that can be built over V.
  • [Figure: example graph with 10 nodes; connectivity = 0.24]

  41. Communication Overhead
  • Quality of a partitioning P = {P1, …, Pn}: the average connectivity over P: Q(P) = (1/n) Σi=1,…,n con(Pi).
  • [Figures: two partitionings of the same 10-node graph (connectivity 0.24), with qualities 0.25 and 0.45]
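
  Both formulas are one-liners in code; here each block is passed as a (|V|, |E|) pair (my own encoding):

    #include <utility>
    #include <vector>

    // con(G) = 2|E| / (|V|^2 - |V|): the fraction of all possible edges present.
    double connectivity(int nodes, int edges) {
        return 2.0 * edges / (static_cast<double>(nodes) * nodes - nodes);
    }

    // Q(P) = (1/n) * sum of con(P_i) over the n blocks of the partitioning.
    double quality(const std::vector<std::pair<int, int>>& blocks) {
        double q = 0.0;
        for (const auto& [v, e] : blocks) q += connectivity(v, e);
        return q / blocks.size();
    }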

  42. Communication Overhead
  • Minimize communication overhead by minimizing the weighted sum of crossing edges among the partitions:
    • this minimizes the size of the communication memory, and
    • it minimizes the communication time.
  • Heuristic: place highly connected components in the same partition (high-quality partitioning).

  43. References • [Bobda07] Christophe Bobda, “Introduction to Reconfigurable Computing: Architectures, Algorithms and Applications,” Springer, 2007.
