
ECE 636 Reconfigurable Computing Lecture 13 Mid-term I Review






  1. ECE 636 Reconfigurable Computing Lecture 13 Mid-term I Review

  2. SRAM-based FPGA [Figure: a five-transistor SRAM programming bit (Data, Read or Write, Q) and a 2-input LUT built from programming bits P1-P4 with inputs I1, I2 and output Out] • SRAM bits can be programmed many times • Each programming bit takes up five transistors • The larger device area reduces speed versus EPROM and antifuse alternatives.
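
To make the programming bits concrete, here is a minimal C sketch (not from the lecture; the function name and the XOR configuration value are illustrative): a 2-input LUT simply returns the stored bit addressed by its input pair, exactly like the P1-P4 bits feeding a 4:1 multiplexer.

    #include <stdio.h>
    #include <stdint.h>

    /* A 2-input LUT is a 4-bit truth table selected by the input pair. */
    static int lut2_eval(uint8_t config, int i1, int i2)
    {
        int index = (i1 << 1) | i2;        /* inputs act as the mux select    */
        return (config >> index) & 1;      /* programming bit at that address */
    }

    int main(void)
    {
        uint8_t xor_cfg = 0x6;             /* truth table 0110 = XOR */
        for (int a = 0; a < 2; a++)
            for (int b = 0; b < 2; b++)
                printf("%d XOR %d = %d\n", a, b, lut2_eval(xor_cfg, a, b));
        return 0;
    }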

  3. Field Programmable Gate Array

  4. Connection Box Flexibility [Figure: a logic cluster input pin connecting to routing tracks T0-T2 through a connection box, Fc = 3] • Fc: how many tracks does an input pin connect to? • If the logic cluster is small, Fc must be large (Fc = W) • If the logic cluster is large, Fc can be smaller • Approximately 0.2W for the Xilinx XC4000EX and Virtex

  5. Switchbox Flexibility [Figure: switch box connecting numbered track segments on its four sides] • The switch box provides interconnection in an optimized area • Switchbox flexibility was found to be less important than Fc • Six transistors are needed for Fs = 3

  6. Switchbox Issues

  7. Bidirectional vs Directional

  8. Directional Architecture • New connectivity constraint: single-driver wiring • Directional wiring: outputs can use switch block muxes

  9. Fine-grained Approach [Figure: two 16x1 LUTs (LUT1, LUT2) sharing address (A) and data (D) lines] • For 4-input LUTs, 16 bits of information are available • LUTs can be chained together through a programmable network • The decoder and multiplexer overhead is an issue • Flexibility is a key aspect

  10. Hill Climbing Algorithms [Figure: cost versus solution space, showing local and global minima] • To avoid getting trapped in local minima, consider a “hill-climbing” approach • Need to accept worse solutions, or make “bad” moves, to reach the global minimum • Acceptance is probabilistic: cost-increasing moves are accepted only some of the time
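
A minimal C sketch of the probabilistic acceptance rule (a simulated-annealing-style criterion; the temperature parameter and function name are illustrative, not from the lecture):

    #include <math.h>
    #include <stdlib.h>

    /* Improving moves are always taken; worsening moves are taken with
     * probability exp(-delta / T), so "bad" moves become rarer as T cools. */
    static int accept_move(double delta_cost, double temperature)
    {
        if (delta_cost <= 0.0)
            return 1;                               /* downhill: always accept  */
        double p = exp(-delta_cost / temperature);  /* uphill: accept sometimes */
        return ((double)rand() / RAND_MAX) < p;
    }

With a slowly decreasing temperature this is the usual simulated-annealing acceptance test; holding the temperature fixed gives a simpler stochastic hill climber.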

  11. Maze Routing [Figure: routing grid with a source and sink to be connected] • Evaluate shortest feasible paths based on a cost function • Like a row-based device, the global route allocates channel bandwidth rather than specific track assignments • Formulate the cost function as needed to address the desired goal
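
A minimal C sketch of the wavefront expansion behind maze routing (grid size, the blocked/unseen encoding, and unit step cost are assumptions for illustration):

    #define W 16
    #define H 16
    #define BLOCKED -2
    #define UNSEEN  -1

    /* Breadth-first wavefront expansion over a routing grid (Lee-style maze
     * routing with unit cost per step).  The caller initializes cost[][] to
     * UNSEEN or BLOCKED; the route is recovered by walking back from the
     * sink along decreasing labels. */
    static int maze_route(int cost[H][W], int sx, int sy, int tx, int ty)
    {
        int qx[W * H], qy[W * H], head = 0, tail = 0;
        static const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};

        cost[sy][sx] = 0;
        qx[tail] = sx; qy[tail] = sy; tail++;

        while (head < tail) {
            int x = qx[head], y = qy[head]; head++;
            if (x == tx && y == ty)
                return cost[y][x];              /* shortest path length found */
            for (int d = 0; d < 4; d++) {
                int nx = x + dx[d], ny = y + dy[d];
                if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
                if (cost[ny][nx] != UNSEEN) continue;   /* blocked or visited */
                cost[ny][nx] = cost[y][x] + 1;
                qx[tail] = nx; qy[tail] = ny; tail++;
            }
        }
        return -1;                              /* no feasible path */
    }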

  12. Routing Tradeoffs • Bias the router between finding the first route quickly and finding the best route • Vary the number of node expansions using: pcost_i = (1 - a) x pcost_(i-1) + ncost_i + a x dist_i, where a weights the estimated distance to the sink
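
A one-function C sketch of this path-cost update (variable names mirror the slide; the interpretation of the weighting factor follows the slide's note about varying the number of node expansions):

    /* a = 0 gives an undirected, best-route search; larger a directs the
     * search toward the sink so far fewer nodes are expanded. */
    static double path_cost(double prev_pcost, double node_cost,
                            double dist_to_sink, double a)
    {
        return (1.0 - a) * prev_pcost + node_cost + a * dist_to_sink;
    }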

  13. Architectural Limitation • Routing architecture necessitates domain selection. • Bigger effect for multi-fanout nets

  14. Pathfinder • Use a non-decreasing history value to represent congestion • Similarities to multi-commodity flow • Can be implemented efficiently but still requires substantial run time • History costs are updated only after each iteration: c_n = (1 + h_n x hfac) x (1 + p_n x pfac) + b_(n,n-1)
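
A sketch in C of how the node cost and history update might look (only the cost formula itself comes from the slide; the occupancy/capacity definition of the present cost p_n and the structure layout are assumptions in the style of VPR):

    /* h_n : history of congestion on node n (non-decreasing across iterations)
     * p_n : present overuse of node n in the current iteration
     * hfac / pfac scale how strongly congestion is penalized. */
    struct rr_node {
        double history;   /* h_n, bumped only after a routing iteration */
        int    occupancy; /* nets currently using this node */
        int    capacity;  /* nets the node can legally carry */
    };

    static double node_cost(const struct rr_node *n, double base_cost,
                            double hfac, double pfac)
    {
        int over = n->occupancy + 1 - n->capacity;   /* overuse if we add a net */
        double p_n = over > 0 ? over : 0;
        return (1.0 + n->history * hfac) * (1.0 + p_n * pfac) + base_cost;
    }

    /* Called once per node after a full iteration: history never decreases. */
    static void update_history(struct rr_node *n)
    {
        int over = n->occupancy - n->capacity;
        if (over > 0)
            n->history += over;
    }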

  15. Bipartitioning • Perhaps the biggest problem in multi-FPGA design is partitioning • The partitioner must deal with both logic and pin constraints • Partitioning could be attempted simultaneously across all devices, but even “simple” algorithms are O(n^3) • Better to recursively bipartition the circuit

  16. KLFM Partitioning [Figure: nodes being moved between Bin 1 and Bin 2] • Identify nodes to swap to reduce the overall cut size • Lock moved nodes • The algorithm continues until no unlocked node can be moved without violating the size constraints
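
A small C sketch of the move-selection step in one KLFM pass (the data layout is illustrative; a real implementation keeps gains in bucket lists, as the next slide notes, rather than scanning linearly):

    /* Pick the unlocked node with the highest gain whose move keeps the
     * destination bin within the size constraint; the caller moves it,
     * locks it, updates neighbor gains, and records the best cut seen. */
    struct node {
        int part;      /* 0 or 1: which bin the node is in     */
        int locked;    /* set once the node has been moved     */
        int gain;      /* cut reduction if the node were moved */
    };

    static int best_move(const struct node *nodes, int n,
                         const int bin_size[2], int max_bin_size)
    {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (nodes[i].locked) continue;
            if (bin_size[1 - nodes[i].part] + 1 > max_bin_size) continue;
            if (best < 0 || nodes[i].gain > nodes[best].gain)
                best = i;
        }
        return best;   /* -1 when no legal unlocked move remains */
    }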

  17. KLFM Partitioning • Key issue is implementing node costs in lists that can be easily accessed and updated. • Many extensions to consider to speed up overall optimization • Reasonably easy to implement in software

  18. Partition Preprocessing: Clustering • Identify the bin size • Choose a seed block (node) • Identify the node with the highest connectivity to the cluster and add it • Terminate when the cluster size is met • In practical terms, a cluster size of 4 works best
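
A greedy clustering sketch in C (the connectivity function is assumed to exist and is only declared here; the cluster-size limit of 4 follows the slide):

    #define MAX_CLUSTER 4

    /* Assumed helper: connectivity of a node to the current cluster. */
    extern int connectivity(int node, const int *cluster, int cluster_size);

    /* Grow one cluster from a seed by repeatedly absorbing the unclustered
     * node with the highest connectivity to the cluster so far. */
    static int grow_cluster(int seed, int *clustered, int n_nodes, int *cluster)
    {
        int size = 0;
        cluster[size++] = seed;
        clustered[seed] = 1;

        while (size < MAX_CLUSTER) {
            int best = -1, best_conn = 0;
            for (int v = 0; v < n_nodes; v++) {
                if (clustered[v]) continue;
                int c = connectivity(v, cluster, size);
                if (c > best_conn) { best_conn = c; best = v; }
            }
            if (best < 0) break;            /* nothing left that connects */
            cluster[size++] = best;
            clustered[best] = 1;
        }
        return size;
    }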

  19. Clustering [Flow: cluster, KLFM, uncluster, KLFM] • Technology mapping before partitioning is typically ineffective, since area is frequently secondary to interconnect • Bipartitioning frequently continues after unclustering as well • This allows additional fine-grain moves

  20. Logic Replication • Attempt to reduce the cutset by replicating logic • Every input of the original cell must also drive the replicated cell • Replication can either be integrated into the partitioning process or applied as a post-processing technique

  21. Logic Emulation • Emulation takes a sizable amount of resources • Compilation time can be large due to the FPGA compiles • Emulation is one application, with direct ties to other FPGA computing applications

  22. Are Meshes Realistic? • The number of wires leaving a partition grows according to Rent's Rule: P = K x G^B • Perimeter grows as G^0.5, but unfortunately most circuits grow as G^B with B > 0.5 • Effectively, devices are highly pin-limited • What does this mean for meshes?
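
A short C program illustrating the gap Rent's Rule implies (the Rent constants K = 2.5 and B = 0.6 are illustrative values, not from the lecture):

    #include <math.h>
    #include <stdio.h>

    /* Compare the pins a partition of G gates wants (K * G^B) with what a
     * mesh edge can offer, which only grows roughly as G^0.5. */
    int main(void)
    {
        double K = 2.5, B = 0.6;
        for (int g = 1000; g <= 1000000; g *= 10) {
            double rent_pins = K * pow(g, B);
            double perimeter = K * pow(g, 0.5);
            printf("G = %7d  Rent pins = %7.0f  perimeter pins ~ %6.0f\n",
                   g, rent_pins, perimeter);
        }
        return 0;
    }

Because B > 0.5, the demand curve pulls away from the perimeter curve as partitions grow, which is exactly why mesh-connected multi-FPGA systems become pin-limited.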

  23. Multi-FPGA Software • Missing high-level synthesis • Global placement and routing similar to intra-device CAD

  24. Virtual Wires • Overcome pin limitations by multiplexing pins and signals • Schedule when communication will take place.
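
A toy C sketch of the idea (the slot count and framing are assumptions; real virtual-wires systems synthesize this multiplexing logic and its schedule automatically, as the next slide describes):

    #define SLOTS 4   /* logical wires sharing one physical pin */

    static int tx_shadow[SLOTS];   /* wire values on the sending FPGA      */
    static int rx_latch[SLOTS];    /* reconstructed values on the receiver */

    /* Each minor cycle of the pin clock carries one scheduled logical wire. */
    static int pin_drive(int slot)                   /* sending FPGA  */
    {
        return tx_shadow[slot];
    }

    static void pin_sample(int slot, int pin_value)  /* receiving FPGA */
    {
        rx_latch[slot] = pin_value;
    }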

  25. Virtual Wires Software Flow • Global router enhanced to include scheduling and embedding. • Multiplexing logic synthesized from FPGA logic.

  26. Why Compiling C is Hard • General-purpose language, not designed for describing hardware • Features that make analysis hard: pointers, subroutines, linear code • C has no direct concept of time • C (and most procedural languages) is inherently sequential, and most people think sequentially • Opportunities primarily lie in data parallelism

  27. Variables • Handel-C has one basic type: integer • May be signed or unsigned • Can be any width, not limited to 8, 16, 32, etc. • Variables are mapped to hardware registers

    void main(void)
    {
        unsigned 6 a;   /* a 6-bit unsigned register */
        a = 45;         /* a = 101101 (MSB..LSB) = 0x2d */
    }

  28. DeepC Compiler • Considers loop-based computation to be memory-limited • Computation is partitioned across small memories to form tiles • Inter-tile communication is scheduled • RTL synthesis is performed on the resulting computation and communication hardware

  29. DeepC Compiler • Parallelizes compilation across multiple tiles • Orchestrates communication between tiles • Some dynamic (data dependent) routing possible.

  30. Control FSM • Result for each tile is a datapath, state machine, and memory block

  31. Striped Architecture [Figure: FPGA fabric of stripes with condition codes, a microprocessor interface, control unit, address/next-address control, and configuration cache] • Same basic approach: pipelined communication, incremental modification • Functions as a linear pipeline • Each stripe is homogeneous to simplify computation • Condition codes allow for some control flexibility

  32. PipeRench Internals • Only multi-bit functional units are used • Very limited interconnect resources to neighboring processing elements • Place and route is greatly simplified

  33. Convolutional Encoder [Figure: b = 1, V = 2 encoder built from two flip-flops and two XOR gates] • Accepts information bits as a continuous stream • Operates on the current b-bit input (b ranges from 1 to 6) and some number of immediately preceding b-bit inputs to produce V output bits, V > b
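
A C sketch of a rate-1/2, K = 3 encoder of this form (the generator polynomials 7 and 5 octal are an assumption chosen to match the example trellis on slide 37, where input 0 1 0 produces output 00 11 10):

    #include <stdio.h>

    static unsigned state = 0;                 /* two-bit encoder memory (K - 1 bits) */

    static void encode_bit(int u, int out[2])
    {
        int s_new = state & 1;                 /* most recently shifted-in bit */
        int s_old = (state >> 1) & 1;          /* older stored bit             */
        out[0] = u ^ s_new ^ s_old;            /* generator 111 (octal 7) */
        out[1] = u ^ s_old;                    /* generator 101 (octal 5) */
        state = ((state << 1) | u) & 0x3;      /* shift the new bit in    */
    }

    int main(void)
    {
        int bits[] = {0, 1, 0}, out[2];
        for (int i = 0; i < 3; i++) {
            encode_bit(bits[i], out);
            printf("%d%d ", out[0], out[1]);   /* prints: 00 11 10 */
        }
        printf("\n");
        return 0;
    }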

  34. Definitions • Constraint length (K): the number of successive b-bit groups of information bits involved in each encoding operation • Code rate: b/V • Typical values: K = 7, rate = 1/2 or 1/3

  35. The Viterbi Algorithm • Finds the bit sequence, among all possible transmitted bit sequences, that most closely resembles the received data • Maximum likelihood algorithm • Each bit received by the decoder is associated with a measure of correctness • Practical for short constraint length convolutional codes
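
A C sketch of one add-compare-select (ACS) step, the core of the decoder (the branch_metric and next_state helpers are assumed, following the K = 3 encoder above; only the ACS structure is the point here):

    #define NSTATES 4          /* 2^(K-1) states for K = 3 */
    #define INF     1000000

    /* Assumed helpers for the specific code being decoded. */
    extern int branch_metric(int state, int input, const int recv[2]);
    extern int next_state(int state, int input);

    static void viterbi_step(const int recv[2], const int pm_in[NSTATES],
                             int pm_out[NSTATES], int decision[NSTATES])
    {
        for (int s = 0; s < NSTATES; s++)
            pm_out[s] = INF;

        for (int s = 0; s < NSTATES; s++) {          /* every current state  */
            for (int u = 0; u < 2; u++) {            /* every possible input */
                int ns = next_state(s, u);
                int m  = pm_in[s] + branch_metric(s, u, recv);  /* add     */
                if (m < pm_out[ns]) {                           /* compare */
                    pm_out[ns]   = m;                           /* select  */
                    decision[ns] = s;    /* remember the surviving branch  */
                }
            }
        }
    }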

  36. State Diagram [Figure: four-state (00, 01, 10, 11) encoder state diagram with branches labeled k/ij] • State: the encoder memory • Branch: labeled k/ij, where i and j represent the output bits associated with input bit k

  37. Trellis Diagram [Figure: K = 3, rate-1/2 trellis over time steps T = 0 to T = 3, showing the accumulated metric at each state] • Total number of states = 2^(K-1) • Example: encoder input 0 1 0, encoder output 00 11 10, received sequence 00 11 11

  38. Adaptive Viterbi Algorithm • Motivation: the Viterbi algorithm requires extremely large memory and logic • Algorithm: fewer paths are retained, reducing memory and computation • Definitions: a path is a bit sequence; the path metric (cost) is the accumulated error metric of a path; a survivor is a path retained for the subsequent time step

  39. Adaptive Viterbi Algorithm: Criterion for Path Survival • A threshold T is introduced such that a path is retained if and only if its current path metric is less than d_m + T, where d_m is the minimum cost among all survivors of the previous time step • The total number of survivors per time step is limited to a critical number N_max selected by the user • Only the best N_max paths need to be retained at any time
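
A C sketch of this survivor pruning (the path representation and array sizes are illustrative; only the threshold test and the N_max limit come from the slide):

    #define N_MAX 16

    struct path { int metric; unsigned long long bits; };

    /* Keep a candidate only if its metric beats d_m + T, and never keep
     * more than N_MAX survivors (replacing the current worst if needed). */
    static int prune(const struct path *cand, int n_cand,
                     int d_m, int T, struct path *surv)
    {
        int n_surv = 0;
        for (int i = 0; i < n_cand; i++) {
            if (cand[i].metric >= d_m + T)
                continue;                      /* fails the threshold test */
            if (n_surv < N_MAX) {
                surv[n_surv++] = cand[i];
            } else {
                int worst = 0;
                for (int j = 1; j < N_MAX; j++)
                    if (surv[j].metric > surv[worst].metric) worst = j;
                if (cand[i].metric < surv[worst].metric)
                    surv[worst] = cand[i];
            }
        }
        return n_surv;
    }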

  40. Trellis Diagram for AVA

  41. Architecture (contd.) [Flowchart: sorting is eliminated by testing each path metric d_i against d_m + T, counting the surviving paths, updating memory when the count is below N_max, and tightening the threshold (T = T - 2) otherwise]

  42. Virtual Router • Independent routing policies for each virtual router • Key challenges: isolation, performance, flexibility, scalability

  43. Virtualization using FPGAs • A novel network virtualization substrate that uses an FPGA to implement high-performance virtual routers, introduces scalability through virtual routers in host software, and exploits reconfiguration to customize hardware virtual routers

  44. Partial Reconfiguration Use partial reconfiguration to independently configure virtual routers

  45. Full FPGA Reconfiguration • Two virtual routers (A, B) are initially in the FPGA • During reconfiguration, router A is migrated to software and the other is eliminated • After reconfiguration, two virtual routers (A, B') are again in the FPGA • Result: reduced throughput during reconfiguration

  46. Partial FPGA Reconfiguration • Router A remains in hardware and operates at full speed • 20x speedup in reconfiguration down time due to partial reconfiguration • Result: sustained throughput during reconfiguration
