ECpE 583 Reconfigurable Computing Lecture 25: Tue 12/2/2008 Design Patterns & Compute Models: Part I. Instructor: Dr. Phillip Jones (phjones@iastate.edu) Reconfigurable Computing Laboratory Iowa State University Ames, Iowa, USA http://www.ece.iastate.edu


Presentation Transcript


  1. ECpE 583 Reconfigurable Computing Lecture 25: Tue 12/2/2008 Design Patterns & Compute Models: Part I Instructor: Dr. Phillip Jones (phjones@iastate.edu) Reconfigurable Computing Laboratory Iowa State University Ames, Iowa, USA http://www.ece.iastate.edu http://class.ece.iastate.edu/cpre583 (coming soon) http://www.arl.wustl.edu/~phjones/cpre583 (temporary)

  2. Class Announcements • Updated schedule: • Design Patterns & Compute Models (12/2) • Parallelism (12/4) • Project Presentations (12/9) • Project Presentations (12/11) • Final (Monday 12/15, 12-2pm) • Project concerns • Group presentation schedule announced next class

  3. Outline • Design patterns • Why are they useful? • Examples • Compute models • Why are they useful? • Examples

  4. References • Reconfigurable Computing (2008) [1] • Chapter 5: Compute Models and System Architectures • Scott Hauck, Andre DeHon • Design Patterns for Reconfigurable Computing [2] • Andre DeHon (FCCM 2004) • Type Architectures, Shared Memory, and the Corollary of Modest Potential [3] • Lawrence Snyder: Annual Review of Computer Science (1986)

  5. Outline • Design patterns • Why are they useful? • Examples • Compute models • Why are they useful? • Examples

  6. Design Patterns • Design patterns are solutions to recurring problems.

  7. Reconfigurable Hardware Design • “Building good reconfigurable designs requires an appreciation of the different costs and opportunities inherent in reconfigurable architectures” [2] • “How do we teach programmers and designers to design good reconfigurable applications and systems?” [2] • Traditional approach: • Read lots of papers for different applications • Over time figure out ad-hoc tricks • Better approach?: • Use design patterns to provide a more systematic way of learning how to design • It has been shown in other realms that studying patterns is useful • Object oriented software [93] • Computer Architecture [79]

  8. Common Language • Provides a means to organize and structure the solution to a problem • Provides a common ground from which to discuss a given design problem • Enables sharing solutions in a consistent manner (reuse)

  9. Describing a Design Pattern [2] • 10 attributes suggested by Gamma (Design Patterns, 1995) • Name: Standard name • Intent: What problem is being addressed, and how? • Motivation: Why use this pattern? • Applicability: When can this pattern be used? • Participants: What components make up this pattern? • Collaborations: How do the components interact? • Consequences: Trade-offs • Implementation: How to implement • Known Uses: Real examples of where this pattern has been used • Related Patterns: Similar patterns, patterns that can be used in conjunction with this pattern, and when you would choose a similar pattern instead of this one

  10. Example Design Pattern • Coarse-grain Time-multiplexing • Template Specialization

  11. Coarse-grain Time-Multiplexing • [Figure: a design with modules M1, M2, and M3 and inputs A and B, split across two device configurations. Configuration 1 implements M1 and M2 and writes an intermediate result to a Temp buffer; Configuration 2 implements M3, which consumes the Temp result.]

  12. Coarse-grain Time-Multiplexing • Name: Coarse-grained Time-Multiplexing • Intent: Enable a design that is too large to fit on a chip all at once to run as multiple subcomponents • Motivation: A method to share limited, fixed resources over time to implement a design that is too large to fit as a whole.

  13. Coarse-grain Time-Multiplexing • Applicability: • Configuration can be done on a large time scale • No feedback loops in the computation, or the feedback loop only spans the current configuration, or the feedback loop is very slow • Participants: • Computational graph • Control algorithm • Collaborations: The control algorithm manages when sub-graphs are loaded onto the device

  14. Coarse-grain Time-Multiplexing • Consequences: Platforms often take millions of cycles to reconfigure • Need an application that will run for 10s of millions of cycles before needing to reconfigure • May need large buffers to store data during a reconfiguration (see the sizing sketch below) • Known Uses: • Video processing pipeline [Villasenor] • “Video Communications using Rapidly Reconfigurable Hardware”, Transactions on Circuits and Systems for Video Technology 1995 • Automatic Target Recognition [Villasenor] • “Configurable Computer Solutions for Automatic Target Recognition”, FCCM 1996
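A rough back-of-the-envelope sizing of that buffer, with all numbers assumed for illustration (they are not from the slides): if the input stream keeps arriving while the device reconfigures, the buffer must absorb roughly the input rate times the reconfiguration time.

```python
# Hypothetical buffer sizing during reconfiguration (all numbers assumed).
input_rate_bytes_per_s = 100e6   # assume a 100 MB/s input stream
reconfig_cycles = 10e6           # "millions of cycles" to reconfigure
clock_hz = 100e6                 # assume a 100 MHz clock

reconfig_time_s = reconfig_cycles / clock_hz              # 0.1 s of downtime
buffer_bytes = input_rate_bytes_per_s * reconfig_time_s   # data arriving meanwhile
print(f"buffer needed: {buffer_bytes / 1e6:.1f} MB")      # ~10 MB in this example
```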

  15. Coarse-grain Time-Multiplexing • Implementation: • Break the design into multiple sub-graphs that can be configured onto the platform in sequence • Design a controller to orchestrate the configuration sequencing (a controller sketch follows below) • Take steps to minimize configuration time • Related patterns: • Streaming Data • Queues with Back-pressure
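A minimal host-side sketch of these implementation steps. The load_bitstream() call and the sub-graph bitstream names are hypothetical stand-ins (real platforms expose vendor-specific configuration APIs); the point is only the sequencing and buffering, not any particular interface.

```python
# Sketch of a coarse-grain time-multiplexing controller (hypothetical API).
# Each configuration is a pre-compiled sub-graph of the full design; the
# controller loads them in sequence and carries intermediate ("Temp")
# results between stages in a host-side buffer.

def load_bitstream(path):
    """Placeholder for the platform's (slow) reconfiguration call."""
    print(f"reconfiguring device with {path} ...")

def run_stage(inputs):
    """Placeholder for streaming 'inputs' through the loaded sub-graph."""
    return list(inputs)  # stand-in for real hardware output

CONFIG_SEQUENCE = ["subgraph_m1_m2.bit", "subgraph_m3.bit"]  # hypothetical files

def run_design(primary_inputs):
    data = primary_inputs
    for bitstream in CONFIG_SEQUENCE:
        load_bitstream(bitstream)  # millions of cycles: amortize over a long run
        data = run_stage(data)     # buffer holds Temp results between configurations
    return data

if __name__ == "__main__":
    print(run_design(list(range(8))))
```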

  16. Coarse-grain Time-Multiplexing • [Figure repeated from slide 11: M1 and M2 run in Configuration 1 and store a Temp result; M3 runs in Configuration 2 and consumes it.]

  17. Template Specialization • [Figure: four empty template LUTs addressed by inputs A(1), A(0) and driving outputs C(3)..C(0). Filling the LUTs specializes the template: contents 0, 3, 6, 9 implement multiply-by-3; contents 0, 5, 10, 15 implement multiply-by-5.]

  18. Template Specialization • Name: Template Specialization • Intent: Reduce the size or time needed for a computation. (Note: the primary difference from the Template pattern is that this pattern aims to minimize run-time reconfiguration.) • Motivation: Use early-bound data and slowly changing data to reduce circuit size and execution time.

  19. Template Specialization • Applicability: When circuit specialization can be adapted quickly • Example: Can treat LUTs as small memories that can be written. No interconnect modifications • Participants: • Template cell: Contains specialization configuration • Template filler: Manages what and how a configuration is written to a Template cell • Collaborations: Template filler manages Template cell

  20. Template Specialization • Consequences: Cannot optimize as much as when a circuit is fully specialized for a given instance. Overhead is needed to allow the template to implement several specializations. • Known Uses: • Multiply-by-Constant • String Matching • Implementation: Multiply-by-Constant (sketched below) • Use a LUT as a memory to store the answer • Use a controller to update this memory when a different constant should be used.
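A minimal sketch of that multiply-by-constant implementation, modeling the LUT as a small writable table as the slide suggests. The names fill_template and MultiplyByConstant are hypothetical stand-ins for the "template filler" and "template cell" participants; the contents reproduce the 2-bit example from slide 17 (0, 3, 6, 9 for multiply-by-3 and 0, 5, 10, 15 for multiply-by-5).

```python
# Sketch of template specialization for multiply-by-constant.
# The "template" is a LUT addressed by the 2-bit input A; respecializing it
# only rewrites the LUT contents, with no routing changes.

def fill_template(constant, input_bits=2):
    """Template filler: compute LUT contents for multiply-by-constant."""
    return [a * constant for a in range(2 ** input_bits)]

class MultiplyByConstant:
    """Template cell modeled as one small writable memory (in hardware this
    would be one LUT per output bit; collapsed into one table for clarity)."""
    def __init__(self, constant):
        self.lut = fill_template(constant)

    def respecialize(self, constant):
        self.lut = fill_template(constant)  # cheap: rewrite LUT contents only

    def __call__(self, a):
        return self.lut[a]

mul = MultiplyByConstant(3)
print([mul(a) for a in range(4)])  # [0, 3, 6, 9]
mul.respecialize(5)
print([mul(a) for a in range(4)])  # [0, 5, 10, 15]
```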

  21. Template Specialization • Related patterns: • CONSTRUCTOR • EXCEPTION • TEMPLATE

  22. Template Specialization • [Figure repeated from slide 17: empty template LUTs versus LUTs filled to implement multiply-by-3 (0, 3, 6, 9) and multiply-by-5 (0, 5, 10, 15).]

  23. Next Lecture • Compute Models & Parallelism

  24. Questions/Comments/Concerns • Write down: • The main point of the lecture • One thing that's still not quite clear, OR • If everything is clear, give an example of how to apply something from the lecture
