
Introduction to CHREC

Alan D. George, Ph.D.

Professor of ECE, Univ. of Florida

Director, NSF Center for High-Performance Reconfigurable Computing (CHREC)



What is CHREC?

  • NSF Center for High-Performance Reconfigurable Computing

    • Pronounced “shreck”

    • Under development since Q4 of 2004

      • Lead institution grant by NSF to Florida awarded on 09/05/06

      • Partner institution grant by NSF to GWU awarded on 12/04/06

      • Partner institution grants anticipated for BYU and VT in 2007

    • Kickoff workshop held in Dec’06; operations began in Jan’07

  • Under auspices of I/UCRC Program at NSF

    • Industry/University Cooperative Research Center

      • CHREC supported by CISE & Engineering Directorates @ NSF

    • CHREC is both a Center and a Research Consortium

      • University groups form research base (faculty, students)

      • Industry and government organizations are research partners, sponsors, collaborators, and technology-transfer recipients



What is a Reconfigurable Computer?

  • System capable of changing hardware structure to address application demands

    • Static or dynamic reconfiguration

    • Reconfigurable computing, configurable computing, custom computing, adaptive computing, etc.

    • Typically a mix of conventional and reconfigurable processing technologies (control-flow, data-flow); a host-side sketch follows below

  • Enabling technology?

    • Field-programmable hardware (e.g. FPGAs)

  • Applications?

    • Broad range – satellites to supercomputers!

    • Faster, smaller, cheaper, less power & heat, more versatile
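
To make the control-flow / data-flow split above concrete, here is a minimal host-side sketch. The fpga_* functions are hypothetical stand-ins for a vendor board-support API (an assumption for illustration, not a real library); the stubs simulate the hardware kernel in software so the sketch runs as-is.

```c
#include <stdio.h>

#define N 8

/* Hypothetical board-support layer, stubbed in software so the
 * sketch is self-contained; a real system would use vendor calls. */
static float staged[N];

static int fpga_load_bitstream(const char *path)    /* (re)configure fabric */
{
    printf("configuring fabric with %s\n", path);
    return 0;                                       /* 0 = success */
}

static void fpga_write(const float *buf, int n)     /* stage input data */
{
    for (int i = 0; i < n; i++) staged[i] = buf[i];
}

static void fpga_run_and_read(float *buf, int n)    /* run kernel, fetch results */
{
    for (int i = 0; i < n; i++) buf[i] = 2.0f * staged[i];  /* stand-in kernel */
}

int main(void)
{
    float in[N], out[N];
    for (int i = 0; i < N; i++) in[i] = (float)i;

    /* Static reconfiguration: one bitstream chosen at startup. A
     * dynamically reconfigurable system could swap (partial)
     * bitstreams at run time as the mission or workload changes. */
    if (fpga_load_bitstream("scale_by_2.bit") != 0)
        return 1;

    fpga_write(in, N);          /* control flow stays on the CPU...     */
    fpga_run_and_read(out, N);  /* ...data flow runs in the FPGA fabric */

    printf("out[3] = %.1f\n", out[3]);  /* prints 6.0 */
    return 0;
}
```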



When and where do we need RC?

  • When do we need RC?

    • When performance & versatility are critical

      • Hardware gates targeted to application-specific requirements

      • System mission or applications change over time

    • When the environment is extremely restrictive

      • Limited power, weight, area, volume, etc.

      • Limited communications bandwidth for work offload

    • When autonomy and adaptivity are paramount

  • Where do we need RC?

    • In conventional HPC systems & clusters where applications are amenable

      • Field-programmable hardware fits many demands (but certainly not all)

      • High degree of parallelism (DOP), finer grain, direct dataflow mapping, bit manipulation, selectable precision, direct control over hardware (e.g. performance vs. power); see the fixed-point sketch below

    • In space, air, sea, undersea, and ground systems

      • Embedded & deployable systems can reap many advantages w/ RC
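
As a concrete instance of the "bit manipulation, selectable precision" point above, here is a small runnable C sketch (illustrative only, not CHREC code): a fixed-point dot product whose fractional width is a compile-time knob. On an FPGA, the multiply-accumulate datapath would be built at exactly this bit width and unrolled into parallel lanes, which is where RC wins over fixed-width CPU arithmetic.

```c
#include <stdint.h>
#include <stdio.h>

#define FRAC 8                                /* selectable fractional bits */
#define TO_FIX(x) ((int16_t)((x) * (1 << FRAC)))
#define TO_FLT(x) ((double)(x) / (1 << FRAC))

static int32_t dot_fixed(const int16_t *a, const int16_t *b, int n)
{
    int64_t acc = 0;                          /* wide accumulator, as fabric would use */
    for (int i = 0; i < n; i++)
        acc += (int32_t)a[i] * b[i];          /* 16x16 -> 32-bit products */
    return (int32_t)(acc >> FRAC);            /* renormalize to Q format */
}

int main(void)
{
    int16_t a[4] = { TO_FIX(1.5), TO_FIX(-0.25), TO_FIX(2.0), TO_FIX(0.5) };
    int16_t b[4] = { TO_FIX(0.5), TO_FIX(4.0),  TO_FIX(1.0), TO_FIX(-1.0) };
    printf("dot = %f (exact: 1.25)\n", TO_FLT(dot_fixed(a, b, 4)));
    return 0;
}
```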



Example: NASA/Honeywell/UF Research

Dependable Multiprocessor (DM)

  • 1st Space Supercomputer

    • In-situ sensor processing

    • Autonomous control

    • Speedups of 100× and more

    • First fault-tolerant, parallel, reconfigurable computer for space (NMP ST-8 orbit in 2009)

  • Infrastructure for fault-tolerant high-speed computing in space

    • Robust system services

    • Fault-tolerant MPI services (a generic sketch follows this slide)

    • FPGA services

    • Application services

    • Standard design framework

    • Providing a transparent API to various resources for earth & space scientists

COTS! First mission: ST-8, 2009 launch. (Poster on this project in Friday session.)
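
The DM middleware itself is not shown here, so the following is only a generic sketch of the idea behind "fault-tolerant MPI services" (an assumption for illustration, not the DM implementation, whose services are meant to be transparent to the application): standard MPI lets a program opt out of abort-on-error so a recovery layer can react to failures.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Default handler is MPI_ERRORS_ARE_FATAL; request return codes
     * instead, so a recovery layer can retry, migrate, or roll back. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int data = rank, sum = 0;
    int rc = MPI_Allreduce(&data, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        /* Where a fault-tolerant layer would re-plan: drop the failed
         * node, respawn the task, or restart from a checkpoint. */
        fprintf(stderr, "rank %d: collective failed (rc=%d)\n", rank, rc);
    } else if (rank == 0) {
        printf("sum of ranks = %d\n", sum);
    }

    MPI_Finalize();
    return 0;
}
```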



Artist’s Depiction of ST-8 Spacecraft

Dependable Multiprocessor

ST-8 Orbit: sun-synchronous; 320 km × 1300 km @ 98.5° inclination



Objectives for CHREC

  • Establish first multidisciplinary NSF research center in reconfigurable high-performance computing

    • Basis for long-term partnership and collaboration amongst industry, academe, and government; a research consortium

    • RC: from supercomputing to high-performance embedded systems

  • Directly support research needs of our Center members

    • Highly cost-effective manner with pooled, leveraged resources and maximized synergy

  • Enhance educational experience for a diverse set of high-quality graduate and undergraduate students

    • Ideal recruits after graduation for our Center members

  • Advance knowledge and technologies in this field

    • Commercial relevance ensured with rapid technology transfer



CHREC Faculty

  • University of Florida

    • Dr. Alan D. George, Professor of ECE – Center Director

    • Dr. Herman Lam, Associate Professor of ECE

    • Dr. K. Clint Slatton, Assistant Professor of ECE and CCE

    • 1 or 2 new tenure-track faculty members in RC likely to be hired in 2007

  • George Washington University

    • Dr. Tarek El-Ghazawi, Professor of ECE – GWU Site Director

    • Dr. Ivan Gonzalez, Research Scientist in ECE

    • Dr. Mohamed Taher, Research Scientist in ECE

  • Brigham Young University

    • Dr. Brent E. Nelson, Professor of ECE – BYU Site Director

    • Dr. Michael J. Wirthlin, Associate Professor of ECE

    • Dr. Brad L. Hutchings, Professor of ECE

  • Virginia Tech

    • Dr. Shawn A. Bohner, Associate Professor of CS – VT Site Director

    • Dr. Peter Athanas, Professor of ECE

    • Dr. Wu-Chun Feng, Associate Professor of CS and ECE

    • Dr. Francis K.H. Quek, Professor of CS


20 Founding Members in CHREC

  • Altera

  • Air Force Research Lab

  • Arctic Region SC

  • Honeywell

  • HP

  • IBM Research

  • Intel

  • NASA Goddard

  • NASA Langley

  • NASA Marshall

  • National Recon Office

  • National Security Agency

  • NCI/SAIC

  • Oak Ridge National Lab

  • Office of Naval Research

  • Raytheon

  • Rockwell Collins

  • Sandia National Labs

  • SGI

  • Smiths Aerospace

BLUE = Member with UF, RED = Member with GW, GREEN = Member with both



Benefits of Center Membership

  • Research and collaboration

    • Selection of project topics that your membership resources support

    • Direct influence over cutting-edge research of prime interest

    • Review of results on semiannual formal basis & continual informal basis

    • Rapid transfer of results and IP from projects @ ALL sites of CHREC

  • Leveraging and synergy

    • Highly leveraged and synergistic pool

    • Cost-effective R&D in today’s budget-tight environment

  • Multi-member collaboration

    • Many benefits between members

    • e.g. new industrial partnerships and teaming opportunities

  • Personnel

    • Access to strong cadre of faculty, students, post-docs

  • Recruitment

    • Strong pool of students with experience on industry & govt. R&D issues

  • Facilities

    • Access to university research labs with world-class facilities



Y1 Projects at UF Site of CHREC

F1: Simulative Performance Prediction

  • Before you invest major $$$ in new systems, software design, & hardware design, it is better to first predict the potential benefits (a first-order sketch follows below)

F2: Performance Analysis & Profiling

  • Without new concepts and powerful tools to locate and resolve performance bottlenecks, maximum speedup is extremely elusive

F3: Application Case Studies & HLLs

  • RC for HPC or HPEC is relatively new & immature; we need to build and share new knowledge via apps & tools from case studies

F4: Partial RTR Architecture for Qualified HPEC Systems

  • Many potential advantages to be gained in performance, adaptability, power, safety, fault tolerance, security, etc.

F5: FPLD Device Architectures & Tradeoffs

  • How to understand and quantify the performance, power, and other advantages of FPLDs vs. competing processing technologies

[Figure: Y1 project areas F1-F5 (Performance Prediction, Performance Analysis, Application Case Studies & HLLs, Systems Architecture, Device Architecture), spanning performance, adaptability, fault tolerance, scalability, power, and density]
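
A first-order sketch of what F1's "predict before you invest" means in practice. The model below is plain Amdahl's law plus a transfer-overhead term; the parameter values are invented examples, not CHREC models or results.

```c
#include <stdio.h>

/* Whole-application speedup when a fraction f of runtime is moved to
 * hardware with kernel speedup s, and CPU<->FPGA data transfer adds
 * overhead equal to a fraction v of the original runtime. */
static double predicted_speedup(double f, double s, double v)
{
    return 1.0 / ((1.0 - f) + f / s + v);
}

int main(void)
{
    /* 90% of runtime in the kernel, 50x kernel speedup, 5% transfer
     * overhead: the overhead, not the kernel, limits the result. */
    printf("predicted speedup: %.2fx\n", predicted_speedup(0.90, 50.0, 0.05));
    return 0;
}
```

This prints about 5.95x, far below the 50x kernel speedup: exactly the kind of insight F1 aims to deliver before hardware money is spent.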



Conclusions

  • New NSF Center in reconfigurable computing

    • Overarching theme

      • CHREC forms basis for research consortium with partners from industry, academia, and government

      • Focus upon basic & applied research in RC for HPC and HPEC with major educational component

    • Technical emphasis at outset primarily towards aerospace & defense

      • Building blocks, systems & services, design automation, applications

      • Opportunities for expansion and synergy in many other areas of RC application

    • Focused now on Y1 success at official sites and support for new sites

      • UF and GW now active on Y1 projects, began ops in Jan’07

      • BYU and VT are working to become CHREC sites and begin ops by Jan’08

    • We invite government & industry groups to join CHREC consortium

      • Leverage and build upon common interests and synergy in RC

      • Pooled resources & matched resources: maximal ROI, modest membership fee



Thanks for Listening!

  • More information

    • Web: www.chrec.org

    • Email: [email protected]

  • Questions?



APPENDIX



Bridging the Gaps

  • Vertical Gap

    • Semantic gap between design levels

      • Application design by scientists & programmers

      • Hardware design by electrical & computer engineers

    • We must bridge this gap to achieve full potential

      • Better programming languages to express parallelism of multiple types and at multiple levels (see the sketch at the end of this slide)

      • Better design tools, compilers, libraries, run-time systems

      • Evolutionary and revolutionary steps

    • Emphasis: integrated SW/HW design for multilevel parallelism

  • Horizontal Gap

    • Architectures crossing the processing paradigms

      • Cohesive, optimal collage of CPUs, FPGAs, interconnects, memory hierarchies, communications, storage, et al.

      • Must we assume a simple retrofit to conventional architecture?
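
A small sketch of the vertical gap in code terms (an illustration, not a CHREC tool): the application scientist writes the plain loop nest below; bridging the gap means languages and compilers that uncover both its coarse parallelism (hinted here with OpenMP across rows) and the fine-grained dataflow of the inner loop, which an RC compiler could map onto FPGA multiply-add pipelines instead of hand-written HDL.

```c
#include <stdio.h>

#define N 1000

int main(void)
{
    static double a[N][N], x[N], y[N];

    for (int i = 0; i < N; i++) {             /* set up a test problem */
        x[i] = 1.0;
        for (int j = 0; j < N; j++)
            a[i][j] = (i == j) ? 2.0 : 0.0;
    }

    /* Coarse-grained, control-flow parallelism across rows... */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        double acc = 0.0;
        /* ...fine-grained, data-flow parallelism within each row. */
        for (int j = 0; j < N; j++)
            acc += a[i][j] * x[j];
        y[i] = acc;
    }

    printf("y[0] = %.1f\n", y[0]);            /* 2.0 for this diagonal A */
    return 0;
}
```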



Research Challenge Stack

[Figure: research challenge stack, top to bottom: Performance Prediction, Performance Analysis, Numerical Analysis, Languages & Compilers, System Services, Portable Libraries, System Architectures, Device Architectures]

  • Performance prediction

    • When and where to exploit RC?

  • Performance analysis

    • How to optimize complex systems and apps?

  • Numerical analysis

    • Must we throw double-precision (DP) floats at every problem?

  • Programming languages & compilers

    • How to express & achieve multilevel parallelism?

  • System services

    • How to support variety of run-time needs?

  • Portable core libraries

    • Where cometh building blocks?

  • System architectures

    • How to scalably feed hungry FPGAs?

  • Device architectures

    • How will/must FPLD roadmaps track for HPC or HPEC?

Performance, Adaptability, Fault Tolerance, Scalability, Power, Density



Center Management Structure

[Organization chart: Center Director A. George (UF); Site Directors T. El-Ghazawi (GWU), B. Nelson (BYU), S. Bohner (VT)]



Membership Fee Structure

  • NSF provides base funds for CHREC via I/UCRC grants

    • Base grant to each participating university site to defray admin costs

  • Industry and govt. partners support CHREC through memberships

    • NOTE: Each membership is associated with ONE university

    • Partners may hold multiple memberships (and thus support multiple students) at one or multiple participating universities (e.g. NSA)

  • Full Membership: fee is $35K in cash per year

    • Why $35K unit? Approx. cost of graduate student for one year

      • Stipend, tuition, and related expenses (IDC is waived; with the ~1.5× indirect-cost multiplier the same student would otherwise cost >$50K)

    • Fee represents tiny fraction of size & benefits of Center

      • CHREC budget projected to exceed $2.5M/yr by 2008 (UF+GW+BYU+VT)

      • Equivalent to >$10M if Center founded in govt. or industry

  • Each university invests for various costs of its CHREC operations

    • 25% matching of industry membership contributions

    • Indirect Costs waived on membership fees (~1.5× multiplier)

    • Matching on administrative personnel costs

More bang for your buck!
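
A rough worked reading of the leverage figures on this slide; the exact accounting is an assumption, but every number comes from the bullets above:

```
 $35K  full membership fee (~ one graduate student per year)
x 1.5  indirect-cost multiplier, waived  ->  ~$52.5K equivalent value
                                             (the ">$50K otherwise" above)
+ 25%  university match on membership    ->  + $8.75K
-----------------------------------------------------------------------
       ~$61K of research effort per $35K membership, before pooling
       with NSF base funds and the projects of the other members
```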



General Policies for CHREC

  • We follow the I/UCRC Standard Membership Agreement

    • As defined by NSF

  • CHREC publication-delay policy

    • Results from funded projects shared with members 30 days prior to publication

    • Any industry member may delay publication for up to 90 days for IP issues

  • Industrial Advisory Board (IAB)

    • Each full member in CHREC holds a seat on IAB

    • Board members elect IAB chair and vice-chair on annual basis; in Y1:

      • Chair: Alan Hunsberger (NSA), Vice-Chair: Nick Papageorgis (Smiths Aerospace)

    • Number of votes commensurate with number of memberships

      • On Center policies: 1 vote per full membership

      • On Center projects: 35 votes per full membership (flexibility; may support multiple projects)

  • Focus in Y1 on full memberships, but other options possible in future

    • Examples

      • Supplemental membership for large equipment donation (subject to approval)

      • Associate membership for SBI (subject to approval) with reduced rights and fees

    • All membership options require review and approval by IAB

