
A Reconfigurable Computing Architecture Utilizing a Switch Fabric Network



Presentation Transcript


  1. A Reconfigurable Computing Architecture Utilizing a Switch Fabric Network P. Rudolph1, E. Bentley1, W. Turri1, K. Hill2 1Systran Federal Corporation, Dayton, OH 2AFRL/IFTA, WPAFB, OH E5

  2. Outline • Project Background • Architecture Overview • RapidIO Basics • Architecture Details • Benefits • Current Status • Future Plans

  3. Project Background • Work funded by AFRL Information Directorate • Two related SBIR (Small Business Innovation Research) contracts: • DERC (Development Environment for Reconfigurable Computing) – Develop tools to streamline the development of reconfigurable computing applications • RIFTSS (Reconfigurable Integrated Fault Tolerance for Spaceborne Systems) – Add fault tolerance functions to the tools developed under DERC

  4. Project Background • As part of the DERC/RIFTSS effort, we have developed a new general-purpose reconfigurable computing board. • The DERC board uses the switch fabric architecture that is the focus of this presentation • Other DERC/RIFTSS deliverables include: • Logic libraries to support development of user-defined logic • Software libraries (for use on the host PC) to support development of user-defined software

  5. Architecture Overview

  6. Architecture Overview • The DERC prototype consists of four nodes linked by RapidIO connections • Three Processing Nodes • One I/O Node • Double lines (red) represent RapidIO links • Prototype uses PCI form factor • Includes off-board connections to additional DERC boards or other RapidIO devices

  7. RapidIO • RapidIO is a high-speed, packet-switched interconnect technology, able to support a variety of switch fabric architectures • Primarily aimed at chip-to-chip and board-to-board connections • Original standard specified a parallel physical layer; a serial version of the standard has recently been defined • DERC prototype uses the parallel implementation

  8. RapidIO • A RapidIO system is composed of: • Switches – route data packets to the correct destination • Endpoints – send and receive data packets • Each endpoint has a unique Node ID used for packet routing
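The Node ID-based routing described above can be sketched in a few lines. This is an illustrative model, not the DERC implementation: the `Switch` class, its field names, and the port labels are all assumptions made for the example.

```python
# Illustrative sketch of switch-fabric routing: a switch forwards each
# packet by looking up its destination Node ID in a routing table that
# maps Node IDs to output ports.

class Switch:
    def __init__(self, routing_table):
        # routing_table: {destination_node_id: output_port}
        self.routing_table = dict(routing_table)

    def route(self, packet):
        """Return the output port for a packet's destination Node ID."""
        return self.routing_table[packet["dest_id"]]

sw = Switch({0: "port0", 1: "port1", 2: "port2"})
print(sw.route({"dest_id": 2, "payload": b"\x01\x02"}))  # -> port2
```

A real RapidIO switch operates only on the transport-layer header; the point here is simply that endpoints are addressed by Node ID and switches hold the ID-to-port mapping.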

  9. RapidIO • The RapidIO standard supports a variety of transaction types, and uses a request/response format • Examples: • Read Transaction: → Read request (contains address) ← Read response (contains data payload) • Write Transaction: → Write request (contains address and data) ← { optional write response (write confirmation) }
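The request/response pattern on this slide can be modeled as follows. This is a hedged sketch of the pattern only; the dictionary field names and message types are illustrative and do not reflect the actual RapidIO packet format.

```python
# Sketch of the request/response transaction pattern: a read request
# carries an address and the response returns the payload; a write
# request carries address and data, with an optional confirmation.

memory = {0x1000: b"\xde\xad\xbe\xef"}  # target endpoint's memory (example data)

def read_request(addr):
    return {"type": "READ_REQ", "addr": addr}

def write_request(addr, data):
    return {"type": "WRITE_REQ", "addr": addr, "data": data}

def handle(request):
    """Target endpoint: turn a request into the matching response."""
    if request["type"] == "READ_REQ":
        return {"type": "READ_RESP", "data": memory[request["addr"]]}
    if request["type"] == "WRITE_REQ":
        memory[request["addr"]] = request["data"]
        return {"type": "WRITE_RESP"}  # optional write confirmation

resp = handle(read_request(0x1000))
print(resp["data"])  # -> b'\xde\xad\xbe\xef'
```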

  10. RapidIO • The RapidIO standard defines 8- and 16-bit versions of the parallel interface • RapidIO uses double data rate transfers, resulting in the following maximum data rates: • 250 MHz (500 MB/s @ 8 bits) • 375 MHz (750 MB/s @ 8 bits) • 500 MHz (1000 MB/s @ 8 bits) • 750 MHz (1500 MB/s @ 8 bits) • 1000 MHz (2000 MB/s @ 8 bits) • For the DERC prototype, we have implemented the RapidIO interfaces using a physical layer core from Xilinx, which implements an 8-bit parallel interface operating at 250 MHz.
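The rates in the list above follow directly from double data rate signaling: two transfers per clock cycle, each moving the full interface width. A quick sanity check of that arithmetic (the function name is ours, for illustration):

```python
# Peak data rate of a DDR parallel interface: two transfers per clock
# cycle, each carrying width_bits of data.

def rapidio_parallel_rate_mb_s(clock_mhz, width_bits):
    """Peak data rate in MB/s for a double-data-rate parallel link."""
    transfers_per_s = clock_mhz * 1e6 * 2   # DDR: two transfers per cycle
    bytes_per_transfer = width_bits / 8
    return transfers_per_s * bytes_per_transfer / 1e6

print(rapidio_parallel_rate_mb_s(250, 8))    # -> 500.0 (the DERC prototype link)
print(rapidio_parallel_rate_mb_s(1000, 8))   # -> 2000.0
print(rapidio_parallel_rate_mb_s(250, 16))   # -> 1000.0 (16-bit variant)
```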

  11. Architecture

  12. Architecture • Processing Node – supports user-defined, application-specific logic • If a larger form factor were used, additional processing nodes could be added • I/O Node – bridge between the on-board RapidIO network and any non-RapidIO interface • At least one I/O Node is needed for the interface to the host • If other data ports are used, additional I/O Nodes may be added

  13. Processing Node

  14. Processing Node • Each Processing Node consists of: • Switching Element (Xilinx Virtex-II FPGA) • Supports all RapidIO links to the node • Controls configuration of Processing Element • Processing Element (Xilinx Virtex-II FPGA) • Supports user logic • Dynamically configurable from host • Prototype PCB footprint can support Virtex-II FPGAs of various sizes, from XC2V1000 to XC2V10000 • Memory Element (128 MB SDRAM) • Connected to: • Switching Element – for access by other nodes • Processing Element – for direct access by local user logic

  15. I/O Node

  16. I/O Node • The host I/O node consists of: • Switching Element (Xilinx Virtex-II FPGA) • Supports all RapidIO links to the node • I/O Element • Supports PCI interface to host PC • Controls configuration of Switching Elements • Startup Logic • CPLD used to configure I/O Element from on-board flash memory • Memory Element (optional) • (Not included on PCI prototype)

  17. Switching Element

  18. Switching Element • The Switching Element is currently implemented using an XC2V3000 FPGA • The Switching Element includes: • Three instances of the RapidIO physical layer (small squares) • RapidIO transport layer and routing tables • RapidIO Endpoint (including logical layer) • Interface to Processing Element or I/O Element • Internal control logic

  19. Benefits • The DERC architecture offers the following benefits: • Scalability • Performance • Support for multiple user applications • Fault Tolerance

  20. Scalability • Multiple boards can be linked together to create a large-scale reconfigurable computing resource

  21. Scalability • Boards could be linked in a mesh topology…

  22. Scalability • …or in a tree topology

  23. Performance • We have simulated the system performance with two linked boards, using this simple, pipelined test application. (The data path is the heavy dotted line.)

  24. Performance • To measure performance of the switch fabric network, our simulation makes the following assumptions: • Data transfer speed is not limited by the user logic in each processing node • Data transfer speed is not limited by the interface to the host • Using the smallest (i.e., least efficient) RapidIO data packet, transfer of actual user data (i.e., not counting packet headers) was measured at 119 MB/s • Use of larger data packets should increase this data rate • If the data flow is not a simple pipeline, however, latency issues will result in slower throughput.
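Why larger packets should raise the 119 MB/s figure can be seen with a back-of-envelope overhead model. The overhead value below is an assumed number chosen for illustration, not a figure from the simulation or the RapidIO specification; only the shape of the relationship is the point.

```python
# Toy efficiency model: per-packet header/handshake bytes are fixed,
# so a larger payload amortizes the overhead and raises the effective
# user-data rate toward the raw link rate.

def effective_rate(raw_mb_s, payload_bytes, overhead_bytes):
    """User-data throughput given fixed per-packet overhead (illustrative)."""
    return raw_mb_s * payload_bytes / (payload_bytes + overhead_bytes)

RAW = 500        # MB/s: prototype's 8-bit, 250 MHz DDR link
OVERHEAD = 24    # bytes/packet: assumed for illustration only

small = effective_rate(RAW, 8, OVERHEAD)    # tiny payload: mostly overhead
large = effective_rate(RAW, 256, OVERHEAD)  # larger payload: overhead amortized
print(small, large)  # the larger packet yields much higher user throughput
```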

  25. Multiple Applications • If data paths (dotted lines) are chosen to avoid bottlenecks, multiple applications can operate simultaneously with minimal impact on each other's performance.

  26. Fault Tolerance • Since all processing nodes are identical, and the switch fabric network provides redundant data paths, this architecture can support fault tolerant applications. • In the following example, when node 3 of application A fails, an unused node can be configured as a duplicate of the failed node, and the switch fabric routing tables updated to send data through this replacement node.
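The recovery step described above reduces to a routing-table rewrite: traffic addressed to the failed node's ID is redirected to the port of the freshly configured spare. A minimal sketch, with table layout and port names assumed for illustration:

```python
# Sketch of fault recovery via routing-table update: after a spare node
# is configured as a duplicate of the failed node, the switch entry for
# the failed node's ID is repointed at the spare's port.

routing_table = {1: "port_a", 2: "port_b", 3: "port_c"}  # node_id -> output port

def replace_failed_node(table, failed_id, replacement_port):
    """Return an updated table sending the failed ID's traffic to the spare."""
    patched = dict(table)                  # leave the original table untouched
    patched[failed_id] = replacement_port  # node 3's traffic now goes to the spare
    return patched

patched = replace_failed_node(routing_table, 3, "port_d")
print(patched[3])  # -> port_d
```

Because only the switches' tables change, the other nodes of application A keep sending to Node ID 3 and never see the substitution.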

  27. Fault Tolerance Before

  28. Fault Tolerance After

  29. Current Status • PCI prototype board being fabricated • Development of logic and software libraries in progress • Performing additional simulations using simple test applications • Raytheon Corporation and U. of Cincinnati are developing two separate demonstration applications utilizing this architecture

  30. Future Plans • Develop a VME implementation of the architecture, with additional processing and I/O nodes • Investigate use of serial RapidIO for off-board interconnects • Update design to use Virtex-II Pro FPGAs

  31. Contact Info Paul Rudolph Systran Federal Corp. 937-429-9008 x209 prudolph@systranfederal.com
