Experimental Performance Evaluation For Reconfigurable Computer Systems: The GRAM Benchmarks

Presentation Transcript

  1. Experimental Performance Evaluation For Reconfigurable Computer Systems: The GRAM Benchmarks • Chitalwala, E., El-Ghazawi, T., Gaj, K. • The George Washington University, George Mason University • MAPLD 2004, Washington, DC

  2. Abbreviations • BRAM – Block RAM • GRAM – Generalized Reconfigurable Architecture Model • LM – Local Memory • Max – Maximum • MAP – Multi Adaptive Processor • MPM – Microprocessor Memory • OCM – On-Chip Memory • PE – Processing Element • Trans Perms – Transfer of Permissions

  3. Outline • Problem Statement • GRAM Description • Assumptions and Methodology • Testbed Description: SRC-6E • Results • Conclusion and Future Direction

  4. Problem Statement • Develop a standardized model of Reconfigurable Architectures. • Define a set of synthetic benchmarks based on this model to analyze performance and discover bottlenecks. • Evaluate the system against the peak performance specifications given by the manufacturer. • Prove the concept by using these benchmarks to assess and dynamically characterize the performance of a reconfigurable system, using the SRC-6E as a test case.

  5. Generalized Reconfigurable Architecture Model (GRAM)

  6. GRAM Benchmarks: Objective • Measure maximum sustainable data transfer rates and latency between the various elements of the GRAM. • Dynamically characterize the performance of the system against its peak performance.

  7. Generalized Reconfigurable Architecture Model (GRAM)

  8. GRAM Elements • PE – Processing Element • OCM – On-Chip Memory • LM – Local Memory • Interconnect Network / Shared Memory • Bus Interface • Microprocessor Memory

  9. GRAM Benchmarks • OCM – OCM: Measure max. sustainable bandwidth and latency between two OCMs residing on different PEs. • OCM – LM: Measure max. sustainable bandwidth and latency between OCM and LM in either direction. • OCM – Shared Memory: Measure max. sustainable bandwidth and latency between OCM and Shared Memory in either direction. • Shared Memory – MPM: Measure max. sustainable bandwidth and latency between Shared Memory and MPM in either direction.

  10. GRAM Benchmarks • OCM – MPM: Measure max. sustainable bandwidth and latency between OCM and MPM in either direction. • LM – MPM: Measure max. sustainable bandwidth and latency between LM and MPM in either direction. • LM – LM: Measure max. sustainable bandwidth and latency between two LMs in either direction. • LM – Shared Memory: Measure max. sustainable bandwidth and latency between LM and Shared Memory in either direction.

  11. GRAM Assumptions

  12. Assumptions • All devices on the board are fed by a single clock. • There is no direct path between the Local Memories of individual elements. • Connections for add-on cards may exist but are not shown. • The generalized architecture has been created based on precedents set by past and current manufacturers of Reconfigurable Systems.

  13. Methodology • Data paths can be parallelized to the maximum extent possible. • Inputs and outputs have been kept symmetrical. • Hardware timers have been used to measure the time taken to transfer data. • Measurements have been taken for transfers of increasingly large amounts of data. • Data must be verified for correctness after transfers. • Multiple paths may exist between the elements specified; our aim is to measure the fastest path available. • All experiments are conducted using the programming model and library functions of the system.

  14. Testbed Description: SRC-6E

  15. Hardware Architecture of the SRC-6E [Block diagram; recoverable labels: 800/1600 MB/s links on the microprocessor side, 800 MB/s links on the MAP side, 64-bit paths, and six-way 64-bit (64 x 6) channels.]

  16. Programming Model of the SRC-6E [Compilation flow: application .c or .f files pass through the μP compiler to .o files; .mc or .mf files pass through the MAP compiler to .v files, which join hand-written .vhd or .v files in logic synthesis to produce .ngo files; place & route then produces .bin files; the linker combines the .o and .bin files into the application executable.]

  17. GRAM Benchmarks for the SRC-6E [Annotated hardware diagram; recoverable labels: two μP boards, each with P3/P4 microprocessors (1/3 GHz), L2 caches, a control chip, MIOC, SNAP interface, 1.5 GB microprocessor memory, a PCI slot, and Ethernet; the MAP III board with control chips, user chips, and 24 MB on-board memory; link rates of 800/1600 MB/s (Shared Memory to MPM), 800 MB/s, 4800 (6 x 800) MB/s, and 2400 (4800*) MB/s; benchmark paths marked OCM – MPM, OCM – Shared Memory, and OCM – OCM.]

  18. GRAM Benchmarks for the SRC-6E

  19. Results

  20. Block Diagram for a Single-Bank Transfer Between OCM and Shared Memory • Start_timer; Read_timer(ht0) • µProcessor Memory to Shared Memory (DMA_in); Read_timer(ht1) • Shared Memory to OCM; Read_timer(ht2) • OCM to Shared Memory; Read_timer(ht3) • Shared Memory to µProcessor Memory (DMA_out); Read_timer(ht4)

  21. Latency (*1 word = 64 bits)

  22. Latency • The difference between read and write times for the OCM and Shared Memory is due to the read latency of OBM (6 clocks) vs. BRAM (1 clock). • When transferring data from the MPM to Shared Memory, writes are issued on each clock cycle, so there is no startup latency. • When reading data from Shared Memory to the MPM, an additional five clock cycles are required to transfer data after the read has been issued.

  23. Data Path from OCM to OCM Using Transfer of Permissions [Diagram; recoverable labels: Shared Memory banks A–F (4 MB each), two processing elements (FPGAs) each with OCM 1 and OCM 2, 64-bit paths between Shared Memory and the OCMs, and a 192-bit path.]

  24. Data Path from OCM to OCM Using the Bridge Port and the Streaming Protocol [Diagram; recoverable labels: Shared Memory banks A–F (4 MB each), OCM 1 on FPGA 1 and OCM 1 on FPGA 2, 64-bit paths.]

  25. P III & IV: Bandwidth: OCM and OCM (BM#1)

  26. P III: Bandwidth: OCM and OCM (BM#1)

  27. P IV: Bandwidth: OCM and OCM (BM#1)

  28. P IV: Bandwidth: OCM and OCM (BM#1) (Streaming Protocol in Bridge Port)

  29. Data Path from OCM to MPM and Shared Memory to MPM [Diagram; recoverable labels: Microprocessor Memory, SNAP, Control FPGA, Shared Memory banks A–F (4 MB each), and a processing element (FPGA) with OCM 1–3, connected by 64-bit paths.]

  30. P III: Bandwidth: OCM and Shared Memory for a single bank

  31. P III: Bandwidth: OCM and Shared Memory

  32. P IV: Bandwidth: OCM and Shared Memory

  33. P III: Bandwidth: OCM and µP Memory

  34. P IV: Bandwidth: OCM and µP Memory

  35. P III: Bandwidth: Shared Memory and µP Memory (BM#5)

  36. P IV: Bandwidth: Shared Memory and µP Memory

  37. P III: Bandwidth: Shared Memory and µP Memory

  38. P IV: Bandwidth: Shared Memory and µP Memory

  39. Data Path from FPGA Register to Shared Memory [Diagram; recoverable labels: Shared Memory banks A–F (4 MB each), a register on the processing element (FPGA 1), 64-bit paths.]

  40. P III: Bandwidth: Shared Memory and Register

  41. Conclusion & Future Direction

  42. GRAM Summation for Pentium III

  43. GRAM Summation for Pentium IV

  44. Conclusions • The type of components used plays a major role in determining system performance, as seen in the Pentium III and Pentium IV versions of the SRC-6E. • The software environment and its state of development determine how effectively a program can utilize the hardware. This is clear from the difference in bandwidth achieved across the Bridge Ports between the Carte 1.6.2 and Carte 1.7 releases.

  45. Conclusions … • The GRAM Summation Tables serve machine architects in the following ways: • The efficiency column indicates how well a particular communication channel is being utilized within the hardware context. If efficiency is low, architects may be able to improve performance through a firmware improvement; if efficiency is high but the normalized bandwidth is low, they should consider a hardware upgrade. • By looking at the normalized bandwidths obtained from the GRAM benchmarks, designers can determine whether data transfer rates are balanced across the architectural modules. This helps identify bottlenecks. • Designers can find out which channels have the maximum efficiency and fine-tune their applications to exploit those channels and achieve the maximum data transfer rate.

  46. Conclusions … • In addition, the GRAM Summation Tables provide the following information to application developers: • The tables tell a designer what bottlenecks to expect and where those bottlenecks lie. • By comparing the figures for efficiency and normalized transfer rate, designers can determine whether bottlenecks are created by the hardware or the software. • By observing the GRAM Summation Tables, designers can predict the performance of a pre-designed application on a particular reconfigurable system.

  47. Future Direction • The benchmarks can be expanded to include end-to-end performance from asymmetrical and synthetic workloads. • The benchmarks can also include tables characterizing the performance of reconfigurable computers relative to modern parallel architectures. A performance-to-cost analysis can also be considered.