
Sparse Linear Solver for Power System Analysis using FPGA


Presentation Transcript


  1. Sparse Linear Solver for Power System Analysis using FPGA
  Jeremy Johnson, Prawat Nagvajara, Chika Nwankpa
  Drexel University
  HPEC 2004

  2. Goal & Approach
  • Design an embedded FPGA-based multiprocessor system to perform high-speed Power Flow Analysis.
  • Provide a single desktop environment that solves the entire Power Flow problem (multiprocessors on the desktop).
  • Solve the Power Flow equations using Newton-Raphson, with hardware support for sparse LU factorization.
  • Tailor the hardware design to the systems that arise in Power Flow analysis.
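The Newton-Raphson loop with an LU solve at each iteration can be sketched in software. This toy Python version uses a dense Gaussian-elimination solver and a two-equation test system in place of the real power-flow mismatch equations and the sparse hardware LU; the function names and the tolerance are illustrative, not from the slides.

```python
def lu_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (dense, toy-sized)."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        # Pivot search: largest magnitude in column k (mirrors the pivot hardware).
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Backward substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

def newton_raphson(f, jac, x, tol=1e-10, max_iter=20):
    """Iterate x <- x + dx, where J(x) dx = -f(x), until the mismatch is small."""
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:   # mismatch < accuracy
            break
        dx = lu_solve(jac(x), [-v for v in fx])
        x = [xi + di for xi, di in zip(x, dx)]
    return x
```

For example, solving the toy system x² + y² = 4, xy = 1 from the start point (2.0, 0.5) converges in a handful of iterations.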

  3. Algorithm and HW/SW Partition
  [Flowchart: on the host, read the Ybus data input, extract the data, and form the Jacobian matrix; then perform LU factorization followed by forward and backward substitution. If the mismatch is not below the accuracy threshold, update the Jacobian matrix and repeat; otherwise proceed to post-processing on the host.]

  4. Results
  • Software solutions (the sparse LU needed for Power Flow) on high-end PCs/workstations do not achieve efficient floating-point performance and leave substantial room for improvement.
  • Coarse-grained parallelism will not significantly improve performance due to the granularity of the computation.
  • An FPGA, with a much slower clock, can outperform PCs/workstations by devoting space to hardwired control and additional FP units, and by exploiting fine-grained parallelism.
  • Benchmarking studies show that significant performance gain is possible.
  • A 10x speedup is possible using existing FPGA technology.

  5. Benchmark
  • Obtain data from power systems of interest

  6. System Profile

  7. System Profile
  • More than 80% of rows/cols have size < 30
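A profile statistic like the one above can be computed directly from a matrix's sparsity pattern. A minimal sketch, assuming the nonzeros are available as (row, column) coordinate pairs; the data in the usage example is a toy pattern, not one of the benchmark matrices:

```python
from collections import Counter

def fraction_small_cols(nonzeros, threshold=30):
    """Fraction of columns whose nonzero count is below `threshold`.

    `nonzeros` is an iterable of (row, col) coordinates of nonzero entries.
    """
    counts = Counter(col for _, col in nonzeros)          # nonzeros per column
    small = sum(1 for c in counts.values() if c < threshold)
    return small / len(counts)
```

For instance, for a pattern with one dense column of 40 entries and two columns of 5 and 1 entries, `fraction_small_cols` returns 2/3.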

  8. Software Performance
  • Software platform:
  • UMFPACK
  • Pentium 4 (2.6 GHz)
  • 8 KB L1 data cache
  • Mandrake 9.2
  • gcc v3.3.1

  9. Hardware Model & Requirements
  • Store row & column indices for non-zero entries.
  • Use column indices to search for the pivot; overlap the pivot search and division by the pivot element with row reads.
  • Use multiple FPUs to do simultaneous updates (enough parallelism for 8–32, the average column size).
  • Use a cache to store updated rows from iteration to iteration (70% overlap, memory ≈ 400 KB for the largest system); it can also be used for prefetching.
  • Total memory required ≈ 22 MB (largest system).

  10. Architecture
  [Block diagram: the FPGA's processing logic connects to an SRAM cache through a cache controller and to SDRAM memory through an SDRAM controller.]

  11. Pivot Hardware
  [Block diagram: the pivot logic reads the colmap from memory, translates each physical index to a virtual index (rejecting invalid indices), and uses a floating-point compare over the column values to produce the pivot index and pivot value for the pivot column.]
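In software terms, the pivot stage amounts to scanning a sparse column, rejecting rows that have already been eliminated, and keeping the entry of largest magnitude. A minimal sketch, assuming columns are stored as {row: value} maps (an illustrative layout, not the hardware's):

```python
def find_pivot(column, eliminated):
    """Scan a sparse column {row_index: value}, skipping already-eliminated
    rows (the 'index reject' stage), and return (pivot_row, pivot_value)
    for the entry of largest magnitude (the FP-compare stage)."""
    pivot_row, pivot_value = None, 0.0
    for row, value in column.items():
        if row in eliminated:        # reject: this row already served as a pivot
            continue
        if abs(value) > abs(pivot_value):
            pivot_row, pivot_value = row, value
    return pivot_row, pivot_value
```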

  12. Parallel FPUs

  13. Update Hardware
  [Block diagram: the column word and pivot-row entries feed an FMUL unit; an FADD unit combines the product with the row read from memory; select/write logic and merge logic, driven by the colmap, assemble the updated submatrix row.]
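The update stage corresponds to the classic elimination step row ← row − m · pivot_row, where the merge logic handles fill-in for columns present in only one of the two rows. A sketch with rows as sparse {column: value} maps (an assumed layout for illustration):

```python
def update_row(row, pivot_row, pivot_col):
    """One submatrix-row update: row <- row - m * pivot_row, where
    m = row[pivot_col] / pivot_row[pivot_col].

    Rows are sparse {col: value} maps; the merge loop creates fill-in
    for columns present only in the pivot row.
    """
    if pivot_col not in row:
        return row                                 # nothing to eliminate here
    m = row[pivot_col] / pivot_row[pivot_col]      # multiplier (one FP divide)
    updated = dict(row)
    for col, pv in pivot_row.items():
        updated[col] = updated.get(col, 0.0) - m * pv   # FMUL + FADD per entry
    del updated[pivot_col]      # the eliminated entry is zero by construction
    return updated
```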

  14. Performance Model
  • A C program simulates the computation (data transfers and arithmetic operations) and estimates the architecture's performance (clock cycles and seconds).
  Model Assumptions:
  • Sufficient internal buffers
  • Cache write hits 100%
  • Simple static memory allocation
  • No penalty on cache write-back to SDRAM
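The slides' performance model is a C simulator; in the same spirit, a heavily simplified cycle estimate might overlap memory traffic with computation and take the larger of the two. Every parameter below (words per cycle, FPU count, miss penalty) is an illustrative assumption, not a value from the model:

```python
def estimate_cycles(memory_words, flops, cache_hit_rate,
                    words_per_cycle=1, num_fpus=8, miss_penalty=10):
    """Crude cycle estimate: memory traffic and arithmetic are assumed to
    overlap, so the slower of the two dominates; each cache miss adds a
    fixed penalty on top of the base transfer cost."""
    misses = memory_words * (1.0 - cache_hit_rate)
    transfer_cycles = memory_words / words_per_cycle + misses * miss_penalty
    compute_cycles = flops / num_fpus
    return max(transfer_cycles, compute_cycles)
```

With a perfect cache, 16,000 flops on 8 FPUs dominate 1,000 words of traffic; at a 90% hit rate the miss penalties make memory the bottleneck instead, which is the kind of trade-off such a model exposes.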

  15. Performance • …

  16. GEPP Breakdown
