
CS591x Cluster Computing


Presentation Transcript


  1. CS591x Cluster Computing Interconnect Performance

  2. Interconnect Performance • A major factor in overall computational performance for communication-intensive applications – collaborative processes that exchange data frequently • Less important for compute-intensive, low-communication applications
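
A common way to see these effects directly is a ping-pong micro-benchmark. The sketch below is illustrative and not from the course materials: it bounces a message between two MPI ranks and reports the average round trip time and effective bandwidth (shrink n to probe latency rather than bandwidth).

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define REPS 100

int main(int argc, char *argv[])
{
    int rank, i, n = 1 << 20;   /* 1 MB message; vary n to probe latency vs. bandwidth */
    char *buf;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(n);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {        /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) { /* rank 1 echoes the message back */
            MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round trip %g s, approx bandwidth %g MB/s\n",
               (t1 - t0) / REPS, 2.0 * n * REPS / ((t1 - t0) * 1e6));

    free(buf);
    MPI_Finalize();
    return 0;
}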

  3. Comparison of Interconnect Technology – estimates (comparison table not reproduced in this transcript)

  4. Others • 10 GigE • Coming along • Still expensive • Cray RapidArray Interconnect – XD1 • 9 Gb/s bandwidth • 1.8 µs latency • Low overhead

  5. Interconnect summary • Fast Ethernet – cheap, but poor performance • GigE – getting cheaper, better bandwidth, but latency and overhead problems • Myrinet, Quadrics – very good bandwidth, low latency, low overhead – expensive, and likely to stay expensive

  6. … Interconnect Summary • 10 GigE is coming along – too early to tell • InfiniBand is the technology to watch • Very good performance • Expensive, but getting cheaper • Custom “fabrics” • Very high performance • e.g. the Cray XD1

  7. Interconnect • Performance is also influenced by the transport protocol (e.g. IP) • Using shared memory for communication gives better performance, but may bring other problems • Application design may be tuned for a particular interconnect architecture – see the sketch below
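
As an illustration of tuning for interconnect characteristics: on a high-latency interconnect an application might aggregate many small messages into one large one. A hedged sketch, where x, dest, and REPS are illustrative names:

/* Latency-bound: REPS separate one-element messages each pay the per-message latency */
for (i = 0; i < REPS; i++)
    MPI_Send(&x[i], 1, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);

/* Bandwidth-bound: one aggregated send amortizes that latency over REPS elements */
MPI_Send(x, REPS, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);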

  8. Compiling Programs on energy • C/C++ compilers • gcc/g++ • icc (Intel) • pgcc/pgCC (Portland Group) • mpicc/mpiCC (MPI wrappers)

  9. Compiling Programs on energy • Fortran compilers • gcc/g77 • Fortran 77 (maybe more coming) • gcc recognizes file extensions (.f, .for, …) • ifort – Intel Fortran 95 • mpif77 (MPI wrapper) • pgf77 • pgf90

  10. Compiler options • Common options • -o specifies the compiler output file • -l specifies a library to link against • -L specifies a library search path (important) • -I specifies a search path for include files • -l/-L/-I can occur multiple times • An illustrative combined compile line appears below
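
Putting the common options together, an illustrative compile line (the paths and mylib are made-up names) might be:

gcc myprog.c -o myprog -I/home/user/include -L/home/user/lib -lmylib -lm

Note that -l flags usually go after the source files, so the linker can resolve the references they satisfy.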

  11. Compiler Options • Many compile/link options • Many optimization options (more on this later)

  12. So… • Imagine that you wrote a program called simplehello.c …
  #include <stdio.h>
  int main(void)
  {
      printf("Hello, World!\n");
      return 0;
  }

  13. To compile with gcc • Like this… • gcc simplehello.c -o simplehello • If all goes well, you get the shell prompt back with no messages (no errors) • Review compiler errors carefully • Unresolved references usually mean something is not defined or the linker can’t find it
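
For example, a program that calls sqrt() but is compiled without the math library typically fails with a message along the lines of "undefined reference to `sqrt'" (exact wording varies by linker); adding -lm, as in gcc usesqrt.c -o usesqrt -lm, resolves it. (usesqrt.c is a hypothetical file name.)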

  14. Or … • With Intel C/C++ • icc simplehello.c -o simplehello • With mpicc • mpicc simplehello.c -o simplehello • Note: different compilers may have different defaults/definitions

  15. … mpi… • We have to give the compiler more direction … • gcc hello.c -o hellogcc -lmpich -I/usr/local/packages/mpich/include -L/usr/local/packages/mpich/lib • icc hello.c -o helloicc -lmpich -I/usr/local/packages/mpich/include -L/usr/local/packages/mpich/lib
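
For reference, a minimal hello.c that these commands could compile (a sketch; not necessarily the course's exact file):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);                /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello, World from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}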

  16. With mpicc • mpicc hello.c -o hellompi • The wrapper supplies the MPI include and library paths for you
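
Once compiled, the program is typically launched with mpirun; the process count here is illustrative:

mpirun -np 4 ./hellompi

Each of the four processes prints its own "Hello, World" line.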
