
Using PC clusters for scientific computing: do they really work?


Presentation Transcript


  1. Using PC clusters for scientific computing: do they really work? Roldan Pozo Mathematics and Computational Sciences Division NIST

  2. Alternate Title • “Supercomputing: the view from below…”

  3. NIST Activities • Fire Dynamics • Applied Economics • Polymer Combustion Research • DNA Chemistry • Applied Computational Chemistry • Reacting Flow Simulation • Microbeam analysis • Atmospheric and Chemometric Research • Analytical Mass Spectrometry for Biomolecules • Trace-Gas analysis • Neutron Activation Analysis • Plasma Chemistry • Thin-Film Process Metrology • Nanoscale Cryoelectronics • Computer Security • Computer-aided Manufacturing • Polymer Characterization • ...

  4. NIST Computing • Cray C-90 • IBM SP2 • SGI Origin 2000 • Convex C3820 • small workstation (Alpha, RS6K, etc.) clusters • small PC clusters

  5. Parallel / Scalable / Distributed Computing? • Thanks, but parallel computing is (still) hard... • don’t have the • time • resources • development cycle • economic justification • don’t really need it

  6. Flops is not the issue... • Development time • Turn-around time

  7. The Big Supercomputing Maxim: • “The bigger the machine, the more you share it”

  8. From Big Iron to clusters... • Migration of conventional supercomputer users (Cray, etc.) to less expensive platforms • are small clusters the answer? • care to parallelize your apps?

  9. User Responses • “Go away.” • “DM: Been there, done that.” • “Parallel Computing failed.” • “Can’t the compiler do that?”

  10. Alternate approach... • MASSIVELY SEQUENTIAL COMPUTING • personal compute server • mainly sequential jobs • some occasional small (2-8 processor) parallel jobs

  11. Sequential rules • Big applications are hardly ever run once. • Most simulations consist of many runs of the same code with different input data. • Memory constraints? Buy more memory!
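A minimal sketch of this “many runs, different inputs” pattern in C with MPI: every rank runs the same sequential code, each against its own input file. The file-name scheme (input00.dat, ...) and the run_simulation() call are hypothetical placeholders, not part of the talk.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    char fname[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* each process picks the input file that matches its rank */
    sprintf(fname, "input%02d.dat", rank);

    /* run_simulation(fname); -- the unchanged sequential solver would go here */
    printf("rank %d of %d would process %s\n", rank, nprocs, fname);

    MPI_Finalize();
    return 0;
}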

  12. Benefits of a personal supercomputer • Don’t have to share it with anyone! • Often reduced turn-around time • No batch queues, CPU limits, disk quotas, etc. • direct control over the resource • You get to decide how to best use it

  13. JazzNet I • 9 processors • Intel BaseTX Express Hub • JazzNet II • 18 processors • Myrinet Gigabit network • 8-port 3Com SuperStackII 3000TX fast ethernet switch • 16-port Bay Networks BayStack 350T fast ethernet switch

  14. Parallel adaptive multigrid (PHAML) (William F. Mitchell, MCSD) • Adaptive multigrid for finite element modeling • 2D elliptic partial differential equations • uses Fortran 90 and PVM/MPI • originally developed on the IBM SP2

  15. PHAML performance

  16. 3D Helmholtz equation solver (Karin A. Remington, MCSD) • Fast, direct method for solving elliptic PDEs via “matrix decomposition” • Handles Dirichlet, Neumann, or periodic boundary conditions

  17. Helmholtz solver implementation • 1D decomposition, f77/C, PVM & MPI • FFT across processors • personalized “all-to-all” communication
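The “personalized all-to-all” step is the across-processor transpose that a single MPI collective can carry out. Below is a minimal sketch, not the NIST solver: the block size, data layout, and value tagging are simplifying assumptions made only to show the exchange.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int blk = 4;   /* doubles sent to each other rank (assumed block size) */
    double *sendbuf = malloc(nprocs * blk * sizeof(double));
    double *recvbuf = malloc(nprocs * blk * sizeof(double));

    /* fill the block destined for rank p with values tagged by (rank, p) */
    for (int p = 0; p < nprocs; p++)
        for (int i = 0; i < blk; i++)
            sendbuf[p * blk + i] = 100.0 * rank + p;

    /* the "personalized" exchange: one MPI_Alltoall performs the whole transpose */
    MPI_Alltoall(sendbuf, blk, MPI_DOUBLE,
                 recvbuf, blk, MPI_DOUBLE, MPI_COMM_WORLD);

    printf("rank %d received first value %g from rank 0\n", rank, recvbuf[0]);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}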

  18. 3D Helmholtz performance

  19. Optimal wing shape in viscous flows (Anthony J. Kearsley, MCSD) • Optimization problem to minimize vorticity • CFD around trial shapes with constrained shape methods • uses domain decomposition and domain embedding • hybrid constrained direct search method
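To illustrate what a direct search method does (the actual hybrid constrained method and CFD objective are not reproduced here), a generic compass-search loop in C; objective() is a hypothetical stand-in for the vorticity computed around a trial shape.

#include <stdio.h>

#define DIM 2

/* hypothetical stand-in for the flow solve that returns the quantity to minimize */
static double objective(const double x[DIM])
{
    return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] + 0.5) * (x[1] + 0.5);
}

int main(void)
{
    double x[DIM] = {0.0, 0.0};   /* current trial shape parameters */
    double step = 0.5;            /* pattern size */
    double fx = objective(x);

    while (step > 1e-6) {
        int improved = 0;
        for (int i = 0; i < DIM && !improved; i++) {
            for (int s = -1; s <= 1; s += 2) {   /* try +/- step along axis i */
                double trial[DIM] = {x[0], x[1]};
                trial[i] += s * step;
                double ft = objective(trial);
                if (ft < fx) {                   /* accept an improving point */
                    x[0] = trial[0]; x[1] = trial[1];
                    fx = ft;
                    improved = 1;
                    break;
                }
            }
        }
        if (!improved)
            step *= 0.5;                         /* no improvement: shrink the pattern */
    }
    printf("approximate minimizer: (%g, %g), f = %g\n", x[0], x[1], fx);
    return 0;
}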

  20. Optimal wing shape performance

  21. Phase-field algorithm for solidification modeling (Bruce Murray, NIST/SUNY) • set of two time-dependent, nonlinear parabolic PDEs • Fortran 77 Cray application • finite difference / ADI method
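Each ADI sweep reduces the 2D update to independent tridiagonal solves along grid lines. The sketch below shows that building block, a generic Thomas-algorithm solve in C on a toy system; it is not the Fortran 77 Cray code itself.

#include <stdio.h>

/* Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], i = 0..n-1
 * (a[0] and c[n-1] are unused).  b and d are overwritten; the
 * solution is returned in d. */
static void thomas(int n, const double *a, double *b, const double *c, double *d)
{
    for (int i = 1; i < n; i++) {            /* forward elimination */
        double m = a[i] / b[i - 1];
        b[i] -= m * c[i - 1];
        d[i] -= m * d[i - 1];
    }
    d[n - 1] /= b[n - 1];                    /* back substitution */
    for (int i = n - 2; i >= 0; i--)
        d[i] = (d[i] - c[i] * d[i + 1]) / b[i];
}

int main(void)
{
    /* toy system: the 1D Laplacian stencil [-1 2 -1] on 5 points */
    double a[5] = {  0, -1, -1, -1, -1 };
    double b[5] = {  2,  2,  2,  2,  2 };
    double c[5] = { -1, -1, -1, -1,  0 };
    double d[5] = {  1,  0,  0,  0,  1 };

    thomas(5, a, b, c, d);
    for (int i = 0; i < 5; i++)
        printf("x[%d] = %g\n", i, d[i]);     /* expect all ones */
    return 0;
}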

  22. solidification modeling performance (1200x600 grid, 50 steps)

  23. solidification modeling performance (1200x600 grid, 50 steps)

  24. JazzNet Pentium II nodes • ASUS KN97X motherboard (440FX PCI chipset) • 266MHz Pentium IIs (512KB cache) • 128 MB RAM (60ns SIMMs) • integrated EIDE controller • 2GB EIDE disk • Kingston Tech. EtherRx 10/100 NIC

  25. Example Configuration (8 nodes, fast ethernet switch) • 8 nodes, 2GB RAM, 64 GB disk ($25,000) • 400 MHz Pentium IIs, rack-mount case • 256 MB RAM each • 8 GB Ultra-ATA disks • 16 port Fast Ethernet switch • 4 UPS • DDS-3 SCSI DAT backup • monitor, cables, etc.

  26. PC clusters will work if... • You have many independent jobs to run (compute server) • supercomputing resources are busy • you have ready-to-run parallel applications • have portable Unix f77/C codes • apps not highly vectorizable • willing to use Linux/PC

  27. PC clusters will not work if... • Proprietary library/app not available • expect parallel computing to be easy and solve all your problems… • have extreme memory bandwidth requirements • need more RAM/disk space than physically available on PC architectures

  28. Related Projects • NIST Scalable Computing Testbed Project • Beowulf • Berkeley NOW • Illinois HPVM • DAISy (Sandia) • Grendel • TORC (ORNL/Tenn.) • FermiLab • Brahma • Aenes • PACET • MadDog • … and many more

  29. From Big Iron to clusters... • Migration of conventional supercomputer users (Cray, etc.) to less expensive platforms • are small clusters the answer? • care to parallelize your apps?

  30. What could we do? • Give each user their personal server • help them port their apps • provide some consultation • for jobs too big, contract out.

  31. Departing thoughts • The ultra-high-end is sexy, but… • the end-user audience shrinks to zero • The real opportunity for the greatest influence is at the low/middle level. • This is where the other 99.9% of the needs are, and users there feel ignored.
