
IBM Blue Gene/P
Dr. George Chiu, IEEE Fellow, IBM T.J. Watson Research Center, Yorktown Heights, NY


Presentation Transcript


  1. IBM Blue Gene/P. Dr. George Chiu, IEEE Fellow, IBM T.J. Watson Research Center, Yorktown Heights, NY

  2. President Obama Honors IBM's Blue Gene Supercomputer With National Medal Of Technology And Innovation

  Ninth time IBM has received the nation's most prestigious tech award. Blue Gene has led to breakthroughs in science, energy efficiency and analytics.

  WASHINGTON, D.C., 18 Sep 2009: President Obama recognized IBM (NYSE: IBM) and its Blue Gene family of supercomputers with the National Medal of Technology and Innovation, the country's most prestigious award given to leading innovators for technological achievement. President Obama will personally bestow the award at a special White House ceremony on October 7. IBM, which earned the National Medal of Technology and Innovation on eight other occasions, is the only company recognized with the award this year.

  Blue Gene's speed and expandability have enabled business and science to address a wide range of complex problems and make more informed decisions, not just in the life sciences, but also in astronomy, climate, simulations, modeling and many other areas. Blue Gene systems have helped map the human genome, investigated medical therapies, safeguarded nuclear arsenals, simulated radioactive decay, replicated brain power, flown airplanes, pinpointed tumors, predicted climate trends, and identified fossil fuels, all without the time and money that would have been required to physically complete these tasks.

  The system also reflects breakthroughs in energy efficiency. With the creation of Blue Gene, IBM dramatically shrank the physical size and energy needs of a computing system whose processing speed would otherwise have required a dedicated power plant capable of powering thousands of homes. The influence of the Blue Gene supercomputer's energy-efficient design and computing model can be seen today across the Information Technology industry. Today, 18 of the top 20 most energy-efficient supercomputers in the world are built on IBM high performance computing technology, according to the latest Supercomputing 'Green500 List' announced by Green500.org in July 2009.

  3. Blue Gene Technology Roadmap (performance over time)
  • Blue Gene/L (PPC 440 @ 700 MHz), 2004: scalable to 595 TFlops
  • Blue Gene/P (PPC 450 @ 850 MHz), 2007: scalable to 3.56 PF
  • Blue Gene/Q (Power multi-core), 2010: scalable to 20 PF
  Note: All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

  4. BlueGene Roadmap
  • BG/L (5.7 TF/rack), 130nm ASIC (1999-2004 GA)
    • 104 racks, 212,992 cores, 596 TF/s, 210 MF/W; dual-core system-on-chip
    • 0.5/1 GB per node
  • BG/P (13.9 TF/rack), 90nm ASIC (2004-2007 GA)
    • 72 racks, 294,912 cores, 1 PF/s, 357 MF/W; quad-core SOC, DMA
    • 2/4 GB per node
    • SMP support, OpenMP, MPI
  • BG/Q
    • 20 PF/s

  5. TOP500 Performance Trend
  • IBM has the most aggregate performance for the last 20 lists
  • IBM has the #1 system for the last 10 lists (13 in total)
  (Chart of total aggregate, #1, #10, and #500 performance over time: 22.6 PF total aggregate, 1.1 PF for #1, 275 TF for #10, 17.1 TF for #500. Blue square markers indicate IBM leadership. Source: www.top500.org)

  6. HPCC 2008
  IBM BG/P, 365 TF Linpack (32 racks, 450 TF peak):
  • Number 1 on FFT (4485.72)
  • Number 1 on Random Access (6.82)
  Cray XT5, 1059 TF Linpack:
  • Number 1 on HPL
  • Number 1 on Stream
  Source: www.top500.org

  7. November 2007 Green500
  (Chart of Linpack GFLOPS/W; labeled values 0.09, 0.05, and 0.02.)

  8. IBM BG/P relative power, space and cooling efficiencies (published specs per peak performance)

  9. BlueGene/P packaging hierarchy
  • Chip: 4 processors, 13.6 GF/s, 8 MB eDRAM
  • Compute Card: 1 chip, 20 DRAMs, 13.6 GF/s, 2.0 GB DDR2 (4.0 GB as of 6/30/08)
  • Node Card: 32 chips (4x4x2), 32 compute and 0-1 I/O cards, 435 GF/s, 64 (128) GB
  • Rack: 32 node cards, cabled 8x8x16, 13.9 TF/s, 2 (4) TB
  • System: 72 racks, 72x32x32, 1 PF/s, 144 (288) TB
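  For reference, these peak figures compose directly up the packaging hierarchy, assuming the usual BG/P figure of 4 floating-point operations per cycle per core (dual-pipeline FPU with fused multiply-add):

    per core:      850 MHz x 4 flops/cycle     =   3.4 GF/s
    per chip:      4 cores x 3.4 GF/s          =  13.6 GF/s
    per node card: 32 chips x 13.6 GF/s        = 435.2 GF/s
    per rack:      32 node cards x 435.2 GF/s  =  13.9 TF/s
    per system:    72 racks x 13.9 TF/s        =  ~1.0 PF/s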

  10. BlueGene/P compute ASIC (block diagram)
  • Four PPC450 cores, each with 32 KB L1 instruction and 32 KB L1 data caches, a snoop filter, a double FPU, and a private L2 prefetch buffer
  • Multiplexing switches connecting the cores to a small shared SRAM and to two shared L3 banks
  • 2 x 4 MB eDRAM shared L3 cache (or on-chip memory), each bank with an L3 directory with ECC and a 512b data + 72b ECC interface
  • DMA engine, arbiter, and hybrid PMU with 256x64b SRAM
  • Two DDR-2 controllers with ECC driving a 13.6 GB/s DDR-2 DRAM bus
  • External interfaces: torus (6 links at 3.4 Gb/s, bidirectional), collective (3 links at 6.8 Gb/s, bidirectional), global barrier (4 global barriers or interrupts), 10 Gb/s Ethernet, and JTAG access

  11. Execution Modes in BG/P per Node
  • Next-generation HPC: many cores, expensive memory, two-tiered programming model
  • SMP Mode: 1 process, 1-4 threads per process
  • Dual Mode: 2 processes, 1-2 threads per process
  • Quad Mode (Virtual Node Mode): 4 processes, 1 thread per process
  (Diagram maps the node's four cores to processes P0-P3 and threads T0-T3 for each mode; hardware abstractions shown in black, software abstractions in blue.)
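  To make the two-tiered model concrete, here is a minimal hybrid MPI + OpenMP sketch (generic MPI/OpenMP code, not BG/P-specific; the compile command is illustrative). The same binary runs in SMP, Dual, or Quad mode, because the number of processes per node and threads per process is chosen at job launch rather than in the source.

    /* Minimal hybrid MPI + OpenMP sketch of the two-tiered model.
     * The node mode (SMP, Dual, or Quad/VNM) is selected at job launch;
     * the code only sees MPI ranks and OpenMP threads.
     * Compile (illustrative): mpicc -fopenmp hybrid.c -o hybrid
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Ask for funneled threading: only the main thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Each MPI process (1, 2, or 4 per node depending on mode) spawns
         * its OpenMP threads (up to 4, 2, or 1 per process respectively). */
        #pragma omp parallel
        {
            printf("rank %d of %d, thread %d of %d\n",
                   rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }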

  12. BG/P 4-core compute card (target 100 FITs, 25% SER)
  (Photo callouts: 2 x 16B interface to 2 or 4 GB SDRAM-DDR2; BG/P ASIC in a 29mm x 29mm FC-PBGA package; NVRAM, monitors, decoupling, and Vtt termination; connector carrying all network and I/O signals plus power input.)

  13. BPC Node Card
  • 32 compute nodes
  • Local DC-DC regulators (6 required, 8 with redundancy)
  • Optional I/O card (one of 2 possible) with 10 Gb optical link

  14. First BG/P Rack (2 Midplanes)

  15. Hydro-Air Concept for BlueGene/P
  (Figure, drawn to scale; key: BG rack with cards and fans, airflow, air plenum, air-to-water heat exchanger.)
  • 36" air-cooled BG/L: 25 kW/rack, 3000 CFM/rack
  • 48" air-cooled BG/P: 40 kW/rack, 5000 CFM/rack
  • 36" hydro-air cooled BG/P: 40 kW/rack, 5000 CFM/row

  16. Main Memory Capacity per Rack

  17. Peak Memory Bandwidth per node (byte/flop)

  18. Main Memory Bandwidth per Rack

  19. BlueGene/P Interconnection Networks

  3-Dimensional Torus
  • Interconnects all compute nodes (73,728)
  • Virtual cut-through hardware routing
  • 3.4 Gb/s on all 12 node links (5.1 GB/s per node)
  • 0.5 µs latency between nearest neighbors, 5 µs to the farthest
  • MPI: 3 µs latency for one hop, 10 µs to the farthest
  • Communications backbone for computations
  • 1.7/3.9 TB/s bisection bandwidth, 188 TB/s total bandwidth

  Collective Network
  • One-to-all broadcast functionality
  • Reduction operations functionality
  • 6.8 Gb/s of bandwidth per link per direction
  • Latency of one-way tree traversal 1.3 µs, MPI 5 µs
  • ~62 TB/s total binary tree bandwidth (72k machine)
  • Interconnects all compute and I/O nodes (1152)

  Low-Latency Global Barrier and Interrupt
  • Latency of one way to reach all 72K nodes 0.65 µs, MPI 1.6 µs
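  Applications reach these networks through MPI. The sketch below is a generic MPI example, not BG/P-specific code: it builds a periodic 3-D Cartesian communicator mirroring the torus, performs a nearest-neighbor exchange along one axis (point-to-point traffic of the kind that maps onto the torus network), and finishes with a global reduction (the kind of operation that maps onto the collective network).

    /* Generic MPI sketch: a periodic 3-D Cartesian communicator mirroring
     * the torus, a nearest-neighbor exchange, and a global sum. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int nranks;
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Let MPI factor the ranks into a 3-D grid; all dimensions periodic. */
        int dims[3] = {0, 0, 0}, periods[3] = {1, 1, 1};
        MPI_Dims_create(nranks, 3, dims);

        MPI_Comm torus;
        MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1 /* reorder */, &torus);

        int rank;
        MPI_Comm_rank(torus, &rank);

        /* Exchange one double with the -x and +x neighbors (one of 6 directions). */
        int left, right;
        MPI_Cart_shift(torus, 0, 1, &left, &right);

        double send = (double)rank, recv = 0.0;
        MPI_Sendrecv(&send, 1, MPI_DOUBLE, right, 0,
                     &recv, 1, MPI_DOUBLE, left,  0,
                     torus, MPI_STATUS_IGNORE);

        /* Global reduction of the kind served by the collective (tree) network. */
        double sum = 0.0;
        MPI_Allreduce(&send, &sum, 1, MPI_DOUBLE, MPI_SUM, torus);

        if (rank == 0)
            printf("grid %dx%dx%d, global sum = %g\n", dims[0], dims[1], dims[2], sum);

        MPI_Comm_free(&torus);
        MPI_Finalize();
        return 0;
    }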

  20. Interprocessor Peak Bandwidth per node (byte/flop)

  21. Unparalleled reliability: failures per month per 100 TFlops
  (Chart: comparison clusters at 800, 394, and 127 failures per month, versus 1 for 20 BG/L racks at ~100 TFlops.)
  Results of a survey conducted by Argonne National Lab on 10 clusters ranging from 1.2 to 365 TFlops (peak); excludes storage subsystem, management nodes, SAN network equipment, and software outages.

  22. Reproducibility of Floating Point Operations
  • The example below uses single precision floating point (~7 digits of accuracy); the same principle applies to double precision (~14 digits of accuracy)
  • A = 1234567
  • B = 1234566
  • C = 0.1234567
  • A - B + C = 1.123457
  • A + C - B = 1
  • Caution: floating point arithmetic with a finite number of digits of accuracy is not associative, so the order of evaluation changes the result.
  • BG/L and BG/P enforce execution order, so all calculations are reproducible.
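  A minimal C sketch of the same effect, using IEEE single precision (binary rounding rather than the slide's 7-decimal-digit model, so the exact printed values differ slightly, but the two evaluation orders still disagree):

    /* Finite-precision addition is not associative: evaluating the same
     * sum in two different orders gives two different float results. */
    #include <stdio.h>

    int main(void)
    {
        /* volatile keeps the compiler from folding the expressions at
         * higher precision at compile time. */
        volatile float A = 1234567.0f;
        volatile float B = 1234566.0f;
        volatile float C = 0.1234567f;

        float r1 = (A - B) + C;  /* A - B is exactly 1, so r1 is about 1.1234567 */
        float r2 = (A + C) - B;  /* C's low-order digits are rounded away when
                                    added to the much larger A first */

        printf("(A - B) + C = %.7f\n", r1);
        printf("(A + C) - B = %.7f\n", r2);  /* differs from r1 */
        return 0;
    }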

  23. Summary
  • Blue Gene/P: facilitating extreme scalability
  • Ultrascale capability computing when nothing else will satisfy
  • Provides customers with enough computing resources to help solve grand challenge problems
  • Provides competitive advantages for customers' applications that need extreme computing power
  • Energy-conscious solution supporting green initiatives
  • Familiar open/standards operating environment
  • Simple porting of parallel codes
  • Key solution highlights: leadership performance, space-saving design, low power requirements, high reliability, and easy manageability

  24. Backup

  25. Current HPC Systems Characteristics

  26. Blue Gene/L Customers, with 232 racks sold
  • Advanced Industrial Science and Technology (AIST, Yutaka Akiyama) 4 racks 2/05
  • Argonne National Laboratory Consortium (Rick Stevens) 1 rack 12/04
  • ASTRON Lofar, Holland - Stella (Kjeld van der Schaaf) 6 racks 3/30/05, replaced with BG/P
  • Boston University 1 rack 2004
  • Brookhaven National Laboratory/SUNY at Stony Brook (NewYorkBlue) 18 racks 2007
  • Centre of Excellence for Applied Research and Training (CERT, UAE) 1 rack 09/06
  • CERFACS 1 rack 07/07
  • Council for the Central Laboratory of the Research Councils (CCLRC) 2 racks 1/07
  • DIAS at HEANet 2 racks 2008
  • Ecole Polytechnique Federale de Lausanne (EPFL, Henry Markram) 4 racks 06/05
  • Electricite de France (EDF), France 4 racks 10/06
  • Forschungszentrum Jülich GmbH (Thomas Lippert) 8 racks 12/05
  • Harvard University (geophysics, computational chemistry) 2 racks 06/06
  • IBM Yorktown Research Center (BGW) 20 racks 05/05
  • IBM Almaden Research Center 2 racks 03/05
  • IBM Zürich Research Lab 2 racks 03/05
  • Indian Institute of Science (IISc), Bangalore 4 racks 9/07
  • Iowa State University (Srinivas Aluru, genome classification) 1 rack 12/05
  • Karolinska Institutet (neuroscience) 1 rack 1/07
  • KEK, High Energy Accelerator Research Org. (Shoji Hashimoto) 10 racks 03/01/06
  • Lawrence Livermore National Laboratory (Mark Seager) 105 racks 09/05, 08/07
  • MIT (John Negele) 1 rack 09/05
  • NARSS/MCIT 1 rack 2007
  • National Center for Atmospheric Research (NCAR, Richard Loft) 1 rack 3/05
  • New Intelligent Workstation Systems Co. Ltd. (NIWS, Ikuo Suesada) 1 rack 1/05
  • Princeton University (Orangena) 1 rack 9/05
  • Rensselaer Polytechnic Institute (CCNI) 17 racks 5/07
  • RIKEN 1 rack 2007
  • San Diego Supercomputing Center (Wayne Pfeiffer) - Intimidata 3 racks 12/17/04, 11/06
  • University of Alabama, Birmingham 1 rack 2007
  • University of Canterbury, NZ (Blue Fern) 2 racks 2007
  • University of Edinburgh (Richard Kenway) 1 rack 12/04
  • University of Minnesota (Hormel Institute) 1 rack 2008
  • University of North Carolina, Chapel Hill (RENCI, Dan Reed) 2 racks 4Q06, 1Q07

  27. Blue Gene/P Customers
  • Argonne National Laboratory (Intrepid 40 racks, Surveyor 1 rack) 41 racks, 9 in 07, 32 in 08
  • ASTRON 3 racks 2008
  • Brookhaven/Stony Brook Consortium 2 racks 2007
  • Council for the Central Laboratory of the Research Councils (CCLRC) 1 rack 2007
  • CHPC, South Africa 1 rack 2008
  • Dassault 1 rack 2008
  • Dublin Institute for Advanced Studies (DIAS) on HEANet 1 rack 2007
  • Ecole Polytechnique Federale de Lausanne (EPFL, Henry Markram) 4 racks 07/09
  • Electricite de France (EDF), France 8 racks 2008
  • Forschungszentrum Jülich GmbH, JuGene (Thomas Lippert) 72 racks, 16 in 07, 16 in 08, 40 in 09
  • Fritz Haber Institute (IPP) 2 racks 2007
  • IBM On Demand Center (JEMTs) 4 racks 2008
  • IBM Yorktown Research Center (BGW) 4 racks 2008
  • IBM Zurich 1 rack 2008
  • ICT, Bulgaria 2 racks 2008
  • Institute for Development and Resources in Intensive Scientific Computing (IDRIS, France) / Laboratoire Bordelais de Recherche en Informatique (LABRI) 10 racks 2008
  • KAUST 16 racks 2009
  • Lawrence Livermore National Laboratory 38 racks 2009
  • Moscow State University, Russia 2 racks 2008
  • Oak Ridge National Laboratory (up to 16?) 2 racks 2007
  • RZG/Max-Planck-Gesellschaft/Fritz Haber Ins., IPP Institut für Plasma Physik 3 racks, 2 in 07, 1 in 08
  • Science & Technology Facilities Council (STFC) at Daresbury 1 rack 2007
  • Tata Institute of Fundamental Research (TIFR) in India 1 rack 2008
  • University of Rochester, NY 1 rack 2009
  Total BG/P racks: 220 (24 sites). Total BG/L racks: 232 (34 sites).
