Observations of an Accidental Computational Scientist



Presentation Transcript

Observations of an Accidental Computational Scientist
SIAM/NSF/DOE CSME Workshop
25 March 2003
David Keyes
Department of Mathematics & Statistics, Old Dominion University
and
Institute for Scientific Computing Research, Lawrence Livermore National Laboratory


Academic and lab backgrounds

  • 74-78: B.S.E., Aerospace and Mechanical/Engineering Physics

  • 78-84: M.S. & Ph.D., Applied Mathematics

  • 84-85: Post-doc, Computer Science

  • 86-93: Asst./Assoc. Prof., Mechanical Engineering

  • 93-99: Assoc. Prof., Computer Science

  • 99-03: Prof., Mathematics & Statistics

  • 03- : Prof., Applied Physics & Applied Mathematics

  • 86-02: ICASE, NASA Langley

  • 99- : ISCR, Lawrence Livermore

  • 03- : CDIC, Brookhaven


Computational Science & Engineering

  • A “multidiscipline” on the verge of full bloom

    • Envisioned by von Neumann and others in the 1940s

    • Undergirded by theory (numerical analysis) for the past fifty years

    • Empowered by spectacular advances in computer architecture over the last twenty years

    • Enabled by powerful programming paradigms in the last decade

  • Adopted in industrial and government applications

    • Boeing 777’s computational design a renowned milestone

    • DOE NNSA’s “ASCI” (motivated by CTBT)

    • DOE SC’s “SciDAC” (motivated by Kyoto, etc.)


Niche for computational science

  • Has theoretical aspects (modeling)

  • Has experimental aspects (simulation)

  • Unifies theory and experiment by providing a common immersive environment for interacting with multiple data sets from different sources

  • Provides “universal” tools, both hardware and software

    Telescopes are for astronomers, microarray analyzers are for biologists, spectrometers are for chemists, and accelerators are for physicists, but computers are for everyone!

  • Costs going down, capabilities going up every year


Simulation complements experimentation

[Diagram: "Scientific Simulation" at the center, motivated by experiments that are prohibited or impossible, difficult to instrument, dangerous, or expensive; surrounding application areas with the speaker's personal examples: Environment (global climate, wildland firespread: Ex #1), Engineering (electromagnetics, aerodynamics: Ex #2), Physics (cosmology, radiation transport: Ex #3), Energy (combustion, fusion: Ex #4)]



“It looks as if all of Colorado is burning” – Bill Owens, Governor

“About half of the U.S. is in altered fire regimes” – Ron Myers, Nature Conservancy

Example #1: wildland firespread

Simulate fires at the wildland-urban interface, leading to strategies for planning preventative burns, fire control, and evacuation

Joint work between ODU, CMU, Rice, Sandia, and TRW


Example #1: wildland firespread, cont.

  • Objective

    Develop mathematical models for tracking the evolution of wildland fires and the capability to fit the model to fires of different character (fuel density, moisture content, wind, topography, etc.)

  • Accomplishment to date

    Implemented firefront propagation using a level set method with an empirical front-advance function; working with firespread experts to “tune” the resulting model (a minimal level-set sketch follows this slide)

  • Significance

    Wildland fires cost many lives and billions of dollars annually; other fire models pursued at national labs are more detailed, but too slow to be used in real time; one of our objectives is to offer practical tools to firechiefs in the field
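
The level set formulation referred to above can be illustrated compactly. The following is a minimal sketch only (the grid, the constant front speed F, and all numerical choices are illustrative assumptions, not the project's model): the firefront is the zero contour of a field φ advanced by φₜ + F|∇φ| = 0.

```python
# Minimal level-set firefront sketch (illustrative only; the empirical
# front-advance speed F and the grid are hypothetical stand-ins).
import numpy as np

n, h, dt = 200, 1.0, 0.4                    # grid points, spacing, time step
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.hypot(X - 50.0, Y - 50.0) - 10.0   # signed distance to an initial circular front

def advance(phi, F=1.0, steps=100):
    """Advance phi by phi_t + F*|grad phi| = 0 with a first-order upwind scheme."""
    for _ in range(steps):
        # one-sided differences in each direction
        dxm = (phi - np.roll(phi, 1, axis=0)) / h
        dxp = (np.roll(phi, -1, axis=0) - phi) / h
        dym = (phi - np.roll(phi, 1, axis=1)) / h
        dyp = (np.roll(phi, -1, axis=1) - phi) / h
        # Godunov upwinding for an outward-moving front (F > 0)
        grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                       np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
        phi = phi - dt * F * grad
    return phi

phi = advance(phi)
burned = phi <= 0                            # cells inside the firefront after 100 steps
print(burned.sum(), "cells burned")
```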


Example #2: aerodynamics

Simulate airflows over wings and streamlined bodies on highly resolved grids leading to superior aerodynamic design

1999 Gordon Bell Prize

Joint work between ODU, Argonne, LLNL, and NASA-Langley


Example #2: aerodynamics, cont.

  • Objective

    Develop analysis and optimization capability for compressible and incompressible external aerodynamics

  • Accomplishment to date

    Developed highly parallel nonlinear implicit solvers (Newton-Krylov-Schwarz) for unstructured grid CFD, implemented in PETSc, demonstrated on a “workhorse” NASA code running on the ASCI machines (up to 6,144 processors); a sketch of the Jacobian-free Newton-Krylov idea follows this slide

  • Significance

    Windtunnel tests of aerodynamic bodies are expensive and difficult to instrument; computational simulation and optimization (as for the Boeing 777) will greatly reduce the engineering risk of developing new fuel-efficient aircraft, cars, etc.
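
The actual solvers are PETSc's Newton-Krylov-Schwarz implementation; what follows is only a sketch of the Jacobian-free Newton-Krylov idea on a toy 1D problem (the residual, tolerances, and the absence of a Schwarz preconditioner are all simplifications), showing how the Jacobian action is approximated by finite-differencing the residual inside a Krylov (GMRES) solve.

```python
# Jacobian-free Newton-Krylov sketch (illustrative; the residual below is a
# toy 1D nonlinear problem, not the CFD residual of the actual application).
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Toy nonlinear residual: -u'' + u**3 = 1 on a uniform grid, u = 0 at the ends."""
    r = np.empty_like(u)
    r[1:-1] = -(u[2:] - 2*u[1:-1] + u[:-2]) + u[1:-1]**3 - 1.0
    r[0], r[-1] = u[0], u[-1]
    return r

def jfnk(u, tol=1e-8, max_newton=20):
    for _ in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        eps = 1e-7
        # Matrix-free Jacobian action: J v ~ (F(u + eps*v) - F(u)) / eps
        Jv = LinearOperator((u.size, u.size),
                            matvec=lambda v: (residual(u + eps*v) - F) / eps)
        du, _ = gmres(Jv, -F)        # inner Krylov (GMRES) solve
        u = u + du                   # (globalization, e.g. line search, omitted)
    return u

u = jfnk(np.zeros(101))
print("converged residual norm:", np.linalg.norm(residual(u)))
```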


Example #3: radiation transport

Simulate “flux-limited diffusion” transport of radiative energy in inhomogeneous materials

Joint work between ODU, ICASE, and LLNL


Example #3: radiation transport, cont.

  • Objective

    Enhance accuracy and reliability of analysis methods used in the simulation of radiation transport in real materials

  • Accomplishment to date

    Leveraged expertise and software (PETSc) developed for aerodynamics simulations in a related physical application domain, also governed by nonlinear PDEs discretized on unstructured grids, where such methods were less developed

  • Significance

    Under current stockpile stewardship policies, DOE must be able to reliably predict the performance of high-energy devices without full-scale physical experiments


Example #4: fusion energy

Simulate plasmas in tokamaks, leading to understanding of plasma instability and (ultimately) new energy sources

Joint work between ODU, Argonne, LLNL, and PPPL


Example #4: fusion energy, cont.

  • Objective

    Improve efficiency and therefore extend predictive capabilities of Princeton’s leading magnetic fusion energy code “M3D” to enable it to operate in regimes where practical sustained controlled fusion occurs

  • Accomplishment to date

    Augmented the original code’s implicit linear solver (taking up to 90% of execution time) with parallel algebraic multigrid; the new solvers are much faster and more robust, and should scale better to the finer mesh resolutions required for M3D (an AMG illustration follows this slide)

  • Significance

    An M3D-like code will be used in DOE’s Integrated Simulation and Optimization of Fusion Systems and ITER collaborations, with the goal of delivering cheap, safe fusion energy devices by the early-to-mid 21st century
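
As an illustration of what “augmenting the implicit linear solver with algebraic multigrid” means in practice, here is a minimal serial sketch assuming the optional PyAMG package and a model Poisson matrix; it is not the parallel solver stack used in M3D.

```python
# Algebraic multigrid as a drop-in replacement for a one-level solver
# (illustrative sketch using PyAMG on a model Poisson matrix, not M3D's operator).
import numpy as np
import pyamg

A = pyamg.gallery.poisson((200, 200), format="csr")   # model sparse operator
b = np.random.rand(A.shape[0])

ml = pyamg.ruge_stuben_solver(A)        # build a classical AMG hierarchy from A alone
x = ml.solve(b, tol=1e-10)              # V-cycles until the residual tolerance is met

print(ml)                               # summary of grid levels and operator complexity
print("residual:", np.linalg.norm(b - A @ x))
```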


We lead the “TOPS” project

U.S. DOE has created the Terascale Optimal PDE Simulations (TOPS) project within the Scientific Discovery through Advanced Computing (SciDAC) initiative; nine partners in this 5-year, $17M project, an “Integrated Software Infrastructure Center”


Toolchain for PDE Solvers in TOPS* project

[Diagram: solver components (Optimizer, Sensitivity Analyzer, Time integrator, Nonlinear solver, Eigensolver, Linear solver) connected by arrows indicating dependence]

  • Design and implementation of “solvers” (how these components nest is sketched after this slide)

    • Time integrators (w/ sens. anal.)

    • Nonlinear solvers (w/ sens. anal.)

    • Constrained optimizers

    • Linear solvers

    • Eigensolvers

  • Software integration

  • Performance optimization

*Terascale Optimal PDE Simulations: www.tops-scidac.org
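
The dependence arrows in the toolchain read as nested calls: an implicit time integrator invokes a nonlinear solver every step, which invokes a linear solver every Newton iteration. A minimal schematic of that nesting (all function names and the toy problem are illustrative, not TOPS interfaces):

```python
# Schematic of the solver toolchain as nested calls (names and problem illustrative only).
import numpy as np

def linear_solve(A, b):
    """Innermost component: linear solver (here a dense direct solve)."""
    return np.linalg.solve(A, b)

def nonlinear_solve(F, J, u, tol=1e-10):
    """Nonlinear solver: Newton's method, calling the linear solver each iteration."""
    for _ in range(50):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        u = u + linear_solve(J(u), -r)
    return u

def time_integrate(f, dfdu, u0, dt, nsteps):
    """Time integrator: backward Euler, calling the nonlinear solver each step."""
    u = u0
    for _ in range(nsteps):
        un = u
        F = lambda v: v - un - dt * f(v)              # implicit-step residual
        J = lambda v: np.eye(v.size) - dt * dfdu(v)   # its Jacobian
        u = nonlinear_solve(F, J, un)
    return u

# Example: a stiff scalar ODE u' = -1000*(u - 1), treated as a 1-vector
f    = lambda u: -1000.0 * (u - 1.0)
dfdu = lambda u: np.array([[-1000.0]])
print(time_integrate(f, dfdu, np.array([0.0]), dt=0.1, nsteps=10))
```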


SciDAC apps and infrastructure

  • 17 projects in scientific software and network infrastructure

  • 4 projects in high energy and nuclear physics

  • 14 projects in biological and environmental research

  • 10 projects in basic energy sciences

  • 5 projects in fusion energy science


Optimal solvers

[Plot: time to solution and iteration count vs. problem size (increasing with the number of processors, 1 to 1000); an unscalable solver’s iterations and time grow with problem size, while a scalable solver’s remain nearly flat]

  • Convergence rate nearly independent of discretization parameters

    • Multilevel schemes for linear and nonlinear problems

    • Newton-like schemes for quadratic convergence of nonlinear problems

AMG shows nearly perfect iteration scaling, in contrast to ASM, but still needs performance work to achieve temporal scaling on the CEMM fusion code M3D, though time to solution is halved (or better) for large runs (all runs: 4K dofs per processor); a serial multigrid V-cycle sketch follows this slide
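
The “multilevel schemes” behind this mesh-independent convergence can be sketched in a few lines: a serial geometric multigrid V-cycle for a 1D model Poisson problem (illustrative only; the production runs above use parallel algebraic multigrid), whose residual reduction per cycle stays roughly constant as the grid is refined.

```python
# Geometric multigrid V-cycle sketch for -u'' = f on a 1D grid
# (illustrative; the production runs used parallel *algebraic* multigrid).
import numpy as np

def relax(u, f, h, sweeps=2):
    for _ in range(sweeps):                     # weighted Jacobi smoothing
        u[1:-1] += 0.67 * (0.5 * (u[2:] + u[:-2] + h*h*f[1:-1]) - u[1:-1])
    return u

def vcycle(u, f, h):
    if u.size <= 3:                             # coarsest grid: relax to convergence
        return relax(u, f, h, sweeps=50)
    u = relax(u, f, h)                          # pre-smoothing
    r = np.zeros_like(u)                        # residual r = f + u''
    r[1:-1] = f[1:-1] + (u[2:] - 2*u[1:-1] + u[:-2]) / (h*h)
    rc = r[::2].copy()                          # restrict by full weighting
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = vcycle(np.zeros_like(rc), rc, 2*h)     # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec                                 # prolong: copy coarse points...
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])        # ...and interpolate in between
    return relax(u + e, f, h)                   # post-smoothing

n = 2**10 + 1                                   # fine grid with nested coarse grids
h = 1.0 / (n - 1)
f = np.ones(n)
u = np.zeros(n)
for it in range(12):
    u = vcycle(u, f, h)
    res = np.abs(f[1:-1] + (u[2:] - 2*u[1:-1] + u[:-2]) / (h*h)).max()
    print(it, res)                              # residual drops by a roughly constant factor per cycle
```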


We have run on most ASCI platforms…

[Timeline: ASCI platform capability vs. calendar year, ’97 through ’06, with Plan / Develop / Use phases for each machine: Red (Sandia, 1+ Tflop/s, 0.5 TB), Blue (Livermore, 3+ Tflop/s, 1.5 TB), White (Livermore, 10+ Tflop/s, 4 TB), then 30+ Tflop/s / 10 TB, 50+ Tflop/s / 25 TB, and 100+ Tflop/s / 30 TB systems at Los Alamos and Livermore]

NNSA has a roadmap to reach 100 Tflop/s by 2006

www.llnl.gov/asci/platforms


…and now the SciDAC platforms

Berkeley

  • IBM Power3+ SMP

  • 16 procs per node

  • 208 nodes

  • 24 Gflop/s per node

  • 5 Tflop/s (doubled in February to 10)

Oak Ridge

  • IBM Power4 Regatta

  • 32 procs per node

  • 24 nodes

  • 166 Gflop/s per node

  • 4 Tflop/s (10 in 2003)


Computational Science at Old Dominion

  • Launched in 1993 as “High Performance Computing”

  • Keyes appointed ‘93; Pothen early ’94

  • Major projects:

    • NSF Grand, National, and Multidisciplinary Challenges (1995-1998) [w/ ANL, Boeing, Boulder, ND, NYU]

    • DoEd Graduate Assistantships in Areas of National Need (1995-2001)

    • DOE Accelerated Strategic Computing Initiative “Level 2” (1998-2001) [w/ ICASE]

    • DOE Scientific Discovery through Advanced Computing (2001-2006) [w/ ANL, Berkeley, Boulder, CMU, LBNL, LLNL, NYU, Tennessee]

    • NSF Information Technology Research (2001-2006) [w/ CMU, Rice, Sandia, TRW]


CS&E at ODU today

Center for Computational Science at ODU established 8/2001; new 80,000 sq ft building (for Math, CS, Aero, VMASC, CCS) opens 1/2004; finally getting local buy-in

ODU’s small program has placed five PhDs at DOE labs in the past three years


Post-doctoral and student alumni

[Diagram groups the alumni by project area: PETSc, CCA, TOPS, Comp Bio, and CS&E faculty]

  • David Hysom, LLNL

  • Florin Dobrian, ODU

  • Gary Kumfert, LLNL

  • Linda Stals, ANU

  • Dinesh Kaushik, ANL

  • Lois McInnes, ANL

  • Satish Balay, ANL

  • D. Karpeev, ANL

<Begin> “pontification phase”
Five models that allow CS&E to prosper

  • Laboratory institutes (hosted at a lab)

    ICASE, ISCR (more details to come)

  • National institutes (hosted at a university)

    IMA, IPAM

  • Interdisciplinary centers

    ASCI Alliances, SciDAC ISICs, SCCM, TICAM, CAAM, …

  • CS&E fellowship programs

    CSGF, HPCF

  • Multi-agency funding (cyclical to be sure, but sometimes collaborative)

    DOD, DOE, NASA, NIH, NSF, …


LLNL’s ISCR fosters collaborations with academe in computational science

  • Serves as the lab’s point of contact for computational science interests

  • Influences the external research community to pursue laboratory-related interests

  • Manages LLNL’s ASCI Institute collaborations in computer science and computational mathematics

  • Assists LLNL in technical workforce recruiting and training


ISCR’s philosophy: Science is borne by people

  • Be “eyes and ears” for LLNL by staying abreast of advances in computer and computational science

  • Be “hands and feet” for LLNL by carrying those advances into the laboratory

  • Three principal means for packaging scientific ideas for transfer

    • papers

    • software

    • people

  • People are the most effective!


ISCR brings visitors to LLNL through a variety of programs (FY 2002 data)

  • Seminars & Visitors: 180 visits from 147 visitors; 66 ISCR seminars

  • Summer Program: 43 grad students, 29 undergrads, 24 faculty

  • Postdocs & Faculty: 9 postdoctoral researchers, 3 faculty-in-residence

  • Workshops & Tutorials: 10 tutorial lectures, 6 technical workshops


ISCR is the largest of LLNL’s six institutes

  • Founded in 1986

  • Under current leadership since June 1999

ISCR has grown with LLNL’s increasing reliance on simulation as a predictive science


Our academic collaborators are drawn from all over

  • ASCI ASAP-1 Centers

    Caltech

    Stanford University

    University of Chicago

    University of Illinois

    University of Utah

  • Other Universities

    Carnegie Mellon

    Florida State University

    MIT

    Ohio State University

    Old Dominion University

    RPI

    Texas A&M University

    University of Colorado

    University of Kentucky

    University of Minnesota

    University of N. Carolina

    University of Tennessee

    University of Texas

    University of Washington

    Virginia Tech

    and more!

  • University of California

    Berkeley

    Davis

    Irvine

    Los Angeles

    San Diego

    Santa Barbara

    Santa Cruz

  • Major European Centers

    University of Bonn

    University of Heidelberg


Internships in Terascale Simulation Technology (ITST)

Students in residence hear lectures on a variety of computational science topics from enthusiastic members of lab divisions beyond their own mentors, including five authors* of recent computational science books

Lecturers: David Brown, Eric Cantu-Paz*, Alej Garcia*, Van Henson*, Chandrika Kamath, David Keyes, Alice Koniges*, Tanya Kostova, Gary Kumfert, John May*, Garry Rodrigue


ISCR pipelines people between the university and the laboratory

[Diagram: flow of students, faculty, and lab employees among Universities, the ISCR, and Lab programs]

  • Faculty visit the ISCR, bringing students

  • Most faculty return to the university, with lab priorities

  • Some students become lab employees

  • Some students become faculty, with lab priorities

  • A few faculty become lab employees

ISCR impact on DOE computational science hiring

  • 178 ISCR summer students in past five years (many repeaters)

  • 51 have by now emerged from the academic pipeline

  • 23 of these (~45%) are now working for the DOE

    • 15 LLNL

    • 3 each LANL and Sandia

    • 1 each ANL and BNL

  • 11 of these (~20%) are in their first academic appointment

    • In US: Duke, Stanford, U California, U Minnesota, U Montana, U North Carolina, U Pennsylvania, U Utah, U Washington

    • Abroad: Swiss Federal Institute of Technology (ETH), University of Toronto


ISCR sponsors and conducts meetings on timely topics for lab missions

  • Bay Area NA Day

  • Common Component Architecture

  • Copper Mountain Multigrid Conference

  • DOE Computational Science Graduate Fellows

  • Hybrid Particle-Mesh AMR Methods

  • Mining Scientific Datasets

  • Large-scale Nonlinear Problems

  • Overset Grids & Solution Technology

  • Programming ASCI White

  • Sensitivity and Uncertainty Quantification


We hosted a “Power Programming” short course to prepare LLNL for ASCI White

  • Steve White, IBM: ASCI White overview, POWER3 architecture, tuning for White

  • Larry Carter, UCSD/NPACI: designing kernels and data structures for scientific applications, cache and TLB issues

  • David Culler, UC Berkeley: understanding performance thresholds

  • Clint Whalley, U Tennessee: coding for performance

  • Bill Gropp, Argonne National Lab: MPI-1, Parallel I/O, MPI/OpenMP tradeoffs

65 internal attendees over 3 days


We launched the Terascale Simulation Lecture Series to receptive audiences

  • Fred Brooks, UNC

  • Ingrid Daubechies, Princeton

  • David Johnson, AT&T

  • Peter Lax, NYU

  • Michael Norman, UCSD

  • Charlie Peskin, NYU

  • Gil Strang, MIT

  • Burton Smith, Cray

  • Eugene Spafford, Purdue

  • Andries Van Dam, Brown


<Continue> “pontification phase”
Concluding swipes

  • A curricular challenge for CS&E programs

  • Signs of the times for CS&E

    • “Red skies at morning” (“sailors take warning”)

    • “Red skies at night” (“sailors delight”)

  • Opportunities in which CS&E will shine

  • A word to the sponsors


A curricular challenge

  • CS&E majors without a CS undergrad need to learn to compute!

  • Prerequisite or co-requisite to becoming useful interns at a lab

  • Suggest a “bootcamp” year-long course introducing:

    • C/C++ and object-oriented program design

    • Data structures for scientific computing

    • Message passing (e.g., MPI) and multithreaded (e.g., OpenMP) programming (a minimal MPI example follows this list)

    • Scripting (e.g., Python)

    • Linux clustering

    • Scientific and performance visualization tools

    • Profiling and debugging tools

  • NYU’s sequence G22.1133/G22.1144 is an example for CS
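
To make the message-passing item above concrete, here is the sort of minimal mpi4py exercise such a bootcamp could open with (a hypothetical example, not drawn from the ODU or NYU syllabi):

```python
# Minimal MPI example: each rank computes a partial sum, reduced to rank 0.
# Run with, e.g.:  mpiexec -n 4 python partial_sums.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1_000_000
local = sum(range(rank, n, size))            # each rank sums a strided slice of 0..n-1
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks, total = {total} (expected {n*(n-1)//2})")
```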


“Red skies at morning”

  • Difficult to get support for maintaining critical software infrastructure and “benchmarking” activities

  • Difficult to get support for hardware that is designed with computational science and engineering in mind

  • Difficult for pre-tenured faculty to find reward structures conducive to interdisciplinary efforts

  • Unclear, at the entrance to a 5-year pipeline, how stable the market for CS&E graduates will be at its exit

  • Political necessity of creating new programs with each change of administration saps the time and energy of managers and the community


“Red skies at night”

  • DOE’s SciDAC model being recognized and propagated

  • NSF’s DMS budgets on a multi-year roll

  • SIAM SIAG-CSE attracting members from outside of traditional SIAM departments

  • CS&E programs beginning to exhibit “centripetal” potential in traditionally fragmented research universities

    e.g., SCCM’s “Advice” program

  • Computing at the large scale is weaning domain scientists from “Numerical Recipes” and MATLAB and creating thirst for core enabling technologies (NA, CS, Viz, …)

  • Cost effectiveness of computing, especially cluster computing, is putting a premium on graduate students who have CS&E skills


Opportunity: nanoscience modeling

  • Jul 2002 report to DOE

  • Proposes a $5M/year theory and modeling initiative to accompany the existing $50M/year experimental initiative in nanoscience

  • Report lays out research in numerical algorithms and optimization methods on the critical path to progress in nanotechnology


Opportunity: integrated fusion modeling

  • Dec 2002 report to DOE

  • Currently DOE supports 52 codes in Fusion Energy Sciences

  • US contribution to ITER will “major” in simulation

  • Initiative proposes to use advanced computer science techniques and numerical algorithms to improve the US code base in magnetic fusion energy and allow codes to interoperate


A word to the sponsors

  • Don’t cut off the current good stuff to start the new stuff

  • Computational science & engineering workforce enters the pipeline from a variety of conventional inlets (disciplinary first, then interdisciplinary)

  • Personal debts:

    • NSF HSSRP in Chemistry (SDSU)

    • NSF URP in Computer Science (Brandeis) – precursor to today’s REU

    • NSF Graduate Fellowship in Applied Mathematics

    • NSF individual PI grants in George Lea’s computational engineering program – really built community (Benninghof, Farhat, Ghattas, C. Mavriplis, Parsons, Powell + many others active in CS&E at labs, agencies, and universities today) at NSF-sponsored PI meetings, long before there was any university support at all


Related URLs

  • Personal homepage: papers, talks, etc.

    http://www.math.odu.edu/~keyes

  • ISCR (including annual report) http://www.llnl.gov/casc/iscr

  • SciDAC initiative

    http://www.science.doe.gov/scidac

  • TOPS software project

    http://www.math.odu.edu/~keyes/scidac


The power of optimal algorithms

[Figure: Poisson’s equation ∇²u = f on a 64 × 64 × 64 grid; *on a 16 Mflop/s machine, six months is reduced to 1 s]

  • Advances in algorithmic efficiency rival advances in hardware architecture

  • Consider Poisson’s equation on a cube of size N = n³

  • If n=64, this implies an overall reduction in flops of ~16 million


Algorithms and Moore’s Law

[Plot: relative speedup vs. year]

  • This advance took place over a span of about 36 years, or 24 doubling times for Moore’s Law

  • 2²⁴ ≈ 16 million, the same as the factor from algorithms alone!
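
The arithmetic behind both factors is easy to check; the O(n⁷)-versus-O(n³) split below is the standard comparison of banded Gaussian elimination against full multigrid for this problem (the transcript itself gives only the resulting ratio):

```python
# Worked arithmetic behind the "~16 million" algorithm factor and the Moore's-law factor.
n = 64
algorithmic_factor = n**7 // n**3       # assumed split: O(n^7) banded elimination vs. O(n^3) full multigrid
print(algorithmic_factor)               # 16777216, i.e. ~16 million
print(2**24)                            # 16777216: 24 Moore's-law doublings give the same factor

# "Six months reduced to 1 s": if the optimal method takes 1 s,
# the unimproved one takes ~16.8 million s:
months = algorithmic_factor / (3600 * 24 * 30)
print(round(months, 1), "months")       # roughly 6.5 months
```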


The power of optimal algorithms

  • Since O(N) is already optimal, there is nowhere further “upward” to go in efficiency, but one must extend optimality “outward”, to more general problems

  • Hence, for instance, algebraic multigrid (AMG), obtaining O(N) in anisotropic, inhomogeneous problems

[Figure: AMG framework; error damped by pointwise relaxation, with coarse grids, transfer operators, etc. chosen, based on numerical weights and heuristics, to eliminate the remaining algebraically smooth error]
