### Climate Change Research Epochs

### Dr. Henry Tufo and myself with “frost” (2005)

An Inconvenient Question: Are We Going to Get the Algorithms and Computing Technology We Need to Make Critical Climate Predictions in Time?

Rich Loft

Director, Technology Development

Computational and Information Systems Laboratory

National Center for Atmospheric Research

loft@ucar.edu

Main Points

- Nature of the climate system makes it a grand challenge computing problem.
- We are at a critical juncture: we need regional climate prediction capabilities!
- Computer clock/thread speeds are stalled: massive parallelism is the future of supercomputing.
- Our best algorithms, parallelization strategies and architectures are inadequate to the task.
- We need model acceleration improvements in all three areas if we are to meet the challenge.

Options for Application Acceleration

- Scalability
  - Eliminate bottlenecks
  - Find more parallelism
  - Load-balancing algorithms
- Algorithmic Acceleration
  - Bigger timesteps: semi-Lagrangian transport; implicit or semi-implicit time integration (solvers)
  - Fewer points: adaptive mesh refinement methods
- Hardware Acceleration
  - More threads: CMP, GP-GPUs
  - Faster threads: device innovations (high-k)
  - Smarter threads: architecture – old tricks, new tricks… magic tricks; vector units, GPUs, FPGAs

A Very Grand Challenge: Coupled Models of the Earth System

[Figure: a ~150 km model grid cell with its air column and water column; Viner (2002)]

Typical Model Computation:

- 15-minute time steps
- ~1 petaflop per model year
- 3.5 million timesteps in a century
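A quick back-of-the-envelope check of the timestep count (a minimal sketch in Python; the petaflop figure is quoted from the slide, not derived here):

```python
# Check the "~3.5 million timesteps per century" figure from a 15-minute step.
STEP_MINUTES = 15
steps_per_day = 24 * 60 // STEP_MINUTES                 # 96 steps per day
steps_per_century = steps_per_day * 365 * 100           # ignoring leap days
print(f"{steps_per_century:,} timesteps per century")   # 3,504,000
```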

Multicomponent Earth System Model

[Diagram: a coupler linking the atmosphere, ocean, land, and sea-ice components, plus carbon/nitrogen cycle, dynamic vegetation, land use, ecosystem & BGC, gas chemistry, prognostic aerosols, upper atmosphere, and ice sheets]

- Software Challenges:
- Increasing Complexity
- Validation and Verification
- Understanding the Output

Key concept: A flexible coupling framework is critical!

- IPCC AR4: “Warming of the climate system is unequivocal” …
- …and it is “very likely” caused by human activities.
- Most of the observed changes over the past 50 years are now simulated by climate models, adding confidence to future projections.
- Model resolutions: O(100 km)

Before IPCC AR4 (curiosity driven): reproduce historical trends, investigate climate change, run IPCC scenarios.

After IPCC AR4 (policy driven): assess regional impacts, simulate adaptation strategies, simulate geoengineering solutions.

Where we want to go: The Exascale Earth System Model Vision

Coupled Ocean-Land-Atmosphere Model

- Atmosphere: ~1 km x ~1 km (cloud-resolving), 100 levels, whole atmosphere, unstructured adaptive grids
- Land: ~100 m, 10 levels, landscape-resolving
- Ocean: ~10 km x ~10 km (eddy-resolving), 100 levels, unstructured adaptive grids

Requirement: computing power enhancement by as much as a factor of 10¹⁰–10¹²
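One way to see where a factor of that size comes from: refining the horizontal grid by a factor r gives roughly r² more columns and, via the Courant limit, about r times more timesteps, so cost grows like r³ before extra vertical levels, richer physics, or ensembles are counted. A hedged sketch (the multipliers beyond r³ are illustrative assumptions, not numbers from the talk):

```python
# Rough cost multiplier for refining the horizontal grid by a factor r:
# r^2 more columns and ~r more timesteps (Courant limit) give r^3, then
# optional multipliers for vertical levels, physics complexity, and ensembles.
def cost_factor(r, levels=1.0, physics=1.0, ensemble=1.0):
    return r**3 * levels * physics * ensemble

r = 100.0  # ~100 km -> ~1 km atmosphere
print(f"refinement alone: ~{cost_factor(r):.0e}x")                       # ~1e+06x
print(f"with 10x levels, 100x physics, 10x ensemble: "
      f"~{cost_factor(r, levels=10, physics=100, ensemble=10):.0e}x")     # ~1e+10x
```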

ESSL - The Earth & Sun Systems Laboratory

YIKES!

Compute Factors for ultra-high resolution Earth System Model

(courtesy of John Drake, ORNL)

Why High Resolution in the Ocean?

[Figure: 1° ocean component of CCSM (Collins et al., 2006) vs. 0.1° eddy-resolving POP (Maltrud & McClean, 2005)]

Performance Improvements are not coming fast enough!

…suggests a 10¹⁰ to 10¹² improvement will take ~40 years

ITRS Roadmap: feature size dropping 14%/year

By 2050 it reaches the size of an atom – oops!
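Two quick sanity checks of those projections (a sketch; the assumed starting values, performance doubling every ~1.2 years and a 65 nm process in 2007, are my own, not the slide's):

```python
import math

DOUBLING_YEARS = 1.2                      # assumed aggregate performance doubling time
for factor in (1e10, 1e12):
    print(f"{factor:.0e}x improvement: ~{math.log2(factor) * DOUBLING_YEARS:.0f} years")
# -> roughly 40 and 48 years

FEATURE_NM, START_YEAR, SHRINK = 65.0, 2007, 0.86   # 14%/year feature-size reduction
ATOM_NM = 0.1                                       # order of an atomic radius
years = math.log(ATOM_NM / FEATURE_NM) / math.log(SHRINK)
print(f"feature size reaches ~{ATOM_NM} nm around {START_YEAR + years:.0f}")  # ~2050
```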

National Security Agency: "The power consumption of today's advanced computing systems is rapidly becoming the limiting factor with respect to improved/increased computational ability."

Chip Level Trends: Stagnant Clock Speed

- Chip density is continuing to increase ~2x every 2 years
- Clock speed is not
- The number of cores is doubling instead
- There is little or no additional hidden parallelism (ILP) to be found
- Parallelism must be exploited by software

Source: Intel, Microsoft (Sutter) and Stanford (Olukotun, Hammond)

Moore’s Law -> More’s Law: Speed-up through increasing parallelism

How long can we double the number of cores per chip?

NCAR and the University of Colorado Partner to Experiment with Blue Gene/L

- Characteristics:
- 2048 Processors/5.7 TF
- PPC 440 (750 MHz)
- Two processors/node
- 512 MB memory per node
- 6 TB file system

Current high resolution CCSM runs

- 0.25° ATM,LND + 0.1° OCN,ICE [ATLAS/LLNL]
  - 3280 processors
  - 0.42 simulated years per day (SYPD)
  - 187K CPU-hours/year
- 0.5° ATM,LND + 0.1° OCN,ICE [FRANKLIN/NERSC]
  - Current: 5416 processors, 1.31 SYPD, 99K CPU-hours/year
  - "Efficiency goal": 4932 processors, 1.80 SYPD, 66K CPU-hours/year
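The CPU-hours figures follow directly from the processor counts and throughput; a minimal check of the arithmetic:

```python
# CPU-hours per simulated year = processors * 24 wall-clock hours / (simulated years per day)
def cpu_hours_per_sim_year(nprocs, sypd):
    return nprocs * 24 / sypd

runs = [("0.25 deg [ATLAS/LLNL]", 3280, 0.42),
        ("0.5 deg current [FRANKLIN]", 5416, 1.31),
        ("0.5 deg efficiency goal", 4932, 1.80)]
for label, nprocs, sypd in runs:
    print(f"{label}: ~{cpu_hours_per_sim_year(nprocs, sypd) / 1e3:.0f}K CPU-hours/year")
# -> ~187K, ~99K, ~66K, matching the slide
```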

Current 0.5° CCSM "fuel efficient" configuration [franklin]

[Diagram: component layout on 5416 processors – ATM (np=1664), OCN (np=3600), CPL (np=384), LND (np=16), ICE (np=1800); component times of 120, 91, 52, and 21 sec]

Efficiency issues in the current 0.5° CCSM configuration: use space-filling curves (SFC) in POP to reduce the processor count by 13%.

[Diagram: the same 5416-processor layout – ATM (np=1664), OCN (np=3600), CPL (np=384), LND (np=16), ICE (np=1800)]

Load Balancing: Partitioning with Space Filling Curves

[Figures: an example partition for 3 processors; space-filling-curve partitioning of the ocean model on 8 processors (static load balancing)]

Key concept: no need to compute over land!
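A minimal sketch of the idea, with a simple Morton (Z-order) curve standing in for the curve actually used in POP and a toy land mask (both are illustrative assumptions): order the ocean blocks along the curve, drop land-only blocks entirely, and cut the curve into near-equal pieces.

```python
# Sketch: partition ocean blocks along a space-filling curve, skipping land-only blocks.
def morton(i, j, bits=8):
    """Interleave the bits of (i, j) to get a Z-order (Morton) curve index."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return key

def partition_ocean(blocks, is_ocean, nprocs):
    """Assign ocean blocks (land blocks are dropped) to nprocs in curve order."""
    ocean = sorted((b for b in blocks if is_ocean(b)), key=lambda b: morton(*b))
    chunk = -(-len(ocean) // nprocs)                 # ceiling division
    return [ocean[p * chunk:(p + 1) * chunk] for p in range(nprocs)]

# Toy 8x8 block grid with a land "continent" in one corner (illustrative only).
blocks = [(i, j) for i in range(8) for j in range(8)]
parts = partition_ocean(blocks, lambda b: not (b[0] < 3 and b[1] < 3), nprocs=8)
print([len(p) for p in parts])   # near-equal ocean-block counts, no work assigned to land
```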

Ocean Model 1/10 Degree performance

Key concept: You need routine access to > 1k procs to discover true scaling behaviour!

Efficiency issues in the current 0.5° CCSM configuration: use weighted SFC (wSFC) in CICE to reduce execution time by 2x.

[Diagram: the same 5416-processor layout – ATM (np=1664), OCN (np=3600), CPL (np=384), LND (np=16), ICE (np=1800)]

Small domains at high latitudes

Static, Weighted Load Balancing Example: Sea Ice Model CICE4 @ 1° on 20 processors (courtesy of John Dennis)
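A hedged sketch of the weighted variant: each block carries a weight (here an illustrative stand-in for expected sea-ice work concentrated at high latitudes, not CICE's actual cost model), and the curve is cut so that the summed weight per processor, rather than the block count, is balanced.

```python
# Sketch: cut a curve-ordered list of blocks so each processor gets ~equal total weight.
def weighted_partition(ordered_blocks, weight, nprocs):
    target = sum(weight(b) for b in ordered_blocks) / nprocs
    parts, proc, assigned = [[] for _ in range(nprocs)], 0, 0.0
    for b in ordered_blocks:
        if assigned >= target * (proc + 1) and proc < nprocs - 1:
            proc += 1                    # move to the next processor once its share is met
        parts[proc].append(b)
        assigned += weight(b)
    return parts

def ice_weight(block):
    """Illustrative cost weight: most sea-ice work sits poleward of ~50 degrees."""
    lat = -90 + 180 * (block[1] + 0.5) / 20
    return 1.0 if abs(lat) > 50 else 0.05

blocks = [(i, j) for j in range(20) for i in range(20)]    # blocks in curve order (toy)
parts = weighted_partition(blocks, ice_weight, nprocs=20)
print([round(sum(ice_weight(b) for b in p), 1) for p in parts])  # near-equal weights
```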

Efficiency issues in the current 0.5° CCSM configuration: coupler – unresolved scalability issues in the coupler. Options: better interconnect, nested grids, PGAS language paradigm.

[Diagram: the same 5416-processor layout – ATM (np=1664), OCN (np=3600), CPL (np=384), LND (np=16), ICE (np=1800)]

Efficiency issues in the current 0.5° CCSM configuration: atmospheric component – scalability limitation in 0.5° fv-CAM [MPI]; shift to a hybrid OpenMP/MPI version.

[Diagram: the same 5416-processor layout – ATM (np=1664), OCN (np=3600), CPL (np=384), LND (np=16), ICE (np=1800)]

Projected 0.5° CCSM "capability" configuration: 3.8 years/day. Action: run the hybrid atmospheric model.

[Diagram: component layout on 19460 processors – ATM (np=5200), OCN (np=6100), CPL (np=384), LND (np=40), ICE (np=8120); component times of 62, 31, 21, and 10 sec]

Projected 0.5° CCSM "capability" configuration, version 2: 3.8 years/day. Action: thread the ice model.

[Diagram: the same component layout on 14260 processors – ATM (np=5200), OCN (np=6100), CPL (np=384), LND (np=40), ICE (np=8120)]

[Figure: cube-sphere grid, showing the degree of non-uniformity]

Scalable Geometry Choice: Cube-Sphere

- The sphere is decomposed into 6 identical regions using a central projection (Sadourny, 1972) with an equiangular grid (Rancic et al., 1996).
- Avoids pole problems; quasi-uniform.
- Non-orthogonal curvilinear coordinate system with identical metric terms.

Scalable Numerical Method: High-Order Methods

- Algorithmic advantages of high-order methods
  - h-p element-based method on quadrilaterals (Ne x Ne)
  - Exponential convergence in polynomial degree (N)
- Computational advantages of high-order methods
  - Naturally cache-blocked N x N computations
  - Nearest-neighbor communication between elements (explicit)
  - Well suited to parallel microprocessor systems

HOMME: Computational Mesh

- Elements:
  - A quadrilateral "patch" of N x N gridpoints
  - Gauss-Lobatto grid
  - Typically N = 4–8
- Cube:
  - Ne = elements on an edge
  - 6 x Ne x Ne elements total
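For concreteness, the element and gridpoint counts these parameters imply (a small sketch; the equatorial-spacing estimate is a rough rule of thumb, and the point count ignores the sharing of element edges):

```python
# Element and gridpoint counts for a cubed-sphere HOMME mesh.
def homme_mesh(ne, n):
    """ne: elements along a cube edge; n: Gauss-Lobatto points along an element edge."""
    elements = 6 * ne * ne
    points = elements * n * n                  # per-element points; edge points counted twice
    dx_km = 40075.0 / (4 * ne * (n - 1))       # rough spacing between unique points at the equator
    return elements, points, dx_km

for ne, n in [(30, 4), (120, 4)]:              # illustrative configurations
    e, p, dx = homme_mesh(ne, n)
    print(f"Ne={ne}, N={n}: {e} elements, {p:,} points, ~{dx:.0f} km equatorial spacing")
# Ne=30 corresponds to roughly 1-degree resolution, Ne=120 to roughly 0.25 degrees.
```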

Aqua-Planet CAM/HOMME Dycore

Full CAM Physics/HOMME Dycore

Parallel I/O library used for physics aerosol input and input data (this work could not have been done without parallel I/O)

Work underway to couple to other CCSM components

5 years/day

Projected 0.25° CCSM "capability" configuration, version 2: 4.0 years/day. Action: insert the scalable atmospheric dycore.

[Diagram: component layout on 30000 processors – HOMME ATM (np=24000), OCN (np=6000), CPL (np=3840), LND (np=320), ICE (np=16240); component times of 60, 47, 8, and 5 sec]

Using a bigger parallel machine can’t be the only answer

- Progress in the Top 500 list is not fast enough
- Amdahl's Law is a formidable opponent
- The dynamical timestep goes like N⁻¹
  - The merciless effect of the Courant limit
- The cost of dynamics relative to physics increases as N
  - e.g., if dynamics takes 20% of the time at 25 km, it will take 86% at 1 km (see the sketch after this list)
- Traditional parallelization of the horizontal leaves an N² per-thread cost (vertical x horizontal)
  - Must inevitably slow down with stalled thread speeds
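A small check of the dynamics-versus-physics arithmetic in the list above (the 20% starting point is from the slide; the assumption is that per-column physics cost is roughly resolution-independent while per-column dynamics cost grows linearly with the refinement factor because of the Courant limit):

```python
# Fraction of runtime in dynamics after refining the horizontal grid by a factor r,
# assuming constant per-column physics cost and per-column dynamics cost growing ~r.
def dynamics_fraction(frac_now, r):
    ratio = frac_now / (1.0 - frac_now)   # dynamics/physics cost ratio at current resolution
    ratio *= r                            # dynamics grows r times faster than physics
    return ratio / (1.0 + ratio)

r = 25.0 / 1.0                            # 25 km -> 1 km
print(f"{dynamics_fraction(0.20, r):.0%}")  # ~86%, matching the slide
```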

Options for Application Acceleration

- Scalability
  - Eliminate bottlenecks
  - Find more parallelism
  - Load-balancing algorithms
- Algorithmic Acceleration
  - Bigger timesteps: semi-Lagrangian transport; implicit or semi-implicit time integration (solvers)
  - Fewer points: adaptive mesh refinement methods
- Hardware Acceleration
  - More threads: CMP, GP-GPUs
  - Faster threads: device innovations (high-k)
  - Smarter threads: architecture – old tricks, new tricks… magic tricks; vector units, GPUs, FPGAs

Accelerator Research

- Graphics cards – NVIDIA 9800 / CUDA
  - Measured 109x speed-up on WRF microphysics on a 9800GX2
- FPGA – Xilinx (data-flow model)
  - 21.7x simulated speed-up on short-wave radiation code
- IBM Cell processor – 8 cores
- Intel Larrabee

DG + NH + AMR (discontinuous Galerkin, non-hydrostatic, adaptive mesh refinement)

- Curvilinear elements
- Overhead of parallel AMR at each time-step: less than 1%

Idea based on Fischer, Kruse, and Loth (2002)

Courtesy of Amik St. Cyr

SLIM ocean model

- Louvain-la-Neuve University
- DG, implicit, AMR, unstructured
- To be coupled to a prototype unstructured ATM model

(Courtesy of J.-F. Remacle, LNU)

NCAR Summer Internships in Parallel Computational Science (SIParCS), 2007–2008

- Open to:
- Upper division undergrads
- Graduate students
- In Disciplines such as:
- CS, Software Engineering
- Applied Math, Statistics
- Earth System science
- Support:
- Travel, Housing, Per diem
- 10 weeks salary
- Number of interns selected:
- 7 in 2007
- 11 in 2008

http://www.cisl.ucar.edu/siparcs

Contributors: D. Bailey (NCAR), F. Bryan (NCAR), T. Craig (NCAR), A. St. Cyr (NCAR), J. Dennis (NCAR), J. Edwards (IBM), B. Fox-Kemper (MIT, CU), E. Hunke (LANL), B. Kadlec (CU), D. Ivanova (LLNL), E. Jedlicka (ANL), E. Jessup (CU), R. Jacob (ANL), P. Jones (LANL), S. Peacock (NCAR), K. Lindsay (NCAR), W. Lipscomb (LANL), R. Loy (ANL), J. Michalakes (NCAR), A. Mirin (LLNL), M. Maltrud (LANL), J. McClean (LLNL), R. Nair (NCAR), M. Norman (NCSU), T. Qian (NCAR), M. Taylor (SNL), H. Tufo (NCAR), M. Vertenstein (NCAR), P. Worley (ORNL), M. Zhang (SUNYSB)

Funding:

- DOE-BER CCPP Program Grants DE-FC03-97ER62402, DE-PS02-07ER07-06, DE-FC02-07ER64340; B&R KP1206000
- DOE-ASCR B&R KJ0101030
- NSF Cooperative Grant NSF01
- NSF PetaApps Award

Computer Time:

- Blue Gene/L time: NSF MRI Grant, NCAR, University of Colorado, IBM (SUR) program, BGW Consortium Days, IBM Research (Watson), LLNL, Stony Brook & BNL
- Cray XT3/4 time: ORNL, Sandia

The Size of the Interdisciplinary/Interagency Team Working on Climate Scalability

Q. If you had a petascale computer, what would you do with it?

A. Use it as a prototype of an exascale computer.
