4.3.1 Pioneering Applications: Priority Research Directions

Criteria for Consideration

  • Demonstrated need for Exascale
  • Significant Scientific Impact in: basic physics, environment, engineering, life sciences, materials
  • Realistic Productive Pathway (over 10 years) to Exploitation of Exascale

Summary of Barriers & Gaps

  • What will Pioneering Apps do to address the barriers & gaps in associated Priority Research Directions (PRDs)?

Potential Impact on Software

  • What new software capabilities will result?
  • What new methods and tools will be developed?

Potential impact on user community (usability, capability, etc.)

  • How will this realistically impact the research advances targeted by pioneering applications that may benefit from exascale systems?
  • What's the timescale in which that impact may be felt?


4.3.1 PIONEERING APPLICATIONS

Pioneering Applications with a demonstrated need for Exascale, a significant scientific impact on associated priority research directions (PRDs), and a productive pathway to exploitation of computing at the extreme scale.

[Roadmap figure: Science Milestones vs. available computing capability, 2010–2019 (1 PF → 10 PF → 100 PF → 1 EF). Milestones include single-hadron physics, multi-hadron physics, electroweak symmetry breaking, regional decadal climate, global coupled climate processes, integrated plasma core-edge simulations, whole-system burning plasma simulations, and further new capabilities.]

PIONEERING APPS
  • Technology drivers
    • Advanced architectures with greater capability but with formidable software development challenges
  • Alternative R&D strategies
    • Choosing architectural platform(s) capable of addressing the PRDs of Pioneering Apps on the path to exploiting Exascale
  • Recommended research agenda
    • Effective collaborative alliance between Pioneering Apps, CS, and Applied Math with an associated strong V&V effort
  • Crosscutting considerations
    • Identifying possible common areas of software development need among the Pioneering Apps
    • Addressing common need to attract, train, and assimilate young talent into this general research arena
4.3.1 Pioneering Applications: High Energy Physics

Key challenges

  • Achieving the highest possible sustained applications performance for the lowest cost
  • Exploiting architectures with imbalanced node performance and inter-node communications
  • Developing multi-layered algorithms and implementations to exploit on-chip (heterogeneous) capabilities, fast memory, and massive system parallelism
  • Tolerance to and recovery from system faults at all levels over long runtimes

Summary of research direction

  • The applications community will develop:
    • Multi-layer, multi-scale algorithms and implementations
    • Optimised single-core/single-chip complex linear algebra routines
    • Mixed-precision arithmetic for fast memory access and off-chip communications (see the sketch after this list)
    • Algorithms that tolerate hardware without error detection/correction
    • Verification of all components (algorithms, software, and hardware)
    • Data management and standards for shared data use
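As an illustration of the mixed-precision item above, the following is a minimal sketch (Python/NumPy; the names and the dense inner solve are illustrative assumptions, not from the slides) of iterative refinement: the expensive inner solve runs in single precision, roughly halving memory traffic and off-chip data volume, while residuals and corrections are accumulated in double precision so the final answer retains double-precision accuracy.

```python
import numpy as np

def solve_mixed_precision(A, b, iters=5):
    """Iterative refinement: single-precision inner solves,
    double-precision residual accumulation."""
    A32 = A.astype(np.float32)            # low-precision copy: half the memory traffic
    x = np.zeros_like(b)                  # double-precision solution accumulator
    for _ in range(iters):
        r = b - A @ x                     # residual computed in double precision
        # cheap inner solve in single precision (stand-in for an optimised kernel)
        dx = np.linalg.solve(A32, r.astype(np.float32))
        x += dx.astype(np.float64)        # correction applied in double precision
    return x

# small self-check on a well-conditioned random system
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant, well conditioned
b = rng.standard_normal(n)
x = solve_mixed_precision(A, b)
print(np.linalg.norm(A @ x - b))          # approaches double-precision accuracy
```

In production lattice-QCD codes the inner solve would typically be an optimised single-precision Krylov iteration on sparse operators rather than a dense factorisation, but the precision-splitting pattern is the same.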

Potential impact on software component

  • Generic software components required:
    • Performance analysis tools
    • Highly parallel, high-bandwidth I/O
    • Efficient compilers for multi-layered parallel algorithms targeting heterogeneous architectures
    • Automatic recovery from hardware/system errors (a minimal checkpoint/restart sketch follows this list)
    • Robust global file system and metadata standards
    • Stress testing and verification of exascale hardware and system software
    • Development of new stochastic and linear solver algorithms

Potential impact on usability, capability, and breadth of community

  • Reliable fault-tolerant massively parallel systems
  • Global data sharing and interoperability
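The "automatic recovery from hardware/system errors" component above usually starts from application-level checkpoint/restart. A minimal sketch (Python/NumPy; the file name, state layout, and step loop are illustrative assumptions, not from the slides):

```python
import os
import numpy as np

CHECKPOINT = "state_checkpoint.npz"        # hypothetical checkpoint file name

def run(steps_total=1000, checkpoint_every=100):
    # resume from the last checkpoint if one exists, otherwise start fresh
    if os.path.exists(CHECKPOINT):
        saved = np.load(CHECKPOINT)
        step, field = int(saved["step"]), saved["field"]
    else:
        step, field = 0, np.zeros(1_000_000)
    while step < steps_total:
        field += 1e-3                      # stand-in for one expensive timestep
        step += 1
        if step % checkpoint_every == 0:
            # write to a temporary file, then rename atomically, so a crash
            # mid-write can never corrupt the last good checkpoint
            np.savez(CHECKPOINT + ".tmp.npz", step=step, field=field)
            os.replace(CHECKPOINT + ".tmp.npz", CHECKPOINT)
    return field

if __name__ == "__main__":
    run()
```

At exascale this local write-and-rename pattern would be complemented by asynchronous and multi-level (node-local plus parallel-file-system) checkpointing, so that defensive I/O does not dominate the run time.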
4.3.1 Pioneering Applications: High Energy Physics
  • Technology drivers
    • massive parallelism
    • heterogeneous microprocessor architectures
  • Alternative R&D strategies
    • optimisation of computationally demanding kernels
    • algorithms targeting all levels of hardware parallelism
  • Recommended research agenda
    • co-design of hardware and software
  • Cross-cutting considerations
    • automated fault tolerance at all levels
    • multi-level algorithm implementation and optimisation

[Roadmap figure: Industrial challenges in the Oil & Gas industry – Depth Imaging roadmap. Algorithmic complexity vs. corresponding computing power, 1995–2020: from asymptotic-approximation and paraxial isotropic/anisotropic imaging, through isotropic/anisotropic modeling and RTM, elastic modeling/RTM and isotropic/anisotropic FWI, to visco-elastic modeling, elastic FWI, visco-elastic FWI and petro-elastic inversion. Sustained RTM performance for increasing frequency content over an 8-day processing run rises from ~56 TF (3–18 Hz) to ~900 TF (3–35 Hz) and ~9.5 PF (3–55 Hz); axes show relative algorithmic complexity (~0.1–1000, against a 10^15 flops reference) and HPC power at Pau (TF).]
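For context on why the depth-imaging roadmap climbs so steeply with frequency content: RTM and FWI both rest on repeated wave-equation propagations, and doubling the usable frequency roughly halves the grid spacing and the time step, multiplying the cost by about 8x in 2-D and 16x in 3-D. A minimal 2-D constant-velocity acoustic sketch (Python/NumPy; grid sizes, velocity, and source are illustrative assumptions, not an industrial kernel):

```python
import numpy as np

def acoustic_2d(nx=300, nz=300, nt=500, dx=10.0, dt=1e-3, v=2000.0):
    """Second-order (time and space) acoustic wave propagation on a uniform grid.
    Stability requires v*dt/dx <= 1/sqrt(2); here v*dt/dx = 0.2."""
    p_prev = np.zeros((nz, nx))
    p_curr = np.zeros((nz, nx))
    c2 = (v * dt / dx) ** 2                       # Courant number squared
    for it in range(nt):
        # 5-point Laplacian (periodic edges, good enough for a sketch)
        lap = (-4.0 * p_curr
               + np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0)
               + np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1))
        p_next = 2.0 * p_curr - p_prev + c2 * lap
        p_next[nz // 2, nx // 2] += np.sin(2 * np.pi * 15.0 * it * dt)  # 15 Hz source
        p_prev, p_curr = p_curr, p_next
    return p_curr

wavefield = acoustic_2d()
```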

High Performance Computing as a key enabler

[Roadmap figure, courtesy AIRBUS France: available computational capacity [Flop/s], 1980–2030, rising from 1 Giga (10^9) through 1 Tera (10^12), 1 Peta (10^15) and 1 Exa (10^18) towards 1 Zetta (10^21), against the CFD capability achievable in one overnight batch: high-speed RANS, low-speed RANS, unsteady RANS, then LES, with the number of overnight load cases run growing from ~10^2 to ~10^6 and a corresponding ~10^6-fold growth in data-set size. Capability milestones include CFD-based loads & handling qualities, aero optimisation & CFD-CSM, full MDO, CFD-based noise simulation, and real-time CFD-based in-flight simulation for high-speed design. "Smart" use of HPC power: algorithms, data mining, knowledge.]

Computational Challenges and Needs for Academic and Industrial Applications Communities (BACKUP)

Computations with smaller and smaller scales in larger and larger geometries → a better understanding of physical phenomena → more effective support for decision making → better optimisation of production (margin benefits)

Reactor thermal-hydraulics roadmap (LES approach for turbulence modelling, refined mesh near the wall):

  • 2003 – Consecutive thermal fatigue event: 10^6 cells, 3·10^13 operations; Fujitsu VPP 5000 (1 of 4 vector processors), 2-month computation; ~1 GB of storage, 2 GB of memory. Computations enable a better understanding of the wall thermal loading at an injection; knowing the root causes of the event → define a new design that avoids the problem.
  • 2006 – Part of a fuel assembly: 10^7 cells, 6·10^14 operations; IBM Power5 cluster, 400 processors, 9 days; ~15 GB of storage, 25 GB of memory.
  • 2007 – 3 grid assemblies: 10^8 cells, 10^16 operations; IBM Blue Gene/L, 20 Tflops for 1 month; ~200 GB of storage, 250 GB of memory.
  • 2010 – 9 fuel assemblies: 10^9 cells, 3·10^17 operations; 600 Tflops for 1 month; ~1 TB of storage, 2.5 TB of memory. No experimental approach exists so far; this will enable the study of side effects of the flow around neighbouring fuel assemblies and a better understanding of vibration phenomena and wear-out of the rods.
  • 2015 – The whole reactor vessel: 10^10 cells, 5·10^18 operations; 10 Pflops for 1 month; ~10 TB of storage, 25 TB of memory.

Main bottlenecks along the roadmap: pre-processing (mesh generation) is not parallelized, solver scalability, visualisation, and the raw power of the computer.
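A quick consistency check on the roadmap figures above (a sketch using the cell and operation counts quoted on the slide): the implied work per cell grows only from roughly 3·10^7 to 5·10^8 operations per study, so the total operation count is driven mainly by mesh size and by the longer, finer-stepped transients that finer meshes require.

```python
# cell counts and total floating-point operations quoted on the slide
roadmap = {
    2003: (1e6,  3e13),
    2006: (1e7,  6e14),
    2007: (1e8,  1e16),
    2010: (1e9,  3e17),
    2015: (1e10, 5e18),
}
for year, (cells, ops) in roadmap.items():
    print(year, f"operations per cell ~ {ops / cells:.0e}")
# prints roughly 3e+07, 6e+07, 1e+08, 3e+08, 5e+08
```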

IESP/Application Subgroup


From sequences to structures: HPC Roadmap

Computations using more and more sophisticated bio-informatics and physical modelling approaches → identification of protein structure and function.

  • 2009 (Grand Challenge GENCI/CCRT) – Improving the prediction of protein structure by coupling a new bio-informatics algorithm with massive molecular dynamics simulations [Proteins 69 (2007) 415]: 1 family, 5·10^3 CPUs for ~1 week; ~25 GB of storage, 500 GB of memory.
  • 2011 – Systematic identification of the biological partners of proteins: 1 family, 5·10^4 CPUs for ~1 week; ~5 TB of storage, 5 TB of memory.
  • 2015 and beyond – Identify all protein sequences using public resources and metagenomics data, with systematic modelling of the proteins belonging to each family (Modeller software): 1 family, ~10^4*KP CPUs for ~1 week; ~5*CSP TB of storage, 5*CSP TB of memory (CSP: number of proteins structurally characterized, ~10^4).


PIONEERING APPS: Fusion Energy Sciences

Criteria for Consideration

(1) Demonstrated need for Exascale

-- FES applications currently utilize the LCFs at ORNL and ANL, demonstrating scalability of key physics with increased computing capability

(2) Significant Scientific Impact: (identified at DOE Grand Challenges Workshop)

-- high physics fidelity integration of multi-physics, multi-scale FES dynamics

-- burning plasmas/ITER physics simulation capability

(3) Productive Pathway (over 10 years) to Exploitation of Exascale

-- the ability to carry out confinement simulations (including turbulence-driven transport) demonstrates that higher-physics-fidelity components can be included as computational capability increases

-- needed for both of the areas identified in (2) as priority research directions


PIONEERING APPS: Fusion Energy Sciences

Summary of Barriers & Gaps

(1) high physics fidelity integration of multi-physics, multi-scale FES dynamics

-- FES applications for macroscopic stability, turbulent transport, edge physics (where atomic processes are important), etc. have demonstrated, at various levels of efficiency, the capability to use the existing LCFs

-- Associated Barrier & Gap: need to integrate/couple improved versions of such large-scale simulations to produce an experimentally validated integrated simulation capability for scenario modeling of the whole device

(2) burning plasmas/ITER physics simulation capability

-- As FES enters a new era of burning plasma experiments at the reactor scale, capabilities are required for addressing the larger spatial scales and longer energy-confinement times

-- Associated Barrier & Gap: scales spanning the small gyroradius of the ions to the radial dimension of the plasmas will need to be addressed
  • an order of magnitude greater spatial resolution is needed to account for the larger plasmas of interest
  • a major increase is expected in the plasma energy confinement time (~1 second in the ITER device), together with the longer pulse of the discharges in these superconducting systems
  • this will demand simulations with an unprecedented aggregate number of floating-point operations


PIONEERING APPS: Fusion Energy Sciences

Potential Impact on Software

(1) What new software capabilities will result?

-- For each science driver and each exascale-appropriate application, the approach for developing new software capabilities will involve:

• Inventory current codes with respect to mathematical formulations, data structures, current scalability of algorithms and solvers (e.g., Poisson solves) with associated identification of bottlenecks to scaling, current libraries used, and "complexity" with respect to memory, flops, and communication (a minimal solver sketch follows this list)

• Inventory current capabilities for workflows, frameworks, V&V, uncertainty quantification, etc. with respect to: tight vs. loose code-coupling schemes for integration; management of large data sets from experiments & simulations; etc.

• Inventory expected software developmental tasks for the path to exascale (concurrency, memory access, etc.)

• Inventory workforce needs and carry out workforce assessments with respect to computer scientists, applied mathematicians, and FES applications scientists.
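As a concrete example of the solver kernels such an inventory would characterize (a deliberately simple sketch in Python/NumPy, not an FES production solver): a Jacobi iteration for a Poisson solve, whose bandwidth-bound stencil update and global convergence reduction are exactly the memory, flops, and communication "complexity" referred to above.

```python
import numpy as np

def poisson_jacobi(rhs, h, tol=1e-6, max_iters=10_000):
    """Solve -Laplacian(u) = rhs on a uniform grid of spacing h,
    with u = 0 on the boundary, by Jacobi iteration."""
    u = np.zeros_like(rhs)
    for it in range(max_iters):
        u_new = u.copy()
        # stencil update: average of the four neighbours plus the scaled source
        # term (memory-bandwidth bound; in a distributed code, needs halo exchange)
        u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                    + u[1:-1, 2:] + u[1:-1, :-2]
                                    + h * h * rhs[1:-1, 1:-1])
        # convergence check: in a distributed code this is a global reduction
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, it
        u = u_new
    return u, max_iters

u, iters = poisson_jacobi(np.ones((128, 128)), h=1.0 / 127)
```

Production codes would use multigrid or Krylov solvers from scalable libraries rather than Jacobi, but the inventory questions (data layout, halo exchange, frequency of global reductions) are the same.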

(2) What new methods and tools will be developed?

-- Outcome from above inventory/assessment exercises should lead to development of corresponding exascale relevant tools and capabilities.


PIONEERING APPS: Fusion Energy Sciences

  • Potential impact on user community (usability, capability, etc.)
  • (1) How will this realistically impact the research advances targeted by FES that may benefit from exascale systems?
  • -- The FES PRDs for (1) high physics fidelity integrated simulations and for addressing (2) burning plasmas/ITER challenges will potentially be able to demonstrate how the application of exascale computing capability can enable the accelerated delivery of much-needed modeling tools.
  • (2) What's the timescale in which that impact may be felt?
  • -- As illustrated on Pioneering Apps Roadmap (earlier slide):
  • • 10 to 20 PF (2012) integrated plasma core-edge coupled simulations
  • • 1 EF (2018) whole-system burning plasma simulations applicable to ITER

PIONEERING APPS: Fusion Energy Sciences

  • Technology drivers
    • Advanced architectures with greater capability but with formidable software development challenges (e.g., scalable algorithms and solvers; workflows & frameworks; etc.)
  • Alternative R&D strategies
    • Choosing architectural platform(s) capable of addressing the PRDs of FES on the path to exploiting Exascale
  • Recommended research agenda
    • Effective collaborative alliance between FES, CS, and Applied Math (e.g., SciDAC activities) with an associated strong V&V effort
  • Crosscutting considerations
    • Identifying possible common areas of software development needs with Pioneering Apps (climate, etc.)
    • Critical need in FES to attract, train, and assimilate young talent into this field – in common with general computational science research arena