Fusion Energy Sciences Greenbook Presentation

prepared by:

Carl Sovinec (U-WI), Alex Friedman (LLNL&LBNL),

Stephane Ethier (PPPL), and Chuang Ren (UCLA)

National Energy Research Scientific Computing Center User Group Meeting, June 25, 2004


OUTLINE

  • Fusion sciences overview

    • Plasma properties and descriptions

    • Ranges of time- and space-scales

  • Large-scale computations in FES

    • Magnetic fusion energy (MFE)

    • Inertial fusion energy (IFE)

  • Input from FES NERSC users


Fusion Sciences Overview

  • Fusion science is largely plasma science.

    • Matter is in the plasma state at fusion conditions.

    • Collective plasma dynamics regulate confinement or focusing.

    • Heating and drive rely on interaction of electromagnetic waves with plasmas.

    • Macroscopic plasma dynamics impose stability limits.

    • Plasma-surface interaction (atomic physics) impacts feasibility.

  • With ITER being planned and NIF now operational, computation has a tremendous opportunity to contribute programmatically.

(ITER: International Thermonuclear Experimental Reactor; NIF: National Ignition Facility)


Plasma Properties and Theoretical Descriptions

  • Particle-particle interactions are long-range but weak, so while classical statistics hold, plasmas are easily driven from local thermodynamic equilibrium.

  • Dominant interactions occur through collective motions and EM fields (E, B).

  • Kinetic theory provides an accurate and comprehensive plasma description:

Particle distribution evolution (kinetic equation for each species a):

    \frac{\partial f_a}{\partial t}
      + \mathbf{v}\cdot\nabla f_a
      + \frac{q_a}{m_a}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\frac{\partial f_a}{\partial \mathbf{v}}
      = C(f_a)

Maxwell’s equations for the self-consistent fields:

    \nabla\cdot\mathbf{E} = \rho/\epsilon_0, \qquad
    \nabla\cdot\mathbf{B} = 0, \qquad
    \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad
    \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\epsilon_0\frac{\partial\mathbf{E}}{\partial t}

with the charge and current densities obtained as velocity moments of the f_a.

  • f_a(x, v, t) is the ensemble-averaged single-particle distribution function for each species (a = i, e).

  • This system is sometimes solved in primitive (6D+time) form, e.g. for the propagation of ion beams and lasers in plasmas. However, even with large-scale computation, various limits and physically motivated averages are usually applied to isolate different classes of behavior.
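For a sense of why the primitive form is rarely affordable, here is a rough memory estimate for a uniform 6D phase-space grid; the grid sizes and per-value storage below are illustrative assumptions, not figures from any particular code.

    # Rough memory estimate for a uniform 6D (3 space + 3 velocity) phase-space grid.
    # Illustrative assumptions: 128 spatial and 64 velocity cells per dimension,
    # one double-precision value per cell, per species.
    nx = ny = nz = 128
    nvx = nvy = nvz = 64
    bytes_per_value = 8

    cells = (nx * ny * nz) * (nvx * nvy * nvz)
    terabytes = cells * bytes_per_value / 1e12
    print(f"{cells:.2e} cells  ->  ~{terabytes:.1f} TB per species, per stored copy of f")

Even this modest resolution implies several terabytes per stored copy of f for each species, which is why reduced descriptions are the norm.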


Magnetic Fusion Energy Scales and Descriptions

  • Approximations used for MFE plasmas lead to tractable but limited theoretical descriptions that are suitable for different ranges of spatial scales and characteristic times.

[Diagram: ranges of time- and space-scales covered by the RF, transport, gyrokinetic, and MHD descriptions.]


Scales for Heavy Ion Beam Physics

Time scales:

[Diagram: log of timescale in seconds, roughly 10⁻¹² to 1 s, in the driver and in the chamber, spanning the electron cyclotron period, the plasma periods (τ_pe, τ_pi, τ_pb), electron drift out of a magnet, beam residence in a magnet and transit through fringe fields, the betatron and depressed-betatron periods, the lattice period, beam residence in the chamber, and the pulse duration.]

Length scales:

  • electron gyroradius in magnet ~ 10 µm

  • λD,beam ~ 1 mm

  • beam radius ~ cm

  • machine length ~ km


Large-scale Computation in MFE

  • Existing computational efforts are addressing the following fundamental issues:

    • Nonlinear macroscopic plasma stability and the consequences of instability.

    • Cross-magnetic-field transport of plasma particles and energy from small-scale turbulence.

    • Heating / current drive / momentum input via RF waves.

    • Edge plasma dynamics and interactions with core plasma.

    • Atomic physics arising from plasma-surface interaction.

  • SCIDAC collaborations have helped extend the capabilities of the numerical models.

  • Integrated modeling will couple disparate descriptions to predict the nonlinear behavior of burning plasmas. (http://www.isofs.info/)


Computations for macroscopic stability must address stiffness and anisotropy.

  • Nonlinear dynamics that change the topology of the confining magnetic field are modeled with single- and two-fluid systems of equations, augmented by kinetic closures and/or minority species in some simulations.

  • Anisotropy produces subtle balances of large forces, nearly singular behavior at rational surfaces, and vastly different parallel and perpendicular transport properties.

  • System stiffness reflects the large range of time-scales from Alfvén-wave propagation to slow nonlinear evolution (~transport scale).
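To make the stiffness concrete, the sketch below compares an Alfvén transit time with a resistive (transport-like) diffusion time; the tokamak-like parameter values and the simple Spitzer-style resistivity are assumptions for this estimate, not values from the simulation described here.

    import math

    # Illustrative tokamak-like parameters (assumed for this estimate only).
    a = 1.0          # minor radius [m]
    B = 5.0          # magnetic field [T]
    n = 1.0e20       # ion density [m^-3]
    m_i = 3.34e-27   # deuteron mass [kg]
    T_e = 5.0e3      # electron temperature [eV]
    lnLambda = 17.0  # Coulomb logarithm (typical value)
    mu0 = 4.0e-7 * math.pi

    v_A = B / math.sqrt(mu0 * n * m_i)      # Alfven speed
    tau_A = a / v_A                         # Alfven transit time
    eta = 5.2e-5 * lnLambda / T_e**1.5      # rough Spitzer resistivity [Ohm m]
    tau_R = mu0 * a**2 / eta                # resistive diffusion time
    print(f"tau_A ~ {tau_A:.1e} s, tau_R ~ {tau_R:.1e} s, ratio ~ {tau_R / tau_A:.1e}")

The two times differ by roughly nine to ten orders of magnitude, which is why explicit stepping at the Alfvén scale cannot reach transport-scale evolution.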

This nonlinear simulation of a loss-of-confinement event in discharge #87009 of the GA DIII-D tokamak helped explain how internal MHD activity altered the heat deposition. (NIMROD data courtesy of Scott Kruger, Tech-X Corp.; SCIRUN graphics from Allen Sanderson, U. Utah)

SCIDAC Center for Extended MHD Modeling

w3.pppl.gov/CEMM


Performance of macroscopic computations is dominated by parallel linear algebra.

  • Stiff PDE systems require implicit and/or semi-implicit methods that lead to ill-conditioned matrices.

  • The algebraic systems are solved at every time-step, and a complete nonlinear computation may require 10⁴ or more time-steps.

  • SCIDAC collaborations (X. Li, SuperLU; TOPS-PETSc group) led to performance breakthroughs, but scaling to large numbers of processors remains challenging.
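A minimal sketch of this computational pattern, assuming a generic preconditioned Krylov solve in SciPy rather than the actual NIMROD/SuperLU machinery: an ill-conditioned sparse system is re-solved at every time step, so solver cost dominates the run.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Toy ill-conditioned sparse system standing in for a semi-implicit time advance
    # (illustrative only; not the actual MHD operators).
    n = 2000
    diag = np.linspace(1.0, 1.0e6, n)       # wide eigenvalue spread -> ill conditioned
    A = sp.diags([diag, -0.1 * np.ones(n - 1), -0.1 * np.ones(n - 1)],
                 [0, -1, 1], format="csr")
    M = spla.LinearOperator((n, n), matvec=lambda v: v / diag)  # diagonal preconditioner

    x = np.zeros(n)
    for step in range(5):                   # a production run takes ~10^4 such steps
        b = np.random.default_rng(step).standard_normal(n)
        x, info = spla.cg(A, b, x0=x, M=M, maxiter=500)
        assert info == 0, "CG did not converge"

In practice the preconditioner quality, not the Krylov iteration itself, determines whether such solves scale to large processor counts.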

Fixed problem-size scaling with SuperLU (left) and NIMROD-native CG solver (right).


First-principles computation of microturbulence leading to transport requires kinetic effects.

  • Ion gyro-orbits about magnetic field-lines are small with respect to the device size and parallel wavelengths but comparable to perpendicular wavelengths.

  • The gyrokinetic approximation removes fast dynamics.

  • Determining transport properties from correlations of fluctuations leads to disparate scales (stiffness) that must be resolved.

  • Both Eulerian (continuum) and Lagrangian (PIC) methods are used numerically.
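The ordering behind the gyrokinetic approximation mentioned above, stated generically (details vary among formulations), is

    \frac{\omega}{\Omega_i} \sim \frac{\rho_i}{L} \sim \frac{k_\parallel}{k_\perp}
        \sim \frac{\delta f}{f_0} \sim \epsilon \ll 1 ,
    \qquad k_\perp \rho_i \sim 1 ,

where Ω_i is the ion cyclotron frequency, ρ_i the ion gyroradius, and L a macroscopic scale length. Averaging over the fast gyro-motion removes the gyro-phase and reduces the 6D kinetic problem to a 5D one in guiding-center position, parallel velocity, and magnetic moment.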

The largest GTC run as of 5/03 required 1 billion particles and 125 million grid points using 1024 processors on the IBM-SP at NERSC.

SCIDAC Plasma Microturbulence Project

fusion.gat.com/theory/pmp/


Both particle and continuum codes scale well on present-day machines.

  • Decomposition strategies (including MPI / loop parallelism) have been tuned.

  • Computation time with particle-based codes is presently dominated by scatter and gather operations.

  • As more electron and electromagnetic effects are added, electromagnetic “field-solves” (linear algebra) become an increasingly large fraction of the CPU time.
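A minimal 1D sketch of the scatter (charge deposition) and gather (field interpolation) steps that dominate particle-code cost, using generic linear weighting on a periodic grid; this is illustrative NumPy, not the GTC implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_grid, n_part, dx = 64, 100_000, 1.0
    x = rng.uniform(0.0, n_grid * dx, n_part)        # particle positions

    # Scatter: deposit each particle onto its two nearest grid points (linear weighting).
    i = np.floor(x / dx).astype(int)
    w = x / dx - i
    rho = np.zeros(n_grid)
    np.add.at(rho, i % n_grid, 1.0 - w)              # indirect, scattered memory writes
    np.add.at(rho, (i + 1) % n_grid, w)

    # ... the field solve for E from rho (the growing "field-solve" cost) would go here ...
    E = np.sin(2.0 * np.pi * np.arange(n_grid) / n_grid)   # placeholder field

    # Gather: interpolate the field back to the particles (indirect, scattered reads).
    E_part = (1.0 - w) * E[i % n_grid] + w * E[(i + 1) % n_grid]

The indirect indexing in both steps defeats caches and vector units, which is why hardware scatter/gather support matters for these codes.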

Fixed problem-size scaling for the continuum GYRO code.

Increasing problem-size scaling for the PIC-based GTC code.


Computations of wave-plasma interactions investigate mode conversion and energy deposition.

  • The electromagnetic wave equation is solved in frequency-space, with the plasma current density being an integral operator on the electric field.

  • Computations traditionally used spectral representations.

  • Recent developments include a plasma model valid for arbitrary gyroradius/wavelength scaling. (AORSA, E.F. Jaeger, ORNL)
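In generic form (using an e^{-iωt} convention; the precise formulation in AORSA may differ in detail), the frequency-domain system being solved is

    \nabla \times \nabla \times \mathbf{E} \;-\; \frac{\omega^2}{c^2}\,\mathbf{E}
        \;=\; i \omega \mu_0 \left[ \mathbf{J}_p(\mathbf{E}) + \mathbf{J}_{\mathrm{ant}} \right],
    \qquad
    \mathbf{J}_p(\mathbf{E})(\mathbf{x}) \;=\; \int \boldsymbol{\sigma}(\mathbf{x},\mathbf{x}';\omega)\cdot\mathbf{E}(\mathbf{x}')\,d\mathbf{x}' .

The nonlocal conductivity kernel σ makes the discretized system dense, which is why the spectral formulation leads to the large parallel dense solves described below.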

AORSA computation results for a multiple-ion-species plasma in the Alcator C-Mod experiment at MIT showing mode conversion from the long-wavelength “fast-wave” to ion cyclotron waves.

SCIDAC Wave-Plasma Interactions Project

www.ornl.gov/sci/fed/scidacrf


Computational performance of RF-plasma calculations is dominated by parallel linear algebra.

  • In-core ScaLAPACK solves for the spectral representation achieved 1.6 Tflops on 1600 processors of Seaborg—67% efficiency!

  • Some computations are more effective with a configuration-space representation.

    • Current density computation can be trimmed from vacuum regions of 3D stellarator calculations.

    • In some cases, the matrix solve time is reduced by a factor of 100; computational efficiency decreases, however.
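The 67% efficiency quoted above is consistent with Seaborg's nominal peak of 1.5 Gflop/s per POWER3 processor; a quick arithmetic check (the per-processor peak is taken from the machine's published specifications, not from this slide):

    # Parallel efficiency implied by the quoted ScaLAPACK performance.
    achieved_tflops = 1.6
    procs = 1600
    peak_gflops_per_proc = 1.5      # nominal Seaborg POWER3 peak (assumed)
    efficiency = achieved_tflops * 1.0e3 / (procs * peak_gflops_per_proc)
    print(f"parallel efficiency ~ {efficiency:.0%}")    # ~67%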

3D AORSA computation for the LHD stellarator.


While SCIDAC has already provided a boost to MFE computation, predicting plasma behavior in ITER will require continued hardware and algorithmic gains.

From the SCaLeS Report (www.pnl.gov/scales), Plasma Science Section by S. C. Jardin, PPPL.



Large-scale Computation in IFE

HIF: Simulation of space-charge-dominated beams

Intense beams of heavy ions will drive targets for Inertial Fusion Energy & High Energy Density Physics. This beam science will benefit from the next NERSC computer, but the machine’s architecture will matter.

Prepared by: Alex Friedman, LLNL & LBNL

Heavy Ion Fusion Virtual National Laboratory

NERSC Users Group, LBNL, June 25, 2004


Particle-in-cell simulation of injector based on merging 119 intense beamlets

Key question in Heavy Ion Fusion: How do intense ion beams behave as they are accelerated and compressed into a small volume in space and time?

  • Beams are non-neutral plasmas; long-range forces dominate

  • They are collisionless with “long memories” — must follow beam particle distribution from source to target

  • “Multiscale, multispecies, multiphysics” computing; ions encounter:

    • Good electrons: neutralization by plasma aids compression, focusing

    • Bad electrons: stray “electron cloud” and gas can afflict beam

  • PIC is the main tool; new methods offer: resolution (AMR-PIC), dense plasmas (implicit, hybrid PIC+fluid), low noise (δf), halo (Vlasov), short electron timescales (large-Δt advance), …
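For context on the "low noise (δf)" item, the generic δf decomposition (not specific to any of the codes named here) is

    f = f_0 + \delta f , \qquad
    w_p \equiv \left. \frac{\delta f}{f} \right|_{\mathbf{x}_p,\,\mathbf{v}_p} ,

so that simulation markers carry weights w_p sampling only the perturbation; statistical noise then scales with the small δf rather than with the full distribution.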


[Figures illustrating the simulations listed below; panels show beam ions, background ions, and electrons versus axial position x approaching the target, on a logarithmic scale spanning 10⁻⁵ to 1.]

  • Nonlinear-perturbative simulation of ion-electron two-stream instability reveals structure of eigenmode

  • 4D Vlasov testbed captures halo down to extremely low densities

  • Electromagnetic simulation of a single converging beam in target chamber

  • Simulation of diode using merged Adaptive Mesh Refinement & PIC


Achieving HIF goals requires many processor-hours, good machine architecture, supportive center

  • Source-to-focus WARP PIC simulation of a beam in a full-scale HIF driver

    • On Seaborg: key kernels achieve 700-900 Mflop/s single-processor; aggregated parallel performance is ~100 Mflop/s per processor

    • Observe good scalability up to 256 proc’s on present-day problems; can assume further algorithmic improvements & larger problems

    • Next-step exp’t (minimal): 440 proc-hrs (128x128x4096, 16M part’s, 10k steps)

    • Full-scale system w/ electrons: 1.8 M proc-hrs (4× resolution, 4× longer beam, 4× longer path, two species, Δt halved, using new electron mover; see the scaling sketch after this list)

  • While performance on the SP is comparable to that of other large codes, the SP architecture is not ideal for this class of problem

    • A higher fraction of peak parallel speed was achieved on T3E than SP

    • WARP should adapt especially well to a vector/parallel machine

    • Hardware gather and scatter valuable; scatter-add even more so

    • Trends toward multi-physics complexity and implicitness imply that benefits would accrue from easy programmability, flexibility, good parallel performance

  • NERSC support has been excellent and is a key to successful supercomputing
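As a cross-check, the 1.8 M processor-hour figure above follows from simple multiplicative scaling of the 440 processor-hour baseline; the factor assignments below are an interpretation of the listed items (in particular, taking "4× resolution" to apply in all three dimensions), not numbers from the slide.

    # Interpretive scaling from the 440 proc-hr baseline to the full-scale estimate.
    baseline_proc_hrs = 440
    factors = {
        "4x resolution (assumed in all 3 dimensions)": 4 ** 3,
        "4x longer beam":                              4,
        "4x longer path":                              4,
        "two species":                                 2,
        "dt halved":                                   2,
    }
    total = float(baseline_proc_hrs)
    for f in factors.values():
        total *= f
    print(f"~{total / 1.0e6:.1f} M proc-hrs")    # ~1.8 M, matching the estimate above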



Fast Ignition: Separating Compression and Heating

[Diagram: a heating laser drives fast electrons (e⁻) through overdense plasma into the compressed fuel.]

  • It is relatively easy to compress the fuel pellet to achieve a core ρR greater than the α-particle range

    • Ignition needs a hot spot to start fusion

  • Near-perfect compression required to achieve hot spot in conventional ICF

  • FI: Using a 2nd laser to create hot spot (Tabak et al., 1994)

    • Heating window: 10 ps → PW laser (see the arithmetic after this list)

    • Laser energy needs to be converted into energetic electrons or protons

  • FI relaxes the compression requirement and increases energy gain.
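The "10 ps → PW laser" step is simple power arithmetic: delivering the hot-spot heating energy within the roughly 10 ps window implies petawatt-class power. The energy value below is an assumed illustrative number, not one taken from the slide.

    # Why a ~10 ps heating window implies a petawatt-class laser.
    heating_energy_kj = 20.0     # assumed hot-spot heating energy [kJ] (illustrative)
    window_ps = 10.0
    power_pw = (heating_energy_kj * 1.0e3) / (window_ps * 1.0e-12) / 1.0e15
    print(f"required power ~ {power_pw:.0f} PW")    # ~2 PW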


Key question in FI: How much of the ignition laser energy is coupled to the target core?

  • Energetic particle production (PIC simulation)

    • Laser-underdense plasma interaction

      • Channeling

      • Laser stability, e.g. hose/filament

    • Laser-plasma interface & vicinity (n ≤ 10² nc)

      • Hole-boring

      • Fast e- production

      • Fast e- transport: current filament/magnetic field generation

    • Laser-solid material interaction

      • Energetic proton production/focusing

      • Laser-gold cone interaction for coned target

  • Energetic particle transport/energy deposition in dense plasma (hybrid simulation)

    • Particle description for energetic components + fluid description for dense plasma (n ~ 10²–10⁴ nc)

    • Need to incorporate proper model for resistivity/collisionality


FI simulations require tremendous computational resources.

  • For explicit PIC 3D simulations,

    • Total memory scales as L³ n^(3/2)

    • Total particle-steps scale as L³ T n²

  • To simulate a (50 µm)³ plasma with n = 100 nc for 10 ps requires ~6×10² TB of memory (10¹³ particles) and 10⁹ processor-hours (on Seaborg); see the order-of-magnitude sketch after this list

  • State-of-the-art large PIC runs at Livermore used 7.2×10⁹ particles

  • Analyzing 10⁹-particle data requires running data-processing software such as IDL in parallel and interactively.
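A sketch of how the scalings above translate into the quoted figures; the resolution criterion (resolving the collisionless skin depth), particles per cell, storage per particle, and per-particle push time are assumptions chosen only to show that the orders of magnitude are plausible.

    import math

    # Physical constants and assumed problem parameters.
    eps0, e, m_e, c = 8.854e-12, 1.602e-19, 9.109e-31, 2.998e8
    n_c = 1.1e27                 # critical density for ~1 um laser light [m^-3]
    n = 100.0 * n_c              # plasma density from the estimate above
    L = 50.0e-6                  # box size [m]
    T = 10.0e-12                 # simulated time [s]
    ppc = 40                     # particles per cell (assumed)
    bytes_per_particle = 60      # assumed storage per particle
    sec_per_push = 5.0e-7        # assumed per-particle push time on Seaborg-era hardware

    w_pe = math.sqrt(n * e**2 / (eps0 * m_e))
    dx = 0.5 * c / w_pe          # resolve the collisionless skin depth
    cells = (L / dx) ** 3
    particles = ppc * cells
    steps = T / (0.5 * dx / c)   # Courant-limited time step

    print(f"particles ~ {particles:.1e}, memory ~ {particles * bytes_per_particle / 1e12:.0f} TB")
    print(f"processor-hours ~ {particles * steps * sec_per_push / 3600:.1e}")

With these assumptions the estimate lands near 10¹³ particles, a few hundred TB, and ~10⁹ processor-hours, consistent with the figures above.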


FES NERSC User Input

  • NERSC services and support are excellent.

  • The latency of Seaborg’s inter-node connection is too high and bandwidth is too low—data access relative to CPU speed should be considered carefully in the next purchase.

  • Scheduling policies are too selective in the type of scientific computations that are supported.

  • Those who have been able to take advantage of the large-job reimbursement program like it.

  • Diagnosing large PIC simulations will require support for large parallel interactive sessions.

  • At least four fusion codes, of different types, have demonstrated improved performance on the Cray X1.

Assessment from CRS: as different types of FES computations expand their physical models and employ more sophisticated algorithms, communication will become a greater burden.

