Numerical Simulation of 3D Fully Nonlinear Water Waves on Parallel Computers
Presentation Transcript

Numerical Simulation of 3D Fully Nonlinear Water Waves on Parallel Computers

Xing Cai

University of Oslo

Outline of the Talk

  • Mathematical model

  • Numerical scheme (sequential)

  • Parallelization strategy (domain decomposition)

  • Object-oriented implementation

  • Numerical experiment


Mathematical Model

  • Fully nonlinear 3D water waves

  • Primary unknowns:


Numerical Scheme

  • Physical domain:

  • Transformation: (a fixed domain)


Numerical Scheme

  • Operator splitting

  • At each time level:

    • FDM for updating free surface conditions

    • FEM solution of an elliptic boundary value problem in



  • Elliptic boundary value problem -- most CPU intensive

  • Resulting system of linear equations

  • Preconditioning

Computational cost


N: number of unknowns

The Question

Starting point: an object-oriented water wave simulator

(built in Diffpack: C++ environment for scientific computing)

How to do the parallelization?

  • Different approaches on different levels:

  • Automatic parallelization

  • Parallelization at the low level of matrix-vector operations

  • Parallelization on the level of simulators


Parallelization Strategy

  • Domain Decomposition

  • Divide and conquer

  • Solution of the original large problem through iteratively solving many smaller subproblems -- solution method or preconditioner

  • Flexible -- localized treatment of irregular geometries, singularities, etc.

  • Very efficient numerical methods -- even on sequential computers

  • Suitable for coarse grained parallelization


Overlapping Domain Decomposition

  • Alternating Schwarz method for two subdomains

  • Example: solving an elliptic boundary value problem

  • in

  • A sequence of approximations

  • where


Numerical Foundation

  • Additive Schwarz Method

  • Subproblems are of the same form as the original large problem, with possibly different boundary conditions on artificial boundaries.

  • Subproblems can be solved in parallel.


Convergence of the Solution


(Figure: solving the Poisson problem on the unit …)



Numerical Foundation

  • Coarse Grid Correction

  • Important for good DD convergence

  • Run on each processor, shared with subdomain simulators on the same processor


Some Observations

  • Parallel Computing

  • efficiency relies on the parallelization

  • Domain Decomposition

  • well suited for parallel computing

  • a good parallelization strategy

  • Object-Oriented Programming Technique

  • flexible and efficient sequential simulators

  • can be used in subdomain solves -- main ingredient of DD


New Programming Model

  • A simulator-parallel model

  • Each processor hosts an arbitrary number of subdomains

  • balance between numerical efficiency and load balancing

  • Each subdomain is assigned its own sequential simulator

  • Flexibility -- different types of grids, linear system solvers, preconditioners, convergence monitors etc. are allowed for different subproblems

  • Domain decomposition on the level of subdomain simulators!


Simulator-Parallel Model

  • Reuse of existing sequential simulators

  • Data distribution is implied

  • No need for global data

  • Needs additional functionality for exchanging nodal values inside the overlapping regions

  • Needs some global administration


A Generic Programming Framework

  • An add-on library (SPMD model)

  • Use of object-oriented programming technique

  • Flexibility and portability

  • Simplified parallelization process for end-user


The Administrator

  • Parameter Interface

    solution method or preconditioner, max iterations, stopping criterion, etc.

  • DD algorithm Interface

    access to predefined numerical algorithms, e.g. CG

  • Operation Interface (standard codes & UDC)

    access to subdomain simulators, matrix-vector product, inner product, etc.


The Subdomain Simulator

  • Subdomain Simulator -- a generic representation

  • C++ class hierarchy

  • Interface of generic member functions


Adaptation of Sequential Simulator

  • Class SubdomainSimulator - generic representation of a sequential simulator.

  • Class SubdomainFEMSolver - generic representation of a sequential simulator using FEM.

  • A new sequential wave simulator that fits into the framework is readily extended from the existing sequential simulator, while also being a subclass of SubdomainFEMSolver.




  • Algorithmic efficiency

  • efficiency of original sequential simulator(s)

  • efficiency of domain decomposition method

  • Parallel efficiency

  • communication overhead (low)

  • coarse grid correction overhead (normally low)

  • synchronization overhead

  • load balancing

    • subproblem size

    • work on subdomain solves


Parallel Efficiency

  • Fixed number of subdomains M=16.

  • Subdomain grids from partition of a global 41x41x41 grid.

  • Simulation over 32 time steps.

  • DD as preconditioner of CG for the Laplace eq.

  • Multigrid V-cycle as subdomain solver.


Overall Efficiency

  • Number of subdomains equal to number of processors


*For P=2, parallel BiCGStab is used.


  • Efficient solution of elliptic boundary value problems

  • Parallelization based on DD

  • Introduction of a simulator-parallel model

  • A generic framework for implementation