Spectral Differencing

with a Twist

Johann Radon Institute for Computational and Applied Mathematics

Linz, Österreich

15. März 2004




Spectral Differencing

with a Twist

Richard Baltensperger

Université Fribourg

Manfred Trummer

Pacific Institute for the

Mathematical Sciences &

Simon Fraser University

Johann Radon Institute for Computational and Applied Mathematics

Linz, Österreich

15. März 2004

[email protected] www.math.sfu.ca/~mrt




Banff International Research Station

  • PIMS – MSRI collaboration (with MITACS)

  • 5-day workshops

  • 2-day workshops

  • Focused Research

  • Research In Teams

    www.pims.math.ca/birs


Round-off Errors

  • 1991 – a Patriot missile battery in Dhahran, Saudi Arabia, fails to intercept an incoming Iraqi Scud missile, resulting in 28 fatalities. The problem was traced back to accumulated round-off error in the computer arithmetic.


Introduction

Spectral Differentiation:

  • Approximate function f(x) by a global interpolant

  • Differentiate the global interpolant exactly, e.g., compute g_k = p'(x_k) from the interpolating polynomial p


Features of spectral methods

+ Exponential (spectral) accuracy, converges faster than any power of 1/N

+ Fairly coarse discretizations give good results

- Full instead of sparse matrices

- Tighter stability restrictions

- Less robust, difficulties with irregular domains


Types of Spectral Methods

  • Spectral-Galerkin: work in transformed space, that is, with the coefficients in the expansion.

    Example: u_t = u_xx
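For a Fourier-Galerkin treatment of u_t = u_xx, every Fourier coefficient evolves independently: û_k(t) = exp(−k²t) û_k(0). A minimal numpy sketch of this idea (not from the slides; names are illustrative):

```python
import numpy as np

def heat_fourier_galerkin(u0, t):
    """Advance u_t = u_xx on [0, 2*pi) with periodic BCs by working
    entirely in coefficient space: u_hat_k(t) = exp(-k^2 t) * u_hat_k(0)."""
    N = len(u0)
    k = np.fft.fftfreq(N, d=1.0 / N)             # integer wavenumbers 0, 1, ..., -1
    u_hat = np.fft.fft(u0) * np.exp(-k**2 * t)   # exact update of each coefficient
    return np.real(np.fft.ifft(u_hat))

# usage: u0 = sin(x) decays to exp(-t) * sin(x)
x = 2 * np.pi * np.arange(64) / 64
u = heat_fourier_galerkin(np.sin(x), 0.5)
```

The time integration is exact here because the equation is linear and diagonal in the Fourier basis; nonlinear terms are where collocation methods come in.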


Types of Spectral Methods

  • Spectral Collocation (Pseudospectral): work in physical space. Choose collocation points x_k, k = 0, …, N, and approximate the function of interest by its values at those collocation points.

    Computations rely on interpolation.

  • Issues: Aliasing, ease of computation, nonlinearities


Interpolation

  • Periodic function, equidistant points: FOURIER

  • Polynomial interpolation:

  • Interpolation by Rational functions

  • Chebyshev points

  • Legendre points

  • Hermite, Laguerre


Differentiation Matrix D

Discrete data set f_k, k = 0, …, N

Interpolate between collocation points x_k:

p(x_k) = f_k

Differentiate p(x)

Evaluate p'(x_k) = g_k

All operations are linear: g = Df
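As a sketch of this pipeline (assuming Chebyshev points of the 2nd kind with their standard barycentric weights; all names are mine), D can be built directly from barycentric weights, with the diagonal chosen so that each row sums to zero:

```python
import numpy as np

def diff_matrix(x, w):
    """D[k, j] = (w_j / w_k) / (x_k - x_j) for j != k; the diagonal is set
    so every row sums to zero (the derivative of a constant is zero)."""
    dx = x[:, None] - x[None, :]
    np.fill_diagonal(dx, 1.0)                 # placeholder; diagonal fixed below
    D = (w[None, :] / w[:, None]) / dx
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))
    return D

# Chebyshev points of the 2nd kind and their barycentric weights
N = 16
k = np.arange(N + 1)
x = np.cos(k * np.pi / N)
w = (-1.0) ** k
w[0] *= 0.5
w[-1] *= 0.5
g = diff_matrix(x, w) @ x**3                  # derivative of f(x) = x^3
```

Because the interpolant reproduces cubics exactly, g agrees with 3x² up to rounding error.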


Software

  • Funaro: FORTRAN code, various polynomial spectral methods

  • Don-Solomonoff, Don-Costa: PSEUDOPAK FORTRAN code, more engineering oriented, includes filters, etc.

  • Weideman-Reddy: Based on Differentiation Matrices, written in MATLAB (fast Matlab programming)


Polynomial Interpolation

Lagrange form: “Although admired for its mathematical beauty and elegance it is not useful in practice”

  • “Expensive”

  • “Difficult to update”



Barycentric formula, version 2

p(x) = [ Σ_k w_k/(x − x_k) f_k ] / [ Σ_k w_k/(x − x_k) ],  summing over k = 0, …, N

(the denominator is the interpolant of the constant function 1)

  • Set-up: O(N²)

  • Evaluation: O(N)

  • Update (add a point): O(N)

  • New f_k values: no extra work!
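A minimal numpy sketch of the version-2 formula, p(x) = Σ_k (w_k/(x − x_k)) f_k / Σ_k w_k/(x − x_k), with the usual guard for evaluation exactly at a node (the function name is illustrative):

```python
import numpy as np

def bary_eval(xn, f, w, x):
    """Evaluate the 2nd-form barycentric interpolant at the points x:
    O(N) work per evaluation point after the O(N^2) weight set-up."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    diff = x[:, None] - xn[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        t = w / diff
        p = (t * f).sum(axis=1) / t.sum(axis=1)
    i, j = np.nonzero(diff == 0.0)    # x coincides with a node:
    p[i] = f[j]                       # return the data value itself
    return p

# usage: interpolate f(x) = x^2 on Chebyshev points, evaluate at 0.3
N = 8
k = np.arange(N + 1)
xn = np.cos(k * np.pi / N)
w = (-1.0) ** k
w[0] *= 0.5
w[-1] *= 0.5
val = bary_eval(xn, xn**2, w, [0.3])
```

Since x² is reproduced exactly by the interpolant, val equals 0.09 to rounding error.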



Barycentric formula: weights w_k

  • Equidistant points: w_k = (−1)^k (N choose k)

  • Chebyshev points (1st kind), x_k = cos((2k+1)π/(2N+2)): w_k = (−1)^k sin((2k+1)π/(2N+2))

  • Chebyshev points (2nd kind), x_k = cos(kπ/N): w_k = (−1)^k d_k, with d_0 = d_N = 1/2 and d_k = 1 otherwise


Barycentric formula: weights w_k

  • All weights can be multiplied by a common constant

  • The formula interpolates the data for any choice of nonzero weights: rational interpolation!

  • The relative size of the weights indicates ill-conditioning


Computation of the Differentiation Matrix

Entirely based upon interpolation.


Barycentric Formula

Barycentric (Schneider/Werner):

D_kj = (w_j / w_k) / (x_k − x_j) for j ≠ k,  D_kk = −Σ_{j≠k} D_kj


Chebyshev Differentiation

xk = cos(k/N)

Differentiation Matrix:
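A direct numpy translation of the standard explicit Chebyshev formulas, D_kj = (c_k/c_j)(−1)^{k+j}/(x_k − x_j) off the diagonal with the usual closed-form diagonal (a sketch; variable names are mine):

```python
import numpy as np

def cheb_D_original(N):
    """Chebyshev differentiation matrix on x_k = cos(k*pi/N) from the
    explicit formulas, with c_0 = c_N = 2 and c_j = 1 otherwise."""
    k = np.arange(N + 1)
    x = np.cos(k * np.pi / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    dx = x[:, None] - x[None, :]
    np.fill_diagonal(dx, 1.0)                    # dummy; diagonal set below
    sign = (-1.0) ** (k[:, None] + k[None, :])
    D = (c[:, None] / c[None, :]) * sign / dx
    with np.errstate(divide="ignore"):
        diag = -x / (2.0 * (1.0 - x**2))         # interior diagonal entries
    diag[0] = (2.0 * N**2 + 1.0) / 6.0
    diag[-1] = -diag[0]
    np.fill_diagonal(D, diag)
    return D, x

D, x = cheb_D_original(8)
```

Applied to f(x) = x³ this reproduces 3x² up to rounding; the accuracy troubles discussed below appear for larger N.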


Chebyshev Matrix has Behavioural Problems


  • Trefethen-Trummer, 1987

  • Rothman, 1991

  • Breuer-Everson, 1992

  • Don-Solomonoff, 1995/1997

  • Bayliss-Class-Matkowsky, 1995

  • Tang-Trummer, 1996

  • Baltensperger-Berrut, 1996


Chebyshev Matrix and Errors

[Figures: the differentiation matrix, absolute errors, relative errors]


Round-off Error Analysis

The computed node x_k = fl(cos(kπ/N)) has relative error O(ε), and so has x_j; therefore the difference x_j − x_k has absolute error O(ε) but relative error O(ε / |x_j − x_k|): severe cancellation whenever the two nodes are close.


Round-off Error Analysis

  • With “good” computation we expect an error in D_01 of O(N²)

  • Most elements in D are O(1)

  • Some are of O(N), and a few are O(N²)

  • We must be careful to see whether absolute or relative errors enter the computation


Remedies

  • Preconditioning: add ax + b to f to create a function that is zero at the boundary

  • Compute D in higher precision

  • Use trigonometric identities

  • Use symmetry: Flipping Trick

  • NST: Negative sum trick


More ways to compute Df

If we only want Df but not the matrix D (e.g., time stepping, iterative methods), we can compute Df for any f via

  • FFT based approach

  • Schneider-Werner formula


Chebyshev Differentiation Matrix

xk = cos(k/N)

“Original Formulas”:

Cancellation!



Flipping Trick

sin( – x) not as accurate as sin(x)

Use

and “flip” the upper half of D into the lower half, or the upper left triangle into the lower right triangle.
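A sketch combining the trigonometric identity x_k − x_j = −2 sin((k+j)π/2N) sin((k−j)π/2N) with the flipping trick and a negative-sum-trick diagonal (my own minimal implementation, not the Weideman-Reddy code):

```python
import numpy as np

def cheb_D_trig_flip(N):
    """Chebyshev D via x_k - x_j = -2 sin((k+j)pi/2N) sin((k-j)pi/2N);
    D[N-k, N-j] = -D[k, j] then recovers the lower half from the more
    accurately computed upper half."""
    k = np.arange(N + 1)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    K, J = np.meshgrid(k, k, indexing="ij")
    s = -2.0 * np.sin((K + J) * np.pi / (2 * N)) * np.sin((K - J) * np.pi / (2 * N))
    np.fill_diagonal(s, 1.0)                # avoid 0/0 on the diagonal
    D = (c[:, None] / c[None, :]) * (-1.0) ** (K + J) / s
    np.fill_diagonal(D, 0.0)
    for row in range(N // 2 + 1, N + 1):    # flip upper half into lower half
        D[row, :] = -D[N - row, ::-1]
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))     # negative sum trick for the diagonal
    return D

D = cheb_D_trig_flip(10)
x = np.cos(np.arange(11) * np.pi / 10)
```

The sine arguments in the retained half stay away from π, which is where accuracy is lost.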


NST: Negative Sum Trick

Spectral Differentiation is exact for constant functions:

Σ_{j=0}^{N} D_kj = 0,  hence  D_kk = −Σ_{j≠k} D_kj

Arrange the order of summation to sum the smaller elements first (requires sorting).
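A sketch of the sorted negative sum trick; the explicit loop makes the summation order visible (this particular ordering is an assumption, the slides do not pin one down):

```python
import numpy as np

def nst_diagonal_sorted(D):
    """Negative sum trick with sorting: set D[k, k] = -sum_{j!=k} D[k, j],
    accumulating the smallest-magnitude terms first."""
    D = D.copy()
    for k in range(D.shape[0]):
        off = np.delete(D[k], k)
        off = off[np.argsort(np.abs(off))]   # small terms first
        total = 0.0
        for v in off:                        # explicit left-to-right accumulation
            total += v
        D[k, k] = -total
    return D

# every row of the result sums to (nearly) zero by construction
A = np.random.default_rng(0).standard_normal((5, 5))
B = nst_diagonal_sorted(A)
```

Summing small-to-large keeps small contributions from being absorbed into a large running total before they can accumulate.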




Observations

  • Original formula for D is very inaccurate

  • Trig/Flip + “NST” (Weideman-Reddy) provides good improvement

  • FFT not as good as “expected”, in particular when N is not a power of 2

  • NST applied to original D gives best matrix

  • Even more accurate ways to compute Df


Machine Dependency

  • Results can vary substantially from machine to machine, and may depend on software.

  • Intel/PC: FFT performs better

  • SGI

  • SUN

  • DEC Alpha


Understanding the Negative Sum Trick


Understanding the Negative Sum Trick

[Figures: error in f, error in D]



Understanding the Negative Sum Trick

The inaccurate matrix elements are multiplied by very small numbers, leading to O(N²) errors -- optimal accuracy


Understanding the Negative Sum Trick

NST is an (inaccurate) implementation of the Schneider-Werner formula:

Schneider-Werner:  p'(x_k) = Σ_{j≠k} D_kj (f_j − f_k)

Negative Sum Trick:  p'(x_k) = Σ_{j≠k} D_kj f_j + D_kk f_k,  with  D_kk = −Σ_{j≠k} D_kj

(the two are identical in exact arithmetic)


Understanding the Negative Sum Trick

  • Why do we obtain superior results when applying the NST to the original (inaccurate) formula?

  • Accuracy of Finite Difference Quotients:


Finite Difference Quotients

  • For monomials a cancellation of the cancellation errors takes place, e.g., for f(x) = x² the quotient (f_j − f_k)/(x_j − x_k) equals x_j + x_k, which involves no subtraction of nearby numbers

  • Typically f_j − f_k is less accurate than x_j − x_k, so computing x_j − x_k more accurately does not help!




Fast Schneider-Werner

  • Cost of Df is 2N², SW costs 3N²

  • Can implement Df with a “Fast SW” method

Size of each corner block is N^(1/2)

Cost: 2N² + O(N)


Polynomial Differentiation

  • For example, Legendre, Hermite, Laguerre

  • Fewer tricks available, but Negative Sum Trick still provides improvements

  • Ordering the summation may become even more important


Higher Order Derivatives

  • Best not to compute D^(2) = D², etc.

  • Formulas by Welfert (implemented in Weideman-Reddy)

  • Negative sum trick shows again improvements

  • Higher order differentiation matrices are badly conditioned, so gaining a little more accuracy is even more important than for the first derivative
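The slides recommend Welfert's formulas rather than squaring D; still, as a minimal illustration (my own construction, not the recommended route), the negative sum trick can be reapplied to repair the diagonal of D², since rows of the exact second-derivative matrix also sum to zero:

```python
import numpy as np

# Chebyshev differentiation matrix from barycentric weights
N = 12
k = np.arange(N + 1)
x = np.cos(k * np.pi / N)
w = (-1.0) ** k
w[0] *= 0.5
w[-1] *= 0.5
dx = x[:, None] - x[None, :]
np.fill_diagonal(dx, 1.0)
D = (w[None, :] / w[:, None]) / dx
np.fill_diagonal(D, 0.0)
np.fill_diagonal(D, -D.sum(axis=1))       # negative sum trick on D

# second derivative: rows of the exact D^(2) also sum to zero,
# so the trick applies again after squaring
D2 = D @ D
np.fill_diagonal(D2, 0.0)
np.fill_diagonal(D2, -D2.sum(axis=1))
```

For f(x) = x⁴ the product D2 @ f reproduces 12x² up to rounding error.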


Using D to solve problems

  • In many applications the first and last row/column of D is removed because of boundary conditions

  • f -> Df appears to be most sensitive to how D is computed (forward problem = differentiation)

  • Have observed improvements in solving BVPs





Close

  • Demystified some of the less intuitive behaviour of differentiation matrices

  • Get more accuracy for the same cost

  • Study the effects of using the various differentiation matrices in applications

  • Forward problem is more sensitive than inverse problem

  • Df: Time-stepping, Iterative methods


To think about

  • Is double precision still enough as we become able to solve “bigger” problems?

  • Irony of spectral methods: exponential convergence, yet round-off error is the limiting factor

  • Accuracy requirements limit us to N of moderate size -- FFT is not so much faster than matrix based approach


