
Numerical Methods for Partial Differential Equations

CAAM 452

Spring 2005

Lecture 6

Various Finite Difference Discretizations for the Advection Equations – Phase and Dissipation Error

Instructor: Tim Warburton



Note on AB3 Stability



Review: After Time Stepping… Back to PDE’s

  • We have now proposed two quite distinct but reliable ways to advance an ODE approximately in time.

  • But our original goal was to solve a PDE

  • And not just for one mode.



Review: Sample Points in Time Space

  • We chose to sample the ODE at discrete points on the time axis.

  • We can also make the same choice for the spatial representation at each discrete time level.

  • We follow this choice with the dilemma of how to reconstruct an approximation to the derivative of discretely sampled data.

  • Assume c is positive



Review: Choice of Stencil

[Figure: several candidate stencils of sample points marked on the x axis.]

And many more combinations are possible.



Today: Delta Operators

We will consider the use of each of these difference formulas (and more sophisticated versions) as approximations for the spatial derivatives.

We consider stability… by which we mean we will examine the eigenspectrum of the difference operator treated as a matrix.

We will use Gershgorin’s theorem frequently:

http://mathworld.wolfram.com/GershgorinCircleTheorem.html
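
As an illustration (not from the original slides), here is a minimal MATLAB sketch of how Gershgorin's theorem will be used: the disk centres are the diagonal entries of the operator, the radii are the off-diagonal absolute row sums, and every eigenvalue lies in at least one disk. The example matrix and variable names are my own.

A = [ 2 -1  0;
     -1  2 -1;
      0 -1  2 ];                          % example matrix (my own choice)
centres = diag(A);                        % Gershgorin disk centres: diagonal entries
radii   = sum(abs(A), 2) - abs(centres);  % radii: off-diagonal absolute row sums
lambda  = eig(A);
% check: each eigenvalue lies in at least one disk |z - centre| <= radius
all(min(abs(lambda.' - centres) - radii, [], 1) <= 1e-12)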



Right Difference Matrix

  • Consider a case with 10 points on a periodic interval:

  • The semi-discrete scheme for the advection equations is:

Note: I used indexing from zero to match with classical definitions coming up.



Discrete Operator Matrix

  • By Gershgorin’s theorem we conclude that the matrix has all its eigenvalues contained in the circle centered at -c/dx with radius c/dx.

  • Thus the eigenvalues are all in the left half plane, with the possible exception of one or more at the origin, so this is a good candidate for time-stepping.
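
A minimal MATLAB check of this claim. The sign convention (diagonal -c/dx, superdiagonal +c/dx, with a periodic wrap-around entry) is inferred from the Gershgorin disk quoted above, since the slide's formula is an image.

M  = 10; c = 1; dx = 2*pi/M;
A  = (c/dx)*(diag(ones(M-1,1), 1) - eye(M));   % superdiagonal +c/dx, diagonal -c/dx (inferred convention)
A(M,1) = c/dx;                                 % periodic wrap-around entry
lambda = eig(A);
max(abs(lambda + c/dx))    % equals c/dx: the eigenvalues sit on the Gershgorin circle
max(real(lambda))          % <= 0: the spectrum lies in the left half plane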


(cont.)

  • We can be more precise about the location of the eigenvalues of this difference operator.

  • We first introduce the Fourier transform for the data vector:

  • This is simply a linear map applied to the vector of values, with inverse given by:



Fourier Transform in Matrix Notation

  • Introducing the following notation for the Fourier transform matrix and a shift operator:

  • We consider the following eigenvalue problem:

or



Right Shift Matrix

  • We write down explicitly what the right shift matrix is:

  • Where the box around the addition operator indicates that this is addition modulo M

  • This time the delta indicates the Kronecker Delta:

    • http://mathworld.wolfram.com/KroneckerDelta.html
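
The slide's formula is an image; one plausible reading, sketched in MATLAB, is (S)_{ij} = delta_{j, i [+] 1}, with [+] denoting addition modulo M, so that applying S to a vector returns the neighbouring value to the right (with periodic wrap-around).

M  = 10;
Sp = zeros(M);
for i = 0:M-1
  Sp(i+1, mod(i+1, M)+1) = 1;   % entry 1 where the column index equals the row index + 1 (mod M)
end
u = (0:M-1)';
(Sp*u)'                         % returns 1 2 3 ... 9 0: each entry replaced by its right neighbour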


(cont.)

  • We perform a similarity transform using the Fourier matrix:

  • Becomes:

  • We now consider the first term:

  • First note (basic properties of the Fourier transform):


(cont.)

What we discover after some algebra (with slight liberties taken with the notation) is that the Fourier transform matrix diagonalizes the right shift matrix.
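
A numerical confirmation in MATLAB. The DFT normalization and sign convention below are my own choice; the slides may use a different one, but the diagonalization property is the same.

M  = 10;
[J, K] = meshgrid(0:M-1);
F  = exp(-2i*pi*J.*K/M)/sqrt(M);   % unitary DFT matrix
Sp = circshift(eye(M), 1, 2);      % right shift matrix: (Sp*u)_j = u_{j+1}, periodic
Lambda = F*Sp/F;                   % Fourier similarity transform F*Sp*inv(F)
norm(Lambda - diag(diag(Lambda)))  % essentially zero: the off-diagonal part vanishes
diag(Lambda).'                     % diagonal entries are exp(2*pi*1i*m/M), m = 0..M-1, on the unit circle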


(cont.)

  • Finally we return to the right difference matrix and can now find its eigenspectrum:

  • Using

  • We find that the matrix is diagonalized by the Fourier matrix and that



Important Result: Fourier Transform of Right Shift Matrix

i.e. the Fourier (similarity) transform of the right shift matrix is a diagonal matrix.



Left Difference Matrix

  • Consider a case with 10 points on a periodic interval:

  • The semi-discrete scheme for the advection equations is:


(cont.)

  • Applying Gershgorin’s theorem this time we find that the eigenvalues are all in a disk centered at +c/dx with radius c/dx.

  • This is bad news!

  • All the eigenvalues are in the right half plane, apart from possibly a subset at the origin.

  • i.e. any nonzero mode will be unstable under the time-stepping operator (and this is clearly not physical).



Detailed Study of the Left Difference Operator Spectrum

  • This time we introduce the left shift operator:

  • This time the eigenvalue problem to consider is:

  • Through a similar analysis (since the Fourier transform also diagonalizes the left shift matrix) we find:

  • This confirms the Gershgorin estimate.



Important Result: Fourier Transform of Left Shift Matrix

i.e. the Fourier (similarity) transform of the left shift matrix is a diagonal matrix.



Polynomials of Shift Matrices

  • We can express most finite difference operators as polynomials of shift operators:

  • We can apply the Fourier similarity transform to diagonalize this:

  • where the transformed vector holds the Fourier coefficients of u.

  • Hence each Fourier coefficient satisfies:
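
A MATLAB sketch of this diagonalization for an operator written as a polynomial in the shift matrices. The stencil weights below (the centred difference, which reappears on the next slide) and the variable names are my own choices.

M  = 12; dx = 2*pi/M;
Sp = circshift(eye(M), 1, 2);                 % right shift
Sm = Sp';                                     % left shift (inverse of the right shift)
a  = [-1/2, 0, 1/2];                          % weights for [Sm, I, Sp]: centred difference example
D  = (a(1)*Sm + a(2)*eye(M) + a(3)*Sp)/dx;    % difference operator as a polynomial in the shifts
[J, K] = meshgrid(0:M-1);
F  = exp(-2i*pi*J.*K/M)/sqrt(M);              % DFT matrix
Dhat = F*D/F;                                 % Fourier similarity transform
norm(Dhat - diag(diag(Dhat)))                 % essentially zero: D is diagonalized
theta  = 2*pi*(0:M-1)/M;
symbol = (a(1)*exp(-1i*theta) + a(2) + a(3)*exp(1i*theta))/dx;
norm(diag(Dhat).' - symbol)                   % diagonal entries = the same polynomial at exp(1i*theta)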



Centered Differencing

  • We recall the centered difference operator can be expressed in terms of the left and right difference operators:


(cont.)

  • So the operator has purely imaginary eigenvalues

  • As before we must resort to a time-stepping method which will in this case accommodate a portion of the imaginary axis.



The 4th Order Central Difference Formula

  • Recall the fourth order central difference formula:

  • As for the centered differencing formula, all the eigenvalues are imaginary.

  • We have used the diagonalizing property of the Fourier transform on polynomials of the shift matrices
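
A MATLAB sketch, assuming the standard fourth order central difference du/dx ~ (8*(u_{j+1}-u_{j-1}) - (u_{j+2}-u_{j-2}))/(12*dx). The slide's own formula is an image, but this choice is consistent with the eigenvalue output printed later in the lecture.

M  = 20; dx = 2*pi/M;
Sp = circshift(eye(M), 1, 2);  Sm = Sp';
D4 = (8*(Sp - Sm) - (Sp^2 - Sm^2))/(12*dx);          % assumed standard 4th order central difference, periodic
lambda = eig(D4);
max(abs(real(lambda)))                               % essentially zero: purely imaginary spectrum
theta  = 2*pi*(0:M-1)/M;
symbol = 1i*(8*sin(theta) - sin(2*theta))/(6*dx);    % Fourier symbol of D4
norm(sort(imag(lambda)) - sort(imag(symbol)).')      % the eigenvalues agree with the symbol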



Summary of Mode Evolution

  • Using:

  • We write down the ODE satisfied by each discrete Fourier mode of the transformed data:



Interpreting the ODEs

  • We can assert an interpretation of multipliers in the ODE for each discrete Fourier mode.

  • We know that the m’th mode has physical shape exp(2*pi*i*m*x/L), where L is the periodic length (L = M*dx)

  • So (reverting back to our approach for turning a PDE into an ODE analytically) we would ideally be solving:

  • Thus we can perform a small theta analysis of the multipliers in each of the cases we just derived.
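
A brief sketch of the exact multiplier implied by that mode shape (the slide's own formulas are images, so the sign relative to c is left open):

\[
\partial_x\, e^{2\pi i m x/L} \;=\; \frac{2\pi i m}{L}\, e^{2\pi i m x/L} \;=\; \frac{i\theta}{\Delta x}\, e^{2\pi i m x/L},
\qquad \theta := \frac{2\pi m\,\Delta x}{L} = \frac{2\pi m}{M},
\]

so the exact ODE multiplier for the m-th mode is (up to the sign convention chosen for the advection equation) \(\pm\, ic\theta/\Delta x\): purely imaginary, with magnitude proportional to the wave number.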



(1) Right Difference

The small theta analysis (low wave number) implies the solution will decay
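
A hedged sketch of that expansion, assuming the right-difference multiplier is \(c(e^{i\theta}-1)/\Delta x\), which is consistent with the Gershgorin disk centred at \(-c/\Delta x\) found earlier:

\[
\frac{c}{\Delta x}\left(e^{i\theta}-1\right)
\;=\; \frac{ic\theta}{\Delta x} \;-\; \frac{c\,\theta^{2}}{2\,\Delta x} \;+\; O\!\left(\frac{\theta^{3}}{\Delta x}\right).
\]

The leading term is the purely imaginary advection multiplier; the next term has negative real part, so low wave number modes are damped rather than amplified.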



(2) Left Difference

The small theta analysis shows that the solution propagates and grows exponentially fast



(3) Center Difference

The small theta analysis shows that the solution propagates and does not grow
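
A hedged sketch, assuming the centred-difference multiplier is \(ic\sin\theta/\Delta x\) (the purely imaginary spectrum found above):

\[
\frac{ic\sin\theta}{\Delta x}
\;=\; \frac{ic\theta}{\Delta x}\left(1 - \frac{\theta^{2}}{6} + O(\theta^{4})\right),
\]

so nothing grows or decays, but the effective wave speed is approximately \(c\,(1-\theta^{2}/6)\): modes with different wave numbers travel at slightly different speeds, i.e. the error is dispersive. With \(\theta = k\,\Delta x\) the relative phase error is \((1/6)\,k^{2}\Delta x^{2}\), which appears to be the \((1/6)\,\Delta x^{2}\) coefficient flagged on the phase error plot later.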



(4) 4th Order Center Difference



Summary of Small Theta Analysis

  • The dominant remainder term in this analysis relates to a commonly used, physically motivated description of the shortfall of the method:

  • (1) Right difference: dissipative

  • (2) Left difference: unstable

  • (3) Center difference: dispersive

  • (4) 4th order center difference: dispersive



Terms

  • The dominant error is dispersive if it causes different modes to travel at different speeds, as with:

  • To leading order accuracy, ctilde is the actual numerical wave speed, to be compared with the physical advection speed c.

  • The dominant error is dissipative if it reduces the wave amplitude over time, as with:



Comparison of Multipliers




The Imaginary Part

  • We now see that each of the methods has a dispersive nature (the wave speed changes with the wave number).

  • Also, the imaginary part of the multiplier is neither monotone nor linear over [0, pi] or [pi, 2*pi]

  • Clearly there will be spurious modes (eigenvalues) which show up in the discrete spectrum as we saw last time.

[Plot: imaginary part of the multipliers, with the spurious modes marked.]



Recap Via a New Example

  • There is a 6th order central difference scheme:

  • We Fourier transform this:


(cont.)

  • We simplify this to:

  • Compare with GKO p83.

  • The eigenvalues are all imaginary

  • Tedious Taylor expansion computations will reveal an error O(theta^7)

  • Next we plot this dispersion relationship against the other examples:
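
A MATLAB sketch of such a comparison. The sixth order stencil below uses the standard weights (45, -9, 1)/60; the slide's formula is an image, so treat the coefficients as an assumption (they do give the O(theta^7) error quoted above). The exact symbol of d/dx for mode theta is i*theta/dx.

c = 1; dx = 1;  theta = linspace(0, pi, 200);                    % c = dx = 1 for plotting purposes
w2 = c*sin(theta)/dx;                                            % 2nd order centred
w4 = c*(8*sin(theta) - sin(2*theta))/(6*dx);                     % 4th order centred
w6 = c*(45*sin(theta) - 9*sin(2*theta) + sin(3*theta))/(30*dx);  % 6th order centred (assumed weights)
plot(theta, c*theta/dx, 'k--', theta, w2, theta, w4, theta, w6);
legend('exact', '2nd order', '4th order', '6th order', 'Location', 'NorthWest');
xlabel('theta'); ylabel('imaginary part of the multiplier');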



Phase Error Plot

  • It should be clear that the 6th order central difference operator is more accurate over a wider range of theta – but still prone to spurious eigenvalues.

(Annotation on the plot: the coefficient should be (1/6)*dx^2.)



Summary So Far

  • We have established that for the advection equation it is not appropriate to use the left difference for the spatial derivative term (the multiplier has non-negative real part).

  • The other difference formulae are stable in this context:

  • In this context the right difference is referred to as an upwind derivative – it uses a dissipative mechanism to remain stable.

  • The given central difference operators are carefully designed so that their eigenvalues sit on the imaginary axis.


(cont.)

  • We have established that in each of the schemes we have replaced the exact ODE multiplier

  • with an approximation, in this lecture denoted by:

  • We found that this approximation is not uniform in wave number – i.e. there is a non-monotonic dependence on m (embedded in the error terms).

  • This explains our observation of the spurious eigenvalues in the computer evaluations of the spectrum of the discrete operator.



Recall Those Spurious Eigenvalues

  • What should the eigenvalues of the 4th order derivative operator ideally tend to in the limit of decreasing dx?

  • Example (10, 20, 80 points with the periodic length 2*pi):

>> [dxDeigs, dxD] = test4th(20);
>> dx = 2*pi/20;
>> e = sort(dxDeigs/dx);
>> e(1:10)
ans =
  -0.0000
  -0.0000
  -0.0000 - 0.9997i
  -0.0000 + 0.9997i
   0.0000 - 1.6233i
   0.0000 + 1.6233i
  -0.0000 - 1.9901i
  -0.0000 + 1.9901i
   0.0000 - 2.9290i
   0.0000 + 2.9290i

>> [dxDeigs, dxD] = test4th(80);
>> dx = 2*pi/80;
>> e = sort(dxDeigs/dx);
>> e(1:10)
ans =
   0.0000 - 0.0000i
   0.0000 + 0.0000i
  -0.0000 - 1.0000i
  -0.0000 + 1.0000i
  -0.0000 - 1.6639i
  -0.0000 + 1.6639i
  -0.0000 - 2.0000i
  -0.0000 + 2.0000i
   0.0000 - 2.9997i
   0.0000 + 2.9997i

>> [dxDeigs, dxD] = test4th(10);
>> dx = 2*pi/10;
>> sort(dxDeigs/dx)
ans =
   0.0000 - 0.0000i
   0.0000 + 0.0000i
   0.0000 - 0.9950i
   0.0000 + 0.9950i
        0 - 1.4996i
        0 + 1.4996i
        0 - 1.8623i
        0 + 1.8623i
  -0.0000 - 2.1741i
  -0.0000 + 2.1741i
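
For reference, here is a hypothetical reconstruction of test4th (the actual script is not shown in the slides): it builds dx times the fourth order central difference matrix on an M-point periodic grid and returns its eigenvalues, and it reproduces the numbers printed above. With the periodic length 2*pi the exact derivative operator has eigenvalues i*m for integer m, which is what the scaled eigenvalues approach as dx decreases; the higher wave numbers converge last.

function [dxDeigs, dxD] = test4th(M)
  % hypothetical reconstruction, consistent with the printed output above
  Sp  = circshift(eye(M), 1, 2);              % periodic right shift: (Sp*u)_j = u_{j+1}
  Sm  = Sp';                                  % periodic left shift
  dxD = (8*(Sp - Sm) - (Sp^2 - Sm^2))/12;     % dx * (4th order central difference matrix)
  dxDeigs = eig(dxD);
end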



Upwinding

  • Recall: we referred to the right differencing operator as the “upwind derivative”.

  • This is due to the physical placement of the sampled data relative to the advection direction:

  • i.e. the differencing only uses data from the upstream (upwind) direction.

  • More on this next time.

[Figure: the stencil on the x axis, with the advection direction indicated.]



Next Lecture

  • Introduction of some standard finite-difference methods.

  • Define (for finite difference schemes):

    • Consistency

    • Stability

    • Accuracy

    • Courant-Friedrichs-Lewy condition for stability

  • We will examine convergence:

    • Continuous time+discrete space (as seen today)

    • Discrete time and discrete space

    • Lax-Richtmyer equivalence theorem (consistency + stability ⇒ convergence)

