
Numerical Methods for Partial Differential Equations

CAAM 452

Spring 2005

Lecture 6

Various Finite Difference Discretizations for the Advection Equations – Phase and Dissipation Error

Instructor: Tim Warburton

- We have now proposed two quite distinct but reliable ways to advance an ODE approximately in time.
- But our original goal was to solve a PDE, and not just for one mode.

- We chose to sample the ODE at discrete points on the time axis.
- We can also make the same choice for the spatial representation at each discrete time level.
- We then face the dilemma of how to reconstruct an approximation to the derivative of discretely sampled data.
- Assume c is positive

[Figure: grid-point stencils on the x-axis for the candidate difference formulas]

And many more combinations

We will consider the use of each of these difference formulas (and more sophisticated versions) as approximations for the spatial derivatives.

We consider stability… by which we mean we will examine the eigenspectrum of the difference operator treated as a matrix.

We will use Gershgorin’s theorem frequently:

http://mathworld.wolfram.com/GershgorinCircleTheorem.html

- Consider a case with 10 points on a periodic interval.
- The semi-discrete scheme for the advection equation, using the backward difference, is:

du_j/dt = -(c/dx) * (u_j - u_{j-1})

Note: I used indexing from zero to match the classical definitions coming up.

- By Gershgorin's theorem we conclude that the matrix has all its eigenvalues contained in the circle centered at -c/dx with radius c/dx.
- Thus the eigenvalues are all in the left half plane, with the possible exception of one or more at the origin: a good candidate for time-stepping.
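As an illustrative check (a numpy sketch, with M = 10 and c = 1 chosen arbitrarily; the lecture's own codes are in MATLAB), we can build the periodic upwind difference matrix and confirm the Gershgorin estimate numerically:

```python
import numpy as np

M, c = 10, 1.0
dx = 2 * np.pi / M

# Semi-discrete upwind scheme du_j/dt = -(c/dx)*(u_j - u_{j-1}) on a
# periodic grid, written as a matrix.
A = np.zeros((M, M))
for j in range(M):
    A[j, j] = -c / dx            # diagonal entries: Gershgorin centers
    A[j, (j - 1) % M] = c / dx   # one off-diagonal entry: radius c/dx

eigs = np.linalg.eigvals(A)

# Every eigenvalue lies in the disk |lambda + c/dx| <= c/dx ...
assert np.all(np.abs(eigs + c / dx) <= c / dx + 1e-8)
# ... and hence has non-positive real part.
assert np.max(eigs.real) <= 1e-8
```

In fact the eigenvalues of this circulant matrix sit exactly on the Gershgorin circle, so the estimate is sharp here.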

- We can be more precise about the location of the eigenvalues of the difference matrix.
- We first introduce the Fourier transform for the data vector.
- This is simply a linear map applied to the vector of values, with inverse given by:

- Introducing the following notation for the Fourier transform matrix and a shift operator:
- We consider the following eigenvalue problem:

or

- We write down explicitly what the right shift matrix is:
- Where the box around the addition operator indicates that this is addition modulo M
- This time the delta indicates the Kronecker Delta:
- http://mathworld.wolfram.com/KroneckerDelta.html

- We perform a similarity transform using the Fourier matrix:
- Becomes:
- We now consider the first term:
- First note (basic properties of the Fourier transform):

What we discover after some algebra (with slight liberties taken with the notation) is that the Fourier transform matrix diagonalizes the right shift matrix.

- Finally we return to the difference matrix and can now find its eigenspectrum:
- Using
- We find that the matrix is diagonalized by the Fourier matrix and that

i.e. the Fourier (similarity) transform of the

right shift matrix is a diagonal matrix
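The diagonalization can be checked directly in a short numpy sketch; the DFT scaling F_{mj} = exp(-2*pi*i*m*j/M)/M used here is one of several common conventions, and M = 10 is an arbitrary choice:

```python
import numpy as np

M = 10
j = np.arange(M)

# Right-shift matrix: (S u)_j = u_{(j+1) mod M}
S = np.zeros((M, M))
S[j, (j + 1) % M] = 1.0

# Fourier transform matrix and its inverse
F = np.exp(-2j * np.pi * np.outer(j, j) / M) / M
Finv = np.linalg.inv(F)

# Similarity transform by the Fourier matrix
D = F @ S @ Finv

# D is diagonal, with the M-th roots of unity on the diagonal.
off_diag = D - np.diag(np.diag(D))
assert np.max(np.abs(off_diag)) < 1e-9
assert np.allclose(np.diag(D), np.exp(2j * np.pi * j / M))
```

The diagonal entries exp(2*pi*i*m/M) are exactly the eigenvalues of the shift: applying S to the Fourier mode exp(2*pi*i*m*j/M) multiplies it by that factor.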

- Consider again a case with 10 points on a periodic interval, now using the forward difference.
- The semi-discrete scheme for the advection equation is:

du_j/dt = -(c/dx) * (u_{j+1} - u_j)

- Applying Gershgorin this time we find that the eigenvalues all lie in a disk centered at +c/dx with radius c/dx.
- This is bad news!
- All the eigenvalues are in the right half plane, apart from a possible subset at the origin.
- i.e. any nonzero mode will be unstable under time stepping (and is clearly not physical).

- This time we introduce the left shift operator:
- This time the eigenvalue problem to consider is:
- Through a similar analysis (since the Fourier transform also diagonalizes the left shift matrix) we find:
- This confirms the Gerschgorin estimate.

i.e. the Fourier (similarity) transform of the

left shift matrix is a diagonal matrix

- We can express most finite difference operators as polynomials of shift operators:
- We can apply the Fourier similarity transform to diagonalize this:
- Here û denotes the vector of Fourier coefficients of u.
- Hence each Fourier coefficient satisfies:

- We recall the centered difference operator can be expressed in terms of the left and right difference operators:

- So the operator has purely imaginary eigenvalues
- As before we must resort to a time-stepping method which will in this case accommodate a portion of the imaginary axis.
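A small numpy sketch (grid size and c are arbitrary choices) confirms that the centered difference operator for the advection equation has purely imaginary eigenvalues matching the symbol -(i*c/dx)*sin(theta):

```python
import numpy as np

M, c = 16, 1.0
dx = 2 * np.pi / M
j = np.arange(M)

# Semi-discrete operator A = -c*D0 for u_t = -c u_x, where
# D0 u_j = (u_{j+1} - u_{j-1}) / (2*dx) is the centered difference.
A = np.zeros((M, M))
A[j, (j + 1) % M] = -c / (2 * dx)
A[j, (j - 1) % M] = c / (2 * dx)

eigs = np.linalg.eigvals(A)

# All eigenvalues sit on the imaginary axis (A is real skew-symmetric).
assert np.max(np.abs(eigs.real)) < 1e-9

# As a multiset they match the symbol -(i*c/dx)*sin(2*pi*m/M).
expected = -1j * (c / dx) * np.sin(2 * np.pi * j / M)
assert np.allclose(np.sort(eigs.imag), np.sort(expected.imag), atol=1e-9)
```

This is why a time-stepper whose stability region covers part of the imaginary axis is needed for centered schemes.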

- Recall the fourth order central difference formula:
- As with the second-order central differencing formula, all the eigenvalues are imaginary.
- We have used the diagonalizing property of the Fourier transform on polynomials of the shift matrices

- Using:
- We write down the ODE satisfied by each discrete Fourier mode of the transformed data:

- We can now interpret the multipliers in the ODE for each discrete Fourier mode.
- We know that the m'th mode has physical shape exp(2*pi*i*m*x/L), where L is the periodic length (L = M*dx).
- So (reverting back to our approach for turning a PDE into an ODE analytically) we would ideally be solving:
- Thus we can perform a small theta analysis of the multipliers in each of the cases we just derived.

The small theta analysis (low wave number) of the backward (upwind) difference implies the solution will decay.

The small theta analysis of the forward difference shows that the solution propagates and grows exponentially fast.

The small theta analysis of the central differences shows that the solution propagates and does not grow.

- The dominant remainder term in this analysis relates to a commonly used, physically motivated description of the shortfall of the method:

Backward (upwind) difference: dissipative

Forward difference: unstable

Second-order central difference: dispersive

Fourth-order central difference: dispersive

- The dominant error is dispersive if it causes different modes to travel at different speeds, as with the central differences.
- To leading order, ctilde is the actual numerical wave speed, to be compared with the physical advection speed c.
- The dominant error is dissipative if it reduces the wave amplitude over time, as with the upwind difference.


- We now see that each of the methods has a dispersive nature (wave speed changes with wave number).
- Also, the imaginary part of the multiplier is not monotone and linear over [0, pi] or [pi, 2*pi].
- Clearly there will be spurious modes (eigenvalues) which show up in the discrete spectrum, as we saw last time.

Spurious modes

- There is a 6th order central difference scheme:
- We Fourier transform this:

- We simplify this to:
- Compare with GKO p83.
- The eigenvalues are all imaginary
- Tedious Taylor expansion computations will reveal an error O(theta^7)
- Next we plot this dispersion relationship against the other examples:
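The O(theta^7) claim can be verified numerically; the sketch below assumes the standard 6th-order symbol a(theta) = (3/2)sin(theta) - (3/10)sin(2*theta) + (1/30)sin(3*theta), whose leading phase error works out to theta^7/140:

```python
import numpy as np

def symbol(theta):
    # Fourier symbol of the 6th-order central difference (units of 1/dx)
    return (1.5 * np.sin(theta)
            - 0.3 * np.sin(2 * theta)
            + (1.0 / 30.0) * np.sin(3 * theta))

# The exact multiplier for mode theta is theta itself, so the phase
# error is theta - a(theta); Taylor expansion gives theta^7/140 + ...
for theta in (0.1, 0.05, 0.025):
    err = theta - symbol(theta)
    assert abs(err / theta**7 - 1.0 / 140.0) < 1e-3

# Halving theta reduces the error by roughly 2^7 = 128.
ratio = (0.1 - symbol(0.1)) / (0.05 - symbol(0.05))
assert abs(ratio - 128) < 2
```

The factor-of-128 error reduction per halving of theta is the numerical signature of the 7th-order phase error.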

- It should be clear that the 6th order central difference operator is more accurate over a wider range of theta – but still prone to spurious eigenvalues.

Should be (1/6)*dx^2

- We have established that for the advection equation it is not appropriate to use the forward difference for the spatial derivative term (non-negative real part for the multiplier).
- The other difference formulae are stable in this context.
- The backward difference is referred to in this context as an upwind derivative: it uses a dissipative mechanism to remain stable.
- The given central difference operators are carefully designed so that their eigenvalues sit on the imaginary axis.

- We have established that in each of the schemes we have replaced the exact ODE multiplier
- with an approximation, in this lecture denoted by:
- We found that this approximation is not uniform in wave number, i.e. there is a non-monotonic dependence on m (embedded in the error terms).
- This explains our observation of the spurious eigenvalues in the computer evaluations of the spectrum of the discrete operator.

- What should the eigenvalues of the 4th order derivative operator ideally tend to in the limit of decreasing dx?
- Example (10, 20, 80 points with the periodic length 2*pi):

>> [dxDeigs, dxD] = test4th(20);
>> dx = 2*pi/20;
>> e = sort(dxDeigs/dx);
>> e(1:10)
ans =
  -0.0000
  -0.0000
  -0.0000 - 0.9997i
  -0.0000 + 0.9997i
   0.0000 - 1.6233i
   0.0000 + 1.6233i
  -0.0000 - 1.9901i
  -0.0000 + 1.9901i
   0.0000 - 2.9290i
   0.0000 + 2.9290i

>> [dxDeigs, dxD] = test4th(80);
>> dx = 2*pi/80;
>> e = sort(dxDeigs/dx);
>> e(1:10)
ans =
   0.0000 - 0.0000i
   0.0000 + 0.0000i
  -0.0000 - 1.0000i
  -0.0000 + 1.0000i
  -0.0000 - 1.6639i
  -0.0000 + 1.6639i
  -0.0000 - 2.0000i
  -0.0000 + 2.0000i
   0.0000 - 2.9997i
   0.0000 + 2.9997i

>> [dxDeigs, dxD] = test4th(10);
>> dx = 2*pi/10;
>> sort(dxDeigs/dx)
ans =
   0.0000 - 0.0000i
   0.0000 + 0.0000i
   0.0000 - 0.9950i
   0.0000 + 0.9950i
        0 - 1.4996i
        0 + 1.4996i
        0 - 1.8623i
        0 + 1.8623i
  -0.0000 - 2.1741i
  -0.0000 + 2.1741i

- Recall: we referred to the backward differencing operator as the "upwind derivative".
- This is due to the physical placement of the sampled data relative to the advection direction:
- i.e. the differencing only uses data from the upstream (upwind) direction.
- More on this next time.

[Figure: grid points on the x-axis with the advection direction indicated]

- Introduction of some standard finite-difference methods.
- Define (for finite difference schemes):
- Consistency
- Stability
- Accuracy
- Courant-Friedrichs-Lewy condition for stability

- We will examine convergence:
- Continuous time+discrete space (as seen today)
- Discrete time and discrete space
- Lax-Richtmyer equivalence theory (consistency + stability => convergence)