
Numerical Methods for Partial Differential Equations



  1. Numerical Methods for Partial Differential Equations CAAM 452 Spring 2005 Lecture 5 Summary of convergence checks for LMM and one-step time stepping Instructor: Tim Warburton

  2. Today • I will summarize the steps we took to examine the stability, consistency, and thus convergence of the AB and RK schemes.

  3. AB • The general AB (Adams-Bashforth) scheme reads: • To check linear stability we assume that f is linear in u (we can also achieve a similar goal by assuming f is Lipschitz). • Assuming: • We know that all solutions of this recurrence relation are bounded provided the corresponding stability polynomial has roots satisfying the root condition (all roots have magnitude at most 1, and any root of magnitude exactly 1 is simple).

  4. AB cont • We must then check the roots of: • When all roots satisfy |z|<1 we are sure that the method is stable. What we wish to know is which values of mu*dt guarantee this condition. We can think of this formula as defining a map between the (complex) plane of roots and the (complex) plane of mu*dt. • We must determine the image of the unit circle in the root plane in the mu*dt plane.

  5. AB cont • Given the root equation: • We determine the map in a trivial manner: • Example AB3:
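The boundary-locus computation for AB3 can be sketched in a few lines of NumPy. This is an illustration, not the lecture's own code; the AB3 coefficients 23/12, -16/12, 5/12 and the resulting stability polynomial are the standard ones:

```python
import numpy as np

# AB3: u^{n+1} = u^n + dt*(23 f^n - 16 f^{n-1} + 5 f^{n-2})/12.
# For f = mu*u, substituting u^n = z^n gives
#   z^3 - z^2 = (mu*dt) * (23 z^2 - 16 z + 5)/12,
# so the boundary locus (image of the unit circle |z| = 1) is
#   mu*dt = rho(z)/sigma(z).
theta = np.linspace(0.0, 2.0 * np.pi, 1001)
z = np.exp(1j * theta)                        # the unit circle in the root plane
rho = z**3 - z**2
sigma = (23.0 * z**2 - 16.0 * z + 5.0) / 12.0
mudt = rho / sigma                            # boundary of the AB3 stability region

# z = 1 maps to mu*dt = 0, and z = -1 maps to -6/11, the left end of the
# stability interval on the negative real axis.
print(mudt[0])      # 0
print(mudt[500])    # approximately -6/11
```

Plotting `mudt` in the complex plane traces the AB3 stability region boundary shown on the slide.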

  6. AB3 Stability

  7. AB Accuracy • Next we perform a local truncation error analysis. • That is, we estimate the correction required at each step for the exact solution of the ODE to be a solution of the numerical method:

  8. cont • We now use Taylor expansions to express the truncation error in terms of u and its derivatives at tn: • We found that Tn scaled as: • Finally we apply Dahlquist’s equivalence theorem (consistency and stability together imply convergence) and its corollary, which gives us global error estimates:
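As a numerical sanity check on this analysis, one can confirm the expected third-order convergence of AB3 on the model problem u' = mu*u. A sketch, with exact starting values for the two start-up steps (the function name and parameters are illustrative, not from the lecture):

```python
import numpy as np

def ab3_solve(mu, u0, T, nsteps):
    """Integrate u' = mu*u with AB3, using exact starting values."""
    dt = T / nsteps
    t = dt * np.arange(nsteps + 1)
    u = np.zeros(nsteps + 1)
    u[:3] = u0 * np.exp(mu * t[:3])   # exact values for the start-up steps
    f = np.zeros(nsteps + 1)
    f[:3] = mu * u[:3]
    for n in range(2, nsteps):
        u[n + 1] = u[n] + dt * (23*f[n] - 16*f[n-1] + 5*f[n-2]) / 12
        f[n + 1] = mu * u[n + 1]
    return u[-1]

mu, u0, T = -1.0, 1.0, 2.0
exact = u0 * np.exp(mu * T)
e1 = abs(ab3_solve(mu, u0, T, 100) - exact)
e2 = abs(ab3_solve(mu, u0, T, 200) - exact)
rate = np.log2(e1 / e2)   # observed global convergence order
print(rate)               # close to 3, as the truncation error analysis predicts
```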

  9. RK Analysis • A similar analysis is required for the Runge-Kutta schemes. • The stability analysis was straightforward as we found that the recurrence for a linear f was: • For stability we required: • We examined this by setting

  10. RK3 Stability • and let Matlab do the work.
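For linear f = mu*u, one RK3 step multiplies u by the degree-3 Taylor polynomial of exp(mu*dt), so the stability region can be checked directly (a sketch of the same computation the slide delegates to Matlab):

```python
import numpy as np

def g_rk3(z):
    """Amplification factor of RK3 for u' = mu*u, with z = mu*dt."""
    return 1 + z + z**2 / 2 + z**3 / 6

# On the imaginary axis |g(iy)|^2 = 1 - y^4/12 + y^6/36, so RK3 is stable
# for |y| <= sqrt(3): unlike Euler-forward and RK2, it contains a genuine
# segment of the imaginary axis.
print(abs(g_rk3(1.7j)))              # just inside:  < 1
print(abs(g_rk3(1.8j)))              # just outside: > 1
print(abs(g_rk3(np.sqrt(3) * 1j)))  # on the boundary: 1 up to rounding
```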

  11. RK Accuracy • The error estimate process for RK schemes is a little more complicated, but still boils down to dt expansions of the one-step operator. • Again, once we have consistency and stability we have convergence.

  12. Implicit Schemes • Later on in the semester we will discuss implicit schemes. • Then we will also discuss the Butcher tableau formulation of RK methods.

  13. After Time Stepping… Back to PDEs • We have now proposed two quite distinct but reliable ways to advance an ODE approximately in time. • But our original goal was to solve a PDE, and not just for one mode.

  14. Sample Points in Time and Space • We chose to sample the ODE at discrete points on the time axis. • We can also make the same choice for the spatial representation at each discrete time level. • This choice leaves us with the dilemma of how to reconstruct an approximation to the derivative of discretely sampled data.

  15. Choice of Stencil • (Figure: several candidate stencils.) And many more combinations.

  16. Use Taylor’s Series • In much the same spirit as the time domain, we can expand the solution at each grid point in terms of u and its derivatives as a power series in dx (the spacing between spatial grid points). • Examples:

  17. Left Differencing • Using the data points on this stencil we wish to compute an approximation to the derivative of u at • Dividing and rearranging we find:

  18. Left Differencing • This gives us an obvious one-sided formula for the derivative. We will decide later whether it is suitable for time-stepping with AB or RK.

  19. Left Difference Operator • We define the following difference operator (an appalling but fairly standard notational choice): • where the subscript m denotes the function sampled at the m-th data point. • The difference formula shows that delta- is a first order approximation of the derivative operator.
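The claimed first-order accuracy is easy to verify numerically; a sketch using u = sin at the arbitrary test point x = 1:

```python
import numpy as np

u, du = np.sin, np.cos    # test function and its exact derivative
x = 1.0

def delta_minus(u, x, dx):
    """Backward (left) difference: (u(x) - u(x - dx)) / dx."""
    return (u(x) - u(x - dx)) / dx

e1 = abs(delta_minus(u, x, 0.01)  - du(x))
e2 = abs(delta_minus(u, x, 0.005) - du(x))
print(e1 / e2)   # about 2: halving dx halves the error, i.e. first order
```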

  20. Right Difference Operator • i.e. delta+ is a first order approximation of the derivative operator

  21. Central Differencing Operator • Subtracting the forward and backward expansions: • We obtain:

  22. cont • i.e. the central differencing operator is second order accurate in dx • Note: for the error term to be bounded we require that the third derivative be bounded.
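The same experiment confirms the second-order accuracy of the central operator (a sketch, same test function as before):

```python
import numpy as np

u, du = np.sin, np.cos
x = 1.0

def delta_0(u, x, dx):
    """Central difference: (u(x + dx) - u(x - dx)) / (2 dx)."""
    return (u(x + dx) - u(x - dx)) / (2 * dx)

e1 = abs(delta_0(u, x, 0.01)  - du(x))
e2 = abs(delta_0(u, x, 0.005) - du(x))
print(e1 / e2)   # about 4: halving dx quarters the error, i.e. second order
```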

  23. Approximating the Second Derivative • We can create an approximation of the second derivative:
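A sketch of the standard 3-point second-derivative stencil and its order, assuming the slide's construction is the usual sum of the forward and backward expansions:

```python
import numpy as np

u = np.sin
d2u = lambda x: -np.sin(x)   # exact second derivative of sin
x = 1.0

def delta2(u, x, dx):
    """3-point second-derivative stencil: (u(x+dx) - 2 u(x) + u(x-dx)) / dx^2."""
    return (u(x + dx) - 2 * u(x) + u(x - dx)) / dx**2

e1 = abs(delta2(u, x, 0.02) - d2u(x))
e2 = abs(delta2(u, x, 0.01) - d2u(x))
print(e1 / e2)   # about 4, i.e. second order in dx
```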

  24. cont

  25. Delta Operators

  26. Using the delta’s: Example 1 We use:

  27. Example 2 • We can use the difference operators to eliminate the third order spatial derivative:

  28. Example 2 cont • We can expand out the difference operator • And we have a super difference formula for the derivative which is 4th order accurate! • What is the stencil for this formula ? (volunteer)
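Written out, this construction yields the standard 4th-order central difference on the stencil {m-2, m-1, m+1, m+2} (an assumption, but consistent with the 12*dxD matrix printed on slide 39): u'(x_m) ≈ (u_{m-2} - 8 u_{m-1} + 8 u_{m+1} - u_{m+2}) / (12 dx). Its order can be checked numerically:

```python
import numpy as np

u, du = np.sin, np.cos
x = 1.0

def d4(u, x, dx):
    """4th-order central difference on the stencil {m-2, m-1, m+1, m+2}."""
    return (u(x - 2*dx) - 8*u(x - dx) + 8*u(x + dx) - u(x + 2*dx)) / (12 * dx)

e1 = abs(d4(u, x, 0.04) - du(x))
e2 = abs(d4(u, x, 0.02) - du(x))
print(e1 / e2)   # about 16: halving dx cuts the error by 2^4, i.e. 4th order
```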

  29. Example 2 cont • We can think of this difference formula as a matrix operator acting on the point data values aligned in a vector: • Becomes

  30. For 10 Points on a Periodic Interval • Notice: the matrix is skew-symmetric, hence has purely imaginary eigenvalues.
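The matrix and its spectrum can be reproduced directly. A sketch; the construction below is assumed to match the 12*dxD printout on slide 39, and dxD here denotes the differentiation matrix scaled by dx:

```python
import numpy as np

N = 10
# Periodic 4th-order differentiation matrix, scaled by dx:
# (dx*D u)_m = (u_{m-2} - 8 u_{m-1} + 8 u_{m+1} - u_{m+2}) / 12.
dxD = np.zeros((N, N))
for m in range(N):
    dxD[m, (m + 1) % N] =  8 / 12
    dxD[m, (m + 2) % N] = -1 / 12
    dxD[m, (m - 1) % N] = -8 / 12
    dxD[m, (m - 2) % N] =  1 / 12

assert np.allclose(dxD, -dxD.T)    # skew-symmetric...
lam = np.linalg.eigvals(dxD)
print(np.abs(lam.real).max())      # ...so eigenvalues are purely imaginary
print(np.abs(lam.imag).max())      # about 1.36604, matching the Matlab output
```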

  31. Diagonalize • We now use this formula in the PDE, which we evaluate at the 10 spatial sample points: • This is an identity (although we are a little fuzzy about the O(dx^4) term).

  32. Some Notation • We will represent the set of data values at the spatial points as a vector:

  33. cont • The system matrix is an approximation of the derivative operator so we will denote it as D

  34. PDE → System of ODEs → Decoupled ODEs • So we can express the PDE as: • Let’s make a semi-discrete scheme: • As a theoretical exercise we can diagonalize the skew-symmetric D operator (a unitary change of variables reduces it to a diagonal matrix). • Then:

  35. cont • The system of 10 ODEs: • is now totally decoupled, so we can apply our extensive knowledge of treating scalar ODEs to each of the components of vhat. • All we have to do is make sure that dt times each eigenvalue of the cD matrix is squeezed into the stability region of the time-stepping scheme, by choosing a small enough dt.

  36. cont • By linearity of the right hand side we can be sure that time stepping the original ODE (say using AB or RK) will be fine as long as dt times each eigenvalue fits in the relevant stability region… • We now need to solve: • Noting our studies of the stability regions, the fact that all the eigenvalues sit on the imaginary axis rules out: • Euler-Forward (unconditionally unstable) • AB2 and RK2 (at best marginally stable)

  37. cont • In this case we can use Gerschgorin’s theorem to bound the magnitude of the eigenvalues. • The Gerschgorin disk for each row is centered at zero with radius:
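For this matrix every row has the same off-diagonal sum, so the Gerschgorin radius (in the dx-scaled variable) is (1 + 8 + 8 + 1)/12 = 1.5. A sketch comparing the bound to the actual spectrum, with the matrix construction assumed to match the 12*dxD printout on slide 39:

```python
import numpy as np

N = 10
dxD = np.zeros((N, N))   # the dx-scaled 4th-order periodic difference matrix
for m in range(N):
    dxD[m, (m + 1) % N], dxD[m, (m + 2) % N] =  8/12, -1/12
    dxD[m, (m - 1) % N], dxD[m, (m - 2) % N] = -8/12,  1/12

# Gerschgorin: every eigenvalue lies in a disk centered at the diagonal entry
# (here 0) with radius the sum of the off-diagonal magnitudes in its row.
radius = np.abs(dxD).sum(axis=1).max()   # (1 + 8 + 8 + 1)/12 = 1.5
lam = np.linalg.eigvals(dxD)
print(radius)               # 1.5
print(np.abs(lam).max())    # about 1.366 <= 1.5: the bound holds
```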

  38. cont (dt estimate) • The skew symmetry tells us the eigenvalues are on the imaginary axis, so we know that if: • is in the stability region, we have sufficient conditions for stability of the ODE system. • This is a pretty good estimate, as the eigenvalues given by Matlab for the 10x10 matrix are:

  39. Comment on Gerschgorin Bound
>> [v,dxD] = test4th(10);
>> v
v =
   0 - 1.36603956377562i
   0 - 1.17011114634479i
   0 - 0.94222308910582i
   0 - 0.62520425034077i
   0 - 0.00000000000000i
   0 + 0.00000000000000i
   0 + 0.62520425034077i
   0 + 0.94222308910582i
   0 + 1.17011114634479i
   0 + 1.36603956377562i
• The Gerschgorin estimate was pretty good in this case (we suggested eigenvalues in i*[-1.5,1.5]).
>> 12*dxD
ans =
     0   8  -1   0   0   0   0   0   1  -8
    -8   0   8  -1   0   0   0   0   0   1
     1  -8   0   8  -1   0   0   0   0   0
     0   1  -8   0   8  -1   0   0   0   0
     0   0   1  -8   0   8  -1   0   0   0
     0   0   0   1  -8   0   8  -1   0   0
     0   0   0   0   1  -8   0   8  -1   0
     0   0   0   0   0   1  -8   0   8  -1
    -1   0   0   0   0   0   1  -8   0   8
     8  -1   0   0   0   0   0   1  -8   0

  40. Note on Those Eigenvalues • What should the eigenvalues of the derivative operator ideally tend to in the limit of decreasing dx? • Example (10, 20, 80 points with the periodic length 2*pi):
>> [dxDeigs, dxD] = test4th(20);
>> dx = 2*pi/20;
>> e = sort(dxDeigs/dx);
>> e(1:10)
ans =
  -0.0000
  -0.0000
  -0.0000 - 0.9997i
  -0.0000 + 0.9997i
   0.0000 - 1.6233i
   0.0000 + 1.6233i
  -0.0000 - 1.9901i
  -0.0000 + 1.9901i
   0.0000 - 2.9290i
   0.0000 + 2.9290i
>> [dxDeigs, dxD] = test4th(80);
>> dx = 2*pi/80;
>> e = sort(dxDeigs/dx);
>> e(1:10)
ans =
   0.0000 - 0.0000i
   0.0000 + 0.0000i
  -0.0000 - 1.0000i
  -0.0000 + 1.0000i
  -0.0000 - 1.6639i
  -0.0000 + 1.6639i
  -0.0000 - 2.0000i
  -0.0000 + 2.0000i
   0.0000 - 2.9997i
   0.0000 + 2.9997i
>> [dxDeigs, dxD] = test4th(10);
>> dx = 2*pi/10;
>> sort(dxDeigs/dx)
ans =
   0.0000 - 0.0000i
   0.0000 + 0.0000i
   0.0000 - 0.9950i
   0.0000 + 0.9950i
        0 - 1.4996i
        0 + 1.4996i
        0 - 1.8623i
        0 + 1.8623i
  -0.0000 - 2.1741i
  -0.0000 + 2.1741i

  41. cont • From Fourier analysis we know that the field u, if sufficiently smooth, has a Fourier representation. • Taking the first derivative we obtain: • Also, each Fourier mode is an eigenfunction of the differentiation operator with eigenvalue ik.

  42. cont • By the completeness of the Fourier modes, we would hope the eigenvalues of the discrete approximation to the differentiation operator would converge to ik. • However, we see an oddity in the N=80 output above: • There is the +/-1.6639 in there. • More about this later.
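The oddity can be reproduced from the closed-form spectrum. Assuming the operator is the 4th-order periodic difference matrix above, it is circulant, so the Fourier modes are exact eigenvectors with eigenvalues lambda_k = i (8 sin θ - sin 2θ)/(6 dx), θ = k dx. Low modes (θ small) recover i*k, while near-Nyquist modes (θ near π) fold back to small magnitudes like ±1.6639 (a sketch):

```python
import numpy as np

N = 80
dx = 2 * np.pi / N
theta = np.arange(N) * dx
# Closed-form eigenvalues of the circulant 4th-order difference matrix D:
lam = 1j * (8 * np.sin(theta) - np.sin(2 * theta)) / (6 * dx)

print(lam[1].imag)    # ~ 1.0000: low modes match the exact Fourier value i*k
print(lam[2].imag)    # ~ 2.0000
print(lam[39].imag)   # ~ 1.6639: a high-wavenumber (near-Nyquist) mode whose
                      # eigenvalue folds back to small magnitude -- the oddity
```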

  43. Using RK4 • To avoid confusion with subscripts for the vector, we have used superscripts to represent the time level.
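Putting the pieces together, the semi-discrete system u_t = -c D u can be advanced with classical RK4. A sketch with assumed parameters (c = 1, N = 40 points on a 2π-periodic interval, dt chosen safely inside the stability region); it is not the lecture's own code:

```python
import numpy as np

# Advect u_t + c u_x = 0 on a periodic interval: 4th-order differences in
# space, classical RK4 in time.
c, N = 1.0, 40
dx = 2 * np.pi / N
x = dx * np.arange(N)

D = np.zeros((N, N))
for m in range(N):
    D[m, (m + 1) % N], D[m, (m + 2) % N] =  8, -1
    D[m, (m - 1) % N], D[m, (m - 2) % N] = -8,  1
D /= 12 * dx

def rhs(u):
    return -c * (D @ u)

u = np.sin(x)                     # initial condition
dt = 0.5 * dx                     # well inside the RK4 stability region
nsteps = round(2 * np.pi / dt)    # advance exactly one period
for _ in range(nsteps):           # classical RK4 step
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt * (k1 + 2*k2 + 2*k3 + k4) / 6

err = np.abs(u - np.sin(x)).max()   # exact solution returns to sin(x)
print(err)                          # small: stable and accurate
```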

  44. Using AB4

  45. That’s a Scheme • Without much trouble we have created a stable finite difference scheme for the advection equation – given that the local truncation errors are small, we might expect the scheme to be high order too. • We will examine the accuracy and conditions for convergence of this kind of finite difference scheme next lecture. • However, I did steer the choice of spatial difference discretization towards a skew-symmetric operator. • I will motivate that choice next lecture.

  46. Homework 2 (due 02/03/2005)
Q1) Implement the 6-stage, 5th-order Runge-Kutta from: http://www.library.cornell.edu/nr/bookcpdf/c16-2.pdf Use equations 16.2.4, 16.2.5, 16.2.6 with the coefficients at the top of the table on p. 717 (Cash-Karp coefficients provided on the website).
Q1a) Solve
Q1b) Estimate dtmax by concocting a Lipschitz constant for the right hand side of the ODE.
Q1c) Solve 4 times, using (i) dt=dtmax (ii) dt=dtmax/2 (iii) dt=dtmax/5 (iv) dt=dtmax/6. At each time step estimate the error (using 16.2.7) and compare with the actual error (plot both on a graph, use log scale if necessary).
