
LECTURE 2

LECTURE 2. Elliptic PDEs and the Finite Difference Method

Aim of Lecture. During this lecture we will discuss: Elliptic Partial Differential Equations; the Finite Difference Method; Taylor's Series Expansions; High Order Terms & Truncation; Finite Difference Discretisation; and Solution Methods.



  1. LECTURE 2 Elliptic PDEs and the Finite Difference Method

  2. Aim of Lecture
  • During this lecture we will discuss:
  • Elliptic Partial Differential Equations
  • Finite Difference Method
  • Taylor's Series Expansions
  • High Order Terms & Truncation
  • Finite Difference Discretisation
  • Solution Methods
  • Linear/Matrix System Solvers
  • Iterative Solvers: Jacobi, Gauss-Seidel
  • Use of Excel

  3. Elliptic PDEs
  • Elliptic PDEs represent phenomena that have already reached a steady state and are, hence, time independent.
  • Two classic elliptic equations are:
  • Laplace's Equation: ∂²u/∂x² + ∂²u/∂y² = 0, or ∇²u = 0
  • Poisson's Equation: ∂²u/∂x² + ∂²u/∂y² = g, or ∇²u = g
  • u(x,y) is the dependent variable and g is a constant

  4. Elliptic PDE – Example
  • Temperature profile u(x,y) around two computer chips on a printed circuit board, where g is the heat source.
  [Figure: two chips on a circuit board, each acting as a heat source g]

  5. FINITE DIFFERENCE METHOD

  6. Taylor's Series Expansions
  • Recall the Taylor's Series expansion of a function about a point x:
    u(x+h) = u(x) + h u′(x) + (h²/2!) u″(x) + (h³/3!) u‴(x) + O(h⁴)
  • We will use this to find approximate solutions to PDEs
  [Figure: points x−h, x, x+h on an axis]

  7. High Order Terms
  • Note the O(h⁴) term in the previous expansion
  • This refers to all powers of h greater than or equal to 4, e.g. h⁴, h⁵, h⁶, …
  • When h is small, these high order terms are tiny, e.g. h = 0.1 gives h⁴ = 0.0001
  • We can simplify expressions by making h small and ignoring the high order terms
  • This is known as truncation
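The effect of dropping the high order terms can be checked numerically. A minimal Python sketch, truncating the Taylor series of eˣ after the h³ term (the function eˣ is chosen here purely for illustration):

```python
import math

# Approximate exp(x + h) by the Taylor series truncated after the h^3 term,
# then compare the error against h^4 as h shrinks.
x = 1.0
for h in [0.1, 0.01, 0.001]:
    approx = math.exp(x) * (1 + h + h**2 / 2 + h**3 / 6)  # terms up to h^3
    error = abs(math.exp(x + h) - approx)
    print(f"h = {h}: error = {error:.3e}  (h^4 = {h**4:.1e})")
```

The printed errors shrink roughly in proportion to h⁴, which is exactly what the O(h⁴) notation promises.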

  8. Taylor Series Expansions in 2D
  • Consider the function u expanded about the point (x,y):
    u(x+Δx, y) = u(x,y) + Δx ∂u/∂x + (Δx²/2!) ∂²u/∂x² + (Δx³/3!) ∂³u/∂x³ + O(Δx⁴)
    u(x, y+Δy) = u(x,y) + Δy ∂u/∂y + (Δy²/2!) ∂²u/∂y² + (Δy³/3!) ∂³u/∂y³ + O(Δy⁴)
  [Figure: the point (x,y) and its neighbours (x±Δx, y) and (x, y±Δy)]

  9. Taylor Series Expansions in 2D
  • Now consider a regular grid of points and use the notation u_{i,j} = u(x_i, y_j), with x_i = iΔx and y_j = jΔy
  • This gives
    u_{i+1,j} = u_{i,j} + Δx (∂u/∂x)_{i,j} + (Δx²/2!) (∂²u/∂x²)_{i,j} + …   (1)
    u_{i−1,j} = u_{i,j} − Δx (∂u/∂x)_{i,j} + (Δx²/2!) (∂²u/∂x²)_{i,j} − …   (2)
  [Figure: grid with indices i−1, i, i+1 and j−1, j, j+1]

  10. Finite Differences
  • We can rearrange the Taylor's series:
    (1) gives  ∂u/∂x ≈ (u_{i+1,j} − u_{i,j})/Δx   (known as the forward difference)
    (2) gives  ∂u/∂x ≈ (u_{i,j} − u_{i−1,j})/Δx   (known as the backward difference)

  11. Finite Differences
  • We can also add and subtract the Taylor's series:
    (1) − (2) gives  ∂u/∂x ≈ (u_{i+1,j} − u_{i−1,j})/(2Δx)   (known as the central difference)
    (1) + (2) gives  ∂²u/∂x² ≈ (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/Δx²   (also known as the central difference [2nd order])

  12. Finite Differences: Summary
  • We can rearrange the Taylor's series to get:
    ∂u/∂x ≈ (u_{i+1,j} − u_{i,j})/Δx   (forward difference)
    ∂u/∂x ≈ (u_{i,j} − u_{i−1,j})/Δx   (backward difference)
    ∂u/∂x ≈ (u_{i+1,j} − u_{i−1,j})/(2Δx)   (central difference)
    ∂²u/∂x² ≈ (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/Δx²   (central difference, 2nd order)
  • Then truncate the higher order terms and substitute for the differentials in the PDE
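These four formulas can be checked against a function with known derivatives. A short Python sketch (f = sin is an assumption made here for illustration; its exact derivatives are cos and −sin):

```python
import math

f = math.sin
x, h = 1.0, 1e-3

forward  = (f(x + h) - f(x)) / h                    # 1st-order accurate
backward = (f(x) - f(x - h)) / h                    # 1st-order accurate
central  = (f(x + h) - f(x - h)) / (2 * h)          # 2nd-order accurate
second   = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # approximates f''(x)

print(abs(forward - math.cos(x)))   # error ~ h,   around 4e-4
print(abs(central - math.cos(x)))   # error ~ h^2, around 9e-8
print(abs(second + math.sin(x)))    # f''(x) = -sin(x); error ~ h^2
```

The central difference error is roughly the square of the forward difference error, reflecting its 2nd-order truncation error.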

  13. Exercise • Write down the central difference approximations for:

  14. Truncation Error
  • Approximating derivatives, in this case using finite differences, is known as discretisation.
  • These approximations will result in errors known as truncation error.

  15. Finite Difference Method – Example
  • Consider Poisson's equation: ∂²u/∂x² + ∂²u/∂y² = g
  • Discretise using central differences
  • Difference formula for each node (i,j):
    (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/Δx² + (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/Δy² = g
  [Figure: grid with indices i−1, i, i+1 and j−1, j, j+1]

  16. Finite Difference Method – Example
  • Consider the case Δx = Δy = h
  • Then
    u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − 4u_{i,j} = h²g
  • The difference equation can be written as:
    u_{i,j} = (u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − h²g)/4
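This update formula can be applied node by node, sweeping the grid repeatedly until the values settle. A minimal Python sketch (a 5 × 5 grid with zero boundaries and g = 1 are assumptions made here for illustration; the in-place sweep is Gauss-Seidel style, a method covered later in the lecture):

```python
import numpy as np

n, g = 5, 1.0            # 5 x 5 nodes, so a 3 x 3 block of interior unknowns
h = 1.0 / (n - 1)
u = np.zeros((n, n))     # boundary rows/columns stay at u = 0

for _ in range(200):     # sweep until the updates settle
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            # the slide's update: u_ij = (sum of 4 neighbours - h^2 g) / 4
            u[i, j] = (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1]
                       - h**2 * g) / 4.0

print(u[2, 2])           # centre value of the approximate solution
```

After the sweeps converge, each interior node satisfies the difference equation to machine precision.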

  17. Example
  • Consider the PDE shown, on a square domain with zero boundary conditions (u = 0)
  [Figure: unit square with corners (0,0), (1,0), (0,1), (1,1) and u = 0 on all four sides]

  18. Approximate Solution
  • Represent using a Finite Difference grid with a 4 × 4 array of nodes (i, j = 1, …, 4), so Δx = Δy = h = 1/3
  • Need to find approximations to u for all nodal values
  • However, we know that u = 0 on all boundaries
  • So we need only find approximations for u at the four internal nodes
  • Required values are u_{2,2}, u_{3,2}, u_{2,3}, u_{3,3}
  [Figure: 4 × 4 grid with indices i = 1…4, j = 1…4]

  19. Example
  • In general we have:
    u_{i,j} = (u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − h²g)/4
  • Or, in terms of the 4 unknowns (the boundary values being zero):
    4u_{2,2} − u_{3,2} − u_{2,3} = −h²g
    4u_{3,2} − u_{2,2} − u_{3,3} = −h²g
    4u_{2,3} − u_{2,2} − u_{3,3} = −h²g
    4u_{3,3} − u_{3,2} − u_{2,3} = −h²g
  • So that we have a 4 × 4 linear system Au = b, and then u can be found by solving it (e.g. using Matlab)
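The same solve can be sketched in Python with NumPy in place of Matlab. The unknowns are ordered [u22, u32, u23, u33]; h = 1/3 follows from the unit square, while the source value g = 1 is an assumption made here purely for illustration (the slides do not state it):

```python
import numpy as np

h, g = 1.0 / 3.0, 1.0
# One row per interior node: 4 on the diagonal, -1 for each interior neighbour.
A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.full(4, -h**2 * g)

u = np.linalg.solve(A, b)   # the "e.g. using Matlab" step on the slide
print(u)                    # by symmetry, all four values equal -h^2*g/2
```

Because the four nodes are symmetric in the unit square, the solution has all four components equal, which is a handy sanity check on the assembled matrix.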

  20. Solvers
  • Notice that the Finite Difference method will generally result in a matrix system of the form Au = b, where
    u = [u1, u2, …, un]T,  b = [b1, b2, …, bn]T
  and A is the n × n coefficient matrix.

  21. Solvers
  • Generally in 2D we will get matrices with five non-zero diagonals: the main diagonal, the two adjacent diagonals, and two outer diagonals offset by the grid width.
  • Note the banded structure of the matrix.

  22. Banded Matrices
  • Banded matrices arise in finite difference methods because (in 2D) the value at each node is directly dependent only on its four nearest neighbours.
  • Banded matrices are sparse (i.e. mostly full of zeroes) with a regular structure and hence can be stored in minimal space.
  • For example, if the finite difference grid is 100 by 100, the number of unknowns is 10,000 and the number of entries in the matrix is 100,000,000, which might require 1,600Mb (megabytes) to store in a computer.
  • However, by storing just the five non-zero diagonals we can reduce the storage requirement to around 50,000 values, or around 800Kb (kilobytes) = 0.8Mb.
  • This means the system can be solved much more rapidly.
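The sparse storage described above can be sketched with SciPy; the 100 × 100 grid below matches the slide's example, and the five-diagonal matrix is built from the 1D second-difference stencil via Kronecker products (one standard construction of the 2D five-point operator):

```python
import scipy.sparse as sp

N = 100                  # 100 x 100 grid -> 10,000 unknowns
I = sp.identity(N)
T = sp.diags([1, -2, 1], [-1, 0, 1], shape=(N, N))  # 1D second difference

# 2D five-point operator: differences in x plus differences in y.
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()

print(A.shape)   # (10000, 10000): 10^8 entries if stored densely
print(A.nnz)     # ~50,000 stored values, matching the slide's estimate
```

Only the non-zero values (plus index bookkeeping) are stored, so the memory saving is roughly the factor of 2,000 the slide quotes.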

  23. Direct and Iterative Solvers
  • Exact solution requires inversion of A: very slow, huge memory requirements.
  • Direct Solvers (Gaussian Elimination, etc.):
  • Need to store the whole matrix. (Disadvantage)
  • Slow, especially for large matrices. (Disadvantage)
  • Robust, even with ill-conditioned matrices. (Advantage)
  • Iterative Solvers (Jacobi, Gauss-Seidel, etc.):
  • Good for large matrix systems; no need to store the whole matrix. (Advantage)
  • Fast, even for large matrices. (Advantage)
  • Poor for ill-conditioned matrices; may converge only slowly. (Disadvantage)

  24. Iterative Solvers
  • Two classical examples are Jacobi and Gauss-Seidel
  • Consider the following system of equations:
  [Equations: 3 × 3 system not reproduced]
  • Start with initial vector x1 = [0, 0, 0]T. The final solution is x = [1, 1, 1]T. It takes 8 iterations for Jacobi and 6 iterations for Gauss-Seidel.
  [Tables: Jacobi and Gauss-Seidel iterates]
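Both methods are only a few lines of code. The slide's actual 3 × 3 system is not reproduced above, so the sketch below uses a stand-in diagonally dominant system (an assumption) that also has exact solution [1, 1, 1]:

```python
import numpy as np

def jacobi(A, b, x0, iters):
    # Every component is updated from the PREVIOUS iterate only.
    x = x0.copy()
    D = np.diag(A)              # diagonal entries
    R = A - np.diagflat(D)      # off-diagonal part
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, x0, iters):
    # Each component update immediately uses the newest values.
    x = x0.copy()
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x

# Stand-in system (assumed for illustration), exact solution [1, 1, 1]:
A = np.array([[4., -1., -1.], [-1., 4., -1.], [-1., -1., 4.]])
b = np.array([2., 2., 2.])
x0 = np.zeros(3)
print(jacobi(A, b, x0, 25))        # approaches [1, 1, 1]
print(gauss_seidel(A, b, x0, 25))  # converges in fewer sweeps
```

Counting sweeps until a tolerance is met (rather than fixing `iters`) would reproduce the slide's comparison of iteration counts.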

  25. Iterative Solvers – Matrix Version
  • Need to solve Ax = b
  • Let A = D + L + U (Diagonal + Lower triangle + Upper triangle)
  • The Jacobi method can be written as: Dx(k+1) = −(L+U)x(k) + b
  • The Gauss-Seidel method can be written as: (D+L)x(k+1) = −Ux(k) + b
  • Gauss-Seidel generally converges faster, as it uses the most recent information.
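The two splittings transcribe almost literally into code. A sketch (the 3 × 3 system is an assumption made here for illustration):

```python
import numpy as np

A = np.array([[4., -1., -1.], [-1., 4., -1.], [-1., -1., 4.]])
b = np.array([2., 2., 2.])

D = np.diag(np.diag(A))   # diagonal part
L = np.tril(A, -1)        # strictly lower triangle
U = np.triu(A, 1)         # strictly upper triangle

x = np.zeros(3)
for _ in range(30):
    # Jacobi:       D x(k+1) = -(L+U) x(k) + b
    x = np.linalg.solve(D, b - (L + U) @ x)

y = np.zeros(3)
for _ in range(30):
    # Gauss-Seidel: (D+L) y(k+1) = -U y(k) + b
    y = np.linalg.solve(D + L, b - U @ y)

print(x, y)   # both approach the exact solution [1, 1, 1]
```

Solving with D is trivial (a diagonal divide) and solving with D+L is a forward substitution, which is why both iterations are cheap per step.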

  26. Iterative Solvers – Convergence
  • To ensure convergence of an iterative solver such as Jacobi or Gauss-Seidel, we require Diagonal Dominance in the matrix, i.e. for each row i:
    |a_ii| > Σ_{j≠i} |a_ij|
  • In other words, the diagonal element in each row (or column) is greater in magnitude than the sum of the off-diagonal elements
  • We can sometimes rearrange the order of the equations to ensure diagonal dominance
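The row test above is easy to code directly. A small sketch (the two example matrices are assumptions chosen for illustration):

```python
import numpy as np

def is_diagonally_dominant(A):
    # For every row: |a_ii| > sum over j != i of |a_ij|.
    diag = np.abs(np.diag(A))
    off_diag = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off_diag))

A = np.array([[4., -1., -1.], [-1., 4., -1.], [-1., -1., 4.]])
B = np.array([[1., 3.], [2., 1.]])       # diagonal too small in each row
print(is_diagonally_dominant(A))         # True
print(is_diagonally_dominant(B))         # False
```

A matrix like B can sometimes be fixed by swapping its rows, which is exactly the rearrangement of equations the slide mentions.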

  27. Exercise
  • Write down the Jacobi method for the following system:
  • Would you expect it to converge?
  • If not, how would you rewrite it so that it would converge?

  28. Example – Gauss-Seidel
  • Example: use Excel to solve the following system:
  • Rewrite as Gauss-Seidel iterations

  29. Gauss-Seidel in Excel
  [Screenshot: FLAG cell (to reset, see Tutorial 1); the formula for X1 uses the old value of X2]

  30. Gauss-Seidel in Excel
  [Screenshot: the formula for X2 uses the current value of X1]
