Iterative Solution Methods
  • Starts with an initial approximation for the solution vector (x(0))
  • At each iteration, updates the x vector by using the system Ax = b
  • During the iterations the matrix A is not changed, so sparsity is preserved
  • Each iteration involves a matrix-vector product
  • If A is sparse, this product can be done efficiently
Iterative solution procedure
  • Write the system Ax=b in an equivalent form

x = Ex + f (like x = g(x) for fixed-point iteration)

  • Starting with x(0), generate a sequence of approximations {x(k)} iteratively by

x(k+1) = Ex(k) + f

  • The representations of E and f depend on the type of method used
  • For every method, E and f are obtained from A and b, just in different ways (a generic sketch of the loop follows below)
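A minimal numpy sketch of this generic loop (the function name, tolerance, and iteration cap are illustrative assumptions, not from the slides):

```python
import numpy as np

def fixed_point_iteration(E, f, x0, tol=1e-8, max_iter=500):
    """Generic x(k+1) = E x(k) + f iteration."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = E @ x + f
        if np.linalg.norm(x_new - x) < tol:  # successive updates agree
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

Each method below differs only in how it builds E and f from A and b.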
Convergence
  • As k, the sequence {xk} converges to the solution vector under some conditions on E matrix
  • This imposes different conditions on A matrix for different methods
  • For the same A matrix, one method may converge while the other may diverge
  • Therefore for each method the relation between A and E should be found to decide on the convergence
Different Iterative Methods
  • Jacobi Iteration
  • Gauss-Seidel Iteration
  • Successive Over-Relaxation (SOR)
    • SOR is a method used to accelerate the convergence
    • Gauss-Seidel iteration is a special case of the SOR method
x(k+1) = Ex(k) + f iteration for the Jacobi method

A can be written as A = L + D + U, where D is the diagonal of A and L and U are its strictly lower and strictly upper triangular parts (a splitting, not a factorization)

Ax = b ⇒ (L + D + U)x = b

Dx(k+1) = -(L + U)x(k) + b

x(k+1) = -D⁻¹(L + U)x(k) + D⁻¹b

E = -D⁻¹(L + U)

f = D⁻¹b
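A minimal numpy sketch of this split (the function name and stopping rule are illustrative assumptions):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=500):
    """Jacobi iteration: x(k+1) = -D^(-1)(L+U) x(k) + D^(-1) b."""
    D = np.diag(np.diag(A))            # diagonal part of A
    LU = A - D                         # strictly lower + upper parts, L + U
    D_inv = np.diag(1.0 / np.diag(A))  # assumes a nonzero diagonal
    E = -D_inv @ LU
    f = D_inv @ b
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x = E @ x + f
        if np.linalg.norm(b - A @ x) < tol:  # residual-based stop
            break
    return x
```

For a sparse A one would apply the update component-wise rather than forming E as a dense matrix, so that the sparsity is actually exploited.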

x(k+1) = Ex(k) + f iteration for Gauss-Seidel

Ax = b ⇒ (L + D + U)x = b

(D + L)x(k+1) = -Ux(k) + b

x(k+1) = -(D + L)⁻¹Ux(k) + (D + L)⁻¹b

E = -(D + L)⁻¹U

f = (D + L)⁻¹b
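A matching Gauss-Seidel sketch: instead of forming (D + L)⁻¹ explicitly, a forward sweep solves the lower-triangular system, using the newest values as soon as they are available (names are illustrative assumptions):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=500):
    """Gauss-Seidel: (D+L) x(k+1) = -U x(k) + b via a forward sweep."""
    n = len(b)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        for i in range(n):
            # x[:i] already holds new values; x[i+1:] still holds old ones
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x
```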

Comparison
  • Gauss-Seidel iteration generally converges more rapidly than Jacobi iteration, since it uses the latest updates
  • But there are cases where the Jacobi iteration converges and Gauss-Seidel does not
  • To accelerate the Gauss-Seidel method even further, the successive over-relaxation method can be used
Successive Over-Relaxation Method
  • The GS iteration can also be written as the current iterate plus a correction term:

x(k+1) = x(k) + D⁻¹(b - Lx(k+1) - Dx(k) - Ux(k))

  • Multiplying the correction term with a relaxation factor ω yields SOR, which can give faster convergence:

x(k+1) = x(k) + ωD⁻¹(b - Lx(k+1) - Dx(k) - Ux(k))
SOR

1 < ω < 2: over-relaxation (faster convergence)

0 < ω < 1: under-relaxation (slower convergence)

There is an optimum value for ω

Find it by trial and error (usually around 1.6)

x(k+1) = Ex(k) + f iteration for SOR

Dx(k+1) = (1 - ω)Dx(k) + ωb - ωLx(k+1) - ωUx(k)

(D + ωL)x(k+1) = [(1 - ω)D - ωU]x(k) + ωb

E = (D + ωL)⁻¹[(1 - ω)D - ωU]

f = ω(D + ωL)⁻¹b

The Conjugate Gradient Method
  • Converges if A is a symmetric positive definite matrix
  • Convergence is faster than that of the Jacobi and Gauss-Seidel iterations (a textbook sketch follows below)
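The slides do not show the algorithm itself; as a hedged reference, here is the textbook CG recurrence for a symmetric positive definite A (all names are illustrative assumptions):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-8, max_iter=500):
    """Textbook conjugate gradient for symmetric positive definite A."""
    x = np.asarray(x0, dtype=float)
    r = b - A @ x                  # initial residual
    p = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # optimal step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # residual norm below threshold
            break
        p = r + (rs_new / rs) * p  # next A-conjugate direction
        rs = rs_new
    return x
```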
Convergence of Iterative Methods

Define the solution vector x as the fixed point satisfying x = Ex + f

Define an error vector as e(k) = x(k) - x

Substitute this into x(k+1) = Ex(k) + f to obtain the error recursion e(k+1) = Ee(k)
Convergence of Iterative Methods

Applying e(k+1) = Ee(k) repeatedly gives e(k) = Eᵏe(0), a power of the iteration matrix acting on the initial error

The iterative method will converge for any initial iteration vector if the following condition is satisfied

Convergence condition: Eᵏ → 0 as k → ∞

Norm of a vector

A vector norm should satisfy these conditions:
  • ||x|| ≥ 0, and ||x|| = 0 if and only if x = 0
  • ||αx|| = |α| ||x|| for any scalar α
  • ||x + y|| ≤ ||x|| + ||y|| (triangle inequality)

Vector norms can be defined in different forms as long as the norm definition satisfies these conditions

Commonly used vector norms
  • Sum norm or ℓ1 norm: ||x||₁ = |x₁| + |x₂| + ... + |xₙ|
  • Euclidean norm or ℓ2 norm: ||x||₂ = sqrt(x₁² + x₂² + ... + xₙ²)
  • Maximum norm or ℓ∞ norm: ||x||∞ = maxᵢ |xᵢ| (computed in the sketch below)
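These map directly onto numpy's norm orders; the vector is an illustrative assumption and the commented values are worked by hand:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])
print(np.linalg.norm(x, 1))       # sum norm:     |3| + |-4| + |1| = 8
print(np.linalg.norm(x, 2))       # Euclidean:    sqrt(9 + 16 + 1) ≈ 5.10
print(np.linalg.norm(x, np.inf))  # maximum norm: max(3, 4, 1) = 4
```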

Norm of a matrix

A matrix norm should satisfy these conditions:
  • ||A|| ≥ 0, and ||A|| = 0 if and only if A = 0
  • ||αA|| = |α| ||A|| for any scalar α
  • ||A + B|| ≤ ||A|| + ||B||
  • ||AB|| ≤ ||A|| ||B||

Important identity: ||Ax|| ≤ ||A|| ||x||, which relates the matrix norm to the vector norm

Commonly used matrix norms
  • Maximum column-sum norm or ℓ1 norm: ||A||₁ = maxⱼ Σᵢ |aᵢⱼ|
  • Spectral norm or ℓ2 norm: ||A||₂ = sqrt(largest eigenvalue of AᵀA)
  • Maximum row-sum norm or ℓ∞ norm: ||A||∞ = maxᵢ Σⱼ |aᵢⱼ|

Example
  • Compute the ℓ1 and ℓ∞ norms of the matrix (the matrix and its column/row sums 17, 13, 15, 16, 19, 10 appear on the original slide; see the sketch below)
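The example matrix did not survive the transcript, so this sketch runs the same computation on a stand-in 3×3 matrix (an assumption, not the slide's matrix):

```python
import numpy as np

A = np.array([[ 5.0, -2.0,  3.0],
              [-3.0,  9.0,  1.0],
              [ 2.0, -1.0, -7.0]])
print(np.linalg.norm(A, 1))       # max column sum: max(10, 12, 11) = 12
print(np.linalg.norm(A, np.inf))  # max row sum:    max(10, 13, 10) = 13
```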
Convergence condition

Express E in terms of its modal matrix P and Λ: E = PΛP⁻¹, so that Eᵏ = PΛᵏP⁻¹

Λ: diagonal matrix with the eigenvalues of E on the diagonal

Eᵏ → 0 as k → ∞ exactly when every eigenvalue of E has magnitude less than 1

Sufficient condition for convergence

||E|| < 1 (in any matrix norm) is a sufficient condition for convergence

If the magnitude of every eigenvalue of the iteration matrix E is less than 1, then the iteration is convergent

It is easier to compute the norm of a matrix than to compute its eigenvalues, so the norm test is the more practical check (see the sketch below)
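A sketch of both tests for the Jacobi iteration matrix of a sample system (the matrix A is an illustrative assumption):

```python
import numpy as np

A = np.array([[4.0, 1.0, -1.0],
              [2.0, 5.0,  1.0],
              [1.0, 1.0,  3.0]])
D_inv = np.diag(1.0 / np.diag(A))
E = -D_inv @ (A - np.diag(np.diag(A)))   # Jacobi iteration matrix

print(np.linalg.norm(E, np.inf))         # cheap sufficient test: < 1?
print(max(abs(np.linalg.eigvals(E))))    # spectral radius, the sharper test
```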

Convergence of Jacobi iteration

Evaluate the infinity (maximum row-sum) norm of E = -D⁻¹(L + U):

||E||∞ = maxᵢ Σⱼ≠ᵢ |aᵢⱼ| / |aᵢᵢ|

This norm is less than 1 exactly when |aᵢᵢ| > Σⱼ≠ᵢ |aᵢⱼ| for every row, i.e. when A is a diagonally dominant matrix

If A is a diagonally dominant matrix, then Jacobi iteration converges for any initial vector (a check is sketched below)
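A one-function row-wise dominance check (the function name is an illustrative assumption):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| > sum over j != i of |a_ij|, for every row."""
    diag = np.abs(np.diag(A))
    off_diag = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_diag))
```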

Stopping Criteria
  • Ax = b
  • At any iteration k, the residual term is

r(k) = b - Ax(k)

  • Check the norm of the residual term

||b - Ax(k)||

  • If it is less than a threshold value, stop (see the sketch below)
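The residual test is a one-liner in numpy (the threshold name and default are illustrative assumptions):

```python
import numpy as np

def converged(A, b, x, tol=1e-8):
    """Stop when the residual norm ||b - A x(k)|| drops below tol."""
    return np.linalg.norm(b - A @ x) < tol
```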
Example 1 (Jacobi Iteration)

The coefficient matrix (shown on the original slide) is diagonally dominant

Example 1 continued...

The matrix is diagonally dominant, so the Jacobi iterations converge

Example 2

The matrix is not diagonally dominant

Example 2 continued...

The residual term is increasing at each iteration, so the iterations are diverging.

Note that the matrix is not diagonally dominant

Convergence of Gauss-Seidel iteration
  • GS iteration converges for any initial vector if A is a diagonally dominant matrix
  • GS iteration converges for any initial vector if A is a symmetric and positive definite matrix
  • Matrix A is positive definite if

xᵀAx > 0 for every nonzero vector x

positive definite matrices
Positive Definite Matrices
  • A symmetric matrix is positive definite if all its eigenvalues are positive
  • A symmetric diagonally dominant matrix with positive diagonal entries is positive definite
  • If a matrix is positive definite
    • All the diagonal entries are positive
    • The largest (in magnitude) element of the whole matrix must lie on the diagonal
Positive Definiteness Check
  • Not positive definite: the largest (in magnitude) element does not lie on the diagonal
  • Not positive definite: not all diagonal entries are positive
  • Positive definite: symmetric, diagonally dominant, and all diagonal entries are positive

(The example matrices appear on the original slide.)

Positive Definiteness Check

The matrix (shown on the original slide) is diagonally dominant and all its diagonal entries are positive, but it is not symmetric.

A decision cannot be made just by inspecting the matrix.

To decide, check if all the eigenvalues are positive (sketched below)
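A sketch of that eigenvalue check. Since xᵀAx depends only on the symmetric part of A, this variant (an addition beyond the slide, which simply says to check the eigenvalues) tests the eigenvalues of (A + Aᵀ)/2:

```python
import numpy as np

# Illustrative non-symmetric matrix: diagonally dominant, positive diagonal.
A = np.array([[ 4.0,  1.0, -1.0],
              [-2.0,  6.0,  1.0],
              [ 1.0, -2.0,  5.0]])
sym = (A + A.T) / 2.0                       # x^T A x = x^T sym x for all x
print(np.all(np.linalg.eigvalsh(sym) > 0))  # True => positive definite
```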

Example 1 continued...

When both the Jacobi and Gauss-Seidel iterations converge, Gauss-Seidel converges faster (the original slide tabulates the Jacobi and Gauss-Seidel iterates side by side)

Convergence of SOR method
  • If 0 < ω < 2, the SOR method converges for any initial vector if the matrix A is symmetric and positive definite
  • If ω > 2, the SOR method diverges
  • If 0 < ω < 1, the SOR method converges, but the convergence rate is slower (deceleration) than that of the Gauss-Seidel method
Operation count
  • The operation count for Gaussian elimination or LU decomposition was O(n³), order of n³
  • For iterative methods, the number of scalar multiplications is O(n²) at each iteration
  • If the total number of iterations required for convergence is much less than n, then iterative methods are more efficient than direct methods
  • Iterative methods are also well suited for sparse matrices, where each matrix-vector product costs far less than O(n²)