Steepest Descent and Conjugate Gradients (CG)

PowerPoint slideshow about 'Steepest Descent and Conjugate Gradients (CG)' by serina-stephenson


Presentation Transcript
Steepest Descent and Conjugate Gradients (CG)
  • Solving the linear equation system Ax = b
  • Problem: the dimension n is too big, or there is not enough time for Gaussian elimination.

Iterative methods are used to get an approximate solution.

  • Definition (iterative method): given a starting point x_0, do steps of the form x_{i+1} = x_i + α_i d_i

which hopefully converge to the right solution x.
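A minimal sketch of such an iterative method in Python, assuming NumPy. Richardson iteration is chosen here only as the simplest illustration; it is not the method discussed in these slides, and the matrix and vectors are made-up example values:

```python
import numpy as np

# Richardson iteration x_{i+1} = x_i + w*(b - A x_i) -- the simplest
# iterative method. It converges for 0 < w < 2/lambda_max(A);
# here w = 0.2 is small enough for this example matrix.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = np.zeros(2)                          # starting point x_0

w = 0.2
for _ in range(500):
    x = x + w * (b - A @ x)              # one iterative step

print(np.allclose(A @ x, b))             # True
```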

Starting Issues
  • Solving Ax = b is equivalent to minimizing the quadratic form f(x) = 1/2 x^T A x - b^T x.
  • A has to be symmetric positive definite: A = A^T and x^T A x > 0 for all x ≠ 0.
  • For symmetric A the gradient is f'(x) = Ax - b, so x solves Ax = b exactly when it is a stationary point of f. If A is also positive definite, this solution is the unique minimum of f.
  • error: e_i = x_i - x

The norm of the error shows how far we are from the exact solution, but it can't be computed without knowing the exact solution x.

  • residual: r_i = b - A x_i = -A e_i

The residual can be calculated.
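A small sketch illustrating the difference, assuming NumPy; the matrix and vectors are made-up example values:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x_i = np.array([2.0, 1.0])               # current approximation

# The residual only needs A, b and x_i -- it is always computable.
r_i = b - A @ x_i

# The error needs the exact solution x (computed here only to illustrate).
x_exact = np.linalg.solve(A, b)
e_i = x_i - x_exact

# Sanity check of the relation r_i = -A e_i:
print(np.allclose(r_i, -A @ e_i))        # True
```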

Steepest Descent
  • We are at the point x_i. How do we reach x?
  • Idea: go in the direction in which f decreases most quickly, i.e. the negative gradient -f'(x_i) = b - A x_i = r_i.
  • How far should we go?

Choose α_i so that f(x_i + α_i r_i) is minimized. Setting the derivative with respect to α to zero gives

  α_i = (r_i^T r_i) / (r_i^T A r_i)
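A quick numerical check of this step size, assuming NumPy; the concrete A, b and x_i are made-up example values:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x_i = np.array([2.0, 1.0])

f = lambda x: 0.5 * x @ A @ x - b @ x    # the quadratic form from above

r = b - A @ x_i                          # steepest-descent direction
alpha = (r @ r) / (r @ A @ r)            # optimal step size

# f restricted to the line x_i + alpha*r is a parabola with its minimum at
# alpha, so stepping slightly shorter or further can only increase f.
assert f(x_i + alpha * r) < f(x_i + 0.9 * alpha * r)
assert f(x_i + alpha * r) < f(x_i + 1.1 * alpha * r)
```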

Steepest Descent

One step of steepest descent can be calculated as follows:

  r_i = b - A x_i
  α_i = (r_i^T r_i) / (r_i^T A r_i)
  x_{i+1} = x_i + α_i r_i

  • stopping criterion: stop when the residual norm ||r_i|| drops below a given small ε, or after a maximum number of iterations.

It would be better to use the error instead of the residual, but you can't calculate the error.

Method of steepest descent: starting from x_0, repeat the step above until the stopping criterion is met.
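The method can be sketched in Python as follows, assuming NumPy; function name, tolerance and example data are my own, not from the slides:

```python
import numpy as np

def steepest_descent(A, b, x0, eps=1e-8, i_max=10_000):
    """Sketch of the method of steepest descent."""
    x = x0.astype(float)
    r = b - A @ x                        # initial residual
    r0_norm = np.linalg.norm(r)
    for i in range(i_max):
        if np.linalg.norm(r) < eps * r0_norm:   # stopping criterion on the residual
            break
        alpha = (r @ r) / (r @ A @ r)    # optimal step size along r
        x = x + alpha * r                # step in the residual direction
        r = b - A @ x                    # new residual
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
print(np.allclose(A @ x, b))             # True
```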

Steepest Descent
  • As you can see, the starting point is important!

If you know anything about the solution, use it to guess a good starting point. Otherwise you can choose any starting point, e.g. x_0 = 0.

Steepest Descent - Convergence
  • Definition (energy norm): ||e||_A = sqrt(e^T A e)
  • Definition (condition number): κ = λ_max / λ_min

(λ_max is the largest and λ_min the smallest eigenvalue of A)

  • convergence: ||e_i||_A ≤ ((κ - 1) / (κ + 1))^i ||e_0||_A

Convergence gets worse when the condition number gets larger.
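The condition number and the resulting contraction factor can be computed like this, assuming NumPy (the matrix is a made-up example):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
eigs = np.linalg.eigvalsh(A)             # eigenvalues, ascending order
kappa = eigs[-1] / eigs[0]               # condition number lambda_max/lambda_min
factor = (kappa - 1.0) / (kappa + 1.0)   # energy-norm error shrinks by <= this per step

# Steps guaranteed to reduce the energy-norm error by a factor of 1e-6:
steps = int(np.ceil(np.log(1e-6) / np.log(factor)))
print(kappa, factor, steps)
```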

Conjugate Gradients
  • Is there a better direction?
  • Idea: orthogonal search directions d_0, d_1, ..., d_{n-1}
  • walk only once in each direction and minimize along it

At most n steps are needed to reach the exact solution.

For this, the new error e_{i+1} has to be orthogonal to the search direction d_i.

Conjugate Gradients
  • Example with the coordinate axes as orthogonal search directions: requiring d_i^T e_{i+1} = 0 gives the step size α_i = -(d_i^T e_i) / (d_i^T d_i).

Problem: α_i can't be computed, because it depends on the error e_i (you don't know x!).
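A sketch of this example, assuming NumPy; it "cheats" by using the exact solution, which is precisely why the scheme is not usable in practice:

```python
import numpy as np

# Orthogonal search directions (the coordinate axes) reach the solution in n
# steps -- but only because we use the exact error e_i = x_i - x, which is
# unknown in practice. This illustrates the problem from the slide.
n = 3
x_exact = np.array([1.0, -2.0, 0.5])     # pretend this is unknown
x = np.zeros(n)                          # starting point x_0

for i in range(n):
    d = np.eye(n)[i]                     # i-th coordinate axis
    e = x - x_exact                      # the uncomputable error
    alpha = -(d @ e) / (d @ d)           # makes the new error orthogonal to d
    x = x + alpha * d

print(np.allclose(x, x_exact))           # True
```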

Conjugate Gradients
  • New idea: A-orthogonal search directions
  • Definition (A-orthogonal): d_i and d_j are A-orthogonal if d_i^T A d_j = 0

(reminder: orthogonal means d_i^T d_j = 0)

  • Now e_{i+1} has to be A-orthogonal to d_i. This gives the step size

  α_i = (d_i^T r_i) / (d_i^T A d_i)

which can be computed, because the residual r_i = -A e_i is known!

Conjugate Gradients
  • A set of A-orthogonal directions can be found from n linearly independent vectors u_0, ..., u_{n-1} with conjugate Gram-Schmidt (same idea as Gram-Schmidt).
  • Gram-Schmidt: from linearly independent vectors u_i, build orthogonal directions by subtracting from each u_i its components along the previous directions: d_i = u_i - Σ_{k<i} ((u_i^T d_k) / (d_k^T d_k)) d_k
  • conjugate Gram-Schmidt: the same construction with the A-inner product: d_i = u_i - Σ_{k<i} ((u_i^T A d_k) / (d_k^T A d_k)) d_k
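Conjugate Gram-Schmidt can be sketched as follows, assuming NumPy; function name and test matrix are my own:

```python
import numpy as np

def conjugate_gram_schmidt(U, A):
    """Turn linearly independent columns of U into A-orthogonal directions.

    Sketch of the construction above: subtract from each u_i its
    A-components along all previously built directions.
    """
    n = U.shape[1]
    D = np.zeros_like(U, dtype=float)
    for i in range(n):
        d = U[:, i].astype(float)
        for k in range(i):
            beta = (U[:, i] @ A @ D[:, k]) / (D[:, k] @ A @ D[:, k])
            d = d - beta * D[:, k]
        D[:, i] = d
    return D

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])  # SPD
U = np.eye(3)                            # any linearly independent vectors
D = conjugate_gram_schmidt(U, A)

# All pairs of distinct directions are A-orthogonal: d_i^T A d_j = 0,
# so D^T A D is (numerically) diagonal.
G = D.T @ A @ D
print(np.allclose(G, np.diag(np.diag(G))))   # True
```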
Conjugate Gradients
  • CG works by setting u_i = r_i (makes conjugate Gram-Schmidt easy: all but one term of the sum vanish)
  • this gives the direction update d_{i+1} = r_{i+1} + β_{i+1} d_i

with β_{i+1} = (r_{i+1}^T r_{i+1}) / (r_i^T r_i)
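Putting the pieces together, a sketch of the resulting CG iteration, assuming NumPy; names and example data are my own:

```python
import numpy as np

def conjugate_gradients(A, b, x0, eps=1e-10, i_max=1000):
    """Sketch of CG with d_0 = r_0 and the update d_{i+1} = r_{i+1} + beta*d_i."""
    x = x0.astype(float)
    r = b - A @ x                        # residual, also the first direction
    d = r.copy()
    rr = r @ r
    for i in range(i_max):
        if np.sqrt(rr) < eps:            # stopping criterion on the residual
            break
        Ad = A @ d
        alpha = rr / (d @ Ad)            # step size from A-orthogonality
        x = x + alpha * d
        r = r - alpha * Ad               # updated residual
        rr_new = r @ r
        beta = rr_new / rr               # conjugate Gram-Schmidt coefficient
        d = r + beta * d                 # new A-orthogonal direction
        rr = rr_new
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])  # SPD
b = np.array([1.0, 2.0, 3.0])
x = conjugate_gradients(A, b, np.zeros(3))
print(np.allclose(A @ x, b))             # True
```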

Conjugate Gradients - Convergence
  • ||e_i||_A ≤ ((κ - 1) / (κ + 1))^i ||e_0||_A for steepest descent
  • ||e_i||_A ≤ 2 ((√κ - 1) / (√κ + 1))^i ||e_0||_A for CG

The convergence of CG is much better!
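Comparing the two bounds numerically for a made-up condition number κ = 100 (ignoring the constant factor 2 in the CG bound):

```python
import numpy as np

# Per-step contraction factors from the two convergence bounds above.
kappa = 100.0
f_sd = (kappa - 1.0) / (kappa + 1.0)                  # steepest descent
f_cg = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)  # conjugate gradients

# Iterations guaranteed to reduce the energy-norm error by a factor of 1e-6:
n_sd = int(np.ceil(np.log(1e-6) / np.log(f_sd)))
n_cg = int(np.ceil(np.log(1e-6) / np.log(f_cg)))
print(n_sd, n_cg)     # CG needs roughly sqrt(kappa) times fewer steps
```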
