
Iterative methods



Presentation Transcript


  1. Iterative methods

  2. Iterative methods for Ax = b. Iterative methods produce a sequence of approximations $x^{(1)}, x^{(2)}, x^{(3)}, \dots$ So they also produce a sequence of errors $e^{(i)} = x - x^{(i)}$, where $x = A^{-1}b$ is the exact solution. We say that a method converges if $\|e^{(i)}\| \to 0$ as $i \to \infty$.

  3. What is computationally appealing about iterative methods? Iterative methods produce the approximation using a very basic primitive: matrix-vector multiplication. If a matrix A has m non-zero entries, then A*x can be computed in time O(m); essentially, each non-zero entry of the matrix is read from RAM only once. In some applications we don't even know the matrix A explicitly. For example, we may only have a function f(x) that returns A*x for some unknown matrix A. (A sketch of the O(m) primitive follows below.)
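
A minimal sketch of that O(m) primitive: a sparse matrix-vector product over a matrix stored as a list of non-zero entries. The helper name and the toy matrix are illustrative, not from the slides.

```python
import numpy as np

def matvec(rows, cols, vals, x):
    """y = A*x for A given by its m non-zero entries
    (rows[k], cols[k], vals[k]). Each entry is touched once: O(m)."""
    y = np.zeros(len(x))
    for r, c, v in zip(rows, cols, vals):
        y[r] += v * x[c]
    return y

# Toy 3x3 matrix [[2,0,0],[0,3,1],[0,0,4]] with m = 4 non-zeros.
rows, cols, vals = [0, 1, 1, 2], [0, 1, 2, 2], [2.0, 3.0, 1.0, 4.0]
print(matvec(rows, cols, vals, np.ones(3)))   # [2. 4. 4.]
```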

  4. What is computationally appealing about iterative methods? The memory requirement is also O(m): if the matrix fits into RAM, the method will run. This is a HUGE IMPROVEMENT over the O(m^2) memory of Gaussian elimination, whose fill-in destroys sparsity. In most applications Gaussian elimination is impossible because memory is limited; iterative methods are the only approach that can potentially work.

  5. What might be problematic about iterative methods? Do they always converge? If they converge, how fast do they converge? Speed of convergence: how many steps are required so that we gain one decimal place in the error, in other words, the smallest $k$ such that $\|e^{(i+k)}\| \le \|e^{(i)}\| / 10$?

  6. Richardson's iteration. Perhaps the simplest iteration: $x^{(i+1)} = (I - A)x^{(i)} + b$. It can be seen as a fixed-point iteration for $x = (I - A)x + b$, which is equivalent to $Ax = b$. The error reduction equation is simple too: $e^{(i+1)} = (I - A)e^{(i)}$, so $e^{(i)} = (I - A)^i e^{(0)}$. (A runnable sketch follows below.)
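
A minimal runnable sketch of Richardson's iteration; the matrix and iteration count are illustrative assumptions:

```python
import numpy as np

def richardson(A, b, iters=100):
    """Richardson's iteration x <- (I - A) x + b, written as
    x <- x + (b - A x). Needs only matrix-vector products."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + (b - A @ x)
    return x

# Positive diagonal matrix with entries in (0, 2): converges.
A = np.diag([0.5, 1.0, 1.5])
b = np.array([1.0, 2.0, 3.0])
print(richardson(A, b))   # close to the true solution [2., 2., 2.]
```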

  7. Richardson's iteration for diagonal matrices. Let's study convergence for positive diagonal matrices $A = \mathrm{diag}(d_1, \dots, d_n)$ with $d_j > 0$. Assume the initial error is the all-ones vector, $e^{(0)} = (1, \dots, 1)$. Then the jth coordinate of the error satisfies $e^{(i)}_j = (1 - d_j)^i$. Let's play with numbers (see the sketch below).
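
Playing with numbers, per the error formula above; the sample diagonal entries are illustrative:

```python
# j-th error coordinate after i steps: (1 - d_j)**i
for d in [0.1, 0.5, 1.0, 1.9, 2.5]:
    print(d, [(1 - d) ** i for i in (1, 5, 20)])
# d in (0, 2): |1 - d| < 1 and the coordinate shrinks;
# d = 2.5: |1 - d| = 1.5 > 1 and it blows up.
```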

  8. Richardson's iteration for diagonal matrices. So it seems that we must have $|1 - d_j| < 1$ for every j, i.e. $0 < d_j < 2$, and Richardson's won't converge for all matrices. ARE WE DOOMED?

  9. Maybe we're not (yet) doomed. Suppose we know a d such that $d_j \le d$ for all j. We can then try to solve the equivalent system $(A/d)x = b/d$. The new matrix $A' = A/d$ is a diagonal matrix with elements $d'_j = d_j / d$ such that $0 < d'_j \le 1$.

  10. Richardson's iteration converges for the equivalent "scaled" system: since $0 < d'_j \le 1$, each error coordinate satisfies $|1 - d'_j| < 1$, so $e^{(i)}_j = (1 - d'_j)^i \to 0$.
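
A quick numeric check that scaling rescues convergence; here d = 3 because the largest diagonal entry is 3, and all the numbers are illustrative:

```python
import numpy as np

A = np.diag([0.5, 3.0])     # d_2 = 3 > 2, so plain Richardson diverges
b = np.array([1.0, 3.0])
d = 3.0                     # known upper bound on the diagonal entries

for M, rhs in [(A, b), (A / d, b / d)]:
    x = np.zeros(2)
    for _ in range(50):
        x = x + (rhs - M @ x)
    print(x)                # first run blows up; second approaches [2., 1.]
```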

  11. You're cheating us: diagonal matrices are easy anyway! Spectral decomposition theorem: every symmetric matrix has the eigenvalue decomposition $A = V D V^T$, where $V$ is orthonormal and $D$ is the diagonal matrix of eigenvalues. So what is $(I - A)^i$? Since $V V^T = I$, we get $(I - A)^i = V (I - D)^i V^T$. So if $(I - D)^i$ goes to zero, then $(I - A)^i$ goes to zero too.
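
A numeric sanity check of that identity; the symmetric test matrix and power are illustrative:

```python
import numpy as np

A = np.array([[1.0, 0.3],
              [0.3, 0.8]])
evals, V = np.linalg.eigh(A)              # A = V diag(evals) V^T

i = 10
lhs = np.linalg.matrix_power(np.eye(2) - A, i)
rhs = V @ np.diag((1 - evals) ** i) @ V.T
print(np.allclose(lhs, rhs))              # True: (I - A)^i = V (I - D)^i V^T
```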

  12. Can we scale any positive system? Gershgorin's theorem: every eigenvalue of $A$ lies in a disc centered at some diagonal entry $a_{ii}$ with radius $\sum_{j \ne i} |a_{ij}|$. Recall the definition of the infinity norm for matrices: $\|A\|_\infty = \max_i \sum_j |a_{ij}|$. In particular, every eigenvalue of $A$ is at most $\|A\|_\infty$, so we can take $d = \|A\|_\infty$.
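
A small sketch checking the Gershgorin bound numerically; the test matrix is an illustrative assumption:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

inf_norm = np.abs(A).sum(axis=1).max()   # ||A||_inf: max absolute row sum
print(inf_norm)                          # 5.0
print(np.linalg.eigvalsh(A))             # [1. 2. 4.], all <= ||A||_inf
```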

  13. Richardson's iteration converges for the equivalent "scaled" system $(A/\|A\|_\infty)\,x = b/\|A\|_\infty$, for all positive matrices.

  14. How about the rate of convergence? We showed that the jth coordinate of the error $e^{(i)}$ is $(1 - d_j)^i$. How many steps do we need to take to make this coordinate smaller than 1/10? And the answer is: roughly $\ln 10 / d_j$, i.e. $O(1/d_j)$ steps (see the derivation below).
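
A short derivation of that count, assuming $0 < d_j < 1$:

$$(1 - d_j)^i \le \frac{1}{10} \iff i \cdot \bigl(-\ln(1 - d_j)\bigr) \ge \ln 10 \iff i \ge \frac{\ln 10}{-\ln(1 - d_j)}.$$

Since $-\ln(1 - d_j) \ge d_j$, taking $i \ge \ln 10 / d_j$ steps always suffices, i.e. $i = O(1/d_j)$.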

  15. How about the rate of convergence? So, in the worst case, the number of iterations we need is $O(1/d'_{\min}) = O(d_{\max}/d_{\min})$, proportional to the condition number of A, which can be huge. ARE WE DOOMED, version 2?

  16. We may have some more information about the matrix A. Suppose we know numbers $d'_j$ approximating the $d_j$ (say, $d_j \le d'_j \le c \cdot d_j$ for a small constant c). We can then try to solve the equivalent system $D'^{-1} A x = D'^{-1} b$, where $D' = \mathrm{diag}(d'_1, \dots, d'_n)$. The new matrix $A' = D'^{-1} A$ is a diagonal matrix with elements $d_j / d'_j$, all close to 1.

  17. Preconditioning. In general, changing the system to $B^{-1} A x = B^{-1} b$ is called preconditioning. Why? Because we try to make a new matrix with a better condition number. Do we need to have the inverse of B? No: each iteration only needs to apply $B^{-1}$ to a vector z, i.e. to solve $By = z$. So B must be a matrix such that $By = z$ can be solved easily. (A sketch follows below.)
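
A minimal sketch of preconditioned Richardson with diagonal (Jacobi) scaling as B; the matrix, right-hand side, and step count are illustrative assumptions:

```python
import numpy as np

def precond_richardson(A, b, solve_B, iters=200):
    """Richardson on B^{-1} A x = B^{-1} b. solve_B(z) returns y
    with B y = z; the inverse of B is never formed explicitly."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + solve_B(b - A @ x)
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

d = np.diag(A)                             # B = diag(A): trivial to "invert"
x = precond_richardson(A, b, lambda z: z / d)
print(x, A @ x - b)                        # x ~ [1/11, 7/11], tiny residual
```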

  18. Richardson’s iteration converges fast for an equivalent system when we have some more info in the form of a preconditioner

  19. Laplacians of weighted graphs. [Figure: a small weighted graph with edge weights 30, 20, 15, 2, 1, 1, and its Laplacian matrix.] The Laplacian is symmetric, has non-positive off-diagonals, and zero row-sums: $L_{ii} = \sum_{j \ne i} w_{ij}$ and $L_{ij} = -w_{ij}$. Symmetric diagonally dominant (SDD) matrix: add a positive diagonal to a Laplacian and allow off-diagonal signs to flip, keeping symmetry.
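
A small sketch building the Laplacian of a weighted graph; the edge weights mirror the figure, but the topology here is an illustrative assumption since the figure itself did not survive:

```python
import numpy as np

def laplacian(n, edges):
    """L[i][i] accumulates incident weights, L[i][j] = -w_ij.
    The result is symmetric with zero row-sums."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, j] -= w; L[j, i] -= w
        L[i, i] += w; L[j, j] += w
    return L

edges = [(0, 1, 30), (1, 2, 20), (2, 3, 15), (0, 2, 2), (0, 3, 1), (1, 3, 1)]
L = laplacian(4, edges)
print(L)
print(L.sum(axis=1))   # all zeros
```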

  20. Preconditioning for Laplacians. The star graph: a simple diagonal scaling makes the condition number good. The line graph: the condition number is bad even with scaling. ARE WE DOOMED, version 3?
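
A numeric illustration of the star-versus-line contrast, measuring the condition number of the diagonally scaled Laplacian $D^{-1/2} L D^{-1/2}$ on its non-zero spectrum; the graph sizes and this particular measure are illustrative choices:

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, j] -= w; L[j, i] -= w
        L[i, i] += w; L[j, j] += w
    return L

def scaled_condition(L):
    d = np.sqrt(np.diag(L))
    N = L / np.outer(d, d)               # D^{-1/2} L D^{-1/2}
    eig = np.linalg.eigvalsh(N)
    return eig[-1] / eig[1]              # skip the single zero eigenvalue

for n in (10, 100, 1000):
    star = laplacian(n, [(0, i, 1.0) for i in range(1, n)])
    line = laplacian(n, [(i, i + 1, 1.0) for i in range(n - 1)])
    print(n, scaled_condition(star), scaled_condition(line))
# Star: stays around 2. Line: grows roughly like n^2.
```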
