
An introduction to iterative projection methods



Presentation Transcript


  1. Eigenvalue problems, 4th Seminar. An introduction to iterative projection methods. Luiza Bondar, the 23rd of November 2005

  2. Introduction (Erwin); Perturbation analysis (Nico); Direct (global) methods (Peter); Introduction to projection methods, theoretical background (Luiza); Krylov subspace methods 1 (Mark); Krylov subspace methods 2 (Willem)

  3. Outline • Introduction • The power method • Projection Methods • Subspace iteration • Summary

  4. Introduction. Direct methods (Schur decomposition, QR iteration, Jacobi method, method of Sturm sequences) compute all the eigenvalues and the corresponding eigenvectors. What if we DON'T need all the eigenvalues? Example: compute the page rank of the WWW documents.

  5. Introduction. WEB: a graph (pages are nodes, links are edges).

  6. Introduction. Web graph: 1.4 billion nodes (pages), 6.6 billion edges (links). Page rank of page i: the probability that a surfer will visit page i. Page rank: vector of dimension N = 1.4 billion. The page rank is a dominant eigenvector of a sparse 1.4 billion × 1.4 billion matrix. It makes little sense to compute all the eigenvectors.
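To make the scale argument concrete, here is a minimal Python sketch (not Google's actual implementation; the 4-page link graph and all names are made up) of page rank as power iteration with sparse matrix-vector products, so each step costs O(number of links) rather than O(N^2):

    # Page rank as the dominant eigenvector of a sparse column-stochastic
    # matrix, computed by repeated sparse mat-vecs. Hypothetical 4-page web;
    # real PageRank additionally mixes in a damping factor.
    import numpy as np
    import scipy.sparse as sp

    links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # page j links to links[j]
    n = 4
    rows = [i for j, targets in links.items() for i in targets]
    cols = [j for j, targets in links.items() for _ in targets]
    vals = [1.0 / len(links[j]) for j in cols]        # uniform over out-links
    P = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))  # column-stochastic

    x = np.full(n, 1.0 / n)      # start from the uniform distribution
    for _ in range(100):
        x = P @ x                # one sparse mat-vec per step
        x /= x.sum()             # renormalise (dominant eigenvalue is 1)
    print(x)                     # approximate page-rank vector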

  7. The power method computes the dominant eigenvalue and an associated eigenvector. Some background: assume that A has p distinct eigenvalues λ_1, …, λ_p; μ_i is the algebraic multiplicity of λ_i, and P_i is the spectral projection onto the invariant subspace associated with λ_i. An eigenvalue λ_i is semi-simple if its algebraic and geometric multiplicities coincide.

  8. The power method. Assume that the dominant eigenvalue λ_1 is unique and semi-simple. Choose an initial vector x_0 such that P_1 x_0 ≠ 0 and take k = 0. Compute x_{k+1} = A x_k / ‖A x_k‖; if not converged, set k = k + 1 and repeat.
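A minimal Python sketch of the loop on this slide, assuming a real matrix whose dominant eigenvalue is unique and semi-simple; the function name, test matrix, and stopping rule are mine:

    # Power method: repeated multiplication by A plus normalisation.
    import numpy as np

    def power_method(A, x0, tol=1e-10, maxit=1000):
        """Return (dominant eigenvalue, associated unit eigenvector) of A."""
        x = x0 / np.linalg.norm(x0)
        lam = 0.0
        for _ in range(maxit):
            y = A @ x
            lam_new = x @ y              # Rayleigh quotient estimate of lambda_1
            x = y / np.linalg.norm(y)    # normalise the iterate
            if abs(lam_new - lam) < tol * abs(lam_new):
                return lam_new, x
            lam = lam_new
        return lam, x

    A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues 5 and 2
    lam, v = power_method(A, np.ones(2))
    print(lam)                                # ~5.0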

  9. The power method. With initial vector x_0, use x_k = A^k x_0 / ‖A^k x_0‖ and the expansion A^k x_0 = λ_1^k ( P_1 x_0 + Σ_{i=2}^{p} (λ_i / λ_1)^k P_i x_0 ). The convergence of each term in the sum is given by (λ_i / λ_1)^k → 0 (since |λ_i| < |λ_1|), and then x_k converges to an eigenvector associated with λ_1. The power method is used by Google to compute the page rank.

  10. The power method • the convergence of the method is governed by the ratio |λ_2 / λ_1| • the convergence might be very slow if λ_1 and λ_2 are close to one another • if the dominant eigenvalue is multiple but semi-simple, then the algorithm provides only one eigenvalue and a corresponding eigenvector • does not converge if the dominant eigenvalue is complex and the original matrix is real (2 eigenvalues with the same modulus) IMPROVEMENT: the shifted power method LED TO: projection methods

  11. The power method. Shifted power method. Example • let λ be the dominant eigenvalue of a matrix A that also has the eigenvalue −λ • then the power method does not converge when applied to A (two eigenvalues with the same modulus) • but the power method converges for the shifted matrix A + σI with a suitable shift σ ≠ 0 (the shift breaks the tie: |λ + σ| ≠ |−λ + σ|). Other variants of the power method: the inverse power method (iterates with A^{-1}) computes the smallest eigenvalue; the inverse power method with shift computes the eigenvalue closest to the shift.
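A small numerical illustration of the shift example, with a made-up 2×2 matrix whose eigenvalues 2 and −2 have equal modulus:

    # Plain power iteration oscillates on eigenvalues of equal modulus;
    # iterating with A + sigma*I separates the moduli and converges.
    import numpy as np

    A = np.diag([2.0, -2.0])       # |lambda_1| = |lambda_2|: no convergence
    sigma = 1.0                    # any sigma != 0 breaks the tie in modulus

    x = np.array([1.0, 1.0])
    for _ in range(200):
        x = (A + sigma * np.eye(2)) @ x
        x /= np.linalg.norm(x)
    lam = x @ (A @ x)              # Rayleigh quotient w.r.t. the original A
    print(lam)                     # ~2.0, the dominant eigenvalue of A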

  12. The power method. Inverse power method: iterate x_{k+1} = A^{-1} x_k / ‖A^{-1} x_k‖; then the eigenvalue estimates converge to the smallest eigenvalue (in modulus) of A and x_k converges to an associated eigenvector. Inverse power method with shift σ: iterate with (A − σI)^{-1}; then the estimates converge to the eigenvalue λ closest to σ and x_k converges to an eigenvector associated with λ.
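A minimal sketch of inverse iteration with shift, assuming σ is not an eigenvalue; rather than forming A^{-1} explicitly, it factorizes A − σI once and solves a linear system per step (names and the test matrix are mine):

    # Inverse power method with shift: the power method applied to
    # (A - sigma I)^{-1}, whose dominant eigenvalue is 1/(lambda - sigma)
    # for the eigenvalue lambda of A closest to sigma.
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def inverse_iteration(A, sigma, x0, nsteps=50):
        """Eigenvalue of A closest to sigma and an associated eigenvector."""
        lu = lu_factor(A - sigma * np.eye(A.shape[0]))  # factor once
        x = x0 / np.linalg.norm(x0)
        for _ in range(nsteps):
            x = lu_solve(lu, x)          # x <- (A - sigma I)^{-1} x
            x /= np.linalg.norm(x)
        return x @ (A @ x), x            # Rayleigh quotient and eigenvector

    A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues 5 and 2
    lam, v = inverse_iteration(A, 1.8, np.ones(2))
    print(lam)                                # ~2.0, the eigenvalue closest to 1.8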

  13. The power method does not converge if the dominant eigenvalue is complex and the original matrix is real (2 eigenvalues with the same modulus). But after a certain k, the iterates x_k and x_{k+1} contain approximations to the complex pair of eigenvectors. IDEA: extract the vectors by performing a projection onto the subspace span{x_k, x_{k+1}}.

  14. Projection methods (Introduction). Find λ̃ and ũ such that A ũ ≈ λ̃ ũ with ũ restricted to a subspace: this introduces degrees of freedom, so additional constraints must be imposed. One choice is to impose orthogonality conditions (Galerkin), i.e., ũ ∈ K and A ũ − λ̃ ũ ⊥ K: a projection method.

  15. Projection methods (Introduction). Generalization: find λ̃ and ũ ∈ K such that A ũ − λ̃ ũ ⊥ L, with dim K = dim L = m. K: the right subspace, L: the left subspace. A projection technique seeks an approximate eigenpair (λ̃, ũ) with ũ ∈ K and A ũ − λ̃ ũ ⊥ L; taking L = K gives an orthogonal projection, L ≠ K an oblique projection. A way to construct K is a Krylov subspace K_m(A, v) = span{v, Av, …, A^{m−1} v} (inspired by the power method).

  16. Projection methods (orthogonal). Consider an orthonormal basis V = [v_1, …, v_m] of K; the approximate eigenvector can be written as ũ = V y. The Galerkin condition leads to the small m × m problem B_m y = λ̃ y with B_m = V^H A V: λ̃, an eigenvalue of B_m, is then an approximate eigenvalue of A, and if y is an eigenvector of B_m then ũ = V y is an approximate eigenvector of A. Arnoldi's method and the Hermitian Lanczos algorithm are orthogonal projection methods.
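A minimal sketch of one orthogonal projection (Rayleigh-Ritz) step, using a Krylov subspace for K as suggested on slide 15; the orthonormal basis is built by Gram-Schmidt, which is essentially the Arnoldi process (function name and test matrix are mine, and the Krylov vectors are assumed to stay independent):

    # Orthogonal projection: build orthonormal V spanning K_m(A, v), form
    # B_m = V^T A V, and take eigenpairs of the small B_m as approximations.
    import numpy as np

    def rayleigh_ritz(A, v, m):
        n = A.shape[0]
        V = np.zeros((n, m))
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(1, m):
            w = A @ V[:, j - 1]
            w -= V[:, :j] @ (V[:, :j].T @ w)   # orthogonalise against basis
            V[:, j] = w / np.linalg.norm(w)
        B = V.T @ A @ V                         # projected m x m matrix B_m
        theta, Y = np.linalg.eig(B)             # Ritz values theta
        return theta, V @ Y                     # Ritz vectors u = V y

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50)); A = A + A.T   # symmetric test matrix
    theta, U = rayleigh_ritz(A, rng.standard_normal(50), m=10)
    print(np.sort(theta.real)[-1])   # largest Ritz value ~ lambda_max(A)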

  17. Projection methods (oblique). Search for λ̃ and ũ ∈ K such that A ũ − λ̃ ũ ⊥ L. Let V be a basis of K and W a basis of L such that W^H V = I (biorthogonal); the approximate eigenvector can be written as ũ = V y. The condition A ũ − λ̃ ũ ⊥ L leads to the approximate eigenvalue problem W^H A V y = λ̃ y. The nonhermitian Lanczos algorithm is an oblique projection method.
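A minimal sketch of an oblique projection with my own generic construction: random orthonormal bases V of K and W of L are not biorthogonal, so the small problem is solved in the equivalent generalized form (W^H A V) y = λ̃ (W^H V) y, which reduces to the slide's W^H A V y = λ̃ y when W^H V = I:

    # Oblique (two-sided) projection with independent right/left subspaces.
    import numpy as np
    from scipy.linalg import eig, qr

    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 40))
    V, _ = qr(rng.standard_normal((40, 8)), mode='economic')  # basis of K
    W, _ = qr(rng.standard_normal((40, 8)), mode='economic')  # basis of L
    theta, Y = eig(W.T @ A @ V, W.T @ V)   # generalized projected problem
    U = V @ Y                               # approximate eigenvectors u = V y
    print(np.sort(theta.real))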

  18. Projection methods (orthogonal). How accurate can an orthogonal projection method be? Let (λ, u) be an exact eigenpair and let P denote the orthogonal projection onto K; the accuracy is governed by ‖(I − P) u‖, the distance from the exact eigenvector u to the subspace K.

  19. Projection methods (orthogonal): the Hermitian case.

  20. Subspace iteration: a generalization of the power method. Start with an initial system of m vectors X_0 instead of only one vector (power method) and compute the matrix X_k = A^k X_0. If each of the m vectors is normalised in the same way as for the power method, then each of these vectors will converge to the SAME eigenvector, associated with the dominant eigenvalue (provided that its component P_1 x_i is nonzero). Note: X_k loses its linear independence. IDEA: restore the linear independence by performing a QR factorisation.

  21. Subspace iteration. Start with X_0; QR-factorize X_k = Q_k R_k; compute X_{k+1} = A Q_k; if not converged, repeat. On convergence, recover the first m eigenvalues and corresponding eigenvectors of A from Q_k.
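A minimal sketch of the loop on this slide (often called orthogonal iteration), assuming the m dominant eigenvalues are separated in modulus; for brevity the convergence test is replaced by a fixed step count (function name and test matrix are mine):

    # Subspace iteration: multiply the block by A, then re-orthonormalise
    # with a QR factorisation to restore linear independence.
    import numpy as np

    def subspace_iteration(A, m, nsteps=200, seed=0):
        rng = np.random.default_rng(seed)
        Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], m)))
        for _ in range(nsteps):
            Q, _ = np.linalg.qr(A @ Q)       # restore linear independence
        B = Q.T @ A @ Q                      # small m x m projected matrix
        return np.linalg.eigvals(B), Q       # ~ the m dominant eigenvalues

    A = np.diag([10.0, 5.0, 2.0, 1.0, 0.5])
    vals, Q = subspace_iteration(A, m=2)
    print(np.sort(vals.real)[::-1])          # ~ [10.0, 5.0]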

  22. Subspace iteration • the i-th column of Q_k converges to a Schur vector associated with the eigenvalue λ_i • the convergence of the i-th column is governed by the factor |λ_{i+1} / λ_i| • the speed of convergence for an eigenvalue depends on how close it is to the next one. Variants of the subspace iteration method • take the dimension of the subspace m larger than nev, the number of eigenvalues wanted • perform "locking", i.e., as soon as an eigenvalue has converged, stop multiplying the corresponding vector by A in the subsequent iterations

  23. Subspace iteration. A very theoretical result on the residual norm. Let P be the projection onto the subspace spanned by the eigenvectors associated with the first m eigenvalues of A, and let P_k be the projection onto span(X_k); assume that P x_1, …, P x_m are linearly independent. Then for any eigenvalue λ_i (i ≤ m) of A with eigenvector u_i there is a unique s_i ∈ span(X_0) such that P s_i = u_i, and ‖(I − P_k) u_i‖ → 0 as k → ∞.

  24. Summary • The power method can be used to compute the dominant (real) eigenvalue and a corresponding eigenvector. • Variants of the power method can compute the smallest eigenvalue or the eigenvalue closest to a given number (shift). • General projection methods approximate the eigenvectors of a matrix by vectors belonging to a subspace of approximants whose dimension is smaller than the dimension of the matrix. • The subspace iteration method is a generalization of the power method that computes a given number of dominant eigenvalues and their corresponding eigenvectors.

  25. Last-minute questions answered by Tycho van Noorden and Sorin Pop
