
Finding Eigenvalues and Eigenvectors


Presentation Transcript


  1. Finding Eigenvalues and Eigenvectors What is really important?

  2. Approaches • Find the characteristic polynomial • Leverrier’s Method • Find the largest or smallest eigenvalue • Power Method • Inverse Power Method • Find all the eigenvalues • Jacobi’s Method • Householder’s Method • QR Method • Danilevsky’s Method

  3. Finding the Characteristic Polynomial • Reduces to finding the coefficients of the polynomial for the matrix A • Recall |λI − A| = λⁿ + aₙλⁿ⁻¹ + aₙ₋₁λⁿ⁻² + … + a₂λ + a₁ • Leverrier’s Method • Set Bₙ = A and aₙ = −trace(Bₙ) • For k = (n−1) down to 1 compute • Bₖ = A(Bₖ₊₁ + aₖ₊₁I) • aₖ = −trace(Bₖ)/(n − k + 1)
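
A minimal Python sketch of Leverrier’s recurrence as stated above (numpy and the 2 x 2 test matrix are illustrative choices, not part of the slides):

```python
import numpy as np

def leverrier(A):
    """Coefficients a_1..a_n of |lambda*I - A| = lambda^n + a_n*lambda^(n-1)
    + ... + a_1, via Leverrier's recurrence from the slide."""
    n = A.shape[0]
    a = np.zeros(n + 1)              # a[k] holds a_k; a[0] is unused
    B = A.astype(float).copy()       # B_n = A
    a[n] = -np.trace(B)              # a_n = -trace(B_n)
    for k in range(n - 1, 0, -1):
        B = A @ (B + a[k + 1] * np.eye(n))     # B_k = A(B_{k+1} + a_{k+1} I)
        a[k] = -np.trace(B) / (n - k + 1)      # a_k = -trace(B_k)/(n - k + 1)
    return a[1:]                     # [a_1, a_2, ..., a_n]

A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(leverrier(A))   # [3. -4.] -> polynomial l^2 - 4l + 3, roots 1 and 3
```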

  4. Vectors that Span a Space and Linear Combinations of Vectors • Given a set of vectors v₁, v₂, …, vₙ • The vectors are said to span a space V if, given any vector x ∈ V, there exist constants c₁, c₂, …, cₙ so that c₁v₁ + c₂v₂ + … + cₙvₙ = x, and x is called a linear combination of the vᵢ

  5. Linear Independence and a Basis • Given a set of vectors v₁, v₂, …, vₙ and constants c₁, c₂, …, cₙ • The vectors are linearly independent if the only solution to c₁v₁ + c₂v₂ + … + cₙvₙ = 0 (the zero vector) is c₁ = c₂ = … = cₙ = 0 • A linearly independent, spanning set is called a basis

  6. Example 1: The Standard Basis • Consider the vectors v₁ = <1, 0, 0>, v₂ = <0, 1, 0>, and v₃ = <0, 0, 1> • Clearly, c₁v₁ + c₂v₂ + c₃v₃ = 0 ⇒ c₁ = c₂ = c₃ = 0 • Any vector <x, y, z> can be written as a linear combination of v₁, v₂, and v₃ as <x, y, z> = x v₁ + y v₂ + z v₃ • The collection {v₁, v₂, v₃} is a basis for R³; indeed, it is the standard basis and is usually denoted with vector names i, j, and k, respectively.

  7. Another Definition and Some Notation • Assume that the eigenvalues for an n x n matrix A can be ordered such that |λ₁| > |λ₂| ≥ |λ₃| ≥ … ≥ |λₙ₋₂| ≥ |λₙ₋₁| > |λₙ| • Then λ₁ is the dominant eigenvalue and |λ₁| is the spectral radius of A, denoted ρ(A) • The i-th eigenvector will be denoted using superscripts as xⁱ, subscripts being reserved for the components of x

  8. Power Methods: The Direct Method • Assume an n x n matrix A has n linearly independent eigenvectors e¹, e², …, eⁿ ordered by decreasing eigenvalues |λ₁| > |λ₂| ≥ |λ₃| ≥ … ≥ |λₙ₋₂| ≥ |λₙ₋₁| > |λₙ| • Given any vector y⁰ ≠ 0, there exist constants cᵢ, i = 1, …, n, such that y⁰ = c₁e¹ + c₂e² + … + cₙeⁿ

  9. The Direct Method (continued) • If y⁰ is not orthogonal to e¹, i.e., (y⁰)ᵀe¹ ≠ 0, • y¹ = Ay⁰ = A(c₁e¹ + c₂e² + … + cₙeⁿ) • = Ac₁e¹ + Ac₂e² + … + Acₙeⁿ • = c₁Ae¹ + c₂Ae² + … + cₙAeⁿ • Can you simplify the previous line?

  10. The Direct Method (continued) • If y⁰ is not orthogonal to e¹, i.e., (y⁰)ᵀe¹ ≠ 0, • y¹ = Ay⁰ = A(c₁e¹ + c₂e² + … + cₙeⁿ) • = Ac₁e¹ + Ac₂e² + … + Acₙeⁿ • = c₁Ae¹ + c₂Ae² + … + cₙAeⁿ • y¹ = c₁λ₁e¹ + c₂λ₂e² + … + cₙλₙeⁿ • What is y² = Ay¹?

  11. The Direct Method (continued) • y² = Ay¹ = c₁λ₁²e¹ + c₂λ₂²e² + … + cₙλₙ²eⁿ • Continuing, after k applications of A, yᵏ = c₁λ₁ᵏe¹ + c₂λ₂ᵏe² + … + cₙλₙᵏeⁿ

  12. The Direct Method (continued) • Factoring out λ₁ᵏ gives yᵏ = λ₁ᵏ(c₁e¹ + c₂(λ₂/λ₁)ᵏe² + … + cₙ(λₙ/λ₁)ᵏeⁿ) • Since |λᵢ/λ₁| < 1 for i ≥ 2, every term after the first vanishes as k grows, so yᵏ converges to the direction of the dominant eigenvector e¹

  13. The Direct Method (continued) • In practice each iterate is rescaled (e.g., divided by its component of largest magnitude) so that the powers λ₁ᵏ do not overflow or underflow

  14. The Direct Method (continued) • Note: any nonzero multiple of an eigenvector is also an eigenvector • Why? • Suppose e is an eigenvector of A, i.e., Ae = λe, and c ≠ 0 is a scalar such that x = ce • Ax = A(ce) = c(Ae) = c(λe) = λ(ce) = λx

  15. The Direct Method (continued)

  16. Direct Method (continued) • Given an eigenvector e for the matrix A • We have Ae = λe and e ≠ 0, so eᵀe ≠ 0 (a scalar) • Thus, eᵀAe = eᵀλe = λeᵀe ≠ 0 • So λ = (eᵀAe) / (eᵀe)

  17. Direct Method (completed)

  18. Direct Method Algorithm • Choose any y⁰ ≠ 0 not orthogonal to e¹ • Repeat: yᵏ⁺¹ = Ayᵏ, rescaled each step, estimating λ₁ with the quotient λ = (eᵀAe)/(eᵀe) from the previous slide, until the estimate stops changing (a Python sketch follows)
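
A minimal sketch of this loop, assuming numpy; the tolerance, iteration cap, starting vector, and test matrix are illustrative choices. The eigenvalue estimate is the quotient λ = (eᵀAe)/(eᵀe) from slide 16:

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=500):
    """Dominant eigenvalue/eigenvector by the direct (power) method."""
    y = np.ones(A.shape[0])         # any y0 != 0 not orthogonal to e1
    lam = 0.0
    for _ in range(max_iter):
        z = A @ y
        z /= np.linalg.norm(z)      # rescale to avoid overflow/underflow
        lam_new = z @ A @ z         # (e^T A e)/(e^T e), with e of unit length
        if abs(lam_new - lam) < tol:
            return lam_new, z
        lam, y = lam_new, z
    return lam, y

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, v = power_method(A)
print(lam)                          # ~3.0, the dominant eigenvalue
```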

  19. Jacobi’s Method • Requires a symmetric matrix • May take numerous iterations to converge • Also requires repeated evaluation of the arctan function • Isn’t there a better way? • Yes, but we need to build some tools.

  20. What Householder’s Method Does • Preprocesses a matrix A to produce an upper-Hessenberg form B • B is similar to A, so it has the same eigenvalues; the eigenvectors of A are recovered from those of B by an orthogonal transformation (slide 34) • Typically, the eigenvalues of B are easier to obtain because the transformation simplifies computation

  21. Definition: Upper-Hessenberg Form • A matrix B is said to be in upper-Hessenberg form if every entry below its first sub-diagonal is zero, i.e., Bᵢ,ⱼ = 0 whenever i > j + 1

  22. A Useful Matrix Construction • Assume an n x 1 vector u ≠ 0 • Consider the matrix P(u) defined by P(u) = I − 2(uuᵀ)/(uᵀu) • Where • I is the n x n identity matrix • (uuᵀ) is an n x n matrix, the outer product of u with its transpose • (uᵀu) here denotes the trace of a 1 x 1 matrix and is the inner or dot product

  23. Properties of P(u) • P²(u) = I • The notation here: P²(u) = P(u) P(u) • Can you show that P²(u) = I? • P⁻¹(u) = P(u) • P(u) is its own inverse • Pᵀ(u) = P(u) • P(u) is its own transpose • Why? • P(u) is an orthogonal matrix
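
These properties are easy to confirm numerically; a small sketch (numpy assumed, with a random u chosen purely for illustration):

```python
import numpy as np

np.random.seed(0)
u = np.random.rand(4, 1)                    # an n x 1 vector, u != 0
P = np.eye(4) - 2 * (u @ u.T) / (u.T @ u)   # P(u) = I - 2(u u^T)/(u^T u)

print(np.allclose(P @ P, np.eye(4)))        # P^2 = I: P is its own inverse
print(np.allclose(P, P.T))                  # P^T = P: its own transpose
print(np.allclose(P.T @ P, np.eye(4)))      # P^T P = I: P is orthogonal
```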

  24. Householder’s Algorithm • Set Q = I, where I is an n x n identity matrix • For k = 1 to n−2 • α = sgn(Aₖ₊₁,ₖ) sqrt((Aₖ₊₁,ₖ)² + (Aₖ₊₂,ₖ)² + … + (Aₙ,ₖ)²) • uᵀ = [0, 0, …, Aₖ₊₁,ₖ + α, Aₖ₊₂,ₖ, …, Aₙ,ₖ] • P = I − 2(uuᵀ)/(uᵀu) • Q = QP • A = PAP • Set B = A
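
A sketch of this algorithm in Python (numpy assumed; indices are 0-based, so the slide’s column k is k−1 here, and the 4 x 4 test matrix is an illustrative choice):

```python
import numpy as np

def householder_hessenberg(A):
    """Householder's algorithm from slide 24: returns (B, Q) with
    B = Q^T A Q in upper-Hessenberg form."""
    A = A.astype(float).copy()
    n = A.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        nrm = np.linalg.norm(A[k + 1:, k])
        if nrm == 0.0:                 # nothing below the sub-diagonal to zero
            continue
        # sgn(x) = 1 if x >= 0, else -1 (slide 32)
        alpha = (1.0 if A[k + 1, k] >= 0 else -1.0) * nrm
        u = np.zeros(n)
        u[k + 1] = A[k + 1, k] + alpha
        u[k + 2:] = A[k + 2:, k]
        P = np.eye(n) - 2.0 * np.outer(u, u) / (u @ u)   # P = I - 2(u u^T)/(u^T u)
        Q = Q @ P
        A = P @ A @ P
    return A, Q

A = np.array([[4.0, 1.0, -2.0, 2.0],
              [1.0, 2.0,  0.0, 1.0],
              [-2.0, 0.0, 3.0, -2.0],
              [2.0, 1.0, -2.0, -1.0]])
B, Q = householder_hessenberg(A)
print(np.round(B, 6))    # zeros everywhere below the first sub-diagonal
```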

  25.–31. Example

  32. How Does It Work? • Householder’s algorithm uses a sequence of similarity transformations B = P(uᵏ) A P(uᵏ) to create zeros below the first sub-diagonal • uᵏ = [0, 0, …, Aₖ₊₁,ₖ + α, Aₖ₊₂,ₖ, …, Aₙ,ₖ]ᵀ • α = sgn(Aₖ₊₁,ₖ) sqrt((Aₖ₊₁,ₖ)² + (Aₖ₊₂,ₖ)² + … + (Aₙ,ₖ)²) • By definition, • sgn(x) = 1, if x ≥ 0 and • sgn(x) = −1, if x < 0

  33. How Does It Work? (continued) • The matrix Q is orthogonal • the matrices P are orthogonal • Q is a product of the matrices P • The product of orthogonal matrices is an orthogonal matrix • B = QᵀAQ, hence QB = QQᵀAQ = AQ • QQᵀ = I (by the orthogonality of Q)

  34. How Does It Work? (continued) • If eᵏ is an eigenvector of B with eigenvalue λₖ, then Beᵏ = λₖeᵏ • Since QB = AQ, A(Qeᵏ) = Q(Beᵏ) = Q(λₖeᵏ) = λₖ(Qeᵏ) • Note from this: • λₖ is an eigenvalue of A • Qeᵏ is the corresponding eigenvector of A
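
A small numerical check of this relationship (a sketch assuming scipy, whose hessenberg routine stands in for Householder’s reduction; the test matrix is illustrative):

```python
import numpy as np
from scipy.linalg import hessenberg

A = np.array([[4.0, 1.0, -2.0, 2.0],
              [1.0, 2.0,  0.0, 1.0],
              [-2.0, 0.0, 3.0, -2.0],
              [2.0, 1.0, -2.0, -1.0]])

B, Q = hessenberg(A, calc_q=True)       # B = Q^T A Q, upper-Hessenberg
lam, V = np.linalg.eig(B)               # eigenpairs of B
x = Q @ V[:, 0]                         # map an eigenvector of B through Q
print(np.allclose(A @ x, lam[0] * x))   # True: (lam, Q e) is an eigenpair of A
```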

  35. The QR Method: Start-up • Given a matrix A • Apply Householder’s Algorithm to obtain a matrix B in upper-Hessenberg form • Select ε > 0 and m > 0 • ε is an acceptable proximity to zero for sub-diagonal elements • m is an iteration limit

  36. The QR Method: Main Loop • Repeat: factor B = QR, where Q is orthogonal and R is upper triangular, then set B = RQ • Stop when every sub-diagonal element of B is within ε of zero or m iterations have elapsed (a sketch follows below)
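
A minimal sketch of this loop, assuming numpy; production implementations add shifts and deflation, which the slides do not cover, and the small symmetric test matrix is an illustrative choice:

```python
import numpy as np

def qr_method(B, eps=1e-12, m=1000):
    """Basic QR main loop: factor B = QR, set B = RQ, repeat until the
    sub-diagonal entries are within eps of zero or m iterations pass."""
    B = B.astype(float).copy()
    for _ in range(m):
        Q, R = np.linalg.qr(B)       # B = QR, Q orthogonal, R upper triangular
        B = R @ Q                    # RQ = Q^T (QR) Q: similar to B, same eigenvalues
        if np.all(np.abs(np.diag(B, -1)) < eps):
            break
    return B                         # near-triangular; real eigenvalues on the diagonal

# In practice B comes from Householder's reduction (slide 35); a small
# symmetric example converges directly:
B = np.array([[2.0, 1.0], [1.0, 2.0]])
print(np.round(np.diag(qr_method(B)), 6))   # [3. 1.]
```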

  37. The QR Method: Finding The λ’s • When the loop stops, B is (nearly) triangular: each negligible sub-diagonal entry isolates a real eigenvalue on the diagonal, while any remaining 2 x 2 diagonal block contributes a pair of eigenvalues obtained from its characteristic polynomial

  38.–39. Details Of The Eigenvalue Formulae • For a 2 x 2 diagonal block [[a, b], [c, d]] the characteristic polynomial is λ² − (a + d)λ + (ad − bc), so λ = ((a + d) ± sqrt((a + d)² − 4(ad − bc)))/2 • A negative discriminant yields a complex-conjugate pair of eigenvalues

  40. Finding Roots of Polynomials • Every n x n matrix has a characteristic polynomial • Every monic polynomial of degree n has a corresponding n x n matrix (its companion matrix) for which it is the characteristic polynomial • Thus, polynomial root finding is equivalent to finding eigenvalues

  41. Example Please!?!?!? • Consider the monic polynomial of degree n, f(x) = a₁ + a₂x + a₃x² + … + aₙxⁿ⁻¹ + xⁿ, and the companion matrix whose first sub-diagonal entries are all 1, whose last column is [−a₁, −a₂, …, −aₙ]ᵀ, and whose remaining entries are 0
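
A sketch of the construction in Python (numpy assumed; the example polynomial is an illustrative choice matching the Leverrier example earlier):

```python
import numpy as np

def companion(a):
    """Companion matrix for the monic polynomial
    f(x) = a[0] + a[1]*x + ... + a[n-1]*x^(n-1) + x^n
    (a[i] holds the slide's a_{i+1}); its eigenvalues are the roots of f."""
    n = len(a)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)          # ones on the first sub-diagonal
    C[:, -1] = -np.asarray(a, float)    # last column carries -a_1, ..., -a_n
    return C

# f(x) = x^2 - 4x + 3 = (x - 1)(x - 3): a_1 = 3, a_2 = -4
C = companion([3.0, -4.0])
print(np.linalg.eigvals(C))             # roots 3.0 and 1.0 (order may vary)
```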

  42.–43. Find The Eigenvalues of the Companion Matrix
