
ENGMAE 200A: Engineering Analysis I Matrix Eigenvalue Problems Instructor: Dr. Ramin Bostanabad




  1. ENGMAE 200A: Engineering Analysis I Matrix Eigenvalue Problems Instructor: Dr. Ramin Bostanabad

  2. roadmap • Definitions and how to calculate eigenvalues and eigenvectors • A few examples with useful results • Eigenbases and their applications in: • Similar matrices • Diagonalization of matrices • Matrix Classification

  3. Definitions • What is the difference between the results of these two matrix–vector multiplications? • A matrix eigenvalue problem considers the vector equation: • Ax = λx (1) • Here A is a given square matrix, λ an unknown scalar, and x an unknown vector. In a matrix eigenvalue problem, the task is to determine the λ’s and x’s that satisfy (1).

  4. Terminology • A λ for which (1) has a solution x ≠ 0 is called: an eigenvalue, characteristic value, or latent root. • Solutions x ≠ 0 of (1) are called the eigenvectors or characteristic vectors of A corresponding to that eigenvalue λ. • The set of all the eigenvalues of A is called the spectrum of A. We shall see that the spectrum consists of at least one eigenvalue and at most n numerically different eigenvalues. • The largest of the absolute values of the eigenvalues of A is called the spectral radius of A.
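As a quick numerical illustration of these terms (a NumPy sketch; the 2×2 matrix below is hypothetical, chosen only so the spectrum is easy to read off):

```python
import numpy as np

# Hypothetical example matrix (not from the slides).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# The spectrum of A is the set of its eigenvalues.
spectrum = np.linalg.eigvals(A)          # approximately 5 and 2

# The spectral radius is the largest absolute value in the spectrum.
spectral_radius = max(abs(lam) for lam in spectrum)

print(sorted(spectrum.real))             # approximately [2, 5]
print(spectral_radius)                   # approximately 5
```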

  5. How to Find Eigenvalues and Eigenvectors We use an example:

  6. How to Find Eigenvalues and Eigenvectors • D(λ) = det(A − λI) is the characteristic determinant or, if expanded, the characteristic polynomial. • D(λ) = 0 is the characteristic equation of A. • Example continued: • The solutions of D(λ) = 0 are λ1 = −1 and λ2 = −6. These are the eigenvalues of A. • Eigenvectors for λ1 = −1: • Eigenvectors for λ2 = −6: • Show that …
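The slide’s matrix itself did not survive transcription; the NumPy sketch below uses a 2×2 matrix that is an assumption chosen to be consistent with the stated eigenvalues λ1 = −1 and λ2 = −6, and checks the characteristic polynomial and eigenpairs numerically:

```python
import numpy as np

# Assumed matrix, chosen to match the stated eigenvalues -1 and -6.
A = np.array([[-5.0,  2.0],
              [ 2.0, -2.0]])

# Coefficients of the characteristic polynomial det(lambda*I - A):
char_poly = np.poly(A)                   # approximately [1, 7, 6]

eigvals, eigvecs = np.linalg.eig(A)      # roots of D(lambda) = 0
# Each column of eigvecs solves (A - lambda*I) x = 0:
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)
print(np.sort(eigvals))                  # approximately [-6, -1]
```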

  7. Opening Example revisited • What is the difference between the results of these multiplications: vs. • Obtain the eigenvalues and eigenvectors. • The eigenvalues are {10, 3}. Corresponding eigenvectors are [3 4]T and [−1 1]T, respectively.
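The matrix of the opening example also did not survive transcription. Assuming a matrix consistent with the stated results (eigenvalues {10, 3}, eigenvectors [3 4]T and [−1 1]T), the defining property Ax = λx can be checked directly:

```python
import numpy as np

# Assumed matrix, consistent with the stated eigenvalues/eigenvectors.
A = np.array([[6.0, 3.0],
              [4.0, 7.0]])

x1 = np.array([3.0, 4.0])    # eigenvector for lambda = 10
x2 = np.array([-1.0, 1.0])   # eigenvector for lambda = 3

# Multiplying by A merely stretches an eigenvector; it never rotates it.
assert np.allclose(A @ x1, 10 * x1)
assert np.allclose(A @ x2, 3 * x2)
```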

  8. General Case • How to find the eigenvalues and eigenvectors of: • Ax = λx (1)

  9. Example Find the eigenvalues and eigenvectors: • Characteristic equation: −λ3 − λ2 + 21λ + 45 = 0 • Roots: λ1 = 5, λ2 = λ3 = −3. • Form A − 5I and then use Gauss elimination: • Hence, the rank of A − 5I is 2.

  10. Example continued • Choosing x3 = −1 we have x2 = 2 from the eliminated second row (which reduces to x2 + 2x3 = 0) and then x1 = 1 from −7x1 + 2x2 − 3x3 = 0. So: • x1 = [1 2 −1]T • For λ = −3 the characteristic matrix is A + 3I:

  11. Example continued • From x1 + 2x2 − 3x3 = 0 we have x1 = −2x2 + 3x3. Choosing x2 = 1, x3 = 0 and x2 = 0, x3 = 1, we obtain two linearly independent eigenvectors of A corresponding to λ = −3: x2 = [−2 1 0]T and x3 = [3 0 1]T.
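The 3×3 matrix of this example can be reconstructed from the surviving row equations (−7x1 + 2x2 − 3x3 = 0 for λ = 5 and x1 + 2x2 − 3x3 = 0 for λ = −3); treating that reconstruction as an assumption, NumPy confirms the characteristic polynomial, the rank statement, and all three eigenvectors:

```python
import numpy as np

# Reconstructed (assumed) matrix for this example.
A = np.array([[-2.0,  2.0, -3.0],
              [ 2.0,  1.0, -6.0],
              [-1.0, -2.0,  0.0]])

# det(lambda*I - A) = lambda^3 + lambda^2 - 21*lambda - 45,
# i.e. the slide's -lambda^3 - lambda^2 + 21*lambda + 45 = 0.
assert np.allclose(np.poly(A), [1, 1, -21, -45])

# rank(A - 5I) = 2, so lambda = 5 has one independent eigenvector.
assert np.linalg.matrix_rank(A - 5 * np.eye(3)) == 2
assert np.allclose(A @ [1, 2, -1], 5 * np.array([1, 2, -1]))

# The double eigenvalue -3 still admits two independent eigenvectors:
for v in ([-2, 1, 0], [3, 0, 1]):
    assert np.allclose(A @ np.array(v, float), -3 * np.array(v, float))
```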

  12. Example Find the eigenvalues and eigenvectors: • … • … • What are the eigenvalues and eigenvectors of …?

  13. Example on land use revisited The 2004 state of land use in a city’s built-up area is: C: Commercially Used 25% I: Industrially Used 20% R: Residentially Used 55% • The transition probabilities for 5-year intervals are given by a matrix A and remain practically the same over the time considered.

  14. Example on land use What is the limit state? • Definition of limit state: a state x that no longer changes under the transition, Ax = x. • How to find it systematically: x is an eigenvector of A for the eigenvalue λ = 1 (verify that λ = 1 is indeed an eigenvalue!). Then normalize x so its components sum to 100%.
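Since the slide’s transition matrix is not reproduced in the transcript, the sketch below uses a stand-in column-stochastic matrix with the same structure. It shows the systematic idea: the limit state is the fixed point Ax = x (an eigenvector for λ = 1), which repeated application of A converges to:

```python
import numpy as np

# Stand-in transition matrix (assumption): entry [i, j] is the
# probability that land of use j becomes use i over one 5-year step.
# Each column sums to 1, which guarantees the eigenvalue 1.
A = np.array([[0.7, 0.1, 0.0],
              [0.2, 0.9, 0.2],
              [0.1, 0.0, 0.8]])

x = np.array([0.25, 0.20, 0.55])   # 2004 state: C, I, R fractions

for _ in range(1000):              # apply the 5-year transition repeatedly
    x = A @ x

# The limit state no longer changes: A x = x.
assert np.allclose(A @ x, x, atol=1e-10)
print(x)                           # limiting C, I, R fractions
```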

  15. Eigenvalues of Special Matrices Symmetric, Skew-Symmetric, and Orthogonal: A real square matrix A = [ajk] is called symmetric if transposition leaves it unchanged: (1) AT = A, thus akj = ajk, skew-symmetric if transposition gives the negative of A: (2) AT = −A, thus akj = −ajk, orthogonal if transposition gives the inverse of A: (3) AT = A−1.

  16. Eigenvalues of Special Matrices Theorem: Eigenvalues of Symmetric and Skew-Symmetric Matrices (a) The eigenvalues of a symmetric matrix are real. (b) The eigenvalues of a skew-symmetric matrix are pure imaginary or zero. Examples:
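A quick numerical check of both parts of the theorem, built from a randomly generated matrix (NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))

S = B + B.T   # symmetric:       S^T =  S
K = B - B.T   # skew-symmetric:  K^T = -K

# (a) all eigenvalues of a symmetric matrix are real
assert np.allclose(np.linalg.eigvals(S).imag, 0, atol=1e-8)
# (b) all eigenvalues of a skew-symmetric matrix are pure imaginary or zero
assert np.allclose(np.linalg.eigvals(K).real, 0, atol=1e-8)
```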

  17. Useful theorems We accept and use the following without proof: • The eigenvalues of a real matrix are real or occur in complex-conjugate pairs. • A has an inverse if and only if all its eigenvalues are nonzero. • For an invertible A with eigenvalues λ1, … , λn: the eigenvalues of A−1 are the inverses of those of A. • The sum of the main diagonal entries, called the trace of A, equals the sum of the eigenvalues of A. • The eigenvalues of … are …
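Several of these facts are easy to verify numerically; the sketch below uses a small hypothetical invertible matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])               # hypothetical invertible matrix

lams = np.linalg.eigvals(A)

# trace(A) equals the sum of the eigenvalues.
assert np.isclose(np.trace(A), lams.sum())

# No zero eigenvalue, so A has an inverse...
assert np.all(np.abs(lams) > 1e-12)
# ...and the eigenvalues of A^{-1} are the reciprocals of those of A.
inv_lams = np.linalg.eigvals(np.linalg.inv(A))
assert np.allclose(np.sort(inv_lams), np.sort(1 / lams))
```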

  18. Orthogonal transformations • Orthogonal transformations: transformations y = Ax where A is an orthogonal matrix. • Such a transformation assigns to each vector x in Rn a vector y in Rn. • Example: the plane rotation through an angle θ • is an orthogonal transformation.

  19. Theorem: Invariance of Inner Products An orthogonal transformation preserves the value of the inner product: that is, for any a and b in Rn, orthogonal n × n matrix A, and u = Aa, v = Ab, we have u · v = a · b. Hence the transformation also preserves the length or norm of any vector a in Rn.
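The theorem can be checked with the plane-rotation example from the previous slide (NumPy sketch; the angle and the vectors are arbitrary):

```python
import numpy as np

theta = 0.7                                       # arbitrary rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal matrix

a = np.array([1.0, 2.0])
b = np.array([-3.0, 0.5])
u, v = A @ a, A @ b                               # u = Aa, v = Ab

assert np.isclose(u @ v, a @ b)                   # inner product preserved
assert np.isclose(np.linalg.norm(u), np.linalg.norm(a))   # norm preserved
```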

  20. More theorems • Orthonormality of Column and Row Vectors: A real square matrix is orthogonal if and only if its column vectors a1, … , an (and also its row vectors) form an orthonormal system: • Determinant of an Orthogonal Matrix: Has the value +1 or −1. • Eigenvalues of an orthogonal matrix A: Real or complex conjugates in pairs and have absolute value 1.
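All three statements hold for any plane rotation, which makes a convenient numerical check (sketch; the angle is arbitrary):

```python
import numpy as np

theta = 1.2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Columns (and rows) form an orthonormal system: A^T A = I.
assert np.allclose(A.T @ A, np.eye(2))
# The determinant is +1 or -1 (here +1, a proper rotation).
assert np.isclose(abs(np.linalg.det(A)), 1.0)
# Every eigenvalue has absolute value 1.
assert np.allclose(np.abs(np.linalg.eigvals(A)), 1.0)
```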

  21. Example Orthogonal matrix: • Characteristic equation: • One of the eigenvalues must be real (why?) • What are the other eigenvalues? What is their norm?

  22. roadmap • Definitions and how to calculate eigenvalues and eigenvectors • A few examples with useful results • Eigenbases and their applications in: • Similar matrices • Diagonalization of matrices • Matrix Classification

  23. Basis of Eigenvectors Theorem: If an n × n matrix A has n distinct eigenvalues, then A has a basis of eigenvectors x1, … , xn for Rn. Theorem: A symmetric matrix has an orthonormal basis of eigenvectors for Rn. Example: λ1 = 5, λ2 = −3, λ3 = −3 (Slide 8)

  24. Similarity of Matrices Definition: An n × n matrix  is called similar to an n × n matrix A if  = P−1AP for some (nonsingular!) n × n matrix P. This transformation, which gives  from A, is called a similarity transformation. Theorem: If  is similar to A, then  has the same eigenvalues as A. Furthermore, if x is an eigenvector of A, then y = P−1x is an eigenvector of  corresponding to the same eigenvalue.
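Both claims are easy to verify numerically. The sketch below assumes the opening example’s matrix (eigenvalues 10 and 3, eigenvector [3 4]T for λ = 10) and an arbitrary nonsingular P:

```python
import numpy as np

A = np.array([[6.0, 3.0],
              [4.0, 7.0]])     # assumed matrix of the opening example
P = np.array([[1.0, 1.0],
              [0.0, 2.0]])     # any nonsingular matrix

A_hat = np.linalg.inv(P) @ A @ P            # similar to A

# Same eigenvalues...
assert np.allclose(np.sort(np.linalg.eigvals(A_hat)),
                   np.sort(np.linalg.eigvals(A)))

# ...and if x is an eigenvector of A, then y = P^{-1} x is one of A_hat.
x = np.array([3.0, 4.0])                    # eigenvector of A for lambda = 10
y = np.linalg.inv(P) @ x
assert np.allclose(A_hat @ y, 10 * y)
```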

  25. Example Let’s revisit the matrix in slide 2. Choosing a nonsingular matrix P gives Â = P−1AP. Then:

  26. Diagonalization of a Matrix Theorem: If an n × n matrix A has a basis of eigenvectors, then D = X−1AX is diagonal, with the eigenvalues of A as the entries on the main diagonal. Here X is the matrix with these eigenvectors as column vectors. Also: Dm = X−1AmX (m = 2, 3, … ).
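A minimal NumPy sketch of the theorem, using a hypothetical 2×2 matrix with distinct eigenvalues (6 and 1):

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])          # hypothetical; eigenvalues 6 and 1

lams, X = np.linalg.eig(A)          # columns of X are eigenvectors of A
D = np.linalg.inv(X) @ A @ X        # D = X^{-1} A X

# D is diagonal with the eigenvalues of A on the main diagonal.
assert np.allclose(D, np.diag(lams))

# D^m = X^{-1} A^m X, e.g. for m = 3:
assert np.allclose(np.diag(lams ** 3),
                   np.linalg.inv(X) @ np.linalg.matrix_power(A, 3) @ X)
```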

  27. Example Diagonalize: Solution:

  28. Quadratic forms Definition: A quadratic form Q in the components x1, … , xn of a vector x is a sum of n2 terms: A = [ajk] is called the coefficient matrix of the form. We may assume that A is symmetric (why?).

  29. Example Let Q be the given form. Here the cross-term coefficients satisfy 4 + 6 = 10 = 5 + 5, so they can be replaced by the symmetric pair 5 and 5. So:

  30. Manipulating quadratic forms • Say you are given something like this: • How can you convert it to a canonical form: • Applications are many! We will look into an important one.

  31. Manipulating quadratic forms • The symmetric coefficient matrix A has an orthonormal basis of eigenvectors (Theorem 2 on slide 23). So, if we take these as column vectors, we obtain a matrix X that is orthogonal, so that X−1 = XT: • A = XDX−1 = XDXT. • Substitution: Q = xTXDXTx. • Set XTx = y. Since X−1 = XT, we have X−1x = y and so x = Xy. • We also have xTX = (XTx)T = yT. • Now Q becomes • Q = yTDy = λ1y12 + λ2y22 + … + λnyn2

  32. Theorem Principal Axes Theorem: The substitution x = Xy transforms a quadratic form to the principal axes form or canonical form Q = yTDy = λ1y12 + λ2y22 + … + λnyn2, where λ1, … , λn are the (not necessarily distinct) eigenvalues of the (symmetric!) matrix A, and X is an orthogonal matrix with corresponding eigenvectors x1, … , xn, respectively, as column vectors.
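A NumPy sketch of the theorem with a hypothetical symmetric coefficient matrix; eigh returns an orthonormal eigenbasis, so its eigenvector matrix X satisfies X−1 = XT:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])          # hypothetical symmetric coefficient matrix

lams, X = np.linalg.eigh(A)         # orthonormal eigenvectors in the columns

x = np.array([1.5, -0.4])           # arbitrary test vector
Q = x @ A @ x                       # quadratic form x^T A x

y = X.T @ x                         # substitution x = X y, i.e. y = X^T x
# Principal axes form: Q = lambda_1 y_1^2 + lambda_2 y_2^2
assert np.isclose(Q, np.sum(lams * y ** 2))
```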

  33. Example What type of conic section does the following quadratic form represent? Solution. We have Q = xTAx, where the characteristic equation (17 − λ)2 − 152 = 0 has roots λ1 = 2, λ2 = 32. So: Q = 2y12 + 32y22. What is the direction of the principal axes in the x1x2-coordinates?
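Numerically (a sketch; the sign of the off-diagonal entry is an assumption, since the characteristic equation (17 − λ)2 − 152 = 0 fixes only its magnitude):

```python
import numpy as np

A = np.array([[ 17.0, -15.0],
              [-15.0,  17.0]])      # off-diagonal sign assumed

lams = np.linalg.eigvalsh(A)        # ascending order
assert np.allclose(lams, [2.0, 32.0])
# Both eigenvalues are positive, so Q = 2 y1^2 + 32 y2^2 = const
# describes an ellipse in the principal-axes coordinates.
```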

  34. Complex matrices A square matrix A = [akj] is called Hermitian if ĀT = A, that is, ākj = ajk; skew-Hermitian if ĀT = −A, that is, ākj = −ajk; unitary if ĀT = A−1. • These are generalizations of symmetric, skew-symmetric, and orthogonal matrices to complex spaces. • For example (theorem on invariance of inner products): the unitary transformation y = Ax with a unitary matrix A preserves the value of the inner product and the norm.

  35. Generalizing the Theorems • The eigenvalues of a Hermitian matrix (and thus of a symmetric matrix) are real. • The eigenvalues of a skew-Hermitian matrix (and thus of a skew-symmetric matrix) are pure imaginary or zero. • The eigenvalues of a unitary matrix (and thus of an orthogonal matrix) have absolute value 1.
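All three generalized statements can be checked on small complex matrices (NumPy sketch; the matrices are hypothetical examples of each class):

```python
import numpy as np

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])          # Hermitian: conj(H).T == H
K = 1j * H                             # skew-Hermitian: conj(K).T == -K
U = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)   # unitary: conj(U).T == inv(U)

assert np.allclose(np.linalg.eigvals(H).imag, 0, atol=1e-10)  # real
assert np.allclose(np.linalg.eigvals(K).real, 0, atol=1e-10)  # pure imaginary
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)         # |lambda| = 1
```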

  36. roadmap • Definitions and how to calculate eigenvalues and eigenvectors • A few examples with useful results • Eigenbases and their applications in: • Similar matrices • Diagonalization of matrices • Matrix Classification

  37. Matrix classification • Positive (semi-)definite, negative (semi-)definite, and indefinite matrices: • A Hermitian matrix A is said to be positive (negative) definite if the scalar x*Ax is strictly positive (negative) for every non-zero x, i.e., x*Ax > 0 (x*Ax < 0). • Replace > with ≥ (and < with ≤) to get the definition of positive (negative) semidefinite. • For instance: if x*Ax ≤ 0 for every x, then A is negative semidefinite. • * denotes the conjugate transpose; replace it with the transpose if A is real. • How does this connect to the “quadratic forms” we just studied?

  38. Revisiting the example • We reformulated the problem into a sum of quadratic terms: • Can we say anything about the sign of Q just by looking at this sum? • What does this say about A? • What are the coefficients in the sum? • What does this suggest?

  39. Matrix classification with eigenvalues • The Hermitian matrix A is: • Positive definite if and only if all of its eigenvalues are positive. • Positive semi-definite if and only if all of its eigenvalues are non-negative. • Negative definite if and only if all of its eigenvalues are negative. • Negative semi-definite if and only if all of its eigenvalues are non-positive. • Indefinite if and only if it has both positive and negative eigenvalues. What if the eigenvalues are complex?
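One way to turn this classification into code (a sketch; the helper classify and its tolerance are illustrative choices, not part of the slides):

```python
import numpy as np

def classify(A, tol=1e-10):
    """Classify a Hermitian matrix by the signs of its (real) eigenvalues."""
    lams = np.linalg.eigvalsh(A)        # eigvalsh: real for Hermitian input
    if np.all(lams > tol):
        return "positive definite"
    if np.all(lams >= -tol):
        return "positive semi-definite"
    if np.all(lams < -tol):
        return "negative definite"
    if np.all(lams <= tol):
        return "negative semi-definite"
    return "indefinite"

assert classify(np.array([[2.0, 0.0], [0.0, 1.0]])) == "positive definite"
assert classify(np.array([[1.0, 0.0], [0.0, 0.0]])) == "positive semi-definite"
assert classify(np.array([[1.0, 0.0], [0.0, -1.0]])) == "indefinite"
```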

  40. Examples • Classify the following matrices:

  41. EXTRA

  42. Some theorems • Theorem 1: The eigenvalues of a square matrix A are the roots of D(λ) = 0. • An n × n matrix has at least one eigenvalue and at most n numerically different eigenvalues. • Theorem 2: If w and x are eigenvectors of a matrix A corresponding to the same eigenvalue λ, so are w + x (provided x ≠ −w) and kx for any k ≠ 0. • The eigenvectors corresponding to one and the same eigenvalue λ of A, together with 0, form a vector space, called the eigenspace of A corresponding to that λ. • An eigenvector x is determined only up to a constant factor, so we can normalize x.
