
5.IV. Jordan Form



  1. 5.IV. Jordan Form
5.IV.1. Polynomials of Maps and Matrices
5.IV.2. Jordan Canonical Form

  2. 5.IV.1. Polynomials of Maps and Matrices
For any n×n matrix T, the (n²+1)-member set { I, T, T², …, T^(n²) } is linearly dependent, since it lies in the n²-dimensional space of n×n matrices. So there exist scalars c0, …, c(n²), not all zero, such that
c(n²)·T^(n²) + … + c1·T + c0·I = O.
Thus, every transformation exhibits a generalized nilpotency: the powers of a square matrix cannot climb forever without a "repeat".
Example 1.2:
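A quick numeric sketch of this dependence (the matrix T below is an arbitrary illustration, not one from the slides): stack the flattened powers I, T, …, T^(n²) and read off a null-space vector as the scalars c0, …, c(n²).

```python
import numpy as np

# The n^2 + 1 matrices I, T, T^2, ..., T^(n^2) live in the n^2-dimensional
# space of n x n matrices, so they must be linearly dependent.
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = T.shape[0]

# Rows of A are the flattened powers T^0, T^1, ..., T^(n^2).
A = np.array([np.linalg.matrix_power(T, k).flatten()
              for k in range(n * n + 1)])

# A null vector of A^T gives the dependence c_0 I + c_1 T + ... = O.
c = np.linalg.svd(A.T)[2][-1]
combo = sum(ck * np.linalg.matrix_power(T, k) for k, ck in enumerate(c))
print(np.round(combo, 10))   # ~ the zero matrix
```

Here the dependence already appears at degree 2 (the characteristic polynomial), but the n² + 1 count is what guarantees one always exists.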

  3. Definition 1.3:
• Given any polynomial f(x) = cn·xⁿ + … + c1·x + c0:
• If t : V → V is a linear transformation, then f(t) : V → V is the transformation cn·tⁿ + … + c1·t + c0·(id). [id is often omitted]
• If T is a square matrix, then f(T) is the matrix cn·Tⁿ + … + c1·T + c0·I.
→ Obviously, if T represents t, then f(T) represents f(t).
Definition 1.5: Minimal Polynomial
The minimal polynomial m(x) of a transformation t or a square matrix T is the polynomial of least degree with leading coefficient 1 such that m(t) = zero map or m(T) = O.
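Evaluating f(T) is a direct computation; a small helper (a sketch, with an assumed example matrix) shows the definition in action via Horner's scheme.

```python
import numpy as np

def poly_of_matrix(coeffs, T):
    """Evaluate f(T) = c_n T^n + ... + c_1 T + c_0 I by Horner's scheme.
    coeffs are [c_n, ..., c_1, c_0], highest degree first."""
    n = T.shape[0]
    result = np.zeros((n, n))
    for c in coeffs:
        result = result @ T + c * np.eye(n)
    return result

# Illustrative matrix with eigenvalues 2 and 3; f(x) = x^2 - 5x + 6
# = (x - 2)(x - 3) has both eigenvalues as roots, so f(T) = O.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(poly_of_matrix([1.0, -5.0, 6.0], T))   # zero matrix
```

Note the constant term c0 multiplies the identity matrix, exactly as "c0 (id)" in the definition.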

  4. Example 1.6 (see Example 1.5): m(x) = x² − √3·x + 1 is a minimal polynomial for the rotation by π/6 and its matrix. m(x) can be calculated by brute force (not recommended): try all monic polynomials of degree 1, then degree 2, …, solving for coefficients that make m(T) = O.
Lemma 1.7: Suppose that the polynomial f(x) = cn·xⁿ + … + c1·x + c0 factors as f(x) = cn·(x − λ1)^q1 ⋯ (x − λk)^qk.
If t is a linear map, then f(t) = cn·(t − λ1)^q1 ⋯ (t − λk)^qk.
If T is a square matrix, then f(T) = cn·(T − λ1·I)^q1 ⋯ (T − λk·I)^qk.
Proof: By induction.
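Lemma 1.7 can be spot-checked numerically (the matrix T below is an assumed illustration): evaluating f(x) = x² − 3x + 2 directly at T agrees with the factored form (T − I)(T − 2I), because scalar matrices commute with everything.

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # any square matrix works here
I = np.eye(2)

# Direct evaluation of f(T) for f(x) = x^2 - 3x + 2 ...
direct = T @ T - 3 * T + 2 * I
# ... and the factored form f(x) = (x - 1)(x - 2).
factored = (T - 1 * I) @ (T - 2 * I)
print(np.allclose(direct, factored))   # True
```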

  5. If a minimal polynomial m(x) for a map t factors as m(x) = (x − λ1)^q1 ⋯ (x − λk)^qk, then m(t) = (t − λ1)^q1 ⋯ (t − λk)^qk = zero map.
→ At least one of the factors sends some non-zero vectors to 0, i.e., at least some (actually all) of the λj's are eigenvalues of t. Ditto for any matrix representation T of t.
Theorem 1.8: Cayley-Hamilton (usually known as a corollary to Lemma 1.9)
If the characteristic polynomial of a transformation or square matrix factors into c(x) = (x − λ1)^p1 ⋯ (x − λk)^pk, then its minimal polynomial factors into m(x) = (x − λ1)^q1 ⋯ (x − λk)^qk, where 1 ≤ qi ≤ pi for each i.
Proof: Takes the next 3 lemmas.

  6. Lemma 1.9: (Often, this is called the Cayley-Hamilton Theorem)
If T is a square matrix with characteristic polynomial c(x), then c(T) = O.
Proof: Let C = T − xI, so that |C| = c(x). The entries of adj(C) are polynomials of degree ≤ n − 1. Hence we can write adj(C) = x^(n−1)·A(n−1) + … + x·A1 + A0, where the Aj's are all n×n matrices independent of x.
→ c(x)·I = |C|·I = C·adj(C) = (T − xI)·( x^(n−1)·A(n−1) + … + x·A1 + A0 ).
Equating coefficients of x^j gives n + 1 matrix equations. Right-multiplying the jth equation by T^j and adding everything gives c(T) = O.
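The lemma is easy to verify exactly with sympy (the matrix below is an assumed example; note sympy's `charpoly` uses det(xI − T), which has the same roots as |T − xI|):

```python
from sympy import Matrix, symbols, eye, zeros

x = symbols('x')
T = Matrix([[3, 1, 0],
            [0, 3, 0],
            [0, 0, 4]])

c = T.charpoly(x)          # characteristic polynomial det(xI - T)
result = zeros(3, 3)
for a in c.all_coeffs():   # coefficients, highest degree first
    result = result * T + a * eye(3)   # Horner evaluation of c(T)
print(result)              # the zero matrix, as Cayley-Hamilton predicts
```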

  7. Alternate phrasing of Lemma 1.9: A matrix or map satisfies its characteristic polynomial.
Lemma 1.10: Let f(x) be a polynomial. If f(T) = O, then f(x) is divisible by the minimal polynomial of T, i.e., any polynomial satisfied by T is divisible by T's minimal polynomial.
Proof: Let m(x) be minimal for T, so that m(T) = O. The Division Theorem for Polynomials gives f(x) = q(x)·m(x) + r(x), where deg r < deg m. If f(T) = O, then r(T) = O. Since m is minimal, r can only be the zero polynomial. QED.
Lemmas 1.9 & 1.10 → m(x) divides c(x), i.e., qj ≤ pj for each j. The proof of Theorem 1.8 is complete if there are no extra roots, i.e., no factors (x − λ) of m(x) with λ not among λ1, …, λk.
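The divisibility conclusion can be illustrated with an assumed example pair: for a matrix with m(x) = (x − 3)(x − 4) and c(x) = (x − 3)²(x − 4), polynomial division leaves zero remainder.

```python
from sympy import symbols, div

x = symbols('x')
# Assumed illustration: minimal and characteristic polynomials of a
# matrix such as diag-with-one-Jordan-block for eigenvalue 3.
m = (x - 3) * (x - 4)
c = (x - 3)**2 * (x - 4)

q, r = div(c, m, x)   # c(x) = q(x) m(x) + r(x)
print(q, r)           # quotient x - 3, remainder 0: m(x) divides c(x)
```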

  8. Lemma 1.11: Each linear factor of the characteristic polynomial c(x) of a square matrix T is also a linear factor of the minimal polynomial m(x).
Proof: By definition, every linear factor of c(x) is of the form x − λ, where λ is an eigenvalue of T. The proof is complete if x − λ is also a factor of m(x).
Let v be an eigenvector of T belonging to λ, so that T·v = λ·v.
→ f(T)·v = f(λ)·v for any polynomial f.
Since m(T) = O, we have m(T)·v = 0 = m(λ)·v.
Since v ≠ 0, we have m(λ) = 0, so x − λ divides m(x). QED.
Example 1.12: Find m(x) using the Cayley-Hamilton theorem.
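In the spirit of Example 1.12 (the matrix is an assumed illustration, not the one from the slides): with c(x) = (x − 3)³, Theorem 1.8 says m(x) = (x − 3)^k for some 1 ≤ k ≤ 3, so we simply test the candidates in order.

```python
from sympy import Matrix, Poly, symbols, eye, zeros

x = symbols('x')
T = Matrix([[3, 1, 0],
            [0, 3, 0],
            [0, 0, 3]])   # one 2x2 Jordan block and one 1x1, eigenvalue 3

def mat_poly(expr, M):
    """Evaluate the polynomial expr (in x) at the square matrix M."""
    result = zeros(M.rows, M.rows)
    for a in Poly(expr, x).all_coeffs():
        result = result * M + a * eye(M.rows)
    return result

# Test (x - 3)^k for k = 1, 2, 3; the first annihilating power is m(x).
k = next(k for k in range(1, 4) if mat_poly((x - 3)**k, T) == zeros(3, 3))
print(f"m(x) = (x - 3)^{k}")   # k = 2 for this block structure
```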

  9. Exercises 5.IV.1.
1. Find the minimal polynomial of each matrix. (a) (b)
2. What is wrong with this claimed proof of Lemma 1.9: "if c(x) = | T − xI | then c(T) = | T − TI | = 0"?
3. The only eigenvalue of a nilpotent map is zero. Show that the converse statement holds.

  10. 5.IV.2. Jordan Canonical Form
Lemma 2.1: A linear transformation whose only eigenvalue is zero is nilpotent.
Proof: If t : V → V with dim V = n has only λ = 0, then c(x) = xⁿ. Cayley-Hamilton Theorem → tⁿ = zero map. QED.
The canonical form for nilpotent matrices is one that is all zeroes except for blocks of subdiagonal ones. This can be made unique by setting some rules for the arrangement of blocks.
Lemma 2.2: If the matrices T − λI and N are similar, then T and N + λI are also similar, via the same change of basis matrices.
Proof: P(T − λI)P⁻¹ = N → PTP⁻¹ = P(T − λI)P⁻¹ + P(λI)P⁻¹ = N + λI. QED
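Lemma 2.2 in coordinates (P, N, and λ below are assumed illustrative choices): the key point is that the scalar matrix λI is fixed by every change of basis.

```python
from sympy import Matrix, eye

P = Matrix([[1, 1],
            [0, 1]])
N = Matrix([[0, 0],
            [1, 0]])           # canonical nilpotent block (subdiagonal 1)
lam = 3
T = P.inv() * (N + lam * eye(2)) * P   # build a T similar to N + lam*I

# The same P witnesses both similarities.
assert P * (T - lam * eye(2)) * P.inv() == N
assert P * T * P.inv() == N + lam * eye(2)
print("same change-of-basis matrix works for both")
```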

  11. Example 2.3:
→ T has only one eigenvalue, 3. → T − 3I is nilpotent.
Action of t − 3 on a string basis B is … Canonical form of t − 3 is … where … Canonical form of t is …

  12. Similarity computations

  13. Example 2.4:
Nullity( t − 4 ) = 2, Nullity( (t − 4)² ) = 3, Nullity( (t − 4)³ ) = 4
→ t − 4 is nilpotent with index of nilpotency 3.
Action of t − 4 on a string basis B is … Canonical form of t − 4 is … (Jordan blocks) Canonical form of t is …
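The nullity pattern 2, 3, 4 of Example 2.4 can be reproduced on a stand-in matrix (the slides' actual matrix is not in this transcript): nullity(M^k) grows until it reaches dim V = 4, and the growth pattern fixes the string-basis shape.

```python
import numpy as np
from numpy.linalg import matrix_rank, matrix_power

# M plays the role of T - 4I: nilpotent with Jordan blocks of sizes 3, 1,
# i.e., strings b1 -> b2 -> b3 -> 0 and b4 -> 0.
M = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)
n = M.shape[0]

nullities = [n - matrix_rank(matrix_power(M, k)) for k in range(1, 4)]
print(nullities)   # [2, 3, 4]: index of nilpotency 3
```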

  14. Example 2.5: The 3×3 matrices whose only eigenvalue is 1/2 separate into 3 similarity classes of canonical representatives: …
So far, we've found the Jordan form for maps & matrices with a single eigenvalue.
Next target: maps & matrices with multiple eigenvalues. This'll take the next 3 lemmas.
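The three canonical representatives, written with subdiagonal ones as in these slides, have block sizes (1,1,1), (2,1), and (3); they are pairwise non-similar because the nullity of J − ½I is a similarity invariant and differs across them.

```python
from sympy import Matrix, Rational, eye

h = Rational(1, 2)
J1 = h * eye(3)                      # blocks (1, 1, 1): the scalar matrix
J2 = Matrix([[h, 0, 0],
             [1, h, 0],
             [0, 0, h]])             # blocks (2, 1)
J3 = Matrix([[h, 0, 0],
             [1, h, 0],
             [0, 1, h]])             # one block (3)

# nullity(J - 1/2 I) distinguishes the three classes.
nullities = [3 - (J - h * eye(3)).rank() for J in (J1, J2, J3)]
print(nullities)   # [3, 2, 1]
```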

  15. Definition 2.6: Invariant Subspace
Let t : V → V be a transformation. Then a subspace M is t-invariant if t(M) ⊆ M.
Note: The restriction of a linear t to M remains a linear map M → M iff M is t-invariant.
Examples: The generalized null space N(t) and generalized range space R(t) are both t-invariant.
Proof: If v ∈ N(t), then ∃ k s.t. tⁿ(v) = 0 ∀ n ≥ k. Then t^(n+1)(v) = tⁿ(t(v)) = 0 → t(v) ∈ N(t).
If v ∈ R(t), then ∀ n ∃ w s.t. v = tⁿ(w). Then t(v) = t^(n+1)(w) = tⁿ(t(w)) ∈ R(t).
Hence, N( t − λi ) and R( t − λi ) are both (t − λi)-invariant. By definition, t − λi is nilpotent on N( t − λi ).
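A numeric sanity check of the first example (T below is an illustrative matrix, not from the slides): the generalized null space is null(Tⁿ) for n = dim V, and if Tⁿw = 0 then Tⁿ(Tw) = T(Tⁿw) = 0, so T maps it into itself.

```python
import numpy as np

T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 5.0]])
n = T.shape[0]
Tn = np.linalg.matrix_power(T, n)

# A basis of null(T^n) from the right singular vectors with zero
# singular value.
_, s, vh = np.linalg.svd(Tn)
null_basis = [vh[i] for i in range(n) if s[i] < 1e-10]

for w in null_basis:
    assert np.allclose(Tn @ (T @ w), 0)   # image stays in null(T^n)
print(len(null_basis), "generalized null space is T-invariant")
```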

  16. Lemma 2.7: A subspace M is t-invariant iff it is (t − λ)-invariant for every scalar λ. In particular, where λi is an eigenvalue of a linear transformation t, then for any other eigenvalue λj, the spaces N( t − λi ) and R( t − λi ) are both (t − λj)-invariant.
Proof of 1st sentence: If M is (t − λ)-invariant for every scalar λ, then setting λ = 0 shows M is t-invariant.
Conversely, if M is t-invariant, then m ∈ M → t(m) ∈ M. Since M is a subspace, it is closed under all linear combinations of its members. Hence t(m) − λm ∈ M, i.e., m ∈ M → (t − λ)(m) ∈ M. QED
Proof of 2nd sentence: Since N( t − λi ) and R( t − λi ) are (t − λi)-invariant, they are t-invariant, and hence also (t − λj)-invariant.

  17. Lemma 2.8: Given t : V → V, let N and R be t-invariant complementary subspaces of V. Then t can be represented by a block-diagonal matrix with square submatrices T1 and T2:
Rep(t) = [ T1 O ; O T2 ]
Proof: Let the bases of N and R be B_N and B_R, respectively. Since N and R are complementary, the concatenation B = B_N ⌢ B_R is a basis for V. Then Rep_{B,B}(t) has the desired form, because t maps each basis vector of N back into N and each basis vector of R back into R.

  18. Lemma 2.9: If T is a block-diagonal matrix with square submatrices T1 and T2, then |T| = |T1|·|T2|.
Proof: Let the dimensions of T, T1 & T2 be n×n, r×r, and (n−r)×(n−r), respectively. In the permutation expansion of |T|, every permutation with a non-zero contribution keeps the first r columns within the first r rows, so the sum factors into |T1|·|T2|.
Example 2.10: …
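Lemma 2.9 on a small assumed example (T1, T2 are arbitrary blocks): sympy's `diag` assembles the block-diagonal matrix, and the determinants multiply.

```python
from sympy import Matrix, diag

T1 = Matrix([[2, 1],
             [3, 4]])   # |T1| = 5
T2 = Matrix([[0, 1],
             [1, 0]])   # |T2| = -1

T = diag(T1, T2)        # 4x4 block-diagonal matrix
print(T.det(), T1.det() * T2.det())   # -5 -5
```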

  19. Lemma 2.11: If a linear transformation t : V → V has the characteristic polynomial c(x) = (x − λ1)^p1 ⋯ (x − λk)^pk, then
(1) V = N( t − λ1 ) ⊕ … ⊕ N( t − λk )
(2) dim N( t − λi ) = pi
Proof: Since dim(V) = p1 + … + pk, (1) is proved if we can show that (2) holds and that N( t − λi ) ∩ N( t − λj ) = { 0 } for all i ≠ j.
By Lemma 2.7, both N( t − λi ) and N( t − λj ) are t-invariant. Since the intersection of t-invariant subspaces is t-invariant, the restriction of t to M = N( t − λi ) ∩ N( t − λj ) is a linear transformation.
Now, both t − λi and t − λj are nilpotent on M. Therefore, any eigenvalue λ of t on M must satisfy both λ = λi and λ = λj. However, λi ≠ λj → t has no eigenvalue on M → M = { 0 }.

  20. To prove statement (2), fix the index i and write V = N( t − λi ) ⊕ R( t − λi ).
Lemma 2.8 → t is represented by a block-diagonal matrix with one block for each subspace.
Lemma 2.9 → c(x) = c_N(x)·c_R(x), where c_N and c_R are the characteristic polynomials of the restrictions of t to N( t − λi ) and R( t − λi ).
The uniqueness clause of the Fundamental Theorem of Arithmetic means that if c_N(x) = (x − λ1)^q1 ⋯ (x − λk)^qk, then qj ≤ pj for j = 1, …, k. The proof is complete if we can show qi = pi and qj = 0 for j ≠ i.
Now, the restriction of t − λi to M = N( t − λi ) is nilpotent on M. The only eigenvalue of t on M is therefore λi. Hence c_N(x) = (x − λi)^dim M, i.e., qj = 0 ∀ j ≠ i.
Consider next the restriction of t − λi to R = R( t − λi ). Since t − λi is nonsingular on R, λi is not an eigenvalue of t on R. Hence the full power (x − λi)^pi comes from c_N, i.e., qi = pi. QED

  21. Theorem 2.12: Any square matrix is similar to one in Jordan form, a block-diagonal matrix where each Jλ is a Jordan block of eigenvalue λ.
Proof: Simply translate the previous lemmas into matrix terms.
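The theorem can be exercised directly with sympy (the matrix is an assumed example; note sympy places the ones on the superdiagonal, while these slides use the subdiagonal convention, which is the same form with each string basis reversed).

```python
from sympy import Matrix

T = Matrix([[2, 1, 1],
            [0, 2, 0],
            [0, 0, 6]])

# sympy returns P, J with T = P J P^-1, J in Jordan form.
P, J = T.jordan_form()
print(J)                       # one 2x2 block for lambda=2, 1x1 for 6
assert T == P * J * P.inv()    # similarity witnessed exactly
```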

  22. Example 2.13:
dim N( t − 2 ) = 2. Restriction of t − 2 to N( t − 2 ) is nilpotent with string basis β1 → β2 → 0. Canonical form for this is …

  23. For λ = 6, dim N( t − 6 ) = 1. Restriction of t − 6 to N( t − 6 ) is nilpotent with string basis β3 → 0. Canonical form for this is …

  24. Example 2.14 (cf. Example 2.13):
Restriction of t − 2 to N( t − 2 ) is nilpotent of index 1, with string basis β1 → 0 & β2 → 0.
→ c(x) = (x − 2)²(x − 6). Canonical form for t − 2 is …

  25. For λ = 6, dim N( t − 6 ) = 1. Restriction of t − 6 to N( t − 6 ) is nilpotent with string basis β3 → 0. Canonical form for this is …

  26. Example 2.15: Restriction of t − 3 to N( t − 3 ) is nilpotent of index 2, with string basis β1 → β2 → 0 & β3 → 0.

  27. Restriction of t + 1 to N( t + 1 ) is nilpotent of index 1, with string basis β4 → 0 & β5 → 0. Jordan form of T is …

  28. Corollary 2.16: Every square matrix is similar to the sum of a diagonal matrix and a nilpotent matrix.
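Corollary 2.16 made concrete on a matrix already in Jordan form (an assumed example): split off the diagonal part D; what remains, N, is nilpotent, and J = D + N.

```python
from sympy import Matrix, diag, zeros

J = Matrix([[2, 0, 0],
            [1, 2, 0],
            [0, 0, 6]])   # Jordan form, subdiagonal convention

D = diag(*(J[i, i] for i in range(3)))   # diagonal part
N = J - D                                # remaining subdiagonal part

assert J == D + N
assert N**3 == zeros(3, 3)   # N is nilpotent
print(N)
```

Since every square matrix is similar to such a J, every square matrix is similar to a diagonal-plus-nilpotent sum.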

  29. Exercises 5.IV.2.
1. Find the Jordan form from the given data.
(a) The matrix T is 5×5 with the single eigenvalue 3. The nullities of the powers are: T − 3I has nullity two, (T − 3I)² has nullity three, (T − 3I)³ has nullity four, and (T − 3I)⁴ has nullity five.
(b) The matrix S is 5×5 with two eigenvalues. For the eigenvalue 2 the nullities are: S − 2I has nullity two, and (S − 2I)² has nullity four. For the eigenvalue −1, the nullities are: S + I has nullity one.
2. Prove that a matrix is diagonalizable if and only if its minimal polynomial is a product of distinct linear factors.
