
Presentation Transcript


  1. Dr. D. Y. Patil Pratishthan’s DR. D. Y. PATIL INSTITUTE OF ENGINEERING, MANAGEMENT & RESEARCH DEPARTMENT OF FIRST YEAR ENGINEERING Academic Year 2017-18 ( Sem I ) Subject : Engineering Mathematics I Name of faculty: Mr. Ramesh Rodalbandi

  2. MATRICES

  3. Matrices – definitions • Matrix notation • Equal matrices • Addition and subtraction of matrices • Multiplication of matrices • Transpose of a matrix • Special matrices • Rank • Determinant of a square matrix • Inverse of a square matrix • Normal form • Solution of a set of linear equations • Eigenvalues and eigenvectors

  4. Matrices – definitions A matrix is a set of real or complex numbers (called elements) arranged in rows and columns to form a rectangular array. A matrix having m rows and n columns is called an m × n matrix; for example, a matrix with 2 rows and 3 columns is a 2 × 3 matrix.
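A minimal numerical sketch (using NumPy; the entries are illustrative, not the slide's original example):

```python
import numpy as np

# A matrix with 2 rows and 3 columns, i.e. a 2 x 3 matrix.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)  # (2, 3): m = 2 rows, n = 3 columns
```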

  5. Matrices – definitions Row matrix A row matrix consists of a single row, i.e. it is of order 1 × n. Column matrix A column matrix consists of a single column, i.e. it is of order m × 1.

  6. Matrices – definitions Double suffix notation Each element of a matrix has its own address denoted by double suffixes, the first indicating the row and the second indicating the column. For example, the element aij lies in the ith row and the jth column, so the elements of a 3 × 4 matrix can be written as shown below.
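In this notation the general 3 × 4 matrix has the layout:

```latex
A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34}
\end{pmatrix}
```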

  7. Matrix notation Where there is no ambiguity, a matrix can be represented by a single general element in brackets or by a capital letter in bold type.

  8. Equal matrices Two matrices are equal if corresponding elements throughout are equal.

  9. Addition and subtraction of matrices Two matrices are added (or subtracted) by adding (or subtracting) corresponding elements; the two matrices must therefore be of the same order.
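For instance (a NumPy sketch; the matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A + B)  # element-wise sum:        [[ 6  8] [10 12]]
print(A - B)  # element-wise difference: [[-4 -4] [-4 -4]]
```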

  10. Multiplication of matrices Scalar multiplication Multiplication of two matrices

  11. Multiplication of matrices Scalar multiplication To multiply a matrix by a single number (a scalar), each individual element of the matrix is multiplied by that number; that is, kA is obtained by multiplying every element of A by k.

  12. Multiplication of matrices Multiplication of two matrices Two matrices can only be multiplied when the number of columns in the first matrix equals the number of rows in the second matrix. The ijth element of the product matrix is obtained by multiplying each element in the ith row of the first matrix by the corresponding element in the jth column of the second matrix and adding the products, as illustrated below.
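A worked check of the rule (a NumPy sketch; the matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[ 7,  8],
              [ 9, 10],
              [11, 12]])         # 3 x 2: columns of A (3) match rows of B (3)

C = A @ B                        # product is 2 x 2
print(C)
# [[ 58  64]
#  [139 154]]
# e.g. c11 = 1*7 + 2*9 + 3*11 = 58
```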

  13.–14. Multiplication of matrices Multiplication of two matrices (worked examples)

  15. Transpose of a matrix If a new matrix is formed by interchanging rows and columns, the new matrix is called the transpose of the original matrix, written A’.
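For example (a NumPy sketch with an illustrative matrix):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 x 3

print(A.T)                  # transpose: rows and columns interchanged, now 3 x 2
# [[1 4]
#  [2 5]
#  [3 6]]
```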

  16. Special matrices Square matrix Diagonal matrix Unit matrix Null matrix Symmetric matrix Skew symmetric matrix Orthogonal matrix

  17. Special matrices Square matrix A square matrix is of order m × m, i.e. it has as many rows as columns. A square matrix is symmetric if A = A’, and skew-symmetric if A = −A’.

  18. Special matrices Diagonal matrix A diagonal matrix is a square matrix with all elements zero except those on the leading diagonal.

  19. Special matrices Unit matrix A unit matrix (denoted I) is a diagonal matrix with all elements on the leading diagonal equal to unity. The product of a matrix A and the unit matrix is the matrix A itself: AI = IA = A.

  20. Special matrices Null matrix A null matrix (denoted 0) is one whose elements are all zero. Notice that A + 0 = A and A0 = 0A = 0, but that if AB = 0 we cannot deduce that A = 0 or B = 0.
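A quick illustration of the last point (a NumPy sketch; the matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 0],
              [0, 0]])
B = np.array([[0, 0],
              [0, 1]])

print(A @ B)   # [[0 0]
               #  [0 0]]  -- the product is the null matrix,
               #             yet neither A nor B is null
```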

  21. Special matrices Symmetric matrix A square matrix A is said to be a symmetric matrix if A = A’. In a symmetric matrix aij = aji for all i, j.

  22. Special matrices Skew-symmetric matrix A square matrix A is said to be a skew-symmetric matrix if A = −A’. In a skew-symmetric matrix aij = −aji for all i, j (so every element on the leading diagonal is zero).

  23. Special matrices Orthogonal matrix A square matrix A is said to be an orthogonal matrix if AA’ = A’A = I. If A is orthogonal then A⁻¹ = A’, and det A = ±1.
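For example (a NumPy sketch using a rotation matrix, which is orthogonal; the angle is illustrative):

```python
import numpy as np

t = np.pi / 6                                 # any angle
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])       # 2 x 2 rotation matrix

print(np.allclose(A @ A.T, np.eye(2)))        # True: A A' = I
print(np.allclose(A.T, np.linalg.inv(A)))     # True: A' = A^-1
print(np.linalg.det(A))                       # 1.0 (up to rounding): det A = +/-1
```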

  24. Singular matrix Every square matrix has an associated determinant. The determinant of a matrix is equal to the determinant of its transpose. A matrix whose determinant is zero is called a singular matrix.

  25. Rank of a matrix A matrix is said to be of rank r if (1) there is at least one minor of order r which is not equal to zero, and (2) every minor of order (r + 1) is equal to zero. The rank of a matrix A is the maximum order of its non-vanishing minors and is denoted P(A) = r. If a matrix has a non-zero minor of order r, then P(A) ≥ r; if all its minors of order (r + 1) are zero, then P(A) ≤ r. If A is an m × n matrix then P(A) ≤ min(m, n).
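A numerical check (a NumPy sketch; the matrix is illustrative):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],     # row 2 = 2 * row 1
              [1, 1, 1]])

print(np.linalg.matrix_rank(A))   # 2: there is a non-zero 2x2 minor,
                                  #    but the only 3x3 minor, |A|, is zero
print(np.linalg.det(A))           # ~0.0 (up to rounding): A is singular
```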

  26. Cofactors Each element aij of a square matrix has a minor, which is the value of the determinant obtained from the matrix after deleting the ith row and jth column to which the element belongs. The cofactor of aij is then given as the minor of aij multiplied by (−1)^(i+j).

  27. Adjoint of a square matrix Let the square matrix C be constructed from the square matrix A so that the elements of C are the respective cofactors of the elements of A. Then the transpose of C is called the adjoint of A, denoted by adj A.

  28. Determinant of a square matrix In linear algebra, the determinant is a value that can be computed from the elements of a square matrix. The determinant of a matrix A is denoted det(A), det A, or |A|. Geometrically, it can be viewed as the scaling factor of the linear transformation described by the matrix. For 2 × 2 and 3 × 3 matrices the determinant may be computed directly from the elements, as shown below.
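The standard expansions are:

```latex
\det\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
  = a_{11}a_{22} - a_{12}a_{21},
\qquad
\det\begin{pmatrix}
  a_{11} & a_{12} & a_{13} \\
  a_{21} & a_{22} & a_{23} \\
  a_{31} & a_{32} & a_{33}
\end{pmatrix}
  = a_{11}(a_{22}a_{33} - a_{23}a_{32})
  - a_{12}(a_{21}a_{33} - a_{23}a_{31})
  + a_{13}(a_{21}a_{32} - a_{22}a_{31})
```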

  29. Inverse of a square matrix If each element of the adjoint of a square matrix A is divided by the determinant of A, the resulting matrix is called the inverse of A, denoted by A⁻¹; that is, A⁻¹ = adj A / det A. Note: if det A = 0 (A is singular) the inverse does not exist.
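A sketch of the adjoint construction for a 2 × 2 case, checked against NumPy's built-in inverse (the matrix is illustrative):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

detA = np.linalg.det(A)                   # 4*6 - 7*2 = 10, non-zero, so A⁻¹ exists
adjA = np.array([[ A[1, 1], -A[0, 1]],
                 [-A[1, 0],  A[0, 0]]])   # adjoint = transpose of the cofactor matrix

A_inv = adjA / detA
print(A_inv)
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```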

  30. Inverse of a square matrix Product of a square matrix and its inverse The product of a square matrix and its inverse is the unit matrix: AA⁻¹ = A⁻¹A = I.

  31. Normal form By performing elementary transformations, any non-zero matrix A can be reduced to one of the following four forms, called the normal form of A (Ir denotes the r × r unit matrix): • [Ir] • [Ir 0] • [Ir / 0] (Ir with a zero block below it) • [Ir 0 / 0 0] (Ir in the top-left corner with zero blocks elsewhere)

  32. Solution of a set of linear equations A set of n simultaneous linear equations in n unknowns can be written in matrix form as AX = B, where A is the n × n matrix of coefficients, X is the column matrix of unknowns and B is the column matrix of constants.

  33. Solution of a set of linear equations Since A⁻¹A = I, premultiplying both sides of AX = B by A⁻¹ gives the solution X = A⁻¹B (provided A is non-singular).

  34. Solution of a set of linear equations Gaussian elimination method for solving a set of linear equations Given the system AX = B, create the augmented matrix [A | B] by appending the column of constants B to the coefficient matrix A.

  35. Solution of a set of linear equations Gaussian elimination method for solving a set of linear equations Eliminate the elements other than a11 from the first column by subtracting a21/a11 times the first row from the second row, a31/a11 times the first row from the third row, and so on. This gives a new matrix in which every element of the first column below a11 is zero.

  36. Solution of a set of linear equations Gaussian elimination method for solving a set of linear equations This process is repeated to eliminate the elements below the diagonal in the second and subsequent columns, until an augmented matrix in upper-triangular form is arrived at.

  37. Solution of a set of linear equations Gaussian elimination method for solving a set of linear equations From this augmented matrix we revert to the equivalent upper-triangular system of equations.

  38. Solution of a set of linear equations Gaussian elimination method for solving a set of linear equations From this system the solution is derived by back-substitution, working upwards from the bottom row and starting with the last unknown; a sketch of the whole procedure follows.
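A compact sketch of the method (a NumPy implementation without pivoting or other refinements; the helper name gauss_solve and the example system are illustrative):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by forward elimination on the augmented matrix,
    then back-substitution. Assumes no zero pivots are encountered."""
    A = A.astype(float)
    B = np.hstack([A, b.reshape(-1, 1)])   # augmented matrix [A | b]
    n = len(b)

    # Forward elimination: clear every element below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = B[i, k] / B[k, k]
            B[i, k:] -= factor * B[k, k:]

    # Back-substitution: work upwards from the bottom row.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (B[i, n] - B[i, i + 1:n] @ x[i + 1:]) / B[i, i]
    return x

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

print(gauss_solve(A, b))        # [ 2.  3. -1.]
print(np.linalg.solve(A, b))    # same result from the library solver
```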

  39. Solution of a set of linear equations Non-homogeneous equations: for the system AX = B, if B is not a null (zero) matrix, the system is known as a non-homogeneous system of equations. Homogeneous equations: if B is the null matrix, the system AX = 0 is known as a homogeneous system of equations.

  40. Solution of a set of linear equations Condition for consistency of non-homogeneous equations: consider a system of m equations in n unknowns given by AX = B, where m is the total number of equations and n the total number of unknowns. Case 1 (m ≠ n): (1) the system is consistent if the rank of the coefficient matrix A equals the rank of the augmented matrix, i.e. P(A) = P(A,B); (2) if P(A) ≠ P(A,B) the system is inconsistent and has no solution; (3) the system has a unique solution if P(A) = P(A,B) = n, the total number of unknowns; (4) the system has an infinite number of solutions if P(A) = P(A,B) < n. Case 2 (m = n > 3): (1) if P(A) = P(A,B) = n the system is consistent and possesses a unique solution; (2) if P(A) ≠ P(A,B) the system is inconsistent and possesses no solution; (3) if P(A) = P(A,B) = r < n the system is consistent and possesses an infinite number of solutions.

  41. Solution of a set of linear equations Condition for consistency of non-homogeneous equations: Case 3 (m = n = 3): find |A|. If |A| ≠ 0 the system is consistent, A⁻¹ exists, and the system possesses a unique solution given by X = A⁻¹B. If |A| = 0 the system is either inconsistent or possesses an infinite number of solutions, and this is decided by applying the method of reduction.

  42. Solution of a set of linear equations Definition of linearly dependent vectors: let x1, x2, …, xn be a system of n row (or column) matrices of the same order (also called vectors). If there exist n scalars c1, c2, …, cn, not all zero, such that c1x1 + c2x2 + … + cnxn = 0, then the system of n vectors is said to be linearly dependent. Definition of linearly independent vectors: the system of n vectors x1, x2, …, xn is said to be linearly independent if every relation of the type c1x1 + c2x2 + … + cnxn = 0 implies c1 = c2 = … = cn = 0.
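A rank-based check (a NumPy sketch; the vectors are illustrative): n vectors are linearly independent exactly when the matrix having them as rows has rank n.

```python
import numpy as np

x1 = np.array([1, 2, 3])
x2 = np.array([0, 1, 4])
x3 = np.array([1, 3, 7])          # x3 = x1 + x2, so the set {x1, x2, x3} is dependent

M = np.vstack([x1, x2, x3])
print(np.linalg.matrix_rank(M))                      # 2 < 3 -> linearly dependent
print(np.linalg.matrix_rank(np.vstack([x1, x2])))    # 2     -> x1, x2 are independent
```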

  43. Eigenvalues and eigenvectors Equations of the form Ax = λx, where A is a square matrix and λ is a number (scalar), can have non-trivial solutions (x ≠ 0); such vectors x are called eigenvectors or characteristic vectors of A. The corresponding values of λ are called eigenvalues, characteristic values or latent roots of A.

  44. Eigenvalues and eigenvectors Expressed as a set of separate equations, Ax = λx becomes n simultaneous linear equations in the n components of x, one for each row of A.

  45. Eigenvalues and eigenvectors These can be rewritten as (A − λI)x = 0, which means that, for non-trivial solutions, det(A − λI) = 0. This is the characteristic equation of A.

  46. Eigenvalues and eigenvectors The expansion of this determinant can be written using a short-cut method as follows. For a square matrix of order 2 × 2 the characteristic equation is λ² − S1λ + |A| = 0, where S1 is the sum of the minors of order one along the main diagonal of A, i.e. S1 = a11 + a22, and |A| = a11a22 − a12a21.

  47. Eigenvalues and eigenvectors For a square matrix of order 3 × 3 the characteristic equation is λ³ − S1λ² + S2λ − |A| = 0, where S1 = trace(A) = a11 + a22 + a33, S2 = (minor of a11) + (minor of a22) + (minor of a33) = (a22a33 − a23a32) + (a11a33 − a13a31) + (a11a22 − a12a21), and |A| = a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31).
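A numerical check of the 3 × 3 short cut (a NumPy sketch; the matrix and the helper principal_minor are illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

def principal_minor(M, i):
    """Determinant of M with its ith row and ith column deleted."""
    return np.linalg.det(np.delete(np.delete(M, i, axis=0), i, axis=1))

S1 = np.trace(A)                                   # a11 + a22 + a33
S2 = sum(principal_minor(A, i) for i in range(3))  # sum of the diagonal minors
detA = np.linalg.det(A)

# Roots of lambda^3 - S1*lambda^2 + S2*lambda - |A| = 0 ...
print(np.sort(np.roots([1, -S1, S2, -detA])))
# ... agree with the eigenvalues computed directly:
print(np.sort(np.linalg.eigvals(A)))
# both give 1, 2 and 4 (up to rounding)
```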

  48. Eigenvalues and eigenvectors Properties of eigenvectors: • If X is an eigenvector of a matrix A corresponding to an eigenvalue λ, so is KX for any scalar K ≠ 0; thus the eigenvector corresponding to an eigenvalue is not unique. • If λ1, λ2, …, λn are distinct eigenvalues of an n × n matrix, then the corresponding eigenvectors X1, X2, …, Xn form a linearly independent set. • Corresponding to n distinct eigenvalues we get n independent eigenvectors. • An n × n matrix may have n linearly independent eigenvectors, or it may have fewer than n. • An eigenvector of a square matrix cannot correspond to two distinct eigenvalues.

  49. Eigenvalues and eigenvectors Properties of eigenvalues: 1. Trace of A: the sum of the entries on the main diagonal of an n × n matrix A is called the trace of A; thus trace(A) = a11 + a22 + … + ann. 2. The sum of the eigenvalues of a matrix equals the sum of the elements of its principal diagonal, i.e. trace(A) = a11 + a22 + … + ann = λ1 + λ2 + … + λn. 3. The eigenvalues of an upper or lower triangular matrix are the elements on its main diagonal. 4. The product of the eigenvalues of a matrix equals the determinant of the matrix, i.e. λ1 λ2 … λn = |A|. 5. If λ1, λ2, …, λn are the eigenvalues of A, then 1/λ1, 1/λ2, …, 1/λn are the eigenvalues of A⁻¹ (provided A is non-singular). 6. The matrix KA has the eigenvalues Kλ1, Kλ2, …, Kλn. 7. The eigenvalues of A and A’ are the same.

  50. Eigenvalues and eigenvectors Eigenvalues To find the eigenvalues of a square matrix A, solve the characteristic equation det(A − λI) = 0; its roots are the eigenvalues of A, as in the sketch below.
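A worked check (a NumPy sketch; the matrix is illustrative, not the slide's original example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Characteristic equation: lambda^2 - S1*lambda + |A| = 0
#                        = lambda^2 - 7*lambda + 10 = (lambda - 2)(lambda - 5)
values, vectors = np.linalg.eig(A)
print(np.sort(values))   # [2. 5.]: the eigenvalues
print(vectors)           # columns are the corresponding eigenvectors
```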
