
# Refresher: Vector and Matrix Algebra







### Refresher: Vector and Matrix Algebra

Mike Kirkpatrick

Department of Chemical Engineering

FAMU-FSU College of Engineering

• Basics:

• Operations on vectors and matrices

• Linear systems of algebraic equations

• Gauss elimination

• Matrix rank, existence of a solution

• Inverse of a matrix

• Determinants

• Eigenvalues and Eigenvectors

• applications

• diagonalization


• Special matrix properties

• symmetric, skew-symmetric, and orthogonal matrices

• Hermitian, skew-Hermitian, and unitary matrices

• A matrix is a rectangular array of numbers (or functions).

• The matrix shown above is of size m×n. Note that this designates first the number of rows, then the number of columns.

• The elements of a matrix, here represented by the letter ‘a’ with subscripts, can consist of numbers, variables, or functions of variables.

• A vector is simply a matrix with either one row or one column. A matrix with one row is called a row vector, and a matrix with one column is called a column vector.

• Transpose: A row vector can be changed into a column vector and vice-versa by taking the transpose of that vector. e.g.:

• Matrix addition is only possible between two matrices which have the same size.

• The operation is done simply by adding the corresponding elements. e.g.:

• Multiplication of a matrix or a vector by a scalar is also straightforward:

• Taking the transpose of a matrix is similar to that of a vector:

• The diagonal elements in the matrix are unaffected, but the other elements are switched. A matrix which is the same as its own transpose is called symmetric, and one which is the negative of its own transpose is called skew-symmetric.
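A short NumPy sketch of these element-wise operations and of the transpose (NumPy and the sample values are my addition, not part of the slides):

```python
import numpy as np

# Matrix addition requires equal sizes; corresponding elements are added.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = A + B                # [[6, 8], [10, 12]]
D = 2.0 * A              # scalar multiplication: every element doubled

# The transpose swaps rows and columns; the diagonal is unaffected.
S = np.array([[1.0, 7.0], [7.0, 3.0]])
print(np.array_equal(S.T, S))    # symmetric: S equals its own transpose

K = np.array([[0.0, 2.0], [-2.0, 0.0]])
print(np.array_equal(K.T, -K))   # skew-symmetric: transpose is the negative
```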

• The multiplication of a matrix into another matrix is not possible for all pairs of matrices, and the operation is not commutative:

AB ≠ BA in general

• In order to multiply two matrices, the first matrix must have the same number of columns as the second matrix has rows.

• So, to form the product C = AB, the matrix A must have as many columns as the matrix B has rows.

• The resulting matrix C will have the same number of rows as did A and the same number of columns as did B.

• The operation is done as follows:

using index notation: cᵢⱼ = Σₖ aᵢₖ bₖⱼ

• for example:
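A minimal NumPy sketch of the cᵢⱼ = Σₖ aᵢₖ bₖⱼ rule (the values here are hypothetical, not the slide's own example):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2x3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])           # 3x2: columns of A match rows of B

C = A @ B                        # 2x2: rows of A, columns of B
print(C)                         # [[4, 5], [10, 11]]

# Multiplication is not commutative; here BA even has a different shape.
D = B @ A                        # 3x3
print(C.shape, D.shape)
```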

• One of the most important applications of matrices is solving linear systems of equations, which appear in many different problems, including electrical networks, statistics, and numerical methods for differential equations.

• A linear system of equations can be written:

a₁₁x₁ + … + a₁ₙxₙ = b₁

a₂₁x₁ + … + a₂ₙxₙ = b₂

⋮

aₘ₁x₁ + … + aₘₙxₙ = bₘ

• This is a system of m equations and n unknowns.

• The system of equations shown on the previous slide can be written more compactly as a matrix equation:

Ax=b

• where the matrix A contains all the coefficients of the unknown variables from the LHS, x is the vector of unknowns, and b is the vector of numbers from the RHS.

• Although these types of problems can be solved easily using a number of computational packages, the principle of Gaussian elimination should be understood.

• The principle is to successively eliminate variables from the equations until the system is in ‘triangular’ form, that is, the matrix A will contain all zeros below the diagonal.

• A very simple example:

-x + 2y = 4

3x + 4y =38

first, divide the second equation by -2, then add to the first equation to eliminate y; the resulting system is:

-x + 2y = 4

-2.5x = -15 ⇒ x = 6

⇒ y = 5
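The same 2×2 system can be checked numerically. As a sketch (NumPy is my addition; `np.linalg.solve` uses an LU factorization, i.e. Gaussian elimination with pivoting, rather than the hand steps above):

```python
import numpy as np

# The slide's system:
#   -x + 2y = 4
#   3x + 4y = 38
A = np.array([[-1.0, 2.0],
              [ 3.0, 4.0]])
b = np.array([4.0, 38.0])

x = np.linalg.solve(A, b)   # LU-based elimination solver
print(x)                    # [6., 5.]
```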

• The rank of a matrix is simply the number of linearly independent row vectors in that matrix.

• The transpose of a matrix has the same rank as the original matrix.

• To find the rank of a matrix by hand, use Gauss elimination; the linearly dependent row vectors drop out, leaving only the linearly independent vectors, whose number is the rank.
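A quick numerical sketch of rank (NumPy and the sample matrix are my addition; `matrix_rank` uses the SVD rather than Gauss elimination, but gives the same answer):

```python
import numpy as np

# The third row is the sum of the first two, so only two rows are independent.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [1.0, 3.0, 7.0]])

print(np.linalg.matrix_rank(A))    # 2
print(np.linalg.matrix_rank(A.T))  # the transpose has the same rank
```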

• The inverse of the matrix A is denoted as A⁻¹.

• By definition, AA⁻¹ = A⁻¹A = I, where I is the identity matrix.

• Theorem: The inverse of an n×n matrix A exists if and only if rank(A) = n.

• Gauss-Jordan elimination can be used to find the inverse of a matrix by hand.
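A small check of the definition AA⁻¹ = A⁻¹A = I (NumPy and the sample matrix are my addition; `np.linalg.inv` computes the inverse numerically rather than by hand Gauss-Jordan steps):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])          # det = 1, so rank is 2 and A is invertible
A_inv = np.linalg.inv(A)
I = np.eye(2)

print(np.allclose(A @ A_inv, I))    # True
print(np.allclose(A_inv @ A, I))    # True
```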

• Determinants are useful in eigenvalue problems and differential equations.

• Can be found only for square matrices.

• Simple example: 2nd-order determinant:

det(A) = a₁₁a₂₂ - a₁₂a₂₁

3rd order determinant

• The determinant of a 3×3 matrix can be found by expanding along the first row:

det(A) = a₁₁(a₂₂a₃₃ - a₂₃a₃₂) - a₁₂(a₂₁a₃₃ - a₂₃a₃₁) + a₁₃(a₂₁a₃₂ - a₂₂a₃₁)

• Each parenthesized term on the RHS is a 2nd-order determinant and can be evaluated as shown above.

• Cramer’s theorem: If the determinant of a system of n equations with n unknowns is nonzero, that system has precisely one solution.

• det(AB)=det(BA)=det(A)det(B)
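A numerical sketch of the 2nd-order formula and the product rule (NumPy and the sample values are my addition):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])

# 2nd-order determinant: a11*a22 - a12*a21
print(np.isclose(np.linalg.det(A), 1 * 4 - 2 * 3))        # True (det = -2)

# det(AB) = det(BA) = det(A) det(B)
prod = np.linalg.det(A) * np.linalg.det(B)
print(np.isclose(np.linalg.det(A @ B), prod))             # True
print(np.isclose(np.linalg.det(B @ A), prod))             # True
```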

• Let A be an n×n matrix and consider the vector equation:

Ax = x

• A value of λ for which this equation has a solution x ≠ 0 is called an eigenvalue of the matrix A.

• The corresponding solutions x are called the eigenvectors of the matrix A.

Ax=x

Ax - x = 0

(A- I)x = 0

• This is a homogeneous linear system, homogeneous meaning that the RHS are all zeros.

• For such a system, a theorem states that a nontrivial solution exists if and only if det(A - λI) = 0.

• The eigenvalues are found by solving the above equation.

• Simple example: find the eigenvalues of the matrix A with rows (-5, 2) and (2, -2).

• The eigenvalues are given by the equation det(A - λI) = 0:

(-5 - λ)(-2 - λ) - 4 = λ² + 7λ + 6 = (λ + 1)(λ + 6) = 0

• So, the roots of the last equation are -1 and -6. These are the eigenvalues of matrix A.

• For each eigenvalue, λ, there is a corresponding eigenvector, x.

• This vector can be found by substituting one of the eigenvalues back into the original equation, Ax = λx; for the example:

-5x₁ + 2x₂ = λx₁

2x₁ - 2x₂ = λx₂

• Using =-1, we get x2 = 2x1, and by arbitrarily choosing x1 = 1, the eigenvector corresponding to =-1 is:

and similarly,
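The worked example can be verified with NumPy's eigensolver (NumPy is my addition; note that `eig` returns unit-length eigenvectors, i.e. scalar multiples of the hand-derived ones):

```python
import numpy as np

# The example matrix from the slides.
A = np.array([[-5.0, 2.0],
              [ 2.0, -2.0]])

vals, vecs = np.linalg.eig(A)
print(np.sort(vals))             # [-6., -1.]

# Each column of vecs satisfies A x = lambda x.
for lam, x in zip(vals, vecs.T):
    print(np.allclose(A @ x, lam * x))   # True
```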

• A matrix is called symmetric if:

Aᵀ = A

• A skew-symmetric matrix is one for which:

Aᵀ = -A

• An orthogonal matrix is one whose transpose is also its inverse:

Aᵀ = A⁻¹

• If a matrix contains complex (imaginary) elements, it is often useful to take its complex conjugate. The complex conjugate of a matrix A is written Ā, and is formed by taking the complex conjugate of each element.

• Some special complex matrices are as follows:

Hermitian: T = A

Skew-Hermitian: T = -A

Unitary: T = A-1