Linear Algebra Review CS479/679 Pattern Recognition Dr. George Bebis
n-dimensional Vector • An n-dimensional vector v is denoted as follows (a column vector): v = (x1, x2, . . . , xn)T • The transpose vT is denoted as follows (a row vector): vT = (x1, x2, . . . , xn)
Inner (or dot) product • Given vT = (x1, x2, . . . , xn) and wT = (y1, y2, . . . , yn), their dot product is defined as follows: v . w = x1y1 + x2y2 + . . . + xnyn (a scalar) or, equivalently, v . w = vTw
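A minimal numerical sketch of the dot product (NumPy is assumed here; it is not part of the original slides):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])     # vT = (1, 2, 3)
w = np.array([4.0, 5.0, 6.0])     # wT = (4, 5, 6)

# Sum of element-wise products: x1*y1 + x2*y2 + x3*y3
dot_explicit = np.sum(v * w)

# Equivalent matrix form: vT w
dot_builtin = v @ w

print(dot_explicit, dot_builtin)  # both print 32.0
```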
Orthogonal / Orthonormal vectors • A set of vectors x1, x2, . . . , xn is orthogonal if xiTxj = 0 for all i ≠ j • A set of vectors x1, x2, . . . , xn is orthonormal if xiTxj = 0 for all i ≠ j and xiTxi = 1 for all i
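A quick check of these conditions, assuming NumPy and two illustrative vectors:

```python
import numpy as np

# Two candidate vectors stored as the columns of Q (illustrative values)
Q = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# For an orthonormal set, the matrix of pairwise dot products QT Q is the identity
print(np.allclose(Q.T @ Q, np.eye(2)))  # True
```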
Linear combinations • A vector v is a linear combination of the vectors v1, ..., vk if: v = c1v1 + c2v2 + . . . + ckvk, where c1, ..., ck are constants. • Example: any vector in R3 can be expressed as a linear combination of the unit vectors i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1)
Space spanning • A set of vectors S = (v1, v2, . . . , vk) spans some space W if every vector in W can be written as a linear combination of the vectors in S • Example: the unit vectors i, j, and k span R3
Linear dependence • A set of vectors v1, ..., vk are linearly dependent if at least one of them is a linear combination of the others: vj = c1v1 + . . . + cj-1vj-1 + cj+1vj+1 + . . . + ckvk (i.e., vj does not appear on the right side)
Linear independence • A set of vectors v1, ..., vk is linearly independent if no vector vj can be represented as a linear combination of the remaining vectors, i.e., c1v1 + c2v2 + . . . + ckvk = 0 implies c1 = c2 = . . . = ck = 0 • Example: for two linearly independent vectors v1 and v2, c1v1 + c2v2 = 0 only when c1 = c2 = 0
Vector basis • A set of vectors v1, ..., vk forms a basis in some vector space W if: (1) (v1, ..., vk) span W (2) (v1, ..., vk) are linearly independent • Standard bases: R2: i, j R3: i, j, k Rn: the unit vectors e1, e2, . . . , en (ei has a 1 in position i and 0 elsewhere)
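As a numerical aside (a sketch assuming NumPy; the vectors are illustrative, not from the slides), n vectors form a basis of Rn exactly when the matrix having them as columns is non-singular:

```python
import numpy as np

# Candidate basis vectors of R3 as the columns of V (illustrative values)
V = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])

# Non-zero determinant <=> columns are linearly independent <=> they form a basis of R3
print(np.linalg.det(V) != 0)        # True

# Any vector in R3 can then be written as a linear combination V @ c
target = np.array([2.0, 3.0, 5.0])
c = np.linalg.solve(V, target)      # coefficients c1, c2, c3
print(np.allclose(V @ c, target))   # True
```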
Matrix Operations • Matrix addition/subtraction • Add/Subtract corresponding elements. • Matrices must be of the same size. • Matrix multiplication: an (m x n) matrix times a (q x p) matrix gives an (m x p) matrix • Condition: n = q
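A minimal sketch of the dimension rule, assuming NumPy (not part of the original slides):

```python
import numpy as np

A = np.ones((2, 3))   # m x n = 2 x 3
B = np.ones((3, 4))   # q x p = 3 x 4  (n = q, so the product is defined)

C = A @ B
print(C.shape)        # (2, 4), i.e., m x p

# Swapping the operands violates the condition (4 != 2) and raises an error
try:
    B @ A
except ValueError as err:
    print("incompatible shapes:", err)
```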
Symmetric Matrices • A matrix A is symmetric if A = AT, i.e., aij = aji for all i, j • Example: the identity matrix I is symmetric
Determinants • 2 x 2: det(A) = a11a22 - a12a21 • 3 x 3 (expanded along 1st column): det(A) = a11(a22a33 - a23a32) - a21(a12a33 - a13a32) + a31(a12a23 - a13a22) • n x n (expanded along kth column): det(A) = Σi (-1)^(i+k) aik Mik, where Mik is the determinant of the (n-1) x (n-1) sub-matrix obtained by deleting row i and column k • Properties: det(AB) = det(A)det(B), det(AT) = det(A), det(A-1) = 1/det(A) (when A is invertible)
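A quick numerical check of the 2 x 2 formula against NumPy's built-in determinant (the use of NumPy and the matrix values are assumptions of this sketch):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

det_formula = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]  # a11*a22 - a12*a21
det_numpy = np.linalg.det(A)

print(det_formula, det_numpy)  # both -2.0 (up to floating-point rounding)
```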
Matrix Inverse • The inverse of a matrix A, denoted as A-1, has the property: A A-1 = A-1A = I • A-1 exists only if det(A) ≠ 0 • Terminology • Singular matrix: A-1 does not exist • Ill-conditioned matrix: A is “close” to being singular
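A sketch of the inverse and of ill-conditioning, assuming NumPy (the condition-number check is an added illustration, not from the slides):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

A_inv = np.linalg.inv(A)                  # valid because det(A) = 10 != 0
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A A-1 = I

# An ill-conditioned matrix: nearly parallel rows, determinant close to zero
B = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
print(np.linalg.cond(B))                  # very large condition number (about 4e4)
```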
Matrix Inverse (cont’d) • Properties of the inverse: (A-1)-1 = A, (AB)-1 = B-1A-1, (AT)-1 = (A-1)T
Matrix trace • The trace of A, denoted tr(A), is the sum of its diagonal elements: tr(A) = a11 + a22 + . . . + ann • Properties: tr(A + B) = tr(A) + tr(B), tr(AB) = tr(BA), and tr(A) equals the sum of the eigenvalues of A
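A short numerical check of these properties, assuming NumPy and an illustrative matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

print(np.trace(A))                   # 5.0: sum of the diagonal elements
print(np.sum(np.linalg.eigvals(A)))  # also 5.0 (up to rounding): trace equals the sum of the eigenvalues
```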
Rank of matrix • Equal to the dimension of the largest square sub-matrix of A that has a non-zero determinant. • Example: a matrix whose largest square sub-matrix with non-zero determinant is 3 x 3 has rank 3
Rank of matrix (cont’d) • Alternative definition: the maximum number of linearly independent columns (or rows) of A. • Example: if one row of a 4 x 4 matrix is a linear combination of the others, then at most 3 rows are linearly independent, i.e., the rank is not 4!
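A minimal sketch of the rank computation with NumPy (the matrix is illustrative, not the slide's example):

```python
import numpy as np

# 4 x 4 matrix whose last row is the sum of the first two rows (a linear combination)
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 3.0],
              [2.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 3.0, 4.0]])

print(np.linalg.matrix_rank(A))  # 3, not 4: the rows are linearly dependent
```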
Eigenvalues and Eigenvectors • The vector v is an eigenvector of matrix A and λ is an eigenvalue of A if: Av = λv (assume v is non-zero) • Geometric interpretation: the linear transformation implied by A cannot change the direction of the eigenvectors v, only their magnitude.
Computing λ and v • To find the eigenvalues λ of a matrix A, find the roots of the characteristic polynomial: det(A - λI) = 0 • For each eigenvalue λ, the corresponding eigenvectors v are the non-zero solutions of (A - λI)v = 0 • Example:
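A minimal numerical sketch of this computation, assuming NumPy (the 2 x 2 matrix is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Roots of the characteristic polynomial det(A - lambda*I) = 0
eigenvalues, eigenvectors = np.linalg.eig(A)  # eigenvalues 3 and 1
print(eigenvalues)

# Verify A v = lambda v for each eigenvector (the columns of `eigenvectors`)
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))        # True, True
```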
Properties of λ and v • Eigenvalues and eigenvectors are only defined for square matrices. • Eigenvectors are not unique (e.g., if v is an eigenvector, so is kv). • Suppose λ1, λ2, ..., λn are the eigenvalues of A; then: det(A) = λ1λ2 . . . λn and tr(A) = λ1 + λ2 + . . . + λn
Matrix diagonalization • Given an n x n matrix A, find P such that: P-1AP = Λ, where Λ is diagonal • Solution: Set P = [v1 v2 . . . vn], where v1, v2, . . . , vn are the eigenvectors of A; then Λ = diag(λ1, λ2, . . . , λn), the diagonal matrix of the corresponding eigenvalues
Matrix diagonalization (cont’d) Example:
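The slide's worked example is not reproduced here; a minimal NumPy sketch of the same diagonalization step (the matrix values are assumptions for illustration) is:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)  # columns of P are the eigenvectors v1, v2
Lambda = np.diag(eigenvalues)      # diagonal matrix of the eigenvalues

# P-1 A P = Lambda
print(np.allclose(np.linalg.inv(P) @ A @ P, Lambda))  # True
```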
Matrix diagonalization (cont’d) • If A is diagonalizable, then the corresponding eigenvectors v1, v2, . . . , vn form a basis in Rn • If A is a symmetric matrix, its eigenvalues are real and its eigenvectors are orthogonal.
Are all n x n matrices diagonalizable? • An n x n matrix A is diagonalizable iff it has n linearly independent eigenvectors. • i.e., if P-1 exists, that is, rank(P)=n • Theorem: If the eigenvalues of A are all distinct, their corresponding eigenvectors are linearly independent (i.e., A is diagonalizable).
Matrix decomposition • If A is diagonalizable, then A can be decomposed as follows: A = PΛP-1
Matrix decomposition (cont’d) • Matrix decomposition can be simplified in the case of symmetric matrices (i.e., orthogonal eigenvectors): P-1 = PT, so A = PDPT = λ1v1v1T + λ2v2v2T + . . . + λnvnvnT
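A sketch of this symmetric-case decomposition, assuming NumPy (np.linalg.eigh returns orthonormal eigenvectors for symmetric input; the matrix values are illustrative):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])          # symmetric

eigenvalues, P = np.linalg.eigh(A)  # orthonormal eigenvectors as the columns of P
D = np.diag(eigenvalues)

print(np.allclose(np.linalg.inv(P), P.T))  # True: P-1 = PT
print(np.allclose(A, P @ D @ P.T))         # True: A = P D PT
```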
Covariance Matrix Decomposition • A covariance matrix Σ is symmetric, so it can be decomposed as Σ = ΦΛΦT, where the columns of Φ are the (orthonormal) eigenvectors of Σ, Λ is the diagonal matrix of its eigenvalues, and Φ-1 = ΦT
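A minimal sketch, assuming NumPy and a small synthetic data set (the data are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0],
                                          [1.2, 0.5]])  # correlated 2-D samples, one per row

Sigma = np.cov(X, rowvar=False)   # sample covariance matrix (symmetric)
Lam, Phi = np.linalg.eigh(Sigma)  # eigenvalues and orthonormal eigenvectors

print(np.allclose(Sigma, Phi @ np.diag(Lam) @ Phi.T))  # True: Sigma = Phi Lambda PhiT
print(np.allclose(np.linalg.inv(Phi), Phi.T))          # True: Phi-1 = PhiT
```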
Linear Transformations of random variables • If a random vector x has covariance matrix Σ = ΦΛΦT, then the transformed variable y = ΦT(x - μ) has covariance ΦTΣΦ = Λ, which is diagonal • The components of y are uncorrelated!
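A sketch of this decorrelation, continuing the assumed NumPy setup (synthetic, illustrative data):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0],
                                           [1.5, 0.5]])  # correlated samples, one per row

Sigma = np.cov(X, rowvar=False)
Lam, Phi = np.linalg.eigh(Sigma)

# y = PhiT (x - mu), applied to every sample (row) of X
Y = (X - X.mean(axis=0)) @ Phi

Sigma_y = np.cov(Y, rowvar=False)
print(np.round(Sigma_y, 6))                # (nearly) diagonal: the components of y are uncorrelated
print(np.allclose(Sigma_y, np.diag(Lam)))  # True up to numerical error
```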