
Advanced Computer Graphics Spring 2014


Presentation Transcript


  1. Advanced Computer Graphics Spring 2014 K. H. Ko School of Mechatronics Gwangju Institute of Science and Technology

  2. Today’s Topics • Linear Algebra • SVD • Affine Algebra

  3. Singular Value Decomposition • The SVD is a highlight of linear algebra. • Typical Applications of SVD • Solving a system of linear equations • Compression of a signal, an image, etc. • The SVD can give an optimal low-rank approximation of a given matrix A. • Ex) Replace a 256 by 512 pixel image matrix by a matrix of rank one: a column times a row.

  4. Singular Value Decomposition • Overview of SVD • A is any m by n matrix, square or rectangular. • We will diagonalize it. Its row and column spaces are r-dimensional. • We choose special orthonormal bases v1,…,vr for the row space and u1,…,ur for the column space. • For those bases, we want each Avi to be in the direction of ui. • In matrix form, the equations Avi = σiui become AV = UΣ, or A = UΣVT. This is the SVD.
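
As a quick check of these relations, here is a minimal NumPy sketch (the 2 by 2 matrix is an arbitrary example, not one from the lecture):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])            # arbitrary example matrix

U, s, Vt = np.linalg.svd(A)           # s holds the singular values σ1 ≥ σ2 ≥ 0
Sigma = np.diag(s)                    # Σ as a diagonal matrix (square case)

print(np.allclose(A, U @ Sigma @ Vt)) # True: A = U Σ V^T

V = Vt.T
for i in range(2):                    # each A v_i lines up with σ_i u_i
    print(np.allclose(A @ V[:, i], s[i] * U[:, i]))  # True
```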

  5. Singular Value Decomposition • The Bases and the SVD • Start with a 2 by 2 matrix of rank 2. • This matrix A is invertible. • Its row space is the plane R2. • We want v1 and v2 to be perpendicular unit vectors, an orthonormal basis. • We also want Av1 and Av2 to be perpendicular. • Then the unit vectors u1 and u2 in the directions of Av1 and Av2 are orthonormal.

  6. Singular Value Decomposition • The Bases and the SVD • We are aiming for orthonormal bases that diagonalize A. • When the inputs are v1 and v2, the outputs are Av1 and Av2. We want those to line up with u1 and u2, respectively. • The basis vectors have to give Av1 = σ1u1 and also Av2 = σ2u2. • The singular values σ1 and σ2 are the lengths |Av1| and |Av2|.

  7. Singular Value Decomposition • The Bases and the SVD • With v1 and v2 as the columns of V, the two equations Av1 = σ1u1 and Av2 = σ2u2 combine into one matrix equation. • In matrix notation, that is AV = UΣ, or U-1AV = Σ, or UTAV = Σ since U is orthogonal. • Σ contains the singular values, which are different from the eigenvalues.

  8. Singular Value Decomposition • In SVD, U and V must be orthogonal matrices. • Orthonormal columns mean VTV = I, so VT = V-1; likewise UT = U-1. • This is the new factorization of A: orthogonal times diagonal times orthogonal.

  9. Singular Value Decomposition • There is a way to remove U and see V by itself: multiply AT times A. • ATA = (UΣVT)T(UΣVT) = VΣTΣVT • This becomes an ordinary diagonalization of the crucial symmetric matrix ATA, whose eigenvalues are σ1², σ2². The columns of V are the eigenvectors of ATA. This is how we find V.
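
A small sketch of this route to V, again with an arbitrary example matrix: the eigenvalues of ATA come out as the squared singular values.

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# A^T A is symmetric, so eigh applies; it returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(A.T @ A)

sigmas = np.sqrt(eigvals[::-1])       # reverse to match the σ1 ≥ σ2 ordering
_, s, _ = np.linalg.svd(A)
print(np.allclose(sigmas, s))         # True: eigenvalues of A^T A are σ_i^2
```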

  10. Singular Value Decomposition • Working Example

  11. Singular Value Decomposition • In many cases where Gaussian elimination and LU decomposition fail to give satisfactory results, SVD can diagnose for you precisely what the problem is. • In some cases, SVD gives you a useful numerical answer. • Although it is not necessarily “THE” answer.

  12. Singular Value Decomposition • Any M×N matrix A (M ≥ N) can be written as the product of an M×N column-orthogonal matrix U, an N×N diagonal matrix W with positive or zero elements (the singular values), and the transpose of an N×N orthogonal matrix V. • U and V are each orthogonal in the sense that their columns are orthonormal.

  13. Singular Value Decomposition • The decomposition can always be done, no matter how singular the matrix is. • In “Numerical Recipes in C”, there is a routine called “svdcmp” that performs SVD on an arbitrary matrix A, replacing it by U and giving back W and V separately.
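
Outside of Numerical Recipes, NumPy's np.linalg.svd does the same job; a sketch of the shapes described above, using full_matrices=False to get the M×N column-orthogonal U:

```python
import numpy as np

M, N = 6, 3
A = np.random.rand(M, N)

# Economy-size SVD: U is M×N with orthonormal columns, w holds the N
# singular values, Vt is the transpose of the N×N orthogonal V.
U, w, Vt = np.linalg.svd(A, full_matrices=False)
print(U.shape, w.shape, Vt.shape)            # (6, 3) (3,) (3, 3)
print(np.allclose(A, U @ np.diag(w) @ Vt))   # True, however singular A is
```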

  14. Singular Value Decomposition • If the matrix A is square, U, V and W are all square matrices of the same size. • Their inverses are trivial to compute. • U and V are orthogonal, so their inverses are equal to their transposes. • W is diagonal, so its inverse is the diagonal matrix whose elements are the reciprocals of the σj.
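
A sketch of this trivial inverse, A-1 = V·[diag(1/σj)]·UT, checked against np.linalg.inv on an arbitrary nonsingular example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                  # nonsingular example

U, w, Vt = np.linalg.svd(A)
A_inv = Vt.T @ np.diag(1.0 / w) @ U.T       # invert each factor, reverse order
print(np.allclose(A_inv, np.linalg.inv(A))) # True
```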

  15. Singular Value Decomposition • The only thing that can go wrong with this inverse computation is • for one of the σj’s to be zero, or • for it to be so small that its value is dominated by roundoff error and therefore unknowable. • The more of the singular values that have such problems, the more singular the matrix is. • So, SVD gives you a clear diagnosis of the situation.

  16. Singular Value Decomposition • Condition Number of a matrix • The ratio of the largest (in magnitude) of the σj’s to the smallest of the σj’s. • A matrix is singular if its condition number is infinite. • A matrix is ill-conditioned if its condition number is too large, that is, if its reciprocal approaches the machine’s floating-point precision.
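
A quick illustration on a nearly singular example matrix; NumPy's np.linalg.cond computes this same 2-norm ratio by default:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])       # nearly singular -> ill-conditioned

w = np.linalg.svd(A, compute_uv=False)      # singular values only
cond = w.max() / w.min()
print(cond)                                 # ~4e4, far from 1
print(np.isclose(cond, np.linalg.cond(A)))  # True
```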

  17. Singular Value Decomposition • For singular matrices, the concepts of nullspace and range are important. • Given A·x = b • If A is singular, there is some subspace of vectors x, called the nullspace, that is mapped to zero: A·x = 0. • The dimension of the nullspace (the number of linearly independent vectors x satisfying A·x = 0) is called the nullity of A.

  18. Singular Value Decomposition • For singular matrices, the concepts of nullspace and range are important. • Given A·x = b • There is also some subspace of vectors b that can be reached by A, i.e. for which there exists some x that is mapped there. • This subspace of b is called the range of A. • The dimension of the range is called the rank of A.

  19. Singular Value Decomposition • If A is nonsingular, then its range will be all of the N-dimensional vector space of right-hand sides b, so its rank is N. • If A is singular, then the rank will be less than N. • For an N×N matrix, the rank plus the nullity of the matrix equals N. • What has this to do with SVD?

  20. Singular Value Decomposition • SVD explicitly constructs orthonormal bases for the nullspace and range of a matrix. • The columns of U whose same-numbered elements σj are nonzero are an orthonormal set of basis vectors that span the range. • The columns of V whose same-numbered elements σj are zero are an orthonormal basis for the nullspace.
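
A sketch of reading both bases off the factors, using a deliberately rank-deficient example (its second row is twice the first); the zero threshold below is the conventional one, not something prescribed by the slides:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],      # = 2 × row 1, so rank(A) = 2
              [1.0, 0.0, 1.0]])

U, w, Vt = np.linalg.svd(A)
tol = w.max() * max(A.shape) * np.finfo(float).eps  # common zero threshold
rank = int(np.sum(w > tol))

range_basis = U[:, :rank]      # columns of U paired with nonzero σ_j
null_basis = Vt[rank:, :].T    # columns of V paired with zero σ_j

print(rank)                             # 2
print(np.allclose(A @ null_basis, 0.0)) # True: nullspace maps to zero
```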

  21. Singular Value Decomposition • When solving a set of simultaneous linear equations with a singular matrix A • SVD can solve the set of homogeneous equations, i.e. b = 0: any column of V whose corresponding σj is zero is a solution. • When the vector b on the right-hand side is not zero, the most important question is whether it lies in the range of A or not. • If it does, the singular set of equations does have a solution x -> in fact, more than one solution.

  22. Singular Value Decomposition • When the vector b on the right-hand side is not zero, the most important question is whether it lies in the range of A or not. • If it does, the singular set of equations does have a solution x -> in fact, more than one solution. • If we want to single out one particular member of this solution set, we may want to pick the one with the smallest length |x|². • Simply replace 1/σj by zero if σj = 0. • Then compute x = V·[diag(1/σj)]·(UT·b).
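
A sketch of this prescription as a small solver; the relative threshold used to decide which σj count as zero is an illustrative choice:

```python
import numpy as np

def svd_solve(A, b, rel_tol=1e-12):
    """Solve A x = b via SVD, replacing 1/σ_j by zero for (near-)zero σ_j."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    w_inv = np.where(w > rel_tol * w.max(), 1.0 / w, 0.0)
    return Vt.T @ (w_inv * (U.T @ b))       # x = V diag(1/σ_j) U^T b

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],       # singular: rank 2
              [1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0, 1.0])        # lies in the range of A

x = svd_solve(A, b)
print(np.allclose(A @ x, b))         # True; x is the shortest such solution
```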

  23. Singular Value Decomposition • If b is not in the range of the singular matrix A, then A·x = b has no solution. • But SVD can still be used to construct a “solution” vector x: it gives the closest possible solution in the least-squares sense.

  24. Singular Value Decomposition • Numerically, the far more common situation is that some of the singular values are very small BUT nonzero. • The matrix is then ill-conditioned. • The direct solution methods of LU decomposition or Gaussian elimination may actually give a formal solution to the problem (no pivot is exactly zero), • but algebraic cancellation due to the limited precision may make that formal solution a very poor approximation to the true solution.

  25. Singular Value Decomposition • In such cases, the solution vector x obtained by zeroing the small singular values and using x = V·[diag(1/σj)]·(UT·b) is very often better than the direct methods. • Zeroing a singular value corresponds to throwing away one linear combination of the equations. • But its contribution is small, so discarding it leads to a good approximation. • SVD cannot be applied blindly: you should decide what threshold is appropriate.

  26. Singular Value Decomposition • Even when you do not need to zero any singular values for computational reasons, you should at least take note of any that are unusually small. • Their corresponding columns in V are linear combinations of the x’s which are insensitive to your data.

  27. Factorization Based on Elimination • Triangular Factorization without row exchange: A = LDU • L : lower triangular • U : upper triangular • D : diagonal; its elements (the pivots) are nonzero. • These factors come directly from elimination. • The factorization succeeds only if the pivots are nonzero.

  28. Factorization Based on Elimination • Triangular Factorization with row exchange: PA = LDU • P : permutation matrix that reorders the rows to achieve nonzero pivots. • L : lower triangular • U : upper triangular • D : diagonal; its elements (the pivots) are nonzero.

  29. Factorization Based on Elimination • Reduction to echelon form PA = LU • Every rectangular matrix A can be changed by row operations into a matrix U that has zeros below its pivots. • The last m-r rows of U are entirely zero. • L: square matrix • D and U are combined into a single rectangular matrix with the same shape as A.

  30. Factorization Based on Eigenvalues • Diagonalization of A: A = SΛS-1 • Possible if A has a full set of n linearly independent eigenvectors. • They are the columns of S. • S-1AS is the diagonal matrix Λ of eigenvalues. • With ATA = AAT (A normal), the eigenvectors can be chosen orthonormal and S becomes an orthogonal Q. Namely A = QΛQ-1 = QΛQT.
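
A sketch of the diagonalization via np.linalg.eig, which returns the eigenvalues and the eigenvector matrix S (the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # eigenvalues 5 and 2

lam, S = np.linalg.eig(A)            # columns of S are the eigenvectors
print(np.allclose(A, S @ np.diag(lam) @ np.linalg.inv(S)))  # True: A = S Λ S^{-1}
```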

  31. Factorization Based on Eigenvalues • Jordan Form (Jordan Decomposition): A = MJM-1 • Every square matrix A is similar to a Jordan matrix J, with the eigenvalues Jii = λi on the diagonal. • J has one diagonal Jordan block (λi on its diagonal, 1’s on its superdiagonal) for each independent eigenvector.

  32. Factorization Based on ATA • Orthogonalization of the columns of A: A = QR • A must have independent columns a1, …, an. Q has orthonormal columns q1,…,qn.
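
A sketch using np.linalg.qr, which produces exactly this factorization (via Householder reflections rather than classical Gram-Schmidt):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])           # independent columns

Q, R = np.linalg.qr(A)               # Q: orthonormal columns, R: upper triangular
print(np.allclose(A, Q @ R))             # True
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q^T Q = I
```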

  33. Factorization Based on ATA • Singular Value Decomposition • Polar Decomposition: A = QB • A is split into an orthogonal matrix Q and a symmetric positive definite matrix B. • B is the positive definite square root of ATA. Q = AB-1
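
The polar factors follow directly from the SVD: with A = UΣVT, Q = UVT is orthogonal and B = VΣVT is the symmetric square root of ATA. A sketch on an arbitrary example:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                     # orthogonal factor
B = Vt.T @ np.diag(s) @ Vt     # symmetric positive definite sqrt of A^T A

print(np.allclose(A, Q @ B))              # True: A = Q B
print(np.allclose(Q.T @ Q, np.eye(2)))    # True: Q is orthogonal
print(np.allclose(B, B.T))                # True: B is symmetric
```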

  34. Factorization Based on ATA

  35. Affine Algebra • Linear Algebra is the study of vectors and vector spaces. • A vector was treated as a quantity with direction and magnitude. • Two vectors with the same direction and magnitude are considered the same, irrespective of their positions. • What if the location of the vector, namely where its initial point is, is very important? • In physics applications, sometimes the location matters.

  36. Affine Algebra • There needs to be a distinction between points and vectors -> The essence of affine algebra. • Let V be a vector space of dimension n. Let A be a set of elements that are called points.

  37. Affine Algebra • Let V be a vector space of dimension n. Let A be a set of elements that are called points. Then A is referred to as an n-dimensional affine space whenever the following conditions are met: • For each ordered pair of points P,Q∈A, there is a unique vector in V called the difference vector and denoted by Δ(P,Q) • For each point P∈A and v∈V, there is a unique point Q∈A such that v = Δ(P,Q). • For any three points P,Q,R∈A, it must be that Δ(P,Q)+Δ(Q,R)=Δ(P,R).

  38. Affine Algebra • From the formal definition for an affine space, we have • Δ(P,P) = 0. • Δ(P,Q) = -Δ(Q,P). • If Δ(P1,Q1) = Δ(P2,Q2), then Δ(P1,P2) = Δ(Q1,Q2).

  39. Affine Algebra: Coordinate Systems • Let A be an n-dimensional affine space. Let a fixed point O∈A be labeled as the origin. Let {v1,…,vn} be a basis for V. Then the set {O;v1,…,vn} is called an affine coordinate system. • The numbers (a1,…,an) such that Δ(O,P) = a1v1+…+anvn are called the affine coordinates of P relative to the specified coordinate system.

  40. Affine Algebra: Coordinate Systems • Change coordinates from one system to another. • {O1;u1,…,un} and {O2;v1,…,vn} for A. • A point P∈A has affine coordinates (a1,…,an) and (b1,…,bn). • The origin O2 has affine coordinates (c1,…,cn) in the first coordinate system.

  41. Affine Algebra: Coordinate Systems • The two bases are related by a change of basis transformation ui = Σmjivj.
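
Under these conventions the coordinates transform as b = M(a − c), where M is the matrix with entries mji and c holds the coordinates of O2 in the first system; this formula is derived from the definitions above rather than stated on the slide, and the numbers in the sketch are made up:

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [0.0, 2.0]])    # hypothetical matrix of the m_ji
a = np.array([3.0, 1.0])      # coordinates of P in {O1; u1, u2}
c = np.array([1.0, 1.0])      # coordinates of O2 in {O1; u1, u2}

b = M @ (a - c)               # coordinates of P in {O2; v1, v2}
print(b)                      # [2. 0.] for this example
```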

  42. Affine Algebra: Subspaces • Let A be an affine space. An affine subspace of A is a set A1⊆A such that V1 = {Δ(P,Q)∈V: P,Q ∈ A1} is a subspace of V.

  43. Affine Algebra: Transformations • Definition of Affine Transformations for Affine Spaces • Let A be an affine space with vector space V and vector difference operator ΔA. • Let B be an affine space with vector space W and vector difference operator ΔB. • An affine transformation is a function T: A->B such that • ΔA(P1,Q1)= ΔA(P2,Q2) implies that ΔB(T(P1),T(Q1))= ΔB(T(P2),T(Q2)). • The function L : V->W defined by L(ΔA(P,Q))= ΔB(T(P),T(Q)) is a linear transformation.

  44. Affine Algebra: Transformations • If OA is selected as the origin for A and if OB = T(OA) is selected as the origin for B, then the affine transformation is of the form • T(OA+x) = T(OA) + L(x) = OB + L(x) • If B = A and W = V, set b = Δ(OA,OB). Then T(OA + x) = OA + b + L(x). • For a fixed origin OA and for a specific matrix representation M of the linear transformation L, we have y = Mx + b. • When M is an orthogonal matrix (a rotation), this is a rigid motion.

  45. Affine Algebra: Barycentric Coordinates • An operation on two points: a weighted average of two points • R = (1-t)P + tQ. R is said to be a barycentric combination of P and Q with barycentric coordinates 1-t and t. • The sum of the barycentric coordinates is one, a necessary condition for a pair of numbers to be barycentric coordinates. • For 0 ≤ t ≤ 1, R is a point on the line segment connecting P and Q.

  46. Affine Algebra: Barycentric Coordinates • Triangles • Given three noncollinear points P, Q and R, with u = Q-P and v = R-P, P + su + tv = P + s(Q-P) + t(R-P) is a point. • Then B = (1-s-t)P + sQ + tR is a barycentric combination of P, Q and R with barycentric coordinates c1 = 1-s-t, c2 = s, c3 = t. • The coordinates cannot all be simultaneously negative, since the sum of three negative numbers cannot be 1. • Barycentric coordinates are a useful tool for describing the location of a point in a triangle, as in the sketch below.
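
A sketch of recovering (c1, c2, c3) for a given point B by solving the 2 by 2 system B − P = s(Q − P) + t(R − P); the points here are illustrative:

```python
import numpy as np

P = np.array([0.0, 0.0])
Q = np.array([1.0, 0.0])
R = np.array([0.0, 1.0])
B = np.array([0.25, 0.25])    # point inside the triangle

# Solve B - P = s (Q - P) + t (R - P) for (s, t)
s, t = np.linalg.solve(np.column_stack([Q - P, R - P]), B - P)
c1, c2, c3 = 1.0 - s - t, s, t

print(c1, c2, c3)                                 # 0.5 0.25 0.25, summing to 1
print(np.allclose(c1 * P + c2 * Q + c3 * R, B))   # True
```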

  47. Affine Algebra: Barycentric Coordinates • Tetrahedra • Given four noncoplanar points Pi (0 ≤ i ≤ 3), a barycentric combination of the points is B = (1-c1-c2-c3)P0 + c1P1 + c2P2 + c3P3. • Simplices • A simplex is formed by n+1 points Pi, 0 ≤ i ≤ n, such that the set of vectors {Pi-P0} is linearly independent. • A barycentric combination of the points is B = (1-c1-…-cn)P0 + c1P1 + … + cnPn.

  48. Q & A?
