
Ill-Posedness and Regularization of  Linear Operators (1 lecture)


Presentation Transcript


  1. Ill-Posedness and Regularization of Linear Operators (1 lecture)
  • Singular value decomposition (SVD) in finite-dimensional spaces
  • Least-squares solution; Moore-Penrose pseudo-inverse
  • Geometry of a linear inverse
  • Ill-posed and ill-conditioned problems
  • Tikhonov regularization; Truncated SVD
  • SVD of compact operators

  2. Basics of linear operators in function spaces (Appendix B of [RB1])
  Let $\mathcal{H}$ and $\mathcal{K}$ be Hilbert spaces (finite or infinite dimensional).
  • An operator $A$ from $\mathcal{H}$ to $\mathcal{K}$ is a mapping that assigns to each $f \in D(A) \subseteq \mathcal{H}$ (domain) an element $Af \in \mathcal{K}$ (range).
  • $A$ is defined everywhere if $D(A) = \mathcal{H}$; $A$ is an operator on $\mathcal{H}$ if $\mathcal{K} = \mathcal{H}$.
  • Linear operators: $A(\alpha f + \beta h) = \alpha Af + \beta Ah$; we write $A : \mathcal{H} \to \mathcal{K}$.
  • Null space of a linear operator: $N(A) = \{f : Af = 0\}$ (it is a subspace).

  3. Basics of linear operators in function spaces (Appendix B of [RB1])
  • $A$ is bounded if there exists a constant $M$ such that $\|Af\| \le M\|f\|$ for all $f$.
  • Norm of $A$: $\|A\| = \sup_{f \ne 0} \|Af\| / \|f\|$.
  • Adjoint operator: $A^*$ is the unique operator such that $\langle Af, g\rangle = \langle f, A^* g\rangle$. Examples: if $A$ is a real matrix, $A^* = A^T$; if $A$ is a complex matrix, $A^* = A^H$.
  Note:
  • A linear operator $A$ is continuous iff it is bounded.
  • The adjoint operator depends on the inner product.

  4. Eigenvalues and eigenvectors of symmetric matrices
  $\mathbb{R}^n$ is equipped with the standard Euclidean inner product $\langle x, y\rangle = x^T y$.
  $A \in \mathbb{R}^{n \times n}$ symmetric (self-adjoint): $A = A^T$.
  • Eigen-equation: $A u_i = \lambda_i u_i$.
  • The eigenvalues $\lambda_i$ are real.
  • The eigenvectors $u_i$ may always be chosen to form an orthonormal basis.
  Let $U = [u_1, \dots, u_n]$ ($U$ is a unitary matrix, $U^T U = U U^T = I$).
  Eigen-equation in terms of $U$: $AU = U\Lambda$, i.e., $A = U \Lambda U^T$ with $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$.

  5. Spectral representation of symmetric matrix $A$
  $A = \sum_{i} \lambda_i\, u_i u_i^T$.
  Action of a real symmetric matrix on an input vector $f$: $Af = \sum_{i} \lambda_i \langle f, u_i\rangle\, u_i$.
  • $u_i^T f$ projects the input vector along $u_i$.
  • $A$ synthesizes $Af$ by the linear combination of the eigenvectors $u_i$ weighted by $\lambda_i \langle f, u_i\rangle$.
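Not part of the original slides: a minimal NumPy sketch checking the spectral representation and the action $Af = \sum_i \lambda_i \langle f, u_i\rangle u_i$; the small symmetric matrix is a made-up example.

```python
import numpy as np

# Hypothetical 3x3 symmetric matrix, used only for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Eigendecomposition A = U diag(lam) U^T (eigh returns an orthonormal U for symmetric A).
lam, U = np.linalg.eigh(A)

# Spectral representation: A = sum_i lam_i u_i u_i^T
A_rebuilt = sum(lam[i] * np.outer(U[:, i], U[:, i]) for i in range(3))
assert np.allclose(A, A_rebuilt)

# Action on a vector f: A f = sum_i lam_i <f, u_i> u_i
f = np.array([1.0, -1.0, 2.0])
Af = sum(lam[i] * (U[:, i] @ f) * U[:, i] for i in range(3))
assert np.allclose(Af, A @ f)
```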

  6. Functions of a symmetric matrix
  From $A = U \Lambda U^T$ and the following properties of a unitary matrix ($U^T U = U U^T = I$), it follows that:
  1. $A^k = U \Lambda^k U^T$.
  2. If $h(A)$ is a power series, $h(A) = U\, h(\Lambda)\, U^T$.
  3. If $A$ is non-singular, $A^{-1} = U \Lambda^{-1} U^T$.
  4. If $\lambda_i \ge 0$ ($A$ is positive semi-definite, PSD), we can define $A^{1/2} = U \Lambda^{1/2} U^T$.
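As an illustrative sketch (the PSD matrix below is a made-up example), properties 3 and 4 computed through the eigendecomposition:

```python
import numpy as np

# Hypothetical symmetric positive definite matrix for illustration.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam, U = np.linalg.eigh(A)

# Property 3: inverse via the eigendecomposition, A^{-1} = U diag(1/lam) U^T
A_inv = U @ np.diag(1.0 / lam) @ U.T
assert np.allclose(A_inv, np.linalg.inv(A))

# Property 4: square root of a PSD matrix, A^{1/2} = U diag(sqrt(lam)) U^T
A_half = U @ np.diag(np.sqrt(lam)) @ U.T
assert np.allclose(A_half @ A_half, A)
```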

  7. Singular value decomposition of a real (complex) rectangular matrix
  $\mathbb{R}^n$ and $\mathbb{R}^m$ are equipped with the standard Euclidean inner product.
  • If $m = n$ but $A$ is not self-adjoint ($A \ne A^T$), the eigendecomposition does not have the nice properties of self-adjoint matrices. The cyclic (circulant) matrices are an exception.
  • If $m \ne n$, the eigenvalue problem is meaningless.
  The singular value decomposition provides a generalization of the self-adjoint spectral decomposition.

  8. Singular value decomposition
  $\mathbb{R}^n$ and $\mathbb{R}^m$ are equipped with the standard Euclidean inner products.
  $A = U \Sigma V^T$, where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are isometric (orthogonal) and $\Sigma$ carries $\sigma_1, \dots, \sigma_p$ on its diagonal, $p = \min(m, n)$.
  • Columns of $U$: left singular vectors.
  • Columns of $V$: right singular vectors.
  • $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_p \ge 0$: singular values.
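A short NumPy check of the decomposition on a random (hypothetical) $4 \times 3$ matrix:

```python
import numpy as np

# Hypothetical 4x3 matrix (m > n) for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# Full SVD: A = U S V^T, with orthogonal U (4x4), V (3x3) and singular values s (length 3).
U, s, Vt = np.linalg.svd(A)
S = np.zeros((4, 3))
S[:3, :3] = np.diag(s)
assert np.allclose(A, U @ S @ Vt)

# Singular values come out in nonincreasing order.
assert np.all(np.diff(s) <= 0)
```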

  9. Singular value decomposition: consequences
  • Matrix norms: $\|A\|_2 = \sigma_1$, $\|A\|_F = \big(\sum_i \sigma_i^2\big)^{1/2}$.
  • Range and null-space of $A$: $R(A) = \mathrm{span}\{u_i : \sigma_i > 0\}$, $N(A) = \mathrm{span}\{v_i : \sigma_i = 0\}$.
  • Range and null-space of $A^T$: $R(A^T) = \mathrm{span}\{v_i : \sigma_i > 0\}$, $N(A^T) = \mathrm{span}\{u_i : \sigma_i = 0\}$.
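These consequences can be verified numerically; the rank-2 matrix below is a made-up example:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical rank-deficient 5x4 matrix: rank 2 by construction.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

U, s, Vt = np.linalg.svd(A)
tol = 1e-10

# Spectral norm ||A||_2 = sigma_1 and Frobenius norm ||A||_F = sqrt(sum sigma_i^2)
assert np.isclose(np.linalg.norm(A, 2), s[0])
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2)))

# R(A) is spanned by the u_i with sigma_i > 0; N(A) by the v_i with sigma_i = 0.
r = np.sum(s > tol)        # numerical rank (= 2 here)
null_basis = Vt[r:].T      # columns span N(A)
assert np.allclose(A @ null_basis, 0)
```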

  10. Singular value decomposition
  Action of the matrix $A = U \Sigma V^T$ on an input vector $f$: $Af = \sum_i \sigma_i \langle f, v_i\rangle\, u_i$.
  • $v_i^T f$ projects the input vector along $v_i$.
  • $A$ synthesizes $Af$ by the linear combination of the left singular vectors $u_i$ weighted by $\sigma_i \langle f, v_i\rangle$.

  11. Singular value decomposition: illustration

  12. Inversion methods:
  a) $A$ is invertible: $f = A^{-1} g$.
  b) $A$ is not invertible: resort to a generalized (least-squares) solution.

  13. Least-squares approach
  Decompose the data into orthogonal components: $g = g_{R(A)} + g_{R(A)^\perp}$, with $g_{R(A)} \in R(A)$ and $g_{R(A)^\perp} \in R(A)^\perp$.
  A least-squares solution minimizes the data misfit, $\hat f \in \arg\min_f \|Af - g\|^2$, i.e., $A\hat f = g_{R(A)}$.

  14. Generalized inverse
  $f^\dagger = A^\dagger g$ is the least-squares solution of minimum norm, also called the generalized solution.

  15. Moore-Penrose pseudo-inverse and minimum-norm solution
  • Moore-Penrose pseudo-inverse ($r = n \le m$, full column rank): $A^\dagger = (A^T A)^{-1} A^T$ (least-squares solution).
  • Minimum-norm solution ($r = m < n$, full row rank): $A^\dagger = A^T (A A^T)^{-1}$.
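A numerical check of the two closed forms against `np.linalg.pinv`; the random full-rank matrices are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

# Full column rank (r = n <= m): A^+ = (A^T A)^{-1} A^T  (least-squares solution)
A = rng.standard_normal((6, 3))
Ap = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(Ap, np.linalg.pinv(A))

# Full row rank (r = m < n): A^+ = A^T (A A^T)^{-1}  (minimum-norm solution)
B = rng.standard_normal((3, 6))
Bp = B.T @ np.linalg.inv(B @ B.T)
assert np.allclose(Bp, np.linalg.pinv(B))
```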

  16. Moore-Penrose pseudo-inverse: a variational point of view
  Minimization of the observed data misfit: $\min_f \|Af - g\|^2$.
  Normal equations: $A^T A f = A^T g$.
  If $A^T A$ is invertible, $\hat f = (A^T A)^{-1} A^T g = A^\dagger g$.
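A sketch comparing the normal-equations solution with `lstsq` and the pseudo-inverse on a hypothetical overdetermined system:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 3))      # hypothetical overdetermined system
g = rng.standard_normal(8)

# Normal equations A^T A f = A^T g (A^T A invertible for full column rank)
f_normal = np.linalg.solve(A.T @ A, A.T @ g)

# Same minimizer of ||Af - g||^2 via lstsq and via the pseudo-inverse
f_lstsq = np.linalg.lstsq(A, g, rcond=None)[0]
assert np.allclose(f_normal, f_lstsq)
assert np.allclose(f_normal, np.linalg.pinv(A) @ g)
```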

  17. Effect of noise
  Let $g = Af + n$ with $\|n\| \le \varepsilon$. The boundary of the set $\{f : \|Af - g\| \le \varepsilon\}$ is an ellipse (ellipsoid) centered at $A^\dagger g$ with principal axes aligned with the right singular vectors $v_k$. The length of the $k$-th principal semi-axis is $\varepsilon / \sigma_k$: small singular values amplify the noise.
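The $\varepsilon/\sigma_k$ amplification can be seen numerically; the square random matrix below is only a stand-in:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
U, s, Vt = np.linalg.svd(A)

f_true = rng.standard_normal(5)
g = A @ f_true

# Perturb the data by eps along the k-th left singular vector u_k;
# the generalized solution moves by eps/sigma_k along v_k.
eps, k = 1e-3, 4                     # k = 4 picks the smallest singular value
g_noisy = g + eps * U[:, k]
f_noisy = np.linalg.pinv(A) @ g_noisy
print(np.linalg.norm(f_noisy - f_true), eps / s[k])   # the two numbers agree
```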

  18. Classification of the linear operators
  • If $n \ne m$, or if $N(A) \ne \{0\}$, the equation $Af = g$ is ill-posed.
  • In any case, "small" singular values are sources of instabilities. Often, the smaller the singular values, the more oscillating the corresponding singular vectors (high frequencies).
  Regularization: shrink/threshold the large values of $1/\sigma_i$, i.e., multiply $1/\sigma_i$ by a regularizer (filter) function $w_\lambda(\sigma_i)$ such that $w_\lambda(\sigma)/\sigma$ stays bounded as $\sigma \to 0$.

  19. Regularization
  Regularization by shrinking/thresholding the spectrum of $A$: $f_\lambda = \sum_i \frac{w_\lambda(\sigma_i)}{\sigma_i}\, \langle g, u_i\rangle\, v_i$, such that
  1) $w_\lambda(\sigma)/\sigma$ stays bounded as $\sigma \to 0$;
  2) the larger singular values are retained ($w_\lambda(\sigma) \approx 1$ for large $\sigma$).
  • Truncated SVD (TSVD): $w_\lambda(\sigma) = 1$ if $\sigma$ is above a threshold, $0$ otherwise.
  • Tikhonov (Wiener) regularization: $w_\lambda(\sigma) = \sigma^2 / (\sigma^2 + \lambda)$.
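A sketch of both filters as spectral weights applied to the pseudo-inverse. The TSVD threshold and the Tikhonov weight $\sigma^2/(\sigma^2+\lambda)$ follow the standard definitions and may be parameterized differently in the slides:

```python
import numpy as np

def tsvd_filter(s, thresh):
    """TSVD: keep 1/sigma_i only for the larger singular values, zero out the rest."""
    return (s >= thresh).astype(float)

def tikhonov_filter(s, lam):
    """Tikhonov/Wiener: w(sigma) = sigma^2 / (sigma^2 + lam), shrinks small sigma smoothly."""
    return s**2 / (s**2 + lam)

def regularized_solution(A, g, weights_fn, param):
    """f = sum_i w(sigma_i)/sigma_i <g, u_i> v_i  (spectral filtering of the pseudo-inverse)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    w = weights_fn(s, param)
    coeff = np.divide(w, s, out=np.zeros_like(s), where=s > 0)
    return Vt.T @ (coeff * (U.T @ g))
```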

  20. Tikhonov regularization: variational formulation
  Regularization by shrinking/thresholding the spectrum of $A$: TSVD and Tikhonov.
  Let us write the singular value decomposition of $A$ as $A = U \Sigma V^T$, with $U$ and $V$ unitary.

  21. Thus, the Tikhonov regularized solution is given by $f_\lambda = (A^T A + \lambda I)^{-1} A^T g$, which is the solution of the variational problem $\min_f \|Af - g\|^2 + \lambda \|f\|^2$.
  • Tikhonov regularization for any Hilbert spaces (see Appendix E of [RB1]).
  • Family of quadratic regularizers: $\min_f \|Af - g\|^2 + \lambda \|Lf\|^2$. Does the SVD still play a role?
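A sketch verifying that the closed form and the SVD-filtered form coincide (random data and $\lambda = 0.1$ are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((10, 6))
g = rng.standard_normal(10)
lam = 0.1

# Closed form f_lam = (A^T A + lam I)^{-1} A^T g  (Euler-Lagrange / normal equations)
f_closed = np.linalg.solve(A.T @ A + lam * np.eye(6), A.T @ g)

# Equivalent spectral form f_lam = sum_i sigma_i/(sigma_i^2 + lam) <g, u_i> v_i
U, s, Vt = np.linalg.svd(A, full_matrices=False)
f_svd = Vt.T @ ((s / (s**2 + lam)) * (U.T @ g))
assert np.allclose(f_closed, f_svd)
```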

  22. Singular value decomposition: illustration

  23. Singular value decomposition: illustration

  24. Singular value decomposition: illustration

  25. Singular value decomposition: illustration (L-curve)

  26. Medium/large systems
  • For medium/large systems, the SVD is impracticable.
  • The optimization problem $\min_f \|Af - g\|^2 + \lambda \|f\|^2$, with the Euler-Lagrange equation $(A^T A + \lambda I) f = A^T g$, is solved by resorting to iterative methods that depend only on the operators $A$ and $A^T$.
  • Example: Landweber iterations.
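A minimal sketch of the Landweber iteration, assuming the classical update $f_{k+1} = f_k + \tau A^T(g - Af_k)$ with a step size below $2/\sigma_1^2$ (this is not the course's Matlab script):

```python
import numpy as np

def landweber(A, g, n_iter=500, tau=None, f0=None):
    """Landweber iterations f_{k+1} = f_k + tau * A^T (g - A f_k); only A and A^T are needed."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 2/sigma_1^2 for convergence
    f = np.zeros(A.shape[1]) if f0 is None else f0.copy()
    for _ in range(n_iter):
        f = f + tau * (A.T @ (g - A @ f))
    return f
```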

  27. Singular value decomposition in infinite-dimensional spaces
  A singular system for a compact linear operator $A : \mathcal{H} \to \mathcal{K}$ ($\mathcal{H}$, $\mathcal{K}$ are Hilbert spaces) is a countable set of triples $\{(v_j, u_j, \sigma_j)\}_j$ with the following properties:
  1. The right singular vectors $v_j$ form an orthonormal basis for $N(A)^\perp$.
  2. The left singular vectors $u_j$ form an orthonormal basis for the closure of $R(A)$.
  3. The singular values $\sigma_j$ are positive real numbers and are in nonincreasing order, $\sigma_1 \ge \sigma_2 \ge \dots > 0$.
  4. For each $j$, $A v_j = \sigma_j u_j$ and $A^* u_j = \sigma_j v_j$.
  5. If $R(A)$ is infinite dimensional, $\lim_{j \to \infty} \sigma_j = 0$.
  6. $A$ has the representation $Af = \sum_j \sigma_j \langle f, v_j\rangle\, u_j$.

  28. Examples of compact operators
  1. Any linear operator for which $R(A)$ is finite dimensional is compact.
  2. The diagonal operator on $\ell^2$: $(Af)_j = \sigma_j f_j$ with $\sigma_j \to 0$.
  3. The Fredholm integral operator of the first kind on $L^2(\Omega)$ (the space of real-valued square integrable functions on $\Omega$, a Hilbert space): $(Af)(x) = \int_\Omega k(x, y)\, f(y)\, dy$.
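A sketch of item 3: discretizing a first-kind Fredholm operator with a hypothetical Gaussian kernel shows the rapidly decaying singular values behind the ill-conditioning:

```python
import numpy as np

# Discretize (Af)(x) = \int k(x, y) f(y) dy on [0, 1] with a hypothetical Gaussian kernel;
# the midpoint rule on n points gives the matrix A.
n = 100
x = (np.arange(n) + 0.5) / n
k = lambda x, y: np.exp(-((x - y) ** 2) / (2 * 0.05 ** 2))
A = k(x[:, None], x[None, :]) / n

# The singular values decay rapidly toward zero: the discrete problem is severely
# ill-conditioned, mirroring the ill-posedness of the compact operator.
s = np.linalg.svd(A, compute_uv=False)
print(s[0], s[10], s[30])   # spans many orders of magnitude
```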

  29. Compact linear operators in infinite-dimensional spaces are ill-posed
  Let $A : \mathcal{H} \to \mathcal{K}$ be a compact linear operator, where $\mathcal{H}$ and $\mathcal{K}$ are infinite-dimensional Hilbert spaces.
  1. If $R(A)$ is infinite dimensional, then the operator equation $Af = g$ is ill-posed in the sense that the solution is not stable.
  2. If $R(A)$ is finite dimensional, then the solution is not unique.

  30. Summary: SVD/least-squares based solutions
  • Least-squares approach: $\hat f \in \arg\min_f \|Af - g\|^2$.
  • Minimum-norm solution: $f^\dagger = A^\dagger g = \sum_{\sigma_i > 0} \frac{1}{\sigma_i} \langle g, u_i\rangle\, v_i$.

  31. Summary: Regularized solutions
  • Truncated SVD (TSVD): keep only the components associated with the larger singular values.
  • Tikhonov (Wiener) regularization: $f_\lambda = (A^T A + \lambda I)^{-1} A^T g$, which is the solution of the variational problem $\min_f \|Af - g\|^2 + \lambda \|f\|^2$, where $\lambda \|f\|^2$ is the regularizer (penalizing function).

  32. Summary: Medium/large systems with quadratic regularization
  • For medium/large systems, the SVD is impracticable (periodic convolution operators are an important exception).
  • The optimization problem $\min_f \|Af - g\|^2 + \lambda \|Lf\|^2$ (quadratic regularizer), with the Euler-Lagrange equation $(A^T A + \lambda L^T L) f = A^T g$, is solved by resorting to iterative methods that depend only on the operators $A$, $A^T$ (and $L$, $L^T$).
  • Example: Landweber iterations.
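A sketch of a matrix-free solve of the Euler-Lagrange equation with conjugate gradients (SciPy is an added dependency; $L = I$, the sizes, and $\lambda$ are made-up), using only products with $A$ and $A^T$:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(6)
m, n, lam = 400, 300, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for a large operator
g = rng.standard_normal(m)

# Solve (A^T A + lam I) f = A^T g with conjugate gradients.
# Only products with A and A^T are needed; no SVD or explicit matrix inverse.
H = LinearOperator((n, n), matvec=lambda f: A.T @ (A @ f) + lam * f)
f_reg, info = cg(H, A.T @ g)
assert info == 0   # converged
```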

  33. Summary: Non-quadratic regularization
  • Example: a discontinuity-preserving regularizer penalizes oscillatory solutions while preserving sharp transitions.

  34. Example: deconvolution of a step (figure: the matrix $A$)

  35. Example: deconvolution of a step

  36. Example: sparse reconstruction ($\ell_1$ norm). Figure: original data $f$; observed data $g$.

  37. Example: sparse reconstruction. Figure: regularized reconstruction vs. pseudo-inverse reconstruction.

  38. Example: sparse reconstruction ($\ell_1$ norm) via majorization-minimization (MM) optimization.
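The slide's MM scheme is not reproduced here; as a stand-in, a generic iterative shrinkage-thresholding (ISTA) sketch for the $\ell_2$-$\ell_1$ criterion, which is closely related:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, g, lam, n_iter=200):
    """Minimize ||A f - g||^2 / 2 + lam * ||f||_1 by iterative shrinkage-thresholding."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step size 1/sigma_1^2
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        f = soft_threshold(f + tau * (A.T @ (g - A @ f)), tau * lam)
    return f
```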

  39. Bibliography
  • [Ch. 9; RB1], [Ch. 2, Ch. 3; L1]
  Important topics
  • Majorization-Minimization: [PO1], [PO3]
  • Compressed Sensing: [PCS1]
  Matlab scripts
  • TSVD_regularization_1D.m
  • TSVD_Error_1D.m
  • step_deconvolution.m
  • l2_l1sparse_regression.m
