
MATH 685/ CSI 700/ OR 682 Lecture Notes


Presentation Transcript


  1. MATH 685/ CSI 700/ OR 682 Lecture Notes Lecture 4. Least squares

  2. Method of least squares • Measurement errors are inevitable in observational and experimental sciences • Errors can be smoothed out by averaging over many cases, i.e., taking more measurements than are strictly necessary to determine parameters of system • Resulting system is overdetermined, so usually there is no exact solution • In effect, higher dimensional data are projected into lower dimensional space to suppress irrelevant detail • Such projection is most conveniently accomplished by method of least squares
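
  A small, self-contained illustration (not from the slides) of how an overdetermined fitting problem is solved in the least-squares sense; the data, the line model y ≈ c0 + c1·t, and the use of NumPy's lstsq are all illustrative choices:

```python
import numpy as np

# Fit y ≈ c0 + c1*t to 20 noisy measurements: more equations than the 2 unknowns,
# so the 20x2 system has no exact solution and is solved in the least-squares sense.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 + 3.0 * t + 0.05 * rng.standard_normal(t.size)

A = np.column_stack([np.ones_like(t), t])          # overdetermined design matrix
x, res, rank, sv = np.linalg.lstsq(A, y, rcond=None)

print("fitted coefficients:", x)                   # close to [2, 3]
print("sum of squared residuals:", res)
```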

  3. Linear least squares

  4. Data fitting

  5. Data fitting

  6. Example

  7. Example

  8. Example

  9. Existence/Uniqueness

  10. Normal Equations

  11. Orthogonality

  12. Orthogonality

  13. Orthogonal Projector

  14. Pseudoinverse

  15. Sensitivity and Conditioning

  16. Sensitivity and Conditioning

  17. Solving normal equations

  18. Example

  19. Example

  20. Shortcomings

  21. Augmented system method

  22. Augmented system method

  23. Orthogonal Transformations

  24. Triangular Least Squares

  25. Triangular Least Squares

  26. QR Factorization

  27. Orthogonal Bases

  28. Computing QR factorization • To compute QR factorization of m × n matrix A, with m > n, we annihilate subdiagonal entries of successive columns of A, eventually reaching upper triangular form • Similar to LU factorization by Gaussian elimination, but uses orthogonal transformations instead of elementary elimination matrices • Possible methods include • Householder transformations • Givens rotations • Gram-Schmidt orthogonalization
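
  As a rough sketch of the Householder approach listed above (illustrative code, not the lecture's implementation; the function and variable names are made up), the subdiagonal of each column is annihilated by a reflector H = I − 2vvᵀ:

```python
import numpy as np

def householder_qr(A):
    """Minimal Householder QR sketch: returns Q (m x m) and R (m x n) with A = Q @ R."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        # Choose the sign to avoid cancellation when forming v = x + sign(x0)*||x||*e1
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue                       # column already zero below the diagonal
        v /= nv
        # Apply H = I - 2 v v^T to the trailing submatrix and accumulate Q
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

A = np.random.default_rng(1).standard_normal((5, 3))
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))
```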

  29. Householder Transformation

  30. Example

  31. Householder QR factorization

  32. Householder QR factorization

  33. Householder QR factorization • For solving linear least squares problem, product Q of Householder transformations need not be formed explicitly • R can be stored in upper triangle of array initially containing A • Householder vectors v can be stored in (now zero) lower triangular portion of A (almost) • Householder transformations are most easily applied in this form anyway
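
  A minimal sketch of this idea, assuming the same reflector construction as in the earlier sketch: each Householder vector is applied to the right-hand side b as it is generated and then discarded, so Q is never formed (for simplicity the code does not actually pack the vectors into the lower triangle of A):

```python
import numpy as np

def householder_ls(A, b):
    """Solve min ||Ax - b||_2 by applying each Householder reflector to b on the fly;
    Q is never formed explicitly (illustrative sketch)."""
    R = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    m, n = R.shape
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue
        v /= nv
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        b[k:] -= 2.0 * v * (v @ b[k:])     # apply the same reflector to b
    # Back-substitute with the leading n x n triangle of R
    return np.linalg.solve(np.triu(R[:n, :n]), b[:n])

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
print(np.allclose(householder_ls(A, b), np.linalg.lstsq(A, b, rcond=None)[0]))
```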

  34. Example

  35. Example

  36. Example

  37. Example

  38. Givens Rotations

  39. Givens Rotations

  40. Example

  41. Givens QR factorization

  42. Givens QR factorization • Straightforward implementation of Givens method requires about 50% more work than Householder method, and also requires more storage, since each rotation requires two numbers, c and s, to define it • These disadvantages can be overcome, but doing so requires a more complicated implementation • Givens can be advantageous for computing QR factorization when many entries of the matrix are already zero, since those annihilations can then be skipped
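
  For concreteness, a hedged sketch of Givens QR (illustrative only; the helper names givens and givens_qr are made up): each rotation is defined by the pair (c, s) and zeroes one subdiagonal entry at a time, so rotations against entries that are already zero could simply be skipped:

```python
import numpy as np

def givens(a, b):
    """Return c, s so that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def givens_qr(A):
    """QR by Givens rotations: annihilate subdiagonal entries one at a time (sketch)."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):      # zero out R[i, j] against R[i-1, j]
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], j:] = G @ R[[i - 1, i], j:]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R

A = np.random.default_rng(3).standard_normal((4, 3))
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0.0))
```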

  43. Gram-Schmidt orthogonalization

  44. Gram-Schmidt algorithm

  45. Modified Gram-Schmidt

  46. Modified Gram-Schmidt QR factorization

  47. Rank Deficiency • If rank(A) < n, then QR factorization still exists, but yields singular upper triangular factor R, and multiple vectors x give minimum residual norm • Common practice selects minimum residual solution x having smallest norm • Can be computed by QR factorization with column pivoting or by singular value decomposition (SVD) • Rank of matrix is often not clear-cut in practice, so a relative tolerance is used to determine rank
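
  An illustrative sketch of the SVD route (not necessarily how the lecture computes it; the name min_norm_ls and the tolerance value are made-up choices): singular values below a relative tolerance are treated as zero when deciding the numerical rank, and the minimum-norm solution is assembled from the remaining singular triplets:

```python
import numpy as np

def min_norm_ls(A, b, rtol=1e-12):
    """Minimum-norm least-squares solution via the SVD, with a relative tolerance
    used to decide the numerical rank (illustrative sketch)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    tol = rtol * s[0]                      # singular values below tol treated as zero
    r = int(np.sum(s > tol))               # numerical rank
    # Project b onto the first r left singular vectors, scale, map back
    return Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])

# Rank-deficient example: the third column is the sum of the first two
rng = np.random.default_rng(4)
B = rng.standard_normal((8, 2))
A = np.column_stack([B, B[:, 0] + B[:, 1]])
b = rng.standard_normal(8)

x = min_norm_ls(A, b)
print("numerical rank:", np.linalg.matrix_rank(A))
print(np.allclose(x, np.linalg.pinv(A) @ b))   # agrees with the pseudoinverse solution
```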

  48. Near Rank Deficiency

  49. QR with Column Pivoting

  50. QR with Column Pivoting
