
Computational Methods in Physics PHYS 3437


Presentation Transcript


  1. Computational Methods in Physics PHYS 3437 Dr Rob Thacker Dept of Astronomy & Physics (MM-301C) thacker@ap.smu.ca

  2. Today’s Lecture • Interpolation & Approximation III • LU decompositions for matrix algebra • Least squares polynomial fitting • Fitting to data • Fitting to functions – the Hilbert Matrix • Use of orthogonal polynomials to simplify the fitting process

  3. The standard approach • For a linear system of n equations with n unknowns, Gaussian elimination is the standard technique • As we all know, G.E. is an effective but time-consuming algorithm – applying it to an n×n matrix takes ~n³ operations (i.e. O(n³)) • While we can't change the order of the calculation, we can reduce the overall number of calculations

  4. The LU Decomposition method • A non-singular matrix can be written as the product of a lower triangular matrix and an upper triangular matrix, A = LU • The simple structure of L & U means that it is actually fairly straightforward to relate their entries to those of A • We can then find the lij and uij fairly easily
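A sketch of the factor structure, assuming the Crout convention (unit diagonal in U) that the formulas on the following slides imply; the opposite (Doolittle) choice, with the 1's on the diagonal of L, is equally common:

    L = \begin{pmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{pmatrix},
    \qquad
    U = \begin{pmatrix} 1 & u_{12} & \cdots & u_{1n} \\ 0 & 1 & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}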

  5. By examination • If A=LU, multiply out L and U and equate entries with those of A • Equating the first column gives all the li1, i=1,…,n • Equating the first row then uses l11 to give all the u1j, j=2,…,n • Equating the second column uses the previous steps to give the li2, i=2,…,n • and so on

  6. Alternate from columns to rows • Thus expanding each column of A allows us to equate a column of values in L • Similarly, expanding a row of A allows us to equate a row of values in U • However, most entries of A involve both U and L entries, so we must alternate between columns and rows to ensure each expression contains only 1 unknown

  7. General Formulae (Exercise) • Provided we consider column values on or below the diagonal then we have for the k-th column • Similarly, for row values to the right of the main diagonal then we have for the k-th row • Convince yourself of these two relationships…
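The two relationships are the standard Crout recurrences; a sketch, assuming the unit-diagonal-U convention above:

    l_{ik} = a_{ik} - \sum_{j=1}^{k-1} l_{ij} u_{jk}, \qquad i = k, k+1, \ldots, n

    u_{kj} = \frac{1}{l_{kk}} \left( a_{kj} - \sum_{i=1}^{k-1} l_{ki} u_{ij} \right), \qquad j = k+1, \ldots, n

Each right-hand side involves only l's and u's from earlier columns and rows, which is why alternating between columns and rows leaves one unknown per equation.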

  8. Utilizing the LU decomposition • To solve Ax = b, write A = LU so that L(Ux) = b, and define z = Ux • We need to solve Lz = b for z first • Once z is known we can then solve Ux = z for x • Sounds like a lot of work! Why is this useful? • The triangular matrices are already in a Gauss-Jordan eliminated form, so both systems can be solved by substitution alone

  9. Consider solving Lz = b • If we write out the equations, the first one involves only z1, the second only z1 and z2, and so on • This is solved via “forward substitution” – z1 = b1/l11 first, then each subsequent zi follows from the zj already known – exactly analogous to the back substitution we saw for tridiagonal solvers

  10. Next solve Ux = z • Again, if we write out the equations, the last one involves only xn, the one above it only xn and xn-1, and so on • We solve by back substitution again, starting from the last equation and working upwards
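A minimal Python sketch of the two substitution sweeps (not the course's own code); it assumes L and U are stored as full n×n arrays with the triangular structure described above:

    import numpy as np

    def forward_substitution(L, b):
        """Solve L z = b for a lower triangular matrix L."""
        n = len(b)
        z = np.zeros(n)
        for i in range(n):
            z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
        return z

    def back_substitution(U, z):
        """Solve U x = z for an upper triangular matrix U."""
        n = len(z)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (z[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
        return x

    # Once A = LU is known, each right-hand side costs only ~n^2 operations:
    # x = back_substitution(U, forward_substitution(L, b))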

  11. Least squares fit of a polynomial • We’ve already seen how to fit an n-th degree polynomial to n+1 points (xi,yi), i=0,…,n • Lagrange interpolation polynomial • For noisy data we would not want a fit to pass through all the points • Instead we want to find a best-fit polynomial of order m • The sum of the squares of the residuals is given below
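Written out, with p(x) = c0 + c1x + … + cmx^m the fitting polynomial evaluated at the data points:

    S = \sum_{i} \left[ y_i - \sum_{j=0}^{m} c_j x_i^{\,j} \right]^2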

  12. Minimize the “error” • We next minimize S by taking partial derivatives w.r.t. the coefficients cj and setting them to zero • e.g. for c0, ∂S/∂c0 = 0 gives Σi yi = Σj cj Σi xi^j

  13. More examples • For c1 we find Σi xi yi = Σj cj Σi xi^(j+1) • and so on… Taking partials w.r.t. the m+1 coefficients & equating to zero gives m+1 equations in m+1 unknowns (the cj, j=0,…,m)

  14. Matrix form • These are the so-called “normal equations” in analogy with normals to the level set of a function • Can be solved using the LU decomposition method we just discussed to find the coefficients cj
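A short Python sketch of setting up and solving the normal equations for an order-m fit (illustrative only; numpy's generic solver stands in here for the LU solve described above):

    import numpy as np

    def lsq_poly_fit(x, y, m):
        """Least-squares fit of an order-m polynomial to points (x_i, y_i).

        Builds the (m+1)x(m+1) normal equations A c = b, with
        A[j,k] = sum_i x_i^(j+k) and b[j] = sum_i y_i x_i^j, then solves for c.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        powers = x[:, None] ** np.arange(m + 1)   # n x (m+1) array of x_i^j
        A = powers.T @ powers                     # normal-equation matrix
        b = powers.T @ y                          # right-hand side
        return np.linalg.solve(A, b)              # coefficients c_0 ... c_m

    # Example: noisy quadratic data
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 2.0, 50)
    y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.standard_normal(x.size)
    print(lsq_poly_fit(x, y, 2))                  # roughly [1, 2, -3]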

  15. How do we choose m? • Higher order fits are not always better • Many dependent variables inherently follow some underlying “law” • If the underlying relationship were quadratic, fitting a cubic would be largely pointless • However, we don’t always know the underlying relationship • So we need to look at the data…

  16. Strategy • Plot the data • If linear looks OK then start with m=1 • If a curve is visible try m=2 – if it looks higher order, try a log-log graph to estimate m • [Figure: two example datasets – one where a linear fit looks reasonable (start with m=1), one with a strong suggestion of an underlying curve (start with m=2)]

  17. Strategy cont. • After each fit evaluate Sm and plot the residuals ri = p(xi) − yi • If Sm ~ Sm−1 then this indicates that the mth (or (m−1)th) order is about as good as you can do • Plotting the residuals will also give intuition about the quality of the fit • Residuals should scatter around zero with no systematic “bias”

  18. Residuals • [Figure: two residual plots – in one, the residuals scatter about zero with a large number of sign changes (example of a pretty good fit); in the other, the residuals have a clear positive bias with fewer sign changes, suggesting you need an (m+1)th order fit]

  19. Extending the approach to fitting functions • We can use the method developed to fit a continuous function f(x) over an interval [a,b] • Rather than a sum of squares we now have an integral that we need to minimize • As an example, let f(x) = sin(πx) and the limits be [0,1], and find the best quadratic (c0 + c1x + c2x²) fit • We need to minimize the integral given below
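Explicitly, the quantity to be minimized is

    S = \int_0^1 \left[ \sin(\pi x) - \left( c_0 + c_1 x + c_2 x^2 \right) \right]^2 dx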

  20. Compute derivatives • We work in the same way as the earlier fit, setting ∂S/∂ci = 0 for i = 0, 1, 2 • All the integrals are tractable, and in matrix form we get the system sketched below:
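A sketch of the resulting system: the left-hand side contains only integrals of powers of x, while f(x) = sin(πx) enters only through the right-hand side:

    \begin{pmatrix} 1 & 1/2 & 1/3 \\ 1/2 & 1/3 & 1/4 \\ 1/3 & 1/4 & 1/5 \end{pmatrix}
    \begin{pmatrix} c_0 \\ c_1 \\ c_2 \end{pmatrix}
    =
    \begin{pmatrix} 2/\pi \\ 1/\pi \\ (\pi^2 - 4)/\pi^3 \end{pmatrix}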

  21. Comparing to Taylor expansion • If we solve the system on the previous page we derive a best-fit quadratic • We can compare this to the Taylor expansion to second order about x = 1/2, sin(πx) ≈ 1 − (π²/2)(x − 1/2)² • Notice that the least squares fit and the Taylor expansion are different

  22. • The Taylor expansion is more accurate around the expansion point but becomes progressively worse further away • The least squares fit is more accurate globally
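A quick numerical check of this comparison (a sketch; it solves the 3×3 system above with numpy rather than an LU routine):

    import numpy as np

    # Normal equations for fitting c0 + c1*x + c2*x^2 to sin(pi*x) on [0, 1]
    H = np.array([[1,   1/2, 1/3],
                  [1/2, 1/3, 1/4],
                  [1/3, 1/4, 1/5]])
    rhs = np.array([2/np.pi, 1/np.pi, (np.pi**2 - 4)/np.pi**3])
    c = np.linalg.solve(H, rhs)

    lsq    = lambda x: c[0] + c[1]*x + c[2]*x**2
    taylor = lambda x: 1 - 0.5*np.pi**2*(x - 0.5)**2   # 2nd-order expansion about x = 1/2

    for x in (0.5, 0.25, 0.0):
        f = np.sin(np.pi*x)
        print(f"x={x:4.2f}  error(lsq)={abs(lsq(x)-f):.4f}  error(Taylor)={abs(taylor(x)-f):.4f}")

Near x = 1/2 the Taylor error is smaller, but at x = 0 the least squares fit wins by a wide margin.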

  23. Hilbert Matrices • The specific form of the coefficient matrix is called a “Hilbert Matrix” and arises because of the polynomial fit • Key point: f(x) only affects the right hand side • For a polynomial fit of degree m the Hilbert Matrix has dimension m+1 • As m increases, the determinant rapidly approaches zero • This creates significant numerical difficulties
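For a fit over [0,1] the matrix entries come from the same integrals of powers of x seen above:

    H_{jk} = \int_0^1 x^{\,j+k}\, dx = \frac{1}{j+k+1}, \qquad j,k = 0, 1, \ldots, m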

  24. Hilbert Matrices: problems & solutions • The Hilbert Matrix is a classic example of an ill-conditioned matrix • By m=4 (i.e. a 5×5 Hilbert matrix) the small determinant causes difficulty in single precision • By m=8 (i.e. a 9×9 Hilbert matrix) there are problems even at double precision (real*8) • We can improve upon the fitting method by choosing to fit a sum of polynomials, Σi ai pi(x), where the pi are ith degree polynomials
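A small Python sketch of how quickly the conditioning deteriorates; once the condition number approaches the inverse of the machine epsilon (~1e7 in single precision, ~1e16 in double), solves become unreliable:

    import numpy as np

    def hilbert(n):
        """Return the n x n Hilbert matrix H[j,k] = 1/(j+k+1)."""
        j, k = np.indices((n, n))
        return 1.0 / (j + k + 1)

    for m in range(2, 11):
        H = hilbert(m + 1)
        print(f"m = {m:2d}  cond(H) = {np.linalg.cond(H):.2e}")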

  25. Fitting over basis polynomials • We’ll postpone the nature of the pi(x) for a moment… • We now need to find the ai by minimizing the least squares function S, now the integral of [f(x) − Σi ai pi(x)]² over [a,b]

  26. Choosing the pi(x) • In general, choosing the polynomials appropriately will ensure that the coefficient matrix is not a Hilbert Matrix • We can again use an LU solve to find the ai and hence determine the fitting polynomial • However, if we had a system of polynomials for which the integral of pi(x)pj(x) over [a,b] vanishes whenever i ≠ j, the system would be trivial to solve! • Such polynomials are said to be orthogonal • Note that the integral is directly analogous to a dot product of two N-component vectors – the integral is the N→∞ (continuum) limit!

  27. Simplifying the normal equations • If we have orthogonal polynomials, then the normal equations decouple: the off-diagonal terms vanish, leaving one equation per coefficient (see below)
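A sketch of the decoupled equations: with orthogonality, minimizing S gives

    a_i \int_a^b p_i(x)^2 \, dx = \int_a^b f(x)\, p_i(x)\, dx, \qquad i = 0, 1, \ldots, m

so each ai is determined independently and no ill-conditioned matrix ever appears.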

  28. Further simplification • In the case that the polynomials are orthonormal, so that the integral of pi(x)pj(x) over [a,b] is 1 when i = j and 0 otherwise • Then the previous equation reduces to ai = ∫ f(x) pi(x) dx • There is no need to solve a matrix in this case!
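A Python sketch of this idea using Legendre polynomials on [−1,1]; these are orthogonal rather than orthonormal, so each coefficient carries a normalisation factor (2i+1)/2, and the function and degree here are just examples:

    import numpy as np
    from numpy.polynomial import legendre as leg

    # Expand f(x) = sin(pi*x) on [-1, 1] over Legendre polynomials P_i(x).
    # Orthogonality means each a_i comes from a single integral -- no matrix solve.
    f = lambda x: np.sin(np.pi * x)
    x, w = leg.leggauss(50)                      # Gauss-Legendre quadrature nodes/weights

    m = 5
    a = np.zeros(m + 1)
    for i in range(m + 1):
        P_i = leg.Legendre.basis(i)(x)           # P_i evaluated at the quadrature nodes
        a[i] = (2*i + 1) / 2 * np.sum(w * f(x) * P_i)

    fit = leg.Legendre(a)                        # the least-squares polynomial
    print("max |fit - f| on the nodes:", np.max(np.abs(fit(x) - f(x))))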

  29. Orthogonality definition: weighting functions • In practice it is often useful to include a weighting function, w(x), in the definition of the orthogonality requirement • Orthonormality is then the condition that the integral of w(x)pi(x)pj(x) over [a,b] is 1 for i = j and 0 otherwise • We can include the same weighting function in the definition of the least squares formula for S

  30. Examples of Orthogonal (Orthonormal) functions See Arfken
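For reference, some of the standard families and the intervals/weights over which they are orthogonal (a representative list; see Arfken for the full table):

    Polynomials         Interval        Weight w(x)
    Legendre Pn(x)      [-1, 1]         1
    Chebyshev Tn(x)     [-1, 1]         (1 - x²)^(-1/2)
    Laguerre Ln(x)      [0, ∞)          e^(-x)
    Hermite Hn(x)       (-∞, ∞)         e^(-x²)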

  31. Summary • LU decomposition is a simple approach to solving linear systems that uses both forward and backward substitution • Least squares fitting requires setting up the normal equations to minimize the sum of the squares of the residuals • Fitting functions with a simple polynomial results in ill-conditioned Hilbert matrices • More general fitting using a sum over orthogonal/orthonormal polynomials can reduce the fitting problem to a series of integrals

  32. Next Lecture • Introduction to numerical integration
