
Numerical Methods


Presentation Transcript


  1. Numerical Methods: Solution of Equations

  2. Root Finding Methods • Bisection Method • Fixed-Point Method • Secant Method • Modified Secant Method • Successive Approximation • Newton-Raphson Method • Birge-Vieta Method

  3. Motivation • Many problems can be rewritten in a form such as: • f(x,y,z,…) = 0 • f(x,y,z,…) = g(s,q,…) • The second form reduces to the first by solving f − g = 0.

  4. Motivation • A root, r, of function f occurs when f(r) = 0. • For example: • f(x) = x² − 2x − 3 has two roots, at r = −1 and r = 3. • f(−1) = 1 + 2 − 3 = 0 • f(3) = 9 − 6 − 3 = 0 • We can also look at f in its factored form: f(x) = x² − 2x − 3 = (x + 1)(x − 3)

  5. Finding roots / solving equations • A general solution exists for equations such as ax² + bx + c = 0: the quadratic formula provides a quick answer to every quadratic equation. • However, no exact general solution (formula) exists for polynomial equations of degree greater than 4. • Transcendental equations involve trigonometric functions (sin, cos), logarithms, or exponentials. These equations cannot be reduced to the solution of a polynomial.
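As a quick worked check of the formula route (added here, reusing slide 4's polynomial): for f(x) = x² − 2x − 3 the quadratic formula x = (−b ± √(b² − 4ac)) / (2a) gives x = (2 ± √(4 + 12)) / 2 = (2 ± 4) / 2, i.e. x = 3 or x = −1, matching the factored form (x + 1)(x − 3).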

  6. Examples

  7. Problem-dependent decisions • Approximation: since we cannot have exactness, we specify the error tolerance we will accept. • Convergence: we also specify how long we are willing to wait for a solution. • Method: we choose a method that is easy to implement yet powerful and general enough. • Put a human in the loop: since no general procedure can find the roots of every complex equation, we let a human specify a neighbourhood of a solution.

  8. Bisection Method • Based on the fact that a continuous function changes sign as it passes through a root. • Suppose we know a function has a root between a and b (…and the function is continuous, …and there is only one root there): • f(a) * f(b) < 0 • Once we have the root bracketed, we simply evaluate the midpoint and halve the interval.

  9. Bisection Method • f(a) * f(b) < 0 • c = (a + b) / 2 • [Figure: bracketing interval a < c < b with f(a) > 0, f(c) > 0, f(b) < 0]

  10. Bisection Method • Guaranteed to converge to a root if one exists within the bracket. • a = c • [Figure: the new bracket after the update, with f(a) > 0, f(c) < 0, f(b) < 0]

  11. Bisection method… • Check that a solution lies between a and b: F(a) * F(b) < 0 ? • Try the midpoint m: compute F(m). • If |F(m)| < ϵ, select m as your approximate solution. • Otherwise, if F(m) is of opposite sign to F(a), that is, if F(a) * F(m) < 0, then b = m. • Else a = m.

  12. Stop Conditions • 1. The number of iterations reaches a preset maximum • 2. |f(x₀)| ≤ ϵ • 3. |x − x₀| ≤ ϵ

  13. Bisection Method • Simple algorithm:

    Given: a and b, such that f(a)*f(b) < 0
    Given: error tolerance, err

    c = (a+b)/2.0;             // Find the midpoint
    While( |f(c)| > err ) {
        if( f(a)*f(c) < 0 )    // root in the left half
            b = c;
        else                   // root in the right half
            a = c;
        c = (a+b)/2.0;         // Find the new midpoint
    }
    return c;
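The pseudocode above maps almost directly onto C. Below is a minimal runnable sketch (not from the original deck): the test function f, the bracket [2, 4], and the iteration cap are illustrative choices, and the loop applies all three stop conditions from slide 12.

    #include <stdio.h>
    #include <math.h>

    /* Example function: f(x) = x^2 - 2x - 3, roots at x = -1 and x = 3. */
    double f(double x) { return x * x - 2.0 * x - 3.0; }

    /* Bisection: assumes fn is continuous on [a, b] and fn(a)*fn(b) < 0.
       Stops on |fn(c)| <= err (condition 2), interval width <= err
       (condition 3), or max_iter iterations (condition 1).            */
    double bisect(double (*fn)(double), double a, double b,
                  double err, int max_iter) {
        double c = 0.5 * (a + b);
        for (int i = 0; i < max_iter; ++i) {
            c = 0.5 * (a + b);                  /* midpoint */
            if (fabs(fn(c)) <= err || (b - a) <= err)
                break;
            if (fn(a) * fn(c) < 0.0)
                b = c;                          /* root in the left half  */
            else
                a = c;                          /* root in the right half */
        }
        return c;
    }

    int main(void) {
        double r = bisect(f, 2.0, 4.0, 1e-6, 100);  /* brackets the root at 3 */
        printf("root ~= %.6f\n", r);
        return 0;
    }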

  14. Square root program • Computing √c means solving x² = c, i.e., finding the positive root of F(x) = x² − c. • The (positive) square root function is continuous and has a single solution. • Example: for c = 4, F(x) = x² − 4.
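A minimal sketch of such a square-root program (my illustration, not the deck's code; the bracket [0, max(1, c)] is chosen because it always contains √c for c ≥ 0):

    #include <stdio.h>
    #include <math.h>

    static double c_target;                          /* value whose root we want */
    double g(double x) { return x * x - c_target; }  /* F(x) = x^2 - c           */

    /* Bisection on [0, max(1, c)]: sqrt(c) <= 1 when c < 1, else sqrt(c) <= c. */
    double my_sqrt(double c) {
        c_target = c;
        double a = 0.0, b = (c > 1.0) ? c : 1.0;
        while (b - a > 1e-9) {
            double m = 0.5 * (a + b);
            if (g(a) * g(m) <= 0.0) b = m;  /* root in [a, m] */
            else                    a = m;  /* root in [m, b] */
        }
        return 0.5 * (a + b);
    }

    int main(void) {
        printf("sqrt(4)   ~= %.6f\n", my_sqrt(4.0));   /* expect 2.000000 */
        printf("sqrt(0.5) ~= %.6f\n", my_sqrt(0.5));   /* expect 0.707107 */
        return 0;
    }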

  15. Example: bisection iteration
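The slide's iteration table did not survive extraction; as an illustrative reconstruction, here are the first bisection steps assuming F(x) = x² − 4 on [0, 3] (my choice of example):

    k    a        b         c         F(c)
    1    0        3         1.5       -1.75
    2    1.5      3         2.25      +1.0625
    3    1.5      2.25      1.875     -0.484375
    4    1.875    2.25      2.0625    +0.25390625
    5    1.875    2.0625    1.96875   -0.12402...

Each step halves the bracket, closing in on the root x = 2.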

  16. Remarks • Convergence • Guaranteed once a nontrivial bracketing interval is found • Convergence Rate • A quantitative measure of how fast the algorithm approaches the root • An important characteristic for comparing algorithms

  17. We can compute exactly how many iterations are needed for a given error bound. • The error is usually measured by the width of the current interval (i.e., the distance between a and b). The width of the interval at each iteration is the width of the starting interval divided by 2 for each iteration, because each iteration cuts the interval in half. If we let ϵ be the error we want to achieve and n be the number of iterations, we get the following:

  18. Convergence Rate of Bisection • Let L0 = length of the initial interval. • After k iterations, the length of the interval is Lk = L0 / 2^k. • The algorithm stops when Lk ≤ ϵ, i.e., after k ≥ log2(L0 / ϵ) iterations. • Plug in some values: L0 = 1 and ϵ = 10⁻⁶ require k = 20 iterations. • This is quite slow, compared to the other methods… • (Here ϵ has the same meaning as in the stop conditions: the acceptable error.)

  19. How to get an initial (nontrivial) interval [a, b]? • Hint from the physical problem • For a polynomial equation, the following theorem (Cauchy's bound) is applicable: all roots (real and complex) of the polynomial f(x) = a_n x^n + a_(n-1) x^(n-1) + … + a_1 x + a_0 satisfy the bound: |x| ≤ 1 + max(|a_0|, |a_1|, …, |a_(n-1)|) / |a_n|

  20. Example (complex roots) • Take a cubic with roots −1.5251 and 2.2626 ± 0.8844i (consistent with f(x) = x³ − 3x² − x + 9). • The roots are bounded by |x| ≤ 1 + 9/1 = 10. • Hence, the real roots are in [−10, 10].
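A small sketch of the bound in code (my illustration; cauchy_bound and the example coefficients are assumptions matching the roots quoted above):

    #include <stdio.h>
    #include <math.h>

    /* Cauchy's bound: every root of a_n x^n + ... + a_1 x + a_0 satisfies
       |x| <= 1 + max(|a_0|, ..., |a_(n-1)|) / |a_n|.                     */
    double cauchy_bound(const double *a, int n) {   /* a[0..n], a[n] != 0 */
        double m = 0.0;
        for (int i = 0; i < n; ++i)
            if (fabs(a[i]) > m) m = fabs(a[i]);
        return 1.0 + m / fabs(a[n]);
    }

    int main(void) {
        double a[] = { 9.0, -1.0, -3.0, 1.0 };      /* x^3 - 3x^2 - x + 9 */
        printf("|roots| <= %.1f\n", cauchy_bound(a, 3));  /* prints 10.0 */
        return 0;
    }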

  21. Problem with Bisection • Although it is guaranteed to converge under its assumptions, • and although we can predict in advance the number of iterations required for a desired accuracy: (b − a) / 2^n < ϵ → n > log2((b − a) / ϵ) • it is too slow! Computer graphics uses square roots to compute distances and can't spend 15-30 iterations on every one! • We want more like 1 or 2, equivalent to an ordinary math operation.

  22. Examples • Locate the first nontrivial root of sin x = x³ using the bisection method with the initial interval from 0.5 to 1. Perform the computation until the error falls below 2%. • Determine the real root of f(x) = 5x³ − 5x² + 6x − 2 using the bisection method. Employ initial guesses of x_l = 0 and x_u = 1, and iterate until the estimated error ϵ_a falls below a level of ϵ_s = 15%.
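A sketch of how these exercises could be set up in code (my illustration; bisect_rel is a hypothetical helper using the usual approximate relative error ϵ_a = |(c_new − c_old) / c_new| · 100%):

    #include <stdio.h>
    #include <math.h>

    double f1(double x) { return sin(x) - x * x * x; }         /* sin x = x^3          */
    double f2(double x) { return 5*x*x*x - 5*x*x + 6*x - 2; }  /* 5x^3 - 5x^2 + 6x - 2 */

    /* Bisection that stops when the approximate relative error (in percent)
       falls below eps_pct. Assumes fn(a)*fn(b) < 0 and nonzero midpoints.  */
    double bisect_rel(double (*fn)(double), double a, double b, double eps_pct) {
        double c = 0.5 * (a + b);
        for (;;) {
            if (fn(a) * fn(c) < 0.0) b = c;  /* root in left half  */
            else                     a = c;  /* root in right half */
            double c_new = 0.5 * (a + b);
            double ea = fabs((c_new - c) / c_new) * 100.0;
            c = c_new;
            if (ea < eps_pct) return c;
        }
    }

    int main(void) {
        printf("sin x = x^3          : x ~= %.4f\n", bisect_rel(f1, 0.5, 1.0, 2.0));
        printf("5x^3 - 5x^2 + 6x - 2 : x ~= %.4f\n", bisect_rel(f2, 0.0, 1.0, 15.0));
        return 0;
    }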

  23. Homework
