
Chapter 7 Numerical Differentiation and Integration


Presentation Transcript


  1. Chapter 7 Numerical Differentiation and Integration

  2. INTRODUCTION DIFFERENTIATION USING DIFFERENCE OPERATORS DIFFERENTIATION USING INTERPOLATION RICHARDSON’S EXTRAPOLATION METHOD NUMERICAL INTEGRATION

  3. NEWTON-COTES INTEGRATION FORMULAE THE TRAPEZOIDAL RULE (COMPOSITE FORM) SIMPSON’S RULES (COMPOSITE FORM) ROMBERG’S INTEGRATION DOUBLE INTEGRATION

  4. DIFFERENTIATION USING INTERPOLATION If the given tabular function y(x) is reasonably well approximated by a polynomial Pn(x) of degree n, it is hoped that the derivative P′n(x) will also satisfactorily approximate the corresponding derivative of y(x).

  5. However, even if Pn(x) and y(x) coincide at the tabular points, their derivatives or slopes may substantially differ at these points as is illustrated in the Figure below:

  6. [Figure: graphs of Pn(x) and y(x) meeting at the tabular point xi, showing the deviation of their derivatives (slopes) there.]

  7. For higher order derivatives, the deviations may be even worse. However, we can estimate the error involved in such an approximation.

  8. For non-equidistant tabular pairs (xi, yi), i = 0, …, n, we can fit the data using either Lagrange’s interpolating polynomial or Newton’s divided difference interpolating polynomial. In view of economy of computation, we prefer the latter.

  9. Thus, recall Newton’s divided difference interpolating polynomial fitting this data,
$$P_n(x) = y[x_0] + (x - x_0)\,y[x_0, x_1] + (x - x_0)(x - x_1)\,y[x_0, x_1, x_2] + \cdots + (x - x_0)(x - x_1)\cdots(x - x_{n-1})\,y[x_0, x_1, \ldots, x_n].$$

  10. Assuming that Pn(x) is a good approximation to y(x), the polynomial approximation to y′(x) can be obtained by differentiating Pn(x). Using the product rule of differentiation, the derivative of the products in Pn(x) can be seen as follows:

  11. Thus, y′(x) is approximated by P′n(x), which is given by
$$P_n'(x) = y[x_0, x_1] + \bigl[(x - x_0) + (x - x_1)\bigr]\,y[x_0, x_1, x_2] + \bigl[(x - x_1)(x - x_2) + (x - x_0)(x - x_2) + (x - x_0)(x - x_1)\bigr]\,y[x_0, x_1, x_2, x_3] + \cdots$$
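To make the construction concrete, here is a minimal Python sketch that builds the divided-difference coefficients and evaluates Pn(x) and P′n(x) term by term with the product rule; the function names and the NumPy dependency are illustrative choices, not anything prescribed by the slides.

```python
import numpy as np

def divided_differences(x, y):
    """Return the Newton coefficients y[x0], y[x0,x1], ..., y[x0,...,xn]."""
    x = np.asarray(x, dtype=float)
    coef = np.asarray(y, dtype=float).copy()
    for j in range(1, len(x)):
        # After this pass, coef[i] holds y[x_{i-j}, ..., x_i] for i >= j.
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:-j])
    return coef

def newton_value_and_derivative(t, x, coef):
    """Evaluate P_n(t) and P_n'(t) from the Newton form via the product rule."""
    p, dp = coef[0], 0.0
    prod, dprod = 1.0, 0.0                       # running product and its derivative
    for k in range(1, len(coef)):
        dprod = dprod * (t - x[k - 1]) + prod    # d/dt of prod*(t - x_{k-1})
        prod *= (t - x[k - 1])
        p += coef[k] * prod
        dp += coef[k] * dprod
    return p, dp

# Illustrative check on a smooth function: the derivative of the interpolant
# of sin(x) on six nodes should be close to cos(x) inside the interval.
nodes = np.linspace(0.0, 1.0, 6)
c = divided_differences(nodes, np.sin(nodes))
value, slope = newton_value_and_derivative(0.4, nodes, c)
print(slope, np.cos(0.4))
```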

  12. The error estimate in this approximation can be seen from the following. We have seen that if y(x) is approximated by Pn(x), the error estimate is
$$y(x) - P_n(x) = \frac{\Pi(x)}{(n+1)!}\,y^{(n+1)}(\xi(x)), \qquad \Pi(x) = (x - x_0)(x - x_1)\cdots(x - x_n).$$

  13. Its derivative with respect to x can be written as
$$y'(x) - P_n'(x) = \frac{\Pi'(x)}{(n+1)!}\,y^{(n+1)}(\xi(x)) + \frac{\Pi(x)}{(n+1)!}\,\frac{d}{dx}\Bigl[y^{(n+1)}(\xi(x))\Bigr].$$

  14. Since ξ(x) depends on x in an unknown way, the derivative $\frac{d}{dx}\bigl[y^{(n+1)}(\xi(x))\bigr]$

  15. cannot be evaluated. However, at any of the tabular points x = xi, Π(x) vanishes and the difficult term drops out. Thus, the error term in the last equation at the tabular point x = xi simplifies to
$$y'(x_i) - P_n'(x_i) = \frac{\Pi'(x_i)}{(n+1)!}\,y^{(n+1)}(\xi)$$

  16. for some ξ in the interval I defined by the smallest and largest of x, x0, x1, …, xn, and
$$\Pi'(x_i) = \prod_{\substack{j=0 \\ j \ne i}}^{n} (x_i - x_j).$$

  17. The error in the r-th derivative at the tabular points can indeed be expressed analogously. To understand this method better, we consider the following example.

  18. Example: Find y′ and y″ from the following data using the method based on divided differences:

  19. The given data:

      x:      0.15     0.21     0.23     0.27     0.32     0.35
      y(x):   0.1761   0.3222   0.3617   0.4314   0.5051   0.5441

  20. Solution: We first construct the divided difference table for the given data, as shown below:

  21. Divided difference table:

      x        y         1st divided   2nd divided   3rd divided
                         difference    difference    difference
      0.15     0.1761
                          2.4350
      0.21     0.3222                  –5.7500
                          1.9750                      15.6250
      0.23     0.3617                  –3.8750
                          1.7425                       8.1064
      0.27     0.4314                  –2.9833
                          1.4740                       6.7358
      0.32     0.5051                  –2.1750
                          1.3000
      0.35     0.5441

  22. Using the divided difference formula for the first derivative obtained above,
$$y'(x) \approx y[x_0, x_1] + \bigl[(x - x_0) + (x - x_1)\bigr]\,y[x_0, x_1, x_2] + \bigl[(x - x_1)(x - x_2) + (x - x_0)(x - x_2) + (x - x_0)(x - x_1)\bigr]\,y[x_0, x_1, x_2, x_3],$$

  23. and, differentiating this quadratic polynomial once more for the second derivative, we have
$$y''(x) \approx 2\,y[x_0, x_1, x_2] + 2\bigl[(x - x_0) + (x - x_1) + (x - x_2)\bigr]\,y[x_0, x_1, x_2, x_3].$$

  24. Thus, using the first, second and third divided differences from the table, the first of the above equations yields the required value of y′.

  25. Therefore, the required value of y′ follows. Similarly, we can show the corresponding value of y″.
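To cross-check the tabulated differences, the short Python sketch below rebuilds the full divided-difference table from the given data and then evaluates the derivative formula truncated after the third difference. The evaluation point used on the original slides is not recoverable from this transcript, so x = 0.23 (a tabular point) is chosen here purely for illustration.

```python
import numpy as np

x = np.array([0.15, 0.21, 0.23, 0.27, 0.32, 0.35])
y = np.array([0.1761, 0.3222, 0.3617, 0.4314, 0.5051, 0.5441])

# Full triangular table: table[j][i] = y[x_i, ..., x_{i+j}]
table = [list(y)]
for j in range(1, len(x)):
    prev = table[-1]
    table.append([(prev[i + 1] - prev[i]) / (x[i + j] - x[i])
                  for i in range(len(prev) - 1)])

print([round(v, 4) for v in table[1]])  # 1st: [2.435, 1.975, 1.7425, 1.474, 1.3]
print([round(v, 4) for v in table[2]])  # 2nd: [-5.75, -3.875, -2.9833, -2.175]
print([round(v, 4) for v in table[3]])  # 3rd: [15.625, 8.1061, 6.7361] (the slide's
                                        # 8.1064 and 6.7358 round intermediate values)

# Derivative formula truncated after the third difference, evaluated at an
# illustrative point t = 0.23 (not necessarily the point used on the slides).
a1, a2, a3 = table[1][0], table[2][0], table[3][0]
t = 0.23
dp = (a1
      + ((t - x[0]) + (t - x[1])) * a2
      + ((t - x[1]) * (t - x[2]) + (t - x[0]) * (t - x[2])
         + (t - x[0]) * (t - x[1])) * a3)
print(round(dp, 4))   # ≈ 1.885
```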

  26. RICHARDSON’S EXTRAPOLATION METHOD. To improve the accuracy of the derivative of a function computed with an arbitrarily selected value of h, Richardson’s extrapolation method is often employed in practice, in the following manner:

  27. Suppose we use the two-point (central difference) formula to compute the derivative of a function; then we have
$$y'(x) = \frac{y(x+h) - y(x-h)}{2h} + E_T,$$

  28. where ET is the truncation error. Using Taylor’s series expansion, we can see that
$$E_T = -\frac{h^2}{6}\,y'''(x) - \frac{h^4}{120}\,y^{(5)}(x) - \cdots$$
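A quick symbolic check of this expansion (using SymPy, an extra dependency not mentioned on the slides, with sin x standing in for a general smooth y) confirms that only even powers of h appear in the truncation error of the two-point formula:

```python
import sympy as sp

x, h = sp.symbols('x h')
f = sp.sin(x)                      # any smooth test function will do

# Two-point (central) approximation F(h) and its truncation error E_T = f'(x) - F(h)
F = (f.subs(x, x + h) - f.subs(x, x - h)) / (2 * h)
E_T = sp.diff(f, x) - F

# Expected: E_T = -(h**2/6)*f''' - (h**4/120)*f^(5) - ...; with f = sin this is
# h**2*cos(x)/6 - h**4*cos(x)/120 + O(h**6)
print(sp.series(E_T, h, 0, 6))
```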

  29. The idea of Richardson’s extrapolation is to combine two computed values of y′(x) using the same method but with two different step sizes, usually h and h/2, to yield a higher order method. Thus, we have
$$y'(x) = F(h) + c_1 h^2 + c_2 h^4 + c_3 h^6 + \cdots$$

  30. and
$$y'(x) = F\!\left(\frac{h}{2}\right) + c_1 \frac{h^2}{4} + c_2 \frac{h^4}{16} + c_3 \frac{h^6}{64} + \cdots$$

  31. Here, ci are constants, independent of h, and F(h) and F(h/2) represent approximate values of the derivative. Eliminating c1 from the above pair of equations, we get
$$y'(x) = \frac{4F(h/2) - F(h)}{3} - \frac{c_2}{4} h^4 - \frac{5c_3}{16} h^6 - \cdots$$

  32. Now, assuming
$$F_1(h) = \frac{4F(h/2) - F(h)}{3},$$

  33. the equation for y′(x) above reduces to
$$y'(x) = F_1(h) + d_1 h^4 + d_2 h^6 + \cdots,$$
where $d_1 = -c_2/4$, $d_2 = -5c_3/16$, and so on.

  34. Thus, we have obtained a fourth-order accurate differentiation formula by combining two results which are second-order accurate. Now, repeating the above argument, we have
$$y'(x) = F_1\!\left(\frac{h}{2}\right) + d_1 \frac{h^4}{16} + d_2 \frac{h^6}{64} + \cdots$$

  35. Eliminating d1 from the above pair of equations, we get a better approximation as
$$y'(x) = F_2(h) + O(h^6),$$

  36. which is sixth-order accurate, where
$$F_2(h) = \frac{16 F_1(h/2) - F_1(h)}{15}.$$

  37. This extrapolation process can be repeated until the required accuracy is achieved, which is called extrapolation to the limit. Therefore, the equation for F2 above can be generalized as
$$F_m(h) = \frac{4^m F_{m-1}(h/2) - F_{m-1}(h)}{4^m - 1}, \qquad m = 1, 2, 3, \ldots,$$
with $F_0(h) = F(h)$.
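A small Python sketch of this generalized recursion, applied to the two-point central difference discussed above, might look as follows; the function names and the choice of test function are illustrative assumptions rather than anything fixed by the slides.

```python
import math

def central(f, x, h):
    """Second-order central two-point approximation F(h) of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson_derivative(f, x, h, levels=4):
    """Richardson extrapolation table for f'(x).

    T[m][k] holds F_m computed from base step h/2**k, using
    F_m(h) = (4**m * F_{m-1}(h/2) - F_{m-1}(h)) / (4**m - 1).
    The most extrapolated value is T[levels][0].
    """
    # Level 0: plain central differences with steps h, h/2, h/4, ...
    T = [[central(f, x, h / 2**k) for k in range(levels + 1)]]
    for m in range(1, levels + 1):
        prev = T[-1]
        T.append([(4**m * prev[k + 1] - prev[k]) / (4**m - 1)
                  for k in range(len(prev) - 1)])
    return T

# Illustrative check: d/dx sin(x) at x = 1 should approach cos(1).
T = richardson_derivative(math.sin, 1.0, 0.4)
print(T[0][0], T[-1][0], math.cos(1.0))
```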
