
Chapter 2 A Survey of Simple Methods and Tools



  1. Chapter 2 A Survey of Simple Methods and Tools

  2. 2.1 Horner's Rule and Nested Multiplication • Nested multiplication: a polynomial p(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 can be rewritten in the nested form p(x) = a_0 + x(a_1 + x(a_2 + ... + x(a_(n-1) + x a_n)...)), so that it can be evaluated with only n multiplications and n additions. • For example, a cubic p(x) = a_3 x^3 + a_2 x^2 + a_1 x + a_0 becomes p(x) = a_0 + x(a_1 + x(a_2 + x a_3)).

  3. Horner's rule for polynomial evaluation: set b_n = a_n (the coefficient of the polynomial's highest-degree term), then b_k = a_k + x b_(k+1) for k = n-1, n-2, ..., 0 (using the remaining polynomial coefficients a_k); the final value is p(x) = b_0.
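A minimal Python sketch of this recurrence (the function name and the test polynomial are illustrative, not from the text):

    def horner(coeffs, x):
        """Evaluate p(x) = a_n x^n + ... + a_1 x + a_0 by Horner's rule.

        coeffs = [a_n, a_(n-1), ..., a_1, a_0], highest-degree coefficient first.
        """
        p = coeffs[0]          # b_n = a_n
        for a in coeffs[1:]:
            p = a + x * p      # b_k = a_k + x * b_(k+1)
        return p               # p(x) = b_0

    # Example: p(x) = 2x^3 - 3x + 1 at x = 2 gives 2*8 - 6 + 1 = 11
    print(horner([2.0, 0.0, -3.0, 1.0], 2.0))   # 11.0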

  4. Horner's rule for polynomial derivative evaluation • Polynomial first derivative: p'(x) = n a_n x^(n-1) + (n-1) a_(n-1) x^(n-2) + ... + 2 a_2 x + a_1, which is again a polynomial and so can itself be evaluated by Horner's rule. • For example, the derivative of the cubic p(x) = a_3 x^3 + a_2 x^2 + a_1 x + a_0 is p'(x) = 3a_3 x^2 + 2a_2 x + a_1 = a_1 + x(2a_2 + x(3a_3)).

  5. Horner’s rule for polynomial derivative evaluation

  6. A more efficient implementation of Horner's rule: if the intermediate values in the computation of p(x) are saved, then the subsequent computation of the derivative can be done more cheaply. Note that each b_k is also a function of x: b_k(x) = a_k + x b_(k+1)(x), with b_n(x) = a_n and p(x) = b_0(x). Differentiating the recurrence gives b_k'(x) = b_(k+1)(x) + x b_(k+1)'(x), with b_n'(x) = 0. Define c_k = b_k'(x), so that c_n = 0 and c_k = b_(k+1) + x c_(k+1) for k = n-1, ..., 0. Therefore p'(x) = b_0'(x) = c_0, and since the b_k are already available from the evaluation of p(x), the derivative costs only one additional Horner-style pass.
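A minimal Python sketch of this combined recurrence (the function name is illustrative):

    def horner_with_derivative(coeffs, x):
        """Evaluate p(x) and p'(x) together, reusing the intermediate b_k values.

        coeffs = [a_n, a_(n-1), ..., a_1, a_0], highest-degree coefficient first.
        """
        b = coeffs[0]      # b_n = a_n
        c = 0.0            # c_n = b_n' = 0
        for a in coeffs[1:]:
            c = b + x * c  # c_k = b_(k+1) + x * c_(k+1)   (uses the saved b_(k+1))
            b = a + x * b  # b_k = a_k    + x * b_(k+1)
        return b, c        # p(x) = b_0,  p'(x) = c_0

    # Example: p(x) = x^2 at x = 3 gives p = 9, p' = 6
    print(horner_with_derivative([1.0, 0.0, 0.0], 3.0))   # (9.0, 6.0)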

  7. 2.2 Difference Approximations to the Derivative: one-sided difference • The definition of the derivative: f'(x) = lim as h goes to 0 of (f(x+h) - f(x)) / h. • Taylor's Theorem: f(x+h) = f(x) + h f'(x) + (h^2/2) f''(ξ) for some ξ between x and x+h. • So that we have f'(x) = (f(x+h) - f(x)) / h - (h/2) f''(ξ). • Thus the error is roughly proportional to h. • Can we do better?

  8. 2.2 Difference Approximations to the Derivative: centered difference • Consider the two Taylor expansions: f(x+h) = f(x) + h f'(x) + (h^2/2) f''(x) + (h^3/6) f'''(ξ_1) and f(x-h) = f(x) - h f'(x) + (h^2/2) f''(x) - (h^3/6) f'''(ξ_2). • Subtracting and dividing by 2h gives the centered difference f'(x) ≈ (f(x+h) - f(x-h)) / (2h), whose error is roughly proportional to h^2.
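A small Python experiment along these lines (the test function f(x) = sin x and the point x = 1 are illustrative choices, not the text's Example 2.1):

    import math

    def forward_diff(f, x, h):
        # one-sided difference, error O(h)
        return (f(x + h) - f(x)) / h

    def centered_diff(f, x, h):
        # centered difference, error O(h^2)
        return (f(x + h) - f(x - h)) / (2.0 * h)

    exact = math.cos(1.0)   # f'(1) for f(x) = sin x
    for h in [0.1, 0.05, 0.025, 0.0125]:
        e1 = abs(forward_diff(math.sin, 1.0, h) - exact)
        e2 = abs(centered_diff(math.sin, 1.0, h) - exact)
        print(f"h = {h:.4f}   one-sided error = {e1:.2e}   centered error = {e2:.2e}")

Halving h roughly halves the one-sided error but cuts the centered error by nearly a factor of 4.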

  9. Example 2.1

  10. Further illustrate these differences in accuracy • Let's continue computing with the same example, but take more and smaller values of h: let h be repeatedly halved and record the corresponding errors in the approximation.

  11. The error decreases by a factor of nearly 4 each time h is halved, as expected for an O(h^2) method, but for the smallest values of h the error starts to increase. Why?

  12. Rounding Error • Let f̂ denote the function computation as actually done on the computer. • Define e(x) = f(x) - f̂(x) as the error between the function as computed in infinite precision and as actually computed on the machine. • The approximate derivative that we compute is constructed with f̂, not f. • Define ε as a bound on |e(x)| for all x near the point of interest.

  13. Rounding Error • We have f'(x) - (f̂(x+h) - f̂(x-h)) / (2h) = -(h^2/6) f'''(ξ) + (e(x+h) - e(x-h)) / (2h), which we write as a bound of the form C_1 h^2 + C_2 / h: a truncation term that shrinks with h plus a rounding term that grows as h shrinks.

  14. Rounding Error

  15. The C_1 h^2 term explains the reduction by a factor of nearly 4 when h is halved; once h is small enough that the C_2 / h rounding term dominates, halving h makes the computed error larger, which is what the tabulated errors show.
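A short Python experiment that exhibits both regimes (again with the illustrative choice f(x) = sin x, x = 1):

    import math

    def centered_diff(f, x, h):
        return (f(x + h) - f(x - h)) / (2.0 * h)

    # The error first falls like h^2, then grows roughly like eps/h
    # once rounding error dominates.
    exact = math.cos(1.0)
    h = 0.1
    for _ in range(12):
        err = abs(centered_diff(math.sin, 1.0, h) - exact)
        print(f"h = {h:.1e}   error = {err:.3e}")
        h /= 10.0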

  16. 2.3 Application: Euler's Method for Initial Value Problems • General form: y' = f(t, y), y(t_0) = y_0. • One-sided difference (Eq. 2.1): y'(t_n) ≈ (y(t_n + h) - y(t_n)) / h. • Euler's method: y_(n+1) = y_n + h f(t_n, y_n), where t_n = t_0 + nh and y_n ≈ y(t_n).
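A minimal Python sketch of the method (the test problem y' = -y, y(0) = 1 is an illustrative choice, not the text's Example 2.2):

    import math

    def euler(f, t0, y0, h, n_steps):
        """Euler's method for y' = f(t, y), y(t0) = y0; returns the t and y values."""
        ts, ys = [t0], [y0]
        t, y = t0, y0
        for _ in range(n_steps):
            y = y + h * f(t, y)   # y_(n+1) = y_n + h * f(t_n, y_n)
            t = t + h
            ts.append(t)
            ys.append(y)
        return ts, ys

    # y' = -y, y(0) = 1; exact solution y(t) = exp(-t)
    ts, ys = euler(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
    print(ys[-1], math.exp(-1.0))   # Euler value at t = 1 vs. exact value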

  17. Example 2.2

  18. 2.4 Linear Interpolation • Given a set of nodes x_k, if p(x_k) = f(x_k) for all k, then we say the function p interpolates the function f at these nodes. • Linear interpolation: using a straight line to approximate a given function. • For example: the equation of the straight line that passes through the two points (x_0, f(x_0)) and (x_1, f(x_1)) is p_1(x) = f(x_0) (x_1 - x) / (x_1 - x_0) + f(x_1) (x - x_0) / (x_1 - x_0).
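A small Python sketch of this two-point formula (the sample values for f(x) = sqrt(x) are illustrative):

    import math

    def linear_interp(x0, f0, x1, f1, x):
        """Evaluate the linear interpolant p1 through (x0, f0) and (x1, f1) at x."""
        return f0 * (x1 - x) / (x1 - x0) + f1 * (x - x0) / (x1 - x0)

    # Interpolate f(x) = sqrt(x) between x = 1 and x = 4, then evaluate at x = 2
    print(linear_interp(1.0, 1.0, 4.0, 2.0, 2.0), math.sqrt(2.0))   # 1.333... vs. 1.414...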

  19. [Figure: the linear interpolant p1(x) through the points (x0, f(x0)) and (x1, f(x1))]

  20. The accuracy of linear interpolation: for x in [x_0, x_1], f(x) - p_1(x) = (1/2) f''(ξ_x)(x - x_0)(x - x_1), so that |f(x) - p_1(x)| ≤ ((x_1 - x_0)^2 / 8) max |f''| on [x_0, x_1].

  21. Example 2.3: linear interpolation using the tabulated values f(0.1) and f(0.2).

  22. Piecewise linear interpolation: connect the data points (x_k, f(x_k)) with straight-line segments, i.e. use the linear interpolant p_1 on each subinterval [x_k, x_(k+1)]. • Example 2.4
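A sketch of piecewise linear evaluation in Python (the function name and the sample data are illustrative):

    import bisect

    def piecewise_linear(xs, ys, x):
        """Evaluate the piecewise linear interpolant through (xs[k], ys[k]) at x.

        xs must be increasing and x should lie in [xs[0], xs[-1]].
        """
        i = max(1, min(bisect.bisect_right(xs, x), len(xs) - 1))  # segment [xs[i-1], xs[i]]
        x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
        return y0 * (x1 - x) / (x1 - x0) + y1 * (x - x0) / (x1 - x0)

    # Samples of f(x) = x^2 at 0, 0.5, 1; evaluate the interpolant at x = 0.75
    print(piecewise_linear([0.0, 0.5, 1.0], [0.0, 0.25, 1.0], 0.75))   # 0.625 (exact value 0.5625)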

  23. 2.5 Application: the trapezoid rule • Define the integral of interest as I(f) = ∫_a^b f(x) dx. Replacing f by its linear interpolant through (a, f(a)) and (b, f(b)) and integrating gives the single-interval trapezoid rule T_1(f) = ((b - a)/2)(f(a) + f(b)).

  24. Error analysis • The interpolation error gives f(x) - p_1(x) = (1/2) f''(ξ_x)(x - a)(x - b); since (x - a)(x - b) does not change sign on [a, b], apply the Integral Mean Value Theorem, thus I(f) - T_1(f) = -((b - a)^3 / 12) f''(ξ) for some ξ in [a, b].

  25. The n-subinterval trapezoid rule: with h = (b - a)/n and x_k = a + kh, T_n(f) = (h/2) [ f(a) + 2 Σ_(k=1)^(n-1) f(x_k) + f(b) ], and the error satisfies I(f) - T_n(f) = -((b - a) h^2 / 12) f''(ξ) for some ξ in [a, b].

  26. This theorem tells us: • The numerical approximation will converge to the exact value as h goes to 0. • How fast this convergence occurs: the error is O(h^2), so halving h (doubling n) reduces the error by roughly a factor of 4.
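A minimal Python sketch of the composite rule (the integrand x^2 on [0, 1] is an illustrative check, not the text's Example 2.5):

    def trapezoid(f, a, b, n):
        """Composite trapezoid rule T_n(f) with n subintervals on [a, b]."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b))
        for k in range(1, n):
            total += f(a + k * h)
        return h * total

    # Integral of x^2 over [0, 1] is 1/3; the error should drop by ~4 when n doubles.
    exact = 1.0 / 3.0
    for n in [4, 8, 16, 32]:
        err = abs(trapezoid(lambda x: x * x, 0.0, 1.0, n) - exact)
        print(f"n = {n:3d}   error = {err:.3e}")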

  27. Example 2.5

  28. Example 2.6

  29. The stability of the trapezoid rule • Write T_n(f) = h Σ'' f(x_k), where the double prime on the summation symbol means that the first and last terms are multiplied by 1/2. • If each computed function value is in error by at most ε, then the computed and exact T_n differ by at most h Σ'' ε = ε (b - a), a bound that does not grow with n. • We conclude that the trapezoid rule is a stable numerical method. • In fact, almost all methods for numerically approximating integrals are stable.

  30. 2.6 Solution of tri-diagonal linear systems

  31. If A is tri-diagonal, then a_ij = 0 whenever |i - j| > 1: the only nonzero entries lie on the main diagonal and on the diagonals immediately above and below it. • For example, a 4 x 4 tri-diagonal matrix has the pattern
      [ d1  u1  0   0  ]
      [ l2  d2  u2  0  ]
      [ 0   l3  d3  u3 ]
      [ 0   0   l4  d4 ]

  32. Make a notational simplification: write the i-th equation as l_i x_(i-1) + d_i x_i + u_i x_(i+1) = r_i, where l_i, d_i, u_i are the sub-diagonal, diagonal, and super-diagonal entries of row i (with l_1 = u_n = 0) and r_i is the right-hand side. • Then the augmented matrix corresponding to the system has the banded pattern shown on the previous slide, with the right-hand side r appended as an extra column; only the vectors l, d, u, r need to be stored.

  33. Gaussian elimination • The elimination step: for i = 2, 3, ..., n, set m = l_i / d_(i-1), then d_i := d_i - m u_(i-1) and r_i := r_i - m r_(i-1), which eliminates the sub-diagonal entry. • The backward solution step: x_n = r_n / d_n, then for i = n-1, ..., 1, x_i = (r_i - u_i x_(i+1)) / d_i.
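A Python sketch of these two steps (often called the Thomas algorithm; the function name and the 3 x 3 test system are illustrative, and no pivoting is performed):

    def solve_tridiagonal(l, d, u, r):
        """Solve a tri-diagonal system by the elimination and back-substitution steps above.

        l, d, u, r are lists of length n: sub-diagonal (l[0] unused), diagonal,
        super-diagonal (u[n-1] unused), and right-hand side.  The eliminated
        diagonal entries are assumed to stay nonzero (e.g. diagonal dominance).
        """
        n = len(d)
        d, r = d[:], r[:]               # work on copies
        for i in range(1, n):           # elimination sweep
            m = l[i] / d[i - 1]
            d[i] -= m * u[i - 1]
            r[i] -= m * r[i - 1]
        x = [0.0] * n                   # backward solution step
        x[n - 1] = r[n - 1] / d[n - 1]
        for i in range(n - 2, -1, -1):
            x[i] = (r[i] - u[i] * x[i + 1]) / d[i]
        return x

    # Illustrative 3x3 system with solution (1, 1, 1):
    #   2x1 -  x2       = 1
    #   -x1 + 2x2 -  x3 = 0
    #        -  x2 + 2x3 = 1
    print(solve_tridiagonal([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0]))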

  34. Example 2.7

  35. After a single pass through the first loop: • We cannot continue the process, for we would have to divide by zero in the next step. • However, the solution of the system does indeed exist.

  36. Diagonal dominance for tri-diagonal matrices: the matrix is strictly diagonally dominant if |d_i| > |l_i| + |u_i| for every row i; in that case no zero pivot can occur and the elimination above always succeeds. • For example, a tri-diagonal matrix with 3 on the main diagonal and -1 on both off-diagonals is strictly diagonally dominant, since 3 > 1 + 1.
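A quick Python check of this condition (the function name is illustrative; l[0] and u[n-1] are treated as unused entries, matching the solver sketch above):

    def is_diagonally_dominant(l, d, u):
        """Strict diagonal dominance |d_i| > |l_i| + |u_i| for a tri-diagonal matrix."""
        n = len(d)
        return all(
            abs(d[i]) > (abs(l[i]) if i > 0 else 0.0) + (abs(u[i]) if i < n - 1 else 0.0)
            for i in range(n)
        )

    # The example above: 3 on the diagonal, -1 on the off-diagonals
    print(is_diagonally_dominant([0.0, -1.0, -1.0], [3.0, 3.0, 3.0], [-1.0, -1.0, 0.0]))   # True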
