Explore numerical methods, Richardson extrapolation, and integration techniques for estimating derivatives in computational mathematics. Discover first and second-derivative formulas through Taylor series and interpolation polynomials. Learn about error estimation and truncation errors in numerical differentiation. A programming experiment and examples illustrate the challenges and solutions in computing derivatives accurately.
CSE 551 Computational Methods 2018/2019 Fall Chapter 6 Numerical Differentiation and Integration
Outline Estimating Derivatives and Richardson Extrapolation Numerical Integration
References • W. Cheney and D. Kincaid, Numerical Mathematics and Computing, 6th ed. • Chapter 4, Section 4.3 • Chapter 5, Sections 5.1, 5.2, 5.3
Estimating Derivatives and Richardson Extrapolation First-Derivative Formulas via Taylor Series Richardson Extrapolation First-Derivative Formulas via Interpolation Polynomials Second-Derivative Formulas via Taylor Series Noise in Computation
Estimating Derivatives and Richardson Extrapolation • A numerical experiment shows that determining the derivative of a function f at a point x is not a trivial numerical problem. • Specifically, if f(x) can be computed with only n digits of precision, it is difficult to calculate f'(x) numerically with n digits of precision. • The difficulty comes from the subtraction between quantities that are nearly equal. • Several alternatives for the numerical computation of f'(x) and f''(x) are considered.
First-Derivative Formulas via Taylor Series • The obvious method is based on the definition of the derivative: selecting one or more small values of h, compute
f'(x) ≈ [f(x + h) − f(x)]/h    (1)
• What error is involved? By Taylor's Theorem,
f(x + h) = f(x) + h f'(x) + (1/2)h² f''(ξ)
so the approximation (1) carries the error term −(1/2)h f''(ξ) = O(h), where ξ lies in the interval having endpoints x and x + h.
• As h → 0, the difference between f'(x) and the estimate h⁻¹[f(x + h) − f(x)] approaches zero at the same rate that h does, that is, O(h). • If f''(x) = 0, the error term is (1/6)h² f'''(γ), which converges to zero faster, at the rate O(h²). • Usually, however, f''(x) is not zero.
• The truncation error for this numerical procedure is −(1/2)h f''(ξ). • It is present even if the calculations are performed with infinite precision; it arises from imitating the mathematical limit process by means of an approximation formula. • Additional (and worse) errors must be expected when the calculations are performed on a computer with finite word length.
Example • In Section 1.1, the program named First used the one-sided rule (1) to approximate the first derivative of the function f(x) = sin x at x = 0.5. • Explain what happens when a large number of iterations are performed, say n = 50.
First Programming Experiment • A short programming experiment involving numerical computations, interesting from the computational point of view: taking the derivative of a function. • The derivative of a function f at a point x is defined by
f'(x) = lim_{h→0} [f(x + h) − f(x)]/h
• A computer imitates the limit operation by using a sequence of numbers h such as 1/4, 1/16, 1/64, . . . (that is, h = 1/4^n), for they certainly approach zero rapidly.
• Many other simple sequences are possible, such as 1/n, 1/n², and 1/10^n. • The sequence 1/4^n consists of machine numbers in a binary computer and, on a 32-bit computer, will be sufficiently close to zero when n is 10.
• Compute f'(x) at the point x = 0.5, with f(x) = sin x:
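The experiment can be reproduced in a few lines; a minimal Python sketch, assuming the sequence h = 1/4^n described above (the loop bound n = 50 matches the example, while the print format is a choice of this sketch):

import math

def forward_diff_experiment(f, x, n_max=50):
    # One-sided rule [f(x + h) - f(x)]/h with h = 4**(-n), as in program First.
    for n in range(1, n_max + 1):
        h = 4.0 ** (-n)
        print(n, (f(x + h) - f(x)) / h)

forward_diff_experiment(math.sin, 0.5)   # exact value: cos(0.5) = 0.8775825...

In binary floating point, h = 4**(-n) is represented exactly, so the observed errors come from the subtraction f(x + h) − f(x) and the truncation term, not from representing h.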
Solution • There is a total loss of all significant digits! • Examining the computer output closely, a good approximation f'(0.5) ≈ 0.87758 appears early, but it deteriorates as the process continues. • The culprit is the subtraction of two nearly equal quantities, f(x + h) and f(x), which causes a loss of significant digits, as well as a magnification of this effect from dividing by a small value of h.
Solution • One remedy is to stop the iterations sooner! • When to stop an iterative process is a common question in numerical algorithms. • One approach is to monitor the iterations to determine when they settle down, i.e., when two successive estimates are within a prescribed tolerance.
• Alternatively, use the truncation error term: for six significant digits of accuracy, assume |f''(x)| ≤ 1 and take h = 1/4^n; then the error is at most (1/2)(1/4^n) < (1/2) × 10⁻⁶ once 4^n > 10⁶, that is, n > 6/log 4 ≈ 9.97. • So stop after about ten steps in the process. • In the experiment, the least error of 3.1 × 10⁻⁹ was found at iteration 14. (A sketch of the monitoring approach follows.)
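A hedged Python sketch of the monitoring idea (the tolerance value and the decision to compare successive estimates are illustrative assumptions, not prescribed by the text):

import math

def forward_diff_with_stop(f, x, tol=1.0e-7, n_max=50):
    # Stop when two successive estimates agree to within a prescribed tolerance.
    prev = None
    for n in range(1, n_max + 1):
        h = 4.0 ** (-n)
        est = (f(x + h) - f(x)) / h
        if prev is not None and abs(est - prev) < tol:
            return est, n
        prev = est
    return prev, n_max

print(forward_diff_with_stop(math.sin, 0.5))   # halts after roughly ten iterations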
• As in Newton's method and the Romberg method, it is advantageous to have the convergence of numerical processes occur with higher powers of some quantity approaching zero. • To obtain an approximation to f'(x) whose error behaves like O(h²), start from the following two Taylor series:
f(x + h) = f(x) + h f'(x) + (h²/2!) f''(x) + (h³/3!) f'''(x) + · · ·
f(x − h) = f(x) − h f'(x) + (h²/2!) f''(x) − (h³/3!) f'''(x) + · · ·
• By subtraction:
f(x + h) − f(x − h) = 2h f'(x) + 2[(h³/3!) f'''(x) + (h⁵/5!) f^(5)(x) + · · ·]
• This yields a very important formula for approximating f'(x):
f'(x) = [f(x + h) − f(x − h)]/(2h) − [(h²/3!) f'''(x) + (h⁴/5!) f^(5)(x) + · · ·]
• Expressed otherwise,
f'(x) ≈ [f(x + h) − f(x − h)]/(2h)    (5)
with an error whose leading term is −(1/6)h² f'''(x), that is, O(h²).
• By Taylor's Theorem with its error term:
f(x + h) = f(x) + h f'(x) + (h²/2!) f''(x) + (h³/3!) f'''(ξ₁)
f(x − h) = f(x) − h f'(x) + (h²/2!) f''(x) − (h³/3!) f'''(ξ₂)
• By subtraction:
f'(x) = [f(x + h) − f(x − h)]/(2h) − (h²/12)[f'''(ξ₁) + f'''(ξ₂)]
• The error term can be simplified: (1/2)[f'''(ξ₁) + f'''(ξ₂)] is the average of two values of f''' on the interval [x − h, x + h], so it lies between the least and greatest values of f''' there. • Since f''' is continuous on this interval, that average is assumed at some point ξ, giving the error term −(1/6)h² f'''(ξ). • Note the assumption that f''' is continuous on [x − h, x + h]. • This formula is very useful in the numerical solution of certain differential equations.
Example • Modify program First in Section 1.1 so that it uses the central difference formula (5) to approximate the first derivative of the function f(x) = sin x at x = 0.5.
Solution • The truncation error term for the central difference formula gives the condition
(1/6)(1/4^n)² < (1/2) × 10⁻⁶, or n > (6 − log 3)/log 16 ≈ 4.59
• So a good approximation is reached after about five iterations with this higher-order formula. • The least error of 3.6 × 10⁻¹² was at step 9. (A sketch of the modified experiment follows.)
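The modified experiment, as a minimal Python sketch under the same assumptions (h = 1/4^n, x = 0.5):

import math

def central_diff_experiment(f, x, n_max=20):
    # Central difference [f(x + h) - f(x - h)]/(2h) with h = 4**(-n).
    for n in range(1, n_max + 1):
        h = 4.0 ** (-n)
        print(n, (f(x + h) - f(x - h)) / (2.0 * h))

central_diff_experiment(math.sin, 0.5)   # settles near cos(0.5) within about five steps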
Richardson Extrapolation • The equation for f'(x) can be written in a simpler form:
f'(x) = (1/(2h))[f(x + h) − f(x − h)] + a₂h² + a₄h⁴ + a₆h⁶ + · · ·
where the constants a₂, a₄, . . . depend on f and x. • When such information is available about a numerical process, it is possible to use a powerful technique known as Richardson extrapolation to wring more accuracy out of the method. • This procedure is now explained, using the above equation as a model.
• Holding f and x fixed, define a function of h:
ϕ(h) = (1/(2h))[f(x + h) − f(x − h)]
• ϕ(h) is an approximation to f'(x) with error of order O(h²). • We want to compute lim_{h→0} ϕ(h) = f'(x). • Select a function f and plot ϕ(h) for h = 1, 1/2, 1/4, 1/8, . . . • Near zero, where we cannot actually calculate the value of ϕ by computer, ϕ is approximately a quadratic function of h, since the higher-order terms are negligible.
• Richardson extrapolation seeks to estimate the limiting value at 0 from some computed values of ϕ(h) near 0. • One could take any convenient sequence hₙ that converges to 0, calculate ϕ(hₙ), and use these values as approximations to f'(x).
• Instead, compute ϕ(h) for some h, and also ϕ(h/2), in order to eliminate the dominant term in the error series:
ϕ(h) = f'(x) + a₂h² + a₄h⁴ + a₆h⁶ + · · ·
ϕ(h/2) = f'(x) + (1/4)a₂h² + (1/16)a₄h⁴ + (1/64)a₆h⁶ + · · ·
• Multiply the second equation by 4 and subtract it from the first:
ϕ(h) − 4ϕ(h/2) = −3 f'(x) + (3/4)a₄h⁴ + (15/16)a₆h⁶ + · · ·
• Divide by −3 and rearrange:
f'(x) = ϕ(h/2) + (1/3)[ϕ(h/2) − ϕ(h)] + (1/4)a₄h⁴ + (5/16)a₆h⁶ + · · ·
• Thus, adding (1/3)[ϕ(h/2) − ϕ(h)] to ϕ(h/2) improves the precision to O(h⁴). • The error series that accompanies this new combination begins with (1/4)a₄h⁴; since h is small, this is a dramatic improvement. (A small numerical check follows.)
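The effect of one extrapolation step can be verified numerically; a small Python sketch (x = 0.5 and h = 0.1 are illustrative choices):

import math

x = 0.5
def phi(h):
    # Central difference estimate of f'(x) for f = sin; error O(h**2).
    return (math.sin(x + h) - math.sin(x - h)) / (2.0 * h)

h = 0.1
improved = phi(h / 2) + (phi(h / 2) - phi(h)) / 3.0   # error O(h**4)
print(abs(phi(h / 2) - math.cos(x)))    # about 4e-4
print(abs(improved - math.cos(x)))      # several orders of magnitude smaller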
• This process can be repeated. Writing Φ(h) = ϕ(h/2) + (1/3)[ϕ(h/2) − ϕ(h)], the previous derivation gives:
Φ(h) = f'(x) − (1/4)a₄h⁴ − (5/16)a₆h⁶ − · · ·
Φ(h/2) = f'(x) − (1/64)a₄h⁴ − (5/1024)a₆h⁶ − · · ·
• Combine these equations (multiply the second by 16 and subtract the first) to eliminate the first term in the error series.
• Hence
f'(x) = Φ(h/2) + (1/15)[Φ(h/2) − Φ(h)] + O(h⁶)
another improvement in the precision, now to O(h⁶). • To top it off, the same procedure can be repeated over and over again to kill higher and higher order terms in the error. • This is Richardson extrapolation.
• The same situation arises in the derivation of Romberg's algorithm, so a general procedure is set up that covers both situations. Let ϕ be a function such that
ϕ(h) = L − Σ_{k=1}^{∞} a₂ₖ h^(2k)
where the coefficients a₂ₖ are not known. • This is not interpreted as the definition of ϕ, but as a property that ϕ possesses. • It is assumed that ϕ(h) can be computed for any h > 0. • The objective is to approximate L accurately using ϕ.
• Select a convenient h, and compute the numbers
D(n, 0) = ϕ(h/2^n)    (0 ≤ n ≤ N)
• By the assumed property of ϕ, D(n, 0) = L + Σ_{k=1}^{∞} A(k, 0)(h/2^n)^(2k) with A(k, 0) = −a₂ₖ, so each D(n, 0) is a crude estimate of the unknown number L = lim_{h→0} ϕ(h). • More accurate estimates are obtained via Richardson extrapolation. • The extrapolation formula is
D(n, m) = [4^m/(4^m − 1)] D(n, m − 1) − [1/(4^m − 1)] D(n − 1, m − 1)    (1 ≤ m ≤ n ≤ N)
Theorem • THEOREM: RICHARDSON EXTRAPOLATION THEOREM • The quantities D(n, m) defined in the Richardson extrapolation process obey the equation
D(n, m) = L + Σ_{k=m+1}^{∞} A(k, m)(h/2^n)^(2k)    (0 ≤ m ≤ n ≤ N)
Proof • The equation is true by hypothesis if m = 0, with A(k, 0) = −a₂ₖ. • The proof is inductive: assume the equation is valid for an arbitrary value m − 1, and prove that it is then valid for m. • From the extrapolation formula, for a fixed value of m:
D(n, m) = [4^m/(4^m − 1)][L + Σ_{k=m}^{∞} A(k, m − 1)(h/2^n)^(2k)] − [1/(4^m − 1)][L + Σ_{k=m}^{∞} A(k, m − 1)(h/2^(n−1))^(2k)]
• After simplification, using (h/2^(n−1))^(2k) = 4^k (h/2^n)^(2k), this becomes
D(n, m) = L + Σ_{k=m}^{∞} A(k, m − 1)[(4^m − 4^k)/(4^m − 1)](h/2^n)^(2k)
• We are led to define
A(k, m) = A(k, m − 1)(4^m − 4^k)/(4^m − 1)
and to note that A(m, m) = 0. • Hence the equation is true for m, and the induction is complete.
• Because A(m, m) = 0, the summation for D(n, m) begins with the term in (h/2^n)^(2m+2). • Since h/2^n is small, the numbers D(n, m) approach L very rapidly as m grows.
• In practice, the quantities are arranged in a two-dimensional triangular array:
D(0, 0)
D(1, 0)  D(1, 1)
D(2, 0)  D(2, 1)  D(2, 2)
. . .
D(N, 0)  D(N, 1)  . . .  D(N, N)
• The main task is to generate such an array.
Algorithm: Richardson Extrapolation
1. Write a function for ϕ.
2. Decide on suitable values for N and h.
3. For i = 0, 1, . . . , N, compute D(i, 0) = ϕ(h/2^i).
4. For 1 ≤ j ≤ i ≤ N, compute
D(i, j) = D(i, j − 1) + (4^j − 1)^(−1)[D(i, j − 1) − D(i − 1, j − 1)]
Example • Write a procedure to compute the derivative of a function at a point by using the central difference formula (5) and Richardson extrapolation.
Solution • Input: a function f, a specific point x, a value of h, and a number n signifying how many rows of the array are to be computed. • Output: the array (d_ij).
Pseudocode
procedure Derivative(f, x, n, h, (d_ij))
  integer i, j, n;  real h, x;  real array (d_ij)_{0:n × 0:n}
  external function f
  for i = 0 to n do
    d_{i,0} ← [f(x + h) − f(x − h)]/(2h)
    for j = 1 to i do
      d_{i,j} ← d_{i,j−1} + (d_{i,j−1} − d_{i−1,j−1})/(4^j − 1)
    end for
    h ← h/2
  end for
end procedure Derivative
To test the procedure • Choose f(x) = sin x, with x₀ = 1.23095 94154 and h = 1. • Then f'(x) = cos x, and f'(x₀) = 1/3.
A test procedure
program Test_Derivative
  integer n ← 10;  real h ← 1;  x ← 1.23095 94154
  real array (d_ij)_{0:n × 0:n};  external function f
  call Derivative(f, x, n, h, (d_ij))
  output (d_ij)
end program Test_Derivative

real function f(x)
  real x
  f ← sin(x)
end function f
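A direct, non-authoritative Python translation of the two pseudocode units above (the list-of-lists storage for the triangular array is a choice of this sketch):

import math

def derivative(f, x, n, h):
    # Builds the Richardson array d[i][j] for 0 <= j <= i <= n, starting from
    # central difference estimates computed with h, h/2, h/4, ...
    d = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = (f(x + h) - f(x - h)) / (2.0 * h)
        for j in range(1, i + 1):
            d[i][j] = d[i][j - 1] + (d[i][j - 1] - d[i - 1][j - 1]) / (4.0 ** j - 1.0)
        h /= 2.0
    return d

d = derivative(math.sin, 1.2309594154, 10, 1.0)
for i in range(11):
    print(d[i][: i + 1])   # exact value is cos(x0) = 1/3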
• The computer output is a triangular array (d_ij) with indices 0 ≤ j ≤ i ≤ 10. • The most accurate value is d_{4,1} = 0.33333 33433. • The values d_{i,0}, obtained without any extrapolation, are not as accurate, having no more than four correct digits.
First-Derivative Formulas via Interpolation Polynomials • A general stratagem for approximating derivatives, integrals, and other quantities: the function f is first approximated by a polynomial p, so that f ≈ p. • Then f'(x) ≈ p'(x) is taken as the estimate. • This strategy must be used very cautiously, because the behavior of the interpolating polynomial can be oscillatory.
• In practice, the approximating polynomial p is determined by interpolation at a few points. • For example, let p₁ be the polynomial of degree at most 1 that interpolates f at two nodes, x₀ and x₁ (n = 1):
p₁(x) = f(x₀) + f[x₀, x₁](x − x₀),  so  f'(x) ≈ p₁'(x) = f[x₀, x₁] = [f(x₁) − f(x₀)]/(x₁ − x₀)
• If x₀ = x and x₁ = x + h, this is the forward difference estimate f'(x) ≈ [f(x + h) − f(x)]/h.
• If x₀ = x − h and x₁ = x + h, it is the central difference estimate f'(x) ≈ [f(x + h) − f(x − h)]/(2h). • Next consider interpolation with three nodes, x₀, x₁, and x₂. • The interpolating polynomial in Newton form is
p₂(x) = f(x₀) + f[x₀, x₁](x − x₀) + f[x₀, x₁, x₂](x − x₀)(x − x₁)
and its derivative is
p₂'(x) = f[x₀, x₁] + f[x₀, x₁, x₂](2x − x₀ − x₁)
• The right-hand side consists of two terms: the first is the previous estimate f[x₀, x₁], and the second is a refinement or correction term. • If p₂' is used to evaluate f'(x) at x = (1/2)(x₀ + x₁), the correction term is zero. • At this midpoint the first term should therefore be more accurate than in other cases, since there the correction term adds nothing. (A numerical illustration follows.)
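A short Python check of this midpoint behavior, as a sketch (the nodes 0.4, 0.6, 0.8 and the test point x = 0.5 are illustrative choices):

import math

def p2_prime(f, x0, x1, x2, x):
    # Derivative of the Newton-form interpolant on three nodes:
    # p2'(x) = f[x0,x1] + f[x0,x1,x2] * (2x - x0 - x1)
    f01 = (f(x1) - f(x0)) / (x1 - x0)
    f12 = (f(x2) - f(x1)) / (x2 - x1)
    f012 = (f12 - f01) / (x2 - x0)
    return f01 + f012 * (2.0 * x - x0 - x1)

# At x = (x0 + x1)/2 = 0.5 the correction term vanishes, so the estimate
# reduces to the divided difference f[x0, x1].
print(p2_prime(math.sin, 0.4, 0.6, 0.8, 0.5), math.cos(0.5))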
• The analysis of the errors follows a general procedure. Let pₙ be the polynomial of least degree that interpolates f at the nodes x₀, x₁, . . . , xₙ. • According to the first theorem on interpolation errors,
f(x) − pₙ(x) = [1/(n + 1)!] f^(n+1)(ξ) w(x)
where ξ depends on x and w(x) = (x − x₀)(x − x₁) · · · (x − xₙ). • Differentiating gives
f'(x) − pₙ'(x) = [1/(n + 1)!] f^(n+1)(ξ) w'(x) + [1/(n + 1)!] w(x) (d/dx)[f^(n+1)(ξ)]
• Here it is assumed that f^(n+1)(ξ) is differentiable as a function of x, which is true if f^(n+2) exists and is continuous.
• First observation about this error formula: w(x) vanishes at each node, so evaluation at a node xᵢ gives the simpler form
f'(xᵢ) − pₙ'(xᵢ) = [1/(n + 1)!] f^(n+1)(ξ) w'(xᵢ)
• Taking two points x₀ and x₁, with n = 1 and i = 0 (so that w'(x₀) = x₀ − x₁), this recovers the error term of the forward difference estimate. • Similar results follow with n = 1 and i = 1.
• Second observation: the error formula also becomes simpler if x is chosen as a point where w'(x) = 0. • E.g., if n = 1, w is a quadratic function that vanishes at the two nodes x₀ and x₁; since a parabola is symmetric about its axis, w'[(x₀ + x₁)/2] = 0. • The resulting formula is the central difference formula (5).
• Finally, consider four interpolation points: x₀, x₁, x₂, and x₃. • The interpolating polynomial with n = 3 is
p₃(x) = f(x₀) + f[x₀, x₁](x − x₀) + f[x₀, x₁, x₂](x − x₀)(x − x₁) + f[x₀, x₁, x₂, x₃](x − x₀)(x − x₁)(x − x₂)
• Its derivative is
p₃'(x) = f[x₀, x₁] + f[x₀, x₁, x₂](2x − x₀ − x₁) + f[x₀, x₁, x₂, x₃][(x − x₁)(x − x₂) + (x − x₀)(x − x₂) + (x − x₀)(x − x₁)]
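The same divided-difference machinery extends to four nodes; a minimal Python sketch (the node values and test point are illustrative only):

import math

def p3_prime(f, x0, x1, x2, x3, x):
    # Derivative of the Newton-form interpolant on four nodes, per the
    # formula for p3'(x) above.
    f01 = (f(x1) - f(x0)) / (x1 - x0)
    f12 = (f(x2) - f(x1)) / (x2 - x1)
    f23 = (f(x3) - f(x2)) / (x3 - x2)
    f012 = (f12 - f01) / (x2 - x0)
    f123 = (f23 - f12) / (x3 - x1)
    f0123 = (f123 - f012) / (x3 - x0)
    return (f01 + f012 * (2.0 * x - x0 - x1)
            + f0123 * ((x - x1) * (x - x2) + (x - x0) * (x - x2)
                       + (x - x0) * (x - x1)))

print(p3_prime(math.sin, 0.2, 0.4, 0.6, 0.8, 0.5), math.cos(0.5))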