Komputasi Numerik: Integrasi dan Differensiasi Numerik (Numerical Computation: Numerical Integration and Differentiation)
Agus Naba, Physics Dept., FMIPA-UB
Ordinary Differential Equation (ODE) is a differential equation in which all dependent variables are functions of a single independent variable.
First-order Ordinary Differential Equation (ODE): dy/dt = f(t, y)
Given the initial value y(t0), a numerical scheme enables us to calculate all of the yn = y(tn) on a grid of times tn.
The curve y(t) is generally not a straight line between the neighbouring grid-times tn and tn+1, as Euler's method assumes.
According to the Taylor series: y(tn + δt) = y(tn) + δt y'(tn) + O(δt²)
Each step therefore incurs a truncation error ~ δt²
Over the ~1/δt steps needed to cross a fixed interval, the net truncation error of Euler's Method is ~ δt
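The scheme above can be sketched in a few lines of Python (the test problem dy/dt = −y, y(0) = 1, with exact solution e^(−t), is an assumption chosen for illustration):

```python
import math

def euler(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) from t0 to t1 with n Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # derivative evaluated only at the start of the step
        t += h
    return y

# Assumed test problem: dy/dt = -y, y(0) = 1, exact solution y(t) = exp(-t)
err = abs(euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000) - math.exp(-1.0))
print(err)   # global error ~ h, up to a problem-dependent constant
```

Halving h should roughly halve the final error, consistent with the net truncation error being ~ δt.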
For every type of computer there is a characteristic number ε, defined as the smallest number which, when added to a number of order unity, gives rise to a new (distinguishable) number.
ε = 2.2 × 10^-16 (for double precision numbers on an IBM-PC)
ε = 1.19 × 10^-7 (for single precision numbers on an IBM-PC)
The net round-off error of Euler's Method is ~ ε/δt.
At large δt the error is dominated by the truncation error, whereas the round-off error dominates at small δt.
Minimum net numerical errors are achieved when the truncation and round-off contributions balance: δt ~ ε^(1/2), giving a minimum total error ~ ε^(1/2).
Euler's method is not generally used in scientific computing.
The main reason is its large truncation error: Euler's method only evaluates the derivative at the beginning of the interval [tn, tn+1], i.e., at tn.
(This is very asymmetric with respect to the beginning and the end of the interval.)
The 2nd order RK (midpoint) method evaluates the derivative at the interval midpoint tn + h/2:
k1 = h f(tn, yn)
k2 = h f(tn + h/2, yn + k1/2)
yn+1 = yn + k2 + O(h³)
The Modified Euler's Method instead averages the derivatives at tn and tn + h:
k1 = h f(tn, yn)
k2 = h f(tn + h, yn + k1)
yn+1 = yn + (k1 + k2)/2 + O(h³)
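A sketch of the midpoint formula in Python (the function name rk2_midpoint and the test problem dy/dt = −y are illustrative assumptions, not from the slides):

```python
import math

def rk2_midpoint(f, y0, t0, t1, n):
    """2nd-order RK: evaluate the derivative at the interval midpoint tn + h/2."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        y += k2            # local error ~ h^3, global error ~ h^2
        t += h
    return y

# Assumed test problem: dy/dt = -y, y(0) = 1, exact solution exp(-t)
err = abs(rk2_midpoint(lambda t, y: -y, 1.0, 0.0, 1.0, 100) - math.exp(-1.0))
print(err)
```

Halving h should reduce the final error by roughly a factor of four, confirming the ~ h² global behaviour.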
The optimal step-length hmin increases and the minimum attainable error εmin decreases as the order of the method gets larger, but each step needs more computational effort.
err = y_analytic − y_numeric
Global integration errors associated with Euler's method (solid curve) and the 4th order Runge-Kutta method (dotted curve) plotted against the step-length h. Single precision calculation.
Global integration errors associated with Euler's method (solid curve) and the 4th order Runge-Kutta method (dotted curve) plotted against the step-length h. Double precision calculation.
Consider the following ODE:
err = x_analytic − x_numeric
Global integration error associated with a fixed step-length (h = 0.01), 4th order RK method, plotted against the independent variable, t, for a system of o.d.e.s in which the variation scale-length decreases rapidly with increasing t. Double precision calculation.
It can be seen that, although the error starts off small, it rises rapidly as the variation scale-length of the solution decreases (i.e., as t increases), and quickly becomes unacceptably large. Of course, we could reduce the error by simply reducing the step-length, h. However, this is a very inefficient solution. The step-length only needs to be reduced at large t. There is no need to reduce it, at all, at small t.
Solution: h should be large at small t but needs to be reduced at large t
The step-length h should be increased if the truncation error per step is too small, and vice versa, in such a manner that the error per step remains relatively constant at some target value ε0.
Global integration errors associated with a fixed step-length (h = 0.01) 4th order RK method (solid curve) and a corresponding adaptive method (ε0 = 10^-8) (dotted curve), plotted against the independent variable, t, for a system of o.d.e.s in which the variation scale-length decreases rapidly with increasing t. Double precision calculation.
An object is moving through space; its position as a function of time, x(t), is recorded in a table.
Determine the object's velocity v(t) = dx/dt and acceleration a(t) = d²x/dt².
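For tabulated data this can be done with central differences (the free-fall table and the uniform spacing dt below are assumptions for illustration):

```python
# Central-difference estimates of v = dx/dt and a = d2x/dt2 from a table
# of positions x[k] sampled at uniform spacing dt (an assumption here).
def velocity(x, i, dt):
    return (x[i + 1] - x[i - 1]) / (2 * dt)

def acceleration(x, i, dt):
    return (x[i + 1] - 2 * x[i] + x[i - 1]) / dt**2

dt = 0.1
x = [0.5 * 9.8 * (k * dt) ** 2 for k in range(6)]   # free fall: x = g t^2 / 2
print(velocity(x, 3, dt), acceleration(x, 3, dt))    # v = g t, a = g at t = 0.3
```

Because x(t) is quadratic in this assumed example, both central-difference formulas are exact up to rounding.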
Even a computer runs into errors with such a method because of its subtraction operations: the numerator tends to fluctuate between 0 and the machine precision as the denominator approaches zero.
c denotes a computed expression.
FD (forward difference): using two points to represent the function by a straight line in the interval from x to x + h.
This clearly becomes a good approximation only for small h, i.e., h << 2x (for the example f(x) = x², the FD estimate of the derivative is 2x + h).
CD (central difference): using two points to represent the function by a straight line in the interval from x − h/2 to x + h/2.
For the example f(x) = x², the CD method gives the exact answer regardless of the size of h!
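Both claims are easy to check numerically for the quadratic example f(x) = x²:

```python
f = lambda x: x * x
x, h = 1.0, 0.5                          # deliberately large step h
fd = (f(x + h) - f(x)) / h               # forward difference: gives 2x + h
cd = (f(x + h / 2) - f(x - h / 2)) / h   # central difference: gives exactly 2x
print(fd, cd)   # 2.5 2.0 -- CD is exact for a quadratic even at large h
```

The FD error (here 0.5 = h) shrinks only linearly with h, while for a quadratic the CD formula's h² truncation term vanishes identically.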
It reduces the loss of precision that occurs when large and small numbers are added together, only to be subtracted from other large numbers.
Subtract the large numbers from each other first, and then add the difference to the small numbers!
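A minimal illustration of the rule (the magnitudes below are assumptions chosen to sit near the double-precision limit):

```python
a = 1.0e16            # a "large" number near the double-precision limit
s1, s2 = 3.0, 2.0     # the "small" numbers; the exact answer is s1 - s2 = 1.0

naive = (a + s1) - (a + s2)     # large + small first, then subtract
better = (a - a) + (s1 - s2)    # subtract the large numbers first

print(naive, better)   # 2.0 1.0 -- the naive ordering loses the answer
```

In the naive ordering, a + s1 rounds to the nearest representable double (spacing 2 at this magnitude), so the small numbers are corrupted before the subtraction ever happens.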
Regardless of the algorithm, evaluating the derivative of f(x) at x requires us to know the values of f surrounding x !
Once we have the derivative of f(x) at x, use the integration methods, e.g., the RK method, to approximate the values of f surrounding x!
The approximation/truncation errors in numerical differentiation decrease with decreasing step size h, while round-off errors increase with a smaller step size. The total error ε_tot = ε_approx + ε_ro is minimum when
dε_tot/dh = 0.
The limit of the round-off error is essentially machine precision: ε_ro ~ ε_m.
The h value for which the round-off and truncation errors are equal scales as h_fd ~ ε_m^(1/2) for the forward difference and h_cd ~ ε_m^(1/3) for the central difference.
E.g., for single precision (ε_m ≈ 10^-7) and f(x) = e^x or cos(x):
h_fd ≈ 0.0005 and h_cd ≈ 0.01
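The same trade-off is easy to reproduce in double precision (ε ≈ 2.2 × 10^-16), where the optimum shifts to h_cd ~ ε^(1/3) ≈ 10^-5; the scan below is an illustrative sketch:

```python
import math

f, dfdx = math.exp, math.exp      # f(x) = e^x, so f'(x) = e^x
x = 1.0

def cd_err(h):
    """Absolute error of the central-difference estimate of f'(x)."""
    return abs((f(x + h / 2) - f(x - h / 2)) / h - dfdx(x))

# The error first falls as h shrinks (truncation ~ h^2), then rises
# again once round-off ~ eps/h takes over.
errs = {h: cd_err(h) for h in (1e-2, 1e-5, 1e-12)}
print(errs)   # smallest error at h = 1e-5, near eps^(1/3)
```

Shrinking h by another seven decades makes the estimate worse, not better: round-off in the numerator is divided by an ever smaller denominator.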