
Can You Trust Your Computer?



Presentation Transcript


  1. Can You Trust Your Computer? CS365 – Mathematics of Computer Science Spring 2007 David Uhrig Tiffany Sharrard

  2. Introduction • User's Perception of Computers • Real vs. Floating Point Numbers • Rounding and Chopping • Over and Under Flow • Machine Constants • Error Propagation and Analysis • New Ideas and Solutions • Conclusion • Questions

  3. User’s Perception of Computers • How are computers perceived by users? • A computer is seen as a tool that will give you an exact answer • What a computer may actually do: • Return nothing but garbage, because of how computers handle real numbers

  4. Example 10^48 + 914 – 10^48 + 10^32 + 615 – 10^32 • The exact answer is 1529, but most digital computers would return zero • Why? (The sketch below reproduces the effect.)
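A minimal C sketch of this effect, assuming IEEE 754 double precision (about 16 significant decimal digits); evaluated left to right, the small terms are absorbed by the huge ones:

    #include <stdio.h>

    int main(void)
    {
        /* Evaluated left to right: 10^48 absorbs 914 (doubles carry only
           ~16 significant digits), and 10^32 absorbs 615, so each pair of
           huge terms cancels to zero. */
        double result = 1e48 + 914.0 - 1e48 + 1e32 + 615.0 - 1e32;
        printf("%g\n", result);   /* prints 0, not 1529 */
        return 0;
    }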

  5. Real vs. Floating Point • Real Number System • Can be written in decimal notation • Decimal expansions can be infinite (non-terminating) • Includes all positive and negative integers, fractions, and irrational numbers

  6. Real vs. Floating Point • Floating Point Number System • A t-digit base-b floating-point number has the form: ±d1d2d3…dt × b^e • Where d1d2d3…dt is the mantissa, b is the base of the number system, and e is the exponent

  7. Real vs. Floating Point • Floating Point Number System (cont’d) • The exponent e is an integer between two fixed integer bounds e1 and e2 • e1 ≤ e ≤ e2, with e1 ≤ 0 ≤ e2

  8. Real vs. Floating Point • Floating Point Number System (cont’d) • Normalized: the leading digit d1 is nonzero, so each value has a unique representation (see the sketch below) • The system depends on: • Base b • Length of the mantissa t • Bounds for the exponent, e1 and e2
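As a sketch of the normalized form on a binary machine (b = 2), the standard C function frexp splits a double into its mantissa and exponent:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        int e;
        /* frexp returns a mantissa m with 0.5 <= m < 1 (leading binary
           digit nonzero) and an exponent e such that x = m * 2^e. */
        double m = frexp(6.5, &e);
        printf("6.5 = %g * 2^%d\n", m, e);   /* 0.8125 * 2^3 */
        return 0;
    }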

  9. Rounding vs. Chopping • Chop • A number is chopped to t digits when all the digits past t are discarded • Example: t = 5, x = 2.5873892874, result = 2.5873

  10. Rounding vs. Chopping • Round • A number x is rounded to t digits when x is replaced by a t-digit number that approximates x with minimum error • Example: t = 5, x = 2.5873892874, result = 2.5874
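Both operations can be imitated in C; this is a sketch only, where chop_digits and round_digits are hypothetical helper names, assuming x > 0 and t significant decimal digits:

    #include <stdio.h>
    #include <math.h>

    /* Hypothetical helpers: keep t significant decimal digits of x > 0. */
    static double chop_digits(double x, int t)
    {
        double scale = pow(10.0, t - 1 - (int)floor(log10(x)));
        return floor(x * scale) / scale;    /* digits past t are discarded */
    }

    static double round_digits(double x, int t)
    {
        double scale = pow(10.0, t - 1 - (int)floor(log10(x)));
        return round(x * scale) / scale;    /* nearest t-digit number */
    }

    int main(void)
    {
        double x = 2.5873892874;
        printf("chop:  %.4f\n", chop_digits(x, 5));    /* 2.5873 */
        printf("round: %.4f\n", round_digits(x, 5));   /* 2.5874 */
        return 0;
    }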

  11. Overflow vs. Underflow • Overflow • Occurs when the result of a floating point operation is larger than the largest floating point number in the given floating point number system • When this occurs, almost all computers will signal an error

  12. Overflow vs. Underflow • Underflow • Occurs when the result of a computation is smaller than the smallest quantity the computer can store • Some systems never report this error because the machine silently sets the result to zero
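A short sketch of both failure modes, assuming IEEE 754 doubles, where overflow produces infinity and deep underflow flushes to zero:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* Overflow: the result is larger than DBL_MAX, so IEEE 754
           arithmetic returns infinity (many systems also raise a flag). */
        printf("DBL_MAX * 2      = %g\n", DBL_MAX * 2.0);     /* inf */

        /* Underflow: the result falls below even the denormal range,
           so it is silently flushed to zero. */
        printf("DBL_MIN * 1e-308 = %g\n", DBL_MIN * 1e-308);  /* 0 */
        return 0;
    }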

  13. Machine Constants • Amount of round-off depends on the floating-point format your computer uses • Before the error can be corrected, the machine constants need to be identified. • Constants vary greatly by hardware • IEEE 754 is the Standard for Binary Floating-Point Arithmetic

  14. Machine Constants • IEEE 754 Standard format parameters (see the sketch below)
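On a machine that follows the standard, these constants can be read directly from C's <float.h> header; a minimal sketch:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* Machine constants as reported by the C <float.h> header. */
        printf("float:  %2d base-%d mantissa digits, eps = %G\n",
               FLT_MANT_DIG, FLT_RADIX, FLT_EPSILON);
        printf("        range %G to %G\n", FLT_MIN, FLT_MAX);
        printf("double: %2d base-%d mantissa digits, eps = %G\n",
               DBL_MANT_DIG, FLT_RADIX, DBL_EPSILON);
        printf("        range %G to %G\n", DBL_MIN, DBL_MAX);
        return 0;
    }

On a typical IEEE 754 machine this reports 24 and 53 base-2 mantissa digits, with epsilons of about 1.19E-07 and 2.22E-16.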

  15. Machine Epsilon • To quantify the amount of round-off error, a round-off unit is specified: • ε – Machine Epsilon, or Machine Precision • This is the fractional accuracy of a floating point number • Defined by: ƒl(1 + ε) > 1, where ε is the smallest floating-point number for which the computed sum still exceeds 1

  16. Computing ε

Program output:

    david@david-laptop:~$ ./findepsilon
    current Epsilon, 1 + current Epsilon
    1            2.00000000000000000000
    0.5          1.50000000000000000000
    0.25         1.25000000000000000000
    0.125        1.12500000000000000000
    0.0625       1.06250000000000000000
    0.03125      1.03125000000000000000
    0.015625     1.01562500000000000000
    0.0078125    1.00781250000000000000
    0.00390625   1.00390625000000000000
    0.00195312   1.00195312500000000000
    0.000976562  1.00097656250000000000
    0.000488281  1.00048828125000000000
    0.000244141  1.00024414062500000000
    0.00012207   1.00012207031250000000
    6.10352E-05  1.00006103515625000000
    3.05176E-05  1.00003051757812500000
    1.52588E-05  1.00001525878906250000
    7.62939E-06  1.00000762939453125000
    3.8147E-06   1.00000381469726562500
    1.90735E-06  1.00000190734863281250
    9.53674E-07  1.00000095367431640625
    4.76837E-07  1.00000047683715820312
    2.38419E-07  1.00000023841857910156

    Calculated Machine epsilon: 1.19209E-07
    david@david-laptop:~$

C Code*:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        float machEps = 1.0f;
        printf("current Epsilon, 1 + current Epsilon\n");
        while (1)
        {
            printf("%G\t%.20f\n", machEps, 1.0f + machEps);
            machEps /= 2.0f;
            /* If the next epsilon yields 1, then break, because
               the current epsilon is the machine epsilon. */
            if ((float)(1.0 + (machEps / 2.0)) == 1.0)
                break;
        }
        printf("\nCalculated Machine epsilon: %G\n", machEps);
        return 0;
    }

* Code borrowed from the Wikipedia entry on Machine Epsilon: http://en.wikipedia.org/wiki/Machine_epsilon

  17. Error Propagation • An optimistic estimate of the round-off accumulated in performing N arithmetic operations is roughly √N·ε • It could be N·ε or even larger! • Example: Subtractive Cancellation • In 4-digit base-10 arithmetic: ƒl[(10000 + 1) – 10000] = 0, whereas exactly (10000 + 1) – 10000 = 1
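The slide's example uses 4-digit decimal arithmetic; the same absorption shows up in IEEE 754 single precision (roughly 7 decimal digits), which this sketch assumes:

    #include <stdio.h>

    int main(void)
    {
        /* Single precision carries ~7 decimal digits, so the 1 added to
           1e8 is lost entirely: fl(1e8 + 1) == 1e8. */
        float x = (1.0e8f + 1.0f) - 1.0e8f;
        printf("%f\n", x);   /* prints 0.000000; the exact answer is 1 */
        return 0;
    }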

  18. Error Analysis • Two primary techniques of error analysis • Forward Error Analysis • The error in the floating-point representation is subjected to the same mathematical operations as the data itself • Yields an equation for the error itself • Backward Error Analysis • Attempts to regenerate the original mathematical problem from the previously computed solution • Minimizes error generation and propagation

  19. Testing for Error Propagation • Substitute the computed solution back into the original problem (sketched below) • Use Double or Extended Precision rather than Single Precision • Rerun the problem with slightly modified (incorrect) data and compare the results
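A sketch of the first test on a hypothetical example: the two algebraically equivalent root formulas for x^2 – 10^8·x + 1 = 0 look fine in isolation, but substituting each computed root back into the equation exposes the cancellation-prone one:

    #include <stdio.h>
    #include <math.h>

    /* Value of x^2 + b*x + c at the computed root; zero if exact. */
    static double residual(double b, double c, double x)
    {
        return x * x + b * x + c;
    }

    int main(void)
    {
        double b = -1.0e8, c = 1.0;         /* x^2 - 1e8*x + 1 = 0 */
        double d = sqrt(b * b - 4.0 * c);

        double naive  = (-b - d) / 2.0;     /* subtractive cancellation  */
        double stable = 2.0 * c / (-b + d); /* algebraically identical   */

        /* The naive root leaves a large residual; the stable one is tiny. */
        printf("naive : x = %.10G  residual = %G\n", naive,  residual(b, c, naive));
        printf("stable: x = %.10G  residual = %G\n", stable, residual(b, c, stable));
        return 0;
    }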

  20. New Ideas • Increased RAM and processor speeds allow for more intricate alternatives to plain floating-point arithmetic and its errors • Non-floating-point arithmetic implementations • Rational Arithmetic (sketched below) • Multiple or Full Precision Arithmetic • Scalar and Dot Products of Vectors
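As a sketch of the first alternative, exact rational arithmetic can be built from integer pairs; the rat type and helpers below are hypothetical and ignore overflow of the integer parts:

    #include <stdio.h>

    typedef struct { long num, den; } rat;   /* hypothetical exact fraction */

    static long gcd(long a, long b) { return b ? gcd(b, a % b) : a; }

    static rat rat_add(rat x, rat y)
    {
        rat r = { x.num * y.den + y.num * x.den, x.den * y.den };
        long g = gcd(r.num < 0 ? -r.num : r.num, r.den);
        if (g > 1) { r.num /= g; r.den /= g; }   /* keep the fraction reduced */
        return r;
    }

    int main(void)
    {
        /* 1/10 + 2/10 is exactly 3/10 here, whereas 0.1 + 0.2 != 0.3
           in binary floating point. */
        rat a = {1, 10}, b = {2, 10};
        rat c = rat_add(a, b);
        printf("%ld/%ld\n", c.num, c.den);   /* prints 3/10 */
        return 0;
    }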

  21. Conclusion • User's Perception of Computers • Real vs. Floating Point Numbers • Rounding and Chopping • Over and Under Flow • Machine Constants • Error Propagation and Analysis • New Ideas and Solutions • ...Questions?
