Computing Software Basics



Computing Software Basics



Computation Problems

  • Making Computers Obey

    The computer repeatedly refuses to give you the correct answer

  • Limited Range of Numbers

    How to represent a general number in a finite amount of space (a finite number of digits) and how to deal with the resulting approximate representation

  • Complex Numbers and Inverse Functions

    How to investigate the way the computer handles complex numbers and inverse trigonometric functions

  • Summing Series

AGUS NABA: Computational Physics, Physics Dept., FMIPA-UB



Making Computers Obey

  • Computer Language: Computers always do exactly as they are told! You must tell them exactly, and completely, what to do, so you need to master the computer language.

  • Programming Concept: well-defined pseudocode.

  • Program Design: easy to use and easy for others to understand.

  • Method: Structured Programming by Flowcharting


Computer Languages: Computer’s Kernel and Shells



Computer Languages

  • Computers only understand their basic machine language (instructions telling the hardware to do things like move a number stored in one memory location to another location, or to do some simple binary arithmetic).

  • Any higher-level language needs to be translated into the basic machine language.

  • A shell (command-line interpreter) is a set of medium-level commands or small programs.

  • An operating system (OS) is a group of instructions used by the computer to communicate with users and devices, to store and read data, and to execute programs. Examples: Unix, VMS, MVS, DOS, COS, Windows.

  • The nucleus of the OS is called the kernel. The user seldom interacts with the kernel.

  • Compiled high-level languages: C, Fortran (translate an entire subprogram into the basic machine language all at one time).

  • Interpreted languages: BASIC, Maple (translate one program statement at a time).

  • A compiler is a program that treats your program as a foreign language and uses a built-in dictionary and a set of rules to translate it into the basic machine language.



Theory: Program Design

  • Simple and easy to read, making the action of each part clear and easy to analyse (just because it was hard for you to write the program doesn’t mean you should make it hard for others to read)

  • Document themselves so that the programmer and others understand what the program is doing

  • Easy to use

  • Easy and safe to modify for different computers or systems

  • Can be passed on to others to use and further develop

  • Give the correct answer


Limited Range of Numbers

Given only the digits 0 and 1, all numbers are represented in binary form.

Limitations:

  • N bits allow 2^N integers, but one bit is used for the sign, leaving 2^(N-1) values, i.e., the range [0, 2^(N-1)].

  • Long strings of 0s and 1s are fine for computers but awkward for people.

  • Binary is therefore converted to octal, decimal, or hexadecimal for readability -> still not so nice, because octal and hexadecimal do not follow the decimal rules of arithmetic.

  • Overflow: trying to store a number larger than the largest possible number.

  • Underflow: trying to store a number smaller than the smallest possible number.

  • A computer using 32 bits for integers can represent up to about 2^31 = 2 x 10^9 (compare this to the ratio of the size of the universe to the size of the proton: 10^24!).
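
To see the integer limit in practice, here is a minimal C sketch (assuming a machine where int is 32 bits). Note that signed overflow is formally undefined behaviour in C; on most machines the value simply wraps around:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        int big = INT_MAX;                        /* 2^31 - 1 = 2147483647 for a 32-bit int */
        printf("INT_MAX     = %d\n", big);
        printf("INT_MAX + 1 = %d\n", big + 1);    /* overflows: typically wraps to -2147483648 */
        return 0;
    }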


Bits and Bytes

  • 1 byte = 1 B = 8 bits

  • 1 K = 1 KB = 2^10 bytes = 1024 bytes

  • 512 KB = 2^19 bytes = 524,288 bytes

  • 1 byte is the amount of memory needed to store a single character like “a” or “b”.


Real Numbers

  • Fixed-point systems

  • Floating-point Systems



Fixed Point System

  • Fixed point:

    x_fix = sign x (a_n 2^n + a_(n-1) 2^(n-1) + ... + a_0 2^0 + ... + a_(-m) 2^(-m)), where each a_i is 0 or 1

  • Total bits: n + m = N - 1 are used to store the digits a_i, and 1 bit is used to store the sign.

  • On a 32-bit machine, 4-byte integers (integer*4) have the range:

    -2,147,483,648 ≤ integer*4 ≤ 2,147,483,647

  • All numbers have the same absolute error of 2^(-m-1) (half of the resolution 2^(-m)).

  • Main disadvantage: small numbers have large relative errors

  • Used mainly in special applications (like business), why ?
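
A minimal C sketch of the fixed-point idea, storing reals as integers scaled by 2^m with m = 16 fractional bits (the helpers to_fixed/from_fixed are illustrative names, not from the slides):

    #include <stdio.h>
    #include <stdint.h>

    #define M 16                                      /* number of fractional bits */

    int32_t to_fixed(double x)    { return (int32_t)(x * (1 << M)); }
    double  from_fixed(int32_t f) { return (double)f / (1 << M); }

    int main(void) {
        double  x    = 3.14159265;
        int32_t xfix = to_fixed(x);                   /* stored as a scaled integer */
        printf("stored integer : %d\n", xfix);
        printf("recovered value: %.8f\n", from_fixed(xfix));   /* absolute error below 2^-16 */
        return 0;
    }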


Floating Point Number

Used mainly in scientific work. A floating-point number is stored as x_float = (-1)^s x mantissa x 2^(expfld - bias), where:

  • s is the sign bit.

  • mantissa contains the significant figures of the number.

  • (expfld - bias) is the exponent.

  • Just as introducing a sign bit guarantees that the mantissa is always positive, introducing the bias guarantees that the number stored in the exponent field is always positive (even though the actual exponent can be negative).

  • The smaller the number, the smaller its absolute error; the larger the number, the larger its absolute error (the relative error is roughly the same over the whole range).
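
A minimal C sketch that pulls apart the three fields of a stored float. It assumes IEEE-754 single precision, which differs slightly from the scheme above in that it adds a hidden leading mantissa bit:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        float x = 0.5f;
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);               /* reinterpret the 32 stored bits */

        uint32_t s        = bits >> 31;               /* 1-bit sign field               */
        uint32_t expfld   = (bits >> 23) & 0xFFu;     /* 8-bit exponent field           */
        uint32_t mantissa = bits & 0x7FFFFFu;         /* 23-bit mantissa field          */

        printf("s = %u, expfld = %u, exponent = %d, mantissa = 0x%06X\n",
               s, expfld, (int)expfld - 127, mantissa);
        return 0;
    }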


Floating Point Numbers (cont’d)

  • Example, on a 32-bit machine:

    The exponent field takes 8 bits, i.e., the range [0, 255].

    The sign s takes 1 bit.

    bias is set to the fixed integer 127, so the actual exponent (expfld - bias) has the range [-127, 128].

    Single precision: 32 bits or 4 bytes.

    Double precision: 64 bits or 8 bytes.



Floating Point Numbers (cont’d)

  • Example: on a 32-bit machine, the number 0.5 is stored as

    0 0111 1111 1000 0000 0000 0000 0000 000

    The bias is 0111 1111_2 = 127_10, so (expfld - bias) = 0.

  • The sign bit is 0, denoting the positive sign of 0.5.

  • The largest possible floating-point number is

    0 1111 1111 1111 1111 1111 1111 1111 111_2 = 2^128 = 3.4 x 10^38

  • The smallest possible floating-point number is

    0 0000 0000 1000 0000 0000 0000 0000 000_2 = 2^-128 = 2.9 x 10^-39

    (the inverse of the largest number)

  • Single-precision (4-byte) numbers have 6-7 decimal places of precision (1 part in 2^23).
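
For comparison, a minimal C sketch printing the corresponding limits of IEEE-754 single precision; the exact values differ a little from the scheme above because IEEE reserves some exponent patterns and uses a hidden leading bit:

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        printf("largest float         : %e\n", FLT_MAX);      /* about 3.4e+38 */
        printf("smallest normal float : %e\n", FLT_MIN);      /* about 1.2e-38 */
        printf("machine epsilon       : %e\n", FLT_EPSILON);  /* about 1.2e-07, i.e. 1 part in 2^23 */
        printf("decimal digits        : %d\n", FLT_DIG);      /* 6 */
        return 0;
    }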



Underflow & Overflow

  • The real result of a computation may be unrepresentable because its exponent is beyond the range available in the floating-point system (underflow or overflow).

  • Overflow is usually a more serious problem than underflow, in the sense that there is no good approximation in a floating-point system to arbitrarily large numbers, whereas zero is often a reasonable approximation for arbitrarily small numbers.


Underflow and Overflow

  • A sample pseudocode (a runnable C version follows below):

    under = 1.
    over = 1.
    begin do N times
        under = under / 2.
        over = over * 2.
        write out: loop number, under, over
    end do
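
A minimal runnable C version of the pseudocode above, using single-precision variables; the iteration at which under reaches 0 and over becomes inf is machine-dependent:

    #include <stdio.h>

    int main(void) {
        float under = 1.0f, over = 1.0f;
        for (int i = 1; i <= 160; i++) {              /* "do N times"                */
            under /= 2.0f;                            /* eventually underflows to 0  */
            over  *= 2.0f;                            /* eventually overflows to inf */
            printf("%3d  under = %e  over = %e\n", i, under, over);
        }
        return 0;
    }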



Machine Precision

  • A computer always has limited precision.

    For a 32-bit machine, usually:

    single precision (4 bytes): 6-7 decimal places of precision

    double precision (8 bytes): 15-16 decimal places of precision

    Symbolic manipulation programs can store numbers with effectively infinite precision.



Machine Precision (cont’d)

  • Example, on a 32-bit machine:

    7 + 1.0 x 10^-7 = ?

    7 = 0 10000010 1110 0000 0000 0000 0000 000

    10^-7 = 0 01100000 1101 0110 1011 1111 1001 010

    It would be incorrect to add numbers with different exponents.

    The exponent bits of 10^-7 (0110 0000) must be shifted up until they equal the exponent bits of 7 (1000 0010).



Machine Precision (cont’d)

The exponent of the smaller number is made larger while the mantissa is progressively decreased by shifting its bits to the right:

10^-7 = 0 01100000 1101 0110 1011 1111 1001 010

= 0 01100001 0110 1011 0101 1111 1100 101 (0)

= 0 01100010 0011 0101 1010 1111 1110 010 (10)

= 0 01100011 0001 1010 1101 0111 1111 001 (010)

...

= 0 10000010 0000 0000 0000 0000 0000 000 (0001101 ...)

  • 7 + 1.0 x 10^-7 = 7

    There is no room left to store the shifted-out digits, so they are lost; the computer effectively ignores the 10^-7.
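
A minimal C sketch of the same effect: in single precision the 10^-7 is lost entirely, while in double precision it survives:

    #include <stdio.h>

    int main(void) {
        float  xf = 7.0f + 1.0e-7f;       /* single precision: the 1e-7 is lost  */
        double xd = 7.0  + 1.0e-7;        /* double precision: the 1e-7 survives */
        printf("single: %.10f\n", xf);    /* prints 7.0000000000 */
        printf("double: %.10f\n", xd);    /* prints 7.0000001000 */
        return 0;
    }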



Machine Precision Number

  • Each computer has a machine precision ε_m, the maximum positive number that can be added to the number stored as 1 without changing that stored number:

    1_c + ε_m = 1_c

    The subscript c is a reminder that this is the number stored in the computer’s memory.

  • For any number x,

    x_c = x (1 + ε),   |ε| ≤ ε_m

  • For single precision ε_m ≈ 10^-7, and for double precision ε_m ≈ 10^-16.

  • For a number larger than 2^128, an overflow occurs.

  • For a nonzero number smaller than 2^-128, an underflow occurs.

  • An overflowed/underflowed number may end up as a machine-dependent pattern, NaN (Not a Number), or an unpredictable value.



Determining the Precision Number

eps = 1.
begin do N times
    eps = eps / 2.
    one = 1. + eps
    write out: loop number, one, eps
end do

Note: to print out a decimal number, the computer must convert it from its internal binary format. Not only does this take time, but if the internal number is close to being garbage, it is not clear what will be printed. You may print the numbers in octal or hexadecimal to obtain a truly precise indication of what is stored. A runnable C version of the pseudocode above is sketched below.
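
A minimal runnable C version of the pseudocode, in single precision; the iteration at which one collapses back to 1 reveals ε_m:

    #include <stdio.h>

    int main(void) {
        float eps = 1.0f;
        for (int i = 1; i <= 30; i++) {               /* "do N times" */
            eps /= 2.0f;
            float one = 1.0f + eps;
            printf("%2d  one = %.9f  eps = %e\n", i, one, eps);
            if (one == 1.0f) break;                   /* eps has dropped below the machine precision */
        }
        return 0;
    }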



Typical Floating-Point System


Rounding

  • Real numbers that are exactly representable in a given floating-point system are called machine numbers.

  • If a real number x is not representable as a floating-point number, it must be approximated by some “nearby” floating-point number, denoted by fl(x).

  • The process of choosing a nearby floating-point number fl(x) to approximate x is called rounding, and the error introduced by this approximation is called rounding error, or roundoff error.
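
A minimal C sketch showing that 0.1 is not a machine number: it must be rounded to the nearest representable value fl(0.1), and that value differs between single and double precision:

    #include <stdio.h>

    int main(void) {
        float  xf = 0.1f;                 /* fl(0.1) in single precision */
        double xd = 0.1;                  /* fl(0.1) in double precision */
        printf("single: %.20f\n", xf);    /* 0.10000000149011611938...   */
        printf("double: %.20f\n", xd);    /* 0.10000000000000000555...   */
        return 0;
    }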


Example floating-point number system

(Figure: the representable numbers of the example system, plotted on a number line from -2 to 2.)


Rounding Rules

  • Chop: fl(x) is the next floating-point number toward zero from x (a.k.a. round toward zero)

  • Round to nearest: fl(x) is the nearest floating-point number to x; in case of a tie, we use the floating-point number whose last stored digit is even (a.k.a. round to even). This is the default rounding rule in IEEE standard systems.
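
A minimal C sketch of round-to-nearest-even in action: rint() from <math.h> rounds to an integer using the current rounding mode (to-nearest-even by default), so exact halves go to the even neighbour:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* exact halves are rounded to the neighbour whose last digit is even */
        printf("rint(0.5) = %.1f\n", rint(0.5));      /* 0.0 */
        printf("rint(1.5) = %.1f\n", rint(1.5));      /* 2.0 */
        printf("rint(2.5) = %.1f\n", rint(2.5));      /* 2.0 */
        printf("rint(3.5) = %.1f\n", rint(3.5));      /* 4.0 */
        return 0;
    }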


Example of Rounding


Relative Error

  • The relative error in representing a nonzero real number x in a floating-point system is |fl(x) - x| / |x|.

  • Alternatively, fl(x) = x (1 + ε), where ε is the relative error.


Floating-Point Arithmetic

  • In adding or subtracting two floating-point numbers, their exponents must match

  • Multiplication of two floating-point numbers does not require that their exponents match: the exponents are simply summed and the mantissas multiplied.


Example

  • X = 1.92403 x 10^2 and Y = 6.35782 x 10^-1

    X + Y = 1.93039 x 10^2 -> the last two digits of Y have no effect on the result

    X * Y = 1.22326 x 10^2 (of the 2p digits of the exact product, p digits are discarded)
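
A minimal C sketch reproducing the example, printing the results to the 6 significant digits assumed as the working precision p:

    #include <stdio.h>

    int main(void) {
        double X = 1.92403e2, Y = 6.35782e-1;
        printf("X + Y = %.6g\n", X + Y);   /* 193.039: the last digits of Y do not show up  */
        printf("X * Y = %.6g\n", X * Y);   /* 122.326: digits beyond the 6th are discarded  */
        return 0;
    }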


Example

  • The series has a finite sum in a floating-point system even though the real series is divergent (see the sketch below).
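
A minimal C sketch, assuming the classic example of the harmonic series 1 + 1/2 + 1/3 + ... (the series itself is not reproduced in the transcript): in single precision the partial sums stop changing once 1/n falls below ε_m times the running sum, so the computed "sum" is finite even though the true series diverges:

    #include <stdio.h>

    int main(void) {
        float sum = 0.0f, prev = -1.0f;
        long  n = 0;
        while (sum != prev) {               /* stop when adding 1/n no longer changes the sum */
            prev = sum;
            n++;
            sum += 1.0f / (float)n;
        }
        printf("sum stalled at %f after %ld terms\n", sum, n);  /* roughly 15.4, machine-dependent */
        return 0;
    }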


Cancellation

  • Often occurs when subtracting two floating-point numbers.

  • Causes a potentially serious loss of information.

    Example: (1 + δ) - (1 - δ) = 1 - 1 = 0 (instead of 2δ) when δ is below the machine precision !!!
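
A minimal C sketch of this cancellation: for a δ below the single-precision machine precision, the computed difference is 0 instead of 2δ:

    #include <stdio.h>

    int main(void) {
        float delta = 1.0e-8f;              /* below the single-precision machine precision */
        float a = 1.0f + delta;             /* rounds to exactly 1.0                        */
        float b = 1.0f - delta;             /* rounds to exactly 1.0                        */
        printf("computed: %e   exact: %e\n", a - b, 2.0 * delta);
        /* computed is 0.0e+00: the information carried by delta is completely lost */
        return 0;
    }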


Rules

  • Computing a small quantity as the difference of two large quantities is generally a bad idea, for rounding error is likely to dominate the result (see the sketch below).
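
A standard illustration (a sketch, not taken from the slides): the smaller root of x^2 - 2bx + c = 0 computed naively as b - sqrt(b^2 - c) subtracts two nearly equal large numbers, while the algebraically equivalent form c / (b + sqrt(b^2 - c)) avoids the cancellation:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double b = 1.0e8, c = 1.0;                     /* x^2 - 2bx + c = 0, with b >> c               */
        double naive  = b - sqrt(b * b - c);           /* difference of two huge, nearly equal numbers */
        double stable = c / (b + sqrt(b * b - c));     /* algebraically identical, no cancellation     */
        printf("naive  root: %.15e\n", naive);         /* ruined by cancellation (may even print 0)    */
        printf("stable root: %.15e\n", stable);        /* about 5.0e-9, correct to full precision      */
        return 0;
    }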


Examples of Cancellation


Summing Series

  • A classical



Computation of Total Energy of Helium Atom using Monte Carlo Technique
