
Computer Systems 1 Fundamentals of Computing




1. Computer Systems 1 Fundamentals of Computing: Negative & Real Number Binary

2. Negative & Real Binary
• Binary subtraction
• Real numbers in binary
• Ranges
• Fixed point
• Floating point notation

3. Binary Subtraction
• Using two's complement we can perform binary subtraction
• It is essentially addition, e.g. 20 - 13 = 20 + (-13)
• Take two binary numbers: the first is called the minuend, the second the subtrahend
• The subtrahend is negated using two's complement
• Binary addition is then applied
• Any carry out of the word is dropped from the result if required

4. Binary Subtraction
• E.g. 20 - 13 = 20 + (-13) = 7

      010100   (20)
    + 110011   (two's complement of 13: 001101 inverted -> 110010, plus 1 -> 110011)
     1000111

• Drop the carried leading 1
• = 000111 = decimal 7
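
A minimal Python sketch of this procedure, assuming a 6-bit word to match the example above; the function names are mine, not from the slides:

    def twos_complement(value, bits=6):
        """Mask `value` to `bits` bits: this forms the two's complement pattern
        for a negative value and also drops any carry out of the word."""
        return value & ((1 << bits) - 1)

    def subtract(minuend, subtrahend, bits=6):
        """Subtract by adding the two's complement of the subtrahend."""
        negated = twos_complement(~subtrahend + 1, bits)   # invert the bits and add 1
        return twos_complement(minuend + negated, bits)    # add, then drop the carry out

    print(format(subtract(20, 13), "06b"))   # 000111, i.e. decimal 7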

5. Binary Subtraction
• What if the minuend is smaller than the subtrahend (a negative result)?
• E.g. 20 - 23 = -3, or 20 + (-23) = -3

      010100   (20)
    + 101001   (two's complement of 23: 010111 inverted -> 101000, plus 1 -> 101001)
      111101

• 111101 = decimal -3
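
To read a result pattern like 111101 back as a signed value, a small helper can be used; again a sketch with my own naming, assuming the same 6-bit word:

    def to_signed(pattern, bits=6):
        """Interpret a two's complement bit pattern as a signed integer."""
        return pattern - (1 << bits) if pattern & (1 << (bits - 1)) else pattern

    print(to_signed(0b111101))   # -3, matching the result above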

6. Number Range
• The number of bits in a word determines the number range
• The sign bit used by the two's complement and sign & magnitude representations reduces the range compared with an unsigned number
• Sign & Magnitude: the sign bit is not part of the magnitude
  • RANGE (4 bit): 1111 to 0111 (-7 to 7)
  • RANGE (8 bit): 11111111 to 01111111 (-127 to 127)
• Two's Complement: the sign bit takes part in the value (when set, its weight is negative), but it still reduces the range
  • RANGE (4 bit): 1000 to 0111 (-8 to 7)
  • RANGE (8 bit): 10000000 to 01111111 (-128 to 127)
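
These ranges follow directly from the bit counts; a short Python sketch (the function name and return layout are mine) reproduces the figures above:

    def signed_ranges(bits):
        """Representable ranges for the two signed schemes on this slide."""
        sign_and_magnitude = (-(2 ** (bits - 1) - 1), 2 ** (bits - 1) - 1)
        twos_complement = (-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
        return sign_and_magnitude, twos_complement

    print(signed_ranges(4))   # ((-7, 7), (-8, 7))
    print(signed_ranges(8))   # ((-127, 127), (-128, 127))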

7. Number Range - Overflow
• If the result of a calculation goes beyond the fixed bit size of a word then overflow has occurred
• Overflow is detected and flagged by the ALU
• A common problem is a result spilling into the sign bit, so the sign bit ends up holding part of the magnitude
  • Especially a risk when potentially working with negative numbers
• E.g. 96 + 64 = 160

      01100000   (96)
    + 01000000   (64)
      10100000   (reads as -96 in two's complement, not 160)

• Overcome by adding more bits to the word size
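
A sketch of detecting this kind of overflow for 8-bit signed addition; the specific test used here (both operands share a sign that the result lacks) is a standard rule rather than something stated on the slide:

    def add_8bit(a, b):
        """Add two 8-bit values and flag signed overflow."""
        result = (a + b) & 0xFF
        sign = 0x80
        # Overflow: the operands have the same sign bit but the result's sign bit differs.
        overflow = (a & sign) == (b & sign) and (a & sign) != (result & sign)
        return result, overflow

    print(add_8bit(0b01100000, 0b01000000))   # (160, True): 10100000, which reads as -96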

8. Real Numbers
• So far we have looked at methods for representing integers
• Real numbers cover the integers and the fractional values of a number system
  • Essentially all values
• Real numbers provide greater precision than integers
• To be useful the computer must be able to work with real numbers
• Methods of representing real numbers in binary:
  • Fixed point notation
  • Floating point notation

9. Fixed Point Notation
• Splits numbers around a point ".": INTEGER . FRACTION
  • An integer section and a fractional section
• The fraction is assumed to be to the right of the point
• The degree of precision is decided by the position of the point
• The point can be placed anywhere in the binary word
  • More digits to the left means more integer values can be represented: a greater numerical range
  • More digits to the right means greater precision: fractional information can be more precise
• In practice the point position within a word is assumed (fixed)
• The same binary word can therefore have many different values depending on the position of the point

10. Fixed Point Notation
• E.g. using 8-bit words:
  • 000010.11 = 2.75   ((1 x 2) + (1 x 0.5) + (1 x 0.25))
  • 0000101.1 = 5.5    ((1 x 4) + (1 x 1) + (1 x 0.5))
• Fixed point numbers are prone to overflow and underflow when using fixed word sizes
  • Overflow with larger numbers
  • Underflow when fractions are too small to represent
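
A small Python sketch of evaluating a fixed-point word for an assumed point position (the function name and arguments are mine); note that the two examples above are the same 8-bit pattern read with the point in different places:

    def fixed_point_value(bits, frac_bits):
        """Interpret an unsigned bit string as fixed point with `frac_bits` bits after the point."""
        return int(bits, 2) / (2 ** frac_bits)

    print(fixed_point_value("00001011", 2))   # 000010.11 -> 2.75
    print(fixed_point_value("00001011", 1))   # 0000101.1 -> 5.5  (point one place to the right)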

11. Floating Point Notation
• The point is capable of moving: hence "floating"
• Converting fixed point to floating point is called normalisation
• Representation is achieved with:
  • a mantissa
  • an exponent
  • and a radix
• m * r^e, where m = mantissa (+ or -), r = radix, e = exponent (+ or -)
• In decimal the radix is 10; in binary the radix is 2
• E.g. in decimal: 5.2 * 10^6 (= 5,200,000)
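
In code this is simply m * r ** e; a one-line Python check of the decimal example above:

    m, r, e = 5.2, 10, 6
    print(m * r ** e)   # 5200000.0, i.e. 5.2 * 10^6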

12. Floating Point Notation
• E.g. 'normal' decimal numbers:
  • 2.5 * 10^3 = 2500
  • 8.9 * 10^8 = 890,000,000
  • 4.3 * 10^2 = 430
• E.g. decimal numbers < 1 and > 0:
  • 5.24 * 10^-5 = 0.0000524
  • 2.531 * 10^-7 = 0.0000002531
  • 6.7 * 10^-2 = 0.067

13. Floating Point Notation
• Converting FIXED to FLOAT (binary floating point):
• If the number is one or greater:
  • The point floats to the left
  • To the position before the MSB
  • Positive exponent (e)
• If the number is a fraction less than one but greater than zero, and a binary 1 does not immediately follow the point (i.e. the value is below 0.5):
  • The point floats to the right
  • To the position before the first non-zero bit
  • Negative exponent (e)

14. Floating Point Notation
• E.g. 'normal' binary numbers:
  • 101.01010 = 0.10101010 * 2^3
  • 1010.0110 = 0.10100110 * 2^4
  • 111011.01 = 0.11101101 * 2^6
• E.g. binary numbers less than 1 and greater than 0:
  • 0.0011001 = 0.11001 * 2^-2
  • 0.0000101 = 0.101 * 2^-4
  • 0.0000011 = 0.11 * 2^-5
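
A Python sketch of this normalisation for positive values; the function is mine and works on floats purely for illustration (hardware shifts bit patterns instead), and it reproduces the first and fourth examples above:

    def normalise(x):
        """Return (mantissa, exponent) with 0.5 <= mantissa < 1 and x == mantissa * 2**exponent.
        Assumes x > 0; Python's math.frexp(x) returns the same pair for positive x."""
        e = 0
        while x >= 1:       # number >= 1: the point floats left, the exponent increases
            x /= 2
            e += 1
        while x < 0.5:      # value below 0.5: the point floats right, the exponent decreases
            x *= 2
            e -= 1
        return x, e

    print(normalise(5.3125))      # 101.0101  -> (0.6640625, 3), i.e. 0.1010101 * 2^3
    print(normalise(0.1953125))   # 0.0011001 -> (0.78125, -2),  i.e. 0.11001 * 2^-2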

15. Floating Point Notation
• Storing floating point numbers: two parts have to be stored
  • Mantissa: the more bits, the greater the precision
  • Exponent: the more bits, the greater the range of magnitudes
• The exponent is typically given far fewer bits than the mantissa, as in the two 16-bit layouts below
• E.g. in a 16-bit word:
  • Mantissa = 12 bits, Exponent = 4 bits
• Or, in a 16-bit word:
  • Mantissa = 10 bits, Exponent = 6 bits

16. Floating Point Notation
• Allocation of bits, e.g. a 16-bit allocation:

    | SIGN | Mantissa (fraction) | Exponent (integer) |
      1 sign bit + 11 bits = 12 bits        4 bits

• The binary point appears to the right of the sign bit

17. Representing floating point numbers
• E.g. a 16-bit system
• 1110.0000011 = 0.11100000011 * 2^4
  • Mantissa = 011100000011 (base 2)
  • Exponent = 0100 (base 2)
• Can be written as mantissa 011100000011 with exponent 0100
• Actually stored as 0111000000110100
  • because the number format for floating point has already been decided
  • we know where the mantissa ends, etc.
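
A Python sketch of packing that example into the 12-bit-mantissa / 4-bit-exponent layout from slide 16; the function name is mine, and it assumes a positive value with a non-negative exponent:

    import math

    def encode16(value, mant_bits=11, exp_bits=4):
        """Pack a positive value as a sign bit, `mant_bits` mantissa fraction bits,
        and an unsigned exponent field."""
        m, e = math.frexp(value)          # m in [0.5, 1) and value == m * 2**e
        frac = ""
        for _ in range(mant_bits):        # peel off the mantissa bits one at a time
            m *= 2
            frac += "1" if m >= 1 else "0"
            m -= int(m)
        return "0" + frac + format(e, f"0{exp_bits}b")

    print(encode16(14.0234375))   # 0111000000110100, the stored pattern above (14.0234375 = 1110.0000011)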

18. Negative floating point
• Negative floating point numbers can be stored using the two's complement method
• A normalised positive mantissa lies from 0.5 up to (but not including) 1: patterns 0.100... to 0.111...
• A normalised negative mantissa lies from -1 up to (but not including) -0.5: patterns 1.000... to 1.011...
• Mantissa:
  • The sign bit before the binary point: 1 = negative, 0 = positive
  • The MSB to the right of the binary point: 1 = positive, 0 = negative
• If the sign bit and this MSB are the same then normalisation is required

19. Negative floating point
• E.g.
  • 0.1xxxxxxxxxx = normalised positive mantissa
  • 1.0xxxxxxxxxx = normalised negative mantissa
  • 0.0xxxxxxxxxx = invalid (un-normalised) positive
  • 1.1xxxxxxxxxx = invalid (un-normalised) negative
• Exponent:
  • Normal two's complement rules usually apply, because the exponent does not use a binary point
  • E.g. 1100 (base 2) = (1 x -8) + (1 x 4) = -4 (decimal)
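
A small sketch of these two checks in Python (the names are mine); the mantissa is treated as a bit string whose first character is the sign bit and whose second is the bit just after the point:

    def exponent_value(bits):
        """Read an exponent field as a plain two's complement integer (no binary point)."""
        v = int(bits, 2)
        return v - (1 << len(bits)) if bits[0] == "1" else v

    def is_normalised(mantissa_bits):
        """Normalised when the sign bit and the bit after the point differ."""
        return mantissa_bits[0] != mantissa_bits[1]

    print(exponent_value("1100"))      # -4, as above
    print(is_normalised("01010101"))   # True:  0.1... is a normalised positive
    print(is_normalised("11010101"))   # False: 1.1... is an invalid negative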

20. Floating point accuracy
• Precision can be lost when using floating point numbers
  • Especially when performing calculations
• Methods of improving precision therefore become useful
• Ways to do this:
  • Mantissa length
  • Rounding
  • Double precision numbers & arithmetic

21. Floating point accuracy
• Mantissa length
  • Increase the word size allocated to mantissa storage
  • This helps but never completely solves the problem
• Rounding
  • Attempts to reduce the error caused by losing a binary 1 from the number
  • The loss occurs during shifting and normalisation
  • If the last bit to be discarded is a 1, then a 1 is added at the LSB position
  • E.g. (5-bit number): 1.0101 shifted right = 0.10101, and the final 1 no longer fits
    • Truncating gives 0.1010; rounding with the lost 1 gives 0.1011
  • Rounding can also cause more problems than it solves
  • There are many different rounding algorithms
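
A Python sketch of this rounding rule applied to the 5-bit example above (the function name is mine, and a carry out of the kept bits is not handled):

    def round_kept_bits(frac_bits, width):
        """Keep `width` fraction bits; if the first discarded bit is 1, add 1 at the LSB."""
        kept, discarded = frac_bits[:width], frac_bits[width:]
        value = int(kept, 2)
        if discarded.startswith("1"):
            value += 1                  # round up rather than simply truncating
        return format(value, f"0{width}b")

    # After the shift the fraction bits are 10101, but only four of them fit.
    print(round_kept_bits("10101", 4))   # 1011, i.e. 0.1011 (plain truncation gives 0.1010)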

22. Double precision numbers
• Uses two sequential memory words to store data when required
  • A most significant half
  • A least significant half
• Can accommodate the numbers being used or hold the result of calculations
• Essentially expands the available size of the mantissa

23. CS1 - Week 23
• Binary subtraction using two's complement
  • Actually addition
  • Normal subtraction
  • Subtraction with a negative result
• Ranges
  • Overflow
  • Underflow
• Real number representation
  • Fixed point notation (binary)
  • Floating point notation (decimal and binary)
  • Storage
