Computing Systems


Presentation Transcript


  1. Computing Systems: Basic arithmetic for computers (claudio.talarico@mail.ewu.edu)

  2. Numbers
  • Bit patterns have no inherent meaning
    • conventions define the relationship between bits and numbers
  • Numbers can be represented in any base
  • Binary numbers (base 2): 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 ...
    • decimal: 0 ... 2^n - 1
  • Of course it gets more complicated:
    • negative numbers
    • fractions and real numbers
    • numbers are finite (overflow)
  • How do we represent negative numbers? i.e., which bit patterns will represent which numbers?

  3. Possible Representations

    Bits   Sign Magnitude   One's Complement   Two's Complement
    000        +0                +0                 +0
    001        +1                +1                 +1
    010        +2                +2                 +2
    011        +3                +3                 +3
    100        -0                -3                 -4
    101        -1                -2                 -3
    110        -2                -1                 -2
    111        -3                -0                 -1

  • Issues: balance, number of zeros, ease of operations
  • Which one is best? Why?
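
  The table can also be reproduced programmatically. The following C sketch (mine, not part of the slides) decodes each 3-bit pattern under the three conventions:

```c
#include <stdio.h>

/* Decode a 3-bit pattern b (0..7) under the three conventions in the table. */
int sign_magnitude(int b)  { return (b & 4) ? -(b & 3) : (b & 3); }
int ones_complement(int b) { return (b & 4) ? -((~b) & 3) : b; }
int twos_complement(int b) { return (b & 4) ? b - 8 : b; }

int main(void) {
    printf("bits  sign-mag  one's-compl  two's-compl\n");
    for (int b = 0; b < 8; b++)
        printf("%d%d%d    %+d        %+d           %+d\n",
               (b >> 2) & 1, (b >> 1) & 1, b & 1,
               sign_magnitude(b), ones_complement(b), twos_complement(b));
    return 0;
}
```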

  4. Two's complement format
  [Figure: the sixteen 4-bit patterns 0000 ... 1111 shown against the values they represent, illustrating the two's complement format]

  5. Two's complement operations
  • Negating a two's complement number: invert all bits and add 1
    • remember: "negate" and "invert" are quite different!
    • the sum of a number and its inverted representation must be 111...111two, which represents -1, so inverting and adding 1 yields the negation
  • Converting n-bit numbers into numbers with more than n bits:
    • copy the most significant bit (the sign bit) into the new bits: 0010 -> 0000 0010, 1010 -> 1111 1010
    • referred to as "sign extension"
    • a MIPS 16-bit immediate gets converted to 32 bits for arithmetic
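
  As a rough illustration of these two operations in C (my own sketch, not MIPS code; the function names are mine):

```c
#include <stdint.h>
#include <stdio.h>

/* Negate by inverting all bits and adding 1 (two's complement negation). */
uint32_t negate(uint32_t x) { return ~x + 1; }

/* Sign-extend a 16-bit value to 32 bits by copying its sign bit (bit 15). */
int32_t sign_extend16(uint16_t imm) {
    uint32_t x = imm;
    if (x & 0x8000) x |= 0xFFFF0000u;   /* copy bit 15 into the upper 16 bits */
    return (int32_t)x;                  /* reinterpret as signed (two's complement) */
}

int main(void) {
    printf("negate(2) = 0x%08x\n", negate(2));                      /* 0xfffffffe = -2 */
    printf("0x000a -> 0x%08x\n", (uint32_t)sign_extend16(0x000a));  /* 0x0000000a      */
    printf("0xfffa -> 0x%08x\n", (uint32_t)sign_extend16(0xfffa));  /* 0xfffffffa      */
    return 0;
}
```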

  6. Two's complement
  • Two's complement gets its name from the rule that the unsigned sum of an n-bit number and its negative is 2^n
  • Thus, the negation of a number x is 2^n - x
  • adding a number x and its negation, reading the bit patterns as unsigned, gives 2^n (which wraps to 0 in an n-bit word)
  • Negative integers in two's complement notation look like large numbers in unsigned notation (e.g., -1 is 111...11two)
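
  A quick C check of this claim for a 32-bit word (my own illustration): the negation of x, viewed as unsigned, is 2^32 - x, and -1 appears as the largest unsigned value 111...11two:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t x = 5;
    uint32_t neg_x = ~x + 1;                                 /* 2^32 - x as an unsigned value */
    printf("-5 as unsigned: %u (0x%08x)\n", neg_x, neg_x);   /* 4294967291 = 2^32 - 5         */
    printf("x + (-x)      : %u\n", x + neg_x);               /* wraps to 0 (2^32 mod 2^32)    */
    printf("-1 as unsigned: 0x%08x\n", (uint32_t)-1);        /* 0xffffffff = 111...11two      */
    return 0;
}
```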

  7. MIPS word is 32 bits long
  • 32-bit signed numbers (msb = bit 31, lsb = bit 0):

    0000 0000 0000 0000 0000 0000 0000 0000two =              0ten
    0000 0000 0000 0000 0000 0000 0000 0001two = +            1ten
    0000 0000 0000 0000 0000 0000 0000 0010two = +            2ten
    ...
    0111 1111 1111 1111 1111 1111 1111 1110two = + 2,147,483,646ten
    0111 1111 1111 1111 1111 1111 1111 1111two = + 2,147,483,647ten = 2^31 - 1  (maxint)
    1000 0000 0000 0000 0000 0000 0000 0000two = - 2,147,483,648ten = -2^31     (minint)
    1000 0000 0000 0000 0000 0000 0000 0001two = - 2,147,483,647ten
    1000 0000 0000 0000 0000 0000 0000 0010two = - 2,147,483,646ten
    ...
    1111 1111 1111 1111 1111 1111 1111 1101two = -            3ten
    1111 1111 1111 1111 1111 1111 1111 1110two = -            2ten
    1111 1111 1111 1111 1111 1111 1111 1111two = -            1ten

  8. Addition and subtraction
  • Just like in grade school (carry/borrow 1s):

      0111      0111      0110
    + 0110    - 0110    - 0101
    ------    ------    ------
      1101      0001      0001

  • Two's complement operations are easy
    • subtraction using addition of the negative number: 0111 + 1010 = 0001
  • Overflow (result too large for the finite computer word):
    • adding two n-bit numbers does not always yield a valid n-bit result: 0111 + 0001 = 1000, but +8 does not fit in 4 bits
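
  A small C sketch (my own, using 4-bit values masked out of ordinary ints) that reproduces the subtraction-by-addition and overflow examples above:

```c
#include <stdio.h>

/* Interpret the low 4 bits of b as a two's complement value. */
int as_4bit_signed(unsigned b) { b &= 0xF; return (b & 0x8) ? (int)b - 16 : (int)b; }

/* Add two 4-bit patterns, keep 4 bits, and show the result both ways. */
void add4(unsigned a, unsigned b) {
    unsigned sum = (a + b) & 0xF;                  /* keep only 4 bits */
    printf("%2d + %2d = bits %u%u%u%u = %d\n",
           as_4bit_signed(a), as_4bit_signed(b),
           (sum >> 3) & 1, (sum >> 2) & 1, (sum >> 1) & 1, sum & 1,
           as_4bit_signed(sum));
}

int main(void) {
    add4(0x7, 0xA);   /* 7 + (-6) = 1: subtraction via adding the negative     */
    add4(0x7, 0x1);   /* 7 + 1 -> 1000 = -8: the true sum +8 does not fit      */
    add4(0x7, 0x6);   /* 7 + 6 -> 1101 = -3: another overflow case             */
    return 0;
}
```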

  9. Detecting overflow
  • No overflow when adding a positive and a negative number
  • No overflow when the signs are the same for subtraction
  • Overflow occurs when the value affects the sign:
    • overflow when adding two positives yields a negative
    • or, adding two negatives gives a positive
    • or, subtract a negative from a positive and get a negative
    • or, subtract a positive from a negative and get a positive
  • Consider the operations A + B and A - B:
    • Can overflow occur if B is 0? cannot occur!
    • Can overflow occur if A is 0? can occur!
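
  The sign rule above can be written directly as a check on the operand and result signs; a hedged C sketch (the function name is mine):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Signed addition overflows exactly when both operands have the same sign
   and the sign of the result differs from it (the rule on the slide). */
bool add_overflows(int32_t a, int32_t b) {
    uint32_t ua = (uint32_t)a, ub = (uint32_t)b;
    uint32_t sum = ua + ub;                        /* unsigned add: no trap, just wraps */
    return (((ua ^ sum) & (ub ^ sum)) >> 31) != 0; /* bit 31 set when signs disagree    */
}

int main(void) {
    printf("%d\n", add_overflows(INT32_MAX, 1));    /* pos + pos -> neg : 1 */
    printf("%d\n", add_overflows(INT32_MIN, -1));   /* neg + neg -> pos : 1 */
    printf("%d\n", add_overflows(5, -7));           /* mixed signs      : 0 */
    return 0;
}
```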

  10. Effects of overflow
  • An exception (interrupt) occurs
    • control jumps to a predefined address for the exception
    • the interrupted address is saved for possible resumption
  • Details depend on the software system / language
  • Don't always want to detect overflow
    • new MIPS instructions: addu, addiu, subu
    • unsigned integers are commonly used for memory addresses, where overflows are ignored
    • note: addiu still sign-extends its immediate!
    • note: sltu, sltiu are used for unsigned comparisons
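
  By analogy (this is C, not MIPS): unsigned arithmetic simply wraps around with no exception, which is the behaviour addu/addiu/subu provide:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 0x7fffffff, b = 1;
    uint32_t s = a + b;                       /* wraps silently, like MIPS addu       */
    printf("0x%08x + 1 = 0x%08x\n", a, s);    /* 0x80000000                           */
    printf("as signed: %d\n", (int32_t)s);    /* -2147483648 on a two's complement machine */
    /* Addresses are treated as unsigned, so this wraparound is often the
       intended behaviour rather than an error to be trapped. */
    return 0;
}
```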

  11. Floating point numbers (a brief look)
  • We need a way to represent
    • numbers with fractions, e.g., 3.1416
    • very small numbers, e.g., 0.000000001
    • very large numbers, e.g., 3.15576 x 10^9
  • Solution: scientific representation
    • sign, exponent, significand: (-1)^sign x significand x 2^E
    • more bits for the significand gives more accuracy
    • more bits for the exponent increases the range
  • A number in scientific notation that has no leading 0s is called normalized; its significand has the form 1 + fraction

  12. Floating point numbers
  • A floating point number represents a number in which the binary point is not fixed
  • IEEE 754 floating point standard:
    • single precision (32 bits): 1 bit sign, 8 bit exponent, 23 bit fraction
    • double precision (64 bits): 1 bit sign, 11 bit exponent, 52 bit fraction

  13. IEEE 754 floating-point standard
  • Leading "1" bit of the significand is implicit
  • Exponent is "biased" to make sorting easier
    • all 0s is the smallest exponent, all 1s is the largest (almost)
    • bias of 127 for single precision and 1023 for double precision
    • summary: (-1)^sign x (1 + fraction) x 2^(exponent - bias)
  • Example:
    • decimal: -0.75 = -(1/2 + 1/4)
    • binary: -0.11 = -1.1 x 2^-1
    • floating point: exponent = E + bias = -1 + 127 = 126 = 0111 1110two
    • IEEE single precision (sign | exponent | fraction): 1 | 01111110 | 100 0000 0000 0000 0000 0000
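
  A short C check of the -0.75 example (my own sketch), extracting the sign, biased exponent, and fraction fields from the single-precision bit pattern:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = -0.75f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* reinterpret the 32-bit pattern   */

    uint32_t sign     = bits >> 31;            /* 1 bit                            */
    uint32_t exponent = (bits >> 23) & 0xFF;   /* 8 bits, biased by 127            */
    uint32_t fraction = bits & 0x7FFFFF;       /* 23 bits, implicit leading 1      */

    printf("bits     = 0x%08x\n", bits);       /* 0xbf400000                       */
    printf("sign     = %u\n", sign);           /* 1                                */
    printf("exponent = %u (E = %d)\n", exponent, (int)exponent - 127); /* 126, -1  */
    printf("fraction = 0x%06x\n", fraction);   /* 0x400000 = binary 0.100...0      */
    return 0;
}
```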

  14. IEEE 754 encoding of floating points
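
  As a rough companion to this slide (my own sketch, not the original table), the following C code classifies a single-precision pattern by its exponent and fraction fields into the cases the IEEE 754 encoding distinguishes: zero, denormalized, normalized, infinity, and NaN:

```c
#include <math.h>      /* INFINITY, NAN */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Classify a single-precision value by its exponent/fraction fields
   (field positions as given on slide 12). */
const char *classify(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t exp  = (bits >> 23) & 0xFF;
    uint32_t frac = bits & 0x7FFFFF;
    if (exp == 0)    return frac == 0 ? "zero"     : "denormalized";
    if (exp == 0xFF) return frac == 0 ? "infinity" : "NaN";
    return "normalized";
}

int main(void) {
    printf("%s %s %s %s %s\n",
           classify(0.0f), classify(1.5f), classify(1e-45f),
           classify(INFINITY), classify(NAN));
    /* prints: zero normalized denormalized infinity NaN */
    return 0;
}
```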

  15. Floating point complexities
  • Operations are somewhat more complicated (see text)
  • In addition to overflow we can have "underflow"
    • overflow: a positive exponent too large to fit in the exponent field
    • underflow: a negative exponent too large (in magnitude) to fit in the exponent field
  • Accuracy can be a big problem
    • IEEE 754 keeps two extra bits, guard and round
    • four rounding modes
    • a positive number divided by zero yields "infinity"
    • zero divided by zero yields "not a number" (NaN)
    • other complexities
  • Implementing the standard can be tricky
  • Not using the standard can be even worse: Pentium bug!

  16. IEEE 754 encoding

  17. Floating point addition/subtraction
  To add/subtract two floating point numbers:
  • Step 1: align the binary point of the number with the smaller exponent
    • compare the two exponents
    • shift the significand of the smaller number to the right by an amount equal to the difference of the two exponents
  • Step 2: add/subtract the significands
  • Step 3: the sum may not be in normalized scientific notation, so adjust it
    • either shift the significand right and increment the exponent, or shift it left and decrement the exponent
  • Step 4: round the number
    • add 1 to the least significant bit if the first bit being thrown away is 1
  • Step 5: if necessary, re-normalize
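
  A simplified C walk-through of these steps (my own sketch: positive, normalized single-precision inputs only, and no rounding, so steps 4-5 are omitted):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy single-precision add for positive, normalized inputs: it follows the
   align / add / normalize steps above but skips rounding and special values. */
float toy_fadd(float a, float b) {
    uint32_t x, y;
    memcpy(&x, &a, sizeof x);
    memcpy(&y, &b, sizeof y);

    int ex = (x >> 23) & 0xFF, ey = (y >> 23) & 0xFF;
    uint64_t sx = (x & 0x7FFFFF) | 0x800000;    /* restore the implicit leading 1 */
    uint64_t sy = (y & 0x7FFFFF) | 0x800000;

    /* Step 1: shift the significand of the smaller exponent to the right. */
    int d = ex - ey;
    if (d >= 0) sy >>= (d > 63 ? 63 : d);
    else        { sx >>= (-d > 63 ? 63 : -d); ex = ey; }

    /* Step 2: add the significands. */
    uint64_t s = sx + sy;

    /* Step 3: renormalize (here the sum can carry out by at most one bit). */
    if (s >> 24) { s >>= 1; ex++; }

    /* Steps 4-5 (round, re-normalize) are omitted from this sketch. */
    uint32_t r = ((uint32_t)ex << 23) | ((uint32_t)s & 0x7FFFFF);
    float out;
    memcpy(&out, &r, sizeof out);
    return out;
}

int main(void) {
    printf("%g\n", toy_fadd(1.5f, 2.25f));   /* 3.75 */
    return 0;
}
```

  Real hardware also carries the guard and round bits mentioned on slide 19 through the shift and add, which is what the rounding step works with.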

  18. Floating point addition/subtraction

  19. Accuracy of floating point arithmetic
  • Floating-point numbers are often approximations of numbers they can't really represent
  • Rounding requires the hardware to include extra bits in the calculation
  • IEEE 754 always keeps 2 extra bits on the right during intermediate additions, called guard and round respectively

  20. Common fallacies
  • Fallacy: floating point addition is associative, x + (y + z) = (x + y) + z
    • Example: see the C check below for values where the two groupings give different results
  • Fallacy: just as a left shift can replace an integer multiply by a power of 2, a right shift is the same as an integer division by a power of 2 (true for unsigned, but not for signed, even when we sign extend)
    • Example (shift right by two = divide by 4ten?):
      -5ten = 1111 1111 1111 1111 1111 1111 1111 1011two
      shifted right by two (with sign extension) gives 1111 1111 1111 1111 1111 1111 1111 1110two = -2ten
      but it should be -1ten
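
  Both fallacies can be checked with a few lines of C (my own examples, with values chosen to make the effects visible):

```c
#include <stdio.h>

int main(void) {
    /* Fallacy 1: FP addition is not associative once magnitudes differ enough. */
    float x = -1.5e38f, y = 1.5e38f, z = 1.0f;
    printf("x + (y + z) = %g\n", x + (y + z));   /* 0: z is lost next to y */
    printf("(x + y) + z = %g\n", (x + y) + z);   /* 1                      */

    /* Fallacy 2: arithmetic shift right is not signed division by 4.
       (>> on a negative int is implementation-defined in C; on the usual
       two's complement machines it is an arithmetic shift.) */
    int a = -5;
    printf("-5 >> 2 = %d, -5 / 4 = %d\n", a >> 2, a / 4);   /* -2 vs -1 */
    return 0;
}
```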

  21. Concluding remarks
  • Computer arithmetic is constrained by limited precision
  • Bit patterns have no inherent meaning (a side effect of the stored-program concept), but standards do exist:
    • two's complement
    • IEEE 754 floating point
  • Computer instructions determine the "meaning" of the bit patterns
  • Performance and accuracy are important, so there are many complexities in real machines
  • Algorithm choice is important and may lead to hardware optimizations for both space and time (e.g., multiplication)
