Explore the fundamental concepts of data representation in computer science. Learn about the significance of binary, decimal, and hexadecimal systems, and how computers interpret data using bits and bytes. This overview explains the meaning of number sequences, differences between bit strings and byte strings, and how to convert between various numeric bases. Understand the smallest and largest byte values, the representation of characters in ASCII and Unicode, and engage in practical exercises to solidify your knowledge.
CSCI 1001 Overview of Computer Science: Representing Data I
What does each of these mean: VII, 7? How do computers represent data?
Review: Decimal
What does the digit sequence 563 mean?
563 = 5×10² + 6×10¹ + 3×10⁰
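A one-line check of this expansion (a Python sketch, not part of the original slides):

```python
# 563 decomposed by decimal place values: hundreds, tens, ones.
print(5 * 10**2 + 6 * 10**1 + 3 * 10**0)  # 563
```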
bit: a value that is either 0 or 1
string: a sequence of bits
byte: a string of 8 bits
Binary
A bit string can represent a number: 110 = 1×2² + 1×2¹ + 0×2⁰ = 6
A byte has the place values: 128 64 32 16 8 4 2 1
What is the smallest possible byte? The largest?
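The same positional arithmetic works in base 2. A short Python sketch (not from the slides) that also answers the smallest/largest-byte question:

```python
# The bit string "110" read with base-2 place values.
print(1 * 2**2 + 1 * 2**1 + 0 * 2**0)  # 6
print(int("110", 2))                   # 6, via Python's built-in base conversion

# A byte's place values, from the 2**7 bit down to the 2**0 bit:
print([2**k for k in range(7, -1, -1)])        # [128, 64, 32, 16, 8, 4, 2, 1]
print(int("00000000", 2), int("11111111", 2))  # 0 255
```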
Converting decimal to binary:

Input: a positive integer N
1. Find the largest k so that 2ᵏ ≤ N:
     set p = 1 and k = 0
     while p ≤ N/2:
         set p = p×2 and k = k+1
2. For each power_of_2 ∈ {2ᵏ, 2ᵏ⁻¹, …, 1}:
     if N ≥ power_of_2 then:
         set N = N − power_of_2 and output 1
     else:
         output 0
3. Stop.

PRACTICE: convert 11011100 to decimal, and 93 and 42 to binary.
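A direct Python translation of the algorithm above (the function name to_binary is my own; the slides give only pseudocode):

```python
def to_binary(n):
    """Convert a positive integer n to its bit string, following the slide's steps."""
    # Step 1: find the largest k with 2**k <= n.
    p, k = 1, 0
    while p <= n // 2:
        p, k = p * 2, k + 1
    # Step 2: walk the powers of 2 from 2**k down to 1,
    # subtracting each one that fits and emitting one bit per power.
    bits = []
    for power_of_2 in (2**i for i in range(k, -1, -1)):
        if n >= power_of_2:
            n = n - power_of_2
            bits.append("1")
        else:
            bits.append("0")
    return "".join(bits)

print(to_binary(93))  # 1011101
print(to_binary(42))  # 101010
```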
Hexadecimal
Base 16 digits: 0,1,2,…,9,A,B,C,D,E,F
0x37 = 55₁₀
0x1A = 26₁₀
0xC2 = 194₁₀
How many bits for a hex digit? 4 (a nibble)
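Python's built-in conversions confirm the three values above (a sketch, not part of the lecture):

```python
# Hex-to-decimal, matching the slide's three examples.
print(int("37", 16), int("1A", 16), int("C2", 16))  # 55 26 194

# One hex digit is exactly one nibble (4 bits):
print(format(0xF, "04b"))  # 1111
print(format(0xA, "04b"))  # 1010
```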
PRACTICE: fill in the missing representations:

Hex     Binary      Decimal
0x2C    ?           ?
?       10100011    ?
?       ?           217
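To check your answers afterward, Python's built-ins print every representation of each practice value (a sketch; note it spoils the exercise):

```python
# Each practice value, shown in all three bases.
for n in (0x2C, 0b10100011, 217):
    print(hex(n), bin(n), n)
```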
ASCII
One letter = one byte.
“MINNESOTA” ⇒ 0x4D494E4E45534F5441
Moby Dick = 1,255,836 bytes
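A short Python sketch (not from the slides) that reproduces the MINNESOTA byte string:

```python
# Each ASCII character encodes to exactly one byte; .hex() shows the byte values.
word = "MINNESOTA"
print(word.encode("ascii").hex().upper())  # 4D494E4E45534F5441
print(ord("M"), hex(ord("M")))             # 77 0x4d  (the byte for 'M')
print(len(word.encode("ascii")))           # 9 letters -> 9 bytes
```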
Binary addition, with the carry bits written above the addends:

 1111    (carries)
  1011
+ 0111
------
 10010
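The same sum checked in Python, once with integer arithmetic and once bit by bit with an explicit carry (a sketch, not part of the slides):

```python
# Whole-number check: 1011 (11) + 0111 (7) = 10010 (18).
print(bin(0b1011 + 0b0111))  # 0b10010

# Digit-by-digit addition, tracking the carry exactly as done by hand:
a, b, carry, out = "1011", "0111", 0, []
for bit_a, bit_b in zip(reversed(a), reversed(b)):
    s = int(bit_a) + int(bit_b) + carry
    out.append(str(s % 2))  # the sum bit for this column
    carry = s // 2          # the carry into the next column
if carry:
    out.append("1")
print("".join(reversed(out)))  # 10010
```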
http://cs1001.us/ Please read Chapter 4.2.2 for Wednesday’s lecture.