
Multimedia Systems Lecture 6 – Basics of Compression

Presentation Transcript


  1. Multimedia Systems Lecture 6 – Basics of Compression

  2. Broad Classification • Entropy Coding (statistical) • lossless; independent of data characteristics • (e.g. RLE, Huffman, LZW, Arithmetic coding) • Source Coding • lossy; may consider semantics of the data • depends on characteristics of the data • (e.g. DCT, DPCM, ADPCM, color model transform) • Hybrid Coding (used by most multimedia systems) • combine entropy with source encoding • (e.g., JPEG-2000, MPEG-2, MPEG-4, MPEG-7)

  3. Data Compression • Branch of information theory • minimize amount of information to be transmitted • Transform a sequence of characters into a new string of bits • same information content • length as short as possible

  4. Concepts • Coding (the code) maps source messages from alphabet (A) into code words (B) • Source message (symbol) is basic unit into which a string is partitioned • can be a single letter or a string of letters • EXAMPLE: aabbbccccdddddeeeeeefffffffgggggggg • A = {a, b, c, d, e, f, g, space} • B = {0, 1}

  5. Taxonomy of Codes • Block-block • source messages and code words of fixed length; e.g., ASCII • Block-variable • source message fixed, code words variable; e.g., Huffman coding • Variable-block • source variable, code word fixed; e.g., RLE, LZW • Variable-variable • source variable, code words variable; e.g., Arithmetic

  6. Example of Block-Block • Coding “aabbbccccdddddeeeeeefffffffgggggggg” • 40 characters in total (the probability table on the next slide counts 5 spaces in addition to the 35 letters shown)

  7. Probability of occurrence and code words:

  source message   probability   codeword
  a                2/40          1001
  b                3/40          1000
  c                4/40          011
  d                5/40          010
  e                6/40          111
  f                7/40          110
  g                8/40          00
  space            5/40          101

  8. Example of Variable-Variable • Coding “aabbbccccdddddeeeeeefffffffgggggggg” with the variable-length code words of Table 1: {0, 1, 10, 11, 100, 101, 110} (see slide 17)

  9. Static Codes • Mapping is fixed before transmission • a message is represented by the same code word every time it appears in the ensemble • Huffman coding is an example • Better for independent sequences • probabilities of symbol occurrences must be known in advance

  10. Dynamic Codes • Mapping changes over time • also referred to as adaptive coding. • Attempts to exploit locality of reference • periodic, frequent occurrences of messages • dynamic Huffman is an example • Hybrids? • build set of codes, select based on input

  11. Traditional Evaluation Criteria • Algorithm complexity • running time • Amount of compression • redundancy • compression ratio • How to measure?

  12. Measure of Information • Consider symbols si and the probability of occurrence of each symbol p(si) • In case of fixed-length coding, the smallest number of bits per symbol needed is • L ≥ log2(N) bits per symbol • Example: a message with 5 symbols needs 3 bits per symbol (L ≥ log2 5 ≈ 2.32)

  13. Variable-Length Coding – Entropy • What is the minimum number of bits per symbol? • Answer: Shannon’s result – the theoretical minimum average number of bits per code word is known as Entropy (H)

  14. Entropy (Theoretical Limit) • H = −Σ p(si) * log2 p(si) • H = −(.25 * log2 .25 + .30 * log2 .30 + .12 * log2 .12 + .15 * log2 .15 + .18 * log2 .18) • H = 2.24 bits

  15. Average Codeword Length • L = Σ p(si) * (length of the code word for si) • L = .25(2) + .30(2) + .12(3) + .15(3) + .18(2) • L = 2.27 bits

  16. Example (highly skewed probabilities) • H = −(.01 * log2 .01 + .99 * log2 .99) ≈ 0.08 bits • L = .01(1) + .99(1) = 1 bit
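
To make the three calculations above concrete, here is a small Python sketch (helper names such as entropy and average_code_length are illustrative, not from the slides):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2 p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def average_code_length(probs, lengths):
    """Average code-word length: L = sum(p * length)."""
    return sum(p * n for p, n in zip(probs, lengths))

# Five-symbol example from slides 14-15
probs, lengths = [0.25, 0.30, 0.12, 0.15, 0.18], [2, 2, 3, 3, 2]
print(round(entropy(probs), 2))                        # 2.24 bits
print(round(average_code_length(probs, lengths), 2))   # 2.27 bits

# Highly skewed two-symbol example from slide 16
print(round(entropy([0.01, 0.99]), 2))                 # 0.08 bits
print(average_code_length([0.01, 0.99], [1, 1]))       # 1.0 bit
```

The skewed case shows why per-symbol codes can be wasteful: entropy says about 0.08 bits per symbol would suffice on average, but any code that assigns a separate code word to each symbol must spend at least 1 bit.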

  17. Compression Ratio • Compare the average message length and the average codeword length • average L(message) / average L(codeword) • Example: {aa, bbb, cccc, ddddd, eeeeee, fffffff, gggggggg} • Average message length is 5 characters (the 7 messages total 35 characters) • If we use the code words from Table 1, {0, 1, 10, 11, 100, 101, 110} • Average codeword length is 2.14 bits (15 bits / 7 code words) • Compression ratio: 5/2.14 = 2.336 (see the sketch below)
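
A quick check of the compression-ratio arithmetic, using the Table 1 code words quoted above (a sketch; the slide rounds the average codeword length to 2.14 before dividing):

```python
# The seven source messages and the Table 1 code words listed on this slide
messages  = ["aa", "bbb", "cccc", "ddddd", "eeeeee", "fffffff", "gggggggg"]
codewords = ["0", "1", "10", "11", "100", "101", "110"]

avg_message  = sum(len(m) for m in messages) / len(messages)    # 5.0 characters
avg_codeword = sum(len(c) for c in codewords) / len(codewords)  # 15/7 ~ 2.14 bits

print(avg_message / avg_codeword)   # ~2.33 (the slide computes 5/2.14 = 2.336)
```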

  18. Symmetry • Symmetric compression • requires same time for encoding and decoding • used for live mode applications (teleconference) • Asymmetric compression • performed once when enough time is available • decompression performed frequently, must be fast • used for retrieval mode applications (e.g., an interactive CD-ROM)

  19. Entropy Coding Algorithms (Content-Dependent Coding)

  20. 1- Run-Length Encoding (RLE) • Replaces a sequence of identical consecutive bytes with the number of occurrences • The number of occurrences is indicated by a special flag (e.g., !) • Example: • abcccccccccdeffffggg (20 bytes) • abc!9def!4ggg (13 bytes) • Then the compression ratio = 20/13 ≈ 1.54 (a sketch follows below)
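
A minimal sketch of the flag-based RLE described above. The min_run threshold of 4 is an assumption chosen so that the output matches the slide's example (short runs are cheaper to copy literally than to flag):

```python
def rle_encode(text, flag="!", min_run=4):
    """Flag-based RLE: a run of min_run or more bytes becomes <symbol><flag><count>."""
    out, i = [], 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1                      # scan to the end of the current run
        run = j - i
        out.append(f"{text[i]}{flag}{run}" if run >= min_run else text[i] * run)
        i = j
    return "".join(out)

original = "abcccccccccdeffffggg"             # 20 bytes
encoded = rle_encode(original)                # "abc!9def!4ggg", 13 bytes
print(encoded, round(len(original) / len(encoded), 2))   # compression ratio ~1.54
```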

  21. Variations of RLE (zero-suppression technique) • Assumes that only one symbol appears often (the blank) • Replace a blank sequence by an M-byte and a byte with the number of blanks in the sequence • Example: M3, M4, M14, …

  22. 2- Huffman Encoding • Statistical encoding, depends on the occurrence frequency of single characters or sequences of data bytes • To determine the Huffman code, it is useful to construct a binary tree • Leaves are the characters to be encoded • Nodes carry the occurrence probabilities of the characters belonging to their subtree • Example: What does a Huffman code look like for symbols with statistical occurrence probabilities: • P(A) = 0.16, P(B) = 0.51, P(C) = 0.09, P(D) = 0.13, P(E) = 0.11 ?

  23. Huffman Encoding (Example) Step 1: Sort all symbols according to their probabilities (left to right, from smallest to largest); these are the leaves of the Huffman tree: P(C) = 0.09, P(E) = 0.11, P(D) = 0.13, P(A) = 0.16, P(B) = 0.51

  24. Huffman Encoding (Example) Step 2: Build the binary tree from the leaves up. Policy: always combine the two nodes with the smallest probabilities (e.g., P(CE) = 0.20 and P(DA) = 0.29 were both smaller than P(B) = 0.51, so those two were combined first). Resulting nodes: P(CE) = 0.20, P(DA) = 0.29, P(CEDA) = 0.49, P(CEDAB) = 1

  25. Huffman Encoding (Example) Step 3: Label each left branch of the tree with 0 and each right branch with 1. [Tree: root P(CEDAB) = 1 splits into P(CEDA) = 0.49 (branch 0) and P(B) = 0.51 (branch 1); P(CEDA) splits into P(CE) = 0.20 (0) and P(DA) = 0.29 (1); P(CE) splits into P(C) = 0.09 (0) and P(E) = 0.11 (1); P(DA) splits into P(D) = 0.13 (0) and P(A) = 0.16 (1).]

  26. Huffman Encoding (Example) Step 4: Create the Huffman code by reading the branch labels from the root down to each leaf: Symbol A = 011, Symbol B = 1, Symbol C = 000, Symbol D = 010, Symbol E = 001 (see the sketch below)
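
The four steps can be written compactly with a min-heap (a sketch, not the lecture's own code; huffman_codes is an illustrative name). With the tie-breaking used here it reproduces the codes of Step 4:

```python
import heapq

def huffman_codes(probabilities):
    """Build a Huffman tree bottom-up with a min-heap and read off the prefix codes."""
    # Heap entries: (probability, tiebreaker, tree); a tree is a symbol or a (left, right) pair.
    heap = [(p, i, sym) for i, (sym, p) in enumerate(sorted(probabilities.items()))]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)     # always combine the two
        p2, _, right = heapq.heappop(heap)    # smallest remaining nodes
        heapq.heappush(heap, (p1 + p2, next_id, (left, right)))
        next_id += 1

    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):           # internal node
            walk(node[0], prefix + "0")       # left branch labelled 0
            walk(node[1], prefix + "1")       # right branch labelled 1
        else:                                 # leaf: a symbol
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

print(huffman_codes({"A": 0.16, "B": 0.51, "C": 0.09, "D": 0.13, "E": 0.11}))
# C -> 000, E -> 001, D -> 010, A -> 011, B -> 1  (matches Step 4 above)
```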

  27. Huffman Exercise 1 • Construct the Huffman coding tree? • Solution (shown on the slide)

  28. Huffman Exercise 2 • Compute Entropy (H) • Build Huffman tree • Compute average code length • Code “BCCADE”

  29. Solution • Compute Entropy (H) • H = 2.1 bits • Build Huffman tree • Compute average code length • L = 2.2 bits • Code • “BCCADE” => 01011000011001

  30. Converting Decimal Fractions to Binary • Step-by-step method for converting the decimal value .625 to a binary representation. Step 1: Begin with the decimal fraction and multiply by 2. The whole-number part of the result is the first binary digit to the right of the point. Because .625 x 2 = 1.25, the first binary digit to the right of the point is a 1. So far, we have .625 = .1??? . . . (base 2). Step 2: Next we disregard the whole-number part of the previous result (the 1 in this case) and multiply by 2 once again. The whole-number part of this new result is the second binary digit to the right of the point. We continue this process until we get a zero as our decimal part or until we recognize an infinite repeating pattern. Because .25 x 2 = 0.50, the second binary digit to the right of the point is a 0. So far, we have .625 = .10?? . . . (base 2).

  31. Converting Decimal Fractions to Binary Step 3: Disregarding the whole-number part of the previous result (that result was .50, so there is actually no whole-number part to disregard in this case), we multiply by 2 once again. The whole-number part of the result is now the next binary digit to the right of the point. Because .50 x 2 = 1.00, the third binary digit to the right of the point is a 1. So now we have .625 = .101?? . . . (base 2). Step 4: In fact, we do not need a Step 4. We are finished in Step 3, because we had 0 as the fractional part of our result there. Hence the representation is .625 = .101 (base 2). A short sketch of this procedure follows below.
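
The multiply-by-2 procedure above as a short Python sketch (fraction_to_binary is an illustrative name; max_bits guards against infinitely repeating patterns):

```python
def fraction_to_binary(value, max_bits=16):
    """Repeatedly multiply by 2; each whole-number part is the next binary digit."""
    bits = []
    while value > 0 and len(bits) < max_bits:
        value *= 2
        bit = int(value)        # the whole-number part becomes the next digit
        bits.append(str(bit))
        value -= bit            # keep only the fractional part and continue
    return "0." + "".join(bits)

print(fraction_to_binary(0.625))    # 0.101
print(fraction_to_binary(0.0625))   # 0.0001
```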

  32. 3- Arithmetic Coding • Like Huffman coding, it is an optimal algorithm with respect to the compression ratio. • Arithmetic coding typically achieves a better compression ratio than Huffman coding, because it produces a single code for the whole message rather than several separate code words.

  33. Arithmetic Coding • A message is encoded as a real number in an interval from zero to one, [0, 1); the interval depends on the probabilities of the symbols. • The mapping is “built” as each new symbol arrives. • As each symbol is processed, find a new {upper and lower limit} for the interval. • Each real number (< 1) is represented as a binary fraction: • 0.5 = 2^-1 (binary fraction 0.1); • 0.25 = 2^-2 (binary fraction 0.01); • 0.125 = 2^-3 (binary fraction 0.001); • 0.0625 = 2^-4 (binary fraction 0.0001); …

  34. Arithmetic coding algorithm 1. We begin with a “current interval” [L, H) initialized to [0, 1). 2. For each symbol, we perform two steps: (a) We subdivide the current interval into subintervals, one for each possible alphabet symbol. The size of a symbol's subinterval is proportional to the estimated probability that the symbol will be the next symbol in the file, according to the model of the input. (b) We select the subinterval corresponding to the symbol that actually occurs next in the file, and make it the new current interval. 3. We output enough bits to distinguish the final current interval from all other possible final intervals.

  35. Algorithm (cont.) • Computation of the sub-interval corresponding to the ith symbol that occurs:
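
A standard way to write this update (stated here as a sketch; the notation CumP(i) for the cumulative probability up to and including the ith symbol is not from the slide): if the current interval is [L, H) with width W = H − L, and the ith symbol occupies the cumulative-probability range [CumP(i−1), CumP(i)), then the new interval is

L_new = L + W * CumP(i−1)
H_new = L + W * CumP(i)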

  36. Example: encode “eaii!”

  37. [Figure: how the interval narrows after seeing each symbol of “eaii!”. Symbol intervals: a [0, 0.2), e [0.2, 0.5), i [0.5, 0.6), o [0.6, 0.8), u [0.8, 0.9), ! [0.9, 1). After e: [0.2, 0.5); after a: [0.2, 0.26); after i: [0.23, 0.236); after i: [0.233, 0.2336); after !: [0.23354, 0.2336).]

  38. Example (cont.) • The size of the final range is • 0.2336 − 0.23354 = 0.00006, so any number in this interval can be used to represent the message (e.g., 0.23354) • that is also exactly the product of the probabilities of • the five symbols in the message “eaii!”: • (0.3) * (0.2) * (0.1) * (0.1) * (0.1) = 0.00006 • it takes 5 decimal digits to encode the message (a sketch of the interval computation follows below)
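
A plain floating-point sketch of this interval narrowing (a real coder uses renormalized integer arithmetic to avoid precision problems; the model dictionary below encodes the same symbol intervals as the figure on slide 37):

```python
def arithmetic_encode(message, model):
    """Narrow the interval [low, high) once per symbol; return the final interval."""
    low, high = 0.0, 1.0
    for symbol in message:
        width = high - low
        sym_low, sym_high = model[symbol]       # the symbol's own subinterval of [0, 1)
        low, high = low + width * sym_low, low + width * sym_high
    return low, high

model = {"a": (0.0, 0.2), "e": (0.2, 0.5), "i": (0.5, 0.6),
         "o": (0.6, 0.8), "u": (0.8, 0.9), "!": (0.9, 1.0)}

low, high = arithmetic_encode("eaii!", model)
print(low, high)       # ~0.23354, ~0.2336
print(high - low)      # ~0.00006 = 0.3 * 0.2 * 0.1 * 0.1 * 0.1
```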

  39. Example 2 • A source outputs symbols {A, B, C, D, E, F, $}. $ is the termination symbol. Their probabilities are: P(A) = 0.2, P(B) = 0.1, P(C) = 0.2, P(D) = 0.05, P(E) = 0.3, P(F) = 0.05, P($) = 0.1 • Cumulative intervals: A [0, 0.2), B [0.2, 0.3), C [0.3, 0.5), D [0.5, 0.55), E [0.55, 0.85), F [0.85, 0.9), $ [0.9, 1)

  40. Example 2 (cont.) Now we have the input string “C A E $”: • after C the interval is [0.3, 0.5) • after A: [0.3, 0.34) • after E: [0.322, 0.334) • after $: [0.3328, 0.334) • The value 0.333 lies in the final interval, and 0.333 ≈ 0.0101010101 (base 2), so the code is: 0101010101 (a decoding sketch follows below)
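
Decoding runs the same process in reverse: repeatedly find which symbol's subinterval contains the received value, output that symbol, and rescale. A sketch for Example 2 (arithmetic_decode is an illustrative helper; again plain floating point, with $ as the termination symbol):

```python
def arithmetic_decode(value, model, stop="$"):
    """Emit the symbol whose subinterval contains the value, rescale, repeat until stop."""
    low, high = 0.0, 1.0
    decoded = ""
    while not decoded.endswith(stop):
        scaled = (value - low) / (high - low)     # position of value inside [low, high)
        for symbol, (sym_low, sym_high) in model.items():
            if sym_low <= scaled < sym_high:
                decoded += symbol
                width = high - low
                low, high = low + width * sym_low, low + width * sym_high
                break
    return decoded

model = {"A": (0.0, 0.2), "B": (0.2, 0.3), "C": (0.3, 0.5), "D": (0.5, 0.55),
         "E": (0.55, 0.85), "F": (0.85, 0.9), "$": (0.9, 1.0)}

print(arithmetic_decode(0.333, model))   # CAE$
```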

  41. Adaptive Encoding (Adaptive Huffman) • The Huffman code changes according to the usage of new words, and new probabilities can be assigned to individual letters. • If the Huffman tables adapt, they must be transmitted to the receiver side.

  42. Hybrid Coding (Usage of RLE/Huffman, Arithmetic Coding) [Diagram: RLE, Huffman and Arithmetic Coding as the entropy-coding stage.]

  43. Compression Utilities and Formats • Compression tool examples: • winzip, pkzip, compress, gzip • General compression formats: • .zip, .gz • Common image compression formats: JPEG, JPEG 2000, BMP, GIF, PCX, PNG, TGA, TIFF, WMP

  44. Compression Utilities and Formats • Common audio (sound) compression formats: • MPEG-1 Layer III (known as MP3), RealAudio (RA, RAM, RP), WMA, AIFF, WAVE • Common video (sound and image) compression formats: • MPEG-1, MPEG-2, MPEG-4, DivX, Quicktime (MOV), RealVideo (RM), Windows Media Video (WMV), Video for Windows (AVI), Flash video (FLV)

  45. Summary • Important Lossless (Entropy) Coders • RLE, Huffman Coding and Arithmetic Coding • Important Lossy (Source) Coders • Quantization • Differential PCM (DPCM) – calculate the difference from previous values – one then has fewer, smaller values to encode (see the sketch below) • Loss occurs during quantization of the sample values
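
A minimal sketch of the DPCM idea mentioned above (dpcm_encode/dpcm_decode are illustrative names). Quantization of the differences, where the actual loss would occur, is deliberately omitted, so this version is still lossless:

```python
def dpcm_encode(samples):
    """Keep the first sample, then store only the difference from the previous sample."""
    diffs = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        diffs.append(cur - prev)
    return diffs

def dpcm_decode(diffs):
    """Rebuild the signal by accumulating the differences."""
    samples = [diffs[0]]
    for d in diffs[1:]:
        samples.append(samples[-1] + d)
    return samples

signal = [100, 102, 105, 105, 103, 101]
print(dpcm_encode(signal))                # [100, 2, 3, 0, -2, -2] -- small values, cheaper to code
print(dpcm_decode(dpcm_encode(signal)))   # the original signal is recovered exactly
```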

  46. The End
