Lossless Image Compression

  1. Lossless Image Compression
  • Recall: run length coding of binary and graphic images
  • Why does it not work for gray-scale images?
  • Image modeling revisited
  • Predictive coding of gray-scale images
  • 1D predictive coding: DPCM
  • 2D fixed and adaptive prediction
  • Applications: Lossless JPEG and JPEG-LS

  2. Lossless Image Compression
  • No information loss – the decoded image is mathematically identical to the original image
  • For sensitive data such as document or medical images, any information loss is unacceptable
  • For others, such as photographic images, we only care about the subjective quality of the decoded images (not exact fidelity to the original)

  3. Data Compression Paradigm
  [Block diagram: discrete source X -> source modeling -> Y -> entropy coding -> binary bit stream, with probability estimation supplying P(Y) to the entropy coder]
  • Probabilities can be estimated by counting relative frequencies, either online or offline
  • The art of data compression is the art of source modeling

  4. Recall: Run Length Coding
  [Block diagram: discrete source X -> transformation by run-length counting -> Y -> Huffman coding -> binary bit stream, with probability estimation supplying P(Y)]
  • Y is the sequence of run-lengths, from which X can be recovered losslessly

  5. Image Example
  An 8x8 block of gray-scale values (rows x columns):

    156 159 158 155 158 156 159 158
    160 154 157 158 157 159 158 158
    156 159 158 155 158 156 159 158
    160 154 157 158 157 159 158 158
    156 153 155 159 159 155 156 155
    155 155 155 157 156 159 152 158
    156 153 157 156 153 155 154 155
    159 159 156 158 156 159 157 161

  Runmax = 4
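
To make the run-length transformation concrete, here is a minimal Python sketch (function names are illustrative, not from the course) applied to the sixth row of the block above. Note how almost every run has length 1, which previews the next two slides.

```python
# Minimal run-length counting sketch: a row becomes (value, run_length) pairs,
# from which the row can be recovered losslessly.

def run_length_encode(row):
    """Return a list of (value, run_length) pairs for a 1D sequence."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((v, 1))               # start a new run
    return runs

def run_length_decode(runs):
    """Recover the original sequence from (value, run_length) pairs."""
    out = []
    for v, length in runs:
        out.extend([v] * length)
    return out

if __name__ == "__main__":
    row = [155, 155, 155, 157, 156, 159, 152, 158]   # sixth row of the block above
    runs = run_length_encode(row)
    print(runs)   # [(155, 3), (157, 1), (156, 1), (159, 1), (152, 1), (158, 1)]
    assert run_length_decode(runs) == row
```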

  6. Why Short Runs?

  7. Why is RLC bad for gray-scale images?
  • Gray-scale (also called “photographic”) images have hundreds of different gray levels
  • Since gray-scale images are acquired from the real world, noise contamination is inevitable
  • You simply cannot freely RUN in a gray-scale image

  8. Source Modeling Techniques
  • Prediction: predict the future based on the causal past
  • Transformation: transform the source into an equivalent yet more convenient representation
  • Pattern matching: identify and represent repeated patterns

  9. The Idea of Prediction
  • Remarkably simple: just follow the trend
  • Example I: X is a normal person’s temperature variation through the day
  • Example II: X is the intensity values of the first row of the cameraman image
  • Markovian school (short memory): prediction does not rely on data from long ago but on the most recent samples (e.g., your temperature in the evening is more correlated with that in the afternoon than with that in the morning)

  10. 1D Predictive Coding
  • 1st-order linear prediction: original samples x1, x2, …, xn-1, xn, xn+1, …, xN are mapped to prediction residues y1, y2, …, yN
  • Encoding: y1 = x1 (initialization); yn = xn - xn-1, n = 2, …, N (prediction)
  • Decoding: x1 = y1 (initialization); xn = yn + xn-1, n = 2, …, N (prediction)
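
A minimal NumPy sketch of the encoding and decoding rules above (function names are illustrative, not from the course); the test sequence is the one used in the numerical example on the next slide.

```python
import numpy as np

def dpcm_encode(x):
    """First-order prediction: y[0] = x[0], y[n] = x[n] - x[n-1]."""
    x = np.asarray(x, dtype=np.int32)
    y = np.empty_like(x)
    y[0] = x[0]                # initialization
    y[1:] = x[1:] - x[:-1]     # prediction residues
    return y

def dpcm_decode(y):
    """Invert the prediction: x[0] = y[0], x[n] = y[n] + x[n-1]."""
    y = np.asarray(y, dtype=np.int32)
    return np.cumsum(y)        # the running sum undoes the differencing exactly

if __name__ == "__main__":
    x = [90, 92, 91, 93, 93, 95]
    y = dpcm_encode(x)
    print(y)                               # [90  2 -1  2  0  2]
    assert np.array_equal(dpcm_decode(y), x)
```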

  11. Numerical Example
  original samples:     90  92  91  93  93  95
  prediction residues:  90   2  -1   2   0   2   (each residue a - b: current sample minus previous sample)
  decoded samples:      90  92  91  93  93  95   (each sample a + b: residue plus previous decoded sample)

  12. Image Example (take one row)
  Original row signal x (left) and its histogram (right); H(X) = 6.56 bpp

  13. Source Entropy Revisited
  How to calculate the “entropy” of a given sequence (or image)?
  • Obtain the histogram by relative frequency counting
  • Normalize the histogram to obtain probabilities Pk = Prob(X = k), k = 0, …, 255
  • Plug the probabilities into the entropy formula
  You will be asked to implement this in the assignment (a sketch follows below)
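
A possible NumPy implementation of this recipe (the function name and the 8-bit assumption are mine); it estimates the entropy of an image or row under the i.i.d. model discussed on the next slide.

```python
import numpy as np

def empirical_entropy(img, nbins=256):
    """Entropy (bits/pixel) of an 8-bit image under an i.i.d. model:
    count a histogram, normalize to probabilities, apply -sum(p * log2(p))."""
    img = np.asarray(img).ravel()
    counts = np.bincount(img.astype(np.uint8), minlength=nbins)
    p = counts / counts.sum()          # P_k = Prob(X = k), k = 0, ..., 255
    p = p[p > 0]                       # 0 * log(0) is taken to be 0
    return float(-np.sum(p * np.log2(p)))

if __name__ == "__main__":
    x = np.array([156, 159, 158, 155, 158, 156, 159, 158], dtype=np.uint8)
    print(round(empirical_entropy(x), 2))   # entropy estimate for this tiny example row
```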

  14. Cautionary Notes
  • The entropy value calculated on the previous slide must be understood as the result of modeling the image as an independent, identically distributed (i.i.d.) random variable
  • It does not take spatial correlation or spatially varying characteristics into account
  • The true entropy is smaller!

  15. Image Example (cont’d)
  Prediction residue signal y (left) and its histogram (right); H(Y) = 4.80 bpp

  16. Interpretation
  • H(Y) < H(X) justifies the role of prediction (intuitively, it decorrelates the signal)
  • Similarly, H(Y) is the result of modeling the residue image as an independent, identically distributed (i.i.d.) random variable
  • It is an improved model compared with X, thanks to the prediction
  • The true entropy is smaller still!

  17. High-order 1D Predictive Coding
  • k-th order linear prediction maps original samples x1, x2, …, xN to residues y1, y2, …, yN
  • Encoding: y1 = x1, y2 = x2, …, yk = xk (initialization); yn = xn - (a1·xn-1 + … + ak·xn-k), n = k+1, …, N (prediction)
  • Decoding: x1 = y1, x2 = y2, …, xk = yk (initialization); xn = yn + (a1·xn-1 + … + ak·xn-k), n = k+1, …, N (prediction)
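
A sketch of k-th order linear prediction, assuming fixed, known coefficients a1, …, ak (the slide does not specify how they are chosen); with a = [1, 0] it reduces to the 1st-order case above.

```python
import numpy as np

def predict_encode(x, a):
    """k-th order linear prediction with coefficients a = [a1, ..., ak].
    The first k samples are passed through unchanged (initialization)."""
    x = np.asarray(x, dtype=float)
    k = len(a)
    y = x.copy()
    for n in range(k, len(x)):
        pred = sum(a[i] * x[n - 1 - i] for i in range(k))
        y[n] = x[n] - pred
    return y

def predict_decode(y, a):
    """Invert the prediction: rebuild x sample by sample with the same coefficients."""
    y = np.asarray(y, dtype=float)
    k = len(a)
    x = y.copy()
    for n in range(k, len(y)):
        pred = sum(a[i] * x[n - 1 - i] for i in range(k))
        x[n] = y[n] + pred
    return x

if __name__ == "__main__":
    x = [90, 92, 91, 93, 93, 95, 94, 96]
    a = [1.0, 0.0]                 # k = 2; these weights reduce to 1st-order prediction
    y = predict_encode(x, a)
    assert np.allclose(predict_decode(y, a), x)
```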

  18. Why High-order?
  • By looking at more past samples, we can make a better prediction of the current one
  • Compare guessing the next letter after “c_”, “ic_”, “dic_” and “predic_” (the longer the context, the easier the prediction)
  • It is a tradeoff between performance and complexity
  • The performance gain quickly diminishes as the order increases
  • The optimal order is often signal-dependent

  19. 1D Predictive Coding Summary
  [Block diagram: discrete source X -> linear prediction -> Y -> entropy coding -> binary bit stream, with probability estimation supplying P(Y)]
  The prediction residue sequence Y usually contains less uncertainty (entropy) than the original sequence X

  20. From 1D to 2D
  • In 1D, the causal past of X(n) is simply the samples before position n; everything after n is the future
  • In 2D, a scanning order must be imposed to define the causal past, e.g., raster scanning or zigzag scanning

  21. 2D Predictive Coding
  Under the raster scanning order, the causal past of Xm,n is the causal half-plane: all pixels in the rows above, plus the pixels to the left in the current row

  22. Ordering Causal Neighbors
  The causal neighbors of Xm,n are ordered by Euclidean distance (Xk denotes the k-th nearest causal neighbor):

    6 4 3 2
    5 1 Xm,n

  The prediction residue is obtained by subtracting from Xm,n a prediction built from these causal neighbors

  23. Lossless JPEG
  Causal neighbors of the current pixel x:   nw  n
                                             w   x

  #   Predictor         Notes
  1   w                 horizontal
  2   n                 vertical
  3   nw                diagonal
  4   n + w - nw        3rd-order
  5   w + (n - nw)/2    3rd-order
  6   n + (w - nw)/2    3rd-order
  7   (n + w)/2         2nd-order
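
A sketch of the seven predictors from the table, assuming the usual integer-arithmetic convention for the divisions by 2 (check the standard if bit-exact behavior matters); the function name is mine.

```python
def ljpeg_predict(n, w, nw, mode):
    """The seven prediction modes from the table above
    (n = north, w = west, nw = north-west neighbor)."""
    if mode == 1:   return w                    # horizontal
    if mode == 2:   return n                    # vertical
    if mode == 3:   return nw                   # diagonal
    if mode == 4:   return n + w - nw           # 3rd-order (planar)
    if mode == 5:   return w + (n - nw) // 2    # 3rd-order
    if mode == 6:   return n + (w - nw) // 2    # 3rd-order
    if mode == 7:   return (n + w) // 2         # 2nd-order (average)
    raise ValueError("mode must be 1..7")

# Example: n = 100, w = 50, nw = 100 (the H_edge configuration used later in the slides)
print([ljpeg_predict(100, 50, 100, m) for m in range(1, 8)])
```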

  24. Numerical Examples
  1D (1st-order prediction, residue a - b):
    X = [156 159 158 155]   ->   Y = [156 3 -1 -3]

  2D horizontal predictor (initialization: no prediction applied to the first column):
    X = 156 159 158 155        Y = 156  3 -1 -3
        160 154 157 158            160 -6  3  1
        156 159 158 155            156  3 -1 -3
        160 154 157 158            160 -6  3  1

  Note: 2D horizontal prediction can be viewed as the vector case of 1D prediction applied to each row
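
A NumPy sketch of the 2D horizontal predictor and its inverse, checked against the 4x4 example above (function names are mine).

```python
import numpy as np

def horizontal_predict(X):
    """Horizontal prediction: the first column is kept as-is (no prediction);
    every other pixel is predicted by its left neighbor (predictor #1)."""
    X = np.asarray(X, dtype=np.int32)
    Y = X.copy()
    Y[:, 1:] = X[:, 1:] - X[:, :-1]
    return Y

def horizontal_reconstruct(Y):
    """Invert horizontal prediction by a running sum along each row."""
    return np.cumsum(np.asarray(Y, dtype=np.int32), axis=1)

if __name__ == "__main__":
    X = np.array([[156, 159, 158, 155],
                  [160, 154, 157, 158],
                  [156, 159, 158, 155],
                  [160, 154, 157, 158]])
    Y = horizontal_predict(X)
    print(Y)                                        # matches the residue matrix above
    assert np.array_equal(horizontal_reconstruct(Y), X)
```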

  25. Numerical Examples (cont’d)
  2D vertical predictor (initialization: no prediction applied to the first row):
    X = 156 159 158 155        Y = 156 159 158 155
        160 154 157 158              4  -5  -1   3
        156 159 158 155             -4   5   1  -3
        160 154 157 158              4  -5  -1   3

  Note: 2D vertical prediction can be viewed as the vector case of 1D prediction applied to each column
  Q: Given a function that performs horizontal prediction, can you use it to implement vertical prediction?
  A: Apply horizontal prediction to the transpose of the image, then transpose the prediction residue back (see the sketch below)
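
The transpose trick in code, reusing the horizontal predictor from the previous sketch (repeated here so the snippet runs on its own).

```python
import numpy as np

def horizontal_predict(X):
    """Same horizontal predictor as in the previous sketch."""
    X = np.asarray(X, dtype=np.int32)
    Y = X.copy()
    Y[:, 1:] = X[:, 1:] - X[:, :-1]
    return Y

def vertical_predict(X):
    """Vertical prediction via the transpose trick: transpose, predict
    horizontally, transpose the residues back."""
    return horizontal_predict(np.asarray(X).T).T

X = np.array([[156, 159, 158, 155],
              [160, 154, 157, 158],
              [156, 159, 158, 155],
              [160, 154, 157, 158]])
print(vertical_predict(X))
# [[156 159 158 155]
#  [  4  -5  -1   3]
#  [ -4   5   1  -3]
#  [  4  -5  -1   3]]
```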

  26. Image Examples
  Comparison of residue images generated by different predictors:
  • horizontal predictor: H(Y) = 5.05 bpp
  • vertical predictor: H(Y) = 4.67 bpp
  Q: Why does the vertical predictor outperform the horizontal predictor?

  27. Analysis with a Simplified Edge Model
  Horizontal edge (H_edge)        Vertical edge (V_edge)
    100 100                         100  50
     50  50                         100  50

  Causal neighbors:  nw  n
                     w   x

  • H_edge: horizontal predictor Y = 0, vertical predictor Y = 50
  • V_edge: horizontal predictor Y = 50, vertical predictor Y = 0
  Conclusion: when the direction of the predictor matches the direction of the edge, the prediction residues are small

  28. Horizontal vs. Vertical
  Do we see more vertical edges than horizontal edges in natural images? Maybe yes, but why?

  29. Importance of Adaptation
  • Wouldn’t it be nice if we could switch the direction of the predictor to locally match the edge direction?
  • The concept of adaptation was conceived several thousand years ago, in an ancient Chinese story about how to win a horse race:

            emperor   general
    good       90   >    80
    fair       70   >    60
    poor       50   >    40

  How to win? Racing class against class, the general loses every match; by adapting the matchups (his poor horse against the emperor's good one, good against fair, fair against poor) he wins two out of three.

  30. Median Edge Detection (MED) Prediction
  Causal neighbors:  nw  n
                     w   x
  Key: MED uses the median operator to adaptively select one of three candidates (predictors #1, #2 and #4 from slide 23) as the predicted value:
    predicted value = median(n, w, n + w - nw)
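
A one-line Python version of the MED predictor (function name is mine), checked on the H_edge and V_edge configurations used in these slides.

```python
import numpy as np

def med_predict(n, w, nw):
    """MED prediction as stated above: the predicted value is the median of
    the three candidates n, w and n + w - nw (predictors #2, #1 and #4)."""
    return int(np.median([n, w, n + w - nw]))

# H_edge (n=100, w=50, nw=100) and V_edge (n=50, w=100, nw=100):
print(med_predict(100, 50, 100), med_predict(50, 100, 100))   # 50 50
```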

  31. Another Way of Implementation
  The same MED predictor can be written with comparisons instead of a median:
    if nw >= max(n, w):       predicted value = min(n, w)
    else if nw <= min(n, w):  predicted value = max(n, w)
    else:                     predicted value = n + w - nw
  Q: Which implementation is faster? You need to find it out using MATLAB yourself

  32. Proof by Enumeration
  Case 1: nw > max(n, w)
  • If nw > n > w, then n - nw < 0 and w - nw < 0, so n + w - nw < w < n and median(n, w, n+w-nw) = min(n, w) = w
  • If nw > w > n, then n - nw < 0 and w - nw < 0, so n + w - nw < n < w and median(n, w, n+w-nw) = min(n, w) = n
  Case 2: nw < min(n, w)
  • If nw < n < w, then n - nw > 0 and w - nw > 0, so n + w - nw > w > n and median(n, w, n+w-nw) = max(n, w) = w
  • If nw < w < n, then n - nw > 0 and w - nw > 0, so n + w - nw > n > w and median(n, w, n+w-nw) = max(n, w) = n
  Case 3: n < nw < w or w < nw < n
  • n + w - nw also lies between n and w, so median(n, w, n+w-nw) = n + w - nw
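
The enumeration can also be checked mechanically; this sketch compares the median form and the branch form of MED over a small exhaustive grid of (n, w, nw) triples (function names are mine).

```python
def med_median(n, w, nw):
    """Median form: median of the three candidates."""
    return sorted([n, w, n + w - nw])[1]

def med_branch(n, w, nw):
    """Branch form from the previous slide."""
    if nw >= max(n, w):
        return min(n, w)       # nw is the largest of the three neighbors
    if nw <= min(n, w):
        return max(n, w)       # nw is the smallest of the three neighbors
    return n + w - nw          # otherwise: planar prediction (predictor #4)

# Exhaustive check over a small range instead of pencil-and-paper enumeration.
for n in range(32):
    for w in range(32):
        for nw in range(32):
            assert med_median(n, w, nw) == med_branch(n, w, nw)
print("median form and branch form agree on all tested triples")
```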

  33. Numerical Examples
    H_edge: 100 100      V_edge: 100  50
             50   x              100   x

  H_edge: n = 100, w = 50, nw = 100   ->   n + w - nw = 50, so MED predicts median(100, 50, 50) = 50
  V_edge: n = 50, w = 100, nw = 100   ->   n + w - nw = 50, so MED predicts median(50, 100, 50) = 50
  Note how we can get zero prediction residues regardless of the edge direction

  34. Image Example
  Fixed vertical predictor: H = 4.67 bpp; adaptive (MED) predictor: H = 4.55 bpp

  35. JPEG-LS (the new standard for lossless image compression)

  36. Summary of Lossless Image Compression
  • Importance of modeling the image source
  • Different classes of images need to be handled by different modeling techniques, e.g., RLC for binary/graphic images and prediction for photographic images
  • Importance of geometry
  • Images are two-dimensional signals
  • In the 2D world, issues such as scanning order and orientation are critical to modeling
