
JPEG2000


Presentation Transcript


  1. Image Processing seminar (2003) JPEG2000 The next-generation still-image compression standard Presented by: Eddie Zaslavsky

  2. Contents: • 1. Why another standard? • 2. JPEG2000 • 3. Examples • 4. Conclusions • 5. Add-on: EWZ algorithm

  3. Why another standard? • Low bit-rate compression: at low bit-rates (e.g. below 0.25 bpp for highly detailed gray-level images) the distortion in JPEG becomes unacceptable. • Lossless and lossy compression: need for a standard that provides lossless and lossy compression in one codestream. • Large images: JPEG cannot compress images larger than 64K×64K samples (65,535 × 65,535) without tiling.

  4. Why another standard? (cont'd) • Single decompression architecture: JPEG has 44 modes; many of them are application-specific and not used by the majority of JPEG decoders. • Transmission in noisy environments: in JPEG, quality suffers dramatically when bit errors are encountered. • Computer-generated imagery: JPEG is optimized for natural images and performs badly on computer-generated images. • Compound documents: JPEG fails to compress bi-level (text) imagery.

  5. JPEG2000 - Targets • Coding standard for: different types of still images (gray-level, color, ...) different characteristics (natural, scientific, ...) different imaging models (client/server, real-time,...) within a unified and integrated system. • This coding system is intended for: low bit-rate applications, exhibiting rate-distortion and subjective image quality performance superior to existing standards.

  6. JPEG2000 (encoder-decoder scheme)

  7. JPEG2000 - Overview • The source image is decomposed into components (up to 16384). • The image components are (optionally) decomposed into rectangular tiles. The tile-component is the basic unit of the original or reconstructed image. • A wavelet transform is applied to each tile, decomposing it into different resolution levels. • The decomposition levels are made up of subbands of coefficients that describe the frequency characteristics of local areas of the tile-components, rather than of the entire image component. • The subbands of coefficients are quantized and collected into rectangular arrays of code-blocks.

  8. JPEG2000 - Overview (cont'd) • The bit planes of the coefficients in a code block (i.e. the bits of equal significance across the coefficients in a code block) are entropy coded. • The encoding can be done in such a way that certain regions of interest (ROI) can be coded at a higher quality than the background. • Markers are added to the bit stream to allow for error resilience. • The code stream has a main header at the beginning that describes the original image and the various decomposition and coding styles that are used to locate, extract, decode and reconstruct the image with the desired resolution, fidelity, region of interest or other characteristics.

  9. Pre-Processing • 1. Image tiling: • The image may be quite large in comparison to the amount of memory available to the codec. • The original image is partitioned into rectangular non-overlapping blocks (tiles), to be compressed independently. • 2. DC-level shifting: • The codec expects its input sample data to have a nominal dynamic range that is approximately centered about zero (e.g. 0..255 → −128..127). • If the sample values are unsigned, the nominal dynamic range of the samples is adjusted by subtracting a bias of 2^(P−1) from each sample value, where P is the component's precision.
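DC-level shifting is just a bias subtraction; a minimal Python sketch (names are illustrative, not from the slides):

```python
def dc_level_shift(samples, precision):
    """Subtract the bias 2^(P-1) so unsigned samples are centred on zero."""
    bias = 1 << (precision - 1)
    return [s - bias for s in samples]

# 8-bit samples 0..255 are mapped to -128..127
print(dc_level_shift([0, 128, 255], 8))  # [-128, 0, 127]
```

The decoder simply adds the same bias back after the inverse transform.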

  10. Pre-Processing - Tiling • All operations, including component mixing, wavelet transform, quantization and entropy coding, are performed independently on the image tiles. • Tiling affects the image quality both subjectively and objectively. • Smaller tiles create more tiling artifacts.

  11. Pre-Processing (cont'd) • 3. Component transformation: • Maps data from RGB to YCbCr (Y, Cb, Cr are less statistically dependent and compress better); this serves to reduce the correlation between components, leading to improved coding efficiency. There are reversible and irreversible transforms. (Slide figures: forward and inverse reversible component transforms.)

  12. Pre-Processing - Component Transformations • Component transformations improve compression and allow visually relevant quantization: • Irreversible component transformation (ICT): • Floating point • For use with irreversible (floating point 9/7) wavelet • Reversible component transformation (RCT) : • Integer approximation • For use with reversible (integer 5/3) wavelet
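The reversible component transformation is simple enough to sketch in full. A minimal Python illustration of the integer RCT pair (Y = ⌊(R + 2G + B)/4⌋, Cb = B − G, Cr = R − G, and its exact inverse):

```python
def rct_forward(r, g, b):
    """Reversible component transform: integer-only, exactly invertible."""
    y  = (r + 2 * g + b) // 4   # Python's // floors, matching the spec's floor()
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    """Exact inverse of rct_forward."""
    g = y - (cb + cr) // 4
    r = cr + g
    b = cb + g
    return r, g, b

# The round trip is exact, which is what makes lossless coding possible
assert rct_inverse(*rct_forward(100, 50, 200)) == (100, 50, 200)
```

Because every step is integer arithmetic, the RCT pairs with the reversible 5/3 wavelet; the floating-point ICT pairs with the irreversible 9/7 wavelet instead.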

  13. ICT (example)

  14. Wavelet Transform • Floating-point 9/7 wavelet filter for lossy compression • Best performance at low bit rates • High implementation complexity, especially for hardware • Integer 5/3 wavelet filter for lossless coding • Integer arithmetic, low implementation complexity. Each row and column is filtered with a high-pass and a low-pass filter, followed by downsampling by 2 (so the total number of samples is preserved). This divides the tile into sub-bands. All information (index, position, precincts, etc.) regarding a single tile is put together in a contiguous stream of data called a packet.

  15. Code-blocks, precincts and packets

  16. Wavelet Transform • Two filtering modes: • Convolution-based: performing a series of dot products between the two filter masks and the extended 1-D signal. • Lifting-based: a sequence of very simple filtering operations in which odd sample values of the signal are alternately updated with a weighted sum of even sample values, and vice versa. (Slide figures: lossless and lossy 1D DWT lifting schemes.) P and U stand for Prediction and Update. • α = 1.586, β = 0.052, γ = 0.882, δ = 0.443, K = 1.230
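The lifting idea is easiest to see on the integer 5/3 filter. A minimal Python sketch of one 1-D decomposition level (predict/update rounding follows the reversible 5/3 lifting equations; boundary handling uses symmetric extension):

```python
def dwt53_forward(x):
    """One level of the reversible integer 5/3 lifting DWT on a 1-D
    signal of even length, with symmetric extension at the boundaries."""
    n = len(x)
    def xe(i):                        # mirror the signal: x[-1] = x[1], x[n] = x[n-2]
        if i < 0:
            i = -i
        if i >= n:
            i = 2 * (n - 1) - i
        return x[i]
    # Predict step: odd samples become high-pass (detail) coefficients.
    d = [x[2 * i + 1] - (xe(2 * i) + xe(2 * i + 2)) // 2 for i in range(n // 2)]
    de = lambda i: d[max(0, min(i, len(d) - 1))]   # mirror the detail band
    # Update step: even samples become low-pass (approximation) coefficients.
    s = [x[2 * i] + (de(i - 1) + de(i) + 2) // 4 for i in range(n // 2)]
    return s, d

def dwt53_inverse(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    n = 2 * len(s)
    de = lambda i: d[max(0, min(i, len(d) - 1))]
    x = [0] * n
    for i in range(len(s)):
        x[2 * i] = s[i] - (de(i - 1) + de(i) + 2) // 4
    xe = lambda i: x[i if i < n else 2 * (n - 1) - i]
    for i in range(len(d)):
        x[2 * i + 1] = d[i] + (xe(2 * i) + xe(2 * i + 2)) // 2
    return x

s, d = dwt53_forward([1, 2, 3, 4, 5, 6, 7, 8])
print(s, d)  # [1, 3, 5, 7] [0, 0, 0, 1]
assert dwt53_inverse(s, d) == [1, 2, 3, 4, 5, 6, 7, 8]
```

A smooth ramp produces near-zero detail coefficients, which is exactly what makes the subbands compressible; a 2-D transform applies the same 1-D steps to every row, then every column.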

  17. Wavelet Transform • Symmetric extension: • Ensures that, for the filtering operations that take place at both boundaries of the signal, a signal sample exists and spatially corresponds to each coefficient of the filter mask.
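Whole-sample symmetric extension mirrors the signal about its boundary samples without repeating them. A small Python sketch:

```python
def symmetric_extend(x, left, right):
    """Whole-sample symmetric extension: reflect about the first and last
    samples without duplicating them (the extension used by JPEG2000)."""
    n = len(x)
    def ref(i):
        while i < 0 or i >= n:       # reflect until the index lands inside
            if i < 0:
                i = -i
            if i >= n:
                i = 2 * (n - 1) - i
        return x[i]
    return [ref(i) for i in range(-left, n + right)]

print(symmetric_extend([10, 20, 30, 40], 2, 2))
# [30, 20, 10, 20, 30, 40, 30, 20]
```

The mirrored samples give every filter tap something to multiply at the edges, and because the extension is symmetric the transform stays perfectly invertible there.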

  18. DWT (example) • In JPEG2000, multiple stages of the DWT are performed. JPEG2000 supports from 0 to 32 stages. For natural images, usually between 4 and 8 stages are used.

  19. Quantization • The wavelet coefficients are quantized using a uniform quantizer with a deadzone. For each subband b, a basic quantizer step size Δb is used to quantize all the coefficients in that subband according to: q = sign(y) · floor(|y| / Δb). • Example: given a quantizer step of 10 and an encoder input value of 21.82, the quantizer index is q = sign(21.82) · floor(21.82 / 10) = 2.
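A minimal Python sketch of the deadzone quantizer and its reconstruction (the midpoint offset r is a decoder-side choice, commonly 0.5; the names are illustrative):

```python
import math

def deadzone_quantize(y, step):
    """Uniform scalar quantizer with deadzone: q = sign(y) * floor(|y| / step).
    The zero bin is twice as wide as the others, hence the 'deadzone'."""
    return int(math.copysign(math.floor(abs(y) / step), y))

def dequantize(q, step, r=0.5):
    """Reconstruction: 0 stays 0, otherwise sign(q) * (|q| + r) * step."""
    return 0.0 if q == 0 else math.copysign((abs(q) + r) * step, q)

print(deadzone_quantize(21.82, 10))   # 2  (the slide's example)
print(deadzone_quantize(-21.82, 10))  # -2
print(dequantize(2, 10))              # 25.0
```

Note that any |y| below the step size maps to index 0, which is what kills small, visually insignificant coefficients.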

  20. Coefficient Bit Modeling • Wavelet coefficients are associated with different sub-bands arising from the 2D separable transform applied. • These coefficients are then arranged into rectangular blocks within each sub-band, called code-blocks.

  21. Coefficient Bit Modeling (cont'd) • Code-blocks are coded one bit-plane at a time, starting from the Most Significant Bit-plane and working down to the Least Significant Bit-plane. If some MSB-planes contain no 1s, coding starts at the topmost bit-plane that contains at least one 1; the number of skipped bit-planes is encoded in a header.
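Counting the skipped all-zero MSB-planes reduces to a bit-length check on the largest coefficient magnitude (a Python sketch; names are illustrative):

```python
def skipped_msb_planes(coeffs, total_planes):
    """Number of all-zero most-significant bit-planes in a code-block.
    These planes are never coded; only their count goes in a header."""
    max_mag = max(abs(c) for c in coeffs)
    used = max_mag.bit_length()       # planes that contain at least one 1
    return total_planes - used

# Magnitudes fit in 8 bit-planes, but the largest value (9 = 0b1001)
# only needs 4, so the top 4 planes are skipped.
print(skipped_msb_planes([3, -9, 4, 0], 8))  # 4
```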

  22. Coefficient Bit Modeling (cont'd) • For each bit-plane in a code-block, a special code-block scan pattern is used for each of three coding passes.
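The scan pattern itself divides the code-block into horizontal stripes four rows tall; each stripe is scanned column by column, top to bottom within a column. A small Python sketch of the resulting coordinate order:

```python
def scan_order(width, height):
    """Coordinate order of the JPEG2000 code-block scan pattern:
    stripes of 4 rows, each stripe traversed column by column."""
    order = []
    for stripe in range(0, height, 4):
        for col in range(width):
            for row in range(stripe, min(stripe + 4, height)):
                order.append((row, col))
    return order

# A 2-wide, 5-tall block: the first stripe covers rows 0-3 of column 0,
# then rows 0-3 of column 1; the second stripe is the single leftover row.
print(scan_order(2, 5))
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1), (4, 0), (4, 1)]
```

Each of the three coding passes walks this same order, coding only the bits that belong to that pass.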

  23. 3 Passes Scanning • Each coefficient bit in the bit-plane is coded in only one of the Three Coding Passes: • 1. Significance Propagation • 2. Magnitude Refinement • 3. Clean-up

  24. 3 Passes Scanning • 1. Significance Propagation Pass: • If a bit is insignificant (=0) but at least one of its eight neighbors is significant (=1), then it is encoded. • If the bit at the same time is a 1, its significance flag is set to 1 and the sign of the symbol is encoded. • 2. Magnitude Refinement Pass: • Codes samples that are already significant and were not coded in the significance propagation pass. • 3. Clean-up Pass: • Codes all bits that were passed over by the previous two coding passes (insignificant bits). It is the first (and only) pass for the MSB-plane. The encoding is done by the MQ-coder, a low-complexity entropy coder.

  25. Quality layers organization • The resulting bit streams for each code-block are organized into quality layers. A quality layer is a collection of some consecutive bit-plane coding passes from each code-block. Each code-block can contribute an arbitrary number of bit-plane coding passes to a layer, but not all coding passes must be assigned to a quality layer. Every additional layer increases the image quality.

  26. Rate Control • Rate control is the process by which the code-stream is altered so that a target bit rate can be reached. • Once the entire image has been compressed, a post-processing operation passes over all the compressed blocks and determines the extent to which each block's embedded bit stream should be truncated in order to achieve the target bit rate. • The ideal truncation strategy is one that minimizes distortion while still reaching the target bit-rate. • The code-blocks are compressed independently, so any bitstream truncation policy can be used.
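The truncation step above can be illustrated with a toy greedy sketch. This is a simplified stand-in, not the standard's method: real JPEG2000 rate allocators run a convex-hull (Lagrangian) analysis of each block's rate-distortion curve, but the idea of buying the most distortion reduction per byte is the same:

```python
import heapq

def truncate_for_rate(blocks, budget):
    """blocks: per code-block list of coding passes, each pass given as
    (extra_bytes, distortion_reduction), taken strictly in order.
    Greedily extend whichever block currently offers the steepest
    distortion-reduction-per-byte slope, until the byte budget runs out."""
    nxt = [0] * len(blocks)            # next un-taken pass of each block
    chosen = [0] * len(blocks)         # number of passes kept per block
    spent = 0
    heap = []
    for b, passes in enumerate(blocks):
        if passes:
            cost, gain = passes[0]
            heapq.heappush(heap, (-gain / cost, b))
    while heap:
        _, b = heapq.heappop(heap)
        cost, gain = blocks[b][nxt[b]]
        if spent + cost > budget:
            # Passes must be taken in order, so if this one doesn't fit,
            # none of the block's later passes can be taken either.
            continue
        spent += cost
        nxt[b] += 1
        chosen[b] = nxt[b]
        if nxt[b] < len(blocks[b]):
            c, g = blocks[b][nxt[b]]
            heapq.heappush(heap, (-g / c, b))
    return chosen, spent

chosen, spent = truncate_for_rate([[(10, 100), (10, 10)], [(5, 40)]], 20)
print(chosen, spent)  # [1, 1] 15
```

Because the code-blocks are compressed independently, any such truncation policy yields a decodable stream; only the quality at the target rate differs.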

  27. Bit stream organization • In bit stream organization, the compressed data from the bit-plane coding passes are separated into packets. • Then, the packets are multiplexed together in an ordered manner to form one code-stream. Each precinct generates one packet, even if the packet is empty. A packet is composed of a header and the compressed data.

  28. Bit stream organization

  29. Bit stream organization (cont'd) • There are 5 ways to order the packets, called progressions, where position refers to the precinct number: • Quality: layer, resolution, component, position • Resolution 1: resolution, layer, component, position • Resolution 2: resolution, position, component, layer • Position: position, component, resolution, layer • Component: component, position, resolution, layer • The sorting mechanisms are ordered from most significant to least significant. It is also possible for the progression order to change arbitrarily in the code-stream.
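The five progressions are just different nestings of the same four loops over layer (L), resolution (R), component (C) and position/precinct (P); the four-letter names below (LRCP, RLCP, RPCL, PCRL, CPRL) are the standard's abbreviations. A small Python sketch:

```python
from itertools import product

def packet_order(progression, layers, resolutions, components, positions):
    """Enumerate packets in the order given by a progression: the four
    axes are iterated as nested loops, outermost (most significant) first."""
    axes = {"L": range(layers), "R": range(resolutions),
            "C": range(components), "P": range(positions)}
    labels = {"Quality": "LRCP", "Resolution 1": "RLCP",
              "Resolution 2": "RPCL", "Position": "PCRL",
              "Component": "CPRL"}
    order = labels[progression]
    return [dict(zip(order, combo))
            for combo in product(*(axes[a] for a in order))]

# With 2 layers and 2 resolutions, the Quality (LRCP) progression emits
# everything in layer 0 before anything in layer 1:
for pkt in packet_order("Quality", 2, 2, 1, 1):
    print(pkt)
```

Swapping the progression reorders the identical set of packets, which is why a server can re-sequence a code-stream without re-encoding it.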

  30. Code stream organization (diagram)

  31. Decoding • The decoder basically performs the opposite of the encoder: the code-stream is received according to the progression order stated in the header, the coefficients in the packets are decoded and dequantized, and the inverse component transform is performed. In the case of irreversible compression, decompression results in loss of data; the resulting image is not exactly like the original.

  32. Characteristics: • So, what is new in JPEG2000 compared with previous encoding standards? • Compress once - decompress many ways • Region-Of-Interest encoding • Progression • Error resilience

  33. JPEG2000 - Markets & Applications

  34. Compress once, decompress many ways In JPEG2000, the compressor decides the maximum resolution and maximum image quality to be used. It is also possible to perform random access by decompressing only a certain region of the image or a specific component of the image (e.g. the grayscale component of a color image). Both can be performed with varying qualities and resolutions. In each case it is possible to locate, extract, and decode the bytes required for the desired image product without decoding the entire code-stream.

  35. Region-of-interest (ROI) • A ROI is a part of an image that is encoded with higher quality than the rest of the image (the background). The encoding is done in such a way that the information associated with the ROI precedes the information associated with the background. • 2 methods: Scaling-based and Maxshift

  36. Region-of-interest (ROI) - Scaling-based • The wavelet transform is calculated • A ROI mask is derived, indicating the set of coefficients that are required for up-to-lossless ROI reconstruction • The wavelet coefficients are quantized • The coefficients that lie outside the ROI are downscaled by a specified scaling value • The resulting coefficients are progressively entropy encoded (with the most significant bit-planes first) • The ROI's scaling value and coordinates are added to the bit stream.

  37. Region-of-interest (ROI) - Maxshift method • A ROI mask (a bit map) is created describing which quantized transform coefficients must be encoded with better quality. • The quantized transform coefficients outside the ROI mask (background coefficients) are scaled down so that the bits associated with the ROI are placed in the highest bit-planes and coded before the background. • Selection of the scaling value S: S ≥ max(Mb), where Mb is the largest number of magnitude bit-planes for any background coefficient in any code-block in the current component. After the scaling of the background coefficients, the LSB of all shifted ROI coefficients is above the MSB (non-zero) of all background coefficients. • Advantage: arbitrarily shaped ROIs without the need for shape information at the decoder.
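Maxshift can be sketched in its equivalent "shift the ROI up" form (downshifting the background into extra low-order bit-planes amounts to the same separation); a hedged Python illustration with hypothetical names:

```python
def maxshift_encode(coeffs, in_roi):
    """Choose S so that 2^S exceeds every background magnitude, then
    upshift ROI coefficients by S bits. The decoder needs no ROI mask:
    any coefficient with magnitude >= 2^S must belong to the ROI."""
    bg_max = max((abs(c) for c, roi in zip(coeffs, in_roi) if not roi),
                 default=0)
    s = bg_max.bit_length()          # S >= number of background bit-planes
    out = [c * (1 << s) if roi else c for c, roi in zip(coeffs, in_roi)]
    return out, s

def maxshift_decode(coeffs, s):
    """Undo the shift using only the magnitude test; no mask required."""
    return [c // (1 << s) if abs(c) >= (1 << s) else c for c in coeffs]

enc, s = maxshift_encode([100, -7, 3, 50], [True, False, False, True])
print(s, enc)                        # 3 [800, -7, 3, 400]
print(maxshift_decode(enc, s))       # [100, -7, 3, 50]
```

Because the decoder classifies coefficients purely by magnitude, the ROI can have any shape at no extra signalling cost, exactly the advantage the slide names.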

  38. ROI - example Original Image with ROI Defined Decoded Image with ROI Intact

  39. Scalability and bit-stream parsing • 2 important modes of scalability: • Resolution/Spatial • Quality (SNR) • Bit-stream parsing • A combination of spatial and quality scalability. • It is possible to progress by spatial scalability to a given (resolution) level and then change the progression by SNR at a higher level.

  40. Resolution scalability

  41. Resolution scalability

  42. Resolution scalability

  43. Resolution scalability

  44. Quality scalability

  45. Quality scalability

  46. Quality scalability

  47. Error resilience • Error effects: • In a packet body: corrupted arithmetically coded data for some code-block => severe distortion. • In a packet head: wrong body length can be decoded, code block data can be assigned to wrong code-blocks => total synchronization loss. • Bytes missing (i.e. network packet loss): combined effects of error in packet head and body

  48. Protecting code-block data • Segmentation symbols: a special symbol sequence is coded at the end of each bit-plane. If the wrong sequence is decoded, an error has occurred and (at least) the last bit-plane is corrupted. • Regular predictable termination: the arithmetic coder is terminated at the end of each coding pass using a special algorithm (predictable termination). The decoder reproduces the termination and, if it does not find the same unused bits at the end, an error has occurred in (at least) the last coding pass. • Both mechanisms can be freely mixed, but they slightly decrease the compression efficiency.

  49. Protecting packet head • SOP resynchronization marker: every packet can be preceded by an SOP marker with a sequence index. If an SOP marker with the correct sequence index isn't found just before the packet head, an error has occurred. In that case the next unaffected packet is searched for in the codestream, and decoding proceeds from there. • PPM/PPT markers: the packet head content can be moved to the main or tile headers in the codestream and transmitted through a channel with a much lower error rate. • Precincts: they limit packet head errors to a small image area.

  50. Error resilience (cont'd)
