
Adaptive Algorithms for Optimal Classification and Compression of Hyperspectral Images



  1. Adaptive Algorithms for Optimal Classification and Compression of Hyperspectral Images Tamal Bose* and Erzsébet Merényi# *Wireless@VT Bradley Dept. of Electrical and Computer Engineering Virginia Tech #Electrical and Computer Engineering Rice University

  2. Outline • Motivation • Signal Processing System • Adaptive Differential Pulse Code Modulation (ADPCM) Scheme • Transform Scheme • Results • Conclusion

  3. Status Quo • Raw data (limited onboard processing) • Unreliable links • Unacceptable latencies • Delay in science and discovery • Restricts deep space missions [Diagram: mission control and mission scientists under high stress with reduced productivity, slowing the path from raw data to knowledge]

  4. High-Speed Real-Time On-Board Signal Processing Objectives • Computationally efficient signal processing algorithms with the following features: • Adaptive filter based algorithms that continuously adapt to new environments, inputs, events, disturbances, etc. • Modular algorithms suitable for implementation on distributed processors • Cognitive algorithms that learn from their environments; a high degree of artificial intelligence built in for mission technology and for science data gathering/processing Impact on Science • State-of-the-art signal processing algorithms to: • enable onboard science • detect events and take necessary action; e.g., collecting and processing data upon detecting dust storms on Mars • process and filter science data with “machine intelligence”; e.g., data compression with signal classification metrics, so that certain features can be preserved Concept: • Current science/technology plans • Scientific processing and data analysis • Data compression/filtering • Autonomous mission control; e.g., automatic landing site identification, instrument control, etc. • Cognitive radio based communications to optimize power, cost, bandwidth, processing speed, etc.

  5. Impact • Large body of knowledge developed for on-board processing. Two main classes (Filtering and Classification): • Adaptive filtering algorithms (EDS, FEDS, CG, and many variants) • Algorithms for 3-D data de-noising, filtering, compression, and coding. • Algorithms for hyperspectral image clustering, classification, onboard science (HyperEye) • Algorithms for joint classification and compression.

  6. HyperEye: Intelligent Data Understanding in an on-board context [Diagram: spacecraft system] • Environment: Mars, Earth, … planet surfaces • Data acquisition subsystem: hyperspectral imager supplying labeled and unlabeled remote sensing observations and training data • HyperEye IDU: “precision” manifold learning system performing unsupervised clustering, novelty detection, and supervised classification (continuous production of surface cover maps) • Decision / control subsystem: receives alerts for navigation decisions

  7. HyperEye: Intelligent Data Understanding environment [Diagram: “precision” manifold learning system] • On-board component: the HyperEye “precision” learner, built around an Artificial Neural Net core: a Self-Organizing Map (unsupervised) with non-standard capabilities and a supervised SOM-hybrid classifier; it sends alerts and decision control to the on-board autonomous decision system • Products: cluster extraction from the SOM, discovery, supervised class maps, class statistics • On-ground component: visualization & summaries, human interaction, and evaluation (by domain expert, ANN expert, …), with feedback to learning

  8. Specific Goals (this talk) • Maximize compression ratio with classification metrics • Minimize mean square error under some constraints • Minimize classification error

  9. Signal Processing System

  10. TOOLS & ALGORITHMS • Digital filters • Coefficient adaptation algorithms • Neural nets, SOMs • Pulse code modulators • Image transforms • Nonlinear optimizers • Entropy coding

  11. Scheme-I • ADPCM is used for compression • SOM mapping is used for clustering • A genetic algorithm is used to minimize the cost function • Compression is done along the spatial and/or spectral dimensions

  12. ADPCM system • Prediction error: e(n) = x(n) − x̂(n), where x̂(n) is the adaptive predictor's output • Reconstruction: x̃(n) = x̂(n) + e_q(n), where e_q(n) is the quantized prediction error • Reconstruction error: x(n) − x̃(n) = e(n) − e_q(n), which equals the quantization error • Cost function: the least squares error (see slide 15)

  13. Several different algorithms are used for adaptive filtering: • Least Mean Square (LMS) • Recursive Least Squares (RLS) • Euclidean Direction Search (EDS) • Conjugate Gradient (CG) • The quantizer is adaptive: • Jayant quantizer • Lloyd-Max optimum quantizer • Custom quantizer as needed (a sketch combining an LMS predictor with a Jayant quantizer follows)
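To make Scheme-I concrete, here is a minimal sketch (not the authors' implementation) of an ADPCM loop combining an LMS-adapted linear predictor with a Jayant-style adaptive quantizer; the filter order, step size, initial step, and step multipliers are illustrative assumptions.

```python
import numpy as np

def adpcm_lms(x, order=4, mu=0.01, nbits=3):
    """Toy ADPCM codec: LMS predictor + Jayant adaptive quantizer.

    Returns the reconstructed signal, so the reconstruction
    (= quantization) error can be inspected directly.
    """
    levels = 2 ** (nbits - 1)          # magnitude levels per sign
    delta = 0.1                        # initial quantizer step (assumption)
    # Jayant multipliers: shrink the step for inner levels, expand for outer.
    mult = np.linspace(0.9, 1.6, levels)

    w = np.zeros(order)                # predictor coefficients
    buf = np.zeros(order)              # past *reconstructed* samples
    x_rec = np.zeros(len(x))

    for n in range(len(x)):
        x_hat = w @ buf                        # prediction
        e = x[n] - x_hat                       # prediction error
        q = min(int(abs(e) / delta), levels - 1)
        e_q = np.sign(e) * (q + 0.5) * delta   # quantized error (mid-rise)
        delta = min(max(delta * mult[q], 1e-4), 1e4)  # Jayant adaptation
        x_rec[n] = x_hat + e_q                 # reconstruction
        w += mu * e_q * buf                    # LMS update; uses e_q so the
        buf = np.roll(buf, 1)                  # decoder can mirror it exactly
        buf[0] = x_rec[n]
    return x_rec
```

Because both the quantizer step and the predictor are adapted from quantized quantities, the decoder can run the same recursions without side information.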

  14. Predictor footprint • C(i,j,k) represents the prediction coefficients • R is a prediction window over which C(i,j,k) is nonzero [Figure: cubic filter footprint over (i, j), marking the filter coefficient positions and the position to be predicted]

  15. EDS Algorithm The least squares cost function: J(w) = wᵀR w − 2 wᵀp + c, where R is the autocorrelation matrix of the input and p is the cross-correlation vector. An iterative algorithm for minimizing J(w) has the form w(k+1) = w(k) + α v(k), where v(k) is the search direction; EDS uses the Euclidean (coordinate) directions. The cost function at the next step is J(w(k) + α v(k)) = J(w(k)) + 2α v(k)ᵀ(R w(k) − p) + α² v(k)ᵀR v(k). Now we find α such that the above is minimized: α = v(k)ᵀ(p − R w(k)) / (v(k)ᵀR v(k)).

  16. EDS Algorithm
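As a concrete illustration (a minimal sketch, not the authors' code), one EDS sweep cycles through the Euclidean coordinate directions and applies the optimal step size α derived on the previous slide; R and p are assumed to be the autocorrelation matrix and cross-correlation vector of the least squares problem.

```python
import numpy as np

def eds_sweep(R, p, w):
    """One sweep of Euclidean Direction Search over all coordinate directions.

    Minimizes J(w) = w'Rw - 2w'p by an exact line search along each
    unit vector e_i (the Euclidean directions).
    """
    for i in range(len(w)):
        g_i = R[i] @ w - p[i]          # i-th component of (Rw - p)
        if R[i, i] > 0:
            # alpha = v'(p - Rw) / (v'Rv) with v = e_i
            w[i] -= g_i / R[i, i]
    return w
```

Repeated sweeps converge to the Wiener solution R⁻¹p when R is positive definite; each sweep costs O(N²) and needs no matrix inversion.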

  17. Self-organizing map (SOM) • An unsupervised neural network: a mapping from a high-dimensional input data space onto a regular two-dimensional array of neurons • The neurons of the map are connected to adjacent neurons by a topology (rectangular or hexagonal) • One neuron wins the competition; its weights and those of its neighborhood are then updated [Diagram: input layer connected by weights to the competition (output) layer; source: http://www.generation5.org/content/2004/aisompic.asp]

  18. The learning process of the SOM • Competition: the winning neuron is the one whose weight vector has the shortest Euclidean distance to the input vector • Update: the weight values of the winning neuron and its neighborhood are moved toward the input • Repeat: as the learning proceeds, the learning rate and the size of the neighborhood decrease gradually (one training step is sketched below)
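The following is a minimal SOM training loop illustrating the competition/update/decay steps above; the grid size, epoch count, initial learning rate, and initial neighborhood radius are illustrative assumptions, not the HyperEye settings.

```python
import numpy as np

def som_train(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Minimal SOM: competition, neighborhood update, decaying parameters."""
    rows, cols = grid
    rng = np.random.default_rng(0)
    W = rng.normal(size=(rows, cols, data.shape[1]))   # codebook on the grid
    yy, xx = np.mgrid[0:rows, 0:cols]                  # map coordinates

    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Competition: winner = shortest Euclidean distance to x.
            d2 = ((W - x) ** 2).sum(axis=2)
            bi, bj = np.unravel_index(d2.argmin(), d2.shape)
            # Learning rate and neighborhood radius decay gradually.
            frac = 1.0 - t / n_steps
            lr, sigma = lr0 * frac, max(sigma0 * frac, 0.5)
            # Update: pull the winner and its map neighborhood toward x.
            h = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, :, None] * (x - W)
            t += 1
    return W
```

Clustering an image then amounts to assigning each pixel's spectrum to its best-matching unit on the trained map.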

  19. GA-ADPCM Algorithm • Apply SOM mapping to the original image. • Generate an initial population of ADPCM coefficient sets. • Implement ADPCM (LMS, EDS, RLS, etc.) processing using these sets of coefficients. • Apply SOM mapping to the decompressed images. • Calculate the fitness scores (clustering errors) between the decompressed images and the original image. [Diagram: the coefficient population is passed through ADPCM and SOM to produce one fitness score per individual]

  20. Sort the fitness scores and keep the 50% fittest individuals. • Apply the genetic operations (crossover and mutation) to create the new coefficient population. • Go to Step 2 and repeat this loop until the termination condition is achieved. • The termination condition is met when the clustering error is smaller than a certain threshold. [Diagram: the fittest coefficient sets seed the next population] The whole loop is sketched below.
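Putting the two slides together, here is a minimal sketch of the GA-ADPCM loop. The helper names are hypothetical: `codec(image, coeffs)` stands in for an ADPCM compress/decompress round trip and `cluster(image)` for the SOM clustering; the population size, mutation scale, and crossover rule are illustrative choices.

```python
import numpy as np

def ga_adpcm(image, cluster, codec, pop_size=20, n_coef=4,
             threshold=0.01, max_gen=50):
    """GA search over ADPCM coefficient sets; fitness = clustering error."""
    rng = np.random.default_rng(0)
    ref = cluster(image)                       # cluster map of the original
    pop = rng.normal(scale=0.1, size=(pop_size, n_coef))
    best, best_fit = pop[0], np.inf

    for _ in range(max_gen):
        # Fitness: fraction of pixels clustered differently after round trip.
        fit = np.array([np.mean(cluster(codec(image, c)) != ref) for c in pop])
        i = int(np.argmin(fit))
        if fit[i] < best_fit:
            best, best_fit = pop[i].copy(), fit[i]
        if best_fit < threshold:               # termination condition
            break
        parents = pop[np.argsort(fit)[: pop_size // 2]]   # fittest 50%
        # Crossover: average random parent pairs; mutation: Gaussian noise.
        pairs = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
        children = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2
        children += rng.normal(scale=0.01, size=children.shape)
        pop = np.vstack([parents, children])
    return best
```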

  21. Fitness function F = Ce/N, where Ce is the number of pixels clustered incorrectly, N is the total number of pixels in the image, and F is the fraction of incorrectly clustered pixels. Ce is obtained by the following steps: • Calculate Cm = Co − Cg, where Co is the matrix containing the clustering result of the original image, Cg is the matrix containing the clustering result of the image after ADPCM compression and decompression, and Cm is the difference between the two clustered images. • Set all nonzero entries of Cm to 1 and sum them to get the clustering error Ce. Example: Co = [1 2 3; 2 1 3; 3 1 2] and Cg = [1 2 2; 1 1 3; 1 1 3] give Cm = Co − Cg = [0 0 1; 1 0 0; 2 0 −1], which binarizes to [0 0 1; 1 0 0; 1 0 1], so Ce = 4.
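A direct transcription of this fitness computation (assuming the two cluster maps use consistent labels):

```python
import numpy as np

def fitness(Co, Cg):
    """F = Ce / N: fraction of pixels whose cluster label changed."""
    Cm = Co - Cg                      # difference of the two cluster maps
    Ce = np.count_nonzero(Cm)         # nonzero entries = mis-clustered pixels
    return Ce / Co.size

# Worked example from the slide:
Co = np.array([[1, 2, 3], [2, 1, 3], [3, 1, 2]])
Cg = np.array([[1, 2, 2], [1, 1, 3], [1, 1, 3]])
print(fitness(Co, Cg))                # Ce = 4, N = 9, F ~ 0.444
```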

  22. Transform Domain Scheme • Image transform is used for compression • DFT, DCT, DST, DWT, etc. • Parameters (block size, number of bits) can be adjusted by cost function • Compression is done along: • spectral domain, spatial domain, or both • Quantization: • Uniform, non-uniform, optimum, custom, etc. • Bit allocation: • non-uniform

  23. Transform-Domain Algorithm

  24. Method I: fix the number of quantization bits, adjust block size (DCT length) • Method II: fix block size (DCT length), adjust the number of quantization bits • Several other combinations
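As an illustration of the transform scheme (a minimal sketch under assumed parameters, not the authors' pipeline), the following applies a block DCT along the spectral axis with a fixed number of uniform quantization bits; block size and bit depth are exactly the two knobs that Methods I and II adjust.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_compress_spectral(cube, block=32, nbits=8):
    """Block DCT along the spectral axis, uniform nbits quantization.

    cube: (rows, cols, bands) hyperspectral cube.
    """
    out = np.empty(cube.shape)
    for b0 in range(0, cube.shape[2], block):
        seg = cube[:, :, b0:b0 + block].astype(float)
        coef = dct(seg, axis=2, norm='ortho')          # spectral DCT
        # Uniform quantization to nbits (the values one would entropy-code).
        scale = max(np.abs(coef).max(), 1e-9) / 2 ** (nbits - 1)
        q = np.round(coef / scale)
        out[:, :, b0:b0 + block] = idct(q * scale, axis=2, norm='ortho')
    return out
```

Feeding the reconstructed cube back through the SOM clustering yields the clustering error that drives the choice of block size and bit count.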

  25. Results

  26. Hyperspectral cube: Lunar Crater Volcanic Field (LCVF)

  27. Jasper Ridge (JR): one frame of the hyperspectral cube

  28. Clustered results: comparison between ADPCM and GA-ADPCM [Panels: one block of the original image; clustered original image; clustered images by LMS, EDS, GA-LMS, and GA-EDS]

  29. [Plots: fitness score for GA-LMS; fitness score for GA-EDS] (fitness scores = clustering errors)

  30. Clustering error comparison between ADPCM and GA-ADPCM

  31. Clustering error comparison between LMS and GA-LMS [Plots: block size = 16, 32, 64 with 4 classes each; block size = 16 with 3 classes; block size = 32 with 6 classes; block size = 64 with 8 classes]

  32. Clustering error comparison between EDS and GA-EDS [Plots: block size = 16, 32, 64, with 4 classes each]

  33. Clustering results: uncompressed image vs. transformed image [Panels: clustered image of the original; clustered image after the transform]

  34. Mean spectral signatures of the SOM clusters identified in the Jasper Ridge image. Left: from the original image. Right: from the image after applying DCT compression and decompression

  35. [Plots: clustering errors using different block sizes in JR; clustering errors using different numbers of bits in JR]

  36. Spectral signature comparison (mean, STD, envelope) of the whole hyperspectral data set [Panels: LCVF uncompressed data; LCVF after ADPCM compression; LCVF after DCT compression]

  37. [Panels, continued: LCVF uncompressed data; LCVF after ADPCM compression; LCVF after DCT compression]

  38. Classification accuracy Measuring the effect of compression on classification accuracy. Data: hyperspectral image of the Lunar Crater Volcanic Field, 196 spectral bands, 614 x 420 pixels. Classifications were done for 23 known surface cover types. The original uncompressed data set is labeled “LCVF”; a data set compressed and then decompressed with ADPCM is labeled “D1c16”; a data set compressed and then decompressed with DCT is labeled “DCT194b8hb4” (8-bit quantization for significant data, 4-bit for insignificant data); “D1c8b3” uses ADPCM with a 3-bit Jayant quantizer.

  39. Conclusion • New algorithms have been developed and implemented that use the concept of classification-metric-driven compression • The GA-ADPCM algorithm was simulated: • Optimized the adaptive filter in an ADPCM loop using a GA • Reduced the clustering error • Drawback: increased computational cost • The feedback-transform algorithm was simulated: • Selects the optimal block size (DCT length) and number of quantization bits to balance low clustering error, computational complexity, and memory usage • Compression along the spectral domain preserves the spectral signatures of the clusters • Results using the above algorithms are promising

  40. Acknowledgments • Graduate students: • Mike Larsen (USU) • Kay Thamvichai (USU) • Mike Mendenhall (Rice) • Li Ling (Rice) • Bei Xei (VT) • B. Ramkumar (VT) • NASA AISR Program
