
Evolving Hardware Dr. Janusz Starzyk Ohio University


Presentation Transcript


  1. Evolutionary Feature Extraction for SAR Air-to-Ground Moving Target Recognition – a Statistical Approach. Evolving Hardware. Dr. Janusz Starzyk, Ohio University

  2. Neural Network Data Classification • Concept of “Logic Brain” • Random learning data generation • Multiple space classification of data • Feature function extraction • Dynamic selectivity strategy • Training procedure for data identification • FPGA implementation for a fast training process

  3. Neural Network Data Classification Abdulqadir Alaqeeli and Jing Pang • Concept of “Logic Brain” • A threshold setup converts the analog world to the digital one • A “logic brain” is possible based on an artificial neural network • Random learning data generation • Gaussian-distributed random generation of multidimensional data • Half of the data sets are prepared for the learning procedure • The other half is used later for the training procedure (a data-generation sketch follows below)
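
A minimal sketch of this data-generation step, assuming two classes drawn from multidimensional Gaussians; the dimensions, means, and sample counts below are illustrative, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_class_data(mean, cov, n):
    """Draw n samples from a multivariate Gaussian class distribution."""
    return rng.multivariate_normal(mean, cov, size=n)

dim, n_per_class = 4, 200
class_a = make_class_data(np.zeros(dim), np.eye(dim), n_per_class)
class_b = make_class_data(np.full(dim, 1.5), np.eye(dim), n_per_class)

# Half of each set for the learning procedure, the other half held out.
learn_a, held_a = class_a[:n_per_class // 2], class_a[n_per_class // 2:]
learn_b, held_b = class_b[:n_per_class // 2], class_b[n_per_class // 2:]
```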

  4. Neural Network Data Classification • Multiple space classification of data • Each space can be represented by a set of minimum base vectors • Feature function extraction and dynamic selection strategy • Conditional entropy extracts the information in each subspace • Different combinations of base vectors compose the redundant sets of new subspaces → expansion strategy • Minimum function selection → shrinking strategy (a sketch of this expand-then-shrink loop follows below)
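
A hedged sketch of the expand-then-shrink loop: compose a redundant set of candidate subspaces from the base vectors (expansion), then keep only the best-scoring ones (shrinking). The transcript does not preserve the conditional-entropy formula, so `separability` below is an illustrative stand-in for the information index:

```python
import numpy as np
from itertools import combinations

def separability(features, labels):
    """Illustrative stand-in for the information index."""
    mu_a = features[labels == 0].mean(axis=0)
    mu_b = features[labels == 1].mean(axis=0)
    return float(np.linalg.norm(mu_a - mu_b))

def expand_then_shrink(X, labels, base, keep):
    # Expansion: redundant candidate subspaces from pairs of base vectors.
    candidates = [np.vstack(pair) for pair in combinations(base, 2)]
    # Shrinking: rank candidates by the index and keep the best `keep`.
    ranked = sorted(candidates,
                    key=lambda V: separability(X @ V.T, labels),
                    reverse=True)
    return ranked[:keep]

# Demo with 5-D data and the canonical basis as candidate base vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(1, 1, (50, 5))])
labels = np.repeat([0, 1], 50)
best_subspaces = expand_then_shrink(X, labels, list(np.eye(5)), keep=3)
```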

  5. Neural Network Data Classification • FPGA implementation for a fast training process • Learning results are saved on board • Testing data sets are generated on board and sent through the artificial neural network, also generated on board, to test the successful data classification rate • The results are displayed on board • Promising applications • Especially useful for feature extraction from large data sets • Catastrophic circuit fault detection

  6. Information Index: Background • [Figure: scatter of Class A samples (X) and Class B samples (O), with a few points of each class inside the other cluster] • A priori class probabilities are known • Entropy measure based on conditional probabilities

  7. Information Index: Background • P1 and P2 are the a priori class probabilities • P1w and P2w are the conditional probabilities of correct classification for each class • P12w and P21w are the conditional probabilities of misclassification given a test signal • P1w, P2w, P12w, and P21w are calculated using Bayesian estimates of their probability density functions (a numerical illustration follows below)
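
A minimal numerical illustration of these quantities for two 1-D Gaussian classes under the Bayes decision rule. The slides estimate the pdfs with Bayesian methods, whereas here the densities are simply assumed known, so this is a sketch rather than the authors' estimator:

```python
import numpy as np
from scipy.stats import norm

p1, p2 = 0.5, 0.5                      # a priori class probabilities P1, P2
pdf1 = norm(loc=0.0, scale=1.0).pdf    # class-1 density (assumed known here)
pdf2 = norm(loc=2.0, scale=1.0).pdf    # class-2 density (assumed known here)

x = np.linspace(-6.0, 8.0, 4001)
dx = x[1] - x[0]
decide_1 = p1 * pdf1(x) >= p2 * pdf2(x)   # Bayes decision region for class 1

P1w  = np.sum(pdf1(x)[decide_1]) * dx     # class 1 correctly classified
P12w = np.sum(pdf1(x)[~decide_1]) * dx    # class 1 misclassified as class 2
P2w  = np.sum(pdf2(x)[~decide_1]) * dx    # class 2 correctly classified
P21w = np.sum(pdf2(x)[decide_1]) * dx     # class 2 misclassified as class 1
```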

  8. Information Index: Background • Probability density functions of P1w, P2w, P12w, and P21w [plots not preserved in the transcript]

  9. Direct Integration • [Figure: a nonuniform grid with unequal cell areas (S_i < S_k) versus a uniform grid with equal cell areas (S_i = S_k)] • For N dimensions, m^N grid points are needed to estimate the integral; with m = 100 points per dimension, N = 6 already requires 10^12 points

  10. Monte Carlo Integration • [Figure: densities pdf1 and pdf2; sample points x_i generated with pdf1 and weighted by W(x_i)]

  11. Information Index: Probability Density Functions • [Figure: pdf of P2w]

  12. Information Index: Weighted pdfs • [Figure: weighted pdf of P2w]

  13. Information Index: Monte Carlo Integration • To integrate the probability density function: • generate random points x_i with pdf1 • weight the generated points (the weighting formula was an image and is not preserved here) • estimate the conditional probability P1w from the weighted sample (a sketch follows below)
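
A minimal sketch of the Monte Carlo step, reusing the two Gaussian classes from the illustration above. Since the slide's weighting formula did not survive the transcript, the indicator of the Bayes decision region plays the role of the weight W(x_i) here:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p1, p2 = 0.5, 0.5
f1, f2 = norm(0.0, 1.0), norm(2.0, 1.0)

xi = f1.rvs(size=100_000, random_state=rng)   # random points drawn with pdf1
w = (p1 * f1.pdf(xi) >= p2 * f2.pdf(xi)).astype(float)  # weight per point

P1w_mc = w.mean()   # MC estimate: the mean of the weights approximates P1w
```

Because the points are drawn with pdf1, no m^N grid is needed; the estimate converges as 1/sqrt(n) regardless of the dimension.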

  14. Information Index and Probability of Misclassification

  15. Standard Deviation of Information in MC Simulation

  16. Normalized Standard Deviation of Information

  17. Information Index: Status • MIIFS was generalized to continuous distributions • An N-dimensional information index was developed • Efficient N-dimensional integration was used • Information-error analysis was performed • The information index can be used with non-Gaussian distributions • For small training sets and a low information index, the information error is larger than the information itself

  18. Optimum Transformation: Background • Principal Component Analysis (PCA) based on the Mahalanobis distance suffers from scaling • PCA assumes Gaussian distributions and estimates covariance matrices and mean values • PCA is sensitive to outliers • Wavelets provide a compact data representation and improve recognition • The improvement shows no statistically significant difference in recognition between different wavelets • Hence the need for a specialized transformation

  19. Optimum Transformation: Haar Wavelet • Example [worked example not preserved in the transcript]

  20. Optimum Transformation: Haar Wavelet • Repeat the average-and-difference step log2(n) times (a minimal sketch follows below)
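
A minimal sketch of this step in Python: repeatedly replace the signal by its pairwise averages while keeping the pairwise differences, log2(n) times. Dividing the differences by 2 is one common normalization; the slides' exact scaling is not preserved in the transcript:

```python
import numpy as np

def haar_transform(a):
    """Haar coefficients of a length-2^k signal via repeated avg/diff."""
    a = np.asarray(a, dtype=float)
    details = []
    while len(a) > 1:
        avg = (a[0::2] + a[1::2]) / 2.0    # pairwise averages
        diff = (a[0::2] - a[1::2]) / 2.0   # pairwise differences
        details.append(diff)               # keep the detail coefficients
        a = avg                            # recurse on the averages
    details.append(a)                      # overall average comes last
    return np.concatenate(details[::-1])

print(haar_transform([4, 2, 5, 5]))   # -> [ 4. -1.  1.  0.]
```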

  21. Optimum Transformation: Haar Wavelet • Waveform interpretation

  22. Optimum Transformation: Haar Wavelet • Matrix interpretation: b = W*a, where W is the matrix of pairwise averages and differences (the explicit matrix was an image and is not preserved here)

  23. Optimum Transformation: Haar Wavelet • Matrix interpretation for the class of signals: B = W*A, where A is the (n x m) input-signal matrix • Selection of the n best coefficients is performed using the information index: Bs1 = S1*W*A, where S1 is an (n x n*log2(n)) selection matrix

  24. Optimum Transformation: Evolutionary Iterations • Iterating on the selected result: Bs2 = S2*W*Bs1, where S2 is a selection matrix, or Bs2 = S2*W*S1*W*A • After k iterations: Bsk = Sk*W* ... *S2*W*S1*W*A • So the optimized transformation matrix T = Sk*W* ... *S2*W*S1*W can be obtained from the Haar wavelet (a sketch of this loop follows below)
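
A hedged sketch of these iterations. For brevity, W here is the single-level n x n average/difference operator from slide 22, whereas the deck's W stacks log2(n) levels into n*log2(n) rows; `row_score` is an illustrative stand-in for the information index, which the transcript omits:

```python
import numpy as np

def haar_level_matrix(n):
    """n x n matrix: top half pairwise averages, bottom half differences."""
    W = np.zeros((n, n))
    for i in range(n // 2):
        W[i, 2 * i] = W[i, 2 * i + 1] = 0.5          # average rows
        W[n // 2 + i, 2 * i] = 0.5                   # difference rows
        W[n // 2 + i, 2 * i + 1] = -0.5
    return W

def row_score(B, labels):
    """Illustrative per-feature separability (rows of B are features)."""
    return np.abs(B[:, labels == 0].mean(axis=1)
                  - B[:, labels == 1].mean(axis=1))

def evolve_transform(A, labels, n_keep, iters):
    """Iterate transform + selection; n_keep should stay a power of two."""
    T, B = np.eye(A.shape[0]), A
    for _ in range(iters):
        W = haar_level_matrix(B.shape[0])
        Bw = W @ B                                    # transform step
        keep = np.argsort(row_score(Bw, labels))[::-1][:n_keep]
        S = np.eye(Bw.shape[0])[keep]                 # selection matrix S_k
        B, T = S @ Bw, S @ W @ T                      # accumulate T = Sk*W*...
    return B, T
```

Applying the accumulated T to new signals then reproduces the evolved features without rerunning the iterations.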

  25. Optimum Transformation: Evolutionary Iterations • Learning with the evolved features

  26. Optimum Transformation: Evolutionary Iterations • Waveform interpretation of T rows

  27. Optimum Transformation: Evolutionary Iterations • Mean values and the evolved transformation • [Figure: original signals and the evolved transformation; y-axis: Signal Value (-1.5 to 1.5), x-axis: Bin Index (0 to 140)]

  28. Two-Class Training • Training on HRR signals: 17° depression-angle profiles of BMP2 and BTR60

  29. Wavelet-Based Reconfigurable FPGA for Classification • [Block diagram: m 8-bit samples from a time window t feed a Haar-Wavelet Transform, which passes k 8-bit coefficients (k ≤ m) to a neural network that recognizes the input signal]

  30. Block Diagram of the Parallel Architecture • [Figure: eight samples (0–7) transformed in parallel, adjacent pairs averaged, e.g. (0+1)/2]

  31. Simplified Block Diagram of the Serial Architecture • [Figure: chained stages of registered averages (A), e.g. (0+1)/2 and (2+3)/2, and registered differences (D), e.g. (0-1) and (2-3); R: register using IOBs or CLBs; the blue stage operates first, then the green]

  32. RAM-Based Wavelet • [Block diagram: input data flows through five 16x8 RAMs with write/read address lines (WA/RA), interleaved with four processing elements (PE); Start/Done control logic sequences the stages]

  33. The Processing Element • [Block diagram; the internal details did not survive the transcript, but a behavioral sketch follows below]
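
From slide 31, each PE produces a registered average and a registered difference of two samples. A minimal behavioral model, assuming 8-bit integer samples; the input values are illustrative:

```python
def processing_element(a: int, b: int) -> tuple[int, int]:
    """Behavioral model of one PE: registered average and difference."""
    avg = (a + b) >> 1    # (a+b)/2 implemented as a shift, as in hardware
    diff = a - b          # (a-b)
    return avg, diff

print(processing_element(208, 95))   # -> (151, 113)
```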

  34. Results: For One Iteration of the Haar Wavelet • For 8 samples: • Parallel arch.: 120 CLBs, 128 IOBs, 58 ns • Serial arch.: 98 CLBs*, 72 IOBs, 148 ns* • The parallel architecture wins for a larger number of samples • For 16 samples: • Parallel arch.: 320 CLBs, 256 IOBs, 233 ns • RAM-based arch.: 136 CLBs, 16 IOBs, ~1 µs • The RAM-based architecture wins, since ~1 µs is not so slow • (*) These values increase very fast as the number of samples grows, and the delay becomes much larger

  35. Reconfigurable Haar-Wavelet-Based Architecture • [Block diagram: a chain of processing elements (PE) over the data path]

  36. Test Results • Testing on HRR signals: 15° depression-angle profiles of BMP2 and BTR60 • With 15 features selected, correct classification is 69.3% for BMP2 and 82.6% for BTR60 • The comparable results in the SHARP confusion matrix are 56.7% for BMP2 and 67% for BTR60

  37. Problem Issues • BTR60 signals at 17° and 15° depression angles do not have compatible statistical distributions

  38. Problem Issues • BMP2 and BTR60 signal distributions are not Gaussian

  39. Work Completed • Information index and its properties • Multidimensional MC integration • Information as a measure of learning quality • Information error • Wavelets and their effect on pattern recognition • Haar wavelet as a linear matrix operator • Evolution of the Haar wavelet • Statistical support for classification

  40. Recommendations and Future Work • Training data must represent a statistical sample of all signals, not a hand-picked subset • Probability density functions will be approximated using a parametric or NN approach • The information measure will be extended to k-class problems • Training and testing will be performed on 12-class data • Dynamic clustering will prepare a decision-tree structure • A hybrid, evolutionary classifier will be developed
