
Automatic Target Recognition with Support Vector Machines



Presentation Transcript


  1. Automatic Target Recognition with Support Vector Machines Computational Neuro-Engineering Laboratory Department of Electrical and Computer Engineering University of Florida Qun Zhao, Jose Principe December 4, 1998

  2. Overview • Introduction to SAR ATR • 4 Classifiers • Experiment results • Conclusions

  3. 1. Introduction Recognition of vehicles in synthetic aperture radar (SAR) imagery is a difficult problem due to the low resolution of the sensor (1 meter) and the speckle (noise) intrinsic to the image formation process. Another difficulty comes from the operating conditions: vehicles can be placed against high-clutter backgrounds, can be partially occluded, and NEW vehicles may appear that were not represented in the training set. Training data is always limited. We use here the MSTAR I and II database (Veda).

  4. 1. Data Examples [SAR image chips: BMP2, BTR70, T72, 2S1, D7]

  5. 2. Four Classifiers • 1) Perceptron with a hard limiter (perceptron learning rule) • 2) Perceptron with sigmoid units (delta rule)
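The difference between the two training rules can be illustrated with a minimal NumPy sketch (toy data and parameters assumed; this is not the lab's original code):

```python
import numpy as np

def perceptron_train(X, y, epochs=100, lr=0.1):
    """Perceptron learning rule with a hard limiter: weights are
    updated only on misclassified samples. Labels y must be +1/-1."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, t in zip(Xb, y):
            if np.sign(w @ x) != t:             # hard-limiter output wrong?
                w += lr * t * x                 # correct the mistake
    return w

def delta_rule_train(X, y, epochs=100, lr=0.1):
    """Single sigmoid unit trained with the delta rule: gradient
    descent on squared error, updating on every sample. y in {0, 1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, t in zip(Xb, y):
            o = 1.0 / (1.0 + np.exp(-(w @ x)))  # sigmoid output
            w += lr * (t - o) * o * (1 - o) * x # delta-rule update
    return w
```

The perceptron rule only reacts to errors, while the delta rule makes a graded update on every sample, proportional to how far the sigmoid output is from the target.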

  6. 2. Four Classifiers • 3). Optimal Separating Hyperplane
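To make "optimal" concrete: among all hyperplanes that separate the training data, this classifier picks the one with the largest geometric margin. A small sketch with illustrative toy data (not MSTAR chips):

```python
import numpy as np

def geometric_margin(w, b, X, y):
    """Distance from the hyperplane w.x + b = 0 to the closest
    correctly classified sample (labels y are +1/-1)."""
    return np.min(y * (X @ w + b)) / np.linalg.norm(w)
```

Comparing a candidate hyperplane's margin against alternatives is exactly the quantity the optimal separating hyperplane maximizes.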

  7. 2. Four Classifiers • 4) Support vector machine • Training: kernel-Adatron (Friess, T., Cristianini, N., and Campbell, C., 1998) • Gaussian kernel.
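The kernel-Adatron update is simple enough to sketch directly. Below is a minimal NumPy version of the scheme (no bias term, no soft margin, illustrative parameters; the published algorithm differs in such details):

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """K(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2 * X1 @ X2.T)
    return np.exp(-d2 / (2 * sigma**2))

def kernel_adatron(X, y, sigma=1.0, lr=0.5, epochs=200):
    """Kernel-Adatron: gradient-style updates on the dual variables
    alpha, clipped at zero. Labels y must be +1/-1."""
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            gamma = y[i] * np.sum(alpha * y * K[i])     # margin of sample i
            alpha[i] = max(0.0, alpha[i] + lr * (1.0 - gamma))
    return alpha

def predict(X_test, X, y, alpha, sigma=1.0):
    """Classify with the learned dual variables (support vectors
    are the samples with alpha > 0)."""
    K = gaussian_kernel(X_test, X, sigma)
    return np.sign(K @ (alpha * y))
```

Because the Gaussian kernel makes the data separable in feature space even when it is not in the input space, this sketch can fit an XOR-style layout that the linear classifiers above cannot.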

  8. 3. Experiments • 3 target classes: T72, BTR70, and BMP2 (pairwise classification) • Image size: 80 x 80; aspect angles 0 ~ 180 degrees • Training: 17-degree depression, 406 training samples • Testing: 15-degree depression, 724 testing samples
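With three classes, "pairwise classification" means training one binary classifier per class pair (three in total) and taking a majority vote. A sketch of that voting scheme, where the nearest-center decision function and the class centers are stand-ins, not the classifiers from the experiments:

```python
from itertools import combinations
import math

# Hypothetical 2-D class "centers", purely for illustration.
CENTERS = {"T72": (0, 0), "BTR70": (5, 0), "BMP2": (0, 5)}

def decide(a, b, x):
    """Stand-in pairwise classifier: +1 if x is closer to class a's
    center than to class b's, else -1."""
    return 1 if math.dist(x, CENTERS[a]) < math.dist(x, CENTERS[b]) else -1

def pairwise_vote(classes, x):
    """Run every pairwise classifier; the class with the most votes wins."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[a if decide(a, b, x) > 0 else b] += 1
    return max(votes, key=votes.get)
```

In the experiments the binary decisions come from the trained classifiers rather than distances to centers, but the pair-then-vote structure is the same.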

  9. 3. Experiments • 1. Classification

  10. 3. Experiments - Recognition • Two more vehicles were added to the test set; they are called confusers. • Confusers: 2S1 and D7 (275 confuser images). • This turns the task into a recognition problem. • The PD = 0.9 point of the receiver operating characteristic (ROC) curve is chosen for the comparison: classifier outputs are thresholded to achieve PD = 0.9, and performance is then measured by error rate and false alarms.
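The thresholding step can be sketched as follows (the score arrays here are hypothetical; the real scores are the classifier outputs on the MSTAR chips and confuser images):

```python
import numpy as np

def threshold_at_pd(target_scores, pd=0.9):
    """Pick the decision threshold so that a fraction `pd` of the true
    target scores lies at or above it -- the PD = 0.9 operating point
    on the ROC curve."""
    return np.quantile(target_scores, 1.0 - pd)

def false_alarm_rate(confuser_scores, thresh):
    """Fraction of confuser images whose score clears the threshold,
    i.e. images that would be wrongly declared targets."""
    return np.mean(confuser_scores >= thresh)
```

Fixing PD this way makes the classifiers comparable: each is tuned to detect 90% of true targets, and the remaining differences show up as error rate and false alarms on the confusers.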

  11. 3. Experiments - Recognition

  12. 4. Conclusions • Classification and recognition are different problems, and the latter is more realistic (and harder). • SVMs with the Gaussian kernel perform better for recognition: the local shape of the Gaussian kernel is very useful and should be exploited (samples far from the class centers tend to produce small feature values). • In our problem (a large input space) the optimal separating hyperplane performs better for classification. • Kernel-Adatron training is easy and fast.
