
Dr. Martin G. Bello ALPHATECH Inc. 6 New England Executive Park Burlington, Mass. 01803


Presentation Transcript


  1. ALPHATECH, Inc. — 6 New England Executive Park, Burlington, MA 01803, 781-273-3388; 3811 N. Fairfax Dr., Arlington, VA 22203, 703-524-6263; 4445 Eastgate Mall, San Diego, CA 92121, 858-812-7874
  Comparison of Support Vector Machines and Multilayer Perceptron Networks in Building Mine Classification Models
  Dr. Martin G. Bello, ALPHATECH Inc., 6 New England Executive Park, Burlington, MA 01803
  August 29, 2003
  (Research funded by ONR as part of the 6.2 MCM Program Element, with associated technical agent: NSWC Coastal Systems Station, Panama City, FL.)

  2. PRESENTATION OVERVIEW
  • Mine Countermeasures Overview
  • Mine Hunting Algorithm Structure
  • Overview of Multilayer Perceptron and Support Vector Machine Classifier Construction Methodologies
  • Performance Comparisons of Alternative Classifier Constructions
  • Conclusions

  3. Mine Countermeasures Overview

  4. PMA 2000 NEW POST-MISSION ANALYSIS CONCEPT-I (Coastal Systems Station)
  [Diagram: the WHOI / REMUS UUV & MSTL sonar and the Bluefin BPAUV / Klein sonar each produce a sonar mission tape.]

  5. PMA 2000 NEW POST-MISSION ANALYSIS CONCEPT-II (Coastal Systems Station)
  OBJECTIVE: increase the speed & robustness of post-mission analysis.
  [Diagram: the sonar mission tape feeds multiple computer-aided detection & classification (CAD/CAC) algorithms — CSS, Alphatech, Raytheon, and Lockheed — followed by display, analysis, & fusion of contacts.]
  PAYOFF OF FUSING MULTIPLE CAD/CAC ALGORITHMS
  • Reduced false alarm rates
  • Environmentally robust
  • Diverse algorithms have few false alarms in common

  6. PMA 2000 NEW POST-MISSION ANALYSIS CONCEPT-III (Coastal Systems Station)
  [Diagram: Bluefin BPAUV / Klein sonar data feeds the CSS, Alphatech, and Lockheed CAD/CAC algorithms; display, analysis, & fusion of contacts follow, and the operator marks mine-like objects (MLOs).]

  7. AGGREGATE DETECTION/CLASSIFICATION ALGORITHM STRUCTURE
  • Input: side-scan sonar imagery
  • Normalization algorithm enforces a more uniform "local" background
  • Anomaly screening extracts blobs/tokens corresponding to mine-like (ML) target candidates
  • Features are "local" functionals of the image, calculated for each blob/token
  • Feature vector → multilayer perceptron neural network log-likelihood calculation, or alternatively feature vector → SVM score calculation
  Processing chain: imagery → image normalization algorithm → anomaly screening algorithm → feature calculation for screener tokens → token log-likelihood ratio calculation → token ranking/thresholding
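The processing chain above can be sketched in outline. The box-filter normalization, the fixed anomaly threshold, and all function names below are illustrative assumptions, not the fielded algorithm:

```python
import numpy as np

def normalize_image(img, win=7):
    """Enforce a more uniform 'local' background by dividing each
    pixel by a local mean estimate (simple box-filter sketch)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    local_mean = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            local_mean[r, c] = padded[r:r + win, c:c + win].mean()
    return img / np.maximum(local_mean, 1e-9)

def screen_anomalies(norm_img, thresh=2.0):
    """Flag pixels whose normalized intensity deviates strongly
    from the local background (assumed threshold)."""
    return norm_img > thresh

def rank_tokens(scores, max_tokens=10):
    """Rank candidate tokens by classifier score, keep the top few."""
    order = np.argsort(scores)[::-1]
    return order[:max_tokens]
```

A bright pixel on a flat background survives screening, while background pixels do not; ranking then orders surviving tokens by score.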

  8. ANOMALY-SCREENING ALGORITHM STRUCTURE
  • Anomaly statistic quantifies deviation from "local" background characteristics
  • Distinct highlight (H), highlight/shadow (HS), and shadow (S) contrast statistics have been conceived
  • MP, PC = blob anomaly statistic maximum intensity and pixel count
  • rMP, rPC = ranks associated with MP, PC
  • Blob filtering identifies candidate ML-tokens
  • The current screening algorithm employs H-, HS-, and S-based segmentation statistics, deriving the final collection of screened tokens as those HS-blobs which intersect either an H-blob or an S-blob
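A toy version of the MP/PC blob statistics and the HS-intersects-(H or S) screening rule might look as follows; the boolean-mask blob representation and function names are assumptions for illustration only:

```python
import numpy as np

def blob_stats(anomaly_img, labels, blob_id):
    """MP and PC for one labeled blob: the maximum anomaly-statistic
    intensity inside the blob, and its pixel count."""
    mask = labels == blob_id
    return anomaly_img[mask].max(), int(mask.sum())

def filter_tokens(hs_blobs, h_blobs, s_blobs):
    """Keep HS-blobs (boolean masks) that intersect either an H-blob
    or an S-blob, mirroring the screening rule on the slide."""
    h_union = np.any(h_blobs, axis=0) if len(h_blobs) else np.zeros_like(hs_blobs[0])
    s_union = np.any(s_blobs, axis=0) if len(s_blobs) else np.zeros_like(hs_blobs[0])
    return [b for b in hs_blobs if np.any(b & (h_union | s_union))]
```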

  9. BASELINE CLASSIFICATION FEATURE VECTOR f DEFINITIONS-I (original feature set, 1992)
  • f1 = PC for HS-segmentation
  • f2 = min(rPC, rMP) for HS-segmentation
  • f3 = "local" blob pixel count for HS-segmentation
  • f4 = mean blob anomaly statistic intensity for HS-segmentation
  • f5 = standard deviation of blob anomaly statistic intensity for HS-segmentation
  • f6 = "local" blob count for HS-segmentation
  • f7 = MP for HS-segmentation
  Highlight related:
  • f8 = (1,0)-valued indicator for existence of an intersecting H-segmentation blob
  • f9 = MP for intersecting H-segmentation blob
  • f10 = PC for intersecting H-segmentation blob
  Shadow related:
  • f11 = (1,0)-valued indicator for existence of an intersecting S-segmentation blob
  • f12 = PC for intersecting S-segmentation blob
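Several of the listed intensity features (f1, f4, f5, f7) follow directly from a blob mask over the anomaly-statistic image; this sketch is illustrative, with assumed names:

```python
import numpy as np

def hs_blob_features(anomaly_img, blob_mask):
    """Illustrative subset of the baseline features for one HS-blob:
    f1 (pixel count), f4 (mean anomaly intensity), f5 (standard
    deviation of anomaly intensity), and f7 (maximum anomaly
    intensity)."""
    vals = anomaly_img[blob_mask]
    return {
        "f1_pc": int(blob_mask.sum()),
        "f4_mean": float(vals.mean()),
        "f5_std": float(vals.std()),
        "f7_mp": float(vals.max()),
    }
```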

  10. BASELINE CLASSIFICATION FEATURE VECTOR f DEFINITIONS-II
  Statistical intensity distribution related:
  • f13 = mean of normalized image over HS-segmentation blob
  • f14 = maximum of normalized image over HS-segmentation blob
  • f15 = standard deviation of normalized image over HS-segmentation blob
  • f16 = skewness coefficient of normalized image over HS-segmentation blob
  • f17 = kurtosis coefficient of normalized image over HS-segmentation blob
  Shape related:
  • f18 = perimeter of HS-segmentation blob
  • f19 = (16*PC)/(perimeter^2) for HS-segmentation blob
  • f20 = perimeter/(2*(bounding-box-width + bounding-box-height)) for HS-segmentation blob
  • f21 = (major-axis - minor-axis)/(major-axis + minor-axis) for HS-segmentation blob
  • f22 = major-axis
  • f23 = orientation of HS-segmentation blob
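The shape features f18-f20 can be computed from a blob mask alone. The 4-neighbor boundary-pixel perimeter estimate below is one plausible choice, not necessarily the one used in the study:

```python
import numpy as np

def perimeter(mask):
    """Approximate blob perimeter: count blob pixels having at least
    one 4-neighbor outside the blob (assumed estimator)."""
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return int((mask & ~interior).sum())

def shape_features(mask):
    """f18 (perimeter), f19 (compactness), f20 (perimeter relative to
    bounding-box half-circumference), per the slide's formulas."""
    pc = int(mask.sum())
    p = perimeter(mask)
    rows, cols = np.nonzero(mask)
    h = rows.max() - rows.min() + 1
    w = cols.max() - cols.min() + 1
    return {
        "f18_perimeter": p,
        "f19_compactness": 16.0 * pc / (p * p),
        "f20_perimeter_ratio": p / (2.0 * (w + h)),
    }
```

For a 4x4 square blob, the boundary count gives perimeter 12, so f19 = 16*16/144 and f20 = 12/16.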

  11. DISCRETE COSINE TRANSFORM AND PSEUDO-ZERNIKE MOMENT FEATURE DEFINITION
  • Discrete cosine transform (DCT) features, defined on a window centered on the HS-segmentation blob, form a vector of quantities obtained by stacking the rows of the DCT coefficient matrix [matrix definition shown on slide]
  • Pseudo-Zernike moment (PZM) features are likewise defined on a window centered on the HS-segmentation blob [definition shown on slide]
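A standard orthonormal 2-D type-II DCT over a square window, with the rows of the coefficient matrix stacked into a feature vector, can be written as below. This reconstructs only the general idea, since the slide's exact matrix definition did not survive in the transcript:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal type-II DCT basis matrix (rows are frequencies)."""
    u = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * x + 1) * u / (2 * n))
    c[0, :] *= np.sqrt(1.0 / n)
    c[1:, :] *= np.sqrt(2.0 / n)
    return c

def dct2_features(window):
    """2-D DCT of a square window centered on the blob; stacking the
    rows of the coefficient matrix yields the DCT feature vector."""
    c = dct_matrix(window.shape[0])
    return (c @ window @ c.T).flatten()
```

For a constant window, only the DC coefficient is nonzero, a quick sanity check on the basis normalization.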

  12. OVERVIEW OF COMPARED CLASSIFIER CONSTRUCTION STRATEGIES-I
  • Traditional classifier construction first involves a "feature selection" stage, in which an information-theoretic measure or the actual discrimination performance of a simple classifier is optimized
  • The multilayer perceptron (MLP) based training algorithm optimizes mutual information between the feature vector and the "true" class using a recursive "backpropagation-like" approach
  • Cross-validation using a test set is employed to terminate training when a specified objective function, corresponding to the integral over a test-set-derived receiver operating characteristic (ROC) curve, is maximized
  • The above steps may be repeated for on the order of 50-100 network optimizations to arrive at a "best" solution
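A minimal stand-in for the described procedure: a one-hidden-layer MLP trained by backpropagation, keeping the weights that maximize test-set ROC AUC. The slide's criterion integrates over the ROC curve; plain AUC, a cross-entropy loss, and all hyperparameters here are simplifying assumptions:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve: probability that a random positive
    outscores a random negative (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def train_mlp(x_tr, y_tr, x_te, y_te, hidden=8, lr=0.5, epochs=200, seed=0):
    """One-hidden-layer MLP with backpropagation; training 'stops' by
    retaining the weights that maximized test-set ROC AUC."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0, 0.5, (x_tr.shape[1], hidden))
    w2 = rng.normal(0, 0.5, (hidden, 1))
    best_auc, best = -1.0, (w1.copy(), w2.copy())
    for _ in range(epochs):
        h = np.tanh(x_tr @ w1)
        p = 1.0 / (1.0 + np.exp(-(h @ w2)))     # sigmoid output
        err = p - y_tr[:, None]                 # dL/dz for cross-entropy
        gw2 = h.T @ err / len(x_tr)
        gw1 = x_tr.T @ ((err @ w2.T) * (1 - h * h)) / len(x_tr)
        w1 -= lr * gw1
        w2 -= lr * gw2
        auc = roc_auc((np.tanh(x_te @ w1) @ w2).ravel(), y_te)
        if auc > best_auc:
            best_auc, best = auc, (w1.copy(), w2.copy())
    return best, best_auc
```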

  13. OVERVIEW OF COMPARED CLASSIFIER CONSTRUCTION STRATEGIES-II
  • The support vector machine (SVM) implementation adopted is SVMlight, developed by Professor Thorsten Joachims of Cornell University
  • The linear SVM "soft-margin" training formulation is defined as a quadratic programming problem, optimizing the sum of two terms: the squared norm of the classifier's inner-product parameter vector, and a weighted sum of "slack" variables related to misclassification of training samples
  • SVMlight employs an iterative "working-subset" strategy to solve the dual of the above quadratic programming problem, avoiding the excessive memory and time requirements of off-the-shelf quadratic programming implementations for large training data sets
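The primal soft-margin objective can be illustrated with a small subgradient-descent sketch. SVMlight itself solves the dual QP with a working-subset strategy, so this is a stand-in for the formulation, not the solver, and the hyperparameters are assumptions:

```python
import numpy as np

def train_linear_svm(x, y, C=1.0, epochs=500, lr=0.01):
    """Minimize the soft-margin primal objective
        0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b))
    by subgradient descent, with labels y in {-1, +1}."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (x @ w + b)
        viol = margins < 1                  # samples inside the margin
        gw = w - C * (y[viol, None] * x[viol]).sum(axis=0)
        gb = -C * y[viol].sum()
        w -= lr * gw
        b -= lr * gb
    return w, b
```

The slack variables never appear explicitly: at the optimum each slack equals the hinge loss max(0, 1 - y_i(w.x_i + b)), which is what the subgradient step penalizes.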

  14. COMPARISON OF SVM AND MLP RESULTS-I
  • 415F = baseline 23F set + 256F (DCT transform related) + 136F (PZM related)
  • 48F = baseline 23F set + 25F selected from DCT- and PZM-related features using a genetic algorithm based approach (NeuralWare Predict algorithm)
  • 6F = feature set selected from 48F using a genetic algorithm based approach (NeuralWare Predict algorithm)
  • 6F, 48F, and 23F results using MLP networks are superior to SVM-based results using the aggregate 415F set
  [Plot: average false alarms per image; low false alarm rates are desired.]
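A toy genetic-algorithm feature selector (bitmask chromosomes, truncation selection with elitism, one-point crossover, bit-flip mutation) illustrates the general approach; the NeuralWare Predict procedure is proprietary and certainly differs in detail:

```python
import numpy as np

def ga_select(n_features, fitness, pop=20, gens=30, p_mut=0.05, seed=0):
    """Evolve feature-inclusion bitmasks: fitness(mask) scores a
    subset, the top half survives unchanged, children come from
    one-point crossover plus bit-flip mutation."""
    rng = np.random.default_rng(seed)
    population = rng.random((pop, n_features)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(m) for m in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n_features))
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_features) < p_mut  # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    scores = np.array([fitness(m) for m in population])
    return population[int(np.argmax(scores))]
```

In practice the fitness would be cross-validated classifier performance on the selected subset; any cheap subset score works as a stand-in.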

  15. COMPARISON OF SVM AND MLP RESULTS-II
  • 6F, 48F, and 23F SVM results are superior to SVM-based results using the aggregate 415F set
  • The baseline 23F result using the MLP network classifier is the best
  [Plot: average false alarms per image.]

  16. CONCLUSIONS
  • MLP- and SVM-based classifier construction strategies frequently achieve similar performance; in this study, the MLP approach more consistently resulted in the best performance for a feature set of limited size
  • There is an advantage to employing the GA-based feature selection technique first, as opposed to the blind use of an aggregate collection of features
  • The SVM implementation needs to be generalized to incorporate cross-validation over the weighting parameter associated with the misclassification terms
