
Segmentation and classification of man-made maritime objects in TerraSAR-X images


Presentation Transcript


  1. Segmentation and classification of man-made maritime objects in TerraSAR-X images IEEE International Geoscience and Remote Sensing Symposium Vancouver, Canada July 27th 2011 Michael Teutsch, email: michael.teutsch@iosb.fraunhofer.de Günter Saur, email: guenter.saur@iosb.fraunhofer.de

  2. Outline • Motivation • Concept • Segmentation • Classification • Examples • Conclusions and future work

  3. Motivation I • Applications: • Tracking of cargo ship traffic • Surveillance of fishery zones, harbours, shipping lanes • Detection of abnormal ship behaviour, criminal activities • Search for lost containers or hijacked ships • Aims / Challenges: • Detection of man-made objects (not here) • Precise orientation and size estimation • Separation of clutter, non-ships, different ship types • Robustness against various SAR-specific noise effects • Fast processing time • Here: Analyze object appearance, avoid models and prior knowledge

  4. Motivation II: Difficult examples

  5. Concept

  6. Pre-processing • 3x3 median filter • Ground Sampling Distance (GSD) normalization to 2.0 meters/pixel
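A minimal sketch of this pre-processing step, assuming the SAR amplitude chip is available as a 2D NumPy array; the function name and the native GSD used in the comment are illustrative, not taken from the slides:

```python
from scipy.ndimage import median_filter, zoom

def preprocess(sar_image, gsd_native, gsd_target=2.0):
    """Despeckle with a 3x3 median filter, then resample the image so that
    one pixel covers gsd_target meters on the ground."""
    filtered = median_filter(sar_image, size=3)
    scale = gsd_native / gsd_target        # e.g. 1.25 m/px StripMap -> 2.0 m/px
    return zoom(filtered, scale, order=1)  # bilinear resampling
```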

  7. Segmentation I: Structure-emphasizing LBP filter Local Binary Pattern: Rotation invariant uniform LBPs: Texture primitives: Timo Ojala et al., "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, July 2002.
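The rotation-invariant uniform LBP codes of Ojala et al. are available off the shelf; a short sketch using scikit-image's local_binary_pattern (the neighborhood parameters P=8, R=1 are assumptions, the slides do not state them):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_texture_primitives(image, P=8, R=1.0):
    """Per-pixel rotation-invariant uniform LBP codes (Ojala et al. 2002).
    Uniform patterns map to 0..P (the texture primitives: spot, edge,
    corner, flat, ...), all non-uniform patterns share the single code P+1."""
    codes = local_binary_pattern(image, P, R, method='uniform')
    return codes.astype(np.uint8)          # at most P+2 distinct codes
```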

  8. Segmentation II: Structure-emphasizing LBP filter Rotation invariant uniform LBPs (texture primitives): Rotation invariant variance measure: For each pixel position (x,y), fixed P, and varying R:
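The formula itself is not recoverable from the transcript; the slide presumably refers to the local variance measure defined by Ojala et al. (2002) for the P circular neighbors g_p at radius R around (x, y):

```latex
% Rotation-invariant local variance (Ojala et al. 2002), to which this
% slide presumably refers:
\[
  \mathrm{VAR}_{P,R} \;=\; \frac{1}{P}\sum_{p=0}^{P-1}\bigl(g_p-\mu\bigr)^{2},
  \qquad
  \mu \;=\; \frac{1}{P}\sum_{p=0}^{P-1} g_p .
\]
```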

  9. Segmentation III: Rotation compensation with HOG A. Korn, "Toward a Symbolic Representation of Intensity Changes in Images", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 5, 1988.
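A hedged sketch of a gradient-orientation-histogram (HOG-style) estimate of the dominant object orientation; the binning and magnitude weighting are assumptions, not the exact procedure of Korn (1988) or of this paper:

```python
import numpy as np

def dominant_orientation(chip, n_bins=36):
    """Gradient-orientation histogram over an image chip; the histogram peak
    gives the dominant (ship-axis) orientation in degrees, modulo 180."""
    gy, gx = np.gradient(chip.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), np.pi)        # fold into [0, pi)
    hist, edges = np.histogram(angle, bins=n_bins, range=(0.0, np.pi),
                               weights=magnitude)
    peak = int(np.argmax(hist))
    return np.degrees(0.5 * (edges[peak] + edges[peak + 1]))
```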

  10. Segmentation IV: Rotation compensation with HOG+PCA PCA FUSION
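How exactly the HOG and PCA cues are fused is not recoverable from the transcript; the sketch below only shows the PCA side, i.e. the principal-axis orientation of the segmented pixels, which could then be fused (e.g. averaged on the circle) with the HOG estimate:

```python
import numpy as np

def orientation_pca(mask):
    """Principal-axis orientation (degrees, modulo 180) of the segmented
    pixels via PCA, as a second cue next to the HOG estimate."""
    ys, xs = np.nonzero(mask)
    coords = np.column_stack((xs, ys)).astype(float)
    coords -= coords.mean(axis=0)
    cov = np.cov(coords, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0
```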

  11. Segmentation V: Size estimation with row/col. histograms
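A plausible reading of this step, sketched in Python: rotate the segmentation mask by the estimated orientation so the ship axis is horizontal, then threshold the row and column occupancy histograms; the relative threshold and the use of scipy.ndimage.rotate are assumptions:

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_size(mask, orientation_deg, gsd=2.0, rel_threshold=0.1):
    """Estimate object length and width (in meters) from row/column
    occupancy histograms of the rotation-compensated mask."""
    aligned = rotate(mask.astype(float), -orientation_deg, order=0, reshape=True)
    col_hist = aligned.sum(axis=0)    # occupied pixels per column -> length
    row_hist = aligned.sum(axis=1)    # occupied pixels per row    -> width
    length_px = np.count_nonzero(col_hist > rel_threshold * col_hist.max())
    width_px = np.count_nonzero(row_hist > rel_threshold * row_hist.max())
    return length_px * gsd, width_px * gsd   # meters at 2.0 m/px GSD
```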

  12. Segmentation VI: Experimental data set • 17 different TerraSAR-X StripMap images • 756 manually labeled detections including orientation and length • No ground truth, manual labeling is sensed truth • Labeling inspired by CFAR-detection including potential clutter • Scale normalization to 2.0 meters / pixel

  13. Segmentation VII: Orientation and size estimation results

  14. Segmentation VIII: Examples

  15. Classification I: Classes • clutter (ambiguity) • non-ship • unstructured ship • ship structure 1 • ship structure 2

  16. Classification II: Concept • G. Saur, M. Teutsch, "SAR signature analysis for TerraSAR-X based ship monitoring", Proceedings of SPIE Vol. 7830, 2010. • M. Teutsch, W. Krüger, "Classification of small Boats in Infrared Images for maritime Surveillance", 2nd International Conference on WaterSide Security (WSS), Marina di Carrara, Italy, Nov. 3-5, 2010.

  17. Classification III: Experiments and results • 5 classes: clutter, non-ship, unstr. ship, structure 1, structure 2 • 543 samples with good segmentation and possible manual labeling: • 53 clutter, 110 non-ship, 322 unstr. ship, 17 structure 1, 41 structure 2 • 362 training samples and 181 test samples • Runtime for segmentation and classification: ~ 2 sec per detection • Classification results:
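The conclusions slide (slide 20) names a cascade of SVM stages followed by a 3-NN for the final decision; a hypothetical scikit-learn sketch of such a three-stage cascade follows (stage order, class encoding, and kernels are assumptions, not taken from the paper):

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical cascade: two SVM stages peel off clutter and non-ships,
# a 3-NN assigns the three ship classes.
svm_clutter = make_pipeline(StandardScaler(), SVC(kernel='rbf'))   # clutter vs. object
svm_ship = make_pipeline(StandardScaler(), SVC(kernel='rbf'))      # non-ship vs. ship
knn_structure = KNeighborsClassifier(n_neighbors=3)                # unstr. / structure 1 / structure 2

def classify(x):
    """Classify one feature vector x of shape (1, n_features); each stage
    must have been fitted on the training split beforehand. Class label 0
    encodes the 'reject' branch of each SVM stage (an assumption)."""
    if svm_clutter.predict(x)[0] == 0:
        return 'clutter'
    if svm_ship.predict(x)[0] == 0:
        return 'non-ship'
    return knn_structure.predict(x)[0]
```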

  18. Classification IV: Examples • clutter • unstructured ship • unstructured ship • unstructured ship • non-ship • ship structure 1

  19. Classification V: Examples for whole processing chain • ship structure 2 • ship structure 2 • unstructured ship

  20. Conclusions • Aim: Segmentation and classification of man-made objects in satellite SAR • Challenge: Robustness against various object appearances, noise effects • Segmentation: Pre-processing, structure-emphasizing filter with LBPs, orientation estimation with HOGs and PCA, size estimation with row/column histograms, median orientation estimation error: 5.2° • Classification: Extensive feature calculation, feature evaluation and selection, classification with cascaded SVM and 3-NN, 81% correct classification Future work • Improve size estimation (LBPs instead of row/column histograms?) • More data for classification (esp. structure classes) • Other approaches for 3rd classification stage (local features?) • Is object structuredness and classifiability based on appearance measurable?

  21. Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB Thanks a lot for your attention! Karlsruhe • Ettlingen • Ilmenau

  22. Segmentation: Orientation estimation error distrib.

  23. Segmentation: Examples – The bad guys
