
Air Force Office of Scientific Research

Air Force Office of Scientific Research. Basic Research: Target Recognition, Navigation 21 October 2004. The Basic Research Manager for the Air Force. Dr. Jon Sjogren AFOSR/NM 703-696-6564 www.afosr.af.mil.


Presentation Transcript


  1. Air Force Office of Scientific Research. Basic Research: Target Recognition, Navigation. 21 October 2004. The Basic Research Manager for the Air Force: Dr. Jon Sjogren, AFOSR/NM, 703-696-6564, www.afosr.af.mil. Distribution authorized to DoD components only (Critical Technology) (10/01/04). Other requests for this document shall be referred to AFOSR/PIP.

  2. Signals Communication/Surveillance: 6.1 Funding Profile, FY04 (Total Program / ATR-Navig)
  • Intramural (lab tasks): $1,035 K / $625 K
  • Extramural (university grants): $1,374 K / $832 K
  • HBCU/MI, DEPSCoR, DURIP: $1,650 K / $900 K
  • STTR, DARPA ISP MURI: $1,838 K / $0 K
  • Total Administered: $5,897 K / $2,357 K

  3. Automatic Target Recognition (ATR): Foundations • The "programme" is to move toward integrated treatment of: Synthetic Aperture Radar (SAR), High Resolution Ranging (HRR), Laser Radar (Ladar), Infra-Red (IR) • Image formation accentuates the target features that you seek • Gravitate target detection/identification/recognition toward the sensor • Combine physical models (electromagnetic scattering) with statistical models of reception (Doppler, phase and bearing) • "Factor in" the clutter and hostile interference: 'the structure of (radar) clutter as it affects detection has defied solution' (Army Night Vision Lab) • Study of parameter spaces that describe complex scenes (several moving targets): "General Pattern Theory", A. Lanterman interprets U. Grenander

  4. Mathematical Methods Enable Sensing • Colorado State Univ. (Kirby): Self-correlation of images, eigen-object analysis, manifold structures and dimensional reduction of data. • Georgia Tech. (Lanterman): Pattern recognition, structure of "clutter". • UC Santa Cruz (Milanfar): Video fusion, edge modeling, motion and aliasing. • Rice University (Baraniuk): Multi-dimensional wavelet transforms; rapid reconstruction of singularities. • Yale Univ. (S. Zucker): Multi-scale texture and color reconstruction; neural techniques based on animal visual recognition. • Boston Univ. (Clem Karl): Unified enhancement and object extraction for ATR. • Rensselaer Polytechnic Inst. (Yazici): Methods of representation of continuous groups in the design of digital filters. • Arizona State Univ. (Morrell/Cochran): Adaptive sensing modality. • Colorado State Univ. (Scharf/Chong): Waveform coding, information-theoretic processing and target/environment modeling. [DARPA] • SUNY Buffalo (Soumekh): SAR return processing, full Sommerfeld interference model, exploitation of massive computation. [DURIP] • Geophex Inc. (STTR, $500K): Time Exposure Acoustics (passive).

  5. Pattern-Theoretic Foundations of Automatic Target Recognition in Clutter. Aaron D. Lanterman, Georgia Institute of Technology. Inference via Jump-Diffusion Processes.
  Problem • Clutter may have just as much interesting structure as a target, yielding high false-alarm rates
  Objective • ATR algorithms that are robust to scene variability, particularly clutter • Algorithm-independent performance metrics for ATR problems
  Scientific Approach • Instead of trying to "filter out" clutter, allow the algorithm to estimate the clutter structure along with the targets • Riding Moore's law: real-time scene simulation once required an expensive Silicon Graphics workstation; now a cheap PC with a decent graphics card will do • Developing metrics based on Kullback-Leibler distances
  Accomplishments/Transitions • To facilitate transition, jump-diffusion code is being refactored into flexible, reusable C++ classes employing OpenGL • Developed Kullback-Leibler metrics for a radar scenario
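The Kullback-Leibler distance mentioned above can serve as an algorithm-independent separability measure between target and clutter hypotheses. A minimal numerical sketch; the two discrete distributions here are invented for illustration, not taken from the program's radar scenario:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    probability distributions, given as lists of probabilities."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical return distributions: target-present vs. clutter-only.
target = [0.7, 0.2, 0.1]
clutter = [0.2, 0.3, 0.5]

# Larger divergence means the two hypotheses are easier to tell apart,
# independent of which ATR algorithm does the deciding.
d = kl_divergence(target, clutter)
```

The divergence is zero only when the two distributions coincide, which is what makes it usable as a bound on any algorithm's discrimination performance.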

  6. Foundations of ATR with 3-D Data. Motivation from the DARPA E3D BAA • DARPA's E3D program seeks "efficient techniques for rapidly exploiting 3-D sensor data to precisely locate and recognize targets" and sets specific and detailed milestones. • Natural questions: If such a milestone is not reached, is that the fault of the algorithm or the sensor? What performance from a particular sensor is necessary to achieve a certain level of ATR performance, independent of the question of what algorithm is used? • The AFOSR Foundations of ATR program fills the gap.

  7. Foundations of ATR with 3-D Data: Applying the Grenander Program
  • Many ad-hoc algorithms have been built; the Grenander program instead derives lower bounds on the performance of any algorithm
  • Existing algorithms are based on extraction of features; feature extraction may involve loss of information, so use all the data!
  • Algorithm design is driven by real-time constraints imposed by current hardware; computers keep getting faster, so what are the ultimate limits placed by the sensor hardware itself?
  • Pose is a nuisance variable in the ATR problem; pattern theory deals with it head-on: at a given viewing angle, Target A at one orientation may look much like Target B at a different orientation

  8. Multi-scale Geometric Analysis. Richard Baraniuk, Rice University
  [Figure: "Barbara" test image; real wavelet subband; complex magnitude; complex phase. 1-D complex wavelet, 2-D complex wavelets, 3-D hyper-complex wavelets]
  • Highly directional atomic representation to match signal geometry • Complex, quaternion, octonion structure matched to piecewise-smooth multi-D signals with singularities along manifolds • Enables coherent magnitude/phase multi-scale analysis • Applications: geometric multi-scale estimation, detection, classification, segmentation, compression

  9. Complex Wavelet Analysis
  [Figure: real wavelet coefficients vs. complex magnitude, with color-scale legends]
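One reason for the magnitude/phase split in slides 8-9: a real wavelet coefficient oscillates in sign as a feature shifts position, while the complex magnitude varies smoothly. A toy 1-D sketch using a hand-rolled quadrature (cosine/sine) atom pair, standing in for the actual complex wavelet transform:

```python
import math

def quadrature_response(signal, center, freq=2.0, sigma=2.0):
    """Inner product of the signal with a cosine/sine (real/imaginary)
    Gabor-like atom pair centered at `center`; returns (real, imag)."""
    re = im = 0.0
    for n, x in enumerate(signal):
        w = math.exp(-((n - center) ** 2) / (2 * sigma ** 2))
        re += x * w * math.cos(freq * (n - center))
        im += x * w * math.sin(freq * (n - center))
    return re, im

def impulse(pos, length=32):
    """A unit impulse: a single feature at position `pos`."""
    s = [0.0] * length
    s[pos] = 1.0
    return s

# Real-part response flips sign as the feature shifts past the atom...
reals = [quadrature_response(impulse(p), 16)[0] for p in (14, 15, 16, 17, 18)]
# ...while the complex magnitude stays positive and varies smoothly.
mags = [math.hypot(*quadrature_response(impulse(p), 16)) for p in (14, 15, 16, 17, 18)]
```

This shift-stability of the magnitude is what makes coherent magnitude/phase analysis attractive for detection and classification.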

  10. Multi-scale Geometric Compression
  • Zoom of image compressed using the JPEG2000 wavelet encoder: strong artifacts at low bit-rates
  • Zoom of image compressed using the geometry-based WSFQ coder, which employs a cartoon image model combining wavelets and wedgelets: reduced artifacts, state-of-the-art compression, optimal approximation theorem
  • Explicit geometric information in the coded bit-stream
  • Potential application: multi-scale geometric target representation

  11. Object-Image Metrics & Duality
  • Object shape space and image shape space are linked by object-image relations: each object determines the set of all images of that object, and each image determines the set of all objects that could have produced it
  • Duality Theorem: matching can (in principle) be performed in either object space or image space without loss of performance!

  12. Statistical Modeling & Curve Evolution. W. Clement Karl, Boston Univ. • Challenges • Inclusion of accurate sensor and scene models in curve evolution methods • Unified enhancement and object extraction for ATR • Existing Methods • Image enhancement followed by boundary extraction • Physical sensor model often ignored • Progress • Joint ML-EM and curve evolution allowing explicit inclusion of sensor anomaly model and target range behavior • Unified anomaly suppression and object extraction • Application • Laser radar range image target extraction

  13. Laser Radar Range Data Example
  [Figure panels: true synthetic range scene and initial curve; laser radar observation with range anomalies; reconstructed scene and extracted object boundary using the statistical sensor and scene model]

  14. Sensor and Processor Integration for Improved Resolution. P. Milanfar, University of California, Santa Cruz
  Problem • Spatial and temporal resolution of available imaging sensors is not always adequate
  Objective • Improvement of spatial and temporal fidelity and resolution of video imagery • Optimal adaptation of the imaging sensor and "impedance match" to the post-processor, resulting in improved information transfer from scene to user and improved usage of the imaging system's bandwidth
  [Figure: before/after frames from an infrared sequence, AF Wright Labs]
  Scientific Approach • Development of a fast and robust computational estimation framework based on the L1 norm • Prior based on a new multi-scale edge model • Study of performance limits via statistical bounds; improved algorithms minimize the lower bounds • Measurement of information content in space/time, with feedback to the sensor to maximize information content • Verify algorithms and approach on real data
  Accomplishments/Transitions • Algorithms and software suite for resolution enhancement from video available to AFRL (video-to-still and video-to-video, gray and color) • Proof-of-concept sensor optimization implemented on an IEEE 1394 camera • Transitions and extensions to closed-loop operation of an adaptive sensor, and joint optimization and operation of the sensor and resolution-enhancement algorithms

  15. Fusion of Multiple Video Frames
  • Reconstruction problem: given the low-resolution frames, estimate the high-resolution image (super-resolution); the high-resolution image is the desired unknown
  • Implicit problem: estimate the motion vectors between frames; these are nuisance parameters

  16. Generic Super-resolution Algorithm: motion estimation followed by image reconstruction
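The two-stage pipeline above (motion estimation, then reconstruction) can be illustrated with the simplest possible "shift-and-add" scheme in 1-D, assuming the sub-pixel shifts are already known exactly; real algorithms must estimate the motion and cope with noise and aliasing:

```python
def downsample(signal, factor, offset):
    """A low-res frame: every `factor`-th sample, starting at `offset`
    (the offset plays the role of sub-pixel camera motion)."""
    return signal[offset::factor]

def shift_and_add(frames, offsets, factor):
    """Interleave registered low-res frames back onto the high-res grid."""
    hi = [0.0] * (len(frames[0]) * factor)
    for frame, off in zip(frames, offsets):
        for i, v in enumerate(frame):
            hi[i * factor + off] = v
    return hi

# Two frames of the same scene, shifted by one high-res sample.
hi_res = [float(i) for i in range(8)]
offsets = [0, 1]
frames = [downsample(hi_res, 2, o) for o in offsets]

# With exact motion estimates, the high-res signal is recovered.
reconstructed = shift_and_add(frames, offsets, 2)
```

Each frame alone carries only half the samples; fusing the registered frames restores the full grid, which is the core idea behind video super-resolution.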

  17. Effect of Aliasing. How does aliasing affect the ability to estimate translation between sets of images?
  [Figure: image pairs with lots of aliasing vs. little aliasing; note the "false" motions under heavy aliasing]

  18. “Algebraic and Topological Structure for Signal and Image Processing”, Michael Kirby, Colorado State Univ. • Large data sets of images or signals often possess “geometric structure” that may be exploited to assist in analysis, classification and representation. • Failure to exploit such structure leads to inferior solutions. • Data may be represented by manifolds or algebraic varieties. New algorithms involve • Geometric, Algebraic, Topological Approaches • Whitney’s theorem. Nash’s theorem. • Parameterizing Subspace Optimization Problem • Smooth optimization over Grassmannians. • Maxi-min approximation criterion. Michael Kirby, Department of Mathematics, Colorado State University www.math.colostate.edu/~kirby

  19. Shortcomings of Subspace Methods
  Problem • Subspace approaches are not optimal given large variations in eigen-coefficients over one person • Face images under varying pose and illumination lie on a manifold (see figure)
  Objective • Model manifolds directly • Classification on manifolds versus subspaces
  Applications • Biometrics, human identification, face recognition, machine lip reading, signal separation
  [Figure: sample face images (25, 100, 200) and eigen-image sequence coefficient variation over time]
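The eigen-coefficients in the figure come from projecting images onto leading eigenvectors of the data covariance. A self-contained sketch of that subspace step, using toy 3-D "image" vectors and power iteration in place of a proper eigensolver:

```python
def dot(u, v):
    """Inner product of two vectors given as lists."""
    return sum(a * b for a, b in zip(u, v))

def covariance(data):
    """Sample covariance matrix of a list of equal-length vectors."""
    d, n = len(data[0]), len(data)
    mean = [sum(x[j] for x in data) / n for j in range(d)]
    centered = [[x[j] - mean[j] for j in range(d)] for x in data]
    return [[sum(c[i] * c[j] for c in centered) / n for j in range(d)]
            for i in range(d)]

def power_iteration(cov, iters=200):
    """Leading eigenvector of a symmetric matrix via power iteration."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [dot(row, v) for row in cov]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy data varying mostly along the direction (1, 2, 0), plus small noise.
data = [[1.0, 2.0, 0.1], [2.0, 4.0, -0.1], [3.0, 6.0, 0.0], [4.0, 8.0, 0.1]]
v1 = power_iteration(covariance(data))

# The eigen-coefficient of each sample in the leading direction: this is
# the quantity whose variation over a sequence the slide points to.
coeffs = [dot(x, v1) for x in data]
```

The slide's point is precisely that when pose and illumination vary, these coefficients wander over a curved manifold rather than staying in a low-dimensional linear subspace.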

  20. Cognition and Image Fusion/Recognition. Steven Zucker, David and Lucile Packard Professor, Yale University
  • Biologically-inspired work funded out of Life Sciences as part of the AFOSR (NM and NL) Data Fusion Concentration
  • Models of the primate cortex motivate visual recognition algorithms that go beyond edge detection
  • Column-to-column interaction among vision cells: object recognition through "consistent orientations", e.g. of an attached shadow
  [Figure: column-to-column interactions in the primate visual cortex]

  21. Biologically-motivated ATR
  [Figure: China Lake imagery processed with the standard (Canny) model vs. the Yale model; layer-to-layer interactions]
  • By contrast, layer-to-layer interactions lead to a suite of non-linear operators well suited for object detection
  • In foggy and other scenes where obscuration is heavy, the China Lake database imagery shows out-performance of the Canny model: in the Yale-model result the ship stands out
  • The complementary mathematical theory is based on the unit tangent bundle for the shading flow field (the cells detect feature orientation)

  22. Vision-based Precision Navigation and Control
  • Various vision algorithms offer the potential to reduce or eliminate reliance on GPS or other external navigation sources: optic flow, feature tracking, bio-inspired vision systems
  • Provides the ability to navigate indoors/underground to survey denied targets
  • Vision-based control could reduce reliance on onboard IMU systems and improve robustness to extended operating conditions
  [Figure: sample aerial imagery with optic-flow vectors]
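Optic flow of the kind visualized above rests on the brightness-constancy equation Ix·v + It ≈ 0. A minimal 1-D gradient-based (Lucas-Kanade-style) sketch, offered as a generic textbook illustration rather than any of the program's specific bio-inspired algorithms:

```python
def gradient_flow_1d(frame0, frame1):
    """Estimate a single translation between two 1-D 'frames' by least
    squares on the brightness-constancy equation Ix * v + It = 0."""
    num = den = 0.0
    for i in range(1, len(frame0) - 1):
        ix = (frame0[i + 1] - frame0[i - 1]) / 2.0  # spatial gradient
        it = frame1[i] - frame0[i]                  # temporal gradient
        num += ix * it
        den += ix * ix
    return -num / den

# A smooth intensity ramp, moved right by one pixel between frames.
frame0 = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
frame1 = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]

v = gradient_flow_1d(frame0, frame1)  # estimated shift, in pixels
```

For navigation, many such local estimates across the image form the flow field drawn over the aerial imagery, from which ego-motion can be inferred.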

  23. A Few of Many Benefits Arising from 'Local-Sensing' Navigation
  Agile Autonomous Flight • Ability to navigate without external aid • Can fly in complex environments without extensive mission planning • Self-awareness of surroundings and other movers • Can build detailed 3D maps, perform wide-area 3D autonomous target search, and generate coordinates for active sensor cueing
  Indoor Autonomous Agents • Ability to self-navigate without external sources • Can explore a complex environment with no a-priori map • Self-awareness of potential threats, with the ability to "hide" • Can obtain 3D maps of denied targets for mission planning, and possibly the ability to conduct functional defeat of denied targets

  24. Summary • Fulfilling the Long-Term Challenge of "Finding and Tracking" depends on sensing, mathematical-statistical signals analysis, data fusion and bio-mimetics • Collaboration between AFRL, DARPA and national agencies to achieve ATR in our lifetime • AFOSR acts as a sparkplug for new methodologies in imaging science that provide the leading edge of surveillance-systems development • Early support and encouragement for the most promising investigators working on problems of critical relevance • The benefits to DoD of a research focus managed by AFOSR should reach beyond a single research generation
