
Variations on the Kalman Filter and Multitarget Tracking Issues



Presentation Transcript


  1. Variations on the Kalman Filter and Multitarget Tracking Issues ECE 7251: Spring 2004 Lecture 18 2/18/04 Prof. Aaron D. Lanterman School of Electrical & Computer Engineering Georgia Institute of Technology AL: 404-385-2548 <lanterma@ece.gatech.edu>

  2. The Setup for the Extended KF • State equation: x_{k+1} = f(x_k) + w_k • Measurement equation: z_k = h(x_k) + v_k • Process “noise” w_k has covariance Q_k; measurement noise v_k has covariance R_k; the two are uncorrelated with each other and for different k • Initial guess x̂_{0|0}, made before taking any data, with covariance P_{0|0} indicating confidence in the initial guess

  3. The Extended Kalman Filter • In the state update, go ahead and use the nonlinear models for predicting the state and data • Use a local linearization of f and h in computing the covariance update and the Kalman gain: x̂_{k+1|k} = f(x̂_{k|k}), P_{k+1|k} = F_k P_{k|k} F_k^T + Q_k, K = P_{k+1|k} H^T (H P_{k+1|k} H^T + R_{k+1})^{-1}, x̂_{k+1|k+1} = x̂_{k+1|k} + K (z_{k+1} − h(x̂_{k+1|k})), P_{k+1|k+1} = (I − K H) P_{k+1|k}
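One predict/update cycle of the EKF can be sketched as below (a minimal sketch assuming additive noise and a discrete-time model; the function and argument names are my own, not from the lecture):

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    f, h are the (possibly nonlinear) dynamics and measurement maps;
    F_jac, H_jac return their Jacobians at a given state.
    """
    # Predict: propagate the state through the nonlinear model,
    # but propagate the covariance through the local linearization.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update: the innovation uses the nonlinear measurement map,
    # while the gain uses the linearized measurement matrix.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With linear f and h (constant Jacobians), this reduces to the ordinary Kalman filter.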

  4. Linearizing the Dynamics Model • Slope of the tangent plane for the dynamics is the Jacobian of f, evaluated at the current state estimate: F_k = ∂f/∂x evaluated at x = x̂_{k|k}

  5. Linearizing the Measurements • Slope of the tangent plane for the measurements is the Jacobian of h, evaluated at the predicted state: H_{k+1} = ∂h/∂x evaluated at x = x̂_{k+1|k}
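When analytic Jacobians of f or h are tedious to derive, a central-difference approximation is a common stand-in (a sketch; `numerical_jacobian` is a hypothetical helper, not something from the lecture):

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Central-difference approximation of the Jacobian of func at x.
    Column j is (func(x + eps*e_j) - func(x - eps*e_j)) / (2*eps)."""
    x = np.asarray(x, dtype=float)
    m = np.asarray(func(x)).size
    J = np.zeros((m, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps  # perturb one state component at a time
        J[:, j] = (np.asarray(func(x + dx)) - np.asarray(func(x - dx))) / (2 * eps)
    return J
```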

  6. Using Polar Radar Measurements in EKF • Nonlinear measurement mapping: r = sqrt(x^2 + y^2), θ = atan2(y, x) (atan2 checks to make sure the answer is in the correct quadrant) • Local linearization for EKF (constant-velocity state [x, ẋ, y, ẏ]): H = [x/r, 0, y/r, 0; −y/r^2, 0, x/r^2, 0]
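In code, the polar mapping and its linearization might look like this (assuming the constant-velocity state ordering [x, vx, y, vy]):

```python
import numpy as np

def polar_measurement(state):
    """Map a constant-velocity state [x, vx, y, vy] to radar range/bearing.
    atan2 performs the quadrant check mentioned on the slide."""
    x, _, y, _ = state
    return np.array([np.hypot(x, y), np.arctan2(y, x)])

def polar_jacobian(state):
    """Local linearization of the polar measurement for the EKF."""
    x, _, y, _ = state
    r2 = x * x + y * y
    r = np.sqrt(r2)
    return np.array([
        [x / r,   0.0, y / r,  0.0],   # d(range)/d(state)
        [-y / r2, 0.0, x / r2, 0.0],   # d(bearing)/d(state)
    ])
```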

  7. Problems with the EKF • Even if process noise and measurement noise are Gaussian, since there are nonlinear transformations, the true “Bayesian posterior” density is not Gaussian • No longer optimal in the linear MMSE sense (a nice property the original Kalman filter had) • Only as good as the linearization; if the estimate gets too far off, the linearization is not good, and we get EKF divergence which is horrible, bad, unpleasant, terrible, and devastating. • Covariance estimates are often overoptimistic

  8. Combining Different State Models • A constant velocity model • Tracks well for straight paths • But has trouble catching up with target maneuvers • A constant acceleration or Singer model • Handles maneuvers better • But may give wobbly tracks when the target is really flying straight • Idea: try to use several different models • Switch between models, say with different covariance matrices or different orders, by some criterion • Run several Kalman filters in parallel, and let them interact

  9. Interacting Multiple Model • Assume transition probabilities of target switching from one model to another are known • Run Kalman filters for the different models in parallel (some may be extended, some not) • Output of different Kalman filters are mixed based on estimates of model probabilities before being fed back into the Kalman recursion • Estimates of model probabilities continuously updated
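The model-probability bookkeeping at the heart of IMM can be sketched as follows (a minimal sketch: the transition matrix, model probabilities, and per-model likelihoods are hypothetical inputs, and the bank of Kalman filters itself is omitted):

```python
import numpy as np

def imm_mixing_probs(mu, Pi):
    """Mixing probabilities mu_{i|j}: probability the target used model i
    on the last step, given it uses model j now (Pi[i, j] = P(i -> j))."""
    c = mu @ Pi                           # predicted model probabilities c_j
    return (Pi * mu[:, None]) / c[None, :], c

def imm_update_model_probs(c, likelihoods):
    """Fold each filter's measurement likelihood into the model probabilities."""
    mu = c * likelihoods
    return mu / mu.sum()                  # renormalize to a valid distribution
```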

  10. Multitarget Multisensor Tracking • Problem: Don’t know which measurements go with which targets • Can have false alarms and missing detections • Targets enter and leave the scene at unknown times • Optimal solution has O(M!) type complexity • Association problem is “NP-hard”, meaning it’s as hard as many well-known problems (such as the Traveling Salesman) which have no known O(M^2) solution, or O(M^5) solution, or O(M^100) solution… • Suboptimal solution must be used in practice

  11. MTMS Ex. 1: Angle-Only Measurements • Electronic Support Measure (ESM) and Infrared Search and Track (IRST) sensors provide angle information, but no range • Can use triangulation to initialize estimates and multisensor multitarget algorithms to track • Ghost targets are a difficult problem

  12. Ghosts in Triangulation Systems • Ex: Two targets with two sensors yields “ghosting” ambiguity • Ideally, could solve by adding a third sensor • Only declare a target where 3 beams intersect
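The two-sensor, two-target ghosting geometry can be reproduced numerically (the sensor and target positions below are made up for illustration): intersecting every bearing from one sensor with every bearing from the other yields four candidate positions, only two of which are real.

```python
import numpy as np

def bearing_line_intersection(p1, theta1, p2, theta2):
    """Intersect two bearing rays p1 + t*d1 and p2 + s*d2."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]),
                           np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t * d1

# Two sensors and two true targets give 2 x 2 = 4 bearing intersections;
# two of them are ghosts.
sensors = [np.array([0.0, 0.0]), np.array([10.0, 0.0])]
targets = [np.array([3.0, 4.0]), np.array([7.0, 5.0])]
bearings = [[np.arctan2(t[1] - s[1], t[0] - s[0]) for t in targets]
            for s in sensors]
candidates = [bearing_line_intersection(sensors[0], bearings[0][i],
                                        sensors[1], bearings[1][j])
              for i in range(2) for j in range(2)]
```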

  13. Ghosts Due to Measurement Errors • In reality • Angle measurements will be subject to errors • May be missing measurements • Makes locating and tracking much harder

  14. MTMS Ex. 2: Passive Coherent Location • Can track using commercial FM radio or television stations as the illuminator • Examples • Lockheed Martin’s Silent Sentry • Demonstrator systems built by DERA and NATO • Low frequency, low bandwidth, continuous wave • Poor angle and range measurement • Excellent Doppler measurement • Can use multilateration to initialize estimates • Each transmitter-receiver pair can be thought of as a sensor

  15. Ghosts in Multilateration Systems • A range measurement by a transmitter-receiver pair in a bistatic system places the target on an ellipse • Subject to similar ghosting problems as in triangulation systems
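The ellipse claim is easy to check numerically: the bistatic range (transmitter-to-target plus target-to-receiver path length) is constant on an ellipse whose foci are the transmitter and receiver. The geometry below is made up for illustration:

```python
import numpy as np

def bistatic_range(target, tx, rx):
    """Transmitter-to-target plus target-to-receiver path length.
    All target positions with the same measured value lie on an
    ellipse with the transmitter and receiver as foci."""
    target, tx, rx = (np.asarray(p, float) for p in (target, tx, rx))
    return np.linalg.norm(target - tx) + np.linalg.norm(target - rx)
```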

  16. Approaches to Multitarget Tracking (1) • Almost all algorithms use an initial “gating” stage, throwing out wildly unlikely associations • Define an ellipsoid around the predicted state estimate based on the prediction covariance • Throw out associations with measurements that fall outside this ellipsoid • Ad-hoc association algorithms • If targets are few and spaced far apart, almost anything reasonable will work! • Symmetric Measurement Equations • Clever approach: make a new set of “pseudomeasurements” which does not depend on the associations! • Problem becomes highly nonlinear; ordinary EKF has trouble handling it • No easy way to handle track initialization • Seem to be of solely theoretical interest at present
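The ellipsoidal gate amounts to a Mahalanobis-distance test on the innovation against a chi-square threshold; a minimal sketch (the default 9.21 is roughly the 99% chi-square quantile for a 2-D measurement):

```python
import numpy as np

def in_gate(z, z_pred, S, gamma=9.21):
    """Accept measurement z if its Mahalanobis distance from the predicted
    measurement z_pred (innovation covariance S) falls inside the gate.
    gamma should be a chi-square quantile matching the measurement dimension;
    9.21 is approximately the 99% point for 2 degrees of freedom."""
    nu = np.asarray(z, float) - np.asarray(z_pred, float)
    d2 = nu @ np.linalg.solve(S, nu)   # squared Mahalanobis distance
    return d2 <= gamma
```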

  17. Approaches to Multitarget Tracking (2) • Joint Probabilistic Data Association • Handles association on a scan-by-scan basis • Makes “soft” instead of “hard” decisions; several measurements in a single scan may contribute to the state estimate of a particular target • Good combination of ease of implementation and high performance • Multiple Hypothesis Tracking • Makes hard associations based on associating several scans into the past • Enumerates hypotheses as branching trees; to keep the computation from growing too large, prunes the less likely ones • Difficult to program; data structures are complicated
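The “soft” weighting idea can be illustrated with single-target PDA, a simplified cousin of JPDA (a sketch only: the detection probability and clutter density values are hypothetical, and the coupling between targets that full JPDA handles is omitted):

```python
import numpy as np

def pda_weights(nus, S, p_d=0.9, lam=1e-3):
    """Simplified single-target PDA association weights.
    beta[0] weights the 'no gated measurement came from the target'
    hypothesis; beta[i] weights measurement i by its Gaussian innovation
    likelihood. nus are innovations, S the innovation covariance,
    p_d a detection probability, lam a clutter density (both assumed)."""
    S = np.asarray(S, float)
    Sinv = np.linalg.inv(S)
    norm = 1.0 / np.sqrt((2 * np.pi) ** S.shape[0] * np.linalg.det(S))
    like = np.array([p_d * norm * np.exp(-0.5 * nu @ Sinv @ nu) for nu in nus])
    beta = np.concatenate(([lam * (1.0 - p_d)], like))
    return beta / beta.sum()   # soft weights summing to one
```

Each weight multiplies its measurement's contribution to the state update, instead of a hard pick-one decision.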

  18. Appendix: Constrained Optimization Approach • Cast the multisensor-multitarget problem in terms of a constrained optimization problem • Advantage: Lots of work being done on related problems by the operations research community • Find a good (but not necessarily the best) solution with “Lagrangian relaxation” techniques • Two main groups pushing this approach: • Univ. of Connecticut (Deb, Pattipati, Bar-Shalom, etc.) • Colorado State Univ./Numerica, Inc. (Aubrey Poore) • Ben Slocumb, a colleague at Numerica (formerly at GTRI), tells me that this is the Way To Go for big problems

  19. Constrained Optimization Notation • Suppose we are processing n past scans from a single sensor, or current scans from n different sensors • M_k indicates the number of measurements made on scan k • Assignment variables z_{i1 i2 … in} • From left to right, positions in the subscript indicate scans; the number in each position indicates the measurement index within that scan • Example: z_{3,0,2} = 1 • Means observation 3 on scan 1 and observation 2 on scan 3 belong to the same source • A zero indicates a missed detection, so here the source was missed on scan 2 Source: Blackman & Popoli, Sec. 7.3.1

  20. Constrained Optimization Notation (cont’d) • Calculate “costs” c_{i1 i2 … in} • Negative log-likelihood of the candidate association • Usually obtained from the means and covariances of a (possibly extended) Kalman filter • Want to minimize the total cost • Subject to the constraint that each measurement belongs to at most one track

  21. Constrained Optimization Formulation Minimize the total cost ∑_{i1=0}^{M1} ⋯ ∑_{in=0}^{Mn} c_{i1⋯in} z_{i1⋯in} over binary z, subject to, for each scan k and each measurement ik = 1, …, M_k, the constraint that the sum of z_{i1⋯in} over all the other indices equals 1 (each measurement is accounted for by exactly one association)

  22. What Was That Last Slide Saying??? • Suppose M1 = M3 = 3 and M2 = 2 • Means we received 3 measurements on scans 1 and 3, but only 2 measurements on scan 2 • Constraint equation for the first measurement on the first scan is ∑_{i2=0}^{2} ∑_{i3=0}^{3} z_{1,i2,i3} = 1 • That just says that any valid association must take that first measurement on the first scan into account, and that it can’t be claimed by more than one track Source: Blackman & Popoli, pp. 408-409
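Enumerating the terms of that constraint for the slide's scan sizes makes the bookkeeping concrete (index 0 in a scan position means a missed detection):

```python
# Scan sizes from the slide: M1 = 3, M2 = 2, M3 = 3.
M = [3, 2, 3]

def constraint_terms_for(first_scan_meas):
    """Index triples (i1, i2, i3) of every assignment variable z that
    claims measurement `first_scan_meas` on scan 1; the constraint says
    exactly one of these z variables equals 1."""
    return [(first_scan_meas, i2, i3)
            for i2 in range(M[1] + 1)     # 0..M2, 0 = missed on scan 2
            for i3 in range(M[2] + 1)]    # 0..M3, 0 = missed on scan 3

terms = constraint_terms_for(1)           # (M2 + 1) * (M3 + 1) = 12 terms
```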

  23. Relaxed Problem • Minimize the original cost plus Lagrange-multiplier terms that charge for violations of the relaxed constraint sets, subject only to the remaining (unrelaxed) constraints • The resulting two-dimensional assignment problem can be solved in O(n^3) time

  24. Interpretation of Lagrange Multipliers • The Lagrange multiplier u_{k,ik} is responsible for punishing misuse of measurement ik from the kth scan • Example interpretation: • A positive u_{4,2} penalizes using measurement #2 from scan 4 more than once • A negative u_{4,2} penalizes not using measurement #2 from scan 4 at all • Must iteratively refine the Lagrange multipliers to satisfy the constraints • Extraordinarily complicated • Accounts for main differences between algorithms

  25. The Relaxation Algorithm • Initialize Lagrange multipliers to zero • Solve relaxed problem • Compute cost (without the Lagrange multiplier part) of the solution to the relaxed problem to get a “lower cost bound” • Enforce constraints on the relaxed solution (can use a suboptimal algorithm to do this) • Compute cost of the feasible solution to get an “upper cost bound” • If the difference between the upper and lower cost bounds is sufficiently small, • declare the solution good enough and stop • Otherwise, adjust the Lagrange multipliers and go back to Step 2
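The loop above can be illustrated on a toy 2-D assignment problem, relaxing the column constraints with a subgradient multiplier update (a sketch with a greedy repair step, not a production algorithm; all names are my own):

```python
import numpy as np

def enforce_feasible(C, cols):
    """Greedy repair: if two rows claim the same column, move the
    later row to its cheapest still-free column."""
    n = C.shape[0]
    taken, out = set(), np.empty(n, dtype=int)
    for i in range(n):
        j = int(cols[i])
        if j in taken:
            j = min((k for k in range(n) if k not in taken),
                    key=lambda k: C[i, k])
        taken.add(j)
        out[i] = j
    return out

def relaxation_demo(C, iters=50, step=0.2):
    """Lagrangian relaxation on a tiny 2-D assignment problem: each row
    takes exactly one column, each column at most one row. The column
    constraints are relaxed with multipliers u."""
    n = C.shape[0]
    u = np.zeros(n)
    best_lower, best_upper = -np.inf, np.inf
    for _ in range(iters):
        # Step 2: the relaxed problem decouples by row -> cheapest column each.
        cols = np.argmin(C + u, axis=1)
        # Step 3: lower cost bound from the relaxed solution.
        lower = (C + u)[np.arange(n), cols].sum() - u.sum()
        best_lower = max(best_lower, lower)
        # Steps 4-5: repair to a feasible assignment -> upper cost bound.
        feas = enforce_feasible(C, cols)
        best_upper = min(best_upper, C[np.arange(n), feas].sum())
        # Step 6: stop when the bound gap is small enough.
        if best_upper - best_lower < 1e-9:
            break
        # Step 7: subgradient update raises the price of over-used columns.
        counts = np.bincount(cols, minlength=n)
        u = u + step * (counts - 1)
    return best_lower, best_upper
```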

  26. Papers by the Colorado Group • A.B. Poore and N. Rijavec, “A Lagrangian Relaxation Algorithm for Multidimensional Assignment Problems Arising from Multitarget Tracking,” SIAM J. of Optimization, vol. 3, no. 3, pp. 544-563, August 1993. • Relax an N-dimensional problem to an (N-1)-dimensional problem • Proceed recursively • A.B. Poore and A.J. Robertson, “A New Lagrangian Relaxation Based Algorithm for a Class of Multidimensional Assignment Problems,” Computational Optimization and Applications, vol. 8, pp. 129-150, 1997. • Claims substantial improvement over the 1993 algorithm • Relax N-2 constraints at once to make a 2-dimensional problem • Iterate to improve the solution

  27. Papers by the Connecticut Group • T. Kirubarajan, H. Wang, Y. Bar-Shalom, and K.R. Pattipati, “Efficient Multisensor Fusion Using Multidimensional Data Association,” IEEE Trans. Aerospace and Electronic Systems, vol. 38, no. 2, pp. 386-398, April 2001. • R.L. Popp, K.R. Pattipati, and Y. Bar-Shalom, “m-Best S-D Assignment Algorithm with Application to Multitarget Tracking,” IEEE Trans. Aerospace and Electronic Systems, vol. 37, no. 1, pp. 22-39, January 2001. • S. Deb, M. Yeddanapudi, K.R. Pattipati, and Y. Bar-Shalom, “A Generalized S-dimensional Assignment for Multisensor-Multitarget State Estimation,” IEEE Trans. Aerospace and Electronic Systems, vol. 33, no. 2, pp. 523-538, April 1997. • K.R. Pattipati, S. Deb, Y. Bar-Shalom, and R.B. Washburn, “A New Relaxation Algorithm and Passive Sensor Data Association,” IEEE Trans. on Automatic Control, vol. 37, no. 2, pp. 198-213, February 1992.
