
Tracking by Sampling Trackers

This paper proposes a visual tracking approach that robustly tracks targets in real-world scenarios by sampling multiple trackers. The approach involves sampling the state space using Monte Carlo sampling and sampling various trackers to adapt to changing tracking environments. The paper also presents methods for sampling appearance models, motion models, state representations, and observation types. The overall procedure and qualitative and quantitative results are provided.


Presentation Transcript


  1. Tracking by Sampling Trackers Junseok Kwon* and Kyoung Mu Lee, Computer Vision Lab., Dept. of EECS, Seoul National University, Korea. Homepage: http://cv.snu.ac.kr

  2. Goal of Visual Tracking • Robustly track the target in real-world scenarios (e.g., from Frame #1 through Frame #43).

  3. Bayesian Tracking Approach • Observation cues (e.g., intensity and edge) are combined to obtain the Maximum a Posteriori (MAP) estimate of the target state.

  4. State Sampling • The state space spans x position, y position, and scale. • Guided by the visual tracker, the MAP estimate is obtained by Monte Carlo sampling of this state space.
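The MAP-by-sampling idea above can be sketched in a few lines: draw candidate states (x position, y position, scale), score each with a likelihood, and keep the best. This is a minimal illustration, not the paper's sampler; the uniform proposal ranges and the toy likelihood are assumptions.

```python
import random

def monte_carlo_map(likelihood, n_samples=1000, seed=0):
    """Approximate the MAP state by drawing candidate states
    (x, y, scale) and keeping the one with the highest score.

    `likelihood` is any function mapping a state to a score;
    the uniform proposal ranges below are illustrative only.
    """
    rng = random.Random(seed)
    best_state, best_score = None, float("-inf")
    for _ in range(n_samples):
        state = (rng.uniform(0, 640),    # x position
                 rng.uniform(0, 480),    # y position
                 rng.uniform(0.5, 2.0))  # scale
        score = likelihood(state)
        if score > best_score:
            best_state, best_score = state, score
    return best_state

# Toy likelihood peaked near (320, 240, 1.0)
peak = lambda s: -((s[0] - 320) ** 2 + (s[1] - 240) ** 2 + 100 * (s[2] - 1.0) ** 2)
best = monte_carlo_map(peak)
```

In a real tracker the proposal would be guided by the motion model rather than uniform, which is exactly the weakness the next slides address: a fixed tracker proposes poorly when the environment changes.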

  5. Problem of Previous Works • The tracking environment changes over time, but a fixed visual tracker cannot reflect these changes well. • As a result, conventional trackers have difficulty obtaining good samples.

  6. Our Approach: Tracker Sampling • In addition to state sampling, we sample the tracker itself: trackers #1 through #M form a tracker space, and each sampled tracker performs state sampling over x position, y position, and scale.

  7. Two Challenges • How is the tracker space defined? • When, and which, tracker should be sampled?

  8. Challenge 1: Tracker Space • No previous work has attempted to define a tracker space. • Designing such a space is difficult because a visual tracker is hard to describe formally.

  9. Bayesian Tracking Approach • Going back to the Bayesian tracking formulation and its updating rule.

  10. Bayesian Tracking Approach • What are the important ingredients of a visual tracker? 1. Appearance model 2. Motion model 3. State representation type 4. Observation type

  11. Tracker Space Appearance model Motion model State representation Observation

  12. Challenge 2: Tracker Sampling • A tracker #m in the tracker space combines an appearance model, a motion model, a state representation type, and an observation type. • When, and which, tracker should be sampled to reflect the current tracking environment?

  13. Reversible Jump MCMC • We use the RJ-MCMC method for tracker sampling: "Add" and "Delete" moves maintain the sets of sampled appearance models, motion models, state representation types, and observation types, which together define the sampled basic trackers.
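The Add/Delete mechanism can be sketched as a simplified Metropolis-style loop over sets of models. This is an illustrative simplification, not the paper's exact RJ-MCMC chain: the symmetric move choice, the cap `max_models`, and the stand-in `score` function are all assumptions.

```python
import random

def rjmcmc_model_set(candidates, score, n_iters=200, max_models=4, seed=0):
    """Maintain a bounded set of sampled models via "Add" and
    "Delete" moves, accepting each proposal with a ratio based
    on `score(model_set)` (a stand-in for the likelihood term)."""
    rng = random.Random(seed)
    current = {candidates[0]}
    for _ in range(n_iters):
        proposal = set(current)
        unused = [c for c in candidates if c not in proposal]
        if unused and len(proposal) < max_models and rng.random() < 0.5:
            proposal.add(rng.choice(unused))                 # "Add" move
        elif len(proposal) > 1:
            proposal.discard(rng.choice(sorted(proposal)))   # "Delete" move
        ratio = min(1.0, score(proposal) / max(score(current), 1e-12))
        if rng.random() < ratio:                             # accept/reject
            current = proposal
    return current

# Hypothetical usage: eight candidate models, scoring favours even ids
candidates = list(range(8))
score = lambda s: 1.0 + sum(1 for m in s if m % 2 == 0)
result = rjmcmc_model_set(candidates, score)
```

The same loop structure is reused four times in the paper's framework, once per ingredient (appearance, motion, state representation, observation), with ingredient-specific acceptance criteria described on the following slides.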

  14. Sampling of Appearance Model • Make appearance-model candidates using Sparse Principal Component Analysis (SPCA)*. • The candidates are principal components of the target appearance. * A. d'Aspremont et al. A direct formulation for sparse PCA using semidefinite programming. SIAM Review, 2007.

  15. Sampling of Appearance Model • Our method keeps a limited number of models. • An appearance model is accepted with an acceptance ratio.

  16. • The accepted model increases the total likelihood score over recent frames when it is adopted as the target reference.
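The criterion above, favouring a proposed model when it raises the total likelihood over recent frames, can be sketched as an acceptance ratio. The exponential form below is an illustrative assumption, not the paper's exact expression.

```python
import math

def acceptance_ratio(new_scores, old_scores):
    """Acceptance ratio for a proposed appearance model, based on
    the gain in total likelihood over recent frames. A positive
    gain yields ratio 1.0 (always accept); a negative gain yields
    a probability below 1.0 (accept occasionally, as in MCMC)."""
    gain = sum(new_scores) - sum(old_scores)
    return min(1.0, math.exp(gain))
```

Allowing occasional acceptance of worse models keeps the chain exploring, which is the usual MCMC rationale for not using a hard threshold.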

  17. Sampling of Motion Model • Make candidates using K-Harmonic Means clustering (KHM)*. • The candidates are the mean vectors of the clusters of motion vectors. * B. Zhang, M. Hsu, and U. Dayal. K-harmonic means - a data clustering algorithm. HP Technical Report, 1999.
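The candidate-building step, clustering recent motion vectors and taking the cluster means as motion models, can be sketched as follows. For brevity this sketch substitutes plain k-means for the paper's K-Harmonic Means; the input format (a list of 2D motion vectors) is an assumption.

```python
import random

def motion_model_candidates(motion_vectors, k=2, n_iters=20, seed=0):
    """Build motion-model candidates as the cluster means of recent
    motion vectors, using plain k-means in place of KHM."""
    rng = random.Random(seed)
    centers = rng.sample(motion_vectors, k)
    for _ in range(n_iters):
        # Assign each vector to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in motion_vectors:
            j = min(range(k),
                    key=lambda i: (v[0] - centers[i][0]) ** 2
                                + (v[1] - centers[i][1]) ** 2)
            clusters[j].append(v)
        # Recompute each center as its cluster's mean vector.
        centers = [
            (sum(v[0] for v in c) / len(c), sum(v[1] for v in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers
```

Each returned mean vector plays the role of one candidate motion model, to be accepted or rejected by the criterion on the next slides.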

  18. Sampling of Motion Model • Our method keeps a limited number of models. • A motion model is accepted with an acceptance ratio.

  19. • The accepted model decreases the total clustering error of the motion vectors over recent frames when it is set to the mean vector of its cluster.

  20. Sampling of State Representation • Make candidates using the Vertical Projection of Edge (VPE)*. • The candidates describe the target as different combinations of multiple fragments (e.g., fragments characterized by position, intensity, and edge cues). * F. Wang, S. Yu, and J. Yang. Robust and efficient fragments-based tracking using mean shift. Int. J. Electron. Commun., 64(7):614–623, 2010.

  21. Sampling of State Representation • Our method keeps a limited number of types. • A state representation type is accepted with an acceptance ratio.

  22. • The accepted type reduces the total variance of the target appearance in each fragment over recent frames.
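The variance criterion above can be made concrete with a small helper: sum the per-fragment appearance variance over recent frames, and prefer the fragment layout with the smallest total. The input format (each fragment as a list of intensity samples across frames) is an assumed simplification.

```python
def total_fragment_variance(frames_by_fragment):
    """Total appearance variance across fragments: a lower value
    means each fragment's appearance stayed more stable over the
    recent frames, which favours that fragment layout."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return sum(var(frag) for frag in frames_by_fragment)
```

A layout that splits the target along its actually-stable regions scores near zero; a layout whose fragments mix changing and stable pixels scores higher and is less likely to be accepted.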

  23. Sampling of Observation • Make candidates using a Gaussian Filter Bank (GFB)*. • The candidates are the responses of multiple Gaussian filters with different variances. * J. Sullivan, A. Blake, M. Isard, and J. MacCormick. Bayesian object localisation in images. IJCV, 44(2):111–135, 2001.
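A Gaussian filter bank of the kind described above can be sketched for a 1D intensity profile: one smoothed response per filter variance. The sigmas, the 3-sigma kernel radius, and the clamped border handling are illustrative assumptions.

```python
import math

def gaussian_filter_bank(signal, sigmas=(1.0, 2.0, 4.0)):
    """Return one response per Gaussian filter: the signal convolved
    with a normalized Gaussian kernel of the given sigma, using
    clamped (replicate) borders."""
    responses = []
    for sigma in sigmas:
        radius = int(3 * sigma)
        kernel = [math.exp(-0.5 * (i / sigma) ** 2)
                  for i in range(-radius, radius + 1)]
        norm = sum(kernel)
        kernel = [k / norm for k in kernel]
        out = []
        for i in range(len(signal)):
            acc = 0.0
            for j, k in enumerate(kernel):
                idx = min(max(i + j - radius, 0), len(signal) - 1)
                acc += k * signal[idx]
            out.append(acc)
        responses.append(out)
    return responses
```

Each response is a candidate observation type; larger sigmas discard fine detail, which can help or hurt depending on how cluttered the background currently is.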

  24. Sampling of Observation • Our method keeps a limited number of types. • An observation type is accepted with an acceptance ratio.

  25. • The accepted type makes foreground observations more similar to one another, and foregrounds more distinct from backgrounds, over recent frames.

  26. Overall Procedure • Tracker sampling draws trackers #1 through #M from the tracker space. • Each sampled tracker performs state sampling over x position, y position, and scale, and the trackers interact with one another.

  27. Qualitative Results

  28. Qualitative Results Iron-man dataset

  29. Qualitative Results Matrix dataset

  30. Qualitative Results Skating1 dataset

  31. Qualitative Results Soccer dataset

  32. Quantitative Results • Average center location errors in pixels. MC: Khan et al. MCMC-based particle filtering for tracking a variable number of interacting targets. PAMI, 2005. IVT: Ross et al. Incremental learning for robust visual tracking. IJCV, 2007. MIL: Babenko et al. Visual tracking with online multiple instance learning. CVPR, 2009. VTD: Kwon et al. Visual tracking decomposition. CVPR, 2010.

  33. Summary • Visual tracker sampler: a new framework that samples the visual tracker itself as well as the state. • An efficient sampling strategy for sampling the visual tracker.

  34. http://cv.snu.ac.kr/paradiso
