1. B. Tech. Project: Fragments Based Object Tracking. Guide: Prof. A. Mukerjee. Utkarsh Kumar Shah
Y6510
2. Object Tracking A method of following an object through successive image frames to determine its relative movement with respect to other objects.
3. Blobs and Objects Unknown regions in an image are often called blobs.
If we classify a blob according to our interest, it becomes an object, such as a vehicle or a person.
The features of a blob are compared with predefined object criteria to determine whether the blob belongs to a particular class of object, such as vehicle or person.
4. Objective There are various existing algorithms:
Mean shift tracking
Blob tracking
Contour tracking
Visual feature matching
Level set tracking
These tracking algorithms are limited and cannot handle some complex situations well, such as:
Non-rigid deformations
Rapid motion
Full, partial, or self occlusion
Multiple object tracking
Fragments based level set tracking, developed by P. Chockalingam and N. Pradeep, is one of the more robust object tracking algorithms.
5. Challenges in Object Tracking Occlusion
Static Occlusion
Dynamic Occlusion
6. Object Appearance in image
7. Real time constraint Given unlimited time, an exhaustive search could locate an object in the image frame reliably.
However, in most practical cases we must search only a small portion of the model space to reduce the computational burden.
8. Work done so far Background subtraction using a single Gaussian per pixel.
Read various papers on object tracking algorithms, of which a few are described below:
Mean shift tracking
Adaptive fragments based tracking using level sets
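The single-Gaussian background subtraction mentioned above can be sketched as follows. This is a minimal illustration, not the project code: each pixel keeps a running mean and variance, a pixel more than k standard deviations from its mean is flagged as foreground, and the learning rate alpha and threshold k are illustrative choices.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """Single-Gaussian background model: flag pixels that deviate by
    more than k standard deviations, then update the statistics only
    where the pixel still looks like background."""
    diff = frame - mean
    foreground = np.abs(diff) > k * np.sqrt(var)
    bg = ~foreground
    mean = np.where(bg, (1 - alpha) * mean + alpha * frame, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * diff**2, var)
    return mean, var, foreground

# Toy example: a static background with one changed pixel.
mean = np.full((4, 4), 100.0)
var = np.full((4, 4), 4.0)     # std = 2
frame = mean.copy()
frame[2, 2] = 130.0            # a moving object appears here
mean, var, fg = update_background(mean, var, frame)
print(fg[2, 2], fg[0, 0])      # only the changed pixel is foreground
```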
9. Mean Shift Tracking Objects are represented by their color-histograms.
It is an iterative algorithm that compares the histogram of the object in the current frame with the histograms of candidate regions in the next frame.
The aim is to find maximum correlation between the two histograms.
13. Object Model & Target Candidate Let b(xi) denote the color bin of the color at pixel location xi. Then the probability q of color u in the model is:
q_u = C Σ_i k(‖x_i‖²) δ[b(x_i) − u]
where C is a normalization constant and k is the kernel profile.
The probability p of color u in the target candidate centred at y, with bandwidth h, is:
p_u(y) = C_h Σ_i k(‖(y − x_i)/h‖²) δ[b(x_i) − u]
where C_h is the normalization constant.
δ is the Kronecker delta function. That is, a pixel x_i contributes to q_u and p_u(y) only if b(x_i) = u.
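A minimal sketch of the kernel-weighted color model q_u, assuming grayscale values, an Epanechnikov kernel profile k(r²) = max(1 − r², 0), and pixel positions already normalized to the unit disc; the bin count is an illustrative choice:

```python
import numpy as np

def color_model(pixels, positions, n_bins=8, vmax=256):
    """q_u: each pixel's bin b(x_i) receives a weight k(||x_i||^2) that
    decreases with distance from the region centre, then q is normalized
    so the constant C makes sum(q_u) = 1."""
    bins = (pixels * n_bins // vmax).astype(int)   # b(x_i)
    r2 = np.sum(positions**2, axis=1)              # ||x_i||^2
    k = np.maximum(1 - r2, 0)                      # Epanechnikov profile
    q = np.bincount(bins, weights=k, minlength=n_bins)
    return q / q.sum()

# Two dark pixels near the centre, one bright pixel near the edge:
pixels = np.array([10, 10, 200])
positions = np.array([[0.0, 0.0], [0.1, 0.0], [0.8, 0.0]])
q = color_model(pixels, positions)
print(q.round(3))   # the centre pixels' bin dominates the histogram
```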
14. Color Density Matching Using the Bhattacharyya coefficient:
ρ(p(y), q) = Σ_u √(p_u(y) q_u)
ρ is the cosine of the angle between the m-dimensional unit vectors (√p_1, …, √p_m) and (√q_1, …, √q_m).
Large ρ means a good color match.
For each image frame, find the y that maximizes ρ(p(y), q). This y is the location of the target.
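The Bhattacharyya coefficient is a one-liner over two normalized histograms; this small sketch uses made-up histograms to show that identical distributions score exactly 1:

```python
import numpy as np

def bhattacharyya(p, q):
    """rho(p, q) = sum_u sqrt(p_u * q_u): the cosine of the angle
    between the unit vectors (sqrt(p_1),...,sqrt(p_m)) and
    (sqrt(q_1),...,sqrt(q_m))."""
    return np.sum(np.sqrt(p * q))

q = np.array([0.5, 0.3, 0.2])
p_good = np.array([0.5, 0.3, 0.2])   # identical histogram
p_bad = np.array([0.0, 0.1, 0.9])    # very different histogram
print(bhattacharyya(p_good, q))  # 1.0 for identical distributions
print(bhattacharyya(p_bad, q))   # strictly smaller
```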
15. The Algorithm Given the model histogram q_u and the location y of the target in the previous frame:
1. Initialize the location of the target in the current frame as y
2. Compute p_u(y) and ρ(p(y), q)
3. Apply mean shift: compute the new location z as
z = Σ_i x_i w_i g(‖(y − x_i)/h‖²) / Σ_i w_i g(‖(y − x_i)/h‖²), with weights w_i = √(q_{b(x_i)} / p_{b(x_i)}(y)) and g = −k′
4. Compute p_u(z) and ρ(p(z), q)
5. While ρ(p(z), q) < ρ(p(y), q), set z ← ½(y + z) (to validate the target's new location)
6. If ‖z − y‖ is small enough, stop. Else set y ← z and go to step 2
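A minimal sketch of the iteration above (steps 1-4 and 6; the ρ-validation of step 5 is omitted for brevity). It assumes a grayscale frame and an Epanechnikov kernel, whose derivative g is constant, so the mean-shift location reduces to a plain weighted mean of the pixel coordinates; the window radius, bin count, and synthetic frame are illustrative:

```python
import numpy as np

def mean_shift_track(image, y, q, n_bins=8, h=5, n_iter=20, eps=0.5):
    """Iterate z = sum_i x_i w_i / sum_i w_i with w_i =
    sqrt(q_b(xi) / p_b(xi)(y)) until the shift ||z - y|| < eps."""
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    bins = (image.ravel() * n_bins // 256).astype(int)
    for _ in range(n_iter):
        d2 = np.sum((coords - y)**2, axis=1)
        inside = d2 < h * h                       # candidate window around y
        p = np.bincount(bins[inside], minlength=n_bins).astype(float)
        p /= p.sum()
        w = np.sqrt(q[bins[inside]] / np.maximum(p[bins[inside]], 1e-12))
        z = (coords[inside] * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(z - y) < eps:           # step 6: converged
            return z
        y = z
    return y

# Synthetic frame: a bright 3x3 blob centred at (12, 14) on a dark background.
img = np.zeros((30, 30), dtype=int)
img[11:14, 13:16] = 255
q = np.zeros(8); q[7] = 1.0       # model histogram: the target is purely bright
y = mean_shift_track(img, np.array([10.0, 10.0]), q)
print(y.round(1))                 # converges onto the blob centre
```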
16. Result and Problems
This is the result of mean shift tracking implemented by P. Guha at IIT Kanpur. (1) A car is marked by a maroon box and a rickshaw by a yellow box. (2) Due to occlusion by the car, the rickshaw is lost: the yellow box no longer contains an object. (3) The rickshaw is identified as a new object (marked by a blue box), while the yellow box now marks a person who was earlier marked by a red box.
17. Adaptive fragments based tracking Approach
Tracking Framework: The target and background are modeled as mixtures of Gaussians. A strength map is computed indicating the probability of each pixel belonging to the foreground.
Image Segmentation: Target is divided into multiple fragments.
Contour Extraction: Contour is extracted using level set implementation.
Update Mechanism: The fragments are automatically adapted to the image data, being selected by an efficient region-growing procedure and updated according to a weighted average of the past and present image statistics.
The extracted target boundaries are used to learn the dynamic shape of the target over time, enabling tracking to continue under total occlusion.
18. Tracking Framework Bayesian Formulation
Each fragment is characterized by separate Gaussian surface
The likelihood of an individual pixel I(x) is given by a Gaussian mixture model:
p(I(x)) = Σ_{j=1..M} π_j G(I(x); μ_j, Σ_j)
19.
π_j is the prior probability that the pixel was drawn from the jth fragment
M is the number of fragments in the target or background (depending on which model is evaluated)
And
G(I(x); μ_j, Σ_j) = (2π)^(−n/2) |Σ_j|^(−1/2) exp(−½ (I(x) − μ_j)ᵀ Σ_j⁻¹ (I(x) − μ_j))
where μ_j is the mean and Σ_j the n×n covariance matrix of the jth fragment, and (2π)^(−n/2) |Σ_j|^(−1/2) is the Gaussian normalization constant.
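The mixture likelihood can be evaluated directly from the formula; this sketch uses two hypothetical RGB fragments (one reddish, one bluish) with made-up priors, means, and covariances:

```python
import numpy as np

def gmm_likelihood(x, priors, means, covs):
    """p(x) = sum_j pi_j * G(x; mu_j, Sigma_j) with full n x n
    covariances, as in the fragment appearance model above."""
    n = len(x)
    total = 0.0
    for pi_j, mu, cov in zip(priors, means, covs):
        diff = x - mu
        norm = 1.0 / np.sqrt((2 * np.pi)**n * np.linalg.det(cov))
        total += pi_j * norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)
    return total

# Two hypothetical fragments: one reddish, one bluish.
priors = [0.6, 0.4]
means = [np.array([200.0, 30.0, 30.0]), np.array([30.0, 30.0, 200.0])]
covs = [np.eye(3) * 100.0, np.eye(3) * 100.0]
red_pixel = np.array([195.0, 35.0, 28.0])
blue_pixel = np.array([28.0, 32.0, 198.0])
# A reddish pixel scores higher under the higher-prior reddish fragment.
print(gmm_likelihood(red_pixel, priors, means, covs) >
      gmm_likelihood(blue_pixel, priors, means, covs))
```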
20. Strength Image The strength image is calculated using the log ratio of the probabilities:
S(x) = log( p_t(I(x)) / p_b(I(x)) )
where p_t and p_b are the target and background mixture likelihoods.
Positive values in the strength image indicate pixels that are more likely to belong to the target than to the background, and vice versa for negative values.
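The strength image is a pixelwise log ratio; a small epsilon (an implementation detail added here, not from the paper) guards against division by zero:

```python
import numpy as np

def strength_image(p_target, p_background, eps=1e-12):
    """S(x) = log(p_t(x) / p_b(x)); positive where the target is more
    likely, negative where the background is more likely."""
    return np.log((p_target + eps) / (p_background + eps))

p_t = np.array([[0.9, 0.1], [0.5, 0.01]])
p_b = np.array([[0.1, 0.9], [0.5, 0.99]])
S = strength_image(p_t, p_b)
print(S > 0)   # True exactly where the target likelihood dominates
```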
21. Region Segmentation Region growing algorithm:
do {
Pick a seed point that is not associated with any fragment
Grow the fragment from the seed point based on the similarity of the pixel's and its neighbors' appearance
Stop growing the fragment when no more similar pixels are present in the neighborhood of the fragment
} until all pixels are assigned
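The loop above can be sketched as a breadth-first flood fill. This is a simplified stand-in: it uses grayscale values, 4-connectivity, and a fixed intensity tolerance rather than the paper's appearance similarity measure:

```python
from collections import deque
import numpy as np

def grow_fragments(image, tol=10):
    """Repeatedly pick an unassigned seed, then BFS to 4-neighbours
    whose intensity differs from the seed by at most tol."""
    labels = np.full(image.shape, -1, dtype=int)
    next_label = 0
    for seed in zip(*np.where(labels < 0)):     # scan pixels in order
        if labels[seed] >= 0:
            continue                            # already absorbed by a fragment
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                        and labels[nr, nc] < 0
                        and abs(int(image[nr, nc]) - int(image[seed])) <= tol):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
        next_label += 1
    return labels

# A frame with two flat regions: left half dark, right half bright.
img = np.zeros((4, 6), dtype=int)
img[:, 3:] = 100
print(grow_fragments(img))   # two fragments, one per flat region
```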
22. Contour Extraction The region growing algorithm starts with a seed region and uses two kinds of representation:
An explicit representation using a singly linked list, L, which represents the boundary of the region being grown
An implicit representation, a level set function, initialized to distinguish the interior and exterior of the seed region
24. Update mechanism Update parameters of existing component
For each pixel we find the fragment that contributed most to its likelihood.
Then the statistics (mean and covariance) of each fragment are computed using its associated pixels.
The appearance parameters are then updated using a weighted average of the initial values and a function of the recent values
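The weighted-average update can be sketched as a simple blend; the learning rate rho is an illustrative parameter, and blending with the per-frame statistics stands in for the paper's "function of the recent values":

```python
import numpy as np

def update_fragment(mean, cov, pixels, rho=0.1):
    """Blend a fragment's stored mean/covariance with the statistics of
    the pixels assigned to it in the new frame."""
    new_mean = pixels.mean(axis=0)
    new_cov = np.cov(pixels, rowvar=False)
    mean = (1 - rho) * mean + rho * new_mean
    cov = (1 - rho) * cov + rho * new_cov
    return mean, cov

old_mean = np.array([100.0, 100.0, 100.0])
old_cov = np.eye(3) * 25.0
pixels = np.array([[110.0, 100.0, 90.0], [114.0, 104.0, 94.0],
                   [112.0, 102.0, 92.0]])
mean, cov = update_fragment(old_mean, old_cov, pixels)
print(mean.round(1))   # drifts 10% of the way toward the new statistics
```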
25. Finding new fragments:
If a new fragment is not adjacent to the object, it is added directly to the background model.
For fragments that are adjacent to the foreground, the motion cues of the new fragment and the object are compared.
If the Euclidean distance between the motion vectors of these two regions is less than a threshold, the fragment is classified as part of the object,
and the appearance and spatial information of the new fragment is added to the object model.
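The decision rule above fits in a few lines; the threshold value and the (dy, dx) motion vectors are illustrative:

```python
import numpy as np

def classify_fragment(adjacent_to_object, fragment_motion, object_motion,
                      threshold=2.0):
    """New fragments not touching the object go to the background;
    adjacent ones join the object only if their motion vector lies
    within `threshold` (Euclidean distance) of the object's motion."""
    if not adjacent_to_object:
        return "background"
    dist = np.linalg.norm(np.asarray(fragment_motion) -
                          np.asarray(object_motion))
    return "object" if dist < threshold else "background"

print(classify_fragment(False, (3.0, 1.0), (3.2, 0.8)))  # not adjacent
print(classify_fragment(True, (3.0, 1.0), (3.2, 0.8)))   # moves with object
print(classify_fragment(True, (0.0, 0.0), (3.2, 0.8)))   # static clutter
```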
26. Results
The target is tracked properly despite shape deformation and large unpredictable motion
27.
Results of the algorithm on the Elmo sequence. Accurate contours are extracted despite significant non-rigid deformations of the Elmo toy.
28.
Results of the algorithm on a sequence in which a person walks through a complex, colorful scene.
29. Work to be done Implement adaptive fragments based tracking algorithm.
Implementation platform: OpenCV
Evaluation process for the algorithm:
Object contours will be marked manually for all the objects in a set of frames.
We will run the mean shift algorithm on those frames and compute, for each object, the ratio of the area of overlap between the tracked region and the manually marked region to the area of the manually marked region.
Similarly, we will obtain the same ratio using the fragments based object tracking algorithm.
We then compare the two ratios; the larger ratio signifies more accurate object tracking.
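The evaluation ratio described above can be computed from boolean masks; the 10x10 masks here are made-up ground truth and tracker output:

```python
import numpy as np

def overlap_ratio(tracked_mask, manual_mask):
    """Ratio of the overlap between the tracked region and the manually
    marked region to the area of the manually marked region."""
    overlap = np.logical_and(tracked_mask, manual_mask).sum()
    return overlap / manual_mask.sum()

manual = np.zeros((10, 10), dtype=bool)
manual[2:8, 2:8] = True                 # 36-pixel ground-truth object
tracked = np.zeros((10, 10), dtype=bool)
tracked[3:8, 3:8] = True                # tracker recovered 25 of those pixels
print(overlap_ratio(tracked, manual))   # 25/36
```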
30. References [1] Prakash Chockalingam. Non-rigid multi modal object tracking using Gaussian mixture model, The Graduate School of Clemson University, PhD thesis, 2009.
[2] Dorin Comaniciu, Visvanathan Ramesh, Peter Meer. Kernel-based object tracking. IEEE Trans. Pattern Analysis and Machine Intelligence, 25(5):564-575, 2003.
[3] Prithwijit Guha. Unsupervised Concept Acquisition From Surveillance Videos. PhD thesis, Indian Institute of Technology Kanpur, 2009.
[4] Leow Wee Kheng. Mean shift tracking. Technical report, School of Computing, National University of Singapore.
[5] A. Mukerjee, P. Sateesh, P. Guha. Colour and feature based multiple object tracking under heavy occlusions. ICAPR, 2007.
[6] Nalin Pradeep, Prakash Chockalingam. Adaptive fragments based tracking of non-rigid objects using level sets. International Conference on Computer Vision (ICCV), 2009.
[7] R. Venkatesh Babu. Mean shift object tracking. Technical report, Indian Institute of Science, Bangalore, 2007.
31. THANK YOU