Precise Object Tracking under Deformation
Prepared by: Eng. Mohamed Hassan, EAEA
Supervised by: Prof. Dr. Hussien Konber, Al Azhar University; Prof. Dr. Mohamoud Ashour, EAEA; Dr. Ashraf Aboshosha, EAEA
Submitted to: Communication & Electronics Dept., Al Azhar University
Visual tracking applications
Block diagram of object tracking system
Image deformation types
Geometrical Modeling and pose estimation
Conclusion and Future Work
The main objectives of this research work are to:
Overcome the imprecision in object tracking caused by different deformation sources such as noise, change of illumination, blurring, scaling and rotation.
Develop a three-dimensional (3D) geometrical model to determine the current pose of an object and predict its future location based on an FIR model.
Present a robust ranging technique for tracking a visual target instead of relying on traditional, expensive ranging sensors.
Change of illumination.
Definition: noise is any measurement that is not part of the phenomenon of interest. Images are affected by different types of noise:
The following digital filters have been employed for denoising
The process consists simply of moving the filter mask from point to point in the image. At each point (x, y) the response of the filter is calculated using a predefined relationship: the result is the sum of products of the mask coefficients with the corresponding image pixels directly under the mask.
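The mask operation described above can be sketched as follows (a minimal NumPy sketch with an illustrative 3x3 averaging mask; border pixels are handled by edge replication, one of several common choices):

```python
import numpy as np

def apply_mask(image, mask):
    """Slide the filter mask from point to point; at each pixel the response
    is the sum of products of the mask coefficients with the pixels under it."""
    m, n = mask.shape
    pad_y, pad_x = m // 2, n // 2
    # Replicate border pixels so the mask stays fully inside the padded image.
    padded = np.pad(image.astype(float), ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    out = np.empty(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + m, x:x + n] * mask)
    return out

# 3x3 averaging (mean) mask, a simple linear smoothing filter.
mean_mask = np.ones((3, 3)) / 9.0
img = np.array([[10, 10, 10],
                [10, 100, 10],
                [10, 10, 10]], dtype=float)
smoothed = apply_mask(img, mean_mask)
```

The center pixel becomes the average of its 3x3 neighborhood, which illustrates how linear smoothing suppresses isolated noise spikes.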
The filtering operation is based conditionally on the values of the pixels in the neighborhood under consideration.
Order-statistics filters are nonlinear spatial filters whose response is based on ordering (ranking) the pixels in the neighborhood.
Nonlinear Spatial Filters
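The median filter is the classic order-statistics filter. A minimal NumPy sketch (the `median_filter` name is illustrative) showing how it removes impulse (salt) noise while preserving flat regions:

```python
import numpy as np

def median_filter(image, size=3):
    """Order-statistics filter: rank the pixels in each neighborhood
    and replace the center pixel by the median of that ranking."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out

# A single salt-noise spike is removed completely.
img = np.full((5, 5), 50.0)
img[2, 2] = 255.0          # impulse (salt) noise
clean = median_filter(img)
```

Unlike the averaging mask, the median leaves no residue of the spike in neighboring pixels, which is why order-statistics filters excel at impulse noise.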
Wavelet transform, due to its excellent localization, has rapidly become an indispensable signal and image processing tool for a variety of applications.
Wavelet denoising attempts to remove the noise present in the signal while preserving the signal characteristics, regardless of its frequency content.
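Wavelet denoising transforms the image, shrinks the detail coefficients, and inverts the transform. A minimal NumPy sketch of one decomposition level with soft thresholding; it uses the simple Haar wavelet for brevity (the work itself uses Coiflet wavelets), and assumes the image sides are even:

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar transform: approximation LL and details LH, HL, HH."""
    a = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row-wise average
    d = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row-wise difference
    LL = (a[0::2] + a[1::2]) / np.sqrt(2)
    LH = (a[0::2] - a[1::2]) / np.sqrt(2)
    HL = (d[0::2] + d[1::2]) / np.sqrt(2)
    HH = (d[0::2] - d[1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0] * 2, LL.shape[1]))
    d = np.empty_like(a)
    a[0::2], a[1::2] = (LL + LH) / np.sqrt(2), (LL - LH) / np.sqrt(2)
    d[0::2], d[1::2] = (HL + HH) / np.sqrt(2), (HL - HH) / np.sqrt(2)
    x = np.empty((a.shape[0], a.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(img, threshold):
    """Threshold only the detail bands, preserving the signal's coarse structure."""
    LL, LH, HL, HH = haar2d(img)
    return ihaar2d(LL, soft(LH, threshold), soft(HL, threshold), soft(HH, threshold))
```

With threshold 0 the round trip is exact, confirming the transform pair; a positive threshold suppresses small (noise-dominated) detail coefficients while keeping signal characteristics.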
Figure 1 The two-dimensional FWT - the analysis filter
Figure 2 Two-scale of two-dimensional decomposition
Proposed Denoising Filter
Figure 3 Cascaded spatial filter based on a median filter and Coiflet wavelets
To validate the efficiency of the previous digital filters, the following similarity measures have been applied:
Table 1. 2D cross correlation similarity measure
Table 2. PSNR similarity measure
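The two similarity measures used in the tables can be sketched as follows (a minimal NumPy sketch; function names are illustrative, and an 8-bit peak value of 255 is assumed for PSNR):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def xcorr2d(a, b):
    """Normalized 2D cross-correlation coefficient (1.0 for identical images)."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
```

Both measures compare a filtered image against the clean original: the better the denoising filter, the higher the PSNR and the closer the correlation coefficient is to 1.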
Definition: scaling & rotation are affine transformations, under which straight lines remain straight and parallel lines remain parallel.
Scaling and Rotation: the linear transformation and the Radon transformation have been used to recover an image from a rotated and scaled original.
Scaled & rotated image
Figure 4 Rotated and scaled image
Figure 5 Control point selection
Scaled & rotated image
Figure 6 Recovered by using linear transformation
Radon transform: This transform is able to transform two dimensional images with lines into a domain of possible line parameters, where each line in the image will give a peak positioned at the corresponding line parameters.
Projections can be computed along any angle θ using the general equation of the Radon transformation:
R(x', θ) = ∬ f(x, y) δ(x cos θ + y sin θ − x') dx dy
where x' is the perpendicular distance of the beam from the origin and θ is the angle of incidence of the beams.
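The projection equation above can be sketched numerically by rotating the image and summing columns (a minimal sketch assuming SciPy's `ndimage.rotate` is available; the `radon` name is illustrative). A line in the image produces a sharp peak at the angle that aligns it with the projection direction, which is what makes the transform useful for rotation correction:

```python
import numpy as np
from scipy import ndimage

def radon(image, angles_deg):
    """Radon projections: rotate the image by each angle theta and sum
    along columns, giving line integrals at distance x' from the center."""
    sinogram = np.empty((image.shape[1], len(angles_deg)))
    for i, theta in enumerate(angles_deg):
        rotated = ndimage.rotate(image.astype(float), theta,
                                 reshape=False, order=1)
        sinogram[:, i] = rotated.sum(axis=0)
    return sinogram

# A horizontal line segment: its projection peaks when rotated to vertical.
img = np.zeros((64, 64))
img[32, 10:54] = 1.0
angles = [0, 30, 60, 90]
sino = radon(img, angles)
best_angle = angles[int(np.argmax(sino.max(axis=0)))]
```

Picking the angle of the strongest peak recovers the line orientation, the principle behind using the Radon transform to estimate and undo rotation.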
Figure 7 Canny edge detection and edge linking
Figure 8 Radon transform projections along 180 degrees, from -90 to +89
Figure 9 Recovered by using the Radon transform
Blurring: degradation of an image that can be caused by motion.
Deblurring using Wiener filter
Deblurring using a regularized filter
Deblurring using Lucy-Richardson algorithm
Deblurring using blind deconvolution algorithm
A blurred or degraded image can be approximately described by the equation g(x, y) = h(x, y) ⊛ f(x, y) + n(x, y), where g is the degraded image, h the point-spread function (PSF), f the original image and n additive noise.
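For the Wiener filter listed above, the degradation model inverts in the frequency domain as F̂ = H*G / (|H|² + K), where K approximates the noise-to-signal ratio. A minimal NumPy sketch (function names and the circular-convolution boundary assumption are illustrative):

```python
import numpy as np

def wiener_deblur(blurred, psf, K=0.01):
    """Frequency-domain Wiener filter: F_hat = conj(H) * G / (|H|^2 + K)."""
    # Zero-pad the PSF to the image size and shift its center to the origin.
    h = np.zeros_like(blurred, dtype=float)
    h[:psf.shape[0], :psf.shape[1]] = psf
    h = np.roll(h, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(h)
    G = np.fft.fft2(blurred.astype(float))
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(F_hat))

# Blur a test image with a 3x3 box PSF (circular convolution), then restore.
rng = np.random.default_rng(1)
img = rng.uniform(0, 255, size=(32, 32))
psf = np.ones((3, 3)) / 9.0
h = np.roll(np.pad(psf, ((0, 29), (0, 29))), (-1, -1), axis=(0, 1))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(h)))
restored = wiener_deblur(blurred, psf, K=1e-4)
```

The constant K regularizes frequencies where |H| is small; without it, division by near-zero values of H would amplify noise instead of removing blur.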
Figure 10 Deblurring using the blind deconvolution algorithm
Figure 11 Capability of object tracking under blurring (a, b) with known blur function and after deblurring (c, d): (a) blurred image, (b) person detection under blurring, (d) person detection in the deblurred image
Blurred image: correlation with the original one
Deblurred image using correct parameters: correlation
Deblurred image using a longer PSF: correlation
Deblurred image using a different angle: correlation
Figure 12 2D cross correlation with the deblurred forms
Table 3 2D cross correlation with the deblurred forms
Change of illumination
Color model deformation may happen due to changes in illumination.
Selecting an appropriate color model (RGB, HSV or YCbCr) helps overcome this deformation problem.
The RGB color model mapped to a cube
A representation of additive color mixing
The cylindrical representation of the HSV color model
HSV color wheel
YCbCr Color Model
The conversion from RGB to YCbCr
The conversion from YCbCr to RGB
The main advantages of this model are:
The luminance component (Y) of YCbCr is independent of the color information.
The skin color cluster is more compact in YCbCr than in other color spaces.
YCbCr has the smallest overlap between skin and non-skin data under various illumination conditions.
YCbCr is broadly utilized in video compression standards
YCbCr is a family of color spaces used in video systems.
YCbCr is one of two primary color spaces used to represent digital component video (the other is RGB).
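The RGB-to-YCbCr conversion mentioned above can be sketched as follows (a minimal NumPy sketch using the full-range ITU-R BT.601 coefficients common in JPEG; the slides do not state which variant was used, so these exact coefficients are an assumption):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """RGB -> YCbCr (full-range ITU-R BT.601): luminance Y is separated
    from the two chrominance components Cb and Cr."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```

Note that any achromatic pixel (R = G = B) maps to Cb = Cr = 128, so a pure brightness change moves only the Y channel; this is exactly why thresholding on Cb/Cr is robust to illumination changes.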
Figure 13 Comparison of homogeneous object extraction
Figure 14 Comparison of inhomogeneous object extraction
The most basic morphological operations are dilation and erosion
Binary object after removing extra pixels
Binary object after dilation (filling holes)
Binary object after closing
Figure 15 The effect of the morphological operations
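Dilation, erosion and closing can be sketched directly with NumPy (a minimal sketch with a square 3x3 structuring element; the border-padding choices are one of several reasonable conventions):

```python
import numpy as np

def dilate(mask, size=3):
    """Dilation: a pixel is set if ANY pixel under the structuring element
    is set (grows the object; padding with False outside the image)."""
    pad = size // 2
    p = np.pad(mask.astype(bool), pad, mode="constant")
    out = np.zeros(mask.shape, dtype=bool)
    for dy in range(size):
        for dx in range(size):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, size=3):
    """Erosion: a pixel stays set only if ALL pixels under the structuring
    element are set (removes extra isolated pixels; edge-replicated padding
    so objects touching the border are not eaten away)."""
    pad = size // 2
    p = np.pad(mask.astype(bool), pad, mode="edge")
    out = np.ones(mask.shape, dtype=bool)
    for dy in range(size):
        for dx in range(size):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def closing(mask, size=3):
    """Closing = dilation followed by erosion; fills small holes in the object."""
    return erode(dilate(mask, size), size)
```

Erosion alone deletes one-pixel noise specks, while closing fills one-pixel holes inside the segmented target, matching the clean-up steps shown in Figure 15.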
Figure 16 Center of gravity, ellipse fitting and bounding box of an image
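The center of gravity and bounding box of a segmented binary object (as in Figure 16) reduce to coordinate statistics over the set pixels. A minimal NumPy sketch (function name illustrative):

```python
import numpy as np

def centroid_and_bbox(mask):
    """Center of gravity (mean of object pixel coordinates) and the
    axis-aligned bounding box of a binary object."""
    ys, xs = np.nonzero(mask)            # coordinates of object pixels
    cy, cx = ys.mean(), xs.mean()        # center of gravity
    bbox = (ys.min(), xs.min(), ys.max(), xs.max())  # top, left, bottom, right
    return (cy, cx), bbox
```

The centroid gives the tracked point of the target, and the bounding-box extent provides the projection size used later for range estimation.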
Figure 17 Object tracking at different distances
Figure 18 The relation between range (D) and projection size (N); fitted constant a = 30606.621
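A hypothetical sketch of the ranging step: assuming the fitted curve in Figure 18 has the pinhole-camera form D = a / N (projection size inversely proportional to range), with the fitted constant a = 30606.621 taken from the figure:

```python
# Assumption: the Figure 18 fit is modeled as D = a / N; both the model
# form and the function name are illustrative, only a comes from the slides.
A = 30606.621

def estimate_range(projection_size_px):
    """Estimate target range D from its projection size N in pixels."""
    return A / projection_size_px
```

This replaces a dedicated ranging sensor: as the target recedes, its projection shrinks and the estimated range grows accordingly.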
Figure 19 The relation between the range and the location of the object in the 3D domain
Figure 19 FIR model structures
Figure 20 Model outputs w.r.t. system output
Figure 21 Model output w.r.t. system output
Figure 22 The capability of the model to predict the output if the system input is known
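An FIR model describes the output as a weighted sum of past inputs, y(t) = Σ_i b_i u(t−i), which is what lets it predict the output when the system input is known. A minimal NumPy sketch (function names illustrative) of fitting the taps by least squares and predicting:

```python
import numpy as np

def fit_fir(u, y, order):
    """Least-squares fit of an FIR model y(t) = sum_i b[i] * u(t - i)."""
    # Each regression row holds the current and past inputs [u(t), ..., u(t-order+1)].
    rows = [u[t - order + 1:t + 1][::-1] for t in range(order - 1, len(u))]
    Phi = np.array(rows)
    b, *_ = np.linalg.lstsq(Phi, y[order - 1:], rcond=None)
    return b

def predict_fir(b, u):
    """Predict the output for a known input sequence using the FIR taps b."""
    return np.convolve(u, b)[:len(u)]
```

Once the taps are identified from recorded input/output data, `predict_fir` reproduces the system output for any new input, mirroring the prediction capability shown in Figure 22.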
Developing a novel universal filter for image denoising.
Selecting the qualitative Radon transformation for correction of rotation.
An intensive comparative study of dealing with known/unknown blurring.
Employing a color-table thresholding segmentation technique on YCbCr to extract the visual target.
3D geometrical modeling for estimation and prediction of the target pose.
As future work, we are going to implement the applied algorithms on an embedded system to develop a visual RADAR system.