
### Automatic Target Recognition Using Algebraic Function of Views

Computer Vision Laboratory

Dept. of Computer Science

University of Nevada, Reno

Outline

- Background of algebraic functions of views
- Framework
- Imposing rigidity constraints
- Indexing scheme
- Geometric manifold
- Mixture of Gaussians
- Modified Framework
- Future work

UNR - Computer Vision Laboratory

[Flowchart: overall framework]

- Training stage: images from various viewpoints → convex grouping → model groups → selection of reference views → estimate the range of the AFoVs parameters (using SVD & IA) → predict groups by sampling the parameter space → validated appearances (using constraints) → compute index → index structure.
- Recognition stage: new image → convex grouping → image groups → compute index → access/retrieve from the index structure → predict the parameters (using NN/SVD) → verify hypotheses → evaluate match.

How to generate the appearances of a group?

- Estimate each parameter’s range of values
- Sample the space of parameter values
- Generate a new appearance for each sample of values
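The appearance-generation step can be sketched with the linear-combination-of-views relation (Ullman & Basri) that underlies AFoVs: after centering, a novel view's coordinates are linear combinations of two reference views with parameters (a1, a2, a3) and (b1, b2, b3). The function name `novel_view` and the point-list interface are illustrative assumptions, not the system's actual code:

```python
def novel_view(view1, view2, a, b):
    """Generate one appearance from two reference views (linear
    combination of views).  view1, view2: lists of corresponding
    (x, y) points, centered at the origin so translation drops out.
    a = (a1, a2, a3) and b = (b1, b2, b3) are the AFoVs parameters."""
    new_pts = []
    for (x1, y1), (x2, _y2) in zip(view1, view2):
        # x' = a1*x1 + a2*y1 + a3*x2 ;  y' = b1*x1 + b2*y1 + b3*x2
        x_new = a[0] * x1 + a[1] * y1 + a[2] * x2
        y_new = b[0] * x1 + b[1] * y1 + b[2] * x2
        new_pts.append((x_new, y_new))
    return new_pts
```

With a = (1, 0, 0) and b = (0, 1, 0) the generated appearance reduces to the first reference view, which is a convenient sanity check.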


Estimate the Interval Values of the Parameters (cont’d)

- Assume normalized coordinates
- Use interval arithmetic (Moore, 1966)
- (note that the solutions will be identical)
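A minimal sketch of interval arithmetic in the sense of Moore (1966), which propagates guaranteed lower/upper bounds through each operation; the `Interval` class and its interface are illustrative only:

```python
class Interval:
    """Minimal interval arithmetic (Moore, 1966): every operation
    returns an interval guaranteed to contain all possible results."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # subtracting an interval swaps its endpoints
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # the extrema must come from the four endpoint products
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"
```

For instance, bounding a term like a1*x1 with a1 in [0.5, 1.0] and x1 in [2, 3] yields the interval [1.0, 3.0].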


Models


Impose rigidity constraints

- For a general linear transformation of an object, it is impossible to distinguish a rigid transformation from a linear but non-rigid one without additional information. To impose rigidity, additional constraints must be satisfied.
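One standard way to express such a constraint for a scaled-orthographic (weak-perspective) camera is that the two rows of the 2×3 projection matrix must be orthogonal and of equal norm. A minimal check under that assumption (the function name and tolerance are illustrative, not the slides' exact formulation):

```python
def is_rigid(P, tol=1e-6):
    """Check whether a 2x3 scaled-orthographic projection matrix P
    (given as two row lists) corresponds to a rigid transformation:
    its rows must be orthogonal and have equal squared norm."""
    r1, r2 = P
    dot = sum(u * v for u, v in zip(r1, r2))   # row orthogonality
    n1 = sum(u * u for u in r1)                # squared norm of row 1
    n2 = sum(v * v for v in r2)                # squared norm of row 2
    return abs(dot) <= tol and abs(n1 - n2) <= tol
```

A pure rotation's first two rows pass this test, while an arbitrary linear map generally does not.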


Unrealistic views generated without the rigidity constraints


View generation

- Select two model views
- Move the centroids of the views to (0, 0) so that the translation parameters become zero and do not need to be sampled later
- Compute the range of the parameters using SVD and IA
- For each sampling step of the parameters (a1, a2, a3, b1, b2, b3), generate a novel view; if the novel view satisfies both the interval constraints and the rigidity constraints, store it as a valid view
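The sampling step above can be sketched as a regular grid over the estimated parameter intervals; each sample would then be fed to the view generator and kept only if the interval and rigidity constraints hold. The dict-based interface below is an assumption for illustration (steps >= 2 per dimension):

```python
import itertools

def sample_parameters(ranges, steps):
    """Enumerate a regular grid over the AFoVs parameter space.
    ranges: dict mapping a parameter name (e.g. 'a1') to its
    (lo, hi) interval; steps: number of samples per dimension."""
    names = sorted(ranges)
    axes = []
    for name in names:
        lo, hi = ranges[name]
        # evenly spaced samples including both interval endpoints
        axes.append([lo + i * (hi - lo) / (steps - 1) for i in range(steps)])
    for combo in itertools.product(*axes):
        yield dict(zip(names, combo))
```

In the full pipeline each yielded sample would drive the novel-view generation, with only the views passing the rigidity check stored as valid.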


Realistic Views


5-nearest-neighbor query (a)

Query View

MSE=0.0015

MSE=0.0014

MSE=0.0016

MSE=0.0015

MSE=0.0022


5-nearest-neighbor query (b)

Query view

MSE=2.0134e-4

MSE=6.3495e-4

MSE=5.0291e-4

MSE=9.3652e-4

MSE=0.0017


5-nearest-neighbor query (c)

Query view

MSE=3.1926e-4

MSE=5.0356e-4

MSE=8.6303e-4

MSE=0.0010

MSE=0.0013


Geometric manifold

- By applying PCA, each object can be represented as a parametric manifold in two different eigenspaces: the universal eigenspace and the object’s own eigenspace. The universal eigenspace is computed from the generated transformed views of all objects of interest to the recognition system, while the object eigenspace is computed from the generated views of a single object only. The geometric manifold is therefore parameterized by the parameters of the algebraic functions.
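A minimal sketch of how such an eigenspace might be built from the generated views, using the SVD of the centered data instead of forming the covariance matrix explicitly; `build_eigenspace` and the flattened-view layout are assumptions, not the system's actual code:

```python
import numpy as np

def build_eigenspace(views, m):
    """Project generated views into an m-dimensional eigenspace (PCA).
    views: (N, D) array with one flattened view per row; returns the
    mean view, the top-m eigenvectors, and the manifold coordinates."""
    mean = views.mean(axis=0)
    centered = views - mean
    # right singular vectors of the centered data are the PCA directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:m]                 # (m, D) top-m eigenvectors
    coords = centered @ basis.T    # (N, m) coordinates on the manifold
    return mean, basis, coords
```

Sweeping the AFoVs parameters and projecting each generated view this way traces out the parametric manifold in the chosen eigenspace.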


Eigenspace of the car model

Without the rigid constraints

With the rigid constraints


The 5-nearest neighbor query results in universal eigenspace (m=3)


The 5-nearest neighbor query results in universal eigenspace (m=4)


Parameters Prediction


Training process


Actual and predicted parameters


Mixture of Gaussians

A mixture is defined as a weighted sum of K components, p(x) = w1 p1(x) + … + wK pK(x), where each component pk(x) is a parametric density function and the weights wk are non-negative and sum to 1.

Each mixture component is a Gaussian density with its own mean and covariance matrix.
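As a sketch, the mixture density for the 1-D diagonal case (the function name and the 1-D restriction are simplifications for illustration; the system works with full multivariate Gaussians):

```python
import math

def gmm_pdf(x, weights, means, variances):
    """Density of a 1-D Gaussian mixture:
    p(x) = sum_k w_k * N(x; mu_k, var_k), with the w_k summing to 1."""
    total = 0.0
    for w, mu, var in zip(weights, means, variances):
        norm = 1.0 / math.sqrt(2.0 * math.pi * var)
        total += w * norm * math.exp(-((x - mu) ** 2) / (2.0 * var))
    return total
```

Evaluating such mixture densities (fitted offline, e.g. by EM) is what lets candidates be ranked by probability in the modified framework.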


Random projection

- A random projection from n dimensions to d dimensions is represented by a d×n matrix. It does not depend on the data and can be chosen rapidly.
- Data from a mixture of K Gaussians can be projected into just O(log K) dimensions while still retaining the approximate level of separation between clusters.
- Even if the original clusters are highly eccentric (i.e., far from spherical), random projection will make them more spherical.

S. Dasgupta, “Experiments with random projection,” in Proc. of the 16th Conference on Uncertainty in Artificial Intelligence (UAI), 2000.
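A data-independent Gaussian random projection of the kind described above can be sketched as follows; the 1/sqrt(d) scaling is one common Johnson-Lindenstrauss convention, and the names are illustrative:

```python
import math
import random

def random_projection_matrix(n, d, seed=0):
    """A d x n matrix of i.i.d. Gaussian entries scaled by 1/sqrt(d),
    chosen independently of the data, so that pairwise distances are
    approximately preserved in the d-dimensional image."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) / math.sqrt(d) for _ in range(n)]
            for _ in range(d)]

def project(R, x):
    """Apply the projection: y = R x (matrix-vector product)."""
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in R]
```

Note that, unlike PCA, the matrix is drawn once at random and never looks at the data, which is why it can be chosen rapidly.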


Recognition results by mixture models

The number of eigenvectors is m = 8; random projection is then applied to reduce to a 3-dimensional space.


[Flowchart: modified framework]

- Training stage: images from various viewpoints → convex grouping → model groups → a coarse k-d tree → selection of reference views → estimate the range of the AFoVs parameters (using SVD & IA) → predict groups by sampling the parameter space → validated appearances (using constraints) → compute index → index structure.
- Recognition stage: new image → convex grouping → image groups → compute index → access/retrieve from the index structure → compute probabilities of Gaussian mixtures → ranking the candidates by probabilities → predict the parameters (using NN/SVD) → verify hypotheses → evaluate match.

A coarse k-d tree

In total, 2242 groups of size 8 from 10 models were used to construct the k-d tree.
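A minimal k-d tree sketch of the kind such a coarse index could use, with a standard nearest-neighbor descent; the dict-based node layout is an assumption for illustration, not the system's actual implementation:

```python
def build_kdtree(points, depth=0):
    """Recursively build a k-d tree over index vectors,
    splitting on the median along a cycling axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    """Return the stored point with smallest Euclidean distance to target."""
    if node is None:
        return best
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
    if best is None or dist(node["point"]) < dist(best):
        best = node["point"]
    diff = target[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, target, best)
    if diff ** 2 < dist(best):  # the far branch may still hold a closer point
        best = nearest(far, target, best)
    return best
```

In the modified framework the query returns several nearest candidate groups, which are then ranked by their Gaussian-mixture probabilities.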


Mixture of Gaussians

- A universal eigenspace has been built from more densely sampled views in the transformed space.
- 28 Gaussian mixture models have been built offline for all the groups in the universal eigenspace.


Test views

- Test views are generated by applying an arbitrary orthographic projection to the 3D model.
- 2% noise has been added to the test views.
- Assume the test view can be divided into the same groups as the reference views.
- Since the correspondences are not known, a circular shift is applied to the points in each group of the test view to try all possible orderings.
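The circular-shift step can be sketched directly: each cyclic ordering of a group's points would be tried against the model group in turn (the helper name is illustrative):

```python
def circular_shifts(points):
    """All cyclic orderings of a group's points.  When the
    correspondence to a model group is unknown, each shift is
    tried and the best-scoring one is kept."""
    return [points[i:] + points[:i] for i in range(len(points))]
```

For a group of size 8 this yields 8 candidate orderings per group, which matches the "(Shift 4)" annotation in the verification results below.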


Ranking by probabilities-Car view

The 4 nearest neighbors are chosen from the coarse k-d tree.


Ranking by probabilities-Tank view


Ranking by probabilities-Rocket view

Both are correct because of the symmetry of the data.


Some verification results

Group 1, MSE=8.0339e-5

Group 2, MSE=4.3283e-5


Some verification results

Group 2, MSE=5.3829e-5

Group 1, MSE=2.5977e-5

Group 3, MSE=5.9901e-5


Some verification results

Group 1, MSE=3.9283e-5

Group 1 (Shift 4), MSE=3.3383e-5


Some verification results

Group 2, MSE=4.4156e-5

Group 3, MSE=3.3107e-5


Future work

- Apply the system to real data
- Integrate AFoVs with convex grouping
- Optimal selection of reference views

