
Statistics and Shape Analysis

By Marc Sobel

- Humans recognize shapes via both local and global features.
- (i) Matching local features between shapes, such as curvature or distance to the centroid, can be modeled statistically by building statistics and parameters that reflect the quality of the match.
- (ii) Matching global features of shapes (e.g., are they both apples or not?).

- How can we incorporate both local and global features in shape matching?
- An obvious paradigm is to model global features as governed by priors, and local features given global features as a likelihood.
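As an illustrative sketch of this paradigm (not part of the slides), the posterior over global hypotheses is proportional to the prior times the likelihood of the local features under each hypothesis; the function name and toy numbers are assumptions:

```python
def posterior(prior, likelihood):
    """Combine a prior over global hypotheses (e.g., object class) with
    the likelihood of the observed local features under each hypothesis,
    returning normalized posterior probabilities."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}
```

For example, an even prior over {apple, pear} combined with local features four times more likely under "apple" yields a posterior of 0.8 for "apple".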

- Let u1,…,un be the vertices of one shape and v1,…,vm the vertices of another. We would like to build correspondences between the vertices that properly reflect the relationship between the shapes. We write (ui,vj) for a correspondence of this type, and use the term particle for a set of such correspondences.
- Let Xi,l be the l’th local feature measure for vertex i of the first shape and Yj,l the l’th local feature measure for vertex j of the second shape. For now, assume these feature measures are observed.
- We would like to build a particle which reflects the local and global features of interest.
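As a sketch of what the local feature measures X_{i,l} might look like for a polygonal shape (the slides do not prescribe specific features, so the choice of distance-to-centroid and a turning-angle curvature proxy is an assumption):

```python
import numpy as np

def local_features(vertices):
    """Two simple local feature measures per vertex of a closed polygon:
    (1) distance to the shape centroid, and
    (2) a discrete curvature proxy: the turning angle at the vertex."""
    V = np.asarray(vertices, dtype=float)
    centroid = V.mean(axis=0)
    dist = np.linalg.norm(V - centroid, axis=1)
    a = np.roll(V, 1, axis=0) - V        # edge back to the previous vertex
    b = np.roll(V, -1, axis=0) - V       # edge on to the next vertex
    cos_angle = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    turning = np.pi - np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return np.column_stack([dist, turning])   # one (n, 2) feature row per vertex
```

For the unit square, every vertex is at distance sqrt(0.5) from the centroid and turns through pi/2.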

- If one shape results from the other via rotation and scaling, then the order of the shape-1 correspondence points should match the order of the shape-2 correspondence points: i.e., if (i1,j1) is one correspondence and (i2,j2) is another, then either i1<i2 and j1<j2, or i1>i2 and j1>j2. We can incorporate this into a prior.
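The order-preservation condition above is easy to check directly; this helper (name assumed, not from the slides) tests whether a set of correspondences satisfies it:

```python
def strongly_contiguous(correspondences):
    """True iff the correspondence pairs (i, j) preserve vertex order on
    both shapes: after sorting, every consecutive pair must be strictly
    increasing in both coordinates, as required by the prior."""
    C = sorted(correspondences)
    return all(i1 < i2 and j1 < j2
               for (i1, j1), (i2, j2) in zip(C, C[1:]))
```

A prior can then simply assign zero (or heavily discounted) mass to sets failing this check.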

- We have that:

- Based on the observed features we form weight statistics:
- Let W denote the weight matrix associated with the features.
- Therefore, given that a correspondence ‘C’ belongs to the ‘true’ set of correspondences, we write the simple likelihood in the form,
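The slides leave the weight statistics unspecified; as one illustrative choice, W[i,j] can be taken as a Gaussian kernel on the distance between the feature vectors of vertex i (shape 1) and vertex j (shape 2) — the kernel and `scale` parameter are assumptions:

```python
import numpy as np

def weight_matrix(X, Y, scale=1.0):
    """W[i, j] measures how well the local features of vertex i of shape 1
    match those of vertex j of shape 2, via a Gaussian kernel on the
    squared feature distance (an illustrative weight statistic)."""
    diff = X[:, None, :] - Y[None, :, :]          # pairwise feature differences
    return np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * scale ** 2))
```

Identical features give weight 1 on the diagonal; weights decay toward 0 as features diverge.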

- At stage t, putting ω as the parameter, we define the likelihood:

- Model a prior for all sets of correspondences which are strongly contiguous:
- a) a simple prior (we use ω for the weight variable)
- b) (I) a prior giving more weight to diagonals than to other correspondences;
- (II) we can define such a prior sequentially, based on the fact that

- Put
- With ‘DIAG[i,j]’ referring to the positively oriented diagonal to which (i,j) belongs.

- We would like to simulate the posterior distribution of contiguous correspondences. We do this by calculating the weights:
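Simulating from the posterior with weighted particles typically involves a resampling step; this is a standard multinomial resampling sketch (the slides do not specify the resampling scheme):

```python
import numpy as np

def resample(particles, weights, rng):
    """Multinomial resampling: draw a new particle set of the same size,
    each draw proportional to the normalized posterior weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[k] for k in idx]
```

With all the mass on one particle, resampling returns copies of that particle only.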

- Here we simulate:

- Define the posterior probabilities:
- For parameter λ, described below.

- The weights for the simpler model are particularly easy:
- Letting λ tend to infinity at a suitable rate, we obtain convergence to the MAP estimator of the simple particle filter.
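The role of λ can be sketched as tempering: raising the posterior weights to the power λ concentrates the distribution on the maximum-weight (MAP) particle as λ grows. This numerically stable version works in log space (an illustrative sketch, not the slides' exact construction):

```python
import numpy as np

def tempered_probs(log_weights, lam):
    """Raise particle weights to the power lambda and renormalize; as
    lambda -> infinity the probabilities concentrate on the
    maximum-weight (MAP) particle."""
    lw = lam * np.asarray(log_weights, dtype=float)
    lw -= lw.max()                      # stabilize before exponentiating
    p = np.exp(lw)
    return p / p.sum()
```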

- We have:

- Based on the observed features we form weight parameters:
- Let W denote the weight matrix associated with the features.
- Therefore, given that a correspondence ‘C’ belongs to the ‘true’ set of correspondences, we write the likelihood in the form,

- We write the likelihood in the form:

- We assume standard priors for the μ’s and ν’s. We also assume a prior for the set of contiguous correspondences.
- The particle is updated as follows: define,

- At stage t we have particles,
- Their weights are given by:
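One stage of the sequential weight update can be sketched as follows: each particle (a growing list of correspondence pairs) has its weight multiplied by the feature weight of its newest correspondence, then the weights are renormalized. The function name and the use of the matrix W from the earlier slides are assumptions:

```python
import numpy as np

def update_weights(weights, particles, W):
    """One stage-t update: multiply each particle's weight by the feature
    weight W[i, j] of its most recently added correspondence (i, j),
    then renormalize so the weights sum to one."""
    w = np.array([weights[k] * W[particles[k][-1]]
                  for k in range(len(particles))])
    return w / w.sum()
```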