
Using the Particle Filter Approach to Building Partial Correspondences between Shapes


Presentation Transcript


  1. Using the Particle Filter Approach to Building Partial Correspondences between Shapes Rolf Lakaemper, Marc Sobel Temple University, Philadelphia, PA, USA

  2. Part I: Motivation

  3. The Goal: Finding correspondences between feature-points in two (similar) shapes

  4. The Motivation: Shape recognition, classically divided into three steps: • Finding correspondences (this talk) • Alignment • Shape similarity

  5. We want to handle partial correspondences between arbitrary point sets based on local descriptors and certain global constraints. This includes, but is not limited to…

  6. The simplest case: Closed boundaries vs. Closed boundaries

  7. Advanced: Closed polygons vs. polygons representing parts

  8. More advanced: Partial matching of unordered 2D point sets

  9. …and unordered 3D point sets. A bad example due to insufficient constraints; later we’ll see how to improve it.

  10. These tasks can be described as an optimization problem; they differ only in the way the global constraints are defined. We will present a Particle Filter (PF) based solution unifying these problems. The PF system is able to learn properties of the global constraints.

  11. General Approach [framework diagram: Task, Local Constraints, Global Constraints (GC), GC update rule, Particle Filtering, Single Particle (a configuration of established correspondences), all within the PF Framework]

  12. Part II: Illustration of the Approach using the simple example of correspondences between closed boundary curves

  13. The example data: boundary polygons • Each boundary curve is uniformly subsampled • Each shape is represented by an ordered set of boundary points
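
A minimal NumPy sketch of this subsampling step (the function name and interface are illustrative, not from the talk):

```python
import numpy as np

def subsample_boundary(poly, n):
    """poly: (m, 2) array of vertices of a closed polygon; returns n points
    equally spaced by arc length along the boundary."""
    closed = np.vstack([poly, poly[:1]])            # repeat first vertex to close the curve
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])     # cumulative arc length at each vertex
    s = np.linspace(0.0, t[-1], n, endpoint=False)  # n equally spaced arc-length positions
    x = np.interp(s, t, closed[:, 0])
    y = np.interp(s, t, closed[:, 1])
    return np.column_stack([x, y])
```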

  14. II.A Local Constraints

  15. For each boundary point we compute local feature descriptors, eventually leading to a local correspondence matrix. This matrix describes the local constraints. [framework diagram as on slide 11]

  16. Computation of Local Feature Descriptors. As an example we use • Centroid Distance • Curvature Remark: this research is not about optimal, new, and fancy local descriptors. On the contrary: for several reasons we use relatively weak descriptors.

  17. • Centroid Distance (normalized, average distance = 1): the relative distance to the center of the polygon (the mean of the vertices); extendable to parts • Curvature: e.g. the turn angle
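
Hedged NumPy sketches of the two descriptors; the normalization to mean distance 1 follows slide 17, everything else (names, signatures) is illustrative:

```python
import numpy as np

def centroid_distance(points):
    """Distance of each point to the polygon centroid, normalized to mean 1."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    return d / d.mean()

def turn_angle(points):
    """Turn angle at each vertex of a closed, ordered point sequence."""
    prev = np.roll(points, 1, axis=0)
    nxt = np.roll(points, -1, axis=0)
    a = np.arctan2((points - prev)[:, 1], (points - prev)[:, 0])
    b = np.arctan2((nxt - points)[:, 1], (nxt - points)[:, 0])
    return np.angle(np.exp(1j * (b - a)))   # wrap the difference to (-pi, pi]
```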

  18. Using each descriptor independently, we compute the correspondence probability between all pairs of points. The correspondence is computed in a symmetric way: • How likely is it that pi in shape1 corresponds to qk in shape2, relative to all points in shape2? AND • How likely is it that qk in shape2 corresponds to pi in shape1, relative to all points in shape1?

  19. Example: Centroid Distance • Compute the correspondence matrix MD1 = [md1ij] with md1ij = Gσ(ui − vj), where ui = centroid distance of point i in shape1, vj = centroid distance of point j in shape2, and Gσ = Gauss distribution with standard deviation σ • Row-normalize MD1
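
A minimal sketch of this computation, assuming Gσ is evaluated as an (unnormalized) Gaussian kernel on the descriptor difference:

```python
import numpy as np

def correspondence_matrix(u, v, sigma):
    """MD1[i, j] = G_sigma(u_i - v_j), row-normalized so each row sums to 1."""
    diff = u[:, None] - v[None, :]
    m = np.exp(-diff**2 / (2.0 * sigma**2))   # unnormalized Gaussian kernel
    return m / m.sum(axis=1, keepdims=True)
```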

  20. MD1 describes the correspondence probability of a point in shape1 to a point in shape2. To find the correspondence probability from shape2 to shape1, compute MD2 the same way, but column-normalized.

  21. Finally, the correspondence matrix MD is the element-wise product of MD1 and MD2: MD = MD1 .* MD2

  22. The correspondence matrix MC using curvature is computed accordingly • The final local correspondence matrix is the joint probability of both features: L = MD .* MC
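
Putting slides 19–22 together, a hedged sketch of the full local matrix; all names and the two σ parameters are illustrative:

```python
import numpy as np

def local_matrix(u, v, cu, cv, sigma_d, sigma_c):
    """u, v: centroid distances; cu, cv: curvatures of shape1 / shape2."""
    gd = np.exp(-(u[:, None] - v[None, :])**2 / (2 * sigma_d**2))
    gc = np.exp(-(cu[:, None] - cv[None, :])**2 / (2 * sigma_c**2))
    # Row-normalized (shape1 -> shape2) times column-normalized (shape2 -> shape1):
    MD = (gd / gd.sum(axis=1, keepdims=True)) * (gd / gd.sum(axis=0, keepdims=True))
    MC = (gc / gc.sum(axis=1, keepdims=True)) * (gc / gc.sum(axis=0, keepdims=True))
    return MD * MC      # L = MD .* MC, the joint local correspondence matrix
```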

  23. Correspondences [figure: example shapes with their matrices MC, MD, and MC.*MD]

  24. [figure: a further example with MC, MD, and MC.*MD]

  25. Examples of SELF-similarity matrices [figure: MC.*MD for a shape matched against itself]

  26. [figure: another self-similarity example, MC.*MD]

  27. Conclusions about L: • L defines a probability Pc over the set C of correspondences • L(S1,S2) = LT(S2,S1), i.e. L is order independent with respect to S1, S2. This of course does NOT necessarily mean that L is symmetric. • L is symmetric if S1 = S2 • Just as a note: L(S1,S1) (the self-similarity matrix) is not necessarily diagonally dominant (see the ‘device’ example).

  28. L defines the weights of single correspondences. Finding the optimal correspondence configuration is the task of finding a certain path in L under certain constraints. In our example the constraints are: • One-to-one correspondences only • Order preservation The following section formalizes the optimization problem.

  29. II.B Correspondence as optimization problem

  30. Definitions: • A grouping g in the set G of all groupings is a configuration of correspondences. • Global constraints restrict the search space of our optimization process to G− (a subset of G), the admissible groupings. • Using L (= Pc), we define a weight function W over G.

  31. We formulate the correspondence problem as one of choosing the grouping ĝ ∈ G− from the set of admissible groupings G− with maximal weight, or, more specifically: ĝ = argmax_{g ∈ G−} W(g). Lemma 1: the optimal grouping is complete.
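
The talk leaves W to the paper; as a placeholder consistent with slide 39 (“the weight is computed from the weights of the single correspondences”) and with Lemma 1, one might sum the local weights, which rewards complete groupings:

```python
def grouping_weight(g, L):
    """g: iterable of (i, j) correspondence pairs; L: local correspondence
    matrix. Summing (rather than multiplying) the local weights rewards
    larger groupings, in line with Lemma 1 (the optimal grouping is complete)."""
    return sum(L[i, j] for (i, j) in g)
```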

  32. The optimization problem could typically be solved using dynamic programming. We want to use particle filters to solve the correspondence problem instead. Reason: particle filters provide a less restricted framework, which enables us to extend the system to solve more general and complex recognition problems (parts, inner structures, 3D shapes).

  33. II.C Using Particle Filters to solve the optimization problem

  34. Particle Filters [framework diagram as on slide 11]

  35. Some general remarks: Particle Filtering, in contrast to, e.g., deterministic Dynamic Programming, is a statistical approach. It does not guarantee an optimal solution, only a near-optimal one.

  36. But: the weight matrix is built from imprecise local descriptors. Hence a precise, optimal solution does not necessarily make sense anyway.

  37. The goal of particle filters (PF) is to estimate the posterior distribution over the entire search space using discrete distributions (constructed dynamically at each of a number of iterations) based on a limited number of particles. For our optimization problem we are interested in the strongest particle. Whatever dialect of PF is used, it always consists of two major steps:

  38. Say we have n particles (hypotheses), each with a weight. Step 1: Prediction. Using additional information, update each particle and compute its new weight. Step 2: Evaluation. Pick n updated particles according to their weights; ‘better’ particles have a higher chance to survive.
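
A generic predict/resample skeleton to fix the two steps in code; multinomial resampling stands in for the residual resampling used later, and all names are illustrative:

```python
import numpy as np

def particle_filter(particles, predict, n_iters, rng):
    """Generic PF skeleton: `predict` maps a particle to an updated
    (particle, weight) pair; resampling then favors heavy particles."""
    n = len(particles)
    for _ in range(n_iters):
        # Step 1: Prediction -- update every particle, compute new weights.
        particles, weights = zip(*(predict(p) for p in particles))
        w = np.asarray(weights, dtype=float)
        w /= w.sum()
        # Step 2: Evaluation -- resample; better particles survive more often.
        idx = rng.choice(n, size=n, p=w)
        particles = [particles[i] for i in idx]
    return particles

# Hypothetical usage:
# rng = np.random.default_rng(0)
# final = particle_filter(init_particles, predict, 100, rng)
```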

  39. In our example problem, a single particle is a set of order-preserving correspondences. Its weight is computed from the weights of its single correspondences.

  40. Prediction: add a new (order-preserving) correspondence and compute the new weight. The new correspondence is picked using the distribution defined by the weight matrix L. Evaluation: residual resampling. Additionally we use a RECEDE step: every m steps, n correspondences are deleted (m > n). This can be seen as an add-on to the update step.
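
A hedged sketch of this prediction step; `admissible` is a hypothetical predicate encoding the one-to-one and order-preservation constraints (compare slide 49):

```python
import numpy as np

def predict_step(particle, L, admissible, rng):
    """Extend a particle (a frozenset of (i, j) pairs) by one correspondence
    drawn from L restricted to the still-admissible entries.
    RECEDE (not shown): every m-th iteration, delete n correspondences
    (m > n) so the filter can escape early wrong choices."""
    mask = admissible(particle, L.shape)   # hypothetical: boolean (n1, n2) array
    p = np.where(mask, L, 0.0).ravel()
    if p.sum() == 0.0:
        return particle                    # nothing admissible left to add
    k = rng.choice(p.size, p=p / p.sum())  # sample proportionally to L
    i, j = divmod(k, L.shape[1])
    return particle | {(i, j)}
```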

  41. Adding correspondences randomly means we need to know neither a starting point nor the direction of the path (an advantage, though not the main one, over dynamic programming) • The order constraint rapidly decreases the size of the search space

  42. Using the correspondence matrix L as the underlying distribution for the update has two effects: • Correspondences of high probability are likely to be picked first • Correspondences in indistinct areas of L are nearly uniformly distributed. This problem is solved automatically, since our system prefers particles with a higher number of correspondences.

  43. The main contribution: L is dynamically modified using GLOBAL constraints. These constraints are modeled by a global constraint matrix, which is rebuilt for each particle in each step. The matrix contains real-valued elements, weakening the admissibility definition to a ‘degree of admissibility’.

  44. II.D Global constraints in our example

  45. Adding Global Constraints [framework diagram as on slide 11]

  46. Finding the optimal global correspondence between two shapes: we want an optimal correspondence configuration based on L, AND • we want to maximize the number of correspondences • we want to preserve the point order

  47. We want to find a maximal, order-preserving path in L. We do NOT know the number of correspondences, the start point, or the path direction (the matrix L is a torus, i.e. circular in both dimensions)

  48. An order-preserving maximal path is a set of correspondences between shape vertices

  49. Order preservation can be formulated in terms of a global constraint matrix with entries in {0,1}, as sketched below
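
A sketch of one way to build such a matrix for the cyclic (torus) case; the anchor-based modular re-indexing is a standard trick for checking cyclic order preservation and is not necessarily the authors’ exact formulation:

```python
import numpy as np

def order_constraint_matrix(pairs, n1, n2):
    """Binary matrix G with G[i, j] = 1 iff adding the correspondence (i, j)
    keeps `pairs` cyclically order-preserving and one-to-one. Both index axes
    are circular (the matrix is a torus), hence the modular re-indexing
    relative to an anchor pair."""
    G = np.ones((n1, n2))
    if not pairs:
        return G                              # any first correspondence is admissible
    a, b = next(iter(pairs))                  # anchor: 'cut' both circles here
    rel = [((i - a) % n1, (j - b) % n2) for (i, j) in pairs]
    for i in range(n1):
        for j in range(n2):
            cand = sorted(rel + [((i - a) % n1, (j - b) % n2)])
            ii, jj = zip(*cand)
            ok = (len(set(ii)) == len(cand)            # one-to-one in shape1
                  and len(set(jj)) == len(cand)        # one-to-one in shape2
                  and all(x < y for x, y in zip(jj, jj[1:])))  # order preserved
            G[i, j] = 1.0 if ok else 0.0
    return G
```

Slide 50’s selection process then samples from the element-wise product of L and this matrix.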

  50. The particle selection process operates on the element-wise product of the local and the global matrix. The strict distinction between local and global constraints will prove useful in the more advanced applications we will see later. Our optimization process therefore operates on a dynamically changing matrix.
