Embedding Gestalt Laws in Markov Random Fields by Song-Chun Zhu



  1. Embedding Gestalt Laws in Markov Random Fields by Song-Chun Zhu

  2. Purpose of the Paper • Proposes functions to measure Gestalt features of shapes • Adapts the [Zhu, Wu, Mumford] FRAME method to shapes • Exhibits the effect of the MRF model obtained by putting these together.

  3. Recall Gestalt Features (à la [Lowe], and others) • Colinearity • Cocircularity • Proximity • Parallelism • Symmetry • Continuity • Closure • Familiarity

  4. FRAME [Zhu, Wu, Mumford] • Filters • Random fields • And • Maximum • Entropy • A general procedure for constructing MRF models

  5. Three Main Parts • Data • Learn MRF models from data • Test generative power of learned model

  6. Elements of Data • A set of images representative of the chosen application domain • An adequate collection of feature measures or filters • The (marginal) statistics of applying the feature measures or filters to the set of images

  7. Data: Images • Zhu considers 22 animal shapes and their horizontal flips • The resulting histograms are symmetric • More data can be obtained • But are there other effects?

  8. Sample Animate Images

  9. Contour-based Feature Measures • Goal is to be generic • But generic shape features are hard to find • φ1 = κ(s), the curvature • κ(s) = 0 implies the linelets on either side of Γ(s) are colinear • φ2 = κ'(s), its derivative • κ'(s) = 0 implies three sequential linelets are cocircular • “Other contour-based shape filters can be defined in the same way”
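
As a concrete reading of slide 9, here is a minimal sketch of φ1 and φ2 on a discretized contour. It assumes the shape is a closed polyline of linelets stored as an (N, 2) NumPy array; the function name and discretization are illustrative, not Zhu's implementation.

```python
import numpy as np

def curvature_features(points):
    """phi_1 = kappa(s) and phi_2 = kappa'(s) for a closed contour
    given as an (N, 2) array of vertices (a sketch, not Zhu's code)."""
    # Edge (linelet) vectors of the closed polyline.
    d = np.roll(points, -1, axis=0) - points
    theta = np.arctan2(d[:, 1], d[:, 0])  # tangent angle of each linelet
    ds = np.hypot(d[:, 0], d[:, 1])       # linelet lengths
    # Turning angle between consecutive linelets, wrapped to (-pi, pi].
    dtheta = np.angle(np.exp(1j * (np.roll(theta, -1) - theta)))
    kappa = dtheta / ds                          # 0 where adjacent linelets are colinear
    dkappa = (np.roll(kappa, -1) - kappa) / ds   # 0 where three linelets are cocircular
    return kappa, dkappa
```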

  10. Zhu's Symmetry Function • ψ(s) pairs linelets across medial axes • Defined and computed by minimizing an energy functional constructed so that • Paired linelets are as close, parallel, and symmetric as possible, and • There are as few discontinuities as possible

  11. Region-based Feature Measures • φ3(s) = dist(s, ψ(s)) • Measures proximity of paired linelets across a region • φ4(s) = φ3'(s), the derivative • φ4(s) = 0 implies paired linelets are parallel • φ5(s) = φ4'(s) = φ3''(s) • φ5(s) = 0 implies paired linelets are symmetric
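
A matching sketch for slide 11, assuming the pairing ψ from slide 10 has already been computed and is given as an integer index array (a hypothetical format), with arc length approximated by the vertex index so the derivatives are plain finite differences:

```python
import numpy as np

def region_features(points, psi):
    """phi_3, phi_4, phi_5 from a precomputed symmetry pairing psi,
    where psi[i] indexes the linelet paired with linelet i."""
    phi3 = np.linalg.norm(points - points[psi], axis=1)  # proximity of paired linelets
    phi4 = np.gradient(phi3)   # phi_3':  0 where paired linelets are parallel
    phi5 = np.gradient(phi4)   # phi_3'': 0 where paired linelets are symmetric
    return phi3, phi4, phi5
```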

  12. Another Possible Shape Feature • φ6(s) = 1 where ψ(s) is discontinuous, 0 otherwise • Counts the number of “parts” a shape has • Can Gestalt “familiarity” be (statistically?) measured?

  13. The Statistic • The histogram of feature φk over curve Γ is H(z; φk, Γ) = ∫ δ(z − φk(s)) ds, where δ is the Dirac delta: mass 1 at 0, and 0 otherwise • μ(z; φk) denotes the average of these histograms over all images • Zhu claims μ is a close estimate of the corresponding marginal of the “true distribution” over shape space, assuming the total number of linelets is large.
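
In practice the Dirac delta is replaced by binning, so H becomes an ordinary normalized histogram and μ its average over the training shapes. A sketch under that assumption, where `feature` maps a curve to its per-linelet values of some φk:

```python
import numpy as np

def marginal_statistic(curves, feature, bins):
    """mu(z; phi): the histogram H(z; phi, Gamma), normalized to mass 1,
    averaged over all training curves."""
    hists = []
    for pts in curves:
        h, _ = np.histogram(feature(pts), bins=bins)
        hists.append(h / h.sum())   # H(z; phi, Gamma)
    return np.mean(hists, axis=0)   # mu(z; phi)
```

For example, μ(z; φ1) would be `marginal_statistic(shapes, lambda p: curvature_features(p)[0], bins)` using the curvature sketch above.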

  14. Statistical Observations • [Figure: histograms over the 22 images and their flips — φ1 at scales 0, 1, 2; φ3; φ4; φ5]

  15. Construct a Model • Ω is the space of shapes • Φ is a finite set of feature filters • We seek a probability distribution p on Ω with ∫Ω p(Γ) dΓ = 1 (1) • that reproduces the statistics for all φ in Φ: ∫Ω p(Γ) H(z; φ, Γ) dΓ = μ(z; φ) (2)

  16. Construct a Model, 2 • Idea: choose the p with maximal entropy • Seems reasonable and fair, but is it really the best target/energy function? • Lagrange multipliers and the calculus of variations lead to p(Γ; Φ, Λ) = exp(−∑φ∈Φ ∫ λφ(z) H(z; φ, Γ) dz) / Z, where Z is the usual normalizing factor and Λ = { λφ | φ ∈ Φ }
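
With z discretized into bins, each λφ becomes a vector and the exponent a dot product, so the unnormalized model is easy to evaluate; Z never is, which is why sampling is needed. A sketch under the same binning assumption as above:

```python
import numpy as np

def energy(curve, features, lambdas, bins):
    """U(Gamma) = sum over phi of <lambda_phi, H(.; phi, Gamma)>,
    so that p(Gamma) is proportional to exp(-U(Gamma))."""
    U = 0.0
    for feat, lam in zip(features, lambdas):  # parallel lists (assumed layout)
        h, _ = np.histogram(feat(curve), bins=bins)
        U += float(np.dot(lam, h / h.sum()))
    return U
```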

  17. It's a Gibbs Distribution • In other words, it has the form of a Gibbs distribution, and therefore determines a Markov Random Field (MRF) model.

  18. Markov Chain Monte Carlo • Too hard to compute the λ's and p analytically • Idea: sample Ω according to the distribution p, stochastically update Λ to update p, and repeat until p reproduces μ(z; φ) for all φ ∈ Φ • Monte Carlo because of the random walk • Markov chain because each step of the loop depends only on the current state

  19. Markov Chain Monte Carlo, 2 • From the sampling produce μ'(z; φ) • Same as μ(z; φ) except based on a random sample of shape space • For the purposes of today's discussion, the details are not important • At convergence, μ'(z; φ) = μ(z; φ) for all φ ∈ Φ • Zhu et al. assume there exists a “true underlying distribution”
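
The step that drives μ' toward μ is, in its simplest form, a gradient update of each λφ, since the log-likelihood gradient is exactly μ' − μ. A sketch with a fixed step size (Zhu's sampler and annealing schedule differ):

```python
def update_lambdas(lambdas, mu_obs, mu_sample, step=0.1):
    """One FRAME-style learning step: nudge each lambda_phi so the
    sampled statistics mu' move toward the observed statistics mu.
    The update vanishes exactly when mu' = mu."""
    return [lam + step * (mu_s - mu_o)
            for lam, mu_o, mu_s in zip(lambdas, mu_obs, mu_sample)]
```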

  20. The Nonaccidental Statistic • For φ' not in the set Φ we expect μ'(z; φ') ≠ μ(z; φ') • μ'(z; φ') is the accidental statistic for φ' • It is a measure of correlation between φ' and Φ • The “distance” (L1, L2, or other) between μ'(z; φ') and μ(z; φ') is the nonaccidental statistic for φ' • It is a measure of how much “additional information” φ' carries above what is already in Φ
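
The nonaccidental statistic is then just a norm of the difference between the two histogram vectors; a minimal sketch with L1 as the default distance:

```python
import numpy as np

def nonaccidental(mu_sample, mu_obs, norm_ord=1):
    """Distance between the accidental statistic mu'(z; phi') and the
    observed mu(z; phi'); L1 by default, L2 with norm_ord=2."""
    return np.linalg.norm(np.asarray(mu_sample) - np.asarray(mu_obs), ord=norm_ord)
```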

  21. The Algorithm (simplified) • Enter your set { γ } of training shapes • Enter a (large) set { φ } of candidate feature measures • Compute the observed statistic μ(z; φ) for every candidate φ • Compute μ'(z; φ) relative to a uniform distribution on Ω • Until the nonaccidental statistic of every unused feature is small enough, repeat:

  22. Algorithm, 2 • Of the remaining candidates φ, add to Φ one with maximal nonaccidental statistic • Update: • the set of Lagrange multipliers Λ = { λφ } • the probability distribution model p(Γ; Φ, Λ) • the μ'(z; φ) for the remaining candidate features φ • (the full loop is sketched below)
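
Putting slides 21-22 together, the pursuit loop might look like the sketch below. `sampler` is a hypothetical stand-in for the entire MCMC stage: given the chosen features and current multipliers, it returns the sampled statistics μ'[φ] for every candidate. It reuses `nonaccidental` and `update_lambdas` from the sketches above.

```python
import numpy as np

def pursue_features(candidates, mu_obs, sampler, threshold=0.05):
    """Greedy feature pursuit: repeatedly add the candidate with the
    largest nonaccidental statistic, then re-fit the model.
    All names and the fixed iteration count are illustrative."""
    chosen, lambdas = [], []
    while True:
        mu_sample = sampler(chosen, lambdas)  # uniform model when chosen == []
        remaining = [p for p in candidates if p not in chosen]
        if not remaining:
            return chosen, lambdas
        best = max(remaining, key=lambda p: nonaccidental(mu_sample[p], mu_obs[p]))
        if nonaccidental(mu_sample[best], mu_obs[best]) < threshold:
            return chosen, lambdas  # every unused feature is nearly "accidental"
        chosen.append(best)
        lambdas.append(np.zeros_like(mu_obs[best]))
        # Re-fit: alternate sampling and lambda updates until mu' tracks mu.
        for _ in range(100):
            mu_sample = sampler(chosen, lambdas)
            lambdas = update_lambdas(lambdas,
                                     [mu_obs[p] for p in chosen],
                                     [mu_sample[p] for p in chosen])
```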

  23. Experiments and Discussion • Let my description of these experiments stimulate your thoughts on such issues as • Are there better Gestalt feature measures? • What is the best possible outcome of a generative model of shape? • What feature measures should be added to the Gestalt ones? • How useful were these experiments, and what others might be worth doing?

  24. Experiment 1 • When the only feature used is the curvature κ, the model generated these shapes: [figure]

  25. Experiment 1, continued • A Gaussian model (with the same κ-variance) produced these shapes: [figure]

  26. Experiment 2 • Experiment 2 uses both κ and κ' • The nonaccidental statistic of κ' with respect to the model based on κ is shown here: [figure]

  27. Experiment 2, continued • This time the model generated these shapes, purported to be smoother and more scale-invariant: [figure]

  28. Experiment 3 • The nonaccidental statistics of the three region-based shape features (φ3, φ4, φ5) relative to the model produced in Experiment 2: [figure]

  29. Experiment 3, continued • So r'' (= φ5) was omitted; this model has Φ = { κ, κ', r, r' }, where r = φ3 and r' = φ4

  30. Experiment 3, continued • This model produced shapes such as these: [figure]

  31. Concluding Discussion • Zhu acknowledges that the selection of training shapes might introduce a bias; but

  32. Discussion, continued • Zhu acknowledges that the paucity of Gestalt features limits the possible neighborhood structures used to define an MRF. • Zhu acknowledges that these models do not account for high-level shape properties, and suggests that a composition system might address this problem.

  33. Questions and Comments • Although it is in the nature of an MRF-model to propagate local properties, I think there needs to be a higher-level basis (than linelets) for measuring the Gestalt features of a shape! • Are there better Gestalt feature measures? • What feature measures should be added to the Gestalt ones?

  34. More Questions for Discussion • What is the best possible outcome of a generative model of shape? Is such a thing worth pursuing? • How useful were Zhu's experiments and what others might be worth doing?
