
An Efficient Approach to Learning Inhomogeneous Gibbs Models


Presentation Transcript


  1. An Efficient Approach to Learning Inhomogeneous Gibbs Models Ziqiang Liu, Hong Chen, Heung-Yeung Shum Microsoft Research Asia CVPR 2003 Presented by Derek Hoiem

  2. Overview • Build 1-D histograms of the data's projections onto selected directions • Feature selection: pick the projection with maximum KL divergence between the estimated (model) and true (data) distributions • The 1-D histograms for each feature are computed both from training data and from MCMC samples of the current model • Fast solution via a good starting point and importance sampling (end-to-end sketch below)
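Putting these four bullets together, here is a minimal end-to-end toy sketch. The helper names, bin choices, and toy dimensions are mine, and the energy convention E(x) = Σ⟨λᵢ, φᵢ(x)⟩ with p(x) ∝ exp(−E(x)) is an assumption, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
EDGES = np.linspace(-4.0, 4.0, 25)                 # 24 bins; range is an arbitrary choice

def feat(X, w):
    """1-D marginal feature: normalized histogram of the projection X @ w."""
    h, _ = np.histogram(X @ w, bins=EDGES)
    return (h + 1e-6) / (h + 1e-6).sum()           # smoothed so KL stays finite

def energy(X, ws, lams):
    """E(x) = sum_i <lambda_i, phi_i(x)>, with p(x) proportional to exp(-E(x))."""
    idx = lambda v: np.clip(np.digitize(v, EDGES) - 1, 0, len(EDGES) - 2)
    return sum(lam[idx(X @ w)] for w, lam in zip(ws, lams))

def mcmc_sweep(X, ws, lams, step=0.3):
    """One Metropolis sweep with a Gaussian random-walk proposal."""
    P = X + step * rng.standard_normal(X.shape)
    accept = np.log(rng.random(len(X))) < energy(X, ws, lams) - energy(P, ws, lams)
    X[accept] = P[accept]
    return X

def learn_igm(data, candidates, k=3, iters=200, lr=1.0):
    ws, lams = [], []
    ref = 2.0 * rng.standard_normal(data.shape)    # samples from a featureless null model
    for _ in range(k):
        # Feature selection: max KL divergence between data and model marginals.
        gains = [np.sum(feat(data, w) * np.log(feat(data, w) / feat(ref, w)))
                 for w in candidates]
        ws.append(candidates[int(np.argmax(gains))])
        lams.append(np.zeros(len(EDGES) - 1))
        for _ in range(iters):                     # re-fit all potentials jointly
            ref = mcmc_sweep(ref, ws, lams)
            for i, w in enumerate(ws):
                lams[i] += lr * (feat(ref, w) - feat(data, w))
    return ws, lams
```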

  3. Maximum Entropy Principle • The model p(x) should match the true distribution f(x) on the statistics of the observed features, while remaining as random (maximum-entropy) as possible in all other dimensions
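In symbols (with φᵢ the selected feature statistics and f the true data density), this is the standard constrained program:

\[
\begin{aligned}
p^{*} = \arg\max_{p}\; & -\int p(x)\,\log p(x)\,dx \\
\text{s.t.}\; & \mathbb{E}_{p}\big[\phi_i(x)\big] = \mathbb{E}_{f}\big[\phi_i(x)\big], \quad i = 1, \dots, K, \\
& \int p(x)\,dx = 1 .
\end{aligned}
\]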

  4. Gibbs Distribution and KL-Divergence • The maximum-entropy solution is a Gibbs distribution; its parameters Λ are chosen to minimize the KL divergence from the true distribution (equations below)
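The equations on this slide were not transcribed, but the maximum-entropy result is standard; up to a sign convention on the potentials:

\[
p(x;\Lambda) = \frac{1}{Z(\Lambda)}\,\exp\!\Big\{-\sum_{i=1}^{K}\big\langle \lambda_i,\ \phi_i(x)\big\rangle\Big\},
\qquad
\Lambda^{*} = \arg\min_{\Lambda}\; \mathrm{KL}\big(f \,\big\|\, p(\cdot\,;\Lambda)\big),
\]

with Λ = {λ₁, …, λ_K} the Lagrange multipliers (one weight per histogram bin). Minimizing this KL divergence over Λ is equivalent to maximum-likelihood fitting.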

  5. Inhomogeneous Gibbs Model • Gaussian and mixture-of-Gaussians models are deemed inadequate for this data • Instead use vector-valued features: 1-D marginal histograms (sketch below)
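Isolating the feature construction from the earlier sketch, each feature is a whole histogram vector, not a scalar statistic. Bin count and range here are illustrative choices:

```python
import numpy as np

EDGES = np.linspace(-4.0, 4.0, 33)     # 32 bins; range/width are illustrative

def marginal_feature(X, w):
    """Vector-valued feature phi_w(X): the normalized histogram of the
    1-D projection X @ w (one histogram per selected direction w)."""
    counts, _ = np.histogram(X @ w, bins=EDGES)
    return counts / counts.sum()

# Usage: 500 samples in 10-D, one random unit-norm projection direction.
X = np.random.randn(500, 10)
w = np.random.randn(10); w /= np.linalg.norm(w)
print(marginal_feature(X, w).shape)    # (32,) -- one potential weight per bin
```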

  6. Approximate Information Gain and KL-Divergence • The effectiveness of a candidate feature is defined by the reduction in KL divergence it brings • The information gain is approximated in closed form by holding the old parameters constant (the paper's key contribution) • The same approximation extends to vector-valued features • (The slide's equation annotations mark the "gain" term and the "starting point" used in the next step.)
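The exact approximation formula lives on the slide image. As a stand-in consistent with the overview bullet ("max KL divergence between estimated and true distribution"), a candidate's gain can be scored by the KL divergence between its observed marginal and the current model's marginal, estimated from synthesized samples:

```python
import numpy as np

EDGES = np.linspace(-4.0, 4.0, 33)

def hist(v):
    h, _ = np.histogram(v, bins=EDGES)
    return (h + 1e-6) / (h + 1e-6).sum()          # smoothed to keep KL finite

def select_feature(X_obs, X_syn, candidates):
    """Pick the projection whose observed 1-D marginal diverges most (in KL)
    from the marginal of samples synthesized under the current model."""
    gains = [np.sum(hist(X_obs @ w) * np.log(hist(X_obs @ w) / hist(X_syn @ w)))
             for w in candidates]
    return int(np.argmax(gains)), gains

# Usage: data stretched along axis 0 vs. samples from a featureless model.
X_obs = np.random.randn(1000, 8) * [3, 1, 1, 1, 1, 1, 1, 1]
X_syn = np.random.randn(1000, 8)
best, gains = select_feature(X_obs, X_syn, list(np.eye(8)))
print(best)                                       # axis 0 should win
```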

  7. Estimating Λ: Importance Sampling • Obtain reference samples x_ref by MCMC, with the chain initialized at a good starting point • Update Λ by importance-weighted gradient steps (sketch below) • (Slide figure contrasts convergence from a good vs. a bad starting point.)
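A sketch of the reweighting idea (helper names are mine; energy convention as in the earlier sketches): once Λ moves away from the Λ_ref under which the MCMC samples were drawn, the reference set can be importance-reweighted instead of re-simulated:

```python
import numpy as np

EDGES = np.linspace(-4.0, 4.0, 33)

def bin_index(v):
    return np.clip(np.digitize(v, EDGES) - 1, 0, len(EDGES) - 2)

def energies(X, ws, lams):
    """E(x) = sum_i <lambda_i, phi_i(x)> under p(x) ~ exp(-E(x))."""
    return sum(lam[bin_index(X @ w)] for w, lam in zip(ws, lams))

def reweighted_marginal(X_ref, ws, lams_ref, lams_new, w):
    """Estimate the model marginal under Lambda_new by importance-weighting
    reference samples drawn by MCMC under Lambda_ref (no new MCMC run)."""
    logw = energies(X_ref, ws, lams_ref) - energies(X_ref, ws, lams_new)
    wts = np.exp(logw - logw.max())               # stabilized likelihood ratios
    wts /= wts.sum()
    return np.bincount(bin_index(X_ref @ w), weights=wts,
                       minlength=len(EDGES) - 1)
```

The gradient step then compares this reweighted marginal to the observed one. When Λ drifts far from Λ_ref the weights degenerate, which is the "bad starting point" failure mode in the slide's figure; a fresh MCMC run from the current samples restores a usable reference set.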

  8. A Toy Success Story • (Slide figure: panels show the true distribution, the initial reference samples, and the optimized estimate.)

  9. Caricature Generation: Representation • Learn a mapping from photo to caricature • Active appearance model representation: • Photos: shape + texture (44-D after PCA) • Caricatures: shape (25-D after PCA) • (PCA sketch below)
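The AAM coefficients come from PCA; a minimal NumPy version (the data sizes in the usage lines are invented for illustration):

```python
import numpy as np

def pca_coeffs(X, k):
    """Project centered data onto its top-k principal components
    (e.g., k = 44 for photo shape+texture, k = 25 for caricature shape)."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T, Vt[:k], mu

# Usage: 300 training photos flattened to 2000-D shape+texture vectors.
photos = np.random.randn(300, 2000)
coeffs, basis, mean = pca_coeffs(photos, k=44)
print(coeffs.shape)   # (300, 44)
```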

  10. Caricature Generation: Learning • Gain(1) = .447 down to Gain(17) = .196 (the information gain shrinks as features are added) • 100,000 reference samples • Training: 8 hours on a 1.4 GHz machine with 256 MB RAM, vs. 24 hours on a 667 MHz machine (18-D) • Slide equations (not transcribed): the estimate, the distribution samples are drawn from, and its approximation

  11. Caricature Generation: Results

  12. Caricature Generation: Results

  13. Comments • The paper claims a 100x speedup based on its efficiency analysis, but the reported timings suggest only about a 33% speedup in practice
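For context, normalizing the slide-10 timings naively by clock speed:

\[
\frac{24\ \text{h} \times 0.667\ \text{GHz}}{8\ \text{h} \times 1.4\ \text{GHz}}
= \frac{16.0}{11.2} \approx 1.4,
\]

i.e., roughly a 1.4x effective speedup, or about 30% fewer cycles, which is the basis for the "33% in reality" remark.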
