
Learning Inhomogeneous Gibbs Models

Ce Liu

celiu@microsoft.com

Histogram
  • Histogram: marginal distribution of image variances
  • Non-Gaussian distributed
Texture Synthesis (Heeger &amp; Bergen, 1995)
  • Image decomposition by steerable filters
  • Histogram matching
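
The histogram-matching step can be illustrated with a minimal NumPy sketch; the function name match_histogram and the quantile-based mapping are illustrative assumptions, not the exact procedure from the paper:

```python
import numpy as np

def match_histogram(source, reference, ):
    """Map the values of `source` so its histogram matches `reference` (quantile mapping)."""
    src, ref = source.ravel(), reference.ravel()
    # Rank of each source pixel, converted to a quantile in [0, 1]
    quantiles = np.argsort(np.argsort(src)) / (src.size - 1)
    # Look up the reference value at the same quantile
    matched = np.quantile(ref, quantiles)
    return matched.reshape(source.shape)

# Usage: match a noise subband to the histogram of a texture subband
rng = np.random.default_rng(0)
texture_band = rng.gamma(2.0, size=(64, 64))
noise_band = rng.normal(size=(64, 64))
synth_band = match_histogram(noise_band, texture_band)
```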
FRAME (Zhu et al., 1997)
  • Homogeneous Markov random field (MRF)
  • Minimax entropy principle to learn homogeneous Gibbs distribution
  • Gibbs sampling and feature selection
Our Problem
  • To learn the distribution of structural signals
  • Challenges
    • How to learn non-Gaussian distributions in high dimensions from a small number of observations?
    • How to capture the sophisticated properties of the distribution?
    • How to optimize parameters with global convergence?
Inhomogeneous Gibbs Models (IGM)

A framework to learn arbitrary high-dimensional distributions

  • 1D histograms on linear features to describe the high-dimensional distribution
  • Maximum Entropy Principle → Gibbs distribution
  • Minimum Entropy Principle → Feature Pursuit
  • Markov chain Monte Carlo for parameter optimization
  • Kullback-Leibler Feature (KLF)
1D Observation: Histograms
  • Feature f(x): R^d → R
    • Linear feature: f(x) = f^T x
    • Kernel distance: f(x) = ||f - x||
  • Marginal distribution
  • Histogram
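
A minimal sketch of how the marginal histogram of a linear feature response might be computed from samples; the helper name feature_histogram and the binning choices are illustrative assumptions:

```python
import numpy as np

def feature_histogram(X, f, n_bins=32):
    """Marginal (1D) histogram of the linear feature response f^T x over samples X (rows)."""
    r = X @ f                                    # one response per sample
    counts, edges = np.histogram(r, bins=n_bins)
    return counts / counts.sum(), edges          # normalized probability histogram

# Usage: histogram of a random projection of 2D samples
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
f = np.array([1.0, 0.5]); f /= np.linalg.norm(f)
h_obs, edges = feature_histogram(X, f)
```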
Learning Descriptive Models
  • With sufficiently many features, the learnt model converges to the underlying distribution
  • Linear features and histograms are robust compared with other higher-order statistics
  • Descriptive models
Maximum Entropy Principle
  • Maximum Entropy Model
    • Generalize the statistical properties of the observed data
    • Ensure the learnt model carries no more information than the observations provide
  • Mathematical formulation
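
A sketch of the formulation in LaTeX, assuming the standard constrained maximum-entropy setup in which the model's expected feature histograms are matched to their observed counterparts (the notation H_i, f_i is assumed, not copied from the slide):

```latex
p^{*} = \arg\max_{p} \; -\!\int p(x)\,\log p(x)\,dx
\quad \text{s.t.} \quad
\mathbb{E}_{p}\!\left[ H_i(f_i(x)) \right] = H_i^{\mathrm{obs}}, \;\; i = 1,\dots,K,
\qquad \int p(x)\,dx = 1 .
```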
Inhomogeneous Gibbs Distribution
  • Solution form of the maximum entropy model
  • Parameters: the Gibbs potential functions, one 1D potential per feature

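A sketch of the solution form in LaTeX, assuming 1D potentials acting on linear feature responses, which is the standard shape of the maximum-entropy solution (symbols are assumptions):

```latex
p(x;\Lambda) = \frac{1}{Z(\Lambda)}
\exp\!\Big( -\sum_{i=1}^{K} \lambda_i\big(f_i^{T}x\big) \Big),
\qquad
Z(\Lambda) = \int \exp\!\Big( -\sum_{i=1}^{K} \lambda_i\big(f_i^{T}x\big) \Big)\,dx ,
```

where Λ = {λ_1, …, λ_K} collects the Gibbs potentials, one per feature.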
Estimating Potential Function
  • Distribution form
  • Normalization
  • Maximum Likelihood Estimation (MLE)
  • 1st and 2nd order derivatives
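
A sketch of the log-likelihood and its derivatives, assuming each potential is discretized into a vector λ_i of bin heights multiplying the feature histogram H_i(x); this is the standard exponential-family result, with notation assumed rather than taken from the slide:

```latex
L(\Lambda) = \frac{1}{N}\sum_{n=1}^{N}\log p(x_n;\Lambda)
= -\sum_{i}\lambda_i^{T} H_i^{\mathrm{obs}} - \log Z(\Lambda),
\qquad
\frac{\partial L}{\partial \lambda_i}
= \mathbb{E}_{p(x;\Lambda)}\!\big[ H_i(x) \big] - H_i^{\mathrm{obs}},
\qquad
\frac{\partial^{2} L}{\partial \lambda_i\,\partial \lambda_j}
= -\,\mathrm{Cov}_{p(x;\Lambda)}\!\big[ H_i(x),\, H_j(x) \big].
```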
Parameter Learning
  • Monte Carlo integration
  • Algorithm
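
A minimal sketch of one parameter update, assuming synthesized samples from the current model are available (e.g. from the MCMC step) and each potential is a discretized vector of bin heights; the function name and array shapes are illustrative assumptions:

```python
import numpy as np

def update_potentials(lambdas, h_obs, X_syn, features, edges, step=0.1):
    """One gradient-ascent step on the log-likelihood.

    lambdas:  (K, B) discretized potentials (K features, B bins)
    h_obs:    (K, B) observed feature histograms (normalized)
    X_syn:    (M, d) samples drawn from the current model by MCMC
    features: (K, d) linear feature vectors
    edges:    (B + 1,) shared bin edges
    """
    h_syn = np.zeros_like(h_obs, dtype=float)
    for i in range(lambdas.shape[0]):
        r = X_syn @ features[i]                  # Monte Carlo feature responses
        counts, _ = np.histogram(r, bins=edges)
        h_syn[i] = counts / counts.sum()         # Monte Carlo estimate of E_p[H_i]
    # dL/dlambda_i = E_p[H_i] - H_i_obs, so ascend by step * (h_syn - h_obs)
    return lambdas + step * (h_syn - h_obs)
```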
Minimum Entropy Principle
  • Minimum entropy principle
    • To make the learnt distribution close to the observed
  • Feature selection
Feature Pursuit
  • A greedy procedure to learn the feature set
  • Reference model
  • Approximate information gain
Proposition

The approximate information gain for a new feature, and the optimal energy function for that feature, are given in closed form.

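A hedged reconstruction of the two expressions in LaTeX, assuming the standard minimax-entropy approximation in which the gain of a candidate feature is roughly the KL divergence between its observed marginal and its marginal under the current model, and the optimal potential is the corresponding log-ratio; the exact formulas on the original slide may differ:

```latex
d(f_{K+1}) \;\approx\;
KL\!\left( h_{K+1}^{\mathrm{obs}} \,\big\|\, h_{K+1}^{p} \right)
= \sum_{z} h_{K+1}^{\mathrm{obs}}(z)\,
  \log\frac{h_{K+1}^{\mathrm{obs}}(z)}{h_{K+1}^{p}(z)},
\qquad
\lambda_{K+1}^{*}(z) = -\log\frac{h_{K+1}^{\mathrm{obs}}(z)}{h_{K+1}^{p}(z)}
\;\;(\text{up to an additive constant}).
```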
Kullback-Leibler Feature
  • Kullback-Leibler feature: the feature whose observed and synthesized marginal histograms differ most (largest KL divergence)
  • Pursue the feature by
    • Hybrid Monte Carlo
    • Sequential 1D optimization
    • Feature selection
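
A minimal sketch of the selection step over a finite pool of candidate directions; the candidate pool and the binning are illustrative assumptions, and in practice the feature direction would be optimized (e.g. by hybrid Monte Carlo) rather than enumerated:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """Discrete KL(p || q) with smoothing to avoid division by zero."""
    p = p + eps; q = q + eps
    p /= p.sum(); q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def select_kl_feature(X_obs, X_syn, candidates, n_bins=32):
    """Return the candidate feature whose observed/synthesized marginals disagree most."""
    best_f, best_kl = None, -np.inf
    for f in candidates:                       # each f is a unit vector in R^d
        r_obs, r_syn = X_obs @ f, X_syn @ f
        lo = min(r_obs.min(), r_syn.min())
        hi = max(r_obs.max(), r_syn.max())
        h_obs, edges = np.histogram(r_obs, bins=n_bins, range=(lo, hi))
        h_syn, _ = np.histogram(r_syn, bins=edges)
        kl = kl_divergence(h_obs.astype(float), h_syn.astype(float))
        if kl > best_kl:
            best_f, best_kl = f, kl
    return best_f, best_kl
```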
Acceleration by Importance Sampling
  • Gibbs sampling is too slow…
  • Importance sampling by the reference model
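
A minimal sketch of the acceleration, assuming samples X_ref were drawn once from a reference model with potentials lambdas_ref; expectations under updated potentials lambdas_new are then estimated with self-normalized importance weights instead of fresh MCMC (names and shapes are illustrative assumptions):

```python
import numpy as np

def reweighted_histograms(X_ref, features, edges, lambdas_ref, lambdas_new):
    """Estimate model histograms under new potentials by importance-weighting
    samples X_ref drawn from the reference model (self-normalized weights)."""
    K = features.shape[0]
    R = X_ref @ features.T                                       # (M, K) feature responses
    idx = np.clip(np.digitize(R, edges) - 1, 0, len(edges) - 2)  # bin index of each response
    rows = np.arange(K)[None, :]                                 # feature index per column
    # Energy difference (new minus reference) per sample; Z cancels after normalization
    dE = (lambdas_new[rows, idx] - lambdas_ref[rows, idx]).sum(axis=1)
    w = np.exp(-dE)
    w /= w.sum()                                                 # self-normalized weights
    h_syn = np.zeros_like(lambdas_new, dtype=float)
    for i in range(K):                                           # weighted histogram per feature
        h_syn[i], _ = np.histogram(R[:, i], bins=edges, weights=w)
    return h_syn
```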
Flowchart of IGM

[Flowchart: Obs samples → Obs histograms; IGM → MCMC → Syn samples; Feature Pursuit → KL feature; if KL < ε, output the model; otherwise repeat the loop.]

Toy Problems (1)

[Figure: two toy distributions, a circle and a mixture of two Gaussians; panels show the pursued features, synthesized samples, Gibbs potentials, observed histograms, and synthesized histograms.]

Applied to High Dimensions
  • In high-dimensional space
    • Too many features to constrain every dimension
    • MCMC sampling is extremely slow
  • Solution: dimension reduction by PCA
  • Application: learning face prior model
    • 83 landmarks represent each face (166 dimensions)
    • 524 samples
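
A small sketch of the PCA reduction, assuming the training shapes are stacked as rows of a (524, 166) matrix; the number of retained components is an illustrative choice:

```python
import numpy as np

def pca_reduce(X, n_components=30):
    """Project landmark vectors onto the top principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]                 # (n_components, d)
    Z = Xc @ basis.T                          # low-dimensional coordinates
    return Z, basis, mean

# Usage: X_faces has shape (524, 166), i.e. 83 (x, y) landmarks per face
# Z, basis, mean = pca_reduce(X_faces, n_components=30)
# The IGM is then learnt in the reduced space; samples map back via Z @ basis + mean
```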
Face Prior Learning (1)

Observed face examples

Synthesized face samples without any features

Face Prior Learning (2)

Synthesized with 10 features

Synthesized with 20 features

Face Prior Learning (3)

Synthesized with 30 features

Synthesized with 50 features


CSAIL

Thank you!

celiu@csail.mit.edu