Shape modeling

Shape modeling

Dimensionality reduction


Computer Vision: Stages

  • Image formation

  • Low-level

    • Single image processing

    • Multiple views

  • Mid-level

    • Grouping information

    • Segmentation

  • High-level

    • Estimation

    • Recognition

    • Classification



Why model variations

Some objects share the same basic form but show some variation in contour shape, and perhaps also in pixel values.


Today

Segmentation using snakes (from segmentation)

Modeling variations (PCA)

Eigenfaces and active shape models

Combining shape and pixel values (Active Appearance Models)




Deformable contours

a.k.a. active contours, snakes

Given: initial contour (model) near desired object

(Single frame)

[Snakes: Active contour models, Kass, Witkin, & Terzopoulos, ICCV1987]

Fig: Y. Boykov



Deformable contours

a.k.a. active contours, snakes

Given: initial contour (model) near desired object

Goal: evolve the contour to fit exact object boundary

(Single frame)

[Snakes: Active contour models, Kass, Witkin, & Terzopoulos, ICCV1987]

Fig: Y. Boykov



Deformable contours: intuition

Image from http://www.healthline.com/blogs/exercise_fitness/uploaded_images/HandBand2-795868.JPG

Figure from Shapiro & Stockman


Deformable contours

a.k.a. active contours, snakes

(Figure: initial, intermediate, and final contours)

  • Initialize near contour of interest

  • Iteratively refine: elastic band is adjusted so as to

    • be near image positions with high gradients, and

    • satisfy shape “preferences” or contour priors

Fig: Y. Boykov


Deformable contours

a.k.a. active contours, snakes

(Figure: initial, intermediate, and final contours)

Like the generalized Hough transform, snakes are useful for shape fitting, but:

Hough
  • Fixed model shape
  • A single voting pass can detect multiple instances

Snakes
  • Prior on shape types, but the shape is iteratively adjusted (deforms)
  • Requires initialization nearby
  • One optimization “pass” to fit a single contour



Deformable contours

a.k.a. active contours, snakes

  • How is the current contour adjusted to find the new contour at each iteration?

  • Define a cost function (“energy” function) that says how good a possible configuration is.

  • Seek next configuration that minimizes that cost function.

What are examples of problems with energy functions that we have seen previously?



Snakes energy function

The total energy (cost) of the current snake is defined as E_total = E_internal + E_external, where:

Internal energy: encourage prior shape preferences: e.g., smoothness, elasticity, particular known shape.

External energy (“image” energy): encourage contour to fit on places where image structures exist, e.g., edges.

A good fit between the current deformable contour and the target shape in the image will yield a low value for this cost function.



Parametric curve representation (continuous case)

The curve is represented as v(s) = (x(s), y(s)), with the parameter s ∈ [0, 1].

Fig from Y. Boykov



Parametric curve representation(discrete form)

  • Represent the curve with a set of n points v_i = (x_i, y_i), i = 0, …, n − 1



External energy: intuition

  • Measure how well the curve matches the image data

  • “Attract” the curve toward different image features

    • Edges, lines, etc.



External image energy

How do edges affect “snap” of rubber band?

Think of external energy from image as gravitational pull towards areas of high contrast

(Figures: magnitude of gradient, and −(magnitude of gradient) used as the external energy)



External image energy

  • Image I(x, y)

  • Gradient images G_x(x, y) and G_y(x, y)

  • External energy at a point v(s) on the curve: E_ext(v(s)) = −(|G_x(v(s))|² + |G_y(v(s))|²)

  • External energy for the whole curve: E_external = ∫ E_ext(v(s)) ds
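As a concrete illustration of this external term, here is a minimal Python sketch (assuming NumPy and SciPy, and a greyscale image given as a 2-D float array); the function name and parameters are illustrative, not from the original slides.

import numpy as np
from scipy import ndimage

def external_energy(image, points):
    # E_external = sum of -(|Gx|^2 + |Gy|^2) at the snake points (row, col)
    gx = ndimage.sobel(image, axis=1)   # discrete d/dx
    gy = ndimage.sobel(image, axis=0)   # discrete d/dy
    mag2 = gx ** 2 + gy ** 2            # squared gradient magnitude
    rows = np.clip(np.round(points[:, 0]).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.round(points[:, 1]).astype(int), 0, image.shape[1] - 1)
    return -float(np.sum(mag2[rows, cols]))  # low energy where contrast is high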



Internal energy: intuition

A priori, we want to favor smooth shapes, contours with low curvature, contours similar to a known shape, etc. to balance what is actually observed (i.e., in the gradient image).

http://www3.imperial.ac.uk/pls/portallive/docs/1/52679.JPG



Internal energy

For a continuous curve, a common internal energy term is the “bending energy”.

At some point v(s) on the curve, this is:

E_int(v(s)) = α |dv/ds|² + β |d²v/ds²|²

The more the curve bends, the larger this energy value is.

The weights α and β dictate how much influence each component has: α controls elasticity (tension), β controls stiffness (curvature).

Internal energy for the whole curve: E_internal = ∫ E_int(v(s)) ds



Dealing with missing data

  • The smoothness constraint can deal with missing data:

[Figure from Kass et al. 1987]



Total energy (continuous form)

E_total = E_internal + E_external = ∫ ( α |dv/ds|² + β |d²v/ds²|² ) ds + ∫ E_ext(v(s)) ds

The first integral is the bending energy; the second (which is negative) measures the total edge strength under the curve.



Discrete energy function: external term

  • If the curve is represented by n points v_i = (x_i, y_i), then using discrete image gradients G_x and G_y:

E_external = − Σ_i ( |G_x(x_i, y_i)|² + |G_y(x_i, y_i)|² )



Discrete energy function: internal term

  • If the curve is represented by n points v_i = (x_i, y_i):

E_internal = Σ_i ( α |v_{i+1} − v_i|² + β |v_{i+1} − 2 v_i + v_{i−1}|² )

α controls elasticity (tension), β controls stiffness (curvature).
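A minimal Python sketch of this discrete internal term for a closed curve, assuming NumPy; the function and parameter names are illustrative.

import numpy as np

def internal_energy(points, alpha=1.0, beta=1.0):
    # points: (n, 2) array of v_i; the curve is treated as closed (wrap-around)
    prev_pts = np.roll(points, 1, axis=0)    # v_{i-1}
    next_pts = np.roll(points, -1, axis=0)   # v_{i+1}
    elastic = np.sum((next_pts - points) ** 2)                    # sum |v_{i+1} - v_i|^2
    stiffness = np.sum((next_pts - 2 * points + prev_pts) ** 2)   # sum |v_{i+1} - 2 v_i + v_{i-1}|^2
    return alpha * elastic + beta * stiffness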



Penalizing elasticity

  • The current elastic energy definition uses a discrete estimate of the derivative, and can be re-written as Σ_i |v_{i+1} − v_i|²

Possible problem with this definition?

This encourages a closed curve to shrink to a cluster.



Penalizing elasticity

  • To stop the curve from shrinking to a cluster of points, we can adjust the energy function to be Σ_i ( d̄ − |v_{i+1} − v_i| )²

  • This encourages chains of equally spaced points.

Here d̄ is the average distance between pairs of neighbouring points, updated at each iteration.
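A sketch of this adjusted elasticity term (assuming NumPy; names illustrative):

import numpy as np

def elastic_energy_spacing(points):
    # Penalise deviation of each segment length from the average spacing d_bar,
    # so a closed curve no longer gains energy by simply shrinking.
    next_pts = np.roll(points, -1, axis=0)
    seg_len = np.linalg.norm(next_pts - points, axis=1)   # |v_{i+1} - v_i|
    d_bar = seg_len.mean()                                # re-estimated at each iteration
    return float(np.sum((d_bar - seg_len) ** 2))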


Function of the weights

The weight controls the penalty for internal elasticity.

(Figure: snake results for large, medium, and small values of this weight)

Fig from Y. Boykov



Optional: specify shape prior

  • If the object is some smooth variation on a known shape, we can use a term that penalizes deviation from that shape (more about this later): Σ_i |v_i − v̂_i|²

    where the v̂_i are the points of the known shape.

Fig from Y. Boykov



Summary: elastic snake

  • A simple elastic snake is defined by

    • A set of n points,

    • An internal elastic energy term

    • An external edge based energy term

  • To use this to locate the outline of an object

    • Initialize in the vicinity of the object

    • Modify the points to minimize the total energy

How should the weights in the energy function be chosen?


Energy minimization

Several algorithms have been proposed to fit deformable contours; a simple greedy scheme follows.



Energy minimization: greedy

  • For each point, search window around it and move to where energy function is minimal

    • Typical window size, e.g., 5 x 5 pixels

  • Stop when predefined number of points have not changed in last iteration, or after max number of iterations

  • Note

    • Convergence not guaranteed

    • Need decent initialization

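A rough Python sketch of this greedy scheme, reusing the internal_energy and external_energy sketches from the earlier slides; all names and default parameter values are illustrative assumptions.

import numpy as np

def snake_energy(image, points, alpha, beta):
    # total energy = internal + external (see the earlier sketches)
    return internal_energy(points, alpha, beta) + external_energy(image, points)

def greedy_snake(image, points, alpha=1.0, beta=1.0, win=5, max_iters=100, min_moved=3):
    half = win // 2
    points = points.astype(float).copy()
    for _ in range(max_iters):
        moved = 0
        for i in range(len(points)):
            original = points[i].copy()
            best, best_e = original.copy(), snake_energy(image, points, alpha, beta)
            for dr in range(-half, half + 1):          # search a win x win window
                for dc in range(-half, half + 1):
                    points[i] = original + [dr, dc]
                    e = snake_energy(image, points, alpha, beta)
                    if e < best_e:
                        best_e, best = e, points[i].copy()
            points[i] = best
            if not np.array_equal(best, original):
                moved += 1
        if moved < min_moved:    # few points changed in the last iteration
            break                # (convergence is not guaranteed)
    return points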



Tracking via deformable models

  • Use final contour/model extracted at frame t as an initial solution for frame t+1

  • Evolve initial contour to fit exact object boundary at frame t+1

  • Repeat, initializing with most recent frame.



Deformable contours

Tracking Heart Ventricles

(multiple frames)



Shape

  • How to describe the shape of the human face?



Face AAM



Objective Formulation

  • Millions of pixels

  • Transform into a few parameters, e.g. man/woman, fat/skinny, etc.



Point correspondence

Shape representation: collection of points (or marks)

Given a set of curves or surfaces, we have the point correspondence problem: how to define dense marks that correspond across the set of shapes.



Face AAM

  • Training set

    • 35 face images annotated with 58 landmarks

  • Active Appearance Model

    • RGB texture model sampled at 30,000 positions

    • 28 model parameters span 95% of the variation

(Figure: training image, annotation, model mesh, and shape-compensation)



Eigenfaces: the idea

Think of a face as being a weighted combination of some “component” or “basis” faces. These basis faces are called eigenfaces.

(Figure: a face and its weights on six basis faces: −8029, 2900, 1751, 1445, 4238, 6193)



Eigenfaces: representing faces

  • These basis faces can be differently weighted to represent any face

  • So we can use different vectors of weights to represent different faces

(Figure: two faces represented by different weight vectors on the same six basis faces: −8029, 2900, 1751, 1445, 4238, 6193 and −1183, −2088, −4336, −669, −4221, 10549)



Learning Eigenfaces

Q: How do we pick the set of basis faces?

A: We take a set of real training faces

Then we find (learn) a set of basis faces which best represent the differences between them

We’ll use a statistical criterion for measuring this notion of “best representation of the differences between the training faces”

We can then store each face as a set of weights for those basis faces



Using Eigenfaces: recognition & reconstruction

  • We can use the eigenfaces in two ways

  • 1: we can store and then reconstruct a face from a set of weights

  • 2: we can recognise a new picture of a familiar face



Principal Component Analysis

A sample of n observations in 2-D space

Goal: to account for the variation in the sample in as few variables as possible, to some accuracy



Principal Component Analysis

  • The 1st PC is a minimum-distance fit to a line in space, along the direction of most variance (eigenvalue = variance)

  • The 2nd PC is a minimum-distance fit to a line in the plane perpendicular to the 1st PC

The PCs are a series of linear least-squares fits to a sample, each orthogonal to all the previous ones. Combined, they constitute a change of basis.
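A small Python sketch of this, assuming NumPy: the principal components are the eigenvectors of the sample covariance matrix, and projecting onto the first PC gives a one-dimensional description of the 2-D sample. The synthetic data are for illustration only.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
data = np.column_stack([x, 0.8 * x + 0.3 * rng.normal(size=200)])   # correlated 2-D sample

mean = data.mean(axis=0)
cov = np.cov(data, rowvar=False)                  # 2 x 2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)            # symmetric matrix -> eigh
order = np.argsort(eigvals)[::-1]                 # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

weights = (data - mean) @ eigvecs[:, :1]          # coordinates along the 1st PC
approx = mean + weights @ eigvecs[:, :1].T        # least-squares reconstruction from 1 PC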



Learning Eigenfaces

  • How do we learn them?

  • We use a method called Principal Component Analysis (PCA)

  • To understand this we will need to understand

    • What an eigenvector is

    • What covariance is

  • But first we will look at what is happening in PCA qualitatively



Subspaces

Imagine that our face is simply a (high dimensional) vector of pixels

We can think more easily about 2d vectors

Here we have data in two dimensions

But we only really need one dimension to represent it



Finding Subspaces

Suppose we take a line through the space

And then take the projection of each point onto that line

This could represent our data in “one” dimension



Finding Subspaces

  • Some lines will represent the data in this way well, some badly

  • This is because the projection onto some lines separates the data well, and the projection onto some lines separates it badly



Finding Subspaces

  • Rather than a line we can perform roughly the same trick with a vector

  • Now we have to scale the vector to obtain any point on the line



Eigenvectors

  • An eigenvector is a vector that obeys the following rule: A v = λ v

    where A is a matrix and λ is a scalar (called the eigenvalue)

    e.g. one eigenvector of the example matrix satisfies A v = 4 v,

    so for this eigenvector of this matrix the eigenvalue is 4
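The example matrix itself is not reproduced in this transcript, so the snippet below uses an illustrative 2×2 matrix (an assumption) that does have 4 as an eigenvalue, and checks A v = λ v numerically.

import numpy as np

A = np.array([[2.0, 3.0],     # illustrative matrix, not the one from the slide
              [2.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                # contains 4.0 (and -1.0)

v = np.array([3.0, 2.0])      # an eigenvector of A with eigenvalue 4
print(A @ v, 4 * v)           # both print [12.  8.]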



Eigenvectors

  • We can think of matrices as performing transformations on vectors (e.g. rotations, reflections)

  • We can think of the eigenvectors of a matrix as special vectors (for that matrix) that are only scaled by that matrix

  • Different matrices have different eigenvectors

  • Only square matrices have eigenvectors

  • Not all square matrices have (real) eigenvectors

  • An n-by-n matrix has at most n distinct eigenvalues

  • For a symmetric matrix (such as a covariance matrix), eigenvectors belonging to distinct eigenvalues are orthogonal (i.e. perpendicular)



Covariance

  • Which single vector can be used to separate these points as much as possible?

  • This vector turns out to be a vector expressing the direction of the correlation

  • Here I have two variables, x and y

  • They co-vary (y tends to change in roughly the same direction as x)



Covariance

  • The covariances can be expressed as a matrix

  • The diagonal elements are the variances, e.g. Var(x1)

  • The covariance of two variables x and y measures how they vary together; for a sample it is cov(x, y) = (1/(n − 1)) Σ_i (x_i − x̄)(y_i − ȳ)
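A short NumPy illustration of the covariance matrix for two co-varying variables (the data are synthetic, for illustration only):

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.7 * x + 0.2 * rng.normal(size=500)    # y tends to move with x

C = np.cov(x, y)    # [[Var(x), cov(x, y)], [cov(x, y), Var(y)]], using 1/(n-1)
print(C)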



Eigenvectors of the covariance matrix

  • The covariance matrix has eigenvectors

    (Figure: a covariance matrix with its eigenvectors and eigenvalues)

  • Eigenvectors with larger eigenvalues correspond to directions in which the data varies more

  • Finding the eigenvectors and eigenvalues of the covariance matrix for a set of data is termed principal component analysis



Expressing points using eigenvectors

  • Suppose you think of your eigenvectors as specifying a new vector space

  • i.e. I can reference any point in terms of those eigenvectors

  • A point’s position in this new coordinate system is what we earlier referred to as its “weight vector”

  • For many data sets you can cope with fewer dimensions in the new space than in the old space



Eigenfaces

  • All we are doing in the face case is treating the face as a point in a high-dimensional space, and then treating the training set of face pictures as our set of points

  • To train:

    • We calculate the covariance matrix of the faces

    • We then find the eigenvectors of that covariance matrix

  • These eigenvectors are the eigenfaces or basis faces

  • Eigenfaces with bigger eigenvalues will explain more of the variation in the set of faces, i.e. will be more distinguishing
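A compact sketch of this training step, assuming `faces` is an (m, d) NumPy array of m training images flattened to d pixels; the SVD of the centred data is used because it yields the eigenvectors of the covariance matrix without ever forming the huge d × d matrix (the function name and return layout are illustrative assumptions).

import numpy as np

def train_eigenfaces(faces, k):
    mean_face = faces.mean(axis=0)
    X = faces - mean_face                          # centred data, shape (m, d)
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    eigenfaces = vt[:k]                            # top-k eigenfaces, shape (k, d)
    eigenvalues = (s ** 2)[:k] / (len(faces) - 1)  # variance captured by each eigenface
    return mean_face, eigenfaces, eigenvalues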



Eigenfaces: image space to face space

  • When we see an image of a face we can transform it to face space

  • There are k = 1…n eigenfaces u_k

  • The ith face in image space is a vector x_i

  • The corresponding weight is w_k = u_k^T (x_i − μ), the projection onto the kth eigenface after subtracting the mean face μ

  • We calculate the corresponding weight for every eigenface



Recognition in face space

  • Recognition is now simple. We find the Euclidean distance d_i = ‖w − w_i‖ between our face’s weight vector w and each stored face’s weight vector w_i in face space.

  • The closest face in face space is the chosen match
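A minimal sketch of projection into face space followed by nearest-neighbour recognition, continuing the array-layout assumptions of the training sketch above (eigenfaces as rows of a (k, d) array; all names are illustrative):

import numpy as np

def to_face_space(face, mean_face, eigenfaces):
    # weight vector w: one weight w_k per eigenface u_k
    return eigenfaces @ (face - mean_face)

def recognise(face, mean_face, eigenfaces, stored_weights, labels):
    w = to_face_space(face, mean_face, eigenfaces)
    d = np.linalg.norm(stored_weights - w, axis=1)   # Euclidean distance to each stored face
    return labels[int(np.argmin(d))]                 # closest face in face space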



Reconstruction

  • The more eigenfaces you have the better the reconstruction, but you can have high quality reconstruction even with a small number of eigenfaces

(Figure: reconstructions using different numbers of eigenfaces)
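The corresponding reconstruction is simply the mean face plus the weighted sum of eigenfaces; keeping fewer weights gives a coarser reconstruction (same assumed array layout as in the sketches above):

import numpy as np

def reconstruct(weights, mean_face, eigenfaces):
    return mean_face + weights @ eigenfaces     # mean + sum_k w_k * u_k

# e.g. a coarser reconstruction from only the first k components:
# approx = reconstruct(weights[:k], mean_face, eigenfaces[:k])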



Summary

  • Statistical approach to visual recognition

  • Also used for object recognition

  • Problems

  • Reference: M. Turk and A. Pentland (1991). Eigenfaces for recognition, Journal of Cognitive Neuroscience, 3(1): 71–86.



Active Appearance Models

  • Appearance models capture variability of objects

    • In terms of shape (landmark points) and texture (image intensities)

  • Ingredients

    • Statistical analyses of an annotated training set

  • Result

    • A generative model synthesising complete images of objects

  • Registration

    • Adjusting the model to fit an image

  • If this adjustment is done automatically and fast, we have an Active Appearance Model (AAM) [Edwards, Taylor & Cootes, AFGR 1998], [Cootes, Edwards & Taylor, ECCV 1998]


Modelling shape I

(Pipeline: one shape → Generalized Procrustes Analysis → Principal Component Analysis → shape parameterisation)
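A sketch of the resulting linear shape model, s ≈ s̄ + P b: PCA on the Procrustes-aligned landmark vectors, keeping enough modes to explain a chosen fraction of the variance. It assumes `aligned_shapes` is an (m, 2n) NumPy array of m aligned landmark configurations; the names and the 95% threshold are illustrative.

import numpy as np

def build_shape_model(aligned_shapes, variance_kept=0.95):
    mean_shape = aligned_shapes.mean(axis=0)
    X = aligned_shapes - mean_shape
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), variance_kept)) + 1
    P = vt[:k].T                          # shape modes as columns
    return mean_shape, P

def synthesise_shape(mean_shape, P, b):
    return mean_shape + P @ b             # new shape from parameter vector b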



Modelling Shape II



PC1 vs PC2



Modelling Shape III

  • The first three shape modes shown with a static texture

Mode 1 – 38%

Mode 2 – 13%

Mode 3 – 9%



Modelling texture I

  • Grey-level images

  • Colour images

  • Principal Component Analysis



Modelling Texture II

  • The first three texture modes shown with a static shape

Mode 1 – 21%

Mode 2 – 10%

Mode 3 – 8%



AAMExplorer Demo



Model optimisation I

  • AAMs use a simple and efficient iterative scheme

    • Synthesize

    • Calculate difference between synthetic and real image

    • Estimate parameter update

    • Update

    • Iterate until convergence
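The loop structure might look like the following sketch; the `model` object, its methods (synthesise, sample_image), and the pre-trained update matrix R are assumptions used only to illustrate the scheme, not an actual AAM API.

import numpy as np

def aam_search(model, image, p, max_iters=30, tol=1e-6):
    for _ in range(max_iters):
        synthetic = model.synthesise(p)           # 1. synthesise texture for parameters p
        sampled = model.sample_image(image, p)    # 2. sample the real image under the current shape
        residual = sampled - synthetic            #    difference between real and synthetic
        dp = -model.R @ residual                  # 3. estimate parameter update via matrix R
        p = p + dp                                # 4. update
        if np.linalg.norm(dp) < tol:              # 5. iterate until convergence
            break
    return p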



Model optimisation II

  • Estimation of the update matrix R

    • We wish to find the parameter update δp that drives the residual r(p) (the difference between the sampled image texture and the synthesised model texture) towards zero

    • Consider the 1st-order Taylor expansion around p*: r(p* + δp) ≈ r(p*) + (∂r/∂p) δp

    • Setting this to zero, we obtain the least-squares solution δp = −R r(p*), with R = ((∂r/∂p)^T (∂r/∂p))⁻¹ (∂r/∂p)^T



Model optimisation IV

  • We already know the optimal solution (annotations)

    • Updates are based on differences in a shape-normalised frame (hence “similar” optimisation problems)

    • Limited support

    • Works well for “mild” texture changes

    • Extensions for more severe texture variation have been proposed



Example AAM search

(Figure: unknown face, detected facial features, and reconstructed face)



Applications of face image analysis

  • Biometric security

    • Access systems

  • Lip-reading

    • Assisted speech recognition

    • Automated lip-syncing in cartoons

  • Eye-tracking

    • Human-computer interaction

    • Attention analysis (maps, instruments, road signs, road cues)

  • Virtual characters

    • Hugo!

  • Medical applications

    • Improve understanding of syndromic facial dysmorphologies, e.g. the Noonan syndrome


Virtual personal makeover

  • Applications of automated face analysis

    • A kiosk system for testing cosmetic products developed by the Toronto-based company Approach Infinity Media

    • Core image analysis technology supplied by IMM, DTU

“Our Personal Makeover tool allows you to fully customize your own unique style with thousands of different looks. Try new hair-styles, hair colors, thousands of different makeup styles, and lots of other fun accessories, all on your own image.”

[from www.ai-media.com]


More Examples of Appearance Models and Image Modalities



    Eye tracking

    • Down-scaling of eye-tracking systems to consumer hardware, i.e. low-priced web-cameras

