## Shape modeling

By **hilde**

Presentation Transcript

### Virtual personal makeover

### More Examples of Appearance Models and Image Modalities

Computer Vision: Stages

- Image formation
- Low-level
- Single image processing
- Multiple views

- Mid-level
- Grouping information
- Segmentation

- High-level
- Estimation,
- Recognition
- Classification

Why model variations?

Some objects have a similar basic form but some variety in the contour shape, and perhaps also in pixel values.

Segmentation using snakes (from segmentation)

Modeling variations (PCA)

Eigen faces and active shape models

Combining shape and pixel values ( Active Appearance models)

Today: Deformable contours

a.k.a. active contours, snakes

Given: initial contour (model) near desired object

Goal: evolve the contour to fit exact object boundary

(Single frame)

[Snakes: Active contour models, Kass, Witkin, & Terzopoulos, ICCV 1987]

Fig: Y. Boykov

Deformable contours: intuition

Image from http://www.healthline.com/blogs/exercise_fitness/uploaded_images/HandBand2-795868.JPG

Figure from Shapiro & Stockman


Deformable contours, a.k.a. active contours, snakes

- Initialize near contour of interest
- Iteratively refine: elastic band is adjusted so as to
- be near image positions with high gradients, and
- satisfy shape “preferences” or contour priors

Fig: Y. Boykov

Deformable contours, a.k.a. active contours, snakes

Like generalized Hough transform, useful for shape fitting; but

- Hough:
  - Fixed model shape
  - Single voting pass can detect multiple instances
- Snakes:
  - Prior on shape types, but shape iteratively adjusted (deforms)
  - Requires initialization nearby
  - One optimization “pass” to fit a single contour

Deformable contours, a.k.a. active contours, snakes

- How is the current contour adjusted to find the new contour at each iteration?
- Define a cost function (“energy” function) that says how good a possible configuration is.
- Seek next configuration that minimizes that cost function.

What are examples of problems with energy functions that we have seen previously?

Snakes energy function

The total energy (cost) of the current snake is defined as the sum of an internal and an external term: E_total = E_internal + E_external

Internal energy: encourage prior shape preferences: e.g., smoothness, elasticity, particular known shape.

External energy (“image” energy): encourage contour to fit on places where image structures exist, e.g., edges.

A good fit between the current deformable contour and the target shape in the image will yield a low value for this cost function.
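As a concrete sketch of such a cost, a discrete snake energy combining an internal (elastic + bending) term with an external gradient-based term might look like the following; the weights `alpha`, `beta`, `gamma` and the function name are illustrative, not taken from the slides:

```python
import numpy as np

def snake_energy(points, grad_mag, alpha=1.0, beta=1.0, gamma=1.0):
    """Total snake energy: internal (elastic + bending) plus external
    (negative image gradient magnitude along the contour).
    points: (n, 2) array of (x, y); grad_mag: 2-D gradient-magnitude image."""
    n = len(points)
    e_int = 0.0
    for i in range(n):                                            # closed contour
        d1 = points[(i + 1) % n] - points[i]                      # 1st difference: elasticity
        d2 = points[(i + 1) % n] - 2 * points[i] + points[i - 1]  # 2nd difference: stiffness
        e_int += alpha * float(d1 @ d1) + beta * float(d2 @ d2)
    # External energy: low (good) where gradient magnitude is high
    e_ext = -gamma * sum(float(grad_mag[int(y), int(x)]) for x, y in points)
    return e_int + e_ext
```

A contour lying on high-gradient pixels scores lower (better) than the same contour on a flat region.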

Parametric curve representation(discrete form)

- Represent the curve with a set of n points vᵢ = (xᵢ, yᵢ), i = 0, …, n − 1

External energy: intuition

- Measure how well the curve matches the image data
- “Attract” the curve toward different image features
- Edges, lines, etc.

External image energy

How do edges affect “snap” of rubber band?

Think of external energy from image as gravitational pull towards areas of high contrast

(figures: the magnitude-of-gradient image, and its negation −(magnitude of gradient) used as external energy)

External image energy

- Image I(x, y)
- Gradient images G_x(x, y) and G_y(x, y)
- External energy at a point v(s) on the curve: E_ext(v(s)) = −( |G_x(v(s))|² + |G_y(v(s))|² )
- External energy for the whole curve: E_ext = ∫ E_ext(v(s)) ds
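A minimal sketch of this external term, assuming central-difference gradients stand in for whatever gradient operator (e.g. Sobel) is actually used:

```python
import numpy as np

def external_energy(img, points):
    """External ("image") energy: minus the squared gradient magnitude,
    summed at the contour points."""
    gy, gx = np.gradient(img.astype(float))   # derivatives along rows (y) and columns (x)
    e = 0.0
    for x, y in points:
        e -= gx[int(y), int(x)] ** 2 + gy[int(y), int(x)] ** 2
    return e
```

Points placed on an intensity edge yield a lower (better) external energy than points in a flat region.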

Internal energy: intuition

A priori, we want to favor smooth shapes, contours with low curvature, contours similar to a known shape, etc. to balance what is actually observed (i.e., in the gradient image).

http://www3.imperial.ac.uk/pls/portallive/docs/1/52679.JPG

Internal energy

For a continuous curve, a common internal energy term is the “bending energy”.

At a point v(s) on the curve, this is: E_int(v(s)) = α |v′(s)|² + β |v″(s)|²

The more the curve bends, the larger this energy value is.

The weights α and β dictate how much influence each component has: α controls elasticity (tension), β controls stiffness (curvature).

Internal energy for the whole curve: E_int = ∫ ( α |v′(s)|² + β |v″(s)|² ) ds

Dealing with missing data

- The smoothness constraint can deal with missing data:

[Figure from Kass et al. 1987]

Discrete energy function: external term

- If the curve is represented by n points vᵢ = (xᵢ, yᵢ), the external term becomes a sum over discrete image gradients:
  E_ext = −Σᵢ ( |G_x(xᵢ, yᵢ)|² + |G_y(xᵢ, yᵢ)|² )

Discrete energy function: internal term

- With the curve represented by n points, the elasticity (tension) and stiffness (curvature) terms become:
  E_int = Σᵢ ( α |vᵢ₊₁ − vᵢ|² + β |vᵢ₊₁ − 2vᵢ + vᵢ₋₁|² )

Penalizing elasticity

- The current elastic energy definition uses a discrete estimate of the derivative, and can be re-written as: E_elastic = α Σᵢ |vᵢ₊₁ − vᵢ|²

Possible problem with this definition?

This encourages a closed curve to shrink to a cluster.

Penalizing elasticity

- To stop the curve from shrinking to a cluster of points, we can adjust the energy function to be: E_elastic = α Σᵢ ( d̄ − |vᵢ₊₁ − vᵢ| )²
- This encourages chains of equally spaced points.

Here d̄ is the average distance between pairs of adjacent points, updated at each iteration. The weight α controls the penalty for internal elasticity (figure: resulting contours for small and medium weights; Fig from Y. Boykov).
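The adjusted elastic term can be sketched as follows; `elastic_energy_spacing` is an illustrative name, with the average spacing re-estimated from the current points:

```python
import numpy as np

def elastic_energy_spacing(points):
    """Modified elastic term: penalize each segment's deviation from the
    current average spacing d_bar (re-estimated every iteration), so the
    closed contour favours equally spaced points instead of shrinking."""
    diffs = np.diff(points, axis=0, append=points[:1])   # wrap around: closed curve
    lengths = np.linalg.norm(diffs, axis=1)
    d_bar = lengths.mean()                               # average adjacent-point distance
    return float(np.sum((d_bar - lengths) ** 2))
```

An equally spaced contour has zero penalty; unevenly spaced points are penalized.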

Optional: specify shape prior

- If the object is some smooth variation on a known shape, we can use a term that will penalize deviation from that shape (more about this later): E_shape = Σᵢ |vᵢ − v̂ᵢ|², where the v̂ᵢ are the points of the known shape.

Fig from Y. Boykov

Summary: elastic snake

- A simple elastic snake is defined by
- A set of n points,
- An internal elastic energy term
- An external edge based energy term

- To use this to locate the outline of an object
- Initialize in the vicinity of the object
- Modify the points to minimize the total energy

How should the weights in the energy function be chosen?

Several algorithms have been proposed to fit deformable contours; a simple one is greedy energy minimization:

- For each point, search a window around it and move the point to where the energy function is minimal
  - Typical window size: e.g., 5 × 5 pixels
- Stop when a predefined number of points have not changed in the last iteration, or after a max number of iterations
- Note:
  - Convergence is not guaranteed
  - A decent initialization is needed
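The greedy window search can be sketched like this; `energy_fn` is any total-energy function over all points, and the names are illustrative:

```python
import numpy as np

def greedy_snake_step(points, energy_fn, window=2):
    """One greedy iteration: move each point to the position in its
    (2*window+1)^2 neighborhood that minimizes the total energy.
    Returns the updated points and how many points moved."""
    pts = points.copy()
    moved = 0
    for i in range(len(pts)):
        best = energy_fn(pts)                     # energy with point i unmoved
        best_pos = pts[i].copy()
        for dx in range(-window, window + 1):
            for dy in range(-window, window + 1):
                pts[i] = points[i] + np.array([dx, dy])   # candidate position
                e = energy_fn(pts)
                if e < best:
                    best, best_pos = e, pts[i].copy()
        if not np.array_equal(best_pos, points[i]):
            moved += 1
        pts[i] = best_pos                         # commit the best candidate
    return pts, moved
```

Iterating this step until few points move (or a max iteration count) gives the stopping rule above; as noted, convergence is not guaranteed.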

Tracking via deformable models

- Use final contour/model extracted at frame t as an initial solution for frame t+1
- Evolve initial contour to fit exact object boundary at frame t+1
- Repeat, initializing with most recent frame.

Shape

- How to describe the shape of the human face?

Face AAM

Objective Formulation

- Millions of pixels
- Transform into a few parameters, e.g. man/woman, fat/skinny, etc.

Point correspondence

Shape representation: collection of points (or marks)

Given a set of curves or surfaces, we have the point correspondence problem: how to define dense marks that correspond across the set of shapes.

Face AAM

- Training set
- 35 face images annotated with 58 landmarks

- Active Appearance Model
- RGB texture model sampled at 30,000 positions
- 28 model parameters span 95% variation

Training image

Annotation

Model mesh

Shape-compensation

Eigenfaces: the idea

Think of a face as being a weighted combination of some “component” or “basis” faces. These basis faces are called eigenfaces

(figure: example eigenface weights: −8029, 2900, 1751, 1445, 4238, 6193)

…

Eigenfaces: representing faces

- These basis faces can be differently weighted to represent any face
- So we can use different vectors of weights to represent different faces

(figure: example weight values: −8029, −1183, 2900, −2088, 1751, −4336, 1445, −669, 4238, −4221, 6193, 10549)

Learning Eigenfaces

Q: How do we pick the set of basis faces?

A: We take a set of real training faces

Then we find (learn) a set of basis faces which best represent the differences between them

We’ll use a statistical criterion for measuring this notion of “best representation of the differences between the training faces”

We can then store each face as a set of weights for those basis faces

…

Using Eigenfaces: recognition & reconstruction

- We can use the eigenfaces in two ways
- 1: we can store and then reconstruct a face from a set of weights
- 2: we can recognise a new picture of a familiar face

Principal Component Analysis

A sample of n observations in 2-D space

Goal: to account for the variation in a sample in as few variables as possible, to some accuracy

Principal Component Analysis

- the 1st PC is a minimum-distance fit to a line in space, along the direction of most variance (eigenvalue = variance)

- the 2nd PC is a minimum-distance fit to a line in the plane perpendicular to the 1st PC

PCs are a series of linear least-squares fits to a sample, each orthogonal to all the previous. Combined, they constitute a change of basis.

Learning Eigenfaces

- How do we learn them?
- We use a method called Principal Components Analysis (PCA)
- To understand this we will need to understand
- What an eigenvector is
- What covariance is

- But first we will look at what is happening in PCA qualitatively

Subspaces

Imagine that our face is simply a (high dimensional) vector of pixels

We can think more easily about 2d vectors

Here we have data in two dimensions

But we only really need one dimension to represent it

Finding Subspaces

Suppose we take a line through the space

And then take the projection of each point onto that line

This could represent our data in “one” dimension

Finding Subspaces

- Some lines will represent the data in this way well, some badly
- This is because the projection onto some lines separates the data well, and the projection onto some lines separates it badly

Finding Subspaces

- Rather than a line we can perform roughly the same trick with a vector
- Now we have to scale the vector to obtain any point on the line

Eigenvectors

- An eigenvector is a vector v that obeys the following rule: A v = λ v, where A is a matrix and λ is a scalar (called the eigenvalue)

e.g. one eigenvector of A = [[2, 3], [2, 1]] is v = (3, 2)ᵀ, since A v = (12, 8)ᵀ = 4 v

so for this eigenvector of this matrix the eigenvalue is 4
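This can be checked numerically. The 2×2 matrix below is an illustrative choice (the slide's own matrix is not reproduced in this transcript) that has an eigenvector with eigenvalue 4:

```python
import numpy as np

# Illustrative matrix whose eigenvector v = (3, 2) has eigenvalue 4
A = np.array([[2.0, 3.0],
              [2.0, 1.0]])
v = np.array([3.0, 2.0])

print(A @ v)                 # equals 4 * v, so v is an eigenvector
lam = (A @ v)[0] / v[0]      # recover the eigenvalue from one component
print(lam)                   # 4.0

# numpy agrees: the eigenvalues of A are -1 and 4
print(np.sort(np.linalg.eigvals(A)))
```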

Eigenvectors

- We can think of matrices as performing transformations on vectors (e.g rotations, reflections)
- We can think of the eigenvectors of a matrix as being special vectors (for that matrix) that are scaled by that matrix
- Different matrices have different eigenvectors
- Only square matrices have eigenvectors
- Not all square matrices have (real) eigenvectors
- An n by n matrix has at most n linearly independent eigenvectors
- All the distinct eigenvectors of a symmetric matrix (such as a covariance matrix) are orthogonal (i.e. perpendicular)

Covariance

- Which single vector can be used to separate these points as much as possible?
- This vector turns out to be a vector expressing the direction of the correlation
- Here I have two variables, x and y
- They co-vary (y tends to change in roughly the same direction as x)

Covariance

- The covariances can be expressed as a matrix
- The diagonal elements are the variances e.g. Var(x1)
- The covariance of two variables is: Cov(x, y) = (1 / (n − 1)) Σᵢ (xᵢ − x̄)(yᵢ − ȳ)
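A quick numerical sketch with synthetic co-varying data (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.8 * x + 0.2 * rng.normal(size=500)   # y co-varies with x

C = np.cov(np.stack([x, y]))   # 2x2 covariance matrix (rows are variables)
# C[0, 0] = Var(x), C[1, 1] = Var(y), C[0, 1] = C[1, 0] = Cov(x, y)
```

Because y tends to change in the same direction as x, the off-diagonal covariance entry is positive.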

Eigenvectors of the covariance matrix

- The covariance matrix has eigenvectors and eigenvalues
- Eigenvectors with larger eigenvalues correspond to directions in which the data varies more
- Finding the eigenvectors and eigenvalues of the covariance matrix for a set of data is termed principal components analysis
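A minimal PCA sketch along these lines, assuming the data is arranged with one observation per row; `eigh` is used because a covariance matrix is symmetric:

```python
import numpy as np

def pca(X):
    """Principal components analysis via the covariance matrix.
    X: (n_samples, n_features). Returns eigenvalues in descending order
    and the matching unit eigenvectors as columns."""
    C = np.cov(X, rowvar=False)        # covariance of the features
    vals, vecs = np.linalg.eigh(C)     # ascending eigenvalues for symmetric C
    order = np.argsort(vals)[::-1]     # largest variance first
    return vals[order], vecs[:, order]
```

For strongly correlated 2-D data, the first eigenvalue dominates and the eigenvectors are orthogonal, as the slides state.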

Expressing points using eigenvectors

- Suppose you think of your eigenvectors as specifying a new vector space
- i.e. I can reference any point in terms of those eigenvectors
- A point’s position in this new coordinate system is what we earlier referred to as its “weight vector”
- For many data sets you can cope with fewer dimensions in the new space than in the old space

Eigenfaces

- All we are doing in the face case is treating the face as a point in a high-dimensional space, and then treating the training set of face pictures as our set of points
- To train:
- We calculate the covariance matrix of the faces
- We then find the eigenvectors of that covariance matrix

- These eigenvectors are the eigenfaces or basis faces
- Eigenfaces with bigger eigenvalues will explain more of the variation in the set of faces, i.e. will be more distinguishing
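A sketch of this training step on synthetic data; `train_eigenfaces` is an illustrative name, and SVD of the centered data is used here as a standard equivalent of eigen-decomposing the covariance matrix:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) matrix, one flattened face per row.
    Returns the mean face and the top-k eigenfaces (rows). The SVD
    right singular vectors are the covariance eigenvectors, ordered by
    decreasing eigenvalue -- the usual trick when n_pixels >> n_images."""
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]
```

The returned eigenfaces form an orthonormal basis for the strongest directions of variation in the training set.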

Eigenfaces: image space to face space

- When we see an image of a face we can transform it to face space
- There are k = 1…n eigenfaces u_k
- The i-th face in image space is a vector x_i
- The corresponding weight is w_k = u_kᵀ x_i, the projection of the face onto eigenface u_k (in practice the mean face is subtracted first)
- We calculate the corresponding weight for every eigenface

Recognition in face space

- Recognition is now simple. We find the Euclidean distance d between our face and all the other stored faces in face space: dᵢ = ‖w − wᵢ‖
- The closest face in face space is the chosen match
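A minimal sketch of projection and nearest-neighbour recognition; the function names and the tiny "faces" are illustrative:

```python
import numpy as np

def to_face_space(x, mean, eigenfaces):
    """Weight vector: projection of the mean-subtracted face onto each
    eigenface (eigenfaces stored as rows)."""
    return eigenfaces @ (x - mean)

def recognize(x, mean, eigenfaces, gallery_weights):
    """Nearest neighbour in face space by Euclidean distance; returns
    the index of the closest stored face."""
    w = to_face_space(x, mean, eigenfaces)
    return int(np.argmin(np.linalg.norm(gallery_weights - w, axis=1)))
```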

Reconstruction

- The more eigenfaces you have, the better the reconstruction, but you can have high-quality reconstruction even with a small number of eigenfaces

(figure: reconstructions using 82, 70, 50, 30, 20, and 10 eigenfaces)
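Reconstruction from the first k weights can be sketched as follows (illustrative, assuming orthonormal eigenfaces stored as rows):

```python
import numpy as np

def reconstruct(w, mean, eigenfaces, k):
    """Rebuild a face from its first k weights; using more eigenfaces
    gives a better reconstruction."""
    return mean + w[:k] @ eigenfaces[:k]
```

With all weights the face is recovered exactly (when the eigenfaces span it); truncating to fewer weights increases the reconstruction error.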

Summary

- Statistical approach to visual recognition
- Also used for object recognition
- Problems
- Reference: M. Turk and A. Pentland (1991). Eigenfaces for recognition, Journal of Cognitive Neuroscience, 3(1): 71–86.

Active Appearance Models

- Appearance models capture variability of objects
- In terms of shape (landmark points) and texture (image intensities)

- Ingredients
- Statistical analyses of an annotated training set

- Result
- A generative model synthesising complete images of objects

- Registration
- Adjusting the model to fit an image

- If this adjustment is done automatically and fast, we have an Active Appearance Model (AAM) [Edwards, Taylor & Cootes, AFGR 1998], [Cootes, Edwards & Taylor, ECCV 1998]

Generalized Procrustes Analysis

Modelling shape I: One shape

Principal Component Analysis

Shape parameterisation

Modelling Shape II

PC1 vs PC2

Modelling Shape III

- The first three shape modes shown with a static texture

Mode 1 – 38%

Mode 2 – 13%

Mode 3 – 9%

Modelling Texture I

- Grey-level images
- Colour images
- Principal Component Analysis

Modelling Texture II

- The first three texture modes shown with a static shape

Mode 1 – 21%

Mode 2 – 10%

Mode 3 – 8%

AAMExplorer Demo

Model optimisation I

- AAMs use a simple and efficient iterative scheme
- Synthesize
- Calculate difference between synthetic and real image
- Estimate parameter update
- Update
- Iterate until convergence
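The iterative scheme above can be sketched as follows; `synthesize` and the update matrix `R` are assumed inputs, not a real AAM library API:

```python
import numpy as np

def aam_search(p, image, synthesize, R, max_iter=30, tol=1e-8):
    """Sketch of the iterative AAM search: synthesize, compute the
    residual against the real image, estimate a parameter update via
    the pre-trained update matrix R, and iterate until convergence."""
    for _ in range(max_iter):
        r = image - synthesize(p)     # difference between real and synthetic image
        dp = R @ r                    # estimated parameter update
        p = p + dp                    # update the model parameters
        if np.linalg.norm(dp) < tol:  # converged: update is negligible
            break
    return p
```

For a toy linear model `synthesize(p) = M @ p`, the pseudo-inverse of M is an ideal update matrix and the search recovers the true parameters.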

Model optimisation II

- Estimation of the update matrix R
- We wish to find the parameter update δp that minimises the residual ‖r(p* + δp)‖²
- Consider the 1st-order Taylor expansion around p*: r(p* + δp) ≈ r(p*) + (∂r/∂p) δp
- Using this, we obtain the least-squares solution: δp = −(∂r/∂pᵀ ∂r/∂p)⁻¹ ∂r/∂pᵀ r(p*) = R r(p*)

Model optimisation IV

- We already know the optimal solution (annotations)
- Updates are based on differences in a shape-normalised frame (hence “similar” optimisation problems)
- Limited support
- Works well for “mild” texture changes
- Extensions for more severe texture variation have been proposed

Example: AAM search

reconstructed face

unknown face

detected facial features

Applications of face image analysis: lip-reading, eye-tracking, virtual characters, medical applications

- Biometric security
- Access systems

- Assisted speech recognition
- Automated lip-syncing in cartoons

- Human-computer interaction
- Attention analysis (maps, instruments, road signs, road cues)

- Hugo!

- Improve understanding of syndromic facial dysmorphologies, e.g. the Noonan syndrome

Virtual personal makeover

- Applications of automated face analysis
- A kiosk system for testing cosmetic products developed by the Toronto-based company Approach Infinity Media
- Core image analysis technology supplied by IMM, DTU

“Our Personal Makeover tool allows you to fully customize your own unique style with thousands of different looks. Try new hair-styles, hair colors, thousands of different makeup styles, and lots of other fun accessories, all on your own image.”

[from www.ai-media.com]

Eye tracking

- Down-scaling of eye-tracking systems to consumer hardware, i.e. low-priced web-cameras
