Variations

Dimensionality reduction

Computer Vision: Stages
  • Image formation
  • Low-level
    • Single image processing
    • Multiple views
  • Mid-level
    • Grouping information
    • Segmentation
  • High-level
    • Estimation
    • Recognition
    • Classification
Why model variations

Some objects share a similar basic form but show some variety in contour shape, and perhaps also in pixel values.

Today

Segmentation using snakes (continued from the segmentation lecture)

Modeling variations (PCA)

Eigenfaces and active shape models

Combining shape and pixel values (Active Appearance Models)

Deformable contours

a.k.a. active contours, snakes

Given: initial contour (model) near desired object

Goal: evolve the contour to fit exact object boundary

(Single frame)

[Snakes: Active contour models, Kass, Witkin & Terzopoulos, ICCV 1987]

Fig: Y. Boykov

Deformable contours: intuition

Image from http://www.healthline.com/blogs/exercise_fitness/uploaded_images/HandBand2-795868.JPG

Figure from Shapiro & Stockman

Deformable contours

a.k.a. active contours, snakes

(Figure: initial, intermediate, and final contours)

  • Initialize near contour of interest
  • Iteratively refine: the elastic band is adjusted so as to
    • be near image positions with high gradients, and
    • satisfy shape “preferences” or contour priors

Fig: Y. Boykov

Deformable contours

a.k.a. active contours, snakes

(Figure: initial, intermediate, and final contours)

Like the generalized Hough transform, snakes are useful for shape fitting, but the two differ:

  • Hough
    • Fixed model shape
    • Single voting pass can detect multiple instances
  • Snakes
    • Prior on shape types, but shape iteratively adjusted (deforms)
    • Requires initialization nearby
    • One optimization “pass” to fit a single contour

Deformable contours

a.k.a. active contours, snakes

(Figure: initial, intermediate, and final contours)

  • How is the current contour adjusted to find the new contour at each iteration?
  • Define a cost function (“energy” function) that says how good a possible configuration is.
  • Seek next configuration that minimizes that cost function.

What are examples of problems with energy functions that we have seen previously?

Snakes energy function

The total energy (cost) of the current snake is defined as:

E_total = E_internal + E_external

Internal energy: encourage prior shape preferences: e.g., smoothness, elasticity, particular known shape.

External energy (“image” energy): encourage contour to fit on places where image structures exist, e.g., edges.

A good fit between the current deformable contour and the target shape in the image will yield a low value for this cost function.

Parametric curve representation (continuous case)

v(s) = ( x(s), y(s) ),  0 ≤ s ≤ 1

Fig from Y. Boykov

Parametric curve representation (discrete form)
  • Represent the curve with a set of n points v_i = (x_i, y_i), i = 0, …, n − 1
External energy: intuition
  • Measure how well the curve matches the image data
  • “Attract” the curve toward different image features
    • Edges, lines, etc.
External image energy

How do edges affect the “snap” of a rubber band?

Think of the external energy from the image as a gravitational pull towards areas of high contrast

(Figure: magnitude of gradient, and −(magnitude of gradient))

External image energy
  • Image I(x, y)
  • Gradient images G_x(x, y) and G_y(x, y)
  • External energy at a point v(s) on the curve is

    E_external(v(s)) = −( G_x(v(s))^2 + G_y(v(s))^2 )

  • External energy for the whole curve:

    E_external = ∫ E_external(v(s)) ds
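
As a minimal Matlab/Octave sketch (assuming a grayscale image I and hypothetical curve-point vectors xs, ys), the discrete version of this energy can be computed as:

    % External (image) energy of a discrete snake: negative squared
    % gradient magnitude summed over the curve points.
    I = double(I);                                % grayscale image
    [Gx, Gy] = gradient(I);                       % discrete image gradients
    gradMag2 = Gx.^2 + Gy.^2;                     % squared gradient magnitude
    idx = sub2ind(size(I), round(ys), round(xs)); % rows = y, cols = x
    E_ext = -sum(gradMag2(idx));                  % strong edges => low energy
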
Internal energy: intuition

A priori, we want to favor smooth shapes, contours with low curvature, contours similar to a known shape, etc. to balance what is actually observed (i.e., in the gradient image).

http://www3.imperial.ac.uk/pls/portallive/docs/1/52679.JPG

Internal energy

For a continuous curve, a common internal energy term is the “bending energy”.

At some point v(s) on the curve, this is:

E_internal(v(s)) = α(s) |v'(s)|^2 + β(s) |v''(s)|^2

The more the curve bends, the larger this energy value is.

The weights α and β dictate how much influence each component has: the α (first-derivative) term penalizes elasticity and tension, the β (second-derivative) term penalizes stiffness and curvature.

Internal energy for the whole curve:

E_internal = ∫ E_internal(v(s)) ds

Dealing with missing data
  • The smoothness constraint can deal with missing data:

[Figure from Kass et al. 1987]

Total energy (continuous form)

E_total = ∫ ( α(s) |v'(s)|^2 + β(s) |v''(s)|^2 ) ds − ∫ |∇I(v(s))|^2 ds

The first integral is the bending energy; the second is the total edge strength under the curve.

Discrete energy function: external term

  • If the curve is represented by n points v_i = (x_i, y_i):

    E_external = − Σ_i ( G_x(x_i, y_i)^2 + G_y(x_i, y_i)^2 )

    where G_x and G_y are discrete image gradients, e.g. G_x(x_i, y_i) ≈ I(x_i + 1, y_i) − I(x_i, y_i)
Discrete energy function: internal term

  • Curve is represented by n points v_i = (x_i, y_i):

    E_internal = Σ_i ( α |v_{i+1} − v_i|^2 + β |v_{i+1} − 2 v_i + v_{i−1}|^2 )

    The α term penalizes elasticity (tension); the β term penalizes stiffness (curvature).
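
A small Matlab/Octave sketch of this discrete internal term, assuming a closed curve stored as column vectors xs, ys and scalar weights alpha, beta (all hypothetical names):

    % Discrete internal energy of a closed snake: elastic (first
    % difference) plus bending (second difference) terms.
    xn = circshift(xs, -1);  yn = circshift(ys, -1);   % v_{i+1}
    xp = circshift(xs,  1);  yp = circshift(ys,  1);   % v_{i-1}
    elastic = (xn - xs).^2 + (yn - ys).^2;             % |v_{i+1} - v_i|^2
    bending = (xn - 2*xs + xp).^2 + (yn - 2*ys + yp).^2;
    E_int = sum(alpha * elastic + beta * bending);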

Penalizing elasticity
  • The current elastic energy definition uses a discrete estimate of the derivative, and can be re-written as:

    E_elastic = Σ_i |v_{i+1} − v_i|^2 = Σ_i ( (x_{i+1} − x_i)^2 + (y_{i+1} − y_i)^2 )

Possible problem with this definition?

This encourages a closed curve to shrink to a cluster.

Penalizing elasticity
  • To stop the curve from shrinking to a cluster of points, we can adjust the energy function to be:

    E_elastic = Σ_i ( |v_{i+1} − v_i| − d̄ )^2

    where d̄ is the average distance between adjacent pairs of points, updated at each iteration
  • This encourages chains of equally spaced points.

Function of the weights

The weight controls the penalty for internal elasticity.

(Figure: results for large, medium, and small weights)

Fig from Y. Boykov

Optional: specify shape prior
  • If the object is some smooth variation on a known shape, we can use a term that will penalize deviation from that shape (more about this later):

    E_shape = Σ_i |v_i − v̂_i|^2

    where v̂_i are the points of the known shape.

Fig from Y. Boykov

Summary: elastic snake
  • A simple elastic snake is defined by
    • a set of n points,
    • an internal elastic energy term, and
    • an external edge-based energy term
  • To use this to locate the outline of an object
    • Initialize in the vicinity of the object
    • Modify the points to minimize the total energy

How should the weights in the energy function be chosen?

Energy minimization: greedy

  • For each point, search a window around it and move the point to the position where the energy function is minimal
    • Typical window size, e.g., 5 x 5 pixels
  • Stop when a predefined number of points have not changed in the last iteration, or after a max number of iterations
  • Note
    • Convergence not guaranteed
    • Needs decent initialization
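
A compact Matlab/Octave sketch of this greedy scheme, under the simplifying assumption that the total energy of a candidate curve is returned by snake_energy, a hypothetical helper combining the internal and external terms above (xs, ys, I, maxIters also assumed given):

    % Greedy snake minimization: move each point within a 5x5 window
    % to the position that minimizes the total energy.
    half = 2;                                  % 5x5 window => offsets -2..2
    for iter = 1:maxIters
        moved = false;
        for i = 1:numel(xs)
            bestE = snake_energy(xs, ys, I);
            bx = xs(i); by = ys(i);
            for dx = -half:half
                for dy = -half:half
                    cx = xs; cy = ys;          % candidate curve
                    cx(i) = xs(i) + dx;  cy(i) = ys(i) + dy;
                    E = snake_energy(cx, cy, I);
                    if E < bestE
                        bestE = E; bx = cx(i); by = cy(i); moved = true;
                    end
                end
            end
            xs(i) = bx; ys(i) = by;            % commit best move for point i
        end
        if ~moved, break; end                  % no point changed => stop
    end
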
Deformable contours

Tracking Heart Ventricles

(multiple frames)

Shape

How to describe the shape of the human face?

Objective Formulation
  • Millions of pixels
  • Transform into a few parameters, e.g. man/woman, fat/skinny, etc.
Key Idea

Images are points in a high-dimensional space.

Images in the possible set are highly correlated.

So, compress them to a low-dimensional subspace that captures the key appearance characteristics of the visual DOFs.

Today we will use PCA.

Dimensionality Reduction

The set of faces is a “subspace” of the set of images.

  • Suppose it is K-dimensional
  • We can find the best subspace using PCA (see later)
  • This is like fitting a “hyper-plane” to the set of faces

Any face x is then spanned by the basis vectors:

x ≈ x̄ + w_1 v_1 + w_2 v_2 + … + w_K v_K

Eigenfaces: the idea

Think of a face as being a weighted combination of some “component” or “basis” faces. These basis faces are called eigenfaces.

(Example weights for six basis faces: −8029, 2900, 1751, 1445, 4238, 6193)

Eigenfaces: representing faces

The basis faces can be differently weighted to represent any face.

(Example weights for two faces: −8029, 2900, 1751, 1445, 4238, 6193 and −1183, −2088, −4336, −669, −4221, 10549)

Learning the basis images

Learn a set of basis faces which best represent the differences between the examples

Store each face as a set of weights for those basis faces

Eigenfaces

Eigenfaces look somewhat like generic faces.

Recognition & reconstruction

Store and reconstruct a face from a set of weights (representation / synthesis)

Recognise a new picture of a familiar face

Learning Variations

Use Principal Component Analysis (PCA)

Need to understand

  • What is an eigenvector
  • What is covariance
Principal Component Analysis

A sample of n observations in 2-D space

Goal: to account for the variation in a sample in as few variables as possible, to some accuracy

Subspaces

Imagine that our face is simply a (high dimensional) vector of pixels

We can think more easily about 2d vectors

Here we have data in two dimensions

But we only really need one dimension to represent it

Finding Subspaces

Suppose we take a line through the space

And then take the projection of each point onto that line

This could represent our data in “one” dimension

Finding Subspaces

Some lines will represent the data in this way well, some badly.

This is because the projection onto some lines separates the data well, while the projection onto others results in bad separation.

Finding Subspaces

Rather than a line we can perform roughly the same trick with a vector

Scale the vector to obtain any point on the line

Eigenvectors

An eigenvector of a matrix A is a vector v such that:

A v = λ v

where A is a matrix, v is a (non-zero) vector, and λ is a scalar (called the eigenvalue)

Example

one eigenvector of A is

so for this eigenvector of this matrix the eigenvalue is 4

Matlab: [eigvecs, eigVals] = eigs(C);
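
As a concrete, illustrative choice (not necessarily the matrix from the slide), the symmetric matrix A = [3 1; 1 3] has eigenvector [1; 1] with eigenvalue 4:

    % A*v returns [4; 4] = 4*v, so v is an eigenvector with eigenvalue 4.
    A = [3 1; 1 3];
    v = [1; 1];
    A * v
    [V, D] = eig(A);   % columns of V: eigenvectors; diag(D): eigenvalues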

Facts about Eigenvectors
  • The eigenvectors of a matrix are special vectors (for a given matrix) that are only scaled by the matrix
  • Different matrices have different eigenvectors
  • Only square (but not all) matrices have eigenvectors
  • An N x N matrix has at most N distinct eigenvalues, and at most N linearly independent eigenvectors
  • For a symmetric matrix (such as a covariance matrix), eigenvectors with distinct eigenvalues are orthogonal (i.e. perpendicular)
Covariance

The covariance of two variables is:

cov(x, y) = (1 / (n − 1)) Σ_i (x_i − x̄)(y_i − ȳ)

The diagonal elements of the covariance matrix are the variances, e.g. Var(x_1)

For data X that have been centred around the mean, C = (1 / (n − 1)) XᵀX
Example

Matlab: C = cov(X);
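
A tiny check on hypothetical toy data, showing that cov centres the data and normalises by n − 1:

    % Each row of X is an observation, each column a variable.
    X = [1 2; 2 4; 3 6];                 % toy data
    C = cov(X);                          % built-in covariance
    Xc = X - mean(X);                    % centre around the mean
    C2 = (Xc' * Xc) / (size(X, 1) - 1);  % identical to cov(X)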

Principal Component Analysis

The 1st PC is a minimum-distance fit to a line in space, along the direction of most variance (eigenvalue = variance).

The 2nd PC is a minimum-distance fit to a line in the plane perpendicular to the 1st PC.

PCs are a series of linear least-squares fits to a sample, each orthogonal to all the previous. Combined, they constitute a change of basis.

A point’s position in this new coordinate system is what we earlier referred to as its “weight vector”.
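
A minimal PCA sketch following this recipe (X is a hypothetical n x d data matrix, one observation per row):

    % PCA: eigenvectors of the covariance matrix, sorted by variance.
    Xc = X - mean(X);                         % centre the data
    [V, D] = eig(cov(Xc));                    % eigenvectors / eigenvalues
    [vars, order] = sort(diag(D), 'descend');
    V = V(:, order);                          % PCs, most variance first
    W = Xc * V;                               % weight vector of each point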

Dimensionality Reduction

(Equation from slide: a face is approximated with i = 1, …, K coefficients, where K is much smaller than the number of pixels N·M)

  • We can represent the points with only their v1 coordinates
    • since v2 coordinates are all essentially 0
    • The eigenvalues are directly related to the variance along a particular direction (ignore directions with low eigenvalue)
  • This makes it much cheaper to store and compare points
Eigenfaces
  • PCA extracts the eigenvectors of A
    • Gives a set of vectors v1, v2, v3, ...
    • Each one of these vectors is a direction in face space
Eigenfaces - summary
  • Treat images as points in a high-dimensional space
  • Training:
    • Calculate the covariance matrix of the face images
    • Calculate the eigenvectors of the covariance matrix
    • Eigenfaces with bigger eigenvalues explain more of the variation in the set of faces, i.e. are more distinguishing. Choose a subset of the eigenvectors (e.g. enough to cover 95% of the total variation).
  • These eigenvectors are the eigenfaces or basis faces
Eigenfaces: image space to face space

Generative process: a face is synthesised as the mean face plus a weighted combination of a subset of the eigenvectors.

When we see an image x of a face, we can transform it to face space by projecting onto that subset of eigenvectors:

w_i = v_iᵀ (x − x̄),  i = 1, …, K
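
A sketch of the projection and the corresponding reconstruction, assuming V holds K eigenfaces as columns and mu is the mean face (hypothetical names):

    % Project an image into face space, then reconstruct it.
    x = double(img(:));        % image as a column vector
    w = V' * (x - mu);         % K weights: face-space coordinates
    x_hat = mu + V * w;        % reconstruction from K eigenfaces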

Reconstruction
  • The more eigenfaces you have, the better the reconstruction, but you can have a high-quality reconstruction even with a small number of eigenfaces

(Figure: reconstructions using 82, 70, 50, 30, 20, and 10 eigenfaces)

Recognition in face space

Recognition could be done by calculating the Euclidean distance d between our face and all the other stored faces in face space:

d_j = || w − w_j ||

The closest face in face space is the chosen match
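
A one-line nearest-neighbour match under the same conventions (W is a hypothetical K x m matrix of stored weight vectors, w the query):

    % Nearest stored face in face space (Euclidean distance).
    d = vecnorm(W - w);        % distance to each of the m stored faces
    [dmin, j] = min(d);        % j indexes the chosen match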

Summary
  • Statistical approach to visual recognition
  • Also used for object recognition
  • Reference: M. Turk and A. Pentland (1991). Eigenfaces for recognition, Journal of Cognitive Neuroscience, 3(1): 71–86.
Active Appearance Models
  • Appearance models capture variability of objects
    • In terms of shape (landmark points) and texture (image intensities)
  • Ingredients
    • Statistical analyses of an annotated training set
  • Result
    • A generative model synthesising complete images of objects
  • Registration
    • Adjusting the model to fit an image
  • If this adjustment is done automatically and fast, we have an Active Appearance Model (AAM) [Edwards, Taylor & Cootes, AFGR 1998], [Cootes, Edwards & Taylor, ECCV 1998]
Model building
  • Training set
    • 35 face images annotated with 58 landmarks
  • Active Appearance Model
    • RGB texture model sampled at 30,000 positions
    • 28 model parameters span 95% of the variation

(Figure: training image, annotation, model mesh, shape compensation)

Procrustes analysis

(Figure: pipeline from one shape, through Procrustes analysis and Principal Component Analysis, to the shape parameterisation)

Modelling Shape III
  • The first three shape modes shown with a static texture

Mode 1 – 38%

Mode 2 – 13%

Mode 3 – 9%

Modelling Texture I
  • Grey-level images
  • Colour images
Modelling Texture II
  • The first three texture modes shown with the mean static shape

Mode 1 – 21%

Mode 2 – 10%

Mode 3 – 8%

Model optimisation I
  • AAMs use a simple and efficient iterative scheme
    • Synthesize
    • Calculate difference between synthetic and real image
    • Estimate parameter update
    • Update
    • Iterate until convergence
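
Schematically, the loop looks like the sketch below; R, synthesize and sample_image stand in for the model's precomputed update matrix and warping machinery, and are hypothetical names rather than a specific library API:

    % AAM search sketch: iteratively refine the model parameters p.
    for iter = 1:maxIters
        g_model = synthesize(p);          % synthesise model texture
        g_image = sample_image(I, p);     % image texture under current model
        r = g_image - g_model;            % residual (difference image)
        dp = -R * r;                      % estimate parameter update
        p = p + dp;                       % update
        if norm(dp) < tol, break; end     % iterate until convergence
    end
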
Model optimisation IV
  • We already know the optimal solution (the annotations)
    • Updates are based on differences in a shape-normalised frame (hence “similar” optimisation problems)
    • Limited support
    • Works well for “mild” texture changes
    • Extensions for more severe texture variation have been proposed
Example AAM search

(Figure: unknown face, detected facial features, and reconstructed face)

Applications of face image analysis
  • Biometric security
      • Access systems
  • Lip-reading
      • Assisted speech recognition
      • Automated lip-syncing in cartoons
  • Eye-tracking
      • Human-computer interaction
      • Attention analysis (maps, instruments, road signs, road cues)
  • Virtual characters
      • Hugo!
  • Medical applications
      • Improve understanding of syndromic facial dysmorphologies, e.g. the Noonan syndrome
Eye tracking
  • Down-scaling of eye-tracking systems to consumer hardware, i.e. low-priced web cameras
Suggested reading

“A Brief Introduction to Statistical Shape Analysis”, Mikkel B. Stegmann and David Delgado Gomez. http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=403
Efficient Computation of Eigenvectors

If B is M x N and M << N, then A = BᵀB is N x N, which is much larger than the M x M matrix BBᵀ.

  • M is the number of images, N the number of pixels
  • Use BBᵀ instead: an eigenvector of BBᵀ is easily converted to an eigenvector of BᵀB:

(BBᵀ) y = e y

=> Bᵀ (BBᵀ) y = e (Bᵀ y)

=> (BᵀB)(Bᵀ y) = e (Bᵀ y)

=> Bᵀ y is an eigenvector of BᵀB
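
In Matlab/Octave, with B holding one centred image per row (an M x N layout, assumed here):

    % Eigenfaces via the small M x M eigenproblem.
    S = B * B';                  % M x M instead of N x N
    [Y, D] = eig(S);             % small eigenproblem: S*y = e*y
    V = B' * Y;                  % columns are eigenvectors of B'*B
    V = V ./ vecnorm(V);         % normalise each eigenvector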
