Advanced Computer Vision
Lecture 04
Project Teams
  • Team 1: Project 1 Pedestrian Detection, Version 2 (low-resolution video using a non-stationary camera)
    • Chris Cowdery-Corvan
    • Liangyi Fan
    • Thomas Knack
  • Team 2: Project 1 Pedestrian Detection, Version 2 (low-resolution video using a non-stationary camera)
    • Jerome Marhic
    • Maxime Knibbe
Team 3: Project 2 Scene Analysis
    • Brandon Garlock
    • James Loomis
  • Team 4: Project: TBD
    • Ewan LASSUDRIE
    • Preethi Rao VANTARAM
    • Adrian CORTEZ
    • Thomas BORDO
Team 5: Project 2 Scene Analysis
    • Andrew Stebbins
    • Daniel Jurin
    • Nathaniel Moseley
Caltech Pedestrian Dataset – video

http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/

  • An Experimental Study on Pedestrian Classification

S. Munder and D.M. Gavrila, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 11, November 2006

Representation of a function in terms of sinusoids
  • Ability to reconstruct the function in terms of sines and cosines
  • How good the reconstruction is depends on the number of sines and cosines used
Reconstruction

f(x) is a periodic function with period 2π

f(x) = a0/2 + Σ (an cos(nx) + bn sin(nx)), summing over n = 1 to ∞

where a0, an, bn are the Fourier coefficients.

The functions cos(nx), sin(nx) form an orthonormal set of functions on the space of periodic functions.

The Fourier coefficients are the coordinates of f in that basis.

Coefficients
  • a0 = (1/π) ∫ f(x) dx
  • an = (1/π) ∫ f(x) cos(nx) dx
  • bn = (1/π) ∫ f(x) sin(nx) dx

Integration interval: −π to +π
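
As a concrete illustration, here is a minimal MATLAB sketch of a partial-sum reconstruction, assuming the square wave f(x) = sign(sin(x)) as the example; for this wave the integrals above give a0 = 0, an = 0, and bn = 4/(nπ) for odd n (0 for even n).

    % Partial-sum reconstruction of a square wave on [-pi, pi]
    % For f(x) = sign(sin(x)): a_n = 0, b_n = 4/(n*pi) for odd n, 0 for even n
    x = linspace(-pi, pi, 1000);
    f = sign(sin(x));                    % target square wave
    hold on
    for nTerms = [1 5 25]                % number of odd harmonics kept
        fN = zeros(size(x));
        for n = 1:2:(2*nTerms - 1)       % odd harmonics only
            fN = fN + (4/(n*pi)) * sin(n*x);
        end
        plot(x, fN)                      % reconstruction sharpens as terms are added
    end
    plot(x, f, 'k--')
    hold off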

Square wave reconstruction

http://cnx.org/content/m0041/latest/

Pulse Reconstruction

http://www.math.harvard.edu/archive/21b_fall_03/fourier/index.html


Principal Component Analysis

References:

A tutorial on Principal Component Analysis, Smith, 2002

PCA Principal Component Analysis, www.eng.man.ac.uk/mech

Principal Component Analysis
  • Reduce dimensionality of the data
  • Maximize information retained in data
    • Compact description of data
  • First principal component explains greatest amount of variation in data
    • Second component explains the next greatest amount of variation and is independent of the first component
    • As many components as variables
Rotation of Existing Axes
  • Can view PCA as rotation of original axes to new positions determined by original variables
  • There will be no correlation between new variables defined by rotation
  • First new variable contains the maximum amount of variation – maximum information
  • Second new variable contains the maximum amount of variation not explained by the first variable
  • The second variable is orthogonal to the first
PCA Algorithm
  • Subtract the mean from each dimension,
    • In this case, subtract the mean of the x values from all the individual x values
    • Same for the y mean
  • Results in a data set with mean of zero in each dimension
  • Calculate the covariance matrix

C = cov(data), where data is k x 2 and k is the number of points

  • Calculate eigenvectors and eigenvalues of covariance matrix

[V,D] = eig(X) produces a diagonal matrix D of eigenvalues and a full matrix V whose columns are the corresponding eigenvectors, so that X*V = V*D.
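
Put together, the algorithm is only a few lines; a minimal MATLAB sketch, assuming data is the k x 2 matrix of points described above:

    % PCA on a k x 2 point set, following the steps above
    data_m = data - repmat(mean(data), size(data, 1), 1);  % subtract the mean of each dimension
    C = cov(data_m);                            % 2 x 2 covariance matrix
    [V, D] = eig(C);                            % columns of V are eigenvectors
    [evals, order] = sort(diag(D), 'descend');  % largest eigenvalue first
    V = V(:, order);                            % first column = first principal component
    proj = data_m * V;                          % data expressed in the rotated axes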

Eigenfaces
  • Main Idea:

Represent a face by a linear combination of basis face_images

  • Roughly,

Face = Σ coeffi * face_imagei (summing over i)

Reference: http://www.pages.drexel.edu/~sis26/Eigenface%20Tutorial.htm

Eigenfaces
  • Set S of M faces
  • Transform each image into a vector:

S = { Γ1, Γ2, Γ3,…, ΓM }

  • Find the mean of the image set:

Ψ = (1/M) Σ Γn, summing over n = 1 to M

  • Find the difference between the input image and the mean image:

Φi = Γi - Ψ

Find a set of orthonormal vectors, un, that best describes the distribution of the data
  • un and λn are the eigenvectors (eigenfaces) and eigenvalues of the covariance matrix C
Covariance matrix defined as:

C = AAT = (1/M) Σ ΦnΦnT

A = [ Φ1, Φ2, Φ3, …, ΦM ]  (each column is an input image minus the mean image)

Representing Original Images
  • Each image (minus the mean) in the original set can be represented as a weighted sum of the eigenvectors
  • Φ = Σ wj uj (where uj is an eigenvector and Φ is the image minus the mean)
  • The weights can be calculated by

wj = ujT Φ

Recognition
  • Transform new face to eigenface components
    • Subtract the mean image, Ψ, from the new image and project the difference onto each eigenvector

Γ – Ψ normalizes the unknown image; projecting the normalized image onto the eigenspace gives the weights:

wk = ukT (Γ – Ψ)

W = [ w1, w2, w3, …, wM ]   The unknown image is represented by this weight vector

    • Best face match is found by minimizing the Euclidean distance between new image weight vector and the weight vectors of the images in original database
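
A minimal MATLAB sketch of this matching step, assuming U holds the eigenfaces uk as columns, Psi is the mean image Ψ, W holds the training weight vectors (one column per image), and new_img is the unknown image as a column vector (all names illustrative):

    % Project the unknown image onto the eigenfaces and find the closest match
    Phi = new_img - Psi;                      % Gamma - Psi
    w = U' * Phi;                             % w_k = u_k' * (Gamma - Psi)
    diffs = W - repmat(w, 1, size(W, 2));     % compare against each training weight vector
    dists = sqrt(sum(diffs.^2, 1));           % Euclidean distances
    [d_min, best] = min(dists);               % index of the best-matching training image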
Image Application- Eigenface
  • Consider simplified case
    • Images are 3x3 pixels
    • Four subjects
    • Two images of each subject for training (total of 8 images)
Step 1
  • Convert each image to a column vector

Image1 =

    0.2100  0.2000  0.1800
    0.2200  0.1900  0.2300
    0.1700  0.1900  0.2400

>> I1C = Image1(:)

I1C =

    0.2100
    0.2200
    0.1700
    0.2000
    0.1900
    0.1900
    0.1800
    0.2300
    0.2400
Step 2
  • Concatenate the column vectors for each image to form a 9 x 8 matrix

inputs =

    0.2100  0.2300  0.1500  0.1300  0.3400  0.3300  0.6500  0.6000
    0.2000  0.1800  0.1600  0.1500  0.3000  0.2500  0.4500  0.4800
    0.1800  0.1800  0.1300  0.1400  0.3200  0.2800  0.3600  0.3500
    0.2200  0.2100  0.1700  0.1700  0.2200  0.3100  0.8200  0.8500
    0.1900  0.2000  0.1600  0.1500  0.2800  0.2900  0.5500  0.6000
    0.2300  0.1900  0.1500  0.1900  0.2600  0.2700  0.7500  0.7500
    0.1700  0.2300  0.1700  0.1400  0.2700  0.2600  0.4500  0.4200
    0.1900  0.2200  0.1600  0.1600  0.3200  0.3000  0.3800  0.3900
    0.2400  0.1700  0.1800  0.1800  0.3400  0.2900  0.7200  0.7500

Subject:    1       1       2       2       3       3       4       4

Step 3
  • Calculate the mean of all subjects

>> mean(inputs')'

ans =

    0.3300
    0.2713
    0.2425
    0.3713
    0.3025
    0.3488
    0.2638
    0.2650
    0.3588

Step 4
  • Subtract the mean vector from each column of inputs

data_m =

   -0.1200  -0.1000  -0.1800  -0.2000   0.0100   0.0000   0.3200   0.2700
   -0.0713  -0.0913  -0.1113  -0.1213   0.0287  -0.0213   0.1787   0.2088
   -0.0625  -0.0625  -0.1125  -0.1025   0.0775   0.0375   0.1175   0.1075
   -0.1513  -0.1613  -0.2013  -0.2013  -0.1513  -0.0613   0.4488   0.4788
   -0.1125  -0.1025  -0.1425  -0.1525  -0.0225  -0.0125   0.2475   0.2975
   -0.1187  -0.1587  -0.1987  -0.1587  -0.0887  -0.0787   0.4013   0.4013
   -0.0938  -0.0338  -0.0938  -0.1238   0.0062  -0.0038   0.1862   0.1563
   -0.0750  -0.0450  -0.1050  -0.1050   0.0550   0.0350   0.1150   0.1250
   -0.1188  -0.1888  -0.1788  -0.1788  -0.0187  -0.0688   0.3613   0.3912

Step 5 – Cov Matrix

C = data_m' * data_m

Note: with only M = 8 training images, the 8 x 8 matrix ATA is used in place of the 9 x 9 covariance matrix AAT; the two share their nonzero eigenvalues, and eigenfaces in image space are recovered as A times the eigenvectors of ATA (Turk & Pentland).

C =

    0.1015   0.1061   0.1445   0.1462   0.0254   0.0251  -0.2708  -0.2780
    0.1061   0.1227   0.1554   0.1534   0.0332   0.0348  -0.2967  -0.3089
    0.1445   0.1554   0.2095   0.2094   0.0346   0.0369  -0.3901  -0.4001
    0.1462   0.1534   0.2094   0.2125   0.0313   0.0345  -0.3892  -0.3981
    0.0254   0.0332   0.0346   0.0313   0.0416   0.0220  -0.0909  -0.0972
    0.0251   0.0348   0.0369   0.0345   0.0220   0.0179  -0.0831  -0.0882
   -0.2708  -0.2967  -0.3901  -0.3892  -0.0909  -0.0831   0.7502   0.7706
   -0.2780  -0.3089  -0.4001  -0.3981  -0.0972  -0.0882   0.7706   0.7999

Step 6
  • Calculate the eigenvectors and eigenvalues of C

eigenval =

    2.1884
    0.0528
    0.0091
    0.0033
    0.0015
    0.0005
    0.0002
    0.0000

eigenvect =

   -0.2122  -0.1874   0.2788   0.2390  -0.2574   0.3749   0.6732  -0.3536
   -0.2329   0.0231  -0.6474  -0.0239   0.2157  -0.4681   0.3672  -0.3536
   -0.3056  -0.2854   0.0193  -0.0950   0.6379   0.4210  -0.3263  -0.3536
   -0.3050  -0.3843   0.2625   0.0603  -0.4085  -0.4761  -0.4100  -0.3536
   -0.0682   0.7470   0.4420   0.1482   0.2301  -0.2021  -0.0344  -0.3536
   -0.0638   0.3626  -0.3513  -0.3818  -0.4997   0.4038  -0.2397  -0.3536
    0.5846  -0.0758  -0.2417   0.6433  -0.0200   0.0977  -0.2128  -0.3536
    0.6031  -0.1998   0.2378  -0.5900   0.1019  -0.1511   0.1829  -0.3536

The eigenvectors are the eigenfaces
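
The six steps condense to a few lines; a MATLAB sketch, assuming inputs is the 9 x 8 matrix from Step 2 (recall Step 5 works with the small ATA matrix, so eigenvectors must be mapped back to image space):

    % Steps 3-6 in one pass
    A = inputs - repmat(mean(inputs, 2), 1, size(inputs, 2));  % subtract mean image from each column
    C = A' * A;                                 % 8 x 8 surrogate for the covariance matrix
    [V, D] = eig(C);                            % Step 6
    [evals, order] = sort(diag(D), 'descend');
    V = V(:, order);
    eigenfaces = A * V(:, 1:end-1);             % back to 9-pixel image space; drop the ~zero eigenvalue
    eigenfaces = eigenfaces ./ repmat(sqrt(sum(eigenfaces.^2, 1)), size(A, 1), 1);  % unit-length columns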

Demo - EigenCats

(25x25 pixels)

Eigencats

Reshape eigencats to 25x25 ‘images’

Use the eigencats as a basis set to reconstruct an unknown test image
  • Images that match images in the training set have a small reconstruction error.
  • Images that do not match an image in the training set have a large reconstruction error and are not matched.
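
A MATLAB sketch of that test, assuming U holds the eigencat basis vectors as columns (625 x M for the 25 x 25 images), Psi is the mean image as a column vector, and threshold is an assumed tuning parameter:

    % Reconstruction error of a test image against the eigencat basis
    Phi = double(test_img(:)) - Psi;    % mean-subtracted test image
    w = U' * Phi;                       % weights in eigencat space
    Phi_hat = U * w;                    % reconstruction from the basis
    err = norm(Phi - Phi_hat);          % small if the image lies near the training subspace
    is_match = err < threshold;         % threshold chosen from training-set errors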
Cats12.jpg Not in database

Poor reconstruction

Dog31.jpg Not in database

Poor reconstruction

Dog71.jpg Not in database

Poor reconstruction

READ
  • Eigenfaces for recognition – Turk & Pentland
  • Face Recognition using Eigenfaces – Turk & Pentland
  • Eigenface - wikipedia
Self Organizing Maps
  • The following slides are taken/modified from Renee Baltimore’s MS defense at RIT
Self Organizing Maps: Network Architecture

[Figure: Kohonen Map Network Architecture – input neuron i connects through weighted connection wij to cortex node j; node k is shown with its neighborhood R = 1]

http://www.ai-junkie.com/ann/som/som1.html

Network Architecture (cont.)

2 Layers: Input layer and 2D cortex of nodes

Each cortex node maintains a position in the map

Each node is associated with a weight vector equal in size to the input data

Each node is fully connected to the input layer

No connections between nodes in the output layer

The neighborhood of a node is defined as all nodes within a specified radius R = 1, 2, 3…

Self-Organizing Maps: Algorithm

Weight vectors are randomly initialized

  • Each data instance is presented to the network
  • The distance between that instance and each node’s weight vector is calculated as the squared Euclidean distance:

d = Σ (vi − wi)², summing over i = 1 to n, where v is the input data and w is the weight vector

  • The node with the closest weight vector (minimum d) is chosen as the winner
  • Weights of all nodes within a defined neighborhood of the winner are updated.
Self-Organizing Maps: Algorithm (cont)
  • Weights are updated according to the equation:
    • w(t+1) = w(t) + θ(v, t) α(t) (v(t) − w(t))
    • Where w is the weight, t is the iteration, v is the input vector, θ is the influence of distance from the winner (decreases with increasing distance), and α is the learning rate.
  • This is done over a number of iterations
  • The radius of the neighborhood is decreased at each iteration according to exponential decay:
    • rad(t) = rad0 exp(−t/λ), where rad0 is the initial radius and λ is a decay constant
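
A MATLAB sketch of a single training iteration, assuming a rectangular topology, a rows x cols x n weight array W, an input vector v of length n, and assumed names rad0, alpha0 and lambda for the decay parameters:

    % One SOM iteration: find the winning node, then update its neighborhood
    [rows, cols, n] = size(W);
    v3 = reshape(v, 1, 1, n);
    d = sum((W - repmat(v3, rows, cols)).^2, 3);   % squared distance at every node
    [~, idx] = min(d(:));
    [wr, wc] = ind2sub([rows, cols], idx);         % winner position
    rad = rad0 * exp(-t / lambda);                 % shrinking neighborhood radius
    alpha_t = alpha0 * exp(-t / lambda);           % decaying learning rate
    for r = 1:rows
        for c = 1:cols
            dist2 = (r - wr)^2 + (c - wc)^2;
            if dist2 <= rad^2                      % inside the neighborhood
                theta = exp(-dist2 / (2 * rad^2)); % influence falls off with distance
                W(r, c, :) = W(r, c, :) + theta * alpha_t * (v3 - W(r, c, :));
            end
        end
    end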
Self-Organizing Maps: Visualization

SOM trained to cluster colors [30]

http://www.ai-junkie.com/ann/som/som1.html

Network Topologies
  • Variation of connections along opposite edges of the SOM
  • Allows for growth of the neighborhood
  • Topologies: rectangle, cylinder, Möbius strip, torus, Klein bottle; all decompose to a rectangle

Rectangle Cylinder Mobius Strip Torus Klein Bottle

Mobius Strip

http://www.scifun.ed.ac.uk

Torus

http://cis.jhu.edu/education/introPatternTheory


Klein Bottle – a one-sided surface like the Möbius band; it is closed, has no boundary, and has neither an enclosed interior nor an exterior.

Implementation
  • Self-organizing map algorithm
  • SOM algorithm for 5 topological variations
  • Analysis of SOM varying:
    • Data
    • Cortex sizes
    • Network topology
Data

Cohn-Kanade facial expressions database

AT&T Database of faces

Georgia Tech Face Database

Figure-Ground Dataset of Natural Images

Oliva & Torralba dataset of urban and natural scenes

Web gathered Images

Experiments and Results
  • High contrast patches
  • Natural and Outdoor scenes
  • Face images
High Contrast Patches
  • 5000 random 3x3 patches extracted from each of 100 natural images (Figure-Ground Dataset)
  • Top 20% having the highest contrast retained
  • Retained patches normalized to zero mean and unit variance
  • SOM trained on 200 of these patches
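
A MATLAB sketch of this extraction, assuming img is one grayscale image scaled to [0, 1] and using the patch standard deviation as the contrast measure (an assumption):

    % Sample 3x3 patches, keep the top 20% by contrast, normalize
    num_patches = 5000;
    [h, w] = size(img);
    P = zeros(num_patches, 9);
    for i = 1:num_patches
        r = randi(h - 2);  c = randi(w - 2);         % random top-left corner
        p = img(r:r+2, c:c+2);
        P(i, :) = p(:)';
    end
    contrast = std(P, 0, 2);                         % per-patch contrast
    [~, order] = sort(contrast, 'descend');
    keep = P(order(1:round(0.2 * num_patches)), :);  % top 20%
    keep = keep - repmat(mean(keep, 2), 1, 9);       % zero mean per patch
    keep = keep ./ repmat(std(keep, 0, 2), 1, 9);    % unit variance per patch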
High Contrast Patches

Inspection of Patches on rectangular SOM

Natural and Outdoor Scenes

Images from Oliva and Torralba database

Images categorized: coast, forest, highway, city, mountain, open country, street, tall building

One image chosen from each category

Images converted to grayscale, resized from 256 x 256 to 128 x 128, and normalized

Images split up into 8 x 8 patches and patches of low contrast discarded

SOM trained on patches
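
A MATLAB sketch of this preprocessing (rgb2gray/imresize/im2double are Image Processing Toolbox functions; the contrast threshold is an assumption):

    % Grayscale, resize to 128 x 128, cut into 8 x 8 patches, drop low contrast
    img = im2double(imresize(rgb2gray(imread(filename)), [128 128]));
    patches = [];
    for r = 1:8:121                          % non-overlapping 8 x 8 blocks
        for c = 1:8:121
            p = img(r:r+7, c:c+7);
            if std(p(:)) > 0.05              % assumed contrast threshold
                patches = [patches; p(:)'];  % one row per retained patch
            end
        end
    end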

Natural and Outdoor Scenes

[Figure: two rows of training images labeled coast, forest, highway, city]

Training Images

Natural and Outdoor Scenes

Resulting SOM (Klein Bottle Topology)

Natural and Outdoor Scenes

[Figure: false-color class labels (R, G, B, K) across the SOM nodes for each topology – Rectangle, Cylinder, Mobius, Torus and Klein Bottle]

False coloring of images:

Natural and Outdoor Scenes

SOM with corresponding Klein Bottle color quadrants

Natural and Outdoor Scenes

[Figure: SOM color quadrants – RED, BLACK, BLUE, GREEN]

False Coloring of Training and Test Images

[Figure: false coloring of forest, city, open country and tall building images]

Face Images

Images from Cohn-Kanade Facial Expressions Database, AT&T Database of faces and Georgia Tech Face Database

Images converted to grayscale, cropped and resized to 25 x 25

Normalization within each image and across the dataset.

Faces of different facial expressions/poses: smiling, frowning, surprised, left profile, right profile

1 instance for each of 45 subjects chosen for training

Face Images
  • Training done on cortex sizes: 7 x 7, 15 x 15, 22 x 22, 30 x 30, 45 x 45 and 60 x 60

Face Images Used in Training

Face Images

22 x 22 SOM in Torus Topology

Face Images

[Figure: SOMs trained at cortex sizes 7 x 7, 15 x 15, 22 x 22, 30 x 30, 45 x 45 and 60 x 60]

New faces tested for maximum response nodes

Face Images

Testing the same subjects with different expressions; maximum response node coordinates:

A-sad (12, 19)   A-happy (11, 6)
B-right profile (16, 17)   B-left profile (19, 21)
C-surprised (9, 22)   C-happy (11, 4)

Face Images

Tests done on non-face images:

ad