Advanced computer vision

Project teams
Project Teams

  • Team 1: Project 1 – Pedestrian Detection, Version 2 (low-resolution video using a non-stationary camera)

    • Chris Cowdery-Corvan

    • Liangyi Fan

    • Thomas Knack

  • Team 2: Project 1 – Pedestrian Detection, Version 2 (low-resolution video using a non-stationary camera)

    • Jerome Marhic

    • Maxime Knibbe


Advanced computer vision

  • Caltech Pedestrian Dataset- video

    http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/

  • An Experimental Study on Pedestrian Classification

    S. Munder and D.M. Gavrila

    IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 11, November 2006



Advanced computer vision

  • Representation of a function in terms of sinusoids

  • Ability to reconstruct the function in terms of sines and cosines

  • How good the reconstruction is depends on the number of sines and cosines used


Reconstruction

f(x) is a periodic function with period 2π

f(x) = a0/2 + Σ [ an cos(nx) + bn sin(nx) ],  summation over n = 1 to ∞

where a0, an, bn are the Fourier coefficients

The functions cos(nx), sin(nx) form an orthonormal set of functions on the space of periodic functions.

The Fourier coefficients are the coordinates of f in that basis.


Coefficients

  • a0 = (1/π) ∫ f(x) dx

  • an = (1/π) ∫ f(x) cos(nx) dx

  • bn = (1/π) ∫ f(x) sin(nx) dx

    integration interval: −π to +π


Square Wave Reconstruction

http://cnx.org/content/m0041/latest/
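A rough MATLAB/Octave sketch of this idea (not from the slides): for the square wave f(x) = sign(sin x), the coefficients work out to a0 = an = 0 and bn = 4/(nπ) for odd n, so keeping more harmonics in the partial sum gives a visibly better reconstruction.

x = linspace(-pi, pi, 1000);
f = sign(sin(x));                       % the square wave being reconstructed
plot(x, f, 'k'); hold on;
for N = [1 5 25]                        % highest harmonic kept in the partial sum
    approx = zeros(size(x));
    for n = 1:2:N                       % only odd harmonics have nonzero bn
        approx = approx + (4/(n*pi)) * sin(n*x);
    end
    plot(x, approx);
end
hold off;
legend('square wave', 'N = 1', 'N = 5', 'N = 25');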


Pulse reconstruction
Pulse Reconstruction

http://www.math.harvard.edu/archive/21b_fall_03/fourier/index.html


Principal component analysis

Principal Component Analysis

References:

A tutorial on Principal Component Analysis, Smith, 2002

PCA Principal Component Analysis, www.eng.man.ac.uk/mech


Principal Component Analysis

  • Reduce dimensionality of the data

  • Maximize information retained in data

    • Compact description of data

  • First principal component explains greatest amount of variation in data

    • Second component explains the next greatest amount of variation and is independent of the first component

    • As many components as variables


Rotation of existing axes
Rotation of Existing Axes

  • Can view PCA as rotation of original axes to new positions determined by original variables

  • There will be no correlation between new variables defined by rotation

  • First new variable contains the maximum amount of variation – maximum information

  • Second new variable contains the maximum amount of variation not explained by the first variable

  • The second variable is orthogonal to the first



Pca algorithm
PCA Algorithm

  • Subtract the mean from each dimension,

    • In this case, subtract the mean of the x values from all the individual x values

    • Same for the y mean

  • Results in a data set with mean of zero in each dimension

  • Calculate the covariance matrix

    C = cov(data), where data is k x 2 and k is the number of points

  • Calculate eigenvectors and eigenvalues of covariance matrix

    [V,D] = eig(X) produces a diagonal matrix D of eigenvalues and a full matrix V whose columns are the corresponding eigenvectors, so that X*V = V*D.
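A short MATLAB sketch of these steps on hypothetical 2-D data (the data and variable names are illustrative, not from the slides):

data = randn(50, 2) * [2 0.8; 0 0.5];                      % k x 2 example points, one row per point
data_m = data - repmat(mean(data, 1), size(data, 1), 1);   % subtract the mean of each dimension
C = cov(data_m);                                           % 2 x 2 covariance matrix
[V, D] = eig(C);                                           % eigenvectors (columns of V) and eigenvalues (diag of D)
[~, order] = sort(diag(D), 'descend');                     % first principal component = largest eigenvalue
V = V(:, order);
pc1 = data_m * V(:, 1);                                    % 1-D projection onto the first principal component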




Reduce to One Dimension: Retain Maximum Information


Eigenfaces

  • Main Idea:

    Represent a face by a linear combination of basis face_images

  • Roughly,

    Face = Σi coeffi · face_imagei

    Reference: http://www.pages.drexel.edu/~sis26/Eigenface%20Tutorial.htm


Eigenfaces

  • Set S of M faces

  • Transform images into a vector:

    S = { Γ1, Γ2, Γ3,…, ΓM }

  • Find the mean of the image set:

    Ψ = (1/M) ΣΓn for n=1 to M

  • Find the difference between the input image and the mean image:

    Φi = Γi - Ψ



Representing original images
Representing Original Images

  • Each image (minus the mean) in the original set can be represented by a weighted sum of the eigenvectors

  • Φi ≈ Σj wj uj (where uj is the j-th eigenvector and Φi is image i minus the mean)

  • The weights can be calculated by

    wj = ujT Φi


Recognition

  • Transform new face to eigenface components

    • Subtract the mean image Ψ from the new image Γ to normalize it, then project the normalized image onto the eigenspace to find the weights:

      wk = ukT (Γ – Ψ)

      W = [ w1, w2, w3, …, wM ]  The unknown image is represented by this weight vector

    • Best face match is found by minimizing the Euclidean distance between new image weight vector and the weight vectors of the images in original database
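A minimal MATLAB sketch of this matching step with random stand-in data (U, Psi, W_db and the sizes are placeholders, not the course images):

N = 9; M = 4; K = 8;                            % pixels, eigenfaces, database images
U     = orth(randn(N, M));                      % stand-in eigenfaces (orthonormal columns)
Psi   = rand(N, 1);                             % stand-in mean image
W_db  = randn(M, K);                            % stand-in weight vectors of the database faces
Gamma = rand(N, 1);                             % the unknown face as a column vector
w = U' * (Gamma - Psi);                         % wk = uk' * (Gamma - Psi)
d = sqrt(sum((W_db - repmat(w, 1, K)).^2, 1));  % Euclidean distance to each stored weight vector
[~, best] = min(d);                             % best match = minimum distance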


Image application eigenface
Image Application- Eigenface

  • Consider simplified case

    • Images are 3x3 pixels

    • Four subjects

    • Two images of each subject for training (total of 8 images)


Step 1
Step 1:

Image1 =

0.2100 0.2000 0.1800

0.2200 0.1900 0.2300

0.1700 0.1900 0.2400

>> I1C = Image1(:)

I1C =

0.2100

0.2200

0.1700

0.2000

0.1900

0.1900

0.1800

0.2300

0.2400

  • Convert Images to Column Vectors


Step 2
Step 2

  • Concatenate the column vectors for each image to form 9x8 matrix

inputs =

0.2100 0.2300 0.1500 0.1300 0.3400 0.3300 0.6500 0.6000

0.2000 0.1800 0.1600 0.1500 0.3000 0.2500 0.4500 0.4800

0.1800 0.1800 0.1300 0.1400 0.3200 0.2800 0.3600 0.3500

0.2200 0.2100 0.1700 0.1700 0.2200 0.3100 0.8200 0.8500

0.1900 0.2000 0.1600 0.1500 0.2800 0.2900 0.5500 0.6000

0.2300 0.1900 0.1500 0.1900 0.2600 0.2700 0.7500 0.7500

0.1700 0.2300 0.1700 0.1400 0.2700 0.2600 0.4500 0.4200

0.1900 0.2200 0.1600 0.1600 0.3200 0.3000 0.3800 0.3900

0.2400 0.1700 0.1800 0.1800 0.3400 0.2900 0.7200 0.7500

(columns correspond to subjects 1 1 2 2 3 3 4 4)


Step 3
Step 3

  • Calculate mean of all subjects

mean(inputs')'

=

0.3300

0.2713

0.2425

0.3713

0.3025

0.3488

0.2638

0.2650

0.3588


Step 4
Step 4

  • Subtract mean vector from each column of inputs

data_m =

-0.1200 -0.1000 -0.1800 -0.2000 0.0100 0.0000 0.3200 0.2700

-0.0713 -0.0913 -0.1113 -0.1213 0.0287 -0.0213 0.1787 0.2088

-0.0625 -0.0625 -0.1125 -0.1025 0.0775 0.0375 0.1175 0.1075

-0.1513 -0.1613 -0.2013 -0.2013 -0.1513 -0.0613 0.4488 0.4788

-0.1125 -0.1025 -0.1425 -0.1525 -0.0225 -0.0125 0.2475 0.2975

-0.1187 -0.1587 -0.1987 -0.1587 -0.0887 -0.0787 0.4013 0.4013

-0.0938 -0.0338 -0.0938 -0.1238 0.0062 -0.0038 0.1862 0.1563

-0.0750 -0.0450 -0.1050 -0.1050 0.0550 0.0350 0.1150 0.1250

-0.1188 -0.1888 -0.1788 -0.1788 -0.0187 -0.0688 0.3613 0.3912


Step 5 cov matrix
Step 5 – Cov Matrix

C = data_m' * data_m

C =

0.1015 0.1061 0.1445 0.1462 0.0254 0.0251 -0.2708 -0.2780

0.1061 0.1227 0.1554 0.1534 0.0332 0.0348 -0.2967 -0.3089

0.1445 0.1554 0.2095 0.2094 0.0346 0.0369 -0.3901 -0.4001

0.1462 0.1534 0.2094 0.2125 0.0313 0.0345 -0.3892 -0.3981

0.0254 0.0332 0.0346 0.0313 0.0416 0.0220 -0.0909 -0.0972

0.0251 0.0348 0.0369 0.0345 0.0220 0.0179 -0.0831 -0.0882

-0.2708 -0.2967 -0.3901 -0.3892 -0.0909 -0.0831 0.7502 0.7706

-0.2780 -0.3089 -0.4001 -0.3981 -0.0972 -0.0882 0.7706 0.7999


Step 6
Step 6

  • Calculate eigenvectors and eigenvalues of C

eigenval =

2.1884

0.0528

0.0091

0.0033

0.0015

0.0005

0.0002

0.0000

eigenvect =

-0.2122 -0.1874 0.2788 0.2390 -0.2574 0.3749 0.6732 -0.3536

-0.2329 0.0231 -0.6474 -0.0239 0.2157 -0.4681 0.3672 -0.3536

-0.3056 -0.2854 0.0193 -0.0950 0.6379 0.4210 -0.3263 -0.3536

-0.3050 -0.3843 0.2625 0.0603 -0.4085 -0.4761 -0.4100 -0.3536

-0.0682 0.7470 0.4420 0.1482 0.2301 -0.2021 -0.0344 -0.3536

-0.0638 0.3626 -0.3513 -0.3818 -0.4997 0.4038 -0.2397 -0.3536

0.5846 -0.0758 -0.2417 0.6433 -0.0200 0.0977 -0.2128 -0.3536

0.6031 -0.1998 0.2378 -0.5900 0.1019 -0.1511 0.1829 -0.3536

The eigenvectors of C, mapped back through the data matrix and normalized, are the eigenfaces
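One assumed reading of this step, in the spirit of Turk & Pentland: because C = data_m' * data_m is only 8 x 8, its eigenvectors index the training images rather than pixels, so they are mapped back through data_m (and normalized) to obtain the pixel-space eigenfaces. Continuing from the data_m and C computed above:

[V, D] = eig(C);
[~, order] = sort(diag(D), 'descend');              % strongest components first
V = V(:, order);
V = V(:, 1:end-1);                                  % drop the zero-eigenvalue direction left by mean subtraction
U = data_m * V;                                     % 9 x 7: each column is an eigenface
U = U ./ repmat(sqrt(sum(U.^2, 1)), size(U, 1), 1); % unit-length eigenfaces
face1 = reshape(U(:, 1), 3, 3);                     % view the first eigenface as a 3 x 3 'image'
weights = U' * data_m;                              % weights of each training image in the eigenface basis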


Demo eigencats
Demo - EigenCats

(25x25 pixels)


Eigencats

Reshape eigencats to 25x25 ‘images’



Cats12.jpg – not in the database

Poor reconstruction


Dog31.jpg – not in the database

Poor reconstruction


Dog71.jpg – not in the database

Poor reconstruction


Advanced computer vision
Read:

  • Eigenfaces for recognition – Turk & Pentland

  • Face Recognition using Eigenfaces – Turk & Pentland

  • Eigenface - wikipedia


Self-Organizing Maps

  • The following slides are taken/modified from Renee Baltimore’s MS defense at RIT


Self-Organizing Maps: Network Architecture

[Figure: Kohonen map network architecture – each input neuron i connects to every cortex node j through a weighted connection wij; a neighborhood of radius R = 1 is shown around node k]

http://www.ai-junkie.com/ann/som/som1.html


Network Architecture (cont.)

2 Layers: Input layer and 2D cortex of nodes

Each cortex node maintains a position in the map

Each node is associated with a weight vector equal in size to the input data

Each node is fully connected to the input layer

No connections between nodes in the output layer

The neighborhood of a node is defined as all nodes within a specified radius R = 1, 2, 3, …


Advanced computer vision

  • Weight vectors are randomly initialized

  • Each data instance is presented to the network

  • The distance between that instance and each node’s weight vector is calculated via Euclidean distance:

    d = Σ (vi − wi)², summed over i = 1 to n, where v is the input data and w is the node's weight vector

  • The node with the closest weight vector (minimum d) is chosen as the winner

  • Weights of all nodes within a defined neighborhood of the winner are updated.


Self-Organizing Maps: Algorithm (cont.)

  • Weights are updated according to the equation:

    • w(t+1) = w(t) + θ(v, t) α(t) (v(t) − w(t))

    • Where w is the weight, t is iteration, v is the input vector, θ is the influence of distance from winner (decreases with increase in distance), and α is the learning rate.

  • This is done over a number of iterations

  • Radius of neighborhood decreased at each iteration according to exponential decay:

    • rad(t) = rad0 · exp(−t/λ), where rad0 is the initial radius and λ is a decay constant
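A minimal MATLAB sketch of the whole training loop under these update rules, on a rectangular map trained to cluster random colors (as in the demo that follows). The Gaussian neighborhood function and all parameter values here are assumptions, not the thesis code:

gridW = 10; gridH = 10; nIter = 2000;
W = rand(gridH, gridW, 3);                        % randomly initialized weight vectors
[gx, gy] = meshgrid(1:gridW, 1:gridH);            % node positions on the map
rad0 = max(gridW, gridH) / 2;                     % initial neighborhood radius
lambda = nIter / log(rad0);                       % decay constant for rad(t) = rad0*exp(-t/lambda)
alpha0 = 0.1;                                     % initial learning rate
data = rand(500, 3);                              % training vectors (RGB colors)
for t = 1:nIter
    v = data(randi(size(data, 1)), :);            % present one data instance
    d = sum((W - repmat(reshape(v, 1, 1, 3), gridH, gridW)).^2, 3);
    [~, idx] = min(d(:));                         % winner = node with closest weight vector
    [wy, wx] = ind2sub([gridH, gridW], idx);
    rad   = rad0 * exp(-t / lambda);              % shrinking neighborhood radius
    alpha = alpha0 * exp(-t / nIter);             % decaying learning rate
    dist2 = (gx - wx).^2 + (gy - wy).^2;          % squared map distance to the winner
    theta = exp(-dist2 / (2 * rad^2));            % influence falls off with distance from the winner
    for c = 1:3                                   % w(t+1) = w(t) + theta*alpha*(v(t) - w(t))
        W(:,:,c) = W(:,:,c) + theta .* alpha .* (v(c) - W(:,:,c));
    end
end
imagesc(W);                                       % visualize the trained map as colors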


Self-Organizing Maps: Visualization

SOM trained to cluster colors [30]

http://www.ai-junkie.com/ann/som/som1.html


Network Topologies

  • Variation of connections along opposing edges of the SOM

  • Allows for growth of neighborhood

  • Topologies: rectangle, cylinder, Möbius strip, torus, Klein bottle – all decompose to a rectangle

    [Figure panels: Rectangle, Cylinder, Möbius Strip, Torus, Klein Bottle]


Möbius Strip

http://www.scifun.ed.ac.uk


Torus

http://cis.jhu.edu/education/introPatternTheory


Advanced computer vision

Klein Bottle – a one-sided surface like the Möbius band. It is closed, has no boundary, and has neither an enclosed interior nor an exterior.


Implementation

  • Self-organizing map algorithm

  • SOM algorithm for 5 topological variations

  • Analysis of SOM varying:

    • Data

    • Cortex sizes

    • Network topology


Advanced computer vision
Data

Cohn-Kanade facial expressions database

AT&T Database of faces

Georgia Tech Face Database

Figure-Ground Dataset of Natural Images

Oliva & Torralba dataset of urban and natural scenes

Web-gathered images


Experiments and Results

  • High contrast patches

  • Natural and Outdoor scenes

  • Face images


High Contrast Patches

  • 5000 random 3x3 patches extracted from each of 100 natural images (Figure-Ground Dataset)

  • The 20% of patches with the highest contrast were retained

  • Retained patches normalized to zero mean and unit variance

  • SOM trained on 200 of these patches
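A rough MATLAB sketch of this extraction step for a single image (the stand-in image and the use of variance as the contrast measure are assumptions):

img = rand(256);                                     % stand-in grayscale natural image in [0,1]
nPatches = 5000; p = 3;
patches = zeros(nPatches, p*p);
for k = 1:nPatches
    r = randi(size(img, 1) - p + 1);                 % random top-left corner
    c = randi(size(img, 2) - p + 1);
    patch = img(r:r+p-1, c:c+p-1);
    patches(k, :) = patch(:)';
end
contrast = var(patches, 0, 2);                       % per-patch contrast measure
[~, order] = sort(contrast, 'descend');
keep = patches(order(1:round(0.2 * nPatches)), :);   % retain the top 20% by contrast
keep = keep - repmat(mean(keep, 2), 1, p*p);         % zero mean per patch
keep = keep ./ repmat(std(keep, 0, 2), 1, p*p);      % unit variance per patch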


High Contrast Patches

Inspection of Patches on rectangular SOM


Natural and Outdoor Scenes

Images from Oliva and Torralba database

Images categorized: coast, forest, highway, city, mountain, open country, street, tall building

One image chosen from each category

Images converted to grayscale, resized from 256 x 256 to 128 x 128, and normalized

Images split up into 8 x 8 patches and patches of low contrast discarded

SOM trained on patches


Natural and Outdoor Scenes

[Training images: coast, forest, highway, city]


Natural and Outdoor Scenes

Resulting SOM (Klein Bottle Topology)


Natural and Outdoor Scenes

False coloring of images:

[Figure: false-colored images for the Rectangle, Cylinder, Möbius, Torus, and Klein Bottle topologies, with regions labeled R, G, B, or K]


Natural and Outdoor Scenes

SOM with corresponding Klein Bottle color quadrants


Natural and Outdoor Scenes

[Figure: SOM quadrants labeled RED, BLACK, BLUE, GREEN]


False Coloring of Training and Test Images

[Panels: coast, highway, mountain, street]


False Coloring of Training and Test Images

[Panels: forest, city, open country, tall building]


Face Images

Images from Cohn-Kanade Facial Expressions Database, AT&T Database of faces and Georgia Tech Face Database

Images converted to grayscale, cropped and resized to 25 x 25

Normalization within each image and across the dataset.

Faces of different facial expressions/poses: smiling, frowning, surprised, left profile, right profile

1 instance for each of 45 subjects chosen for training
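A rough MATLAB sketch of this preprocessing (the folder name, the cropping step, and the exact normalization scheme are assumptions):

files = dir('faces/*.jpg');                          % hypothetical folder of face images
faces = zeros(25*25, numel(files));
for k = 1:numel(files)
    I = imread(fullfile('faces', files(k).name));
    if size(I, 3) == 3, I = rgb2gray(I); end         % convert to grayscale
    I = imresize(im2double(I), [25 25]);             % (cropping omitted) resize to 25 x 25
    I = (I - mean(I(:))) / std(I(:));                % normalize within each image
    faces(:, k) = I(:);
end
faces = (faces - mean(faces(:))) / std(faces(:));    % normalize across the dataset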


Face Images

  • Training done on cortex sizes: 7 x 7, 15 x 15, 22 x 22, 30 x 30, 45 x 45 and 60 x 60

    Face images used in training


Face Images

22 x 22 SOM in Torus Topology


Face Images

[Panels: SOMs of cortex size 7 x 7, 15 x 15, 22 x 22, 30 x 30, 45 x 45, and 60 x 60]

New faces tested for maximum-response nodes


Face Images

Testing the same subjects with different expressions – maximum-response node locations:

A-sad (12, 19); A-happy (11, 6); B-right profile (16, 17); B-left profile (19, 21); C-surprised (9, 22); C-happy (11, 4)


Face Images

Tests done on non-face images: