Principal Components and Factor Analysis

Dr. Adel Elomri

adel.elomri@qu.edu.qa

Data Analysis and Presentation
  • We have too many observations and dimensions
    • To reason about or obtain insights from
    • To visualize
    • To perform classification, clustering, or pattern recognition
Data Analysis and Presentation
  • How many unique “sub-sets” are in the sample?
  • How are they similar / different?
  • What are the underlying factors that influence the samples?
  • Which time / temporal trends are (anti)correlated?
  • Which measurements are needed to differentiate?
  • How to best present what is “interesting”?
  • To which “sub-set” does this new sample rightfully belong?
Data Presentation

  • Univariate
  • Bivariate
  • Trivariate
  • What if we have 5, or 8, or 10 dimensions?

Data Analysis and Presentation

How to comment on this data?

Source: L’argus 2004, France


Data Presentation: The goal

  • We have too many observations and dimensions
    • To reason about or obtain insights from
    • To visualize
    • Too much noise in the data
    • Need to “reduce” them to a smaller set of factors
    • Better representation of data without losing much information
    • Can build more effective data analyses on the reduced-dimensional space: classification, clustering, pattern recognition

Principal Components Analysis (PCA)

  • Discover a new set of factors/dimensions/axes against which to represent, describe or evaluate the data
    • For more effective reasoning, insights, or better visualization
    • Reduce noise in the data
    • Typically a smaller set of factors: dimension reduction
    • Better representation of data without losing much information
    • Can build more effective data analyses on the reduced-dimensional space: classification, clustering, pattern recognition
  • Factors are combinations of observed variables
    • May be more effective bases for insights, even if physical meaning is obscure
    • Observed data are described in terms of these factors rather than in terms of original variables/dimensions
PCA: Basic Concept
  • Areas of variance in the data are where items can best be discriminated and where key underlying phenomena can be observed
    • Areas of greatest “signal” in the data
  • If two items or dimensions are highly correlated or dependent
    • They are likely to represent highly related phenomena
    • If they tell us about the same underlying variance in the data, combining them to form a single measure is reasonable
      • Parsimony
      • Reduction in Error
  • So we want to combine related variables, and focus on uncorrelated or independent ones, especially those along which the observations have high variance
  • We want a smaller set of variables that explain most of the variance in the original data, in more compact and insightful form
PCA: Basic Concept
  • What if the dependences and correlations are not so strong or direct?
  • And suppose you have 3 variables, or 4, or 5, or 10000?
  • Look for the phenomena underlying the observed covariance/co-dependence in a set of variables
    • Once again, phenomena that are uncorrelated or independent, and especially those along which the data show high variance
    • These phenomena are called “factors,” “principal components,” or “independent components.”
PCA:
  • The new variables/dimensions
    • Are linear combinations of the original ones
    • Are uncorrelated with one another
      • Orthogonal in original dimension space
    • Capture as much of the original variance in the data as possible
    • Are called Principal Components

PCA: The goal

  • PCA is used to reduce the dimensionality of data without much loss of information.
  • Explain/summarize the underlying variance-covariance structure of a large set of variables through a few linear combinations of these variables.
  • Used in machine learning and in signal processing and image compression (among other things).
PCA: Applications

Uses:
  • Data Visualization
  • Data Reduction
  • Data Classification
  • Trend Analysis
  • Factor Analysis
  • Noise Reduction
PCA:

It is all about the way you look at the data.


PCA: Example

Principle of PCA


Trick: Rotate Coordinate Axes

Suppose we have a population measured on p random variables X1,…,Xp. Note that these random variables represent the p-axes of the Cartesian coordinate system in which the population resides. Our goal is to develop a new set of p axes (linear combinations of the original p axes) in the directions of greatest variability:

[Figure: data plotted against the original axes X1 and X2, with new rotated axes drawn along the directions of greatest variability]

This is accomplished by rotating the axes.

Background for PCA
  • Suppose attributes are A1 and A2, and we have n training examples. x’s denote values of A1 and y’s denote values of A2 over the training examples.
  • Variance of an attribute:
    var(A1) = Σi (xi − x̄)² / (n − 1)
Background for PCA
  • Covariance of two attributes:
    cov(A1, A2) = Σi (xi − x̄)(yi − ȳ) / (n − 1)
  • If the covariance is positive, both dimensions increase together. If it is negative, as one increases the other decreases. If it is zero, the dimensions are linearly unrelated: they may be independent, or related only non-linearly.
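As a quick check of these formulas, here is a minimal NumPy sketch (the attribute values below are made up for illustration) that computes the sample variance and covariance by hand and compares them with NumPy's built-in functions:

```python
import numpy as np

# Hypothetical values of attributes A1 (x) and A2 (y) over n training examples
x = np.array([2.1, 3.4, 1.8, 4.0, 2.9])
y = np.array([1.0, 2.2, 0.9, 2.8, 1.7])
n = len(x)

# var(A1) = sum_i (x_i - x_bar)^2 / (n - 1)
var_x = np.sum((x - x.mean()) ** 2) / (n - 1)

# cov(A1, A2) = sum_i (x_i - x_bar)(y_i - y_bar) / (n - 1)
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)

print(var_x, cov_xy)
assert np.isclose(var_x, np.var(x, ddof=1))    # matches NumPy's sample variance
assert np.isclose(cov_xy, np.cov(x, y)[0, 1])  # matches NumPy's sample covariance
```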
Background for PCA
  • Covariance matrix
    • Suppose we have n attributes, A1, ..., An.
    • Covariance matrix: the n × n matrix C whose entry Cij = cov(Ai, Aj); the diagonal entries Cii are the variances var(Ai).
Background for PCA
  • Eigenvectors:
    • Let M be an n × n matrix.
      • v is an eigenvector of M if M v = λv
      • λ is called the eigenvalue associated with v
    • For any eigenvector v of M and scalar a, av is also an eigenvector with the same eigenvalue: M(av) = λ(av)
    • Thus you can always choose eigenvectors of length 1: ||v|| = 1
    • If M is symmetric (as a covariance matrix is), it has n such eigenvectors, and they are orthogonal to one another.
    • Thus the eigenvectors can be used as a new basis for an n-dimensional vector space.
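A small NumPy sketch (the matrix below is arbitrary) that checks these properties numerically for a symmetric matrix:

```python
import numpy as np

# An arbitrary symmetric matrix, like a covariance matrix
M = np.array([[2.0, 0.8],
              [0.8, 1.0]])

# np.linalg.eigh handles symmetric matrices and returns unit-length eigenvectors (as columns)
eigenvalues, eigenvectors = np.linalg.eigh(M)

for i in range(M.shape[0]):
    v, lam = eigenvectors[:, i], eigenvalues[i]
    assert np.allclose(M @ v, lam * v)              # M v = lambda v
    assert np.allclose(M @ (3 * v), lam * (3 * v))  # any scalar multiple is also an eigenvector
    assert np.isclose(np.linalg.norm(v), 1.0)       # chosen to have length 1

# The eigenvectors are orthogonal to one another, so they form a new basis
assert np.allclose(eigenvectors.T @ eigenvectors, np.eye(M.shape[0]))
```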
PCA: Algebraic Interpretation
  • Given m points in an n-dimensional space, for large n, how does one project onto a low-dimensional space while preserving broad trends in the data and allowing it to be visualized?
PCA: Algebraic Interpretation – 1D
  • Given m points in an n-dimensional space, for large n, how does one project onto a 1-dimensional space?
  • Choose a line that fits the data so the points are spread out well along the line
PCA: Algebraic Interpretation – 1D
  • Formally, minimize sum of squares of distances to the line.
PCA: Algebraic Interpretation – 1D
  • Minimizing sum of squares of distances to the line is the same as maximizing the sum of squares of the projections on that line, thanks to Pythagoras.
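A minimal numerical check of this equivalence, assuming NumPy and a made-up 2-D point cloud: for any candidate line through the centred data, the squared projections and the squared distances to the line add up to a fixed total, so minimising one is the same as maximising the other.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 0.0],
                                          [1.0, 0.5]])  # synthetic, elongated point cloud
X = X - X.mean(axis=0)                                  # centre the points

total = np.sum(X ** 2)                                  # fixed, whatever line we pick

for angle in np.linspace(0.0, np.pi, 7):
    u = np.array([np.cos(angle), np.sin(angle)])        # unit vector along a candidate line
    proj = X @ u                                        # signed projections onto the line
    perp = X - np.outer(proj, u)                        # component of each point perpendicular to the line
    sq_proj = np.sum(proj ** 2)                         # sum of squared projections
    sq_dist = np.sum(perp ** 2)                         # sum of squared distances to the line
    assert np.isclose(sq_proj + sq_dist, total)         # Pythagoras: the two always sum to the same total
```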
PCA: General

From k original variables: x1,x2,...,xk:

Produce k new variables: y1,y2,...,yk:

y1 = a11x1 + a12x2 + ... + a1kxk

y2 = a21x1 + a22x2 + ... + a2kxk

...

yk = ak1x1 + ak2x2 + ... + akkxk

PCA: General

From k original variables: x1,x2,...,xk:

Produce k new variables: y1,y2,...,yk:

y1 = a11x1 + a12x2 + ... + a1kxk

y2 = a21x1 + a22x2 + ... + a2kxk

...

yk = ak1x1 + ak2x2 + ... + akkxk

such that:

yk's are uncorrelated (orthogonal)

y1 explains as much as possible of original variance in data set

y2 explains as much as possible of remaining variance

etc.

[Figure: data cloud with the 1st Principal Component, y1, drawn along the direction of greatest variance, and the 2nd Principal Component, y2, drawn orthogonal to it]

PCA Scores

[Figure: a single observation shown with its original coordinates (xi1, xi2) and its scores (yi,1, yi,2) along the two principal component axes]
PCA: Another Explanation

From k original variables: x1,x2,...,xk:

Produce k new variables: y1,y2,...,yk:

y1 = a11x1 + a12x2 + ... + a1kxk

y2 = a21x1 + a22x2 + ... + a2kxk

...

yk = ak1x1 + ak2x2 + ... + akkxk

The yk's are the Principal Components

such that:

yk's are uncorrelated (orthogonal)

y1 explains as much as possible of original variance in data set

y2 explains as much as possible of remaining variance

etc.

PCA: General

{a11,a12,...,a1k} is the 1st eigenvector of the correlation/covariance matrix and gives the coefficients of the first principal component

{a21,a22,...,a2k} is the 2nd eigenvector of the correlation/covariance matrix and gives the coefficients of the 2nd principal component

{ak1,ak2,...,akk} is the kth eigenvector of the correlation/covariance matrix and gives the coefficients of the kth principal component
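As a cross-check of this correspondence, here is a minimal sketch, assuming NumPy and scikit-learn are available and using synthetic data: the rows of `PCA.components_` should match the eigenvectors of the covariance matrix, ordered by decreasing eigenvalue, up to an arbitrary sign per component.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))  # synthetic data with 3 correlated variables

# Eigen-decomposition of the covariance matrix
C = np.cov(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(C)
order = np.argsort(eigenvalues)[::-1]                    # most significant component first
A = eigenvectors[:, order].T                             # row i holds the coefficients {a_i1, ..., a_ik}

# The same components via scikit-learn
pca = PCA().fit(X)

# Agreement up to a sign flip per component
assert np.allclose(np.abs(A), np.abs(pca.components_))
assert np.allclose(eigenvalues[order], pca.explained_variance_)
```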

PCA Summary
  • Rotates a multivariate dataset into a new configuration that is easier to interpret
  • Purposes
    • simplify data
    • look at relationships between variables
    • look at patterns of units
PCA Example –STEP 1
  • Subtract the mean from each of the data dimensions: all the x values have the mean x̄ subtracted, and all the y values have the mean ȳ subtracted. This produces a data set whose mean is zero.

Subtracting the mean makes the variance and covariance calculations easier by simplifying their equations. The variance and covariance values are not affected by the mean value.

PCA Example –STEP 1

http://kybele.psych.cornell.edu/~edelman/Psych-465-Spring-2003/PCA-tutorial.pdf

DATA:

x     y
2.5   2.4
0.5   0.7
2.2   2.9
1.9   2.2
3.1   3.0
2.3   2.7
2.0   1.6
1.0   1.1
1.5   1.6
1.1   0.9

ZERO MEAN DATA:

x       y
 .69     .49
-1.31   -1.21
 .39     .99
 .09     .29
1.29    1.09
 .49     .79
 .19    -.31
-.81    -.81
-.31    -.31
-.71   -1.01
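A minimal NumPy sketch of this step, reproducing the zero-mean data above:

```python
import numpy as np

data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
                 [2.3, 2.7], [2.0, 1.6], [1.0, 1.1], [1.5, 1.6], [1.1, 0.9]])

mean = data.mean(axis=0)   # (x_bar, y_bar) = (1.81, 1.91)
centered = data - mean     # matches the ZERO MEAN DATA pairs above
print(centered)
```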

PCA Example –STEP 1

[Figure: plot of the original data and of the mean-adjusted data; see http://kybele.psych.cornell.edu/~edelman/Psych-465-Spring-2003/PCA-tutorial.pdf]

PCA Example –STEP 2
  • Calculate the covariance matrix

cov = | .616555556   .615444444 |
      | .615444444   .716555556 |

  • Since the off-diagonal elements of this covariance matrix are positive, we should expect that the x and y variables increase together.
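A one-call sketch of this step with NumPy (np.cov uses the n − 1 denominator, matching the values above):

```python
import numpy as np

data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
                 [2.3, 2.7], [2.0, 1.6], [1.0, 1.1], [1.5, 1.6], [1.1, 0.9]])

cov = np.cov(data, rowvar=False)  # rows = observations, columns = the x and y dimensions
print(cov)
# [[0.61655556 0.61544444]
#  [0.61544444 0.71655556]]
```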
PCA Example –STEP 3
  • Calculate the eigenvectors and eigenvalues of the covariance matrix

eigenvalues = .0490833989, 1.28402771

eigenvectors = | -.735178656   -.677873399 |
               |  .677873399   -.735178656 |

(each column is a unit-length eigenvector, listed in the same order as the eigenvalues)

PCA Example –STEP 3

http://kybele.psych.cornell.edu/~edelman/Psych-465-Spring-2003/PCA-tutorial.pdf

  • The eigenvectors are plotted as diagonal dotted lines on the plot.
  • Note they are perpendicular to each other.
  • Note one of the eigenvectors goes through the middle of the points, like drawing a line of best fit.
  • The second eigenvector gives us the other, less important pattern in the data: all the points follow the main line but are offset from it by some amount.
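A minimal NumPy sketch of this step; the signs of the returned eigenvectors may be flipped relative to the slide, since an eigenvector is only defined up to sign:

```python
import numpy as np

cov = np.array([[0.616555556, 0.615444444],
                [0.615444444, 0.716555556]])

# eigh is appropriate for symmetric matrices; eigenvalues come back in ascending order,
# with the matching unit-length eigenvectors as columns
eigenvalues, eigenvectors = np.linalg.eigh(cov)
print(eigenvalues)   # approx. [0.0490834  1.28402771]
print(eigenvectors)  # columns approx. (-.735178656, .677873399) and (-.677873399, -.735178656), up to sign
```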
PCA Example –STEP 4
  • Reduce dimensionality and form feature vector

The eigenvector with the highest eigenvalue is the principal component of the data set.

In our example, the eigenvector with the largest eigenvalue was the one that pointed down the middle of the data.

Once the eigenvectors are found from the covariance matrix, the next step is to order them by eigenvalue, highest to lowest. This gives the components in order of significance.

PCA Example –STEP 4

Now, if you like, you can decide to ignore the components of lesser significance.

You do lose some information, but if the eigenvalues are small, you don’t lose much.

  • n dimensions in your data
  • calculate n eigenvectors and eigenvalues
  • choose only the first p eigenvectors
  • final data set has only p dimensions.
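A short sketch of this selection step, continuing from the covariance matrix of the example (assuming NumPy):

```python
import numpy as np

cov = np.array([[0.616555556, 0.615444444],
                [0.615444444, 0.716555556]])
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Order the components by eigenvalue, highest (most significant) first
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

p = 1                                 # keep only the first p components
feature_vector = eigenvectors[:, :p]  # the single most significant eigenvector, as a column
print(feature_vector)
```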
PCA Example –STEP 4
  • Feature Vector

FeatureVector = (eig1 eig2 eig3 … eign)

We can either form a feature vector with both of the eigenvectors:

| -.677873399   -.735178656 |
| -.735178656    .677873399 |

or, we can choose to leave out the smaller, less significant component and only have a single column:

| -.677873399 |
| -.735178656 |

PCA Example –STEP 5
  • Deriving the new data

FinalData = RowFeatureVector x RowZeroMeanData

RowFeatureVector is the matrix with the eigenvectors in the columns transposed so that the eigenvectors are now in the rows, with the most significant eigenvector at the top

RowZeroMeanData is the mean-adjusted data transposed, i.e. the data items are in each column, with each row holding a separate dimension.
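Putting the steps together, a minimal NumPy sketch of this projection (component signs may differ from the tabulated values, because each eigenvector is only defined up to sign):

```python
import numpy as np

data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
                 [2.3, 2.7], [2.0, 1.6], [1.0, 1.1], [1.5, 1.6], [1.1, 0.9]])
centered = data - data.mean(axis=0)           # RowZeroMeanData, before transposing

eigenvalues, eigenvectors = np.linalg.eigh(np.cov(centered, rowvar=False))
order = np.argsort(eigenvalues)[::-1]
feature_vector = eigenvectors[:, order]       # both eigenvectors, most significant first

# FinalData = RowFeatureVector x RowZeroMeanData
final_data = feature_vector.T @ centered.T    # one row per principal component
print(final_data.T)                           # one row per data item; compare with the table that follows
```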

PCA Example –STEP 5

FinalData transpose: dimensions along columns

x              y
-.827970186   -.175115307
1.77758033     .142857227
-.992197494    .384374989
-.274210416    .130417207
-1.67580142   -.209498461
-.912949103    .175282444
.0991094375   -.349824698
1.14457216     .0464172582
.438046137     .0177646297
1.22382056    -.162675287

PCA Example –STEP 5

[Figure: the data re-plotted in terms of the principal component axes; see http://kybele.psych.cornell.edu/~edelman/Psych-465-Spring-2003/PCA-tutorial.pdf]

For More on PCA …
  • Textbook, Chapter 20
  • Technical Note on PCA (see attached PDF)
  • Case Studies and Examples of PCA (see attached PDF)