
II. The Multivariate Normal Distribution

“…it is not enough to know that a sample could have come from a normal population; we must be clear that it is at the same time improbable that it has come from a population differing so much from the normal as to invalidate the use of the ‘normal theory’ tests in further handling of the material.”

E. S. Pearson, 1930 (quoted on page 1 in Tests of Normality, Henry C. Thode, Jr., 2002)


A. Review of the Univariate Normal Distribution

Normal Probability Distribution - expresses the probabilities of outcomes for a continuous random variable x with a particular symmetric and unimodal distribution. This density function is given by

f(x) = (1/(σ√(2π))) e^(-(x - μ)²/(2σ²)), -∞ < x < ∞

where μ = mean

σ = standard deviation

π = 3.14159…

e = 2.71828…


but the probability that x falls between two values a and b is given by

P(a ≤ x ≤ b) = ∫ f(x) dx, integrated over [a, b]

This looks like a difficult integration problem! Will I have to integrate this function every time I want to calculate probabilities for some normal random variable?


Characteristics of the normal probability distribution are:

- there are an infinite number of normal distributions, each defined by its unique combination of the mean μ and standard deviation σ

- μ determines the central location and σ determines the spread or width

- the distribution is symmetric about μ

- it is unimodal

- μ = Md = Mo (the mean, median, and mode coincide)

- it is asymptotic with respect to the horizontal axis

- the area under the curve is 1.0

- it is neither platykurtic nor leptokurtic

- it follows the empirical rule: approximately 68% of observations lie within one standard deviation of the mean, 95% within two, and 99.7% within three




The Standard Normal Probability Distribution - the probability distribution associated with the normal random variable (usually denoted z) that has μ = 0 and σ = 1.

There are tables that can be used to obtain the results of the integration for the standard normal random variable.


Some of the tables work from the cumulative standard normal probability distribution (the probability that a value drawn from the standard normal random variable falls between -∞ and some given value b, i.e., P(-∞ < z ≤ b)).

There are tables that give the results of the integration (Table 1 of the Appendices in J&W).


Cumulative Standard Normal Distribution (J&W Table 1)


Let’s focus on a small part of the Cumulative Standard Normal Probability Distribution Table.

Example: for a standard normal random variable z, what is the probability that z is between -∞ and 0.43?
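Today the table lookup can be reproduced in software. A minimal sketch, assuming Python with scipy is available (scipy is not part of the original slides):

```python
# norm.cdf plays the role of the cumulative standard normal table (J&W Table 1)
from scipy.stats import norm

# P(-inf < z <= 0.43), the lookup from the example above
p = norm.cdf(0.43)
```

This returns approximately 0.6664, the tabled cumulative probability for z = 0.43.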


Example: for a standard normal random variable z, what is the probability that z is between 0 and 2.0?


Again, looking at a small part of the Cumulative Standard Normal Probability Distribution Table, we find the probability that a standard normal random variable z is between -∞ and 2.00 is 0.9772.


Example: for a standard normal random variable z, what is the probability that z is between 0 and 2.0?

Area of probability = P(-∞ < z ≤ 2.0) - P(-∞ < z ≤ 0) = 0.9772 - 0.5000 = 0.4772


What is the probability that z is at least 2.0?

Area of probability = 1.0000 - 0.9772 = 0.0228


What is the probability that z is between -1.5 and 2.0?

The area of probability between 0 and 2.0 is 0.4772; we still need the area between -1.5 and 0.


Again, looking at a small part of the Cumulative Standard Normal Probability Distribution Table, we find the probability that a standard normal random variable z is between -∞ and -1.50 is 0.0668.


What is the probability that z is between -1.5 and 2.0?

Area of probability between -1.5 and 0 = 0.5000 - 0.0668 = 0.4332

Area of probability between 0 and 2.0 = 0.4772

so P(-1.5 ≤ z ≤ 2.0) = 0.4332 + 0.4772 = 0.9104


Notice we could find the probability that z is between -1.5 and 2.0 another way!

Area of probability below -1.5 = 1.0000 - 0.9332 = 0.0668

Area of probability below 2.0 = 0.9772

so P(-1.5 ≤ z ≤ 2.0) = 0.9772 - 0.0668 = 0.9104


There are often multiple ways to use the Cumulative Standard Normal Probability Distribution Table to find the probability that a standard normal random variable z is between two given values!

How do you decide which to use?

- Do what you understand (make yourself comfortable)

and

- DRAW THE PICTURE!!!
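The equivalence of these table strategies is easy to confirm in software. A sketch, assuming Python with scipy (not part of the original slides):

```python
from scipy.stats import norm

# Three equivalent strategies for P(-1.5 <= z <= 2.0)
a = (norm.cdf(2.0) - 0.5) + (0.5 - norm.cdf(-1.5))  # split the area at 0
b = norm.cdf(2.0) - norm.cdf(-1.5)                  # difference of cumulatives
c = (norm.cdf(1.5) - 0.5) + (norm.cdf(2.0) - 0.5)   # symmetry: P(-1.5<=z<=0) = P(0<=z<=1.5)
```

All three give 0.9104 (to four decimal places), matching the table calculations above.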


Notice we could also calculate the probability that z is between -1.5 and 2.0 yet another way!

By symmetry, the area of probability between -1.5 and 0 equals the area between 0 and 1.5 = 0.9332 - 0.5000 = 0.4332

Area of probability between 0 and 2.0 = 0.4772

so P(-1.5 ≤ z ≤ 2.0) = 0.4332 + 0.4772 = 0.9104


What is the probability that z is between -2.0 and -1.5?

Area of probability between -2.0 and 0 = 0.5000 - 0.0228 = 0.4772

Area of probability between -1.5 and 0 = 0.4332

so P(-2.0 ≤ z ≤ -1.5) = 0.4772 - 0.4332 = 0.0440


What is the probability that z is exactly 1.5?

P(z = 1.5) = 0 (why?)


Other tables work from the half standard normal probability distribution (the probability that a value drawn from the standard normal random variable falls between 0 and some given value b > 0, i.e., P(0 ≤ z ≤ b)).

There are tables that give the results of the integration as well.


Standard Normal Distribution


Let’s focus on a small part of the Standard Normal Probability Distribution Table

Example: for a standard normal random variable z, what is the probability that z is between 0 and 0.43?


Example: for a standard normal random variable z, what is the probability that z is between 0 and 2.0?


Again, looking at a small part of the Standard Normal Probability Distribution Table, we find the probability that a standard normal random variable z is between 0 and 2.00 is 0.4772.


Example: for a standard normal random variable z, what is the probability that z is between 0 and 2.0?

Area of probability = 0.4772


What is the probability that z is at least 2.0?

Area of probability = 0.5000 - 0.4772 = 0.0228


What is the probability that z is between -1.5 and 2.0?

The area of probability between 0 and 2.0 is 0.4772; we still need the area between -1.5 and 0.


Again, looking at a small part of the Standard Normal Probability Distribution Table, we find the probability that a standard normal random variable z is between 0 and -1.50 (by symmetry, the tabled value for 1.50) is 0.4332.


What is the probability that z is between -1.5 and 2.0?

Area of probability between -1.5 and 0 = 0.4332

Area of probability between 0 and 2.0 = 0.4772

so P(-1.5 ≤ z ≤ 2.0) = 0.4332 + 0.4772 = 0.9104


What is the probability that z is between -2.0 and -1.5?

Area of probability between -2.0 and 0 = 0.4772

Area of probability between -1.5 and 0 = 0.4332

so P(-2.0 ≤ z ≤ -1.5) = 0.4772 - 0.4332 = 0.0440


What is the probability that z is exactly 1.5?

P(z = 1.5) = 0 (why?)


z-Transformation - mathematical means by which any normal random variable with a mean μ and standard deviation σ can be converted into a standard normal random variable.

- to make the mean equal to 0, we simply subtract μ from each observation in the population

- to then make the standard deviation equal to 1, we divide the results of the first step by σ

The resulting transformation is given by

z = (x - μ)/σ


Example: for a normal random variable x with a mean of 5 and a standard deviation of 3, what is the probability that x is between 5.0 and 7.0?


Using the z-transformation, we can restate the problem in the following manner:

P(5.0 ≤ x ≤ 7.0) = P((5.0 - 5)/3 ≤ z ≤ (7.0 - 5)/3) = P(0 ≤ z ≤ 0.67)

then use the standard normal probability table to find the ultimate answer:

P(0 ≤ z ≤ 0.67) = 0.2486

which graphically is the area of probability 0.2486 between x = 5.0 (z = 0.0) and x = 7.0 (z = 0.67).
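The same calculation, sketched in Python with scipy (not part of the original slides; the rounding of z to two decimals mimics the table's precision):

```python
from scipy.stats import norm

mu, sigma = 5.0, 3.0

# z-transformation: z = (x - mu) / sigma
z_lo = (5.0 - mu) / sigma            # 0.0
z_hi = round((7.0 - mu) / sigma, 2)  # 0.67, to the table's two-decimal precision

# P(5.0 <= x <= 7.0) = P(0 <= z <= 0.67)
p = norm.cdf(z_hi) - norm.cdf(z_lo)
```

This reproduces the tabled answer 0.2486.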


Why is the normal probability distribution considered so important?

- many random variables are naturally normally distributed

- many distributions, such as the Poisson and the binomial, can be approximated by the normal distribution (Central Limit Theorem)

- the distribution of many statistics, such as the sample mean and the sample proportion, are approximately normally distributed if the sample is sufficiently large (also Central Limit Theorem)


B. The Multivariate Normal Distribution

The univariate normal distribution has a generalized form in p dimensions – the p-dimensional normal density function is

f(x) = (2π)^(-p/2) |Σ|^(-1/2) e^(-(x - μ)'Σ⁻¹(x - μ)/2)

where -∞ < xi < ∞, i = 1,…,p. The exponent (x - μ)'Σ⁻¹(x - μ) is the squared generalized distance from x to μ.

This p-dimensional normal density function is denoted by Np(μ, Σ), where μ is the p × 1 mean vector and Σ is the p × p covariance matrix.


The simplest multivariate normal distribution is the bivariate (2-dimensional) normal distribution, which has the density function

f(x) = (2π)^(-1) |Σ|^(-1/2) e^(-(x - μ)'Σ⁻¹(x - μ)/2)

where -∞ < xi < ∞, i = 1, 2, and (x - μ)'Σ⁻¹(x - μ) is again the squared generalized distance from x to μ.

This 2-dimensional normal density function is denoted by N2(μ, Σ), where

μ = [μ1, μ2]' and Σ = [[σ11, σ12], [σ12, σ22]]
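As a concrete illustration (with made-up μ and Σ, not from the slides), the bivariate density can be evaluated both from the formula and with scipy, assuming Python/numpy/scipy:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative parameters for N2(mu, Sigma)
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
x = np.array([0.3, -0.2])

# density via scipy...
f_scipy = multivariate_normal(mu, Sigma).pdf(x)

# ...and via the formula (2*pi)^(-p/2) |Sigma|^(-1/2) exp(-d2/2)
p = 2
d2 = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)  # squared generalized distance
f_manual = (2 * np.pi) ** (-p / 2) * np.linalg.det(Sigma) ** (-0.5) * np.exp(-d2 / 2)
```

The two evaluations agree to machine precision.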


We can easily find the inverse of the covariance matrix (by using Gauss-Jordan elimination or some other technique):

Σ⁻¹ = 1/(σ11σ22 - σ12²) [[σ22, -σ12], [-σ12, σ11]]

Now we use the previously established relationship σ12 = ρ12 √σ11 √σ22 to establish that

|Σ| = σ11σ22 - σ12² = σ11σ22(1 - ρ12²)

By substitution we can now write the squared distance as

(x - μ)'Σ⁻¹(x - μ) = 1/(1 - ρ12²) [(x1 - μ1)²/σ11 + (x2 - μ2)²/σ22 - 2ρ12(x1 - μ1)(x2 - μ2)/(√σ11 √σ22)]

Graphically, the bivariate normal probability density function f(X1, X2) is a bell-shaped surface over the (X1, X2) plane.

All points of equal density are called a contour, defined for p dimensions as all x such that

(x - μ)'Σ⁻¹(x - μ) = c²


The contours form concentric ellipsoids centered at μ with axes

±c√λi ei, i = 1,…,p

where Σei = λiei (that is, the λi and ei are the eigenvalue-eigenvector pairs of Σ).


The general form of contours for a bivariate normal probability distribution where the variables have equal variance (σ11 = σ22) is relatively easy to derive:

First we need the eigenvalues of Σ:

λ1 = σ11 + σ12 and λ2 = σ11 - σ12


Next we need the eigenvectors of Σ:

e1 = [1/√2, 1/√2]' and e2 = [1/√2, -1/√2]'
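A quick numerical check of these eigenvalue/eigenvector facts, with illustrative values for σ11 and σ12 (numpy assumed, not part of the original slides):

```python
import numpy as np

# Equal-variance bivariate covariance matrix (s11 = s22); values are illustrative
s11, s12 = 2.0, 1.2
Sigma = np.array([[s11, s12],
                  [s12, s11]])

# eigh returns eigenvalues in ascending order with orthonormal eigenvectors
vals, vecs = np.linalg.eigh(Sigma)

# For s11 = s22 the eigenvalues are s11 - s12 and s11 + s12, and the
# eigenvector of the larger one (positive s12) lies along the 45-degree line,
# i.e., its two components are equal in magnitude.
```

The decomposition confirms the closed-form eigenvalues and the 45° orientation of the major axis.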


- for a positive covariance σ12, the first eigenvalue and its associated eigenvector lie along the 45° line running through the centroid μ: the contour for constant density f(X1, X2) is an ellipse whose major axis follows that line.

What do you suppose happens when the covariance is negative? Why?


- for a negative covariance σ12, the second eigenvalue and its associated eigenvector lie at right angles to the 45° line running through the centroid μ: the contour for constant density f(X1, X2) is an ellipse whose major axis is perpendicular to that line.

What do you suppose happens when the covariance is zero? Why?


What do you suppose happens when the two random variables X1 and X2 are uncorrelated (i.e., ρ12 = 0)? The joint density factors into the product of the marginal densities f(X1) and f(X2).

- for a covariance σ12 of zero, the two eigenvalues are equal and the eigenvectors are equal except for signs - one runs along the 45° line through the centroid μ and the other is perpendicular: the contours for constant density f(X1, X2) are circles.


Contours also have an important probability interpretation – the solid ellipsoid of x values satisfying

(x - μ)'Σ⁻¹(x - μ) ≤ χ²p(α)

has probability 1 - α, i.e.,

P[(x - μ)'Σ⁻¹(x - μ) ≤ χ²p(α)] = 1 - α

where χ²p(α) is the upper (100α)th percentile of the chi-square distribution with p degrees of freedom.


C. Properties of the Multivariate Normal Distribution

For any multivariate normal random vector X:

1. The density has maximum value at x = μ, i.e., the mean is equal to the mode!

2. The density is symmetric along its constant density contours and is centered at μ, i.e., the mean is equal to the median!

3. Linear combinations of the components of X are normally distributed

4. All subsets of the components of X have a (multivariate) normal distribution

5. Zero covariance implies that the corresponding components of X are independently distributed

6. Conditional distributions of the components of X are (multivariate) normal


D. Some Important Results Regarding the Multivariate Normal Distribution

1. If X ~ Np(μ, Σ), then any linear combination a'X ~ N1(a'μ, a'Σa).

Furthermore, if a'X ~ N1(a'μ, a'Σa) for every a, then X ~ Np(μ, Σ).


2. If X ~ Np(μ, Σ), then any set of q linear combinations AX (A a q × p matrix of constants) is distributed Nq(Aμ, AΣA').

Furthermore, if d is a conformable vector of constants, then X + d ~ Np(μ + d, Σ).


3. If X ~ Np(μ, Σ), then all subsets of X are (multivariate) normally distributed, i.e., for any partition

X = [X1' | X2']', μ = [μ1' | μ2']', Σ = [[Σ11, Σ12], [Σ21, Σ22]]

then X1 ~ Nq(μ1, Σ11) and X2 ~ Np-q(μ2, Σ22).


4. If X1 ~ Nq1(μ1, Σ11) and X2 ~ Nq2(μ2, Σ22) are independent, then Cov(X1, X2) = Σ12 = 0,

and if

[X1' | X2']' ~ Nq1+q2([μ1' | μ2']', [[Σ11, Σ12], [Σ21, Σ22]])

then X1 and X2 are independent iff Σ12 = 0,

and if X1 ~ Nq1(μ1, Σ11) and X2 ~ Nq2(μ2, Σ22) are independent, then

[X1' | X2']' ~ Nq1+q2([μ1' | μ2']', [[Σ11, 0], [0, Σ22]])


5. If X ~ Np(μ, Σ) and |Σ| > 0, then

(X - μ)'Σ⁻¹(X - μ) ~ χ²p

and

the Np(μ, Σ) distribution assigns probability 1 - α to the solid ellipsoid {x : (x - μ)'Σ⁻¹(x - μ) ≤ χ²p(α)}.
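Result 5 is easy to verify by simulation. A sketch assuming Python/numpy/scipy, with an arbitrary positive definite Σ (all values illustrative):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
p, n = 3, 200_000
mu = np.zeros(p)
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)  # an arbitrary positive definite covariance

X = rng.multivariate_normal(mu, Sigma, size=n)
Sinv = np.linalg.inv(Sigma)
# squared generalized distances (X - mu)' Sigma^{-1} (X - mu), one per row
d2 = np.einsum('ij,jk,ik->i', X - mu, Sinv, X - mu)

# The ellipsoid d2 <= chi2_p(alpha) should contain about 1 - alpha of the draws
alpha = 0.05
coverage = np.mean(d2 <= chi2.ppf(1 - alpha, df=p))
```

With 200,000 draws the observed coverage sits very close to 0.95, as the chi-square result predicts.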


6. Let Xj ~ Np(μj, Σ), j = 1,…,n be mutually independent with common covariance matrix Σ. Then

V1 = c1X1 + … + cnXn ~ Np(Σj cjμj, (c'c)Σ)

Furthermore, V1 and

V2 = b1X1 + … + bnXn

are jointly multivariate normal with covariance matrix

[[(c'c)Σ, (b'c)Σ], [(b'c)Σ, (b'b)Σ]]

so V1 and V2 are independent if b'c = 0!


E. Sampling From a Multivariate Normal Distribution and Maximum Likelihood Estimation

Let Xj ~ Np(μ, Σ), j = 1,…,n represent a random sample.

Since the Xj's are mutually independent and each has distribution Np(μ, Σ), their joint density is the product of their marginal densities, i.e.,

∏j f(xj) = (2π)^(-np/2) |Σ|^(-n/2) e^(-Σj (xj - μ)'Σ⁻¹(xj - μ)/2)

As a function of μ and Σ, this is the likelihood for fixed observations xj, j = 1,…,n.


Maximum Likelihood Estimation – estimation of parameter values by finding estimates that maximize the likelihood of the sample data on which they are based (select estimated values for parameters that best explain the sample data collected)

Maximum Likelihood Estimates – the estimates of parameter values that maximize the likelihood of the sample data on which they are based

For a multivariate normal distribution, we would like to obtain the maximum likelihood estimates of parameters μ and Σ given the sample data X we have collected. To simplify our efforts we will need to utilize some properties of the trace to rewrite the likelihood function in another form.


For a k × k symmetric matrix A and a k × 1 vector x:

- x'Ax = tr(x'Ax) = tr(Axx')

- tr(A) = Σi λi, where λi, i = 1,…,k are the eigenvalues of A

These two results can be used to simplify the joint density of n mutually independent random observations Xj, each with distribution Np(μ, Σ) – we first rewrite

Σj (xj - μ)'Σ⁻¹(xj - μ) = Σj tr[Σ⁻¹(xj - μ)(xj - μ)']
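Both trace properties can be confirmed numerically. A sketch assuming numpy (the matrix and vector are random illustrations, not data from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4
A = rng.standard_normal((k, k))
A = (A + A.T) / 2             # a k x k symmetric matrix
x = rng.standard_normal(k)    # a k x 1 vector

quad = x @ A @ x                       # x'Ax is a scalar
tr1 = np.trace(A @ np.outer(x, x))     # tr(A x x')
trA = np.trace(A)
eigsum = np.linalg.eigvalsh(A).sum()   # sum of the eigenvalues of A
```

`quad` matches `tr1`, and `trA` matches `eigsum`, as the two identities claim.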


Then we rewrite

Σj tr[Σ⁻¹(xj - μ)(xj - μ)'] = tr[Σ⁻¹ Σj (xj - μ)(xj - μ)']

since the trace of the sum of matrices is equal to the sum of their individual traces.


We can further state that

Σj (xj - μ)(xj - μ)' = Σj (xj - x̄)(xj - x̄)' + n(x̄ - μ)(x̄ - μ)'

because the crossproduct terms

Σj (xj - x̄)(x̄ - μ)' and Σj (x̄ - μ)(xj - x̄)'

are both matrices of zeros.


Substitution of these two results yields an alternative expression of the joint density of a random sample from a p-dimensional population:

(2π)^(-np/2) |Σ|^(-n/2) e^(-tr[Σ⁻¹(Σj (xj - x̄)(xj - x̄)' + n(x̄ - μ)(x̄ - μ)')]/2)

Substitution of the observed values x1,…,xn into the joint density yields the likelihood function for the corresponding sample X, which is often denoted as L(μ, Σ).


So for observed values x1,…,xn that comprise a random sample X drawn from a p-dimensional normally distributed population, the likelihood function is

L(μ, Σ) = (2π)^(-np/2) |Σ|^(-n/2) e^(-tr[Σ⁻¹(Σj (xj - x̄)(xj - x̄)' + n(x̄ - μ)(x̄ - μ)')]/2)


Finally, note that we can express the exponent of the likelihood function in many ways – one particular alternate expression will be particularly convenient:

tr[Σ⁻¹ Σj (xj - x̄)(xj - x̄)'] + n(x̄ - μ)'Σ⁻¹(x̄ - μ)

which, by another substitution, yields the likelihood function

L(μ, Σ) = (2π)^(-np/2) |Σ|^(-n/2) e^(-(tr[Σ⁻¹ Σj (xj - x̄)(xj - x̄)'] + n(x̄ - μ)'Σ⁻¹(x̄ - μ))/2)

Again, keep in mind that we are pursuing estimates of μ and Σ that maximize the likelihood function L(μ, Σ) for a given random sample X.


This result will also be helpful in deriving the maximum likelihood estimates of μ and Σ.

For a p × p symmetric positive definite matrix B and scalar b > 0, it follows that

(1/|Σ|^b) e^(-tr(Σ⁻¹B)/2) ≤ (1/|B|^b) (2b)^(pb) e^(-pb)

for all positive definite Σ of dimension p × p, with equality holding only for Σ = (1/(2b))B.


Now we are ready for maximum likelihood estimation of μ and Σ.

For a random sample X1,…,Xn from a normal population with mean μ and covariance Σ, the maximum likelihood estimators μ̂ and Σ̂ of μ and Σ are

μ̂ = X̄ and Σ̂ = (1/n) Σj (Xj - X̄)(Xj - X̄)' = ((n - 1)/n) S

Their observed values for observed data x1,…,xn,

x̄ and (1/n) Σj (xj - x̄)(xj - x̄)'

are the maximum likelihood estimates of μ and Σ.
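A sketch of the MLEs in numpy (simulated illustrative data; the key point is the divisor n rather than n - 1):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 2
X = rng.multivariate_normal([1.0, -1.0], [[2.0, 0.6], [0.6, 1.0]], size=n)

mu_hat = X.mean(axis=0)          # MLE of mu is the sample mean vector
dev = X - mu_hat
Sigma_hat = dev.T @ dev / n      # MLE of Sigma divides by n, not n - 1
S = np.cov(X, rowvar=False)      # the unbiased S divides by n - 1
```

`Sigma_hat` equals `((n - 1)/n) * S` exactly, matching the relationship stated above.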


Note that the maximum of the likelihood is achieved at μ̂ = x̄ and Σ̂ = (1/n) Σj (xj - x̄)(xj - x̄)',

and since

L(μ̂, Σ̂) = (2π)^(-np/2) |Σ̂|^(-n/2) e^(-np/2)

we have that the maximized likelihood depends on the data only through the generalized variance |Σ̂| (the remaining factors are constant).


It can be shown that maximum likelihood estimators (or MLEs) possess an invariance property – if θ̂ is the MLE of θ, then the MLE of f(θ) is f(θ̂). Thus we can say

- the MLE of μ'Σ⁻¹μ is μ̂'Σ̂⁻¹μ̂

- the MLE of √σii is √σ̂ii, where σ̂ii = (1/n) Σj (xji - x̄i)² is the MLE of σii = Var(Xi).


It can also be shown that X̄ and Σj (Xj - X̄)(Xj - X̄)' = (n - 1)S are sufficient for the multivariate normal joint density,

i.e., the density depends on the entire set of observations x1,…,xn only through x̄ and S.

Thus, we refer to X̄ and S as the sufficient statistics for the multivariate normal distribution.

Sufficient Statistics contain all information necessary to evaluate a particular density for a given sample.


F. The Sampling Distributions of X̄ and S

The assumption that X1,…,Xn constitute a random sample with mean μ and covariance Σ completely determines the sampling distributions of X̄ and S.

For a univariate normal distribution, X̄ is normal with

E[X̄] = μ and Var(X̄) = σ²/n

Analogously, for the multivariate (p ≥ 2) case (i.e., X is normal with mean μ and covariance Σ), X̄ is normal with

E[X̄] = μ and Cov(X̄) = (1/n)Σ


Similarly, for a random sample X1,…,Xn from a univariate normal distribution with mean μ and variance σ²,

(n - 1)s² = Σj (Xj - X̄)²

is distributed as σ² times a chi-square variable with n - 1 degrees of freedom.

Analogously, for the multivariate (p ≥ 2) case (i.e., X is normal with mean μ and covariance Σ), (n - 1)S is Wishart distributed with n - 1 degrees of freedom (denoted Wn-1(· | Σ)).


Some important properties of the Wishart distribution:

- The Wishart distribution exists only if n > p

- If A1 ~ Wm1(A1 | Σ) independently of A2 ~ Wm2(A2 | Σ) (i.e., with common covariance matrix Σ), then A1 + A2 ~ Wm1+m2(A1 + A2 | Σ)

- and if A ~ Wm(A | Σ), then CAC' ~ Wm(CAC' | CΣC')

- When it exists, the Wishart distribution has a density proportional to

|A|^((n - p - 2)/2) e^(-tr(AΣ⁻¹)/2)

for a symmetric positive definite matrix A.


G. Large Sample Behavior of X̄ and S

- The (Univariate) Central Limit Theorem – suppose that X̄ is the mean of n independent observations V1,…,Vn, where the Vi have approximately equivalent variability. Then the distribution of X̄ becomes approximately normal as the sample size increases, no matter what the form of the underlying population distribution.

- Convergence in Probability – a random variable X is said to converge in probability to a given constant value c if, for any prescribed accuracy ε,

P[-ε < X - c < ε] approaches 1 as n → ∞


- The Law of Large Numbers – let Y1,…,Yn constitute independent observations from a population with mean E[Y] = μ. Then Ȳ converges in probability to μ as n increases without bound, i.e.,

P[-ε < Ȳ - μ < ε] approaches 1 as n → ∞


Multivariate implications of the Law of Large Numbers include (elementwise)

P[-ε < X̄ - μ < ε] approaches 1 as n → ∞

and

P[-ε < S - Σ < ε] approaches 1 as n → ∞

or similarly

P[-ε < Sn - Σ < ε] approaches 1 as n → ∞

where Sn denotes the sample covariance with divisor n. These happen very quickly!



- These results can be used to support the (Multivariate) Central Limit Theorem – let X1,…,Xn constitute independent observations from any population with mean μ and finite (nonsingular) covariance Σ. Then

√n(X̄ - μ) is approximately Np(0, Σ)

for n large relative to p.

This can be restated as

n(X̄ - μ)'Σ⁻¹(X̄ - μ) is approximately χ²p

again for n large relative to p.
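A simulation sketch of the multivariate CLT, using a deliberately non-normal (exponential) parent population (numpy/scipy assumed; all parameters are illustrative):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
p, n, reps = 2, 100, 20_000

# Independent Exp(1) components: mean vector of ones, covariance Sigma = I
mu = np.ones(p)
Sigma_inv = np.eye(p)

# reps sample means, each from a sample of size n
xbar = rng.exponential(1.0, size=(reps, n, p)).mean(axis=1)

# n (xbar - mu)' Sigma^{-1} (xbar - mu) should be approximately chi-square_p
d2 = n * np.einsum('ij,jk,ik->i', xbar - mu, Sigma_inv, xbar - mu)
coverage = np.mean(d2 <= chi2.ppf(0.95, df=p))
```

Even though the parent population is skewed, the 95% chi-square cutoff captures close to 95% of the simulated statistics at n = 100.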


Because the sample covariance matrix S (or Sn) converges to the population covariance matrix Σ so quickly (i.e., at relatively small values of n - p), we often substitute the sample covariance for the population covariance with little concern for the ramifications – so we have

√n(X̄ - μ) is approximately Np(0, S)

for n large relative to p.

This can be restated as

n(X̄ - μ)'S⁻¹(X̄ - μ) is approximately χ²p

again for n large relative to p.


One final important result due to the CLT – by substitution of S for Σ,

n(X̄ - μ)'S⁻¹(X̄ - μ) is approximately χ²p

for n large relative to p.


H. Assessing the Assumption of Normality

There are two general circumstances in multivariate statistics under which the assumption of multivariate normality is crucial:

- the technique to be used relies directly on the raw observations Xj

- the technique to be used relies directly on the sample mean vector X̄ (including those techniques which rely on distances of the form n(X̄ - μ)'S⁻¹(X̄ - μ))

In either of these situations, the quality of inferences to be made depends on how closely the true parent population resembles the assumed multivariate normal form!


Based on the properties of the Multivariate Normal Distribution, we know

- all linear combinations of the individual normal components are normal

- the contours of the multivariate normal density are concentric ellipsoids

These facts suggest investigation of the following questions (in one or two dimensions):

- Do the marginal distributions of the elements of X appear normal? What about a few linear combinations?

- Do the bivariate scatterplots appear ellipsoidal?

- Are there any unusual looking observations (outliers)?


Tools frequently used for assessing univariate normality include

- the empirical rule

- dot plots (for small sample sets) and histograms or stem & leaf plots (for larger samples)

- goodness-of-fit tests such as the Chi-Square GOF Test and the Kolmogorov-Smirnov Test

- the test developed by Shapiro and Wilk [1965] called the Shapiro-Wilk test

- Q-Q plots (of the sample quantiles against the expected quantile for each observation given normality)


Example – suppose we had the following fifteen (ordered) sample observations on some random variable X:

1.43, 1.62, 2.46, 2.48, 2.97, 4.03, 4.47, 5.76, 6.61, 6.68, 6.79, 7.46, 7.88, 8.92, 9.42

Do these data support the assertion that they were drawn from a normal parent population?


In order to assess normality by the empirical rule, we need to compute the generalized distance from the centroid (convert the data to standard normal values) – for our data we compute the sample mean x̄ and standard deviation s, so the corresponding standardized values are zj = (xj - x̄)/s.

Nine of the observations (or 60%) lie within one standard deviation of the mean, and all fifteen of the observations lie within two standard deviations of the mean – does this support the assertion that they were drawn from a normal parent population?
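The empirical-rule counts can be reproduced with the fifteen observations (taken from the SAS DATA step in this section), assuming Python/numpy:

```python
import numpy as np

# The fifteen observations from the SAS DATA step
x = np.array([1.43, 1.62, 2.46, 2.48, 2.97, 4.03, 4.47, 5.76,
              6.61, 6.68, 6.79, 7.46, 7.88, 8.92, 9.42])

z = (x - x.mean()) / x.std(ddof=1)  # standardized values, using sample s
within1 = np.mean(np.abs(z) <= 1)   # the empirical rule expects about 68%
within2 = np.mean(np.abs(z) <= 2)   # ...and about 95%
```

This confirms 9 of 15 (60%) within one standard deviation and 15 of 15 within two.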


A simple dot plot of the observations (spread between roughly 1 and 10 on the number line) doesn't seem to tell us much (of course, fifteen data points isn't much to go on).

How about a histogram?

This doesn't seem to tell us much either!


We could use SAS to calculate the Shapiro-Wilk test statistic and corresponding p-value:

DATA stuff;
INPUT x;
LABEL x='Observed Values of X';
CARDS;
1.43
1.62
2.46
2.48
2.97
4.03
4.47
5.76
6.61
6.68
6.79
7.46
7.88
8.92
9.42
;
PROC UNIVARIATE DATA=stuff NORMAL;
TITLE4 'Using PROC UNIVARIATE for tests of univariate normality';
VAR x;
RUN;


Tests for Normality

Test --Statistic--- -----p Value------

Shapiro-Wilk W 0.935851 Pr < W 0.3331

Kolmogorov-Smirnov D 0.159493 Pr > D >0.1500

Cramer-von Mises W-Sq 0.058767 Pr > W-Sq >0.2500

Anderson-Darling A-Sq 0.362615 Pr > A-Sq >0.2500

Stem Leaf # Boxplot

9 4 1 |

8 9 1 |

7 59 2 +-----+

6 678 3 | |

5 8 1 *--+--*

4 05 2 | |

3 0 1 | |

2 55 2 +-----+

1 46 2 |

----+----+----+----+

Normal Probability Plot

9.5+ +++*

| +*+

| *+*+

| **+*+

5.5+ *++

| +**+

| ++++

| +++* * *

1.5+ * +++*

+----+----+----+----+----+----+----+----+----+----+

-2 -1 0 +1 +2


Or we could use a Q-Q plot:

- put the observed values in ascending order - call these the x(j)

- calculate the continuity-corrected cumulative probability level (j – 0.5)/n for the sample data

- find the standard normal quantiles (values of the N(0,1) distribution) that have a cumulative probability of level (j – 0.5)/n – call these the q(j), i.e., find z such that P(Z ≤ z) = (j – 0.5)/n

- plot the pairs (q(j), x(j)). If the points lie on/near a straight line, the observations support the contention that they could have been drawn from a normal parent population.
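The Q-Q construction above, applied to the same fifteen observations (Python/scipy assumed; the plotting step itself is omitted):

```python
import numpy as np
from scipy.stats import norm

x = np.sort(np.array([1.43, 1.62, 2.46, 2.48, 2.97, 4.03, 4.47, 5.76,
                      6.61, 6.68, 6.79, 7.46, 7.88, 8.92, 9.42]))
n = len(x)

j = np.arange(1, n + 1)
levels = (j - 0.5) / n   # continuity-corrected cumulative probability levels
q = norm.ppf(levels)     # standard normal quantiles q_(j)

# Pearson correlation between the pairs (q_(j), x_(j)); points near a
# straight line give a correlation near 1
r_Q = np.corrcoef(q, x)[0, 1]
```

The pairs `(q, x)` are what would be plotted; `r_Q` is the correlation used in the Looney-Gulledge test discussed in this section.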



…and the resulting Q-Q plot looks like this:

There don’t appear to be great departures from the straight line drawn through the points, but it doesn’t fit terribly well, either…


Looney & Gulledge [1985] suggest calculating the Pearson correlation coefficient rQ between the q(j) and x(j) (a test has even been developed) – the formula is the usual sample correlation of the pairs (q(j), x(j)).

Critical points for the test of normality are given in Table 4.2 (page 182) of J&W (note we reject the hypothesis of normality if rQ is less than the critical value).



Evaluation of the Pearson correlation coefficient between the q(j) and x(j) yields rQ for our sample.

The sample size is n = 15, so critical points for the test of normality are 0.9503 at α = 0.10, 0.9389 at α = 0.05, and 0.9216 at α = 0.01. Thus we do not reject the hypothesis of normality at any α larger than 0.01.


When addressing the issue of multivariate normality, these tools aid in assessment of normality for the univariate marginal distributions. However, we should also consider bivariate marginal distributions (each of which should be normal if the overall joint distribution is multivariate normal).

The methods most commonly used for assessing bivariate normality are

- scatter plots

- Chi-Square Plots


Example – suppose we had the following fifteen (ordered) sample observations on some random variables X1 and X2:

Do these data support the assertion that they were drawn from a bivariate normal parent population?


The scatter plot of the pairs (x1, x2) supports the assertion that these data were drawn from a bivariate normal distribution (and that they have little or no correlation).


To create a Chi-Square plot, we will need to calculate the squared generalized distance from the centroid for each observation xj:

d²j = (xj - x̄)'S⁻¹(xj - x̄)

For our bivariate data we compute x̄ and S⁻¹ and proceed.

…so we compute the squared generalized distances from the centroid and order the observations relative to their squared generalized distances, d²(1) ≤ d²(2) ≤ … ≤ d²(n).


We then find the corresponding percentile

qχ²,p[(j - 0.5)/n]

of the Chi-Square distribution with p degrees of freedom.

Now we create a scatter plot of the pairs

(d²(j), qχ²,p[(j - 0.5)/n])

If these points lie on a straight line, the data support the assertion that they were drawn from a bivariate normal parent population.
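A sketch of the Chi-Square plot mechanics in Python (numpy/scipy assumed). The slides' bivariate data did not survive transcription, so simulated data stand in; the steps are exactly those listed above:

```python
import numpy as np
from scipy.stats import chi2

# Illustrative bivariate data standing in for the slides' lost data table
rng = np.random.default_rng(4)
X = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=15)
n, p = X.shape

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)
Sinv = np.linalg.inv(S)

# squared generalized distances d2_j = (x_j - xbar)' S^{-1} (x_j - xbar)
d2 = np.einsum('ij,jk,ik->i', X - xbar, Sinv, X - xbar)

d2_sorted = np.sort(d2)
j = np.arange(1, n + 1)
q = chi2.ppf((j - 0.5) / n, df=p)  # chi-square_p percentiles
# plot the pairs (d2_sorted, q); near-linearity supports bivariate normality
```

A useful side check: when S uses the divisor n - 1, the distances satisfy Σj d²j = (n - 1)p exactly.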


These data don’t seem to support the assertion that they were drawn from a bivariate normal parent population…

possible outliers!


Some suggest also looking to see if roughly half the squared distances d²j are less than or equal to qχ²,p(0.50) (i.e., lie within the ellipsoid containing 50% of all potential p-dimensional observations).

For our example, 7 of our fifteen observations (about 46.67%) are less than qχ²,2(0.50) = 1.386 standardized units from the centroid (i.e., lie within the ellipsoid containing 50% of all potential p-dimensional observations).

Note that the Chi-Square plot can easily be extended to p > 2 dimensions.

Note also that some researchers calculate the correlation between the d²(j) and qχ²,p[(j - 0.5)/n]. For our example this is 0.8952.


I. Outlier Detection

Detecting outliers (extreme or unusual observations) in p ≥ 2 dimensions is very tricky. Consider the following situation: an observation can fall inside the 90% confidence interval for X1 and inside the 90% confidence interval for X2, yet still fall outside the 90% confidence ellipsoid for (X1, X2) jointly.


- Look for bivariate outliers

  • generalized square distances

  • scatter plots (perhaps a scatter plot matrix)

  • Chi-Square plots and correlation

- Look for p-dimensional outliers

  • generalized square distances

  • Chi-Square plots and correlation

Note that NO STRATEGY guarantees detection of outliers!


Here are the calculated standardized values (the zji's) and squared generalized distances (the d²j's) for our previous data:

This one looks a little unusual in p = 2 space


J. Transformations to Near Normality

Transformations to make nonnormal data approximately normal are usually suggested by

- theory

- the raw data

Some common transformations include

Original Scale → Transformed Scale

Counts y → √y

Proportions p̂ → logit(p̂) = (1/2) log[p̂/(1 - p̂)]

Correlations r → Fisher's z(r) = (1/2) log[(1 + r)/(1 - r)]


For continuous random variables, an appropriate transformation can usually be found among the family of power transformations – Box and Cox [1964] suggest an approach to finding an appropriate transformation from this family.

Box and Cox consider the slightly modified family of power transformations

x^(λ) = (x^λ - 1)/λ for λ ≠ 0, and x^(λ) = ln x for λ = 0


For observations x1,…,xn, the Box-Cox choice of the appropriate power λ for the normalizing transformation is that which maximizes

ℓ(λ) = -(n/2) ln[(1/n) Σj (xj^(λ) - x̄^(λ))²] + (λ - 1) Σj ln xj

where

x̄^(λ) = (1/n) Σj xj^(λ)


We then evaluate ℓ(λ) at many points on a short interval (say [-1, 1] or [-2, 2]), plot the pairs (λ, ℓ(λ)), and look for the maximizing value λ*.

Often a logical value of λ near λ* is chosen.
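scipy implements both ℓ(λ) (as `boxcox_llf`) and its maximizer (`boxcox`); a sketch with illustrative skewed data (not from the slides):

```python
import numpy as np
from scipy.stats import boxcox, boxcox_llf

# Illustrative positive, right-skewed observations
rng = np.random.default_rng(5)
x = rng.lognormal(mean=0.0, sigma=0.5, size=200)

# scipy finds lambda* maximizing the Box-Cox log-likelihood directly
x_transformed, lam_star = boxcox(x)

# ...or we can do the grid search over a short interval, as suggested above
grid = np.linspace(-2, 2, 401)
ll = np.array([boxcox_llf(lam, x) for lam in grid])
lam_grid = grid[np.argmax(ll)]
```

For lognormal-like data the chosen λ* lands near 0 (the log transformation), and the grid search agrees with scipy's optimizer to within the grid spacing.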


Unfortunately, x^(λ) is very volatile as λ changes (which creates some other analytic problems to overcome). Thus we consider another transformation to avoid this additional problem:

zj(λ) = xj^(λ) / ġ^(λ - 1)

where ġ is the geometric mean of the responses and is frequently calculated as the antilog of (1/n) Σj ln xj, and ġ^(n(λ - 1)) is the nth power of the appropriate Jacobian of the transformation (which converts the responses xj into the zj(λ)'s).

From this point forward we proceed substituting the zj(λ)'s for the xj^(λ)'s in the previous analysis.

The λ that results in minimum variance of this transformed variable also maximizes our previous criterion ℓ(λ).


Note that:

- the value of λ generated by the Box-Cox transformation is only optimal in a mathematical sense – use something close that has some meaning

- an approximate confidence interval for λ can be found

- other means for estimating λ exist

- if we are dealing with a response variable, transformations are often used to ‘stabilize’ the variance

- for a p-dimensional sample, transformations are considered independently for each of the p variables

- while the Box-Cox methodology may help convert each marginal distribution to near normality, it does not guarantee the resulting transformed set of p variables will have a multivariate normal distribution.

