
Hotelling’s T² test




Presentation Transcript


  1. Hotelling’s T² test: a graphical explanation

  2. Hotelling’s T² statistic for the two-sample problem

  3. T² is the test statistic for testing H0: μ1 = μ2 against HA: μ1 ≠ μ2 (equality of the two population mean vectors).
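The statistic itself appeared only as an image in the original slides. The standard two-sample Hotelling's T² it refers to is (a reconstruction, with Sp the pooled sample covariance matrix):

```latex
T^2 = \frac{n_1 n_2}{n_1 + n_2}\,
      (\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)'\, S_p^{-1}\,
      (\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2),
\qquad
S_p = \frac{(n_1 - 1)S_1 + (n_2 - 1)S_2}{n_1 + n_2 - 2}
```

Under H0, the scaled statistic ((n1 + n2 - p - 1) / ((n1 + n2 - 2) p)) T² follows an F distribution with p and n1 + n2 - p - 1 degrees of freedom.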

  4. Hotelling’s T² test [Figure: Popn A and Popn B plotted on axes X1 and X2]

  5. Univariate test for X1 [Figure: Popn A and Popn B on axes X1 and X2, separation projected onto the X1 axis]

  6. Univariate test for X2 [Figure: Popn A and Popn B on axes X1 and X2, separation projected onto the X2 axis]

  7. Univariate test for a1X1 + a2X2 [Figure: Popn A and Popn B on axes X1 and X2, separation projected onto the direction a1X1 + a2X2]
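The graphical point of slides 4 through 7 is that T² implicitly searches over all linear combinations a1X1 + a2X2 for the direction that best separates the two samples. A small numerical sketch (the simulated populations and the candidate directions below are my own assumptions, not the slide's data) shows that no single direction's squared t statistic exceeds T²:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two simulated bivariate samples standing in for Popn A and Popn B
A = rng.multivariate_normal([0, 0], [[1, .7], [.7, 1]], size=50)
B = rng.multivariate_normal([1, 1], [[1, .7], [.7, 1]], size=60)

def t2_stat(xa, xb):
    # squared pooled two-sample t statistic for 1-D projected data
    na, nb = len(xa), len(xb)
    sp2 = ((na - 1) * xa.var(ddof=1) + (nb - 1) * xb.var(ddof=1)) / (na + nb - 2)
    return (xa.mean() - xb.mean()) ** 2 / (sp2 * (1 / na + 1 / nb))

def hotelling_t2(A, B):
    na, nb = len(A), len(B)
    d = A.mean(0) - B.mean(0)
    Sp = ((na - 1) * np.cov(A.T) + (nb - 1) * np.cov(B.T)) / (na + nb - 2)
    return (na * nb / (na + nb)) * d @ np.linalg.solve(Sp, d)

T2 = hotelling_t2(A, B)
# t² for each trial direction a; the maximum over ALL a equals T²,
# so any finite set of directions stays below it
best = max(t2_stat(A @ a, B @ a)
           for a in [np.array([1., 0.]), np.array([0., 1.]),
                     np.array([1., 1.]), np.array([1., -1.])])
assert best <= T2 + 1e-9
```

This is the union-intersection view of T²: the multivariate test is the best univariate test over every projection direction.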

  8. Mahalanobis distance: a graphical explanation

  9. Euclidean distance: d(x, y) = √((x - y)'(x - y))

  10. Mahalanobis distance: d(x, y) = √((x - y)'S⁻¹(x - y)), where S is a covariance matrix

  11. Hotelling’s T² statistic for the two-sample problem: T² equals n1n2/(n1 + n2) times the squared Mahalanobis distance between the two sample mean vectors, computed with the pooled covariance matrix.

  12. Case I [Figure: Popn A and Popn B plotted on axes X1 and X2]

  13. Case II [Figure: Popn A and Popn B plotted on axes X1 and X2]

  14. In Case I the Mahalanobis distance between the mean vectors is larger than in Case II, even though the Euclidean distance is smaller: in Case I there is more separation between the two bivariate normal distributions.
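The Case I / Case II contrast can be mimicked numerically. The covariance matrix and mean differences below are illustrative choices of mine, not the slide's actual figures; the point is that a smaller Euclidean gap can be a larger Mahalanobis gap when it runs across the narrow direction of the distribution:

```python
import numpy as np

def euclid(d):
    return np.sqrt(d @ d)

def mahal(d, S):
    # Mahalanobis distance for a difference vector d and covariance S
    return np.sqrt(d @ np.linalg.solve(S, d))

S = np.array([[1.0, 0.9],
              [0.9, 1.0]])          # strong positive correlation
d1 = np.array([0.5, -0.5])          # Case I: small gap across the narrow axis
d2 = np.array([1.0, 1.0])           # Case II: larger gap along the long axis

assert euclid(d1) < euclid(d2)      # Case I is closer in Euclidean terms
assert mahal(d1, S) > mahal(d2, S)  # but farther in Mahalanobis terms
```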

  15. Discrimination and Classification

  16. Discrimination. Situation: we have two or more populations π1, π2, etc. (possibly p-variate normal). The populations are known (or we have data from each population). We have data for a new case (population unknown) and we want to identify the population to which the new case belongs.

  17. Examples

  18. The Basic Problem. Suppose that the data from a new case x1, … , xp has joint density function either π1: f(x1, … , xp) or π2: g(x1, … , xp). We want to make the decision D1: classify the case in π1 (f is the correct distribution), or D2: classify the case in π2 (g is the correct distribution).

  19. The Two Types of Errors. • Misclassifying the case in π1 when it actually lies in π2. Let P[1|2] = P[D1|π2] = the probability of this type of error. • Misclassifying the case in π2 when it actually lies in π1. Let P[2|1] = P[D2|π1] = the probability of this type of error. This is similar to Type I and Type II errors in hypothesis testing.

  20. Note: a discrimination scheme is defined by splitting p-dimensional space into two regions. • C1 = the region where we make the decision D1 (the decision to classify the case in π1). • C2 = the region where we make the decision D2 (the decision to classify the case in π2).

  21. There are several approaches to determining the regions C1 and C2, all concerned with taking into account the probabilities of misclassification P[2|1] and P[1|2]. • Set up the regions C1 and C2 so that one of the probabilities of misclassification, P[2|1] say, is at some low acceptable value α, and accept the resulting level of the other probability of misclassification, P[1|2] = β.

  22. • Set up the regions C1 and C2 so that the total probability of misclassification, P[Misclassification] = P[1] P[2|1] + P[2] P[1|2], is minimized, where P[1] = P[the case belongs to π1] and P[2] = P[the case belongs to π2].

  23. • Set up the regions C1 and C2 so that the total expected cost of misclassification, E[Cost of Misclassification] = c2|1 P[1] P[2|1] + c1|2 P[2] P[1|2], is minimized, where P[1] = P[the case belongs to π1], P[2] = P[the case belongs to π2], c2|1 = the cost of misclassifying the case in π2 when the case belongs to π1, and c1|2 = the cost of misclassifying the case in π1 when the case belongs to π2.
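The regions that minimize the expected cost of misclassification are given by the standard result (not rendered in this transcript, but implied by the criterion above):

```latex
C_1:\ \frac{f(x_1,\dots,x_p)}{g(x_1,\dots,x_p)}
      \ \ge\ \frac{c_{1|2}}{c_{2|1}}\cdot\frac{P[2]}{P[1]},
\qquad
C_2:\ \text{otherwise}
```

Setting c1|2 = c2|1 recovers the minimum-total-probability rule of slide 22, and additionally setting P[1] = P[2] reduces the cutoff to 1.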

  24. • Set up the regions C1 and C2 so that the two types of error are equal: P[2|1] = P[1|2].

  25. Computer security example: π1 = valid users, π2 = imposters. P[2|1] = P[identifying a valid user as an imposter], P[1|2] = P[identifying an imposter as a valid user], P[1] = P[valid user], P[2] = P[imposter], c2|1 = the cost of identifying the user as an imposter when the user is a valid user, and c1|2 = the cost of identifying the user as a valid user when the user is an imposter.

  26. This problem can be viewed as a hypothesis testing problem: H0: π1 is the correct population, HA: π2 is the correct population, with P[2|1] = α, P[1|2] = β, and Power = 1 - β.

  27. The Neyman-Pearson Lemma. Suppose that the data x1, … , xn has joint density function f(x1, … , xn; θ), where θ is either θ1 or θ2. Let g(x1, … , xn) = f(x1, … , xn; θ1) and h(x1, … , xn) = f(x1, … , xn; θ2). We want to test H0: θ = θ1 (g is the correct distribution) against HA: θ = θ2 (h is the correct distribution).

  28. The Neyman-Pearson Lemma states that the Uniformly Most Powerful (UMP) test of size α is to reject H0 if h(x1, … , xn)/g(x1, … , xn) ≥ kα, and accept H0 otherwise, where kα is chosen so that the test is of size α.

  29. Proof: Let C be the critical region of any test of size α, and let C* = {(x1, … , xn) : h(x1, … , xn) ≥ kα g(x1, … , xn)} be the critical region of the likelihood-ratio test. We want to show that ∫_{C*} h ≥ ∫_C h. Note that both tests have size α: ∫_{C*} g = ∫_C g = α.

  30. Inside C* we have h ≥ kα g, and outside C* we have h < kα g. Hence ∫_{C*\C} h ≥ kα ∫_{C*\C} g and ∫_{C\C*} h ≤ kα ∫_{C\C*} g.

  31. Since both tests have size α, ∫_{C*\C} g = α - ∫_{C*∩C} g = ∫_{C\C*} g.

  32. Thus ∫_{C*\C} h ≥ kα ∫_{C*\C} g = kα ∫_{C\C*} g ≥ ∫_{C\C*} h, and when we add the common quantity ∫_{C*∩C} h to both sides we get ∫_{C*} h ≥ ∫_C h. Q.E.D.
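As an illustrative numerical check (not from the slides; the two normal populations and the competing test below are my own choices), the likelihood-ratio test of N(0,1) against N(1,1) can be compared with another test of the same size. Here h(x)/g(x) = exp(x - 1/2) is increasing in x, so "reject when LR ≥ k" is "reject when x ≥ c":

```python
import math

def phi_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

alpha = 0.05
c = 1.6448536269514722            # z_{0.95}: NP test rejects when x >= c
size_np  = 1 - phi_cdf(c)         # equals alpha by construction
power_np = 1 - phi_cdf(c - 1)     # power against N(1,1)

# a competing size-alpha test: reject when |x| >= z_{0.975}
c2 = 1.9599639845400545
size_alt  = 2 * (1 - phi_cdf(c2))
power_alt = (1 - phi_cdf(c2 - 1)) + phi_cdf(-c2 - 1)

assert abs(size_np - alpha) < 1e-6
assert abs(size_alt - alpha) < 1e-6
assert power_np > power_alt       # the likelihood-ratio test wins
```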

  33. Fisher’s Linear Discriminant Function. Suppose that x1, … , xp is data from a p-variate normal distribution with mean vector either μ1 (population π1) or μ2 (population π2). The covariance matrix Σ is the same for both populations π1 and π2.

  34. The Neyman-Pearson Lemma states that we should classify into populations π1 and π2 using the likelihood ratio λ = f1(x1, … , xp)/f2(x1, … , xp), where f1 and f2 are the two multivariate normal densities. That is, make the decision D1: the population is π1 if λ ≥ kα.

  35. Taking logarithms, λ ≥ kα becomes (μ1 - μ2)'Σ⁻¹x - (1/2)(μ1 - μ2)'Σ⁻¹(μ1 + μ2) ≥ ln kα. Finally we make the decision D1: the population is π1 if ℓ(x) = a'x ≥ c, where a = Σ⁻¹(μ1 - μ2) and c = ln kα + (1/2)(μ1 - μ2)'Σ⁻¹(μ1 + μ2).

  36. The function ℓ(x) = (μ1 - μ2)'Σ⁻¹x is called Fisher’s linear discriminant function.

  37. In the case where the population parameters are unknown but estimated from data, Fisher’s linear discriminant function uses the sample estimates: ℓ(x) = (x̄1 - x̄2)'Sp⁻¹x, where x̄1 and x̄2 are the sample mean vectors and Sp is the pooled sample covariance matrix.
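A minimal sketch of the sample version (the training data are simulated, and the equal-priors, equal-costs cutoff ln k = 0 is an assumption):

```python
import numpy as np

def fisher_ldf(A, B):
    """Fit Fisher's linear discriminant from two training samples.

    Returns (a, c): classify x to population 1 when a @ x >= c,
    assuming equal priors and equal misclassification costs (ln k = 0).
    """
    na, nb = len(A), len(B)
    m1, m2 = A.mean(0), B.mean(0)
    Sp = ((na - 1) * np.cov(A.T) + (nb - 1) * np.cov(B.T)) / (na + nb - 2)
    a = np.linalg.solve(Sp, m1 - m2)   # Sp^{-1}(xbar1 - xbar2)
    c = 0.5 * a @ (m1 + m2)            # midpoint cutoff
    return a, c

rng = np.random.default_rng(1)
S = [[1, .5], [.5, 1]]
A = rng.multivariate_normal([2, 2], S, size=100)   # simulated π1 sample
B = rng.multivariate_normal([0, 0], S, size=100)   # simulated π2 sample
a, c = fisher_ldf(A, B)

# training points should mostly be classified back to their own population
acc = ((A @ a >= c).mean() + (B @ a < c).mean()) / 2
assert acc > 0.75
```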

  38. Example 2. Annual financial data are collected for firms approximately 2 years prior to bankruptcy and for financially sound firms at about the same point in time. The data on the four variables • x1 = CF/TD = (cash flow)/(total debt), • x2 = NI/TA = (net income)/(total assets), • x3 = CA/CL = (current assets)/(current liabilities), and • x4 = CA/NS = (current assets)/(net sales) are given in the following table.

  39. [The data table is not reproduced in this transcript.]

  40. Examples using SPSS
