
Discrimination and Classification


Presentation Transcript


  1. Discrimination and Classification

  2. Discrimination Situation: We have two or more populations π1, π2, etc. (possibly p-variate normal). The populations are known (or we have data from each population). We have data for a new case (population unknown) and we want to identify the population to which the new case belongs.

  3. The Basic Problem Suppose that the data from a new case x1, … , xp has joint density function either π1: g(x1, … , xp) or π2: h(x1, … , xp). We want to make the decision D1: classify the case in π1 (g is the correct distribution) or D2: classify the case in π2 (h is the correct distribution).

  4. The Two Types of Errors • Misclassifying the case in π1 when it actually lies in π2. Let P[1|2] = P[D1|π2] = probability of this type of error. • Misclassifying the case in π2 when it actually lies in π1. Let P[2|1] = P[D2|π1] = probability of this type of error. This is similar to Type I and Type II errors in hypothesis testing.

  5. Note: A discrimination scheme is defined by splitting p-dimensional space into two regions. • C1 = the region where we make the decision D1 (the decision to classify the case in π1). • C2 = the region where we make the decision D2 (the decision to classify the case in π2).

  6. There can be several approaches to determining the regions C1 and C2, all concerned with taking into account the probabilities of misclassification P[2|1] and P[1|2]. • Set up the regions C1 and C2 so that one of the probabilities of misclassification, P[2|1] say, is at some low acceptable value α, and accept the resulting level of the other probability of misclassification, P[1|2] = β.

  7. • Set up the regions C1 and C2 so that the total probability of misclassification P[Misclassification] = P[1] P[2|1] + P[2] P[1|2] is minimized, where P[1] = P[the case belongs to π1] and P[2] = P[the case belongs to π2].

  8. • Set up the regions C1 and C2 so that the total expected cost of misclassification E[Cost of Misclassification] = ECM = c2|1 P[1] P[2|1] + c1|2 P[2] P[1|2] is minimized, where P[1] = P[the case belongs to π1], P[2] = P[the case belongs to π2], c2|1 = the cost of misclassifying the case in π2 when the case belongs to π1, and c1|2 = the cost of misclassifying the case in π1 when the case belongs to π2.
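As a quick numeric illustration of this formula (the priors, costs and misclassification probabilities below are hypothetical values chosen only for illustration, not taken from the slides), the ECM is a single weighted sum:

```python
# Hypothetical priors, costs and misclassification probabilities (illustration only).
P1, P2 = 0.7, 0.3                    # P[1], P[2]: prior probabilities of pi_1 and pi_2
c21, c12 = 5.0, 1.0                  # c2|1, c1|2: costs of the two kinds of misclassification
P2_given_1, P1_given_2 = 0.10, 0.20  # P[2|1], P[1|2]: misclassification probabilities

# ECM = c2|1 P[1] P[2|1] + c1|2 P[2] P[1|2]
ECM = c21 * P1 * P2_given_1 + c12 * P2 * P1_given_2
print(ECM)  # 5.0*0.7*0.10 + 1.0*0.3*0.20 = 0.35 + 0.06 = 0.41
```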

  9. The Optimal Classification Rule Suppose that the data x1, … , xp has joint density function f(x1, … , xp; θ), where θ is either θ1 or θ2. Let g(x1, … , xp) = f(x1, … , xp; θ1) and h(x1, … , xp) = f(x1, … , xp; θ2). We want to make the decision D1: θ = θ1 (g is the correct distribution) against D2: θ = θ2 (h is the correct distribution).

  10. then the optimal regions (minimizing ECM, the expected cost of misclassification) for making the decisions D1 and D2 respectively are C1 = { (x1, … , xp) : g(x1, … , xp) / h(x1, … , xp) ≥ (c1|2 P[2]) / (c2|1 P[1]) } and C2 = { (x1, … , xp) : g(x1, … , xp) / h(x1, … , xp) < (c1|2 P[2]) / (c2|1 P[1]) }.

  11. ECM = E[Cost of Misclassification] = c2|1 P[1] P[2|1] + c1|2 P[2] P[1|2] Proof: Writing x = (x1, … , xp), we have P[2|1] = ∫C2 g(x) dx and P[1|2] = ∫C1 h(x) dx, so ECM = c2|1 P[1] ∫C2 g(x) dx + c1|2 P[2] ∫C1 h(x) dx. Since C2 is the complement of C1, ∫C2 g(x) dx = 1 − ∫C1 g(x) dx.

  12. Therefore ECM = c2|1 P[1] + ∫C1 [ c1|2 P[2] h(x) − c2|1 P[1] g(x) ] dx. Thus ECM is minimized if C1 contains all of the points (x1, … , xp) such that the integrand c1|2 P[2] h(x) − c2|1 P[1] g(x) is negative, that is, all points where c2|1 P[1] g(x) > c1|2 P[2] h(x), or equivalently g(x) / h(x) > (c1|2 P[2]) / (c2|1 P[1]).
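A minimal sketch of the resulting rule, assuming two known univariate normal densities and hypothetical priors and costs (none of the numbers below come from the slides): the case is classified into π1 exactly when g(x)/h(x) is at least (c1|2 P[2]) / (c2|1 P[1]).

```python
import math

def normal_pdf(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical setup: g is the N(0, 1) density (pi_1), h is the N(2, 1) density (pi_2).
g = lambda x: normal_pdf(x, 0.0, 1.0)
h = lambda x: normal_pdf(x, 2.0, 1.0)
P1, P2 = 0.5, 0.5      # priors P[1], P[2]
c21, c12 = 1.0, 1.0    # costs c2|1, c1|2

def classify(x):
    """ECM-optimal rule: decide D1 (classify in pi_1) iff g(x)/h(x) >= (c1|2 P[2]) / (c2|1 P[1])."""
    k = (c12 * P2) / (c21 * P1)
    return "pi_1" if g(x) / h(x) >= k else "pi_2"

print(classify(0.3))  # pi_1 -- the likelihood ratio exceeds k
print(classify(1.7))  # pi_2
```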

  13. Fisher's Linear Discriminant Function. Suppose that x1, … , xp is data either from a p-variate Normal distribution with mean vector μ1 (population π1) or from a p-variate Normal distribution with mean vector μ2 (population π2). The covariance matrix Σ is the same for both populations π1 and π2.

  14. The Neyman-Pearson Lemma states that we should classify into populations π1 and π2 using the likelihood ratio λ = g(x1, … , xp) / h(x1, … , xp). That is, make the decision D1: population is π1 if λ > k.

  15. For p-variate normal densities with common covariance matrix Σ this gives λ = exp{ −½ (x − μ1)′ Σ⁻¹ (x − μ1) + ½ (x − μ2)′ Σ⁻¹ (x − μ2) }, so λ > k is equivalent to ln λ > ln k, or (μ1 − μ2)′ Σ⁻¹ x − ½ (μ1 − μ2)′ Σ⁻¹ (μ1 + μ2) > ln k.

  16. Finally we make the decision D1: population is π1 if a′x > b, where a = Σ⁻¹(μ1 − μ2), b = ½ (μ1 − μ2)′ Σ⁻¹ (μ1 + μ2) + ln k, and k = (c1|2 P[2]) / (c2|1 P[1]). Note: k = 1 and ln k = 0 if c1|2 = c2|1 and P[1] = P[2].
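A small numeric sketch of this rule, assuming known (and entirely hypothetical) bivariate population parameters; it simply evaluates a, b and the inequality a′x > b for a new case.

```python
import numpy as np

# Hypothetical population parameters (illustration only).
mu1 = np.array([3.0, 2.0])               # mean vector of pi_1
mu2 = np.array([1.0, 1.0])               # mean vector of pi_2
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])           # common covariance matrix
P1, P2 = 0.5, 0.5                        # priors P[1], P[2]
c21, c12 = 1.0, 1.0                      # costs c2|1, c1|2

Sigma_inv = np.linalg.inv(Sigma)
a = Sigma_inv @ (mu1 - mu2)                                  # a = Sigma^{-1}(mu1 - mu2)
k = (c12 * P2) / (c21 * P1)
b = 0.5 * (mu1 - mu2) @ Sigma_inv @ (mu1 + mu2) + np.log(k)  # cutoff b

x_new = np.array([2.5, 1.2])             # a hypothetical new case
print("pi_1" if a @ x_new > b else "pi_2")
```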

  17. The function ℓ(x) = a′x = (μ1 − μ2)′ Σ⁻¹ x is called Fisher's linear discriminant function.

  18. In the case where the population parameters μ1, μ2 and Σ are unknown but estimated from data, Fisher's linear discriminant function becomes ℓ̂(x) = (x̄1 − x̄2)′ Spooled⁻¹ x, where x̄1 and x̄2 are the sample mean vectors and Spooled is the pooled sample covariance matrix.
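A minimal sketch of the estimated version, using two small hypothetical training samples (invented purely for illustration, not the bankruptcy data of the next slide): the mean vectors are replaced by sample means and Σ by the pooled sample covariance matrix.

```python
import numpy as np

# Hypothetical training samples (rows = cases, columns = the p measured variables).
X1 = np.array([[2.9, 1.9], [3.2, 2.4], [3.1, 1.8], [2.8, 2.2]])  # cases known to be from pi_1
X2 = np.array([[1.1, 0.9], [0.8, 1.2], [1.3, 1.1], [0.9, 0.8]])  # cases known to be from pi_2

xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)   # sample mean vectors
n1, n2 = len(X1), len(X2)
S1, S2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)
S_pooled = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)   # pooled covariance matrix

a_hat = np.linalg.solve(S_pooled, xbar1 - xbar2)  # estimated coefficient vector

def ell_hat(x):
    """Estimated Fisher linear discriminant function (xbar1 - xbar2)' S_pooled^{-1} x."""
    return a_hat @ x

# With equal costs and priors, the cutoff is the discriminant evaluated at the midpoint.
cutoff = 0.5 * a_hat @ (xbar1 + xbar2)
x_new = np.array([2.0, 1.5])                      # hypothetical new case
print("pi_1" if ell_hat(x_new) >= cutoff else "pi_2")
```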

  19. Example 2 Annual financial data are collected for firms approximately 2 years prior to bankruptcy and for financially sound firms at about the same point in time. The data on the four variables • x1 = CF/TD = (cash flow)/(total debt), • x2 = NI/TA = (net income)/(total assets), • x3 = CA/CL = (current assets)/(current liabilities), and • x4 = CA/NS = (current assets)/(net sales) are given in the following table.

  20. The data are given in the following table:

  21. Examples using SPSS

  22. Classification or Cluster Analysis: Have data from one or several populations

  23. Situation • Have multivariate (or univariate) data from one or several populations (the number of populations is unknown) • Want to determine the number of populations and identify the populations

  24. Example

  25. Hierarchical Clustering Methods The following are the steps in the agglomerative hierarchical clustering algorithm for grouping N objects (items or variables). • Start with N clusters, each consisting of a single entity, and an N × N symmetric matrix (table) of distances (or similarities) D = (dij). • Search the distance matrix for the nearest (most similar) pair of clusters. Let the distance between the "most similar" clusters U and V be dUV. • Merge clusters U and V. Label the newly formed cluster (UV). Update the entries in the distance matrix by • deleting the rows and columns corresponding to clusters U and V and • adding a row and column giving the distances between cluster (UV) and the remaining clusters.

  26. Repeat steps 2 and 3 a total of N − 1 times. (All objects will be in a single cluster at the termination of this algorithm.) Record the identity of the clusters that are merged and the levels (distances or similarities) at which the mergers take place.
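A minimal Python sketch of these four steps (the function name and return format are my own; the rule for the distance between two clusters, discussed on the next slide, is passed in as a function, with min giving single linkage):

```python
def hier_cluster(D, cluster_dist=min):
    """Agglomerative hierarchical clustering of N objects.

    D is an N x N symmetric distance matrix (list of lists).  cluster_dist combines
    the object-to-object distances between two clusters: min gives single linkage,
    max complete linkage, statistics.mean average linkage.  Returns the mergers as
    (cluster_U, cluster_V, level), in the order in which they occur.
    """
    # Step 1: start with N singleton clusters and the table of pairwise distances.
    clusters = [(i,) for i in range(len(D))]
    dist = {frozenset((clusters[i], clusters[j])): D[i][j]
            for i in range(len(D)) for j in range(i + 1, len(D))}
    mergers = []
    # Step 4: repeat steps 2 and 3 a total of N - 1 times.
    while len(clusters) > 1:
        # Step 2: find the nearest (most similar) pair of clusters U and V.
        pair, level = min(dist.items(), key=lambda kv: kv[1])
        U, V = sorted(pair)
        # Step 3: merge U and V into (UV); delete their rows/columns and add a
        # row/column of distances between (UV) and each remaining cluster.
        UV = tuple(sorted(U + V))
        clusters = [c for c in clusters if c not in (U, V)]
        dist = {key: d for key, d in dist.items() if U not in key and V not in key}
        for C in clusters:
            dist[frozenset((UV, C))] = cluster_dist(D[i][j] for i in UV for j in C)
        clusters.append(UV)
        mergers.append((U, V, level))
    return mergers

# Tiny check with three objects (single linkage by default).
print(hier_cluster([[0, 2, 6], [2, 0, 5], [6, 5, 0]]))
# [((0,), (1,), 2), ((0, 1), (2,), 5)]
```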

  27. Different methods of computing inter-cluster distance

  28. Example To illustrate the single linkage algorithm, we consider the hypothetical distance matrix between pairs of five objects given below:

         1    2    3    4    5
    1    0
    2    9    0
    3    3    7    0
    4    6    5    9    0
    5   11   10    2    8    0

  29. Treating each object as a cluster, the clustering begins by merging the two closest items (3 & 5), which are at distance 2. To implement the next level of clustering we need to compute the distances between cluster (35) and the remaining objects: d(35)1 = min{3, 11} = 3, d(35)2 = min{7, 10} = 7, d(35)4 = min{9, 8} = 8.

  30. The new distance matrix becomes:

          (35)   1    2    4
    (35)    0
    1       3    0
    2       7    9    0
    4       8    6    5    0

The next two closest clusters ((35) & 1) are merged to form cluster (135).

  31. Distances between this cluster and the remaining clusters become: d(135)2 = min{7, 9} = 7 and d(135)4 = min{8, 6} = 6. The distance matrix now becomes:

           (135)   2    4
    (135)    0
    2        7     0
    4        6     5    0

Continuing, the next two closest clusters (2 & 4) are merged to form cluster (24).

  32. Distances between this cluster and the remaining clusters become: d(135)(24) = min{d(135)2, d(135)4} = min{7, 6} = 6. The final distance matrix now becomes:

           (135)  (24)
    (135)    0
    (24)     6     0

At the final step, clusters (135) and (24) are merged at distance 6 to form the single cluster (12345) of all five items.
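As a cross-check of this worked example (a sketch assuming SciPy is available, and using the 5 × 5 distance matrix as reconstructed above), single linkage in scipy reproduces the same mergers at levels 2, 3, 5 and 6, and can also draw the dendrogram discussed on the next slide:

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

# The 5-object distance matrix of the example (objects 1..5 on the slides, 0..4 here).
D = np.array([[0,  9,  3,  6, 11],
              [9,  0,  7,  5, 10],
              [3,  7,  0,  9,  2],
              [6,  5,  9,  0,  8],
              [11, 10,  2,  8,  0]], dtype=float)

Z = linkage(squareform(D), method="single")   # condensed distances -> single linkage
print(Z[:, 2])                                # merge levels: [2. 3. 5. 6.]

# Optional: plot the dendrogram (requires matplotlib).
# import matplotlib.pyplot as plt
# dendrogram(Z, labels=[1, 2, 3, 4, 5]); plt.show()
```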

  33. The results of this algorithm can be summarized graphically in the following "dendrogram":

  34. Dendrograms for clustering the 11 languages on the basis of the ten numerals

  35. Dendrogram: Cluster Analysis of N = 22 Utility Companies, Euclidean Distance, Average Linkage

  36. Dendrogram: Cluster Analysis of N = 22 Utility Companies, Euclidean Distance, Single Linkage
