
Clustering and Testing in High-Dimensional Data


Presentation Transcript


  1. Clustering and Testing in High-Dimensional Data. M. Radavičius, G. Jakimauskas, J. Sušinskas (Institute of Mathematics and Informatics, Vilnius, Lithuania)

  2. The problem. Let X = XN be a sample of size N supposed to satisfy a d-dimensional Gaussian mixture model (d is supposed to be large). Because of the large dimension it is natural to project the sample onto k-dimensional (k = 1, 2,…) linear subspaces using the projection pursuit method (Huber (1985), Friedman (1987)), which gives the best selection of these subspaces. If the distribution of the standardized sample on the complement space is standard Gaussian, this linear subspace H is called the discriminant subspace. E.g., if we have q Gaussian mixture components with equal covariance matrices, then the dimension of the discriminant subspace is q – 1. Having an estimate of the discriminant subspace, we can perform much easier classification using the projected sample.
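
For instance, with q = 3 components and equal covariance matrices the discriminant subspace is spanned by the q – 1 = 2 differences of the component means. A minimal NumPy sketch (borrowing, for concreteness, the mixture means of Fig. 2 below; the equal mixing weights and identity covariances are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: d = 10, q = 3 components with equal (identity)
# covariance matrices and the means used in Fig. 2 of the talk.
d, q, N = 10, 3, 1000
means = np.zeros((q, d))
means[0, :2] = (-4, -3)
means[1, :2] = (0, 6)
means[2, :2] = (4, -3)
labels = rng.integers(0, q, size=N)            # equal weights (assumption)
X = means[labels] + rng.standard_normal((N, d))

# With equal covariances the discriminant subspace is spanned by the
# q - 1 = 2 differences of the component means.
B = (means[1:] - means[0]).T        # d x (q - 1) matrix of mean differences
H, _ = np.linalg.qr(B)              # orthonormal basis of the subspace
X_proj = X @ H                      # projected sample, shape (N, q - 1)
print(X_proj.shape)                 # (1000, 2)
```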

  3. The sequential procedure applied to the standardized sample is the following (k = 1, 2,…, until the hypothesis of a discriminant subspace holds for some k): • Find the best k-dimensional linear subspace using the projection pursuit method (Rudzkis and Radavičius (1999)). • Fit a Gaussian mixture model to the sample projected onto the k-dimensional linear subspace (Rudzkis and Radavičius (1995)). • Test the goodness-of-fit of the estimated d-dimensional model, assuming that the distribution on the complement space is standard Gaussian. If the test fails, increase k and go to step 1. The problem in step 1 is to find the basis vectors in the high-dimensional space (we do not cover this problem here). The problem in step 3 (in the common approach) is comparing a nonparametric density estimate with a parametric one in a high-dimensional space.
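
Schematically, the loop might look as follows. PCA stands in for the projection pursuit step and a coordinate-wise Kolmogorov-Smirnov test on the orthogonal complement for the goodness-of-fit step; both, as well as the choice of 3 mixture components, are crude placeholders for the methods cited above, not the authors' algorithms:

```python
import numpy as np
from scipy import stats
from scipy.linalg import null_space
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def sequential_subspace_search(X, k_max=5, alpha=0.05):
    """Sketch of the sequential procedure; PCA and the KS test are
    placeholders for the projection pursuit and goodness-of-fit
    methods cited in the talk."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize the sample
    for k in range(1, k_max + 1):
        # Step 1 (placeholder): "best" k-dimensional linear subspace.
        H = PCA(n_components=k).fit(X).components_.T    # d x k orthonormal basis
        # Step 2: Gaussian mixture fitted to the projected sample.
        model = GaussianMixture(n_components=3).fit(X @ H)
        # Step 3 (placeholder): test whether the distribution on the
        # orthogonal complement of H looks standard Gaussian.
        Z = X @ null_space(H.T)                         # complement coordinates
        pvals = [stats.kstest(Z[:, j], "norm").pvalue
                 for j in range(Z.shape[1])]
        if min(pvals) >= alpha / Z.shape[1]:            # Bonferroni-corrected
            return k, H, model                          # hypothesis accepted
    return None                                         # no k <= k_max accepted
```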

  4. We present a simple, data-driven and computationally efficient procedure for testing goodness-of-fit. The procedure is based on the well-known interpretation of goodness-of-fit testing as a classification problem, a special sequential data partition procedure, randomization and resampling, and elements of sequential testing. Monte-Carlo simulations are used to assess the performance of the procedure. The procedure can also be applied to testing independence of components in high-dimensional data. We present some preliminary computer simulation results.

  5. Introduction. Consider the general classification problem of estimating the a posteriori probabilities of the mixture components from the sample.

  6. Under these assumptions the a posteriori probabilities are given by the Bayes formula. Usually the EM algorithm is used to estimate them. The EM algorithm is an iterative procedure alternating expectation and maximization steps; it converges to some local maximum of the likelihood function, which usually is not equal to the global maximum.
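
For reference, a compact version of the textbook EM iteration for a q-component Gaussian mixture is sketched below (standard updates, not necessarily the exact variant of the cited papers): the E step computes the a posteriori probabilities, the M step re-estimates the parameters.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gaussian_mixture(X, q, n_iter=100, seed=0):
    """Standard EM for a q-component Gaussian mixture; returns the
    estimated a posteriori probabilities and the fitted parameters."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    p = np.full(q, 1.0 / q)                        # mixing weights
    mu = X[rng.choice(N, size=q, replace=False)]   # initial means
    cov = np.tile(np.cov(X.T), (q, 1, 1))          # initial covariances
    for _ in range(n_iter):
        # E step: a posteriori probabilities of the components given the data.
        dens = np.column_stack(
            [p[j] * multivariate_normal.pdf(X, mu[j], cov[j]) for j in range(q)])
        post = dens / dens.sum(axis=1, keepdims=True)
        # M step: maximize the expected complete-data log-likelihood.
        nj = post.sum(axis=0)
        p = nj / N
        mu = (post.T @ X) / nj[:, None]
        for j in range(q):
            R = X - mu[j]
            cov[j] = (post[:, j, None] * R).T @ R / nj[j]
    return post, p, mu, cov
```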

  7. Suppose that for some subspace H the a posteriori probabilities depend on an observation only through its projection onto this subspace,

  8. and that the subspace H has the maximal dimension among subspaces with this property; then H is called the discriminant subspace. We do not lose information on the a posteriori probabilities when we project the sample onto the discriminant subspace. An estimate of the discriminant subspace can be obtained using the projection pursuit procedure (see, e.g., J. H. Friedman (1987), S. A. Aivazyan (1996), R. Rudzkis, M. Radavičius (1998)).

  9. Test statistics. Let X = {X(1), X(2),…, X(N)} be a sample of size N of i.i.d. random vectors with a common distribution function F on Rd. Let two disjoint classes of d-dimensional distributions be given, and consider the nonparametric hypothesis testing problem of which class F belongs to. Consider a mixture model

  10. of two populations WH and W with distribution functions FH and F, respectively. Fix p and let Y = Y(p) denote a random vector with the mixture distribution F(p). Let Z = Z(p) be the posterior probability of the population W given Y, i.e. Z = p f(Y) / (p f(Y) + (1 – p) fH(Y)). Here f and fH denote the distribution densities of F and FH, respectively. Let us introduce the loss function l(F, FH) = E(Z – p)2.
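
Note that under the hypothesis f = fH the posterior Z is identically p, so the loss vanishes. A one-dimensional Monte-Carlo sketch of the loss (the densities, samplers and the choice F = N(1, 1) against FH = N(0, 1) are our own illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm

def loss(f, f_H, sample_F, sample_FH, p, n=100_000, seed=0):
    """Monte-Carlo estimate of l(F, F_H) = E (Z - p)^2 with
    Z = p f(Y) / (p f(Y) + (1 - p) f_H(Y)) and Y ~ p F + (1 - p) F_H."""
    rng = np.random.default_rng(seed)
    from_F = rng.random(n) < p                      # mixture component labels
    y = np.where(from_F, sample_F(n, rng), sample_FH(n, rng))
    z = p * f(y) / (p * f(y) + (1 - p) * f_H(y))    # posterior of W given Y
    return np.mean((z - p) ** 2)

# Example: F = N(1, 1) against F_H = N(0, 1); if F = F_H the loss is 0.
print(loss(lambda y: norm.pdf(y, loc=1), norm.pdf,
           lambda n, r: r.normal(1.0, 1.0, n),
           lambda n, r: r.standard_normal(n), p=0.5))
```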

  11. Let X(H) = {X(H)(1), X(H)(2),…, X(H)(M)} be a sample of size M of i.i.d. vectors from WH. It is also supposed that X(H) is independent of X. Let {Pk, k = 1, 2,…} be a sequence of partitions of Rd, possibly dependent on Y, and consider the corresponding sequence of σ-algebras generated by these partitions. A computationally efficient choice of the partitions is the sequential dyadic coordinate-wise partition minimizing at each step the mean square error.

  12. In view of the definition of the loss function, a natural choice of the test statistic would be a χ2-type statistic for some k, which can be treated as a smoothing parameter. Here EMN stands for the expectation with respect to the empirical distribution of Y. However, since the optimal value of k is unknown, we prefer the following definition of the test statistics Tk, where ak and bk are centering and scaling parameters to be specified.

  13. We have selected the following test statistics Tk, based on the numbers of observations of the samples X and X(H) in the jth cell of the kth partition Pk.
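
The exact form of the statistic did not survive the transcript. The sketch below shows a common χ2-type comparison of the cell counts of the two samples over the kth partition, with the normal approximation to the χ2 distribution supplying plausible stand-ins for the centering ak and scaling bk:

```python
import numpy as np

def standardized_chi2(cells_X, cells_XH, n_cells):
    """Chi-squared-type statistic over the cells of the k-th partition
    P_k; cells_X / cells_XH give the cell index of every observation of
    X and X(H). The centering a_k and scaling b_k below come from the
    normal approximation to the chi-squared distribution -- a stand-in
    for the parameters specified in the talk."""
    nu = np.bincount(cells_X, minlength=n_cells)    # cell counts of X
    mu = np.bincount(cells_XH, minlength=n_cells)   # cell counts of X(H)
    N, M = nu.sum(), mu.sum()
    tot = nu + mu
    occupied = tot > 0                              # ignore empty cells
    chi2 = N * M * np.sum(
        (nu[occupied] / N - mu[occupied] / M) ** 2 / tot[occupied])
    df = occupied.sum() - 1
    a_k, b_k = df, np.sqrt(2.0 * df)                # centering and scaling
    return (chi2 - a_k) / b_k                       # standardized T_k
```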

  14. Illustration of the sequential dyadic partitioning procedure. Here we have an example (at some step) of the sequential partitioning procedure with two samples of two-dimensional data. The next partition is selected from all current squares and all divisions along each dimension (in this case d = 2) to achieve the minimum mean square error of grouping.
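
One plausible reading of this step, sketched below: each current cell and each coordinate is examined, and the midpoint split that most reduces the within-cell sum of squares of the pooled data is performed. The within-cell sum of squares is our interpretation of the mean square error of grouping:

```python
import numpy as np

def dyadic_partition(X, n_splits):
    """Sequential dyadic coordinate-wise partition of the data range.
    Each step performs, over all current cells and all coordinates, the
    midpoint split that most reduces the within-cell sum of squares --
    one possible reading of the mean-square-error criterion."""
    d = X.shape[1]
    cells = [(X.min(axis=0), X.max(axis=0), np.arange(len(X)))]

    def sse(idx):                  # within-cell sum of squared deviations
        return (((X[idx] - X[idx].mean(axis=0)) ** 2).sum()
                if len(idx) > 1 else 0.0)

    for _ in range(n_splits):
        best = None
        for c, (a, b, idx) in enumerate(cells):
            for j in range(d):
                mid = 0.5 * (a[j] + b[j])          # dyadic midpoint split
                left = idx[X[idx, j] <= mid]
                right = idx[X[idx, j] > mid]
                gain = sse(idx) - sse(left) - sse(right)
                if best is None or gain > best[0]:
                    best = (gain, c, j, mid, left, right)
        _, c, j, mid, left, right = best
        a, b, idx = cells.pop(c)
        b_left, a_right = b.copy(), a.copy()
        b_left[j], a_right[j] = mid, mid
        cells.append((a, b_left, left))            # lower half of the cell
        cells.append((a_right, b, right))          # upper half of the cell
    return cells
```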

  15. Preliminary simulation results. The computer simulations were performed using the Monte-Carlo method (typically 100 independent replications). The sample sizes of X and X(H) were chosen equal (typically N = M = 1000). The first problem is to evaluate, by computer simulation, the test statistic Tk in the case when the hypothesis H holds. The centering and scaling parameters of the test statistic were selected in such a way that the distribution of the test statistic is approximately standard Gaussian for each k not very close to 1 or K. The simulation results show that, for a very wide range of dimensions, sample sizes and distributions, the behaviour of the test statistic when the hypothesis H holds is very similar.
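
A sketch of that calibration step: the uncentered statistic is simulated under the hypothesis H, and its empirical mean and standard deviation at each k are taken as ak and bk. The hooks statistic and simulate_null are hypothetical names, not from the talk:

```python
import numpy as np

def calibrate(statistic, simulate_null, K, n_rep=100, seed=0):
    """Monte-Carlo calibration of the centering/scaling parameters:
    simulate_null(rng) returns a pair of samples (X, X_H) generated
    under the hypothesis H, and statistic(X, X_H, k) evaluates the
    uncentered statistic for partition depth k (hypothetical hooks)."""
    rng = np.random.default_rng(seed)
    T = np.empty((n_rep, K))
    for r in range(n_rep):
        X, X_H = simulate_null(rng)            # e.g. N = M = 1000 in the talk
        T[r] = [statistic(X, X_H, k) for k in range(1, K + 1)]
    a_k = T.mean(axis=0)                       # centering parameters
    b_k = T.std(axis=0, ddof=1)                # scaling parameters
    return a_k, b_k
```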

  16. Fig. 1. Behaviour of Tk when the hypothesis holds. Here we have sample size N = 1000, dimension d = 100, and two samples from the d-dimensional standard Gaussian distribution. Shown are the maxima and minima over 100 realizations, and the corresponding maxima and minima after excluding the 5 per cent largest values at each point.

  17. Fig. 2. Behaviour of Tk when the hypothesis does not hold. Here we have sample size N = 1000, dimension d = 10, q = 3, and a Gaussian mixture with means (–4, –3, 0, 0, 0,…), (0, 6, 0, 0, 0,…), (4, –3, 0, 0, 0,…). The sample is projected onto a one-dimensional subspace. This is an extreme case of misfit.

  18. Fig. 3. Behaviour of Tk (control data). This is a control example for the data in Fig. 2, assuming that we project the data onto the true two-dimensional discriminant subspace.

  19. Fig. 4. Behaviour of Tk when the hypothesis does not hold. Here we have sample size N = 1000, dimension d = 10, q = 3, and a Gaussian mixture with means (–4, –1, 0, 0, 0,…), (0, 2, 0, 0, 0,…), (4, –1, 0, 0, 0,…). The sample is projected onto a one-dimensional subspace.

  20. Fig. 5. Behaviour of Tk (control data). This is a control example for the data in Fig. 4, assuming that we project the data onto the true two-dimensional discriminant subspace.

  21. Fig. 6. Behaviour of Tk when the hypothesis does not hold. Here we have sample size N = 1000, dimension d = 10, q = 3, and a Gaussian mixture with means (–4, –0.5, 0, 0, 0,…), (0, 1, 0, 0, 0,…), (4, –0.5, 0, 0, 0,…). The sample is projected onto a one-dimensional subspace.

  22. Fig. 7. Behaviour of Tk (control data). This is a control example for the data in Fig. 6, assuming that we project the data onto the true two-dimensional discriminant subspace.

  23. Fig. 8. Behaviour of Tk when the hypothesis does not hold. Here we have sample size N = 1000, dimension d = 20, and the standard Cauchy distribution. The sample X(H) is simulated with independent components; the numbers of independent components are d1 = d/2, d2 = d/2.

  24. Fig. 9. Behaviour of Tk (control data). This is a control example for the data in Fig. 8, assuming that the sample X(H) is simulated with the same distribution as the sample X.

  25. Fig. 10. Behaviour of Tk when the hypothesis does not hold. Here we have sample size N = 1000, dimension d = 10, and the Student distribution with 3 degrees of freedom. The numbers of independent components are d1 = 1, d2 = d – 1.

  26. Fig. 11. Behaviour of Tk (control data). This is a control example for the data in Fig. 10, assuming that the sample X(H) is simulated with the same distribution as the sample X.

  27. End.
