
Clustering Aggregation


Presentation Transcript


  1. Clustering Aggregation Nir Geffen 021537980 Yotam Margolin 039719729 Supervisor: Professor Zeev Volkovich ORT BRAUDE COLLEGE – SE DEPT. 16.01.2012

  2. Purposes Our goal is to study the results of different clustering ensemble techniques and to present the distinction between the cluster ensemble and clustering aggregation approaches via a self-learning methodology, implemented for image segmentation.

  3. Table of Contents • Introduction • What Does It Do? • Clustering • Spectral Clustering • Cluster Ensembles • Consensus • HGPA • MCLA • Volkovich-Yahalom • Main Algorithm • SE Documents

  4. What does it do? [Flow diagram: Preprocess → MCLA / HGPA / VYCAA → Comparison]

  5. Introduction – Clustering • Clustering is an unsupervised learning method aimed at partitioning a given data set into subsets, named clusters, so that items belonging to the same cluster are similar to each other while items belonging to different clusters are not.

  6. Introduction – Spectral Clustering • What is wrong with classic clustering? • Spectral Clustering • Eigenvectors and eigenvalues [Figure panels: noise removed; clustered by spectral clustering; clustered by k-means]
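A minimal sketch of the contrast shown in the figure, assuming scikit-learn (illustrative only, not the project's code): k-means splits the two-moons data set by centroid distance, while spectral clustering recovers the two shapes.

    # Illustrative only: spectral clustering vs. k-means on two moons.
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.cluster import KMeans, SpectralClustering

    X, y = make_moons(n_samples=300, noise=0.05, random_state=0)

    # k-means partitions by Euclidean distance to centroids and cuts each moon in half.
    km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Spectral clustering embeds the points via eigenvectors of a similarity-graph
    # Laplacian, where the two moons become separable.
    sc_labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                                   n_neighbors=10, random_state=0).fit_predict(X)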

  7. Introduction – Cluster Ensembles • As no clustering algorithm is agreed to be superior for every data set, a common practice is to construct several cluster solutions and to aggregate them. • We use the consensus function approach to combine the resulting partitions into a new one, in order to increase the robustness of the clustering process. [Diagram: Spectral, EM, and K-Means partitions combined]

  8. Consensus • Algorithms that solve the cluster ensemble problem are also known as consensus functions, most of which rely on graph theory.

  9. Consensus II • Cluster-based Similarity Partitioning Algorithm (CSPA) • Simple. • Considered the brute-force approach. • Hyper-Graph Partitioning Algorithm (HGPA) • Balanced. • Not always optimal. • Meta-CLustering Algorithm (MCLA) • A high-end solution. • Yields robust results.

  10. Consensus III • A criterion by which to choose a specific consensus function is ANMI. ANMI (Average Normalized Mutual Information) is defined as the average of the NMI that the final clustering shares with each of the individual solutions. • Mutual information satisfies I(X,Y) ≤ min(H(X), H(Y)).
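A minimal sketch of the ANMI criterion, assuming scikit-learn and the geometric-mean normalization NMI(X,Y) = I(X,Y) / √(H(X)·H(Y)) used by Strehl and Ghosh [2]:

    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score

    def anmi(candidate, solutions):
        """Average NMI between a candidate labeling and all base clusterings."""
        return np.mean([normalized_mutual_info_score(candidate, s,
                                                     average_method="geometric")
                        for s in solutions])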

  11. HGPA – Steps • Create a hyper-graph (hyper-edges are the clusters from all clusterings). • Repeat K−1 times: • Obtain a subset (cluster) by min-cutting the hyper-graph while maintaining a vertex imbalance of at most 5%.
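The hyper-graph of the first step can be encoded as a binary incidence matrix whose columns are the clusters of all base clusterings. The helper below is ours, for illustration only; the balanced min-cut itself is typically delegated to a dedicated partitioner such as hMETIS.

    import numpy as np

    def incidence_matrix(clusterings):
        """clusterings: list of integer label arrays, one per base clustering."""
        columns = []
        for labels in clusterings:
            labels = np.asarray(labels)
            for c in np.unique(labels):
                columns.append((labels == c).astype(int))  # one hyper-edge per cluster
        return np.column_stack(columns)  # shape: (n_objects, n_hyperedges)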

  12. MCLA – Steps • Create a hyper-graph G. • Expand hyper-edges (create a meta-graph from G). • Collapse the meta-graph (cluster the meta-graph into K clusters). • Compete for objects.
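A rough sketch of these steps, reusing incidence_matrix() from the HGPA sketch. Strehl and Ghosh cluster the meta-graph with METIS; spectral clustering on the pairwise Jaccard similarities stands in for it here, which is our simplification rather than the paper's choice.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def mcla(clusterings, k):
        H = incidence_matrix(clusterings)           # objects x hyper-edges
        inter = H.T @ H                             # pairwise hyper-edge intersections
        sizes = H.sum(axis=0)
        union = sizes[:, None] + sizes[None, :] - inter
        jaccard = inter / np.maximum(union, 1)      # meta-graph edge weights
        # Collapse the meta-graph into k meta-clusters (METIS in the original).
        meta = SpectralClustering(n_clusters=k, affinity="precomputed",
                                  random_state=0).fit_predict(jaccard)
        # "Compete for objects": each object joins the meta-cluster whose
        # collapsed hyper-edges it is most strongly associated with.
        assoc = np.column_stack([H[:, meta == m].mean(axis=1) for m in range(k)])
        return assoc.argmax(axis=1)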

  13. Volkovich-Yahalom – Purpose • Use various partitions of the same data set in order to define a new metric on the data set. • Using the new metric as an enhanced input for a clustering algorithm will produce better and more robust partitions. • This process can be utilized repeatedly, where in each step the metric is updated using the original data as well as the new cluster partition.

  14. Volkovich-Yahalom – Steps Given m partitions and the original data set, returns a new clustering. • For each partition P_i, calculate R_i, the new metric learned from the clustering results. • Combine the R_i by statistical means to complete R. • Calculate S to be the square root of R. • Cluster the original data by the new metric S, with k the desired number of clusters.
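Because [1] is still in preparation, the exact definitions are not public; the sketch below is speculative, reading each R_i as the co-membership matrix of partition i, "combine by statistical means" as averaging, and S as the matrix square root of R.

    import numpy as np
    from scipy.linalg import sqrtm
    from sklearn.cluster import KMeans

    def volkovich_yahalom_step(clusterings, k):
        labels = [np.asarray(c) for c in clusterings]
        # R_i: 1 where two objects share a cluster in partition i, else 0;
        # R is their average (our reading of "combine by statistical means").
        R = np.mean([(l[:, None] == l[None, :]).astype(float) for l in labels],
                    axis=0)
        S = np.real(sqrtm(R))                       # the new metric S = R^(1/2)
        # Re-cluster using the rows of S as the enhanced representation.
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(S)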

  15. Main Algorithm – Steps • Produce r individual spectral partitions. • Use MCLA to obtain Sc_MCLA(x). • Use HGPA to obtain Sc_HGPA(x). • Use Volkovich-Yahalom to obtain Sc_VYCAA(x). • By the ANMI criterion, get the final decision Sc*(x) from Sc_MCLA(x), Sc_HGPA(x), and Sc_VYCAA(x).
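The final decision step can be sketched with the anmi() helper from the Consensus III sketch: score each candidate consensus labeling against the r base partitions and keep the highest-scoring one (the variable names below are hypothetical).

    def final_decision(base_partitions, candidates):
        # candidates: dict mapping a method name to its consensus labeling.
        scores = {name: anmi(labels, base_partitions)
                  for name, labels in candidates.items()}
        best = max(scores, key=scores.get)          # ANMI-maximizing candidate
        return candidates[best], scores

    # Usage (hypothetical variables):
    # sc_star, scores = final_decision(base, {"MCLA": sc_mcla,
    #                                         "HGPA": sc_hgpa,
    #                                         "VYCAA": sc_vycaa})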

  16. GUI – Main window [screenshot]

  17. GUI – Results window [screenshot]

  18. SE Documents [screenshot]

  19. SE Documents [screenshot]

  20. SE Documents [screenshot]

  21. Test Plan • Unit tests are our first line in the test plan (Test-Driven Development). • Coding conventions. • Lint the code for errors such as dead code or uninitialized pointers. • Usage and code-coverage tests.
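An illustrative pytest-style unit test in the spirit of this plan (hypothetical, reusing mcla() from the earlier sketch): a consensus function should label every object and produce at most k distinct clusters.

    import numpy as np

    def test_mcla_labels_every_object_with_at_most_k_clusters():
        base = [[0, 0, 1, 1, 2, 2],
                [0, 0, 1, 1, 2, 2],
                [0, 1, 1, 2, 2, 0]]
        labels = mcla(base, k=3)
        assert len(labels) == 6                 # one label per object
        assert len(np.unique(labels)) <= 3      # no more than k clusters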

  22. References
  [1] Z. Volkovich, O. Yahalom, "Clustering Aggregation via the Self-Learning Approach", work in preparation, 2010–2012.
  [2] A. Strehl, J. Ghosh, "Cluster Ensembles – A Knowledge Reuse Framework for Combining Multiple Partitions", Journal of Machine Learning Research 3 (2002) 583–617.
  [3] A.Y. Ng, M.I. Jordan, Y. Weiss, "On Spectral Clustering: Analysis and an Algorithm", Advances in Neural Information Processing Systems 14, 2002, p. 849.
  [4] X. Ma, W. Wan, L. Jiao, "Spectral Clustering Ensemble for Image Segmentation", GEC '09: Proceedings of the First ACM/SIGEVO Summit on Genetic and Evolutionary Computation, 2009.
  [5] I.S. Dhillon, Y. Guan, B. Kulis, "Kernel k-means, Spectral Clustering and Normalized Cuts", 2004. http://www.cs.utexas.edu/~kulis/pubs/spectral_kdd.pdf
  [6] E. David, "Spectral Clustering", Image Processing Seminar, 2008.

  23. THE END! Thank you for listening!
