
Presentation Transcript


  1. National Yunlin University of Science and Technology • Scaling Clustering Algorithm for Massive Data Sets using Data Streams • Advisor: Dr. Hsu • Graduate: Jian-Lin Kuo • Authors: Silvia Nittel • Kelvin T. Leung • Amy Braverman

  2. Outline • Motivation • Objective • Introduction • Literature review • Implementing K-means Using Data Streams • Space & Time complexity • Parallelizing Partial/Merge K-means • Experimental Evaluation • Conclusions

  3. Motivation • Computing data mining algorithms such as clustering techniques on massive data sets is still neither feasible nor efficient today. • To cluster massive data sets or subsets, overall execution time and scalability are important issues.

  4. Objective • Achieve overall high-performance computation for clustering massive data sets.

  5. Introduction • To improve data distribution and analysis, we substitute data sets with compressed counterparts. • We partition the data set into 1° × 1° grid cells.

  6. Literature review • K-means algorithm 1. Initialization: select a set of k initial cluster centroids z_j, 1 ≤ j ≤ k, at random. 2. Distance Calculation: for each data point x_i, 1 ≤ i ≤ n, compute its Euclidean distance to each centroid and assign the point to the closest one. 3. Centroid Recalculation: for each 1 ≤ j ≤ k, recompute the centroid as the actual mean of the points in cluster C_j. 4. Convergence Condition: repeat (2) to (3) until the convergence criterion || MSE(n-1) - MSE(n) || ≤ 1 × 10^-9 is met.
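For reference, a minimal NumPy sketch of this loop; the function name, the fixed default random seed, and the empty-cluster handling are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def kmeans(points, k, tol=1e-9, rng=None):
    """Basic k-means: random init, nearest-centroid assignment, mean update."""
    if rng is None:
        rng = np.random.default_rng(0)
    points = np.asarray(points, dtype=float)
    # 1. Initialization: pick k distinct points as the initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)].copy()
    prev_mse = np.inf
    while True:
        # 2. Distance calculation: squared Euclidean distance to every centroid.
        dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # 3. Centroid recalculation: mean of the points assigned to each cluster.
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
        # 4. Convergence condition: stop when the MSE changes by at most tol.
        mse = dists[np.arange(len(points)), labels].mean()
        if abs(prev_mse - mse) <= tol:
            return centroids, labels
        prev_mse = mse
```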

  7. Literature review (cont.) • Computing K-means via a serial algorithm: 1. scan one grid cell at a time, 2. compress it, and 3. scan the next grid cell. • All data points of one grid cell are kept in memory.

  8. Literature review (cont.) • The quality of the clustering process is indicated by the error function E (see the reconstruction below). • In this case, the memory complexity is O(N) and the time complexity is O(G·R·I·N·K).
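For k clusters C_j with centroids z_j, E is presumably the standard within-cluster sum of squared distances; the following is a reconstruction from the surrounding definitions, not a quote from the slide:

```latex
E = \sum_{j=1}^{k} \sum_{x_i \in C_j} \lVert x_i - z_j \rVert^{2}
```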

  9. Literature review (cont.) • Parallel implementations of K-means - Method A is a naive way of parallelizing k-means: assign the clustering of one grid cell each to a processor. - Method B assigns each run of k-means on one grid cell, using one set of initial, randomly chosen k seeds, to a processor.

  10. Literature review (cont.) - Method C divides the grid cell into disjoint subsets (clusters) assigned to different slaves by choosing a set of initial centroids. - It reduces both the computational and the memory bottleneck.

  11. Implementing K-means Using Data Streams • It consists of the following steps: - Scan the temporal-spatial grid cells. 1) We assume that the data has been scanned once and sorted into one-degree-latitude by one-degree-longitude grid buckets that serve as data input (a bucketing sketch follows below). 2) All data points belonging to the grid cell have to be available.
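A minimal bucketing sketch under the assumption that each record begins with a latitude and a longitude; the function name and the cell-key convention are illustrative, not from the paper:

```python
import math
from collections import defaultdict

def bucket_by_grid_cell(records):
    """Group (lat, lon, *attrs) records into 1-degree x 1-degree grid cells.

    Sketch only: the record layout and the cell key (the cell's lower-left
    corner) are assumptions, not taken from the paper.
    """
    cells = defaultdict(list)
    for lat, lon, *attrs in records:
        key = (math.floor(lat), math.floor(lon))
        cells[key].append((lat, lon, *attrs))
    return cells
```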

  12. Implementing K-means Using Data Streams (cont.) - Partial k-means on a subset of data points (see the sketch after this slide). 1) Instead of storing all data points v1, …, vn of a grid cell Cs in memory, divide the data of Cs into p partitions P1, …, Pp. 2) All data points v1, …, vm of a partition Pj can be stored in available memory. 3) Select a set of k random seeds for partition Pj, run k-means until the convergence criterion is met, and repeat for several sets of random seeds. 4) The partial k-means produces a set of weighted centroids for Pj: {(c1j, w1j), (c2j, w2j), …, (ckj, wkj)}.
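A sketch of this partial step, reusing the kmeans() helper from the earlier example; the number of seed sets and the keep-the-best-MSE rule are assumptions:

```python
import numpy as np

def partial_kmeans(partition, k, n_seed_sets=5):
    """Cluster one in-memory partition and return (centroid, weight) pairs."""
    partition = np.asarray(partition, dtype=float)
    best_mse, best = np.inf, None
    # Repeat k-means for several sets of random seeds and keep the best run.
    for seed in range(n_seed_sets):
        centroids, labels = kmeans(partition, k, rng=np.random.default_rng(seed))
        mse = ((partition - centroids[labels]) ** 2).sum(axis=1).mean()
        if mse < best_mse:
            best_mse, best = mse, (centroids, labels)
    centroids, labels = best
    # Each centroid is weighted by the number of points assigned to it.
    weights = np.bincount(labels, minlength=k)
    return list(zip(centroids, weights))
```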

  13. Implementing K-means Using Data Streams (cont.) - Merge k-means: merge the results of step 2. 1) It performs another k-means using the set of all centroids computed by the partial k-means for the partitions P1, …, Pp. 2) The input is a set S of M D-dimensional weighted centroids {(c1, w1), (c2, w2), …, (cM, wM)}, where M is the total number of centroids produced for P1, …, Pp.

  14. Implementing K-means Using Data Streams (cont.) - Merge K-means algorithm (see the sketch below): 1) Initialization: select the subset of k initial cluster centroids z_i whose weights w_i are the k largest weights in S. 2) Distance Calculation: for each data point c_i, 1 ≤ i ≤ M, compute its Euclidean distance to each centroid z_j, 1 ≤ j ≤ k, and find the closest cluster centroid. 3) Centroid Recalculation: for each 1 ≤ j ≤ k, compute the actual, weighted mean of the cluster C_j as the new centroid. 4) Convergence Condition: repeat (2) to (3) until the convergence criterion is met, e.g. || MSE(n-1) - MSE(n) || ≤ 1 × 10^-9.
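A sketch of the weighted merge step: the k heaviest centroids seed the clusters and the means are weight-weighted, as described above; the function name and the tolerance handling are assumptions:

```python
import numpy as np

def merge_kmeans(weighted_centroids, k, tol=1e-9):
    """Weighted k-means over the (centroid, weight) pairs from all partial runs."""
    points = np.array([c for c, _ in weighted_centroids], dtype=float)
    weights = np.array([w for _, w in weighted_centroids], dtype=float)
    # 1) Initialization: the k centroids with the largest weights.
    centroids = points[np.argsort(weights)[-k:]].copy()
    prev_mse = np.inf
    while True:
        # 2) Distance calculation and nearest-centroid assignment.
        dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # 3) Centroid recalculation: weighted mean of each cluster's members.
        for j in range(k):
            mask = labels == j
            if weights[mask].sum() > 0:
                centroids[j] = np.average(points[mask], axis=0, weights=weights[mask])
        # 4) Convergence condition on the weighted mean squared error.
        mse = np.average(dists[np.arange(len(points)), labels], weights=weights)
        if abs(prev_mse - mse) <= tol:
            return centroids
        prev_mse = mse
```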

  15. Implementing K-means Using Data Streams (cont.)

  16. Space & Time complexity • Partial k-means vs. serial k-means, where N is the number of data points, K the number of centroids, and I the number of iterations to converge: the space complexity satisfies O(N′·p) = O(N) (p is the number of partitions, so each partition holds N′ = N/p points in memory), and the time complexity satisfies O(N′·K·I′·p) << O(N·K·I); since N′·p = N, the gain comes from the smaller per-partition working set and from the fewer iterations I′ needed per partition.

  17. Space & Time complexity (cont.) • Merge k-means, where K is the number of weighted centroids from each partition, p is the number of partitions, and I is the number of iterations to converge.
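A hedged reading of these quantities: the merge step clusters the K·p weighted centroids collected from all partitions into K output clusters, so under the usual k-means cost model (an assumption, not a quote from the paper) its complexity is roughly

```latex
\text{space: } O(K \cdot p), \qquad \text{time: } O\bigl((K \cdot p) \cdot K \cdot I\bigr)
```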

  18. Parallelizing Partial/Merge K-means • Several options for parallelization can be considered (Option 1 is sketched below). - Option 1 is to clone the partial k-means to as many machines as possible, compute all k-means runs on the data partitions in parallel, and merge the results on one of the machines. - Option 2 is to send a data partition to several machines at the same time, and perform partial k-means with a different set of initial seeds on each machine in parallel. - Option 3 is to break up the partial k-means into several finer-grained operators.
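A sketch of Option 1, with a local process pool standing in for separate machines (the paper distributes its operators through the Conquest engine, so the multiprocessing plumbing below is an assumption):

```python
from multiprocessing import Pool

def cluster_cell_parallel(partitions, k, n_workers=4):
    """Option 1: run partial k-means on every partition in parallel, then merge.

    Uses partial_kmeans() and merge_kmeans() from the earlier sketches; the
    worker count and pool-based distribution are illustrative choices.
    """
    with Pool(n_workers) as pool:
        # Clone the partial k-means operator across workers.
        partial_results = pool.starmap(partial_kmeans, [(p, k) for p in partitions])
    # Collect all weighted centroids and merge them in a single merge k-means run.
    all_weighted = [wc for result in partial_results for wc in result]
    return merge_kmeans(all_weighted, k)
```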

  19. Experimental Evaluation • The goals of the experimental evaluation are to - compare the scalability of the partial/merge k-means (5-split and 10-split cases), - measure the speed-up of the processing when the partial k-means operators are parallelized and run on different machines, - compare the achieved clustering quality with a serial k-means that clusters all data points in the same iteration, and - analyze the quality of the merge k-means operator with regard to the size and number of data partitions.

  20. Experimental Evaluation (cont.) • Experiment Environment - Conquest version implemented using JDK 1.3.1, - four Dell Optiplex GX260 PCs, each equipped with a 2.8 GHz Intel Pentium IV processor, 1 GB of RAM, and an 80 GB hard disk, connected by a Netgear GS508T Gigabit switch.

  21. Experimental Evaluation (cont.) • Data Sets - EOS MISR data set, - 1° × 1° grid cells with the following characteristics: 1) the number of data points per grid cell varied over 250, 2,500, 5,000, 20,000, 50,000, and 75,000 points, 2) six attributes per data point, 3) a fixed k for all configurations (k = 40).

  22. Experimental Evaluation (cont.) • The computation time of the serial k-means increases exponentially with the number of data points per grid cell. • The overall execution time of the partial/merge k-means is, in most cases, significantly lower. (Figure: overall execution time, serial vs. partial/merge k-means.)

  23. Experimental Evaluation (cont.) • Comparing 10-split vs. 5-split vs. serial

  24. Experimental Evaluation (cont.) (Figures: minimum mean square error, serial vs. 5-split vs. 10-split; partial k-means processing time, 5-split vs. 10-split.)

  25. Conclusions • The partial/merge stream-based k-means - makes it simpler to find an appropriate cluster representation, and - provides a highly scalable, parallel, and efficient approach with significantly higher clustering quality.
