
Stochastic Unsupervised Learning on Unlabeled Data


Presentation Transcript


  1. Stochastic Unsupervised Learning on Unlabeled Data
  Presented by Jianjun Xie – CoreLogic
  In collaboration with Chuanren Liu, Yong Ge, and Hui Xiong – Rutgers, The State University of New Jersey
  July 2, 2011

  2. Our Story
  - “Let’s set up a team to compete in another data mining challenge” – a call with Rutgers
  - Is it a competition on data preprocessing?
  - Transform the problem into a clustering problem:
    - How many clusters are we shooting for?
    - Which distance measure works better?
  - Go with stochastic K-means clustering.

  3. Dataset Recap
  - Five real-world data sets were extracted from different domains.
  - No labels were provided during the unsupervised learning challenge.
  - The withheld labels are multi-class; some records can belong to several labels at the same time.
  - Performance was measured by a global score, defined as the Area Under the Learning Curve (ALC).
  - A simple linear classifier (Hebbian learner) was used to calculate the learning curve.
  - The score focuses on small numbers of training samples via log2 scaling of the learning curve’s x-axis.
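  As a rough illustration of the scoring metric, the sketch below computes an ALC-style global score in Python. The trapezoidal integration on a log2 x-axis follows the slide’s description; the normalization between a random baseline (AUC = 0.5) and a perfect learner (AUC = 1) is an assumption, since the slide does not give the exact formula, and the sample AUC values are invented.

```python
import numpy as np

def alc_score(train_sizes, auc_values):
    """ALC-style score: trapezoidal area under the learning curve on a
    log2-scaled x-axis, normalized between a random baseline (AUC = 0.5)
    and a perfect learner (AUC = 1). The normalization is an assumption;
    the challenge's exact formula is not given on the slide."""
    x = np.log2(train_sizes)
    area = np.trapz(auc_values, x)              # area under the actual curve
    a_max = np.trapz(np.ones_like(x), x)        # perfect learner
    a_rand = np.trapz(np.full_like(x, 0.5), x)  # random guessing
    return (area - a_rand) / (a_max - a_rand)

# Invented AUC values measured after 1, 2, 4, ..., 64 training samples
sizes = np.array([1, 2, 4, 8, 16, 32, 64])
aucs = np.array([0.52, 0.58, 0.66, 0.72, 0.78, 0.83, 0.86])
print(f"global score ~ {alc_score(sizes, aucs):.3f}")
```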

  4. Evolution of Our Approaches
  - Simple data preprocessing
    - Normalization: Z-scale (std = 1, mean = 0)
    - TF-IDF on text recognition (TERRY dataset)
  - PCA
    - PCA on raw data
    - PCA on normalized data
    - Normalized PCA vs. non-normalized PCA
  - K-means clustering
    - Cluster on the top N normalized PCs
    - Cosine similarity vs. Euclidean distance
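  A minimal sketch of the normalize-then-PCA step in Python, assuming scikit-learn; the function name, the choice of N = 20 components, and the random placeholder data are illustrative, not from the slides.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def normalized_pca(X, n_components=20):
    """Z-scale each feature (mean = 0, std = 1), then project the data
    onto its top n_components principal components."""
    X_norm = StandardScaler().fit_transform(X)
    return PCA(n_components=n_components).fit_transform(X_norm)

X = np.random.rand(1000, 100)   # placeholder for a challenge dataset
top_pcs = normalized_pca(X)     # the features handed to K-means
print(top_pcs.shape)            # (1000, 20)
```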

  5. Stochastic Clustering Process
  Given a data set X, a number of clusters K, and an iteration count N:
  - For n = 1, 2, …, N:
    - Randomly choose K seeds from X
    - Perform K-means clustering and assign each record a cluster membership I_n
    - Transform I_n into a binary representation
  - Combine the N binary representations together as the final result
  Example of the binary representation of clusters: for cluster labels 1, 2, 3, the binary representations are (1 0 0), (0 1 0), and (0 0 1).
  This was our final approach.
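  The sketch below implements this process in Python with scikit-learn’s KMeans. The data set X, cluster count K, and iteration count N follow the slide; the function name, the random-seed handling, and the example shapes are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def stochastic_clustering(X, K, N, random_state=0):
    """Run K-means N times from freshly sampled random seeds, one-hot
    (binary) encode each run's cluster memberships, and concatenate
    the N encodings into the final feature representation."""
    rng = np.random.RandomState(random_state)
    blocks = []
    for n in range(N):
        # Randomly choose K seeds from X and run K-means once from them
        seeds = X[rng.choice(len(X), size=K, replace=False)]
        labels = KMeans(n_clusters=K, init=seeds, n_init=1).fit_predict(X)
        # Binary representation: cluster label k becomes unit vector e_k,
        # e.g. labels 0, 1, 2 -> (1 0 0), (0 1 0), (0 0 1)
        blocks.append(np.eye(K)[labels])
    # Combine the N binary representations as the final result
    return np.hstack(blocks)

X = np.random.rand(500, 20)                     # placeholder data
features = stochastic_clustering(X, K=3, N=10)
print(features.shape)                           # (500, 30): N runs x K columns
```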

  6. Results of Our Approaches: Dataset Harry – human action recognition

  7. Results: Dataset Rita – object recognition

  8. Results: Dataset Sylvester – ecology

  9. Results: Dataset Terry – text recognition

  10. Results: Dataset Avicenna – Arabic manuscripts

  11. Summary of Results: Overall rank 2nd.

  12. Discussions
  - Stochastic clustering can generate better results than PCA in general.
  - Cosine similarity distance performs better than Euclidean distance.
  - Normalized data is generally better than non-normalized data for K-means.
  - The number of clusters (K) is an important factor, but could be relaxed for this particular competition.
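  One common way to get cosine-similarity K-means out of a Euclidean implementation, sketched below assuming scikit-learn, is to scale every record to unit L2 norm first: for unit vectors, squared Euclidean distance equals 2 · (1 − cosine similarity), so clustering the normalized rows approximates cosine-similarity clustering. The slides do not say this is how the team implemented it; the function name and data are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def cosine_kmeans(X, n_clusters, random_state=0):
    """Approximate cosine-similarity K-means: L2-normalize each row,
    then run ordinary Euclidean K-means on the unit vectors."""
    X_unit = normalize(X)   # row-wise L2 normalization
    return KMeans(n_clusters=n_clusters,
                  random_state=random_state).fit_predict(X_unit)

X = np.random.rand(300, 10)
labels = cosine_kmeans(X, n_clusters=5)
print(np.bincount(labels))   # cluster sizes
```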

  13. Thank you! Questions?
