Web Search and Text Mining


Presentation Transcript


  1. Web Search and Text Mining Lecture 14: Clustering Algorithm

  2. Today’s Topic: Clustering • Document clustering • Motivations • Document representations • Success criteria • Clustering algorithms • Partitional • Hierarchical

  3. What is clustering? • Clustering: the process of grouping a set of objects into classes of similar objects • The most common form of unsupervised learning • Unsupervised learning = learning from raw data, as opposed to supervised learning, where a classification of the examples is given • A common and important task that finds many applications in IR and other places

  4. Why cluster documents? • Whole corpus analysis/navigation • Better user interface • For improving recall in search applications • Better search results • For better navigation of search results • Effective “user recall” will be higher • For speeding up vector space retrieval • Faster search

  5. Yahoo! Hierarchy (www.yahoo.com/Science … (30)) • A manually built topic tree: Science branches into agriculture, biology, physics, CS, space, … • These branch further, e.g. agriculture → dairy, crops, agronomy, forestry; biology → botany, cell, evolution; physics → magnetism, relativity; CS → AI, courses, HCI; space → craft, missions

  6. For improving search recall • Cluster hypothesis: documents with similar text are related • Therefore, to improve search recall: • Cluster docs in the corpus a priori • When a query matches a doc D, also return the other docs in the cluster containing D • The hope: the query “car” will also return docs containing automobile, because clustering grouped the docs containing car together with those containing automobile. Why might this happen? (A sketch of this expansion step follows below.)
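
To make the expansion step concrete, here is a minimal sketch, assuming a clustering has already been computed offline; the function and mapping names (expand_with_clusters, doc_to_cluster, cluster_to_docs) are hypothetical, not from the lecture.

```python
def expand_with_clusters(matching_docs, doc_to_cluster, cluster_to_docs):
    """Return the matching docs plus all other docs in their clusters."""
    expanded = set(matching_docs)
    for d in matching_docs:
        # Also return the other docs in the cluster containing d.
        expanded.update(cluster_to_docs[doc_to_cluster[d]])
    return expanded

# Toy corpus: d1 mentions "car", d2 mentions "automobile"; the two were
# clustered together, so a query matching only d1 also brings back d2.
doc_to_cluster = {"d1": 0, "d2": 0, "d3": 1}
cluster_to_docs = {0: {"d1", "d2"}, 1: {"d3"}}
print(expand_with_clusters({"d1"}, doc_to_cluster, cluster_to_docs))  # {'d1', 'd2'}
```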

  7. For better navigation of search results • For grouping search results thematically • clusty.com / Vivisimo

  8. Issues for clustering • Representation for clustering • Document representation • Vector space? Probabilistic/Multinomials? • Need a notion of similarity/distance • How many clusters? • Fixed a priori? In search results, space constraints • Completely data driven? • Avoid “trivial” clusters - too large or small • In an application, if a cluster's too large, then for navigation purposes you've wasted an extra user click without whittling down the set of documents much.

  9. What makes docs “related”? • Ideal: semantic similarity → a hard NLP problem. • Practical: statistical similarity • We will use cosine similarity → equivalently, distance between normalized doc vectors (see the sketch below). • Docs as vectors.
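
A quick check of the equivalence claimed above: for unit-length vectors, squared Euclidean distance equals 2 − 2·cos, so ranking by cosine similarity and by distance of normalized doc vectors gives the same neighbors. A minimal NumPy sketch:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between doc vectors u and v."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# For unit-length (normalized) doc vectors:
#   ||u - v||^2 = 2 - 2 * cos(u, v)
u = np.array([1.0, 2.0, 0.0]); u /= np.linalg.norm(u)
v = np.array([2.0, 1.0, 1.0]); v /= np.linalg.norm(v)
assert np.isclose(np.sum((u - v) ** 2), 2 - 2 * cosine_similarity(u, v))
```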

  10. Clustering Algorithms • Partitional algorithms • Usually start with a random (partial) partitioning • Refine it iteratively • K means clustering • Model based clustering (finite mixture models) • Hierarchical algorithms • Bottom-up, agglomerative • Top-down, divisive

  11. Partitioning Algorithms • Partitioning method: construct a partition of n documents into a set of K clusters • Given: a set of documents and the number K • Find: a partition into K clusters that optimizes the chosen partitioning criterion • Globally optimal: exhaustively enumerate all partitions (K^n possible assignments of n docs to K clusters) • Effective heuristic methods: K-means and K-medoids algorithms

  12. K-Means • Assumes documents are real-valued vectors. • Clusters based on centroids (aka the center of gravity or mean) of the points in a cluster c: μ(c) = (1/|c|) Σ_{x ∈ c} x • Reassignment of instances to clusters is based on distance to the current cluster centroids. • (Or one can equivalently phrase it in terms of similarities)

  13. K-Means Algorithm Select K random docs {c1, c2, …, cK} as seeds. Until clustering converges: For each doc xi, assign xi to the cluster Cj such that dist(xi, cj) is minimal. Then update the seeds to the centroid of each cluster: for each cluster Cj, cj = μ(Cj). (A runnable sketch follows below.)
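
A minimal runnable version of the algorithm above, as a sketch in NumPy; the function name and defaults are illustrative, not from the lecture.

```python
import numpy as np

def kmeans(X, K, max_iters=100, rng=None):
    """Minimal K-means sketch: X is an (n, m) array of doc vectors."""
    X = np.asarray(X, dtype=float)
    rng = rng or np.random.default_rng(0)
    # Select K random docs as seeds.
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    assignment = None
    for _ in range(max_iters):
        # Assignment step: put each doc in the cluster with the nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_assignment = dists.argmin(axis=1)
        if assignment is not None and np.array_equal(new_assignment, assignment):
            break  # doc partition unchanged -> converged
        assignment = new_assignment
        # Update step: move each centroid to the mean of its members.
        for j in range(K):
            members = X[assignment == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, assignment
```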

  14. K-Means Example (K = 2) • (Figure: pick seeds → reassign clusters → compute centroids → reassign clusters → compute centroids → reassign clusters → converged!)

  15. Termination conditions • Several possibilities, e.g., • A fixed number of iterations. • Doc partition unchanged. • Centroid positions don’t change. Does this mean that the docs in a cluster are unchanged?
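
The three tests can be combined in code; a sketch, assuming the variables come from a K-means loop like the one above (the tolerance tol is an assumed knob):

```python
import numpy as np

def converged(it, max_iters, assignment, old_assignment,
              centroids, old_centroids, tol=1e-9):
    """Combine the three termination tests from the slide."""
    if it >= max_iters:                                   # fixed number of iterations
        return True
    if np.array_equal(assignment, old_assignment):        # doc partition unchanged
        return True
    if np.linalg.norm(centroids - old_centroids) < tol:   # centroids stopped moving
        return True
    return False
```

As to the slide's question: if the centroids do not change, the next assignment step reproduces the same partition (up to ties), so unchanged centroids do imply an unchanged doc partition from that point on.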

  16. Convergence • Why should the K-means algorithm ever reach a fixed configuration? • A state in which clusters don’t change. • K-means convergence in terms of the cost function: neither step increases the objective function value. • More: if one can show that each step strictly reduces the objective function → no cycles can occur → convergence to a fixed configuration

  17. Convergence of K-Means • Define the goodness measure of cluster k as the sum of squared distances from the cluster centroid: • G_k = Σ_{x_i ∈ cluster k} (x_i − c_k)² • G = Σ_k G_k • Reassignment monotonically decreases G, since each vector is assigned to the closest centroid.

  18. Convergence of K-Means • Recomputation monotonically decreases each G_k, because Σ_i (x_i − a)² is minimized at the centroid (m_k is the number of members in cluster k): • Setting the derivative to zero: Σ_i −2(x_i − a) = 0 • So Σ_i x_i = m_k a • Hence a = (1/m_k) Σ_i x_i = c_k

  19. Relating two consecutive configurations • Illustration in class

  20. Time Complexity • Computing distance between two docs is O(m) where m is the dimensionality of the vectors. • Reassigning clusters: O(Kn) distance computations, or O(Knm). • Computing centroids: Each doc gets added once to some centroid: O(nm). • Assume these two steps are each done once for I iterations: O(IKnm). • However, fast algorithms exist (using KD-trees).

  21. NP-Hard Problem • Consider the problem of minimizing G = Σ_k G_k for fixed K. It has been proved that even for K = 2 the problem is NP-hard. The total number of partitions is K^n, where n is the number of points. K-means is a local search method for minimizing G, and its result depends on the initial seed selection.

  22. Non-optimal stable configuration • Illustration using three points

  23. Seed Choice • Results can vary based on random seed selection. • Some seeds can result in a poor convergence rate, or convergence to sub-optimal clusterings. • Select good seeds using a heuristic (e.g., a doc least similar to any existing mean) • Try out multiple starting points (see the sketch below) • Initialize with the results of another clustering method. • Example of sensitivity to seeds, for six points A–F: starting with B and E as seeds, K-means converges to {A,B,C} and {D,E,F}; starting with D and F, it converges to {A,B,D,E} and {C,F}.
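
The multiple-restarts heuristic as a sketch, reusing the kmeans() function sketched under slide 13 and keeping the run with the smallest objective G:

```python
import numpy as np

def kmeans_restarts(X, K, restarts=10):
    """Run K-means from several random seedings; keep the lowest-G run."""
    X = np.asarray(X, dtype=float)
    best = None
    for r in range(restarts):
        centroids, assignment = kmeans(X, K, rng=np.random.default_rng(r))
        # G = sum of squared distances of each doc to its centroid.
        G = sum(np.sum((X[assignment == j] - centroids[j]) ** 2)
                for j in range(K))
        if best is None or G < best[0]:
            best = (G, centroids, assignment)
    return best[1], best[2]
```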

  24. How to Choose Initial Seeds? K-means++: let D(x) be the shortest distance from x to the centers chosen so far • Choose the initial center c1 uniformly at random from X • Choose the next center ci: pick ci = x′ from X with probability D(x′)² / Σ_x D(x)² • Repeat until we have K centers • Run K-means initialized with the chosen K centers (a sketch of the seeding step follows below)
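
A sketch of the K-means++ seeding step in NumPy; the helper name kmeans_pp_seeds is illustrative:

```python
import numpy as np

def kmeans_pp_seeds(X, K, rng=None):
    """Draw K seeds, each with probability proportional to D(x)^2."""
    X = np.asarray(X, dtype=float)
    rng = rng or np.random.default_rng(0)
    centers = [X[rng.integers(len(X))]]  # c1: uniform over X
    while len(centers) < K:
        # D(x)^2: squared distance from each point to its nearest chosen center.
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```

Sampling proportional to D(x)² pushes new seeds away from the centers already chosen, which is exactly the property the guarantee on the next slide rewards.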

  25. Theorem on K-means++ • Let G be the objective function value obtained by K-means++ and G_opt the optimal objective function value. Then E[G] ≤ 8(ln K + 2) · G_opt

  26. How Many Clusters? • Number of clusters K is given • Partition n docs into predetermined number of clusters • Finding the “right” number of clusters is part of the problem • Given docs, partition into an “appropriate” number of subsets. • E.g., for query results - ideal value of K not known up front - though UI may impose limits.

  27. K not specified in advance • G(K) decreases as K increases • Solve an optimization problem: penalize having lots of clusters • application dependent, e.g., compressed summary of search results list. • Tradeoff between having more clusters (better focus within each cluster) and having too many clusters

  28. BIC-type criteria • This is a difficult problem • A common approach is to minimize a BIC-type criterion: G(K) + a · m · K · log n, where m is the dimension, K the number of clusters, and n the number of points (a sketch follows below)
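
A sketch of model selection with this criterion, reusing the kmeans() sketch from above; the constant a is an assumed tuning knob whose value is application dependent, as the slide notes:

```python
import numpy as np

def choose_k(X, k_range, a=1.0):
    """Pick K minimizing the BIC-type score G(K) + a * m * K * log(n)."""
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    scores = {}
    for K in k_range:
        centroids, assignment = kmeans(X, K)
        G = sum(np.sum((X[assignment == j] - centroids[j]) ** 2)
                for j in range(K))
        scores[K] = G + a * m * K * np.log(n)
    return min(scores, key=scores.get)
```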

  29. K-means issues, variations, etc. • Recomputing the centroid after every assignment (rather than after all points are re-assigned) can improve speed of convergence of K-means • Assumes clusters are spherical in vector space • Sensitive to coordinate changes, weighting etc. • Disjoint and exhaustive • Doesn’t have a notion of “outliers”
