Machine Learning Lecture 9: Clustering. Moshe Koppel. Slides adapted from Raymond J. Mooney.
Clustering: partition unlabeled examples into disjoint subsets called clusters, such that:
Examples within a cluster are very similar
Examples in different clusters are very different
Example hierarchy (animal taxonomy): vertebrate (fish, reptile, amphibian, mammal) vs. invertebrate (worm, insect, crustacean)
Hierarchical Clustering
Start with all instances in their own cluster.
Until there is only one cluster:
  Among the current clusters, determine the two clusters, ci and cj, that are most similar.
  Replace ci and cj with a single cluster ci ∪ cj.
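The loop above can be sketched in a few lines of Python. This is a minimal illustration, assuming Euclidean distance and single-link cluster similarity (distance between clusters = minimum distance between any pair of members); the function names are illustrative, not from the slides.

```python
# Hierarchical agglomerative clustering, single-link, Euclidean distance.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_link(c1, c2):
    # Cluster distance = min distance between any pair of members.
    return min(euclidean(a, b) for a in c1 for b in c2)

def hac(instances):
    # Start with every instance in its own cluster.
    clusters = [[x] for x in instances]
    merges = []
    # Until there is only one cluster:
    while len(clusters) > 1:
        # Find the two most similar (closest) clusters ci, cj.
        i, j = min(
            ((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
            key=lambda ab: single_link(clusters[ab[0]], clusters[ab[1]]),
        )
        # Replace ci and cj with their union.
        merged = clusters[i] + clusters[j]
        merges.append(merged)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return merges

points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(hac(points))  # the first merge joins a pair of nearest neighbors
```

Recording the sequence of merges, rather than only the final partition, is what lets a dendrogram (like the animal taxonomy above) be drawn.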
K-Means
Let d be the distance measure between instances.
Select k random instances {s1, s2, … sk} as seeds.
Until clustering converges or other stopping criterion:
  For each instance xi:
    Assign xi to the cluster cj such that d(xi, sj) is minimal.
  (Update the seeds to the centroid of each cluster)
  For each cluster cj:
    sj = μ(cj)
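The two alternating steps above (assign, then recompute centroids) can be sketched directly. A minimal illustration, assuming squared Euclidean distance and seeds sampled from the instances; `kmeans` and its parameters are illustrative names, not from the slides.

```python
# K-means: assign each point to its nearest seed, then move each seed
# to the centroid of its cluster, until the seeds stop changing.
import random

def kmeans(instances, k, iters=100, seed=0):
    rng = random.Random(seed)
    # Select k random instances as seeds.
    seeds = rng.sample(instances, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: put each xi in the cluster cj minimizing d(xi, sj).
        clusters = [[] for _ in range(k)]
        for x in instances:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(x, seeds[j])))
            clusters[j].append(x)
        # Update step: move each seed to the centroid of its cluster
        # (empty clusters keep their old seed).
        new_seeds = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else seeds[j]
            for j, c in enumerate(clusters)
        ]
        if new_seeds == seeds:  # converged: assignments will not change
            break
        seeds = new_seeds
    return seeds, clusters

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
seeds, clusters = kmeans(points, k=2)
print(sorted(seeds))  # two centroids, one per well-separated group
```

Convergence here means the seeds (and hence the assignments) stop changing, which is the usual stopping criterion alluded to above.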
K-Means Example (K=2). (Figure: points and centroids at each step: reassign clusters; compute centroids; reassign clusters; compute centroids; reassign clusters; converged!)
Each step decreases the objective function J = Σj Σx∈cj ||x − μj||², since Σx∈X ||x − μ||² is minimized when μ is the centroid of X.
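The centroid-minimization fact can be checked numerically. A tiny illustrative check (not from the slides) on one-dimensional data:

```python
# Check that sum_x ||x - mu||^2 is minimized when mu is the centroid of X.
X = [1.0, 2.0, 6.0]
centroid = sum(X) / len(X)  # mean of X

def sq_error(mu):
    # Objective for a single cluster: sum of squared distances to mu.
    return sum((x - mu) ** 2 for x in X)

# The centroid beats any other candidate position.
assert all(sq_error(centroid) <= sq_error(mu) for mu in [0.0, 2.9, 3.1, 5.0])
print(centroid, sq_error(centroid))
```

Since the assignment step also never increases J (each point moves only to a closer seed), both steps decrease the objective, which is why k-means converges.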
EM, Initialize: assign random probabilistic labels to the unlabeled examples. (Figure: unlabeled examples.)
EM, Initialize: give the soft-labeled training data to a probabilistic learner.
EM, Initialize: produce a probabilistic classifier.
EM, E step: relabel the unlabeled data using the trained classifier.
EM, M step: retrain the classifier on the relabeled data.
Continue EM iterations until the probabilistic labels on the unlabeled data converge.
EM with Naïve Bayes:
Randomly assign examples probabilistic category labels.
Use standard naïve Bayes training to learn a probabilistic model with parameters from the labeled data.
Until convergence or until maximum number of iterations reached:
  E step: Use the naïve Bayes model to compute P(ci | E) for each category and example, and relabel each example using these probability values as soft category labels.
  M step: Use standard naïve Bayes training to re-estimate the parameters using these new probabilistic category labels.
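A compact sketch of this loop, assuming two categories, binary (Bernoulli) features, and add-one style smoothing; the soft labels are used as fractional counts in training, as the pseudocode describes. All function names here are illustrative, not from the slides.

```python
# EM with naive Bayes over binary features and soft (probabilistic) labels.
import random

def train_nb(examples, soft_labels, n_classes=2, smooth=1.0):
    # M step: estimate P(c) and P(f=1|c) from fractional class counts.
    n_feats = len(examples[0])
    class_tot = [smooth] * n_classes
    feat_tot = [[smooth] * n_feats for _ in range(n_classes)]
    for x, probs in zip(examples, soft_labels):
        for c in range(n_classes):
            class_tot[c] += probs[c]
            for f in range(n_feats):
                if x[f]:
                    feat_tot[c][f] += probs[c]
    prior = [t / sum(class_tot) for t in class_tot]
    cond = [[feat_tot[c][f] / (class_tot[c] + smooth)
             for f in range(n_feats)] for c in range(n_classes)]
    return prior, cond

def classify(x, prior, cond):
    # Compute P(ci | E) proportional to P(ci) * prod_f P(x_f | ci), then normalize.
    scores = []
    for c in range(len(prior)):
        p = prior[c]
        for f, v in enumerate(x):
            p *= cond[c][f] if v else 1.0 - cond[c][f]
        scores.append(p)
    z = sum(scores)
    return [s / z for s in scores]

def em_naive_bayes(examples, iters=20, seed=0):
    rng = random.Random(seed)
    # Initialize: random probabilistic labels.
    labels = [[p, 1.0 - p] for p in (rng.random() for _ in examples)]
    for _ in range(iters):
        prior, cond = train_nb(examples, labels)               # M step
        labels = [classify(x, prior, cond) for x in examples]  # E step
    return labels

data = [(1, 1, 0, 0)] * 3 + [(0, 0, 1, 1)] * 3
soft = em_naive_bayes(data)
print(soft)  # one probability row per example, each summing to 1
```

For the semi-supervised variant described next, the only change is that the labeled examples keep their given labels fixed while the E step updates only the unlabeled ones.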
Semi-Supervised EM. (Figure: labeled examples (+/−) together with unlabeled examples are given to a probabilistic learner, which produces a probabilistic classifier; the classifier soft-labels the unlabeled examples and the learner is retrained.)
Continue retraining iterations until the probabilistic labels on the unlabeled data converge.