
K-medoid-style Clustering Algorithms for Supervised Summary Generation


### K-medoid-style Clustering Algorithms for Supervised Summary Generation


Nidal Zeidat & Christoph F. Eick

Dept. of Computer Science

University of Houston

Talk Outline

- What is Supervised Clustering?
- Representative-based Clustering Algorithms
- Benefits of Supervised Clustering
- Algorithms for Supervised Clustering
- Empirical Results
- Conclusion and Areas of Future Work

1. (Traditional) Clustering

- Partition a set of objects into groups of similar objects. Each group is called a cluster.
- Clustering is used to “detect classes” in a data set (“unsupervised learning”).
- Clustering is based on a fitness function that relies on a distance measure and usually tries to minimize the distance between objects within a cluster.

Supervised Clustering

- Assumes that clustering is applied to classified examples.
- The goal of supervised clustering is to identify class-uniform clusters that have high probability density; it prefers clusters whose members belong to a single class (low impurity).
- We would also like to keep the number of clusters low.

Supervised Clustering … (continued)

[Figure: two scatter plots over Attribute 1 and Attribute 2, contrasting the partitions found by traditional clustering and by supervised clustering on the same data.]

A Fitness Function for Supervised Clustering

q(X) := Impurity(X) + β*Penalty(k)

k: number of clusters used

n: number of examples in the dataset

c: number of classes in a dataset.

β: Weight for Penalty(k), 0< β ≤2.0
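The slide lists only the symbols; the concrete formulas for Impurity(X) and Penalty(k) below are taken from the authors' related supervised-clustering papers (Impurity as the fraction of minority examples, Penalty(k) = sqrt((k−c)/n) for k > c) and should be treated as an assumption. A minimal sketch:

```python
from collections import Counter, defaultdict
from math import sqrt

def q(labels, assignments, c, beta=0.1):
    """Supervised-clustering fitness q(X) = Impurity(X) + beta * Penalty(k).

    labels[i]      -- class of object i
    assignments[i] -- cluster id of object i
    c              -- number of classes; beta in (0, 2].
    Assumed definitions: Impurity(X) = (# minority examples)/n and
    Penalty(k) = sqrt((k - c)/n) for k > c, else 0.
    """
    n = len(labels)
    clusters = defaultdict(list)
    for lab, cl in zip(labels, assignments):
        clusters[cl].append(lab)
    # Minority examples: objects outside their cluster's majority class.
    minority = sum(len(ms) - Counter(ms).most_common(1)[0][1]
                   for ms in clusters.values())
    k = len(clusters)
    penalty = sqrt((k - c) / n) if k > c else 0.0
    return minority / n + beta * penalty
```

Note how the penalty only starts to bite once the number of clusters exceeds the number of classes, which is what keeps k small.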

2. Representative-Based Supervised Clustering (RSC)

- Aims at finding a set of objects (called representatives) among all objects in the data set that best represent the objects in the data set. Each representative corresponds to a cluster.
- The remaining objects in the data set are then clustered around these representatives by assigning each object to the cluster of the closest representative.
Remark: The popular k-medoid algorithm, also called PAM, is a representative-based clustering algorithm.

Representative-Based Supervised Clustering … (continued)

[Figure: objects plotted over Attribute1 and Attribute2, with four numbered representatives and the clusters formed around them.]
Objective of RSC: Find a subset O_R of O such that the clustering X obtained by using the objects in O_R as representatives minimizes q(X).
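Clustering around a chosen set of representatives then reduces to a nearest-representative assignment; `cluster_around` below is a hypothetical helper name, not from the slides:

```python
def cluster_around(objects, rep_indices, dist):
    """Assign every object to the cluster of its closest representative.
    Returns, for each object, the index (into objects) of that representative."""
    return [min(rep_indices, key=lambda r: dist(o, objects[r]))
            for o in objects]

# Four 1-D points clustered around representatives at indices 0 and 2:
print(cluster_around([0.0, 0.1, 5.0, 5.2], [0, 2], lambda a, b: abs(a - b)))
# -> [0, 0, 2, 2]
```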

Why do we use Representative-Based Clustering Algorithms?

- Representatives themselves are useful:
- can be used for summarization
- can be used for dataset compression

- Smaller search space compared with algorithms such as k-means.
- Less sensitive to outliers.
- Can be applied to datasets that contain nominal attributes (for which means cannot be computed).

3. Applications of Supervised Clustering

- Enhance classification algorithms.
- Use SC for Dataset Editing to enhance NN-classifiers [ICDM04]
- Improve Simple Classifiers [ICDM03]

- Learn Sub-classes / Summary Generation
- Distance Function Learning
- Dataset Compression/Reduction
- For Measuring the Difficulty of a Classification Task

Representative-Based Supervised Clustering: Dataset Editing

[Figure: two plots over Attribute1 and Attribute2 showing clusters A–F. a. Dataset clustered using supervised clustering. b. Dataset edited using cluster representatives.]

Representative-Based Supervised Clustering: Enhance Simple Classifiers

[Figure: illustrative dataset over Attribute1 and Attribute2.]

Representative-Based Supervised Clustering: Learning Sub-classes

[Figure: objects over Attribute1 and Attribute2, with Ford and GMC examples; supervised clusters reveal sub-classes such as Ford Trucks, Ford Vans, GMC Trucks, and GMC Van.]

4. Clustering Algorithms Currently Investigated

- Partitioning Around Medoids (PAM), a traditional clustering algorithm.
- Supervised Partitioning Around Medoids (SPAM).
- Single Representative Insertion/Deletion Steepest Decent Hill Climbing with Randomized Restart (SRIDHCR).
- Top Down Splitting Algorithm (TDS).
- Supervised Clustering using Evolutionary Computing (SCEC).

Algorithm SRIDHCR

- REPEAT r TIMES
- curr := a randomly created set of representatives (with size between c+1 and 2*c)
- WHILE NOT DONE DO
- Create new solutions S by adding a single non-representative to curr and by removing a single representative from curr.
- Determine the element s in S for which q(s) is minimal (if there is more than one minimal element, randomly pick one).
- IF q(s)<q(curr) THEN curr:=s
- ELSE IF q(s)=q(curr) AND |s|>|curr| THEN curr:=s
- ELSE terminate and return curr as the solution for this run.

- Report the best out of the r solutions found.
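The loop above can be sketched as follows. This is a toy implementation, not the authors' code: the fitness uses the assumed definitions Impurity(X) = (# minority examples)/n and Penalty(k) = sqrt((k−c)/n) for k > c, and ties are broken deterministically (preferring larger solutions) rather than randomly.

```python
import random
from collections import Counter, defaultdict
from math import sqrt

def sridhcr(objects, labels, c, dist, beta=0.1, restarts=10, seed=0):
    """Toy sketch of SRIDHCR: restart hill climbing that adds or removes a
    single representative per step and keeps the best solution found."""
    rng = random.Random(seed)
    n = len(objects)

    def fitness(reps):
        # Cluster around the representatives, then score q(X).
        assign = [min(reps, key=lambda j: dist(objects[i], objects[j]))
                  for i in range(n)]
        clusters = defaultdict(list)
        for i, rep in enumerate(assign):
            clusters[rep].append(labels[i])
        minority = sum(len(m) - Counter(m).most_common(1)[0][1]
                       for m in clusters.values())
        k = len(clusters)
        return minority / n + beta * (sqrt((k - c) / n) if k > c else 0.0)

    best, best_q = None, float('inf')
    for _ in range(restarts):
        # Random initial solution with between c+1 and 2*c representatives.
        curr = rng.sample(range(n), rng.randint(c + 1, min(2 * c, n)))
        curr_q = fitness(curr)
        while True:
            # Neighbours: add one non-representative or remove one representative.
            neighbours = [curr + [o] for o in range(n) if o not in curr]
            if len(curr) > 2:
                neighbours += [[x for x in curr if x != r] for r in curr]
            s_q, s = min(((fitness(s), s) for s in neighbours),
                         key=lambda t: (t[0], -len(t[1])))
            if s_q < curr_q or (s_q == curr_q and len(s) > len(curr)):
                curr, curr_q = s, s_q
            else:
                break  # No improving neighbour: this run is done.
        if curr_q < best_q:
            best, best_q = list(curr), curr_q
    return best, best_q
```

On a toy 1-D dataset with two well-separated classes, the hill climber shrinks the representative set down to one representative per class, reaching q = 0.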

Algorithm SPAM

- Build Initial Solution curr (given # of clusters k):
- Determine the medoid of the most frequent class in the dataset. Insert that object m into curr.
- For k-1 times, add an object v in the dataset (that is not already in curr) that gives the lowest value for q(X) for curr ∪ {v}.

- Improve Initial Solution curr:
- DO FOREVER
- FOR ALL representative objects r in curr DO
- FOR ALL non-representative objects o in the dataset DO
- Create a new solution v by clustering the dataset around the representative set curr − {r} ∪ {o} and insert v into S.
- Calculate q(v) for this clustering.

- IF the best solution v in S satisfies q(v) < q(curr) THEN curr := v
- ELSE TERMINATE, returning curr as the final solution.
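A comparable sketch of SPAM under the same assumed q(X): medoid seeding of the most frequent class, greedy insertions, then best-improvement swaps. `spam` is an illustrative implementation, not the authors' code.

```python
from collections import Counter, defaultdict
from math import sqrt

def spam(objects, labels, k, c, dist, beta=0.1):
    """Toy sketch of SPAM: greedy construction of k representatives, then
    best-improvement swaps (replace one representative by a non-representative)."""
    n = len(objects)

    def fitness(reps):
        assign = [min(reps, key=lambda j: dist(objects[i], objects[j]))
                  for i in range(n)]
        clusters = defaultdict(list)
        for i, rep in enumerate(assign):
            clusters[rep].append(labels[i])
        minority = sum(len(m) - Counter(m).most_common(1)[0][1]
                       for m in clusters.values())
        kk = len(clusters)
        return minority / n + beta * (sqrt((kk - c) / n) if kk > c else 0.0)

    # Initial solution: medoid of the most frequent class, then k-1 greedy inserts.
    top = Counter(labels).most_common(1)[0][0]
    members = [i for i in range(n) if labels[i] == top]
    medoid = min(members, key=lambda m: sum(dist(objects[m], objects[j])
                                            for j in members))
    curr = [medoid]
    for _ in range(k - 1):
        candidates = [v for v in range(n) if v not in curr]
        curr.append(min(candidates, key=lambda v: fitness(curr + [v])))
    curr_q = fitness(curr)

    # Improvement: evaluate every (representative, non-representative) swap
    # and take the best one; stop when no swap improves q.
    while True:
        swaps = [[x for x in curr if x != r] + [o]
                 for r in curr for o in range(n) if o not in curr]
        best = min(swaps, key=fitness)
        best_q = fitness(best)
        if best_q < curr_q:
            curr, curr_q = best, best_q
        else:
            return curr, curr_q
```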

Differences between SPAM and SRIDHCR

- SPAM tries to improve the current solution by replacing a representative with a non-representative, whereas SRIDHCR improves the current solution by removing a representative or by inserting a non-representative.
- SPAM is run keeping the number of clusters k fixed, whereas SRIDHCR searches for a “good” value of k, therefore exploring a larger solution space. However, in the case of SRIDHCR, which choices for k are good is somewhat restricted by the selection of the parameter β.
- SRIDHCR is run r times starting from a random initial solution, whereas SPAM is only run once.

5. Performance Measures for the Experimental Evaluation

The investigated algorithms were evaluated based on the following performance measures:

- Cluster Purity (Majority %).
- Value of the fitness function q(X).
- Average dissimilarity between all objects and their representatives (cluster tightness).
- Wall-Clock Time (WCT). Actual time, in seconds, that the algorithm took to finish the clustering task.
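The tightness measure above can be computed directly from the representative set; `tightness` is a hypothetical helper name:

```python
def tightness(objects, rep_indices, dist):
    """Average dissimilarity between each object and its closest
    representative (the representative of the cluster it is assigned to)."""
    return sum(min(dist(o, objects[r]) for r in rep_indices)
               for o in objects) / len(objects)

# Example: representatives at indices 0 and 2 of four 1-D points.
print(tightness([0.0, 1.0, 10.0, 11.0], [0, 2], lambda a, b: abs(a - b)))
# -> 0.5
```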

Table 5: Comparative Performance of the Different Algorithms, β=0.1

Table 6: Average Comparative Performance of the Different Algorithms, β=0.4

Why is SRIDHCR performing so much better than SPAM?

- SPAM is relatively slow compared with a single run of SRIDHCR, allowing for 5-30 restarts of SRIDHCR using the same resources. This enables SRIDHCR to conduct a more balanced exploration of the search space.
- The fitness landscape induced by q(X) contains many plateau-like structures (q(X1)=q(X2)) and many local minima, and SPAM seems to get stuck in them more easily.
- The fact that SPAM uses a fixed k-value does not seem beneficial for finding good solutions; e.g., SRIDHCR might explore {u1,u2,u3,u4} … {u1,u2,u3,u4,v1,v2} … {u3,u4,v1,v2}, whereas SPAM might terminate with the sub-optimal solution {u1,u2,u3,u4} if neither the replacement of u1 by v1 nor the replacement of u2 by v2 enhances q(X).

Table 7: Ties Distribution

Figure 2: How Purity and k Change as β Increases

6. Conclusions

- As expected, supervised clustering algorithms produced significantly better cluster purity than traditional clustering. Improvements range between 7% and 19% for different data sets.
- Algorithms that too greedily explore the search space, such as SPAM, do not seem to be very suitable for supervised clustering. In general, algorithms that explore the search space more randomly seem to be more suitable for supervised clustering.
- Supervised clustering can be used to enhance classifiers, summarize datasets, and learn better distance functions.

Future Work

- Continue work on supervised clustering algorithms
- Find better solutions
- Faster
- Explain some observations

- Using supervised clustering for summary generation/learning subclasses
- Using supervised clustering to find “compressed” nearest neighbor classifiers.
- Using supervised clustering to enhance simple classifiers
- Distance function learning
