
Performance guarantees for hierarchical clustering

Sanjoy Dasgupta, University of California, San Diego

Philip Long, Genomics Institute of Singapore

Hierarchical clustering

Recursive partitioning of a data set

[Figure: five points and the dendrogram over them; cutting the dendrogram at successive levels induces the 1-, 2-, 3-, 4-, and 5-clusterings.]

Popular form of data analysis
  • No need to specify number of clusters
  • Can view data at many levels of granularity, all at the same time
  • Simple heuristics for constructing hierarchical clusterings
Applications
  • Has long been used by biologists and social scientists
  • A standard part of the statistician’s toolbox since the 60s or 70s
  • Recently: common tool for analyzing gene expression data
Performance guarantees

There are many simple greedy schemes for constructing hierarchical clusterings.

But are these resulting clusterings any good?

Or are they pretty arbitrary?

One basic problem

In fact, the whole enterprise of hierarchical clustering could use some more justification.

For example:

An existence question

Must there always exist a hierarchical clustering which is close to optimal at every level of granularity, simultaneously?

[I.e., such that for all k, the induced k-clustering is close to the best k-clustering?]

What is the best k-clustering?

The k-clustering problem.

Input: data points in a metric space; k

Output: a partition of the points into k clusters C1, …, Ck with centers m1, …, mk

Goal: minimize cost of the clustering

Cost functions for clustering

Two cost functions which are commonly used:

Maximum radius (k-center):

max { d(x, mi) : i = 1…k, x in Ci }

Average radius (k-median):

avg { d(x, mi) : i = 1…k, x in Ci }

Both yield NP-hard optimization problems, but have constant-factor approximation algorithms.
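To make the two objectives concrete, here is a minimal Python sketch that evaluates both costs for a fixed clustering; the data, helper names, and assignment are illustrative, not from the talk.

```python
import math

def dist(x, y):
    """Euclidean distance; any metric works for what follows."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def k_center_cost(points, centers, assign):
    """Maximum radius: distance of the worst-off point to its center."""
    return max(dist(x, centers[assign[i]]) for i, x in enumerate(points))

def k_median_cost(points, centers, assign):
    """Average radius: mean distance of points to their centers."""
    return sum(dist(x, centers[assign[i]]) for i, x in enumerate(points)) / len(points)

# Illustrative data: five points, two clusters.
points = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9)]
centers = [(0, 0), (9, 9)]
assign = [0, 0, 0, 1, 1]          # assign[i] = index of point i's center
print(k_center_cost(points, centers, assign))   # 1.0
print(k_median_cost(points, centers, assign))   # 0.6
```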

Our main result

Adopt the maximum-radius cost function.

Our algorithm returns a hierarchical clustering such that for every k, the induced k-clustering is guaranteed to be within a factor of eight of optimal.

Standard heuristics
  • The standard heuristics for hierarchical clustering are greedy and work bottom-up:

single-linkage, average-linkage, complete-linkage

  • Their k-clusterings can be off by a factor of:

-- at least log2 k (average-, complete-linkage);

-- at least k (single-linkage).

  • Our algorithm is similar in efficiency and simplicity, but works top-down.
A heuristic for k-clustering

[Hochbaum and Shmoys, 1985]

E.g., k = 4.

[Figure: four centers chosen greedily among the points; R is the distance from the farthest remaining point to its nearest center.]

This 4-clustering has cost R ≤ 2·OPT4
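The heuristic itself is a few lines. A sketch, assuming the `dist` helper from the previous snippet:

```python
def farthest_first_centers(points, k, dist):
    """Hochbaum-Shmoys heuristic: repeatedly add the point farthest from
    the centers chosen so far. The k-clustering this induces has maximum
    radius at most twice optimal."""
    centers = [0]                                   # start from an arbitrary point
    nearest = [dist(p, points[0]) for p in points]  # distance to closest center
    while len(centers) < k:
        far = max(range(len(points)), key=lambda i: nearest[i])
        centers.append(far)
        nearest = [min(nearest[i], dist(points[i], points[far]))
                   for i in range(len(points))]
    return centers, max(nearest)                    # center indices and radius R
```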

Algorithm: step one

Number all points by farthest-first traversal.

[Figure: ten points numbered 1–10 by farthest-first traversal, with the distances R2, …, R6 marked.]

For all k, the k-clustering defined by centers {1, 2, …, k} has radius Rk+1 ≤ 2·OPTk. (Note: R2 ≥ R3 ≥ … ≥ Rn.)
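Step one runs the same greedy loop to completion, recording the order and the R values. A sketch, again assuming the `dist` helper:

```python
def farthest_first_numbering(points, dist):
    """Number every point by farthest-first traversal. order[r] is the
    (r+1)-th point chosen; R[r] is its distance to all earlier points,
    so R[1], R[2], ... play the roles of R2, R3, ... above."""
    n = len(points)
    order, R = [0], [float("inf")]                 # R1 is undefined
    nearest = [dist(p, points[0]) for p in points]
    while len(order) < n:
        far = max(range(n), key=lambda i: nearest[i])
        order.append(far)
        R.append(nearest[far])
        nearest = [min(nearest[i], dist(points[i], points[far])) for i in range(n)]
    return order, R                                # R2 >= R3 >= ... >= Rn
```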

A possible hierarchical clustering

[Figure: the hierarchy drawn over the ten points, with each point j linked to its parent by an edge of length Rj (R2 through R10 shown).]

Hierarchical clustering specified by parent function:

p(j) = closest point to j in {1,2,…,j-1}.

Note: Rk = d(k, p(k))

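In code, p is just a nearest-neighbour lookup among earlier-numbered points; a sketch, given the `order` produced by `farthest_first_numbering` above:

```python
def parent_function(points, order, dist):
    """p(j) = closest point to j among the points numbered before it.
    By construction of the traversal, d(j, p(j)) = Rj."""
    parent = {}
    for r in range(1, len(order)):
        j = order[r]
        parent[j] = min(order[:r], key=lambda i: dist(points[j], points[i]))
    return parent
```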

Algorithm: step two

Divide points into levels of granularity.

Set R = R2 and fix some b > 1.

The jth level has points {i : R/b^j ≥ Ri > R/b^(j+1)}.

[Figure: the ten numbered points partitioned into levels of granularity.]

Algorithm: step two, cont’d

[Figure: the hierarchy redrawn so that each point is linked to a parent at a lower level of granularity.]
Different parent function:

p*(j) = closest point to j at a lower level of granularity

Algorithm: summary
  • Number the points by farthest-first traversal; note the values Ri = d(i, {1,2,…, i-1}).
  • Choose R = a R2.
  • L(0) = {1}; for j > 0, L(j) = {i : R/b^(j-1) ≥ Ri > R/b^j}.
  • If point i is in L(j),

p*(i) = closest point to i in L(0), …, L(j-1).

Theorem: Fix a = 1, b = 2. If the data points lie in a metric space, then for all k simultaneously, the induced k-clustering is within a factor of eight of optimal.
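Putting the pieces together, a minimal sketch of the full algorithm, reusing `farthest_first_numbering` from above; the level rule follows the summary's convention, and distinct points (Ri > 0) are assumed:

```python
def hierarchical_clustering(points, dist, a=1.0, b=2.0):
    """Top-down hierarchical clustering with per-level guarantees.
    Returns p*, mapping each point to a parent at a lower level of
    granularity. Assumes distinct points, i.e. Ri > 0."""
    order, R = farthest_first_numbering(points, dist)
    Rtop = a * R[1]                        # R = a * R2
    def level(r):
        if r == 0:
            return 0                       # L(0) = {1}
        j = 1
        while R[r] <= Rtop / b ** j:       # find j with R/b^(j-1) >= Rr > R/b^j
            j += 1
        return j
    levels = [level(r) for r in range(len(order))]
    parent = {}
    for r in range(1, len(order)):
        j = order[r]
        # p*(j) = closest point in L(0), ..., L(level(j) - 1)
        lower = [order[s] for s in range(len(order)) if levels[s] < levels[r]]
        parent[j] = min(lower, key=lambda i: dist(points[j], points[i]))
    return parent
```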

Randomization trick

Pick a from the distribution b^U, with U uniform on [0,1]. Set b = e.

Then for all k, the induced k-clustering has expected cost at most 2e ≈ 5.44 times optimal.

Thanks to Rajeev Motwani for suggesting this.
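A sketch of that choice, under the assumption that the slide's distribution means a = b^U with U drawn uniformly from [0, 1]:

```python
import math, random

b = math.e
a = b ** random.random()   # a = b^U with U ~ uniform[0, 1] (assumed reading)
parent = hierarchical_clustering(points, dist, a=a, b=b)
```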

Standard agglomerative heuristics
  • Initially each point is its own cluster.
  • Repeatedly merge the two “closest” clusters.

Need to define distance between clusters…

Single-linkage: distance between closest pair of points

Average-linkage: distance between centroids

Complete-linkage: distance between farthest pair
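For comparison, a naive sketch of the generic agglomerative scheme with a pluggable linkage, assuming the `dist` helper from earlier; real implementations avoid the quadratic rescan of cluster pairs.

```python
def agglomerative(points, linkage):
    """Generic bottom-up scheme: start with singleton clusters and
    repeatedly merge the two closest under the given linkage."""
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        _, i, j = min((linkage(clusters[i], clusters[j]), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        merges.append((clusters[i], clusters[j]))
        clusters[i] = clusters[i] + clusters.pop(j)   # merge j into i
    return merges                                     # the hierarchy, bottom-up

# Single linkage: closest pair of points across the two clusters.
single = lambda A, B: min(dist(a, b) for a in A for b in B)
# Complete linkage: farthest pair of points across the two clusters.
complete = lambda A, B: max(dist(a, b) for a in A for b in B)
# e.g. merges = agglomerative(points, single)
```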

Single-linkage clustering

Chaining effect.

[Figure: points 1, 2, …, n spaced along a line, with consecutive gaps shrinking slightly (1 − jδ between points j and j+1), so single linkage merges them into one long chain.]

The k-clustering will have diameter about n-k, instead of n/k.

Therefore: off by a factor of k.

Average-linkage clustering

Points in d-dimensional space, d = log2 k, under the l1 metric.

The final radius should be 1, but is instead d.

Therefore: off by a factor of log2 k.

Complete-linkage clustering

Can similarly construct a bad case…

Off by a factor of at least log2 k.

Summary

There is a basic existence question about hierarchical clustering which needs to be addressed:

must there always exist a hierarchical clustering in which, for each k, the induced k-clustering is close to optimal?

It turns out the answer is yes.

Summary, cont’d

In fact, there is a simple, fast algorithm to construct such hierarchical clusterings.

Meanwhile, the standard agglomerative heuristics do not always produce close-to-optimal clusterings.

Where next?
  • Reduce the approximation factor.
  • Other cost functions for clustering.
  • For average- and complete-linkage, is the log k lower bound also an upper bound?
  • Local improvement procedures for hierarchical clustering?