Data Mining: Concepts and Techniques
Cluster Analysis
Li Xiong

Slide credits: Jiawei Han and Micheline Kamber

Tan, Steinbach, Kumar

Data Mining: Concepts and Techniques


Chapter 7. Cluster Analysis

  • Overview

  • Partitioning methods

  • Hierarchical methods

  • Density-based methods

  • Other Methods

  • Outlier analysis

  • Summary

Data Mining: Concepts and Techniques


What is Cluster Analysis?

(Figure: clusters in which intra-cluster distances are minimized and inter-cluster distances are maximized)

  • Finding groups of objects (clusters)

    • Objects in the same group are similar to one another

    • Objects in different groups are dissimilar to one another

  • Unsupervised learning

Data Mining: Concepts and Techniques


Clustering Applications

  • Marketing research

  • Social network analysis

Li Xiong


Clustering Applications

  • WWW: Documents and search results clustering

Data Mining: Concepts and Techniques


Clustering Applications

  • Earthquake studies

Li Xiong


Clustering Applications

  • Biology: plants and animals

  • Bioinformatics: microarray data, genes and sequences

Example gene-expression matrix (genes by time points):

            Time X   Time Y   Time Z
  Gene 1      10       8        10
  Gene 2      10       0         9
  Gene 3       4       8.6       3
  Gene 4       7       8         3
  Gene 5       1       2         3

Li Xiong


Requirements of Clustering

  • Scalability

  • Ability to deal with different types of attributes

  • Ability to handle dynamic data

  • Ability to deal with noise and outliers

  • Ability to deal with high dimensionality

  • Minimal requirements for domain knowledge to determine input parameters

  • Incorporation of user-specified constraints

  • Interpretability and usability

Data Mining: Concepts and Techniques


Quality: What Is Good Clustering?

(Figure: intra-cluster distances are minimized; inter-cluster distances are maximized)

  • Agreement with “ground truth”

  • A good clustering will produce high quality clusters with

    • Homogeneity - high intra-class similarity

    • Separation - low inter-class similarity

Data Mining: Concepts and Techniques



Similarity or Dissimilarity between Data Objects

  • Euclidean distance

  • Manhattan distance

  • Minkowski distance

  • Weighted variants of the above (a small sketch follows)
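A minimal sketch of these distances in Python with NumPy; the weighted form shown (per-attribute weights inside the sum) is one common convention and is an assumption here:

import numpy as np

def minkowski(x, y, q=2, w=None):
    # Minkowski distance of order q: q=1 gives Manhattan, q=2 gives Euclidean
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    return float((w * np.abs(x - y) ** q).sum() ** (1.0 / q))

x, y = np.array([1.0, 2.0, 3.0]), np.array([4.0, 6.0, 3.0])
print(minkowski(x, y, q=1))  # Manhattan: |1-4| + |2-6| + |3-3| = 7.0
print(minkowski(x, y, q=2))  # Euclidean: sqrt(9 + 16 + 0) = 5.0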

Li Xiong


Other Similarity or Dissimilarity Metrics

  • Pearson correlation

  • Cosine measure

  • KL divergence, Bregman divergence, …

Li Xiong


Different Attribute Types

  • To compute the per-attribute distance dij(f), the treatment depends on the type of attribute f:

    • f is continuous

      • Normalization if necessary

      • Logarithmic transformation for ratio-scaled values

    • f is ordinal

      • Mapping by rank

    • f is categorical

      • Simple matching: dij(f) = 0 if xif = xjf, or dij(f) = 1 otherwise

      • Hamming distance (edit distance) for strings

Data Mining: Concepts and Techniques


Clustering Approaches

  • Partitioning approach:

    • Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of square errors

    • Typical methods: k-means, k-medoids, CLARANS

  • Hierarchical approach:

    • Create a hierarchical decomposition of the set of data (or objects) using some criterion

    • Typical methods: DIANA, AGNES, BIRCH, ROCK, CHAMELEON

  • Density-based approach:

    • Based on connectivity and density functions

    • Typical methods: DBSCAN, OPTICS, DenClue

  • Others

Data Mining: Concepts and Techniques


Chapter 7. Cluster Analysis

  • Overview

  • Partitioning methods

  • Hierarchical methods

  • Density-based methods

  • Other Methods

  • Outlier analysis

  • Summary

Data Mining: Concepts and Techniques


Partitioning Algorithms: Basic Concept

  • Partitioning method: construct a partition of a database D of n objects into a set of k clusters such that the sum of squared distances is minimized

  • Given a k, find a partition of k clusters that optimizes the chosen partitioning criterion

    • Global optimal: exhaustively enumerate all partitions

    • Heuristic methods: k-means and k-medoids algorithms

    • k-means (MacQueen’67): Each cluster is represented by the center of the cluster

    • k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw’87): Each cluster is represented by one of the objects in the cluster

Data Mining: Concepts and Techniques


K-Means Clustering: Lloyd Algorithm

1. Given k, randomly choose k initial cluster centers

2. Partition objects into k nonempty subsets by assigning each object to the cluster with the nearest centroid

3. Update each centroid, i.e., the mean point of its cluster

4. Go back to Step 2; stop when there are no new assignments

A minimal sketch follows.
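A toy NumPy implementation of these steps (it assumes the data matrix X is given and that no cluster becomes empty during the run):

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # step 1: random initial centers
    for _ in range(max_iter):
        # step 2: assign each object to the cluster with the nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 3: update each centroid to the mean point of its cluster
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):  # step 4: stop when nothing moves
            break
        centers = new_centers
    return labels, centers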

Data Mining: Concepts and Techniques


The K-Means Clustering Method

  • Example

(Figure: with K = 2, arbitrarily choose K objects as initial cluster centers, assign each object to the most similar center, update the cluster means, then reassign and repeat until assignments stabilize)

Data Mining: Concepts and Techniques


K-means Clustering – Details

  • Initial centroids are often chosen randomly.

  • The centroid is (typically) the mean of the points in the cluster.

  • ‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc.

  • Most of the convergence happens in the first few iterations.

    • Often the stopping condition is changed to ‘Until relatively few points change clusters’

  • Complexity is O(tkn), where n is # objects, k is # clusters, and t is # iterations.


Comments on the K-Means Method

  • Strength

    • Simple and works well for “regular” disjoint clusters

    • Relatively efficient and scalable (normally, k, t << n)

  • Weakness

    • Need to specify k, the number of clusters, in advance

    • Depending on initial centroids, may terminate at a local optimum

      • Potential solutions include running the algorithm several times with different initial centroids

    • Unable to handle noisy data and outliers

    • Not suitable for clusters of

      • Different sizes

      • Non-convex shapes

Data Mining: Concepts and Techniques




Limitations of K-means: Differing Sizes

(Figure: original points vs. the K-means result with 3 clusters)


Limitations of K-means: Non-convex Shapes

(Figure: original points vs. the K-means result with 2 clusters)


Overcoming K-means Limitations

(Figure: original points vs. K-means clusters)


Overcoming K-means Limitations

(Figure: original points vs. K-means clusters)


Variations of the K-Means Method

  • Several variants of k-means differ in

    • Selection of the initial k means

    • Dissimilarity calculations

    • Strategies to calculate cluster means

  • Handling categorical data: k-modes (Huang’98)

    • Replacing means of clusters with modes

    • Using new dissimilarity measures to deal with categorical objects

    • Using a frequency-based method to update modes of clusters

    • A mixture of categorical and numerical data: k-prototype method

Data Mining: Concepts and Techniques



What Is the Problem of the K-Means Method?

  • The k-means algorithm is sensitive to outliers!

    • An object with an extremely large value may substantially distort the distribution of the data.

  • K-Medoids: instead of taking the mean value of the objects in a cluster as a reference point, use the medoid, the most centrally located object in the cluster.

Data Mining: Concepts and Techniques


The K-Medoids Clustering Method

PAM (Kaufman and Rousseeuw, 1987)

1. Arbitrarily select k objects as medoids

2. Assign each remaining data object to the most similar medoid

3. Randomly select a non-medoid object O'

4. Compute the total cost S of swapping a medoid with O' (cost as total sum of absolute error)

5. If S < 0, swap the medoid with the new object

6. Repeat until there is no change in the medoids

Each iteration compares the k medoids pair-wise against the (n − k) non-medoid instances; a minimal sketch follows.
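A naive sketch of this swap loop over a precomputed n × n distance matrix D, for illustration only (not an optimized PAM):

import numpy as np

def pam(D, k, seed=0):
    n = len(D)
    medoids = list(np.random.default_rng(seed).choice(n, size=k, replace=False))

    def cost(meds):
        # total sum of absolute error: distance of each object to its nearest medoid
        return D[:, meds].min(axis=1).sum()

    changed = True
    while changed:                                   # repeat until no change in the medoids
        changed = False
        for i in range(k):
            for o in range(n):
                if o in medoids:
                    continue
                candidate = medoids[:i] + [o] + medoids[i + 1:]
                if cost(candidate) < cost(medoids):  # swapping cost S < 0
                    medoids, changed = candidate, True
    return medoids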

Data Mining: Concepts and Techniques


A Typical K-Medoids Algorithm (PAM)

(Figure: with K = 2, arbitrarily choose k objects as initial medoids and assign each remaining object to the nearest medoid (total cost = 20); then repeatedly select a random non-medoid object Orandom, compute the total cost of swapping (here 26), and swap O and Orandom if quality is improved, looping until no change)

Data Mining: Concepts and Techniques


What Is the Problem with PAM?

  • PAM is more robust than k-means in the presence of noise and outliers

  • PAM works efficiently for small data sets but does not scale well to large data sets

    • Complexity: O(k(n − k)² t), where n is # of data points, k is # of clusters, and t is # of iterations

  • Sampling-based method: CLARA (Clustering LARge Applications)

Data Mining: Concepts and Techniques


CLARA (Clustering Large Applications) (1990)

  • CLARA (Kaufmann and Rousseeuw in 1990)

  • It draws multiple samples of the data set, applies PAM on each sample, and gives the best clustering as the output

  • Strength: deals with larger data sets than PAM

  • Weakness:

    • Efficiency depends on the sample size

    • A good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased

Data Mining: Concepts and Techniques


CLARANS (“Randomized” CLARA) (1994)

  • CLARANS (A Clustering Algorithm based on Randomized Search) (Ng and Han’94)

  • The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids

    • PAM examines neighbors for local minimum

    • CLARA works on subgraphs of samples

    • CLARANS examines neighbors dynamically

      • If a local optimum is found, CLARANS starts from a new randomly selected node to search for another local optimum

Data Mining: Concepts and Techniques


Chapter 7. Cluster Analysis

  • Overview

  • Partitioning methods

  • Hierarchical methods and graph-based methods

  • Density-based methods

  • Other Methods

  • Outlier analysis

  • Summary

Data Mining: Concepts and Techniques


Hierarchical Clustering

  • Produces a set of nested clusters organized as a hierarchical tree

  • Can be visualized as a dendrogram

    • A tree-like diagram representing a hierarchy of nested clusters

    • Clustering obtained by cutting at desired level


Strengths of Hierarchical Clustering

  • Do not have to assume any particular number of clusters

  • May correspond to meaningful taxonomies


Hierarchical Clustering

  • Two main types of hierarchical clustering

    • Agglomerative:

      • Start with the points as individual clusters

      • At each step, merge the closest pair of clusters until only one cluster (or k clusters) left

    • Divisive:

      • Start with one, all-inclusive cluster

      • At each step, split a cluster until each cluster contains a point (or there are k clusters)


Agglomerative Clustering Algorithm

  • Compute the proximity matrix

  • Let each data point be a cluster

  • Repeat:

    • Merge the two closest clusters

    • Update the proximity matrix

  • Until only a single cluster remains

A minimal sketch with SciPy follows.
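SciPy's linkage routine performs essentially this loop internally (start from singletons, repeatedly merge the two closest clusters, update proximities); the toy data X here is an assumption:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.default_rng(0).normal(size=(20, 2))  # toy data
Z = linkage(X, method="single")                    # "single" = MIN; "complete" and "average" also work
labels = fcluster(Z, t=3, criterion="maxclust")    # cut the dendrogram into 3 clusters
print(labels)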


Starting Situation

  • Start with clusters of individual points and a proximity matrix

(Figure: points p1, …, p5 and their pairwise proximity matrix)


Intermediate Situation

(Figure: clusters C1, …, C5 and the corresponding proximity matrix after several merges)


How to Define Inter-Cluster Similarity

(Figure: two clusters and the proximity matrix; which entries determine their similarity?)


Distance Between Clusters

  • Single Link: smallest distance between points

  • Complete Link: largest distance between points

  • Average Link: average distance between points

  • Centroid: distance between centroids (a small sketch follows)
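A small sketch of these four definitions for two clusters given as point arrays A and B (hypothetical helper functions built on SciPy's cdist):

import numpy as np
from scipy.spatial.distance import cdist

def single_link(A, B):    return cdist(A, B).min()    # smallest pairwise distance
def complete_link(A, B):  return cdist(A, B).max()    # largest pairwise distance
def average_link(A, B):   return cdist(A, B).mean()   # average pairwise distance
def centroid_link(A, B):  return float(np.linalg.norm(A.mean(axis=0) - B.mean(axis=0)))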


Hierarchical Clustering: MIN

(Figure: six points clustered with MIN linkage, shown as nested clusters and as a dendrogram)


MST (Minimum Spanning Tree)

  • Start with a tree that consists of any point

  • In successive steps, look for the closest pair of points (p, q) such that one point (p) is in the current tree but the other (q) is not

  • Add q to the tree and put an edge between p and q


Min vs. Max vs. Group Average

(Figure: the same six points clustered with MIN, MAX, and Group Average linkage, yielding different nested clusters)


Strength of MIN

(Figure: original points correctly separated into two clusters)

  • Can handle non-elliptical shapes


Limitations of MIN

(Figure: original points split into two clusters)

  • Sensitive to noise and outliers


Strength of MAX

(Figure: original points split into two clusters)

  • Less susceptible to noise and outliers


Limitations of MAX

(Figure: original points split into two clusters)

  • Tends to break large clusters

  • Biased towards globular clusters


Hierarchical Clustering: Group Average

  • Compromise between Single and Complete Link

  • Strengths

    • Less susceptible to noise and outliers

  • Limitations

    • Biased towards globular clusters


Hierarchical Clustering: Major Weaknesses

  • Do not scale well (N: number of points)

    • Space complexity: O(N²)

    • Time complexity: O(N³); O(N² log N) for some cases/approaches

  • Cannot undo what was done previously

  • Quality varies in terms of distance measures

    • MIN (single link): susceptible to noise/outliers

    • MAX/GROUP AVERAGE: may not work well with non-globular clusters


Recent Hierarchical Clustering Methods

  • BIRCH (1996): uses CF-tree and incrementally adjusts the quality of sub-clusters

  • CURE(1998): uses representative points for inter-cluster distance

  • ROCK (1999): clustering categorical data by neighbor and link analysis

  • CHAMELEON (1999): hierarchical clustering using dynamic modeling

Data Mining: Concepts and Techniques


BIRCH

  • Birch: Balanced Iterative Reducing and Clustering using Hierarchies (Zhang, Ramakrishnan & Livny, SIGMOD’96)

  • Main ideas:

    • Use in-memory clustering feature to summarize data/cluster

      • Minimize database scans and I/O cost

    • Use hierarchical clustering for microclustering and other clustering methods (e.g. partitioning) for macroclustering

      • Fix the problems of hierarchical clustering

  • Features:

    • Scales linearly: single scan and improves the quality with a few additional scans

    • Handles only numeric data and is sensitive to the order of the data records

Data Mining: Concepts and Techniques


Cluster Statistics

Given a cluster of N instances {xi}:

Centroid: x0 = (Σi xi) / N

Radius: R = sqrt( Σi (xi − x0)² / N ), based on the average distance from member points to the centroid

Diameter: D = sqrt( Σi Σj (xi − xj)² / (N(N − 1)) ), based on the average pair-wise distance within the cluster


Inter-Cluster Distance

Given two clusters with centroids C1, C2 and sizes N1, N2:

Centroid Euclidean distance: D0 = ||C1 − C2||

Centroid Manhattan distance: D1 = Σd |C1d − C2d|

Average distance: D2 = sqrt( Σi∈cluster1 Σj∈cluster2 (xi − xj)² / (N1 N2) )


Clustering Feature (CF)

CF = (N, LS, SS): the number of points N, the linear sum LS, and the square sum SS of the points in a sub-cluster.

Example: the points (3,4), (2,6), (4,5), (4,7), (3,8) have CF = (5, (16,30), (54,190))
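A small sketch computing a clustering feature for these points and checking the additivity property that BIRCH relies on when merging sub-clusters:

import numpy as np

def cf(points):
    P = np.asarray(points, dtype=float)
    return len(P), P.sum(axis=0), (P ** 2).sum(axis=0)  # (N, linear sum LS, square sum SS)

pts = [(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]
N, LS, SS = cf(pts)
print(N, LS, SS)  # 5 [16. 30.] [54. 190.]

# Additivity: the CF of a merged cluster is the entry-wise sum of the parts' CFs
N1, LS1, SS1 = cf(pts[:2])
N2, LS2, SS2 = cf(pts[2:])
assert N1 + N2 == N and np.allclose(LS1 + LS2, LS) and np.allclose(SS1 + SS2, SS)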


Properties of Clustering Feature

  • CF entry is more compact

    • Stores significantly less than all of the data points in the sub-cluster

  • A CF entry has sufficient information to calculate statistics about the cluster and intra-cluster distances

  • Additivity theorem allows us to merge sub-clusters incrementally & consistently


Hierarchical CF-Tree

  • A CF tree is a height-balanced tree that stores the clustering features for a hierarchical clustering

    • A nonleaf node in a tree has descendants or “children”

    • The nonleaf nodes store sums of the CFs of their children

  • A CF tree has two parameters

    • Branching factor: the maximum number of children per node

    • Threshold: the maximum diameter of sub-clusters stored at the leaf nodes

Data Mining: Concepts and Techniques


The CF Tree Structure

(Figure: a CF tree; the root holds entries CF1 … CF6 with child pointers, non-leaf nodes hold entries CF1 … CF5 with child pointers, and leaf nodes hold CF entries chained by prev/next pointers)

Data Mining: Concepts and Techniques


CF-Tree Insertion

  • Traverse down from root, find the appropriate leaf

    • Follow the "closest"-CF path, w.r.t. intra-cluster distance measures

  • Modify the leaf

    • If the closest-CF leaf cannot absorb, make a new CF entry.

    • If there is no room for the new entry, split the node (splits may propagate up the tree)

  • Traverse back & up

    • Updating CFs on the path or splitting nodes



The Algorithm: BIRCH

  • Phase 1: Scan database to build an initial in-memory CF-tree

    • Subsequent phases become fast, accurate, less order sensitive

  • Phase 2: Condense data (optional)

    • Rebuild the CF-tree with a larger threshold T

  • Phase 3: Global clustering

    • Use existing clustering algorithm on CF entries

    • Helps fix problem where natural clusters span nodes

  • Phase 4: Cluster refining (optional)

    • Do additional passes over the dataset & reassign data points to the closest centroid from phase 3


CURE

  • CURE: An Efficient Clustering Algorithm for Large Databases (1998), Sudipto Guha, Rajeev Rastogi, Kyuseok Shim

  • Main ideas:

    • Use representative points for inter-cluster distance

    • Random sampling and partitioning

  • Features:

    • Handles non-spherical shapes and arbitrary sizes better


CURE: Cluster Points

  • Uses a number of points to represent a cluster

  • Representative points are found by selecting a constant number of points from a cluster and then “shrinking” them toward the center of the cluster

    • How to shrink?

  • Cluster similarity is the similarity of the closest pair of representative points from different clusters


Experimental Results: CURE

Picture from CURE, Guha, Rastogi, Shim.


Experimental Results: CURE

(Figure: CURE results compared with centroid-based and single-link clustering; picture from CURE, Guha, Rastogi, Shim.)



Clustering Categorical Data: The ROCK Algorithm

  • ROCK: RObust Clustering using linKs

    • S. Guha, R. Rastogi & K. Shim, ICDE’99

  • Major ideas

    • Use links to measure similarity/proximity

    • Sampling-based clustering

  • Features:

    • More meaningful clusters

    • Emphasizes interconnectivity but ignores proximity

Data Mining: Concepts and Techniques


Similarity Measure in ROCK

  • Market basket data clustering

  • Jaccard coefficient-based similarity function: sim(T1, T2) = |T1 ∩ T2| / |T1 ∪ T2|

  • Example: Two groups (clusters) of transactions

    • C1. <a, b, c, d, e>

      • {a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e}, {a, d, e}, {b, c, d}, {b, c, e}, {b, d, e}, {c, d, e}

    • C2. <a, b, f, g>

      • {a, b, f}, {a, b, g}, {a, f, g}, {b, f, g}

    • Let T1 = {a, b, c}, T2 = {c, d, e}, T3 = {a, b, f}

  • The Jaccard coefficient may lead to a wrong clustering result: sim(T1, T2) = 1/5 = 0.2 even though T1 and T2 belong to the same cluster, while sim(T1, T3) = 2/4 = 0.5 even though they belong to different clusters

Data Mining: Concepts and Techniques


Link Measure in ROCK

  • Neighbor: Ti and Tj are neighbors if sim(Ti, Tj) ≥ θ for a given threshold θ

  • Links: # of common neighbors

  • Example:

    • link(T1, T2) = 4, since they have 4 common neighbors

      • {a, c, d}, {a, c, e}, {b, c, d}, {b, c, e}

    • link(T1, T3) = 3, since they have 3 common neighbors

      • {a, b, d}, {a, b, e}, {a, b, g}
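A small sketch reproducing these counts; the neighbor threshold θ = 0.5 is an assumption (it yields the numbers above), and the two endpoint transactions themselves are excluded from the common-neighbor count:

def jaccard(a, b):
    return len(a & b) / len(a | b)

C1 = [set(s) for s in ("abc", "abd", "abe", "acd", "ace",
                       "ade", "bcd", "bce", "bde", "cde")]
C2 = [set(s) for s in ("abf", "abg", "afg", "bfg")]
data, theta = C1 + C2, 0.5

def link(t1, t2):
    # number of common neighbors: points with similarity >= theta to both t1 and t2
    return sum(1 for p in data if p not in (t1, t2)
               and jaccard(p, t1) >= theta and jaccard(p, t2) >= theta)

T1, T2, T3 = set("abc"), set("cde"), set("abf")
print(jaccard(T1, T2), jaccard(T1, T3))  # 0.2 vs 0.5: Jaccard favors the wrong pair
print(link(T1, T2), link(T1, T3))        # 4 vs 3: links favor T1 and T2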

Data Mining: Concepts and Techniques


ROCK Algorithm

  • Obtain a sample of points from the data set

  • Compute the link value for each set of points, from the original similarities (computed by Jaccard coefficient)

  • Perform an agglomerative hierarchical clustering on the data using the “number of shared neighbors” as similarity measure

  • Assign the remaining points to the clusters that have been found

Data Mining: Concepts and Techniques


CHAMELEON: Hierarchical Clustering Using Dynamic Modeling (1999)

  • CHAMELEON: by G. Karypis, E.H. Han, and V. Kumar’99

  • Basic ideas:

    • A graph-based clustering approach

    • A two-phase algorithm:

      • Partitioning: cluster objects into a large number of relatively small sub-clusters

      • Agglomerative hierarchical clustering: repeatedly combine these sub-clusters

    • Measures the similarity based on a dynamic model

      • interconnectivity and closeness (proximity)

  • Features:

    • Handles clusters of arbitrary shapes, sizes, and density

    • Scales well

Data Mining: Concepts and Techniques


Graph-Based Clustering

  • Uses the proximity graph

    • Start with the proximity matrix

    • Consider each point as a node in a graph

    • Each edge between two nodes has a weight which is the proximity between the two points

  • Fully connected proximity graph

    • MIN (single-link) and MAX (complete-link)

  • Sparsification

    • Clusters are connected components in the graph

    • CHAMELEON


Overall Framework of CHAMELEON

(Flow: Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters)

Data Mining: Concepts and Techniques


Chameleon: Steps

  • Preprocessing Step: Represent the Data by a Graph

    • Given a set of points, construct the k-nearest-neighbor (k-NN) graph to capture the relationship between a point and its k nearest neighbors

    • Concept of neighborhood is captured dynamically (even if region is sparse)

  • Phase 1: Use a multilevel graph partitioning algorithm on the graph to find a large number of clusters of well-connected vertices

    • Each cluster should contain mostly points from one “true” cluster, i.e., is a sub-cluster of a “real” cluster


Chameleon: Steps (continued)

  • Phase 2: Use Hierarchical Agglomerative Clustering to merge sub-clusters

    • Two clusters are combined if the resulting cluster shares certain properties with the constituent clusters

    • Two key properties used to model cluster similarity:

      • Relative Interconnectivity: Absolute interconnectivity of two clusters normalized by the internal connectivity of the clusters

      • Relative Closeness: Absolute closeness of two clusters normalized by the internal closeness of the clusters


Cluster Merging: Limitations of Current Schemes

  • Existing schemes are static in nature

    • MIN or CURE:

      • merge two clusters based on their closeness (or minimum distance)

    • GROUP-AVERAGE or ROCK:

      • merge two clusters based on their average connectivity


Limitations of Current Merging Schemes

(Figure: four example datasets (a)–(d))

Closeness schemes will merge (a) and (b)

Average connectivity schemes will merge (c) and (d)


Chameleon: Clustering Using Dynamic Modeling

  • Adapt to the characteristics of the data set to find the natural clusters

  • Use a dynamic model to measure the similarity between clusters

    • Main property is the relative closeness and relative inter-connectivity of the cluster

    • Two clusters are combined if the resulting cluster shares certain properties with the constituent clusters

    • The merging scheme preserves self-similarity


CHAMELEON (Clustering Complex Objects)

Data Mining: Concepts and Techniques


Chapter 7. Cluster Analysis

  • Overview

  • Partitioning methods

  • Hierarchical methods

  • Density-based methods

  • Other methods

  • Cluster evaluation

  • Outlier analysis

  • Summary

Data Mining: Concepts and Techniques


Density-Based Clustering Methods

  • Clustering based on density

  • Major features:

    • Clusters of arbitrary shape

    • Handle noise

    • Need density parameters as termination condition

  • Several interesting studies:

    • DBSCAN: Ester, et al. (KDD’96)

    • OPTICS: Ankerst, et al (SIGMOD’99).

    • DENCLUE: Hinneburg & D. Keim (KDD’98)

    • CLIQUE: Agrawal, et al. (SIGMOD’98) (more grid-based)

Data Mining: Concepts and Techniques


DBSCAN: Basic Concepts

(Figure: example of core, border, and noise points)

  • Density = number of points within a specified radius

  • Core point: has high density

  • Border point: has lower density, but lies in the neighborhood of a core point

  • Noise point: neither a core point nor a border point


DBSCAN: Definitions

(Figure: point q and its Eps-neighborhood containing p; MinPts = 5, Eps = 1 cm)

  • Two parameters:

    • Eps: radius of the neighbourhood

    • MinPts: minimum number of points in an Eps-neighbourhood of that point

  • NEps(p) = {q belongs to D | dist(p, q) <= Eps}

  • Core point: |NEps(q)| >= MinPts

Data Mining: Concepts and Techniques


DBSCAN: Definitions (continued)

(Figure: directly density-reachable, density-reachable, and density-connected points; MinPts = 5, Eps = 1 cm)

  • Directly density-reachable: p is directly density-reachable from a core point q if p belongs to NEps(q)

  • Density-reachable: p is density-reachable from q if there is a chain of points p1, …, pn, with p1 = q and pn = p, such that pi+1 is directly density-reachable from pi

  • Density-connected: p and q are density-connected if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts

Data Mining: Concepts and Techniques


DBSCAN: Cluster Definition

(Figure: core, border, and outlier points; Eps = 1 cm, MinPts = 5)

  • A cluster is defined as a maximal set of density-connected points

Data Mining: Concepts and Techniques


DBSCAN: The Algorithm

1. Arbitrarily select a point p

2. Retrieve all points density-reachable from p w.r.t. Eps and MinPts

3. If p is a core point, a cluster is formed

4. If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database

5. Continue the process until all of the points have been processed
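A minimal usage sketch with scikit-learn's DBSCAN; the toy data and the eps / min_samples values are illustrative assumptions, not values from the slides:

import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.default_rng(0).normal(size=(200, 2))  # toy data
model = DBSCAN(eps=0.3, min_samples=5).fit(X)       # Eps and MinPts
labels = model.labels_                              # cluster ids; -1 marks noise points
print(sorted(set(labels)))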

Data Mining: Concepts and Techniques


DBSCAN: Determining Eps and MinPts

  • Basic idea:

    • For points in a cluster, their kth nearest neighbors are at roughly the same distance

    • Noise points have their kth nearest neighbor at a farther distance

  • Plot sorted distance of every point to its kth nearest neighbor


DBSCAN: Sensitive to Parameters

(Figure: clusterings found under different Eps and MinPts settings)

Data Mining: Concepts and Techniques


Chapter 7. Cluster Analysis

  • Overview

  • Partitioning methods

  • Hierarchical methods

  • Density-based methods

  • Other methods

    • Clustering by mixture models: mixed Gaussian model

    • Conceptual clustering: COBWEB

    • Neural network approach: SOM

  • Cluster evaluation

  • Outlier analysis

  • Summary

Data Mining: Concepts and Techniques


Model-Based Clustering

  • Attempt to optimize the fit between the given data and some mathematical model

  • Typical methods

    • Statistical approach

      • EM (Expectation maximization)

    • Machine learning approach

      • COBWEB

    • Neural network approach

      • SOM (Self-Organizing Feature Map)

Data Mining: Concepts and Techniques


Clustering by Mixture Model

  • Assume the data are generated by a mixture of probabilistic models

    • Each cluster can be represented by a probabilistic model, like a Gaussian (continuous) or a Poisson (discrete) distribution.

Data Mining: Concepts and Techniques


Expectation Maximization (EM)

  • Start with an initial estimate of the parameters of the mixture model

  • Iteratively refine the parameters using the EM method

    • Expectation step: computes the expectation of the likelihood of each data point Xi belonging to cluster Ci

    • Maximization step: computes maximum likelihood estimates of the parameters

A minimal sketch with scikit-learn follows.
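scikit-learn's GaussianMixture alternates these E and M steps internally until the log-likelihood converges; the toy data and the component count are assumptions:

import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.default_rng(0).normal(size=(300, 2))           # toy data
gm = GaussianMixture(n_components=3, random_state=0).fit(X)  # runs EM
resp = gm.predict_proba(X)  # E-step output: soft memberships of each Xi in each Ci
labels = gm.predict(X)      # hard assignment to the most likely component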

Data Mining: Concepts and Techniques


Conceptual Clustering

  • Conceptual clustering

    • Generates a concept description for each concept (class)

    • Produces a hierarchical category or classification scheme

    • Related to decision tree learning and mixture model learning

  • COBWEB (Fisher’87)

    • A popular and simple method of incremental conceptual learning

    • Creates a hierarchical clustering in the form of a classification tree

    • Each node refers to a concept and contains a probabilistic description of that concept

Data Mining: Concepts and Techniques


COBWEB Classification Tree

(Figure: an example COBWEB classification tree)

Data Mining: Concepts and Techniques


COBWEB: Learning the Classification Tree

  • Incrementally builds the classification tree

  • Given a new object

    • Search for the best node at which to incorporate the object or add a new node for the object

    • Update the probabilistic description at each node

  • Merge and split nodes as needed

  • Use a heuristic measure, Category Utility, to guide construction of the tree

Data Mining: Concepts and Techniques


COBWEB: Comments

  • Limitations

    • The assumption that the attributes are independent of each other is often too strong because correlation may exist

    • Not suitable for clustering large databases: the tree can become skewed, and the per-node probability distributions are expensive to maintain

Data Mining: Concepts and Techniques


Neural Network Approach

  • Neural network approach for unsupervised learning

    • Involves a hierarchical architecture of several units (neurons)

  • Two modes

    • Training: builds the network using input data

    • Mapping: automatically classifies a new input vector.

  • Typical methods

    • SOM (Self-Organizing feature Map)

    • Competitive learning

    Data Mining: Concepts and Techniques


    Self-Organizing Feature Map (SOM)

    • SOMs, also called topological ordered maps, or Kohonen Self-Organizing Feature Map (KSOMs)

    • Produce a low-dimensional (typically two) representation of the high-dimensional input data, called a map

      • The distance and proximity relationship (i.e., topology) are preserved as much as possible

    • Visualization tool for high-dimensional data

    • Clustering method for grouping similar objects together

    • Competitive learning

      • believed to resemble processing that can occur in the brain

    Data Mining: Concepts and Techniques


    Learning SOM

    • Network structure – a set of units associated with a weight vector

    • Training – competitive learning

      • The unit whose weight vector is closest to the current object becomes the winning unit

      • The winner and its neighbors learn by having their weights adjusted

    • Demo: http://www.sis.pitt.edu/~ssyn/som/demo.html

    Data Mining: Concepts and Techniques


    Web Document Clustering Using SOM

    • The result of SOM clustering of 12088 Web articles

    • The picture on the right: drilling down on the keyword “mining”

    Data Mining: Concepts and Techniques


    Chapter 7. Cluster Analysis

    • Overview

    • Partitioning methods

    • Hierarchical methods

    • Density-based methods

    • Other methods

    • Cluster evaluation

    • Outlier analysis

    Data Mining: Concepts and Techniques


    Cluster Evaluation

    • Determine clustering tendency of data, i.e. distinguish whether non-random structure exists

    • Determine correct number of clusters

    • Evaluate how well the cluster results fit the data without external information

    • Evaluate how well the cluster results are compared to externally known results

    • Compare different clustering algorithms/results


    Clusters Found in Random Data

    (Figure: the same random points clustered by DBSCAN, K-means, and complete link)


    Measures of Cluster Validity

    • Unsupervised (internal indices): Used to measure the goodness of a clustering structure without respect to external information.

      • Sum of Squared Error (SSE)

  • Supervised (external indices): Used to measure the extent to which cluster labels match externally supplied class labels.

    • Entropy

  • Relative: Used to compare two different clustering results

    • Often an external or internal index is used for this function, e.g., SSE or entropy


    Internal Measures: Cohesion and Separation

    (Figure: cohesion within a cluster vs. separation between clusters)

    • Cluster Cohesion: how closely related the objects in a cluster are

    • Cluster Separation: how distinct or well-separated a cluster is from other clusters

    • Example: squared error

      • Cohesion: within-cluster sum of squares (SSE)

      • Separation: between-cluster sum of squares (a sketch of both follows)
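A small sketch of both squared-error measures for a data matrix X and integer cluster labels (note that cohesion plus separation equals the total sum of squares):

import numpy as np

def cohesion_sse(X, labels):
    # within-cluster sum of squares: distances of points to their own centroid
    labels = np.asarray(labels)
    return float(sum(((X[labels == j] - X[labels == j].mean(axis=0)) ** 2).sum()
                     for j in np.unique(labels)))

def separation_bss(X, labels):
    # between-cluster sum of squares: size-weighted distances of centroids to the overall mean
    labels = np.asarray(labels)
    m = X.mean(axis=0)
    return float(sum((labels == j).sum() * ((X[labels == j].mean(axis=0) - m) ** 2).sum()
                     for j in np.unique(labels)))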


    Internal Measures: SSE

    • SSE is good for comparing two clusterings

    • Can also be used to estimate the number of clusters


    Internal Measures: SSE (continued)

    • Another example of a more complicated data set

    SSE of clusters found using K-means


    Statistical Framework for SSE

    • Statistical framework for cluster validity

      • The more atypical a clustering result is compared to random data, the more likely it reflects valid structure in the data

      • Use values resulting from random data as a baseline

    • Example

      • Clustering: SSE = 0.005

      • Compare against the SSE of three clusters in 500 sets of random data points


    External Measures

    • Compare cluster results with “ground truth” or a manual clustering

    • Classification-oriented measures: entropy, purity, precision, recall, F-measures

    • Similarity-oriented measures: Jaccard scores


    External Measures: Classification-Oriented Measures

    • Entropy: the degree to which each cluster consists of objects of a single class

    • Precision: the fraction of a cluster that consists of objects of a specified class

    • Recall: the extent to which a cluster contains all objects of a specified class


    External Measures: Similarity-Oriented Measures

    Given a reference clustering T and a clustering S, count over all pairs of points:

    • f00: number of pairs of points belonging to different clusters in both T and S

    • f01: number of pairs of points belonging to different clusters in T but the same cluster in S

    • f10: number of pairs of points belonging to the same cluster in T but different clusters in S

    • f11: number of pairs of points belonging to the same cluster in both T and S

    From these, the Rand statistic is (f00 + f11) / (f00 + f01 + f10 + f11) and the Jaccard coefficient is f11 / (f01 + f10 + f11); a small sketch follows.
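A small sketch computing the four pair counts and the two derived scores for label sequences T and S over the same points:

from itertools import combinations

def pair_counts(T, S):
    f00 = f01 = f10 = f11 = 0
    for i, j in combinations(range(len(T)), 2):
        same_t, same_s = T[i] == T[j], S[i] == S[j]
        if same_t and same_s:
            f11 += 1
        elif same_t:
            f10 += 1
        elif same_s:
            f01 += 1
        else:
            f00 += 1
    return f00, f01, f10, f11

f00, f01, f10, f11 = pair_counts([0, 0, 1, 1], [0, 0, 0, 1])
print((f00 + f11) / (f00 + f01 + f10 + f11))  # Rand statistic
print(f11 / (f01 + f10 + f11))                # Jaccard coefficient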

    Li Xiong


    Chapter 7. Cluster Analysis

    • Overview

    • Partitioning methods

    • Hierarchical methods

    • Density-based methods

    • Other methods

    • Cluster evaluation

    • Outlier analysis

    Data Mining: Concepts and Techniques


    What Is Outlier Discovery?

    • What are outliers?

      • Objects that are considerably dissimilar from the remainder of the data

    • Problem: Define and find outliers in large data sets

    • Applications:

      • Credit card fraud detection

      • Telecom fraud detection

      • Customer segmentation

      • Medical analysis

    Data Mining: Concepts and Techniques


    Outlier Discovery: Statistical Approaches

    • Assume a model of the underlying distribution that generates the data set (e.g., a normal distribution)

    • Use discordancy tests depending on

      • data distribution

      • distribution parameter (e.g., mean, variance)

      • number of expected outliers

    • Drawbacks

      • most tests are for single attribute

      • In many cases, data distribution may not be known

    Data Mining: Concepts and Techniques


    Outlier Discovery: Distance-Based Approach

    • Introduced to counter the main limitations imposed by statistical methods

      • We need multi-dimensional analysis without knowing data distribution

    • Distance-based outlier: A DB(p, D)-outlier is an object O in a dataset T such that at least a fraction p of the objects in T lies at a distance greater than D from O

    • Algorithms for mining distance-based outliers

      • Index-based algorithm

      • Nested-loop algorithm

      • Cell-based algorithm

    Data Mining: Concepts and Techniques


    Density-Based Local Outlier Detection

    • Distance-based outlier detection is based on the global distance distribution

    • It has difficulty identifying outliers when the data are not uniformly distributed

    • Ex. C1 contains 400 loosely distributed points, C2 has 100 tightly condensed points, 2 outlier points o1, o2

    • Distance-based method cannot identify o2 as an outlier

    • Need the concept of local outlier

    • Local outlier factor (LOF)

      • Treats outlier-ness as a matter of degree rather than a crisp property

      • Each point has an LOF value

    Data Mining: Concepts and Techniques


    Outlier Discovery: Deviation-Based Approach

    • Identifies outliers by examining the main characteristics of objects in a group

    • Objects that “deviate” from this description are considered outliers

    • Sequential exception technique

      • simulates the way in which humans can distinguish unusual objects from among a series of supposedly like objects

    • OLAP data cube technique

      • uses data cubes to identify regions of anomalies in large multidimensional data

    Data Mining: Concepts and Techniques


    Chapter 7. Cluster Analysis

    • What is Cluster Analysis?

    • Types of Data in Cluster Analysis

    • A Categorization of Major Clustering Methods

    • Partitioning Methods

    • Hierarchical Methods

    • Density-Based Methods

    • Grid-Based Methods

    • Model-Based Methods

    • Clustering High-Dimensional Data

    • Constraint-Based Clustering

    • Outlier Analysis

    • Summary

    Data Mining: Concepts and Techniques


    Summary

    • Cluster analysis groups objects based on their similarity and has wide applications

    • Measure of similarity can be computed for various types of data

    • Clustering algorithms can be categorized into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods

    • Outlier detection and analysis are very useful for fraud detection, etc. and can be performed by statistical, distance-based or deviation-based approaches

    • There are still lots of research issues on cluster analysis

    Data Mining: Concepts and Techniques


    Problems and Challenges

    • Considerable progress has been made in scalable clustering methods

      • Partitioning: k-means, k-medoids, CLARANS

      • Hierarchical: BIRCH, ROCK, CHAMELEON

      • Density-based: DBSCAN, OPTICS, DenClue

      • Grid-based: STING, WaveCluster, CLIQUE

      • Model-based: EM, Cobweb, SOM

      • Frequent pattern-based: pCluster

      • Constraint-based: COD, constrained-clustering

    • Current clustering techniques do not address all the requirements adequately, still an active area of research

    Data Mining: Concepts and Techniques


    References (1)

    • R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98

    • M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.

    • M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. Optics: Ordering points to identify the clustering structure, SIGMOD’99.

    • P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996

    • Beil F., Ester M., Xu X.: "Frequent Term-Based Text Clustering", KDD'02

    • M. M. Breunig, H.-P. Kriegel, R. Ng, J. Sander. LOF: Identifying Density-Based Local Outliers. SIGMOD 2000.

    • M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases. KDD'96.

    • M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing techniques for efficient class identification. SSD'95.

    • D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.

    • D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamic systems. VLDB’98.

    Data Mining: Concepts and Techniques


    References (2)

    • V. Ganti, J. Gehrke, R. Ramakrishnan. CACTUS: Clustering Categorical Data Using Summaries. KDD'99.

    • D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamic systems. In Proc. VLDB’98.

    • S. Guha, R. Rastogi, and K. Shim. Cure: An efficient clustering algorithm for large databases. SIGMOD'98.

    • S. Guha, R. Rastogi, and K. Shim. ROCK: A robust clustering algorithm for categorical attributes. In ICDE'99, pp. 512-521, Sydney, Australia, March 1999.

    • A. Hinneburg, D. A. Keim: An Efficient Approach to Clustering in Large Multimedia Databases with Noise. KDD’98.

    • A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.

    • G. Karypis, E.-H. Han, and V. Kumar. CHAMELEON: A Hierarchical Clustering Algorithm Using Dynamic Modeling. COMPUTER, 32(8): 68-75, 1999.

    • L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis. John Wiley & Sons, 1990.

    • E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB’98.

    • G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley and Sons, 1988.

    • P. Michaud. Clustering techniques. Future Generation Computer systems, 13, 1997.

    • R. Ng and J. Han. Efficient and effective clustering method for spatial data mining. VLDB'94.

    Data Mining: Concepts and Techniques


    References (3)

    • L. Parsons, E. Haque and H. Liu, Subspace Clustering for High Dimensional Data: A Review , SIGKDD Explorations, 6(1), June 2004

    • E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets. Proc. 1996 Int. Conf. on Pattern Recognition.

    • G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering approach for very large spatial databases. VLDB’98.

    • A. K. H. Tung, J. Han, L. V. S. Lakshmanan, and R. T. Ng. Constraint-Based Clustering in Large Databases, ICDT'01.

    • A. K. H. Tung, J. Hou, and J. Han. Spatial Clustering in the Presence of Obstacles, ICDE'01

    • H. Wang, W. Wang, J. Yang, and P.S. Yu.  Clustering by pattern similarity in large data sets,  SIGMOD’ 02.

    • W. Wang, Yang, R. Muntz, STING: A Statistical Information grid Approach to Spatial Data Mining, VLDB’97.

    • T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH : an efficient data clustering method for very large databases. SIGMOD'96.

    Data Mining: Concepts and Techniques


    www.cs.uiuc.edu/~hanj

    Thank you!

    Data Mining: Concepts and Techniques


    Clustering: Rich Applications and Multidisciplinary Efforts

    • Pattern Recognition

    • Spatial Data Analysis

      • Create thematic maps in GIS by clustering feature spaces

      • Detect spatial clusters or for other spatial mining tasks

    • Image Processing

    • Economic Science (especially market research)

    • WWW

      • Document clustering

      • Cluster Weblog data to discover groups of similar access patterns

    Data Mining: Concepts and Techniques


    Major Clustering Approaches (II)

    • Grid-based approach:

      • based on a multiple-level granularity structure

      • Typical methods: STING, WaveCluster, CLIQUE

    • Model-based:

      • A model is hypothesized for each of the clusters, and the method tries to find the best fit of the data to the given model

      • Typical methods: EM, SOM, COBWEB

    • Frequent pattern-based:

      • Based on the analysis of frequent patterns

      • Typical methods: pCluster

    • User-guided or constraint-based:

      • Clustering by considering user-specified or application-specific constraints

      • Typical methods: COD (obstacles), constrained clustering

    Data Mining: Concepts and Techniques


    Measure the Quality of Clustering

    • Dissimilarity/Similarity metric: Similarity is expressed in terms of a distance function, typically metric: d(i, j)

    • There is a separate “quality” function that measures the “goodness” of a cluster.

    • The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal, ratio, and vector variables.

    • Weights should be associated with different variables based on applications and data semantics.

    • It is hard to define “similar enough” or “good enough”

      • the answer is typically highly subjective.

    Data Mining: Concepts and Techniques


    Interval-Valued Variables

    • Standardize data

      • Calculate the mean absolute deviation: sf = (1/n)(|x1f − mf| + |x2f − mf| + … + |xnf − mf|), where mf = (1/n)(x1f + x2f + … + xnf)

      • Calculate the standardized measurement (z-score): zif = (xif − mf) / sf

    • Using mean absolute deviation is more robust than using standard deviation

    Data Mining: Concepts and Techniques


    Similarity and Dissimilarity Between Objects (Cont.)

    • If q = 2, d is Euclidean distance: d(i, j) = sqrt(|xi1 − xj1|² + |xi2 − xj2|² + … + |xip − xjp|²)

      • Properties

        • d(i, j) ≥ 0

        • d(i, i) = 0

        • d(i, j) = d(j, i)

        • d(i, j) ≤ d(i, k) + d(k, j)

    • Also, one can use weighted distance, parametric Pearson product-moment correlation, or other dissimilarity measures

    Data Mining: Concepts and Techniques


    Data Structures

    • Data matrix (two modes): an n × p table of n objects described by p variables

    • Dissimilarity matrix (one mode): an n × n table of the pairwise distances d(i, j)

    Data Mining: Concepts and Techniques


    Type of Data in Clustering Analysis

    • Interval-scaled variables

    • Binary variables

    • Nominal, ordinal, and ratio variables

    • Variables of mixed types

    Data Mining: Concepts and Techniques


    Similarity or Dissimilarity Metrics

    • Distances are normally used to measure the similarity or dissimilarity between two data objects

    • A popular one is the Minkowski distance: d(i, j) = (|xi1 − xj1|^q + |xi2 − xj2|^q + … + |xip − xjp|^q)^(1/q), where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-dimensional data objects, and q is a positive integer

    • If q = 1, d is Manhattan distance

    Data Mining: Concepts and Techniques




    Binary Variables

    • A contingency table for binary data between objects i and j: a = # of variables that are 1 in both, b = 1 in i and 0 in j, c = 0 in i and 1 in j, d = 0 in both

    • Distance measure for symmetric binary variables: d(i, j) = (b + c) / (a + b + c + d)

    • Distance measure for asymmetric binary variables: d(i, j) = (b + c) / (a + b + c)

    • Jaccard coefficient (similarity measure for asymmetric binary variables): sim(i, j) = a / (a + b + c)

    Data Mining: Concepts and Techniques


    Dissimilarity between Binary Variables

    • Example

      • gender is a symmetric attribute

      • the remaining attributes are asymmetric binary

      • let the values Y and P be set to 1, and the value N be set to 0

    Data Mining: Concepts and Techniques


    Nominal Variables

    • A generalization of the binary variable in that it can take more than 2 states, e.g., red, yellow, blue, green

    • Method 1: simple matching, d(i, j) = (p − m) / p

      • m: # of matches, p: total # of variables

    • Method 2: use a large number of binary variables

      • creating a new binary variable for each of the M nominal states

    Data Mining: Concepts and Techniques


    Ordinal Variables

    • An ordinal variable can be discrete or continuous

    • Order is important, e.g., rank

    • Can be treated like interval-scaled

      • replace xif by its rank rif ∈ {1, …, Mf}

      • map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by zif = (rif − 1) / (Mf − 1)

      • compute the dissimilarity using methods for interval-scaled variables

    Data Mining: Concepts and Techniques


    Ratio-Scaled Variables

    • Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as AeBt or Ae-Bt

    • Methods:

      • treat them like interval-scaled variables—not a good choice! (why?—the scale can be distorted)

      • apply logarithmic transformation

        yif = log(xif)

      • treat them as continuous ordinal data and treat their ranks as interval-scaled

    Data Mining: Concepts and Techniques


    Distance between Attributes

    • A database may contain all the six types of variables

      • symmetric binary, asymmetric binary, nominal, ordinal, interval and ratio

    • One may use a weighted formula to combine their effects

      • f is binary or nominal:

        dij(f) = 0 if xif = xjf , or dij(f) = 1 otherwise

      • f is interval-based: use the normalized distance

      • f is ordinal or ratio-scaled

        • compute ranks rif and zif = (rif − 1) / (Mf − 1)

        • treat zif as interval-scaled

    Data Mining: Concepts and Techniques


    Vector Objects

    • Vector objects: keywords in documents, gene features in micro-arrays, etc.

    • Broad applications: information retrieval, biologic taxonomy, etc.

    • Cosine measure: s(x, y) = (x · y) / (||x|| ||y||)

    • A variant: Tanimoto coefficient, s(x, y) = (x · y) / (x · x + y · y − x · y)

    Data Mining: Concepts and Techniques


    Typical Alternatives to Calculate the Distance between Clusters

    • Single link: smallest distance between an element in one cluster and an element in the other, i.e., dis(Ki, Kj) = min(tip, tjq)

    • Complete link: largest distance between an element in one cluster and an element in the other, i.e., dis(Ki, Kj) = max(tip, tjq)

    • Average: avg distance between an element in one cluster and an element in the other, i.e., dis(Ki, Kj) = avg(tip, tjq)

    • Centroid: distance between the centroids of two clusters, i.e., dis(Ki, Kj) = dis(Ci, Cj)

    • Medoid: distance between the medoids of two clusters, i.e., dis(Ki, Kj) = dis(Mi, Mj)

      • Medoid: one chosen, centrally located object in the cluster

    Data Mining: Concepts and Techniques


    Centroid, Radius and Diameter of a Cluster (for numerical data sets)

    • Centroid: the “middle” of a cluster, Cm = (Σi ti) / N

    • Radius: square root of the average distance from any point of the cluster to its centroid, Rm = sqrt( Σi (ti − Cm)² / N )

    • Diameter: square root of the average mean squared distance between all pairs of points in the cluster, Dm = sqrt( Σi Σj (ti − tj)² / (N(N − 1)) )

    Data Mining: Concepts and Techniques


    PAM (Partitioning Around Medoids) (1987)

    • PAM (Kaufman and Rousseeuw, 1987), built in Splus

    • Use real object to represent the cluster

      • Select k representative objects arbitrarily

      • For each pair of non-selected object h and selected object i, calculate the total swapping cost TCih

      • For each pair of i and h,

        • If TCih < 0, i is replaced by h

        • Then assign each non-selected object to the most similar representative object

      • repeat steps 2-3 until there is no change

    Data Mining: Concepts and Techniques


    PAM Clustering: Total Swapping Cost TCih = Σj Cjih

    (Figure: the reassignment cases for an object j when medoid i is swapped with non-medoid h; each case contributes a cost Cjih)

    Data Mining: Concepts and Techniques


    Hierarchical Clustering

    (Figure: objects a, b, c, d, e merged step by step (steps 0–4) by agglomerative clustering (AGNES), and split in the reverse order by divisive clustering (DIANA))

    • Use the distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition

    Data Mining: Concepts and Techniques


    AGNES (Agglomerative Nesting)

    • Introduced in Kaufmann and Rousseeuw (1990)

    • Implemented in statistical analysis packages, e.g., Splus

    • Use the Single-Link method and the dissimilarity matrix.

    • Merge nodes that have the least dissimilarity

    • Go on in a non-descending fashion

    • Eventually all nodes belong to the same cluster

    Data Mining: Concepts and Techniques


    Dendrogram: Shows How the Clusters Are Merged

    • Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram.

    • A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster.

    Data Mining: Concepts and Techniques


    Diana divisive analysis
    DIANA (Divisive Analysis)

    • Introduced in Kaufman and Rousseeuw (1990)

    • Implemented in statistical analysis packages, e.g., Splus

    • Inverse order of AGNES

    • Eventually each node forms a cluster on its own

    Data Mining: Concepts and Techniques


    Sparsification in the clustering process
    Sparsification in the Clustering Process

    • Keep the connections to the most similar (nearest) neighbors of a point

    • Reduces the impact of noise and outliers and sharpens the distinction between clusters.

    • Facilitates the use of graph partitioning algorithms
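
    A minimal sketch of one common sparsification variant, keeping only mutual k-nearest-neighbor links (the choice of k and the mutual-link rule are assumptions, not prescribed by the slides):

```python
import numpy as np

def knn_sparsify(X, k):
    """Keep only links to each point's k nearest neighbors (mutual links kept)."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    np.fill_diagonal(D, np.inf)                          # no self-links
    A = np.zeros(D.shape, dtype=bool)
    nn = np.argsort(D, axis=1)[:, :k]                    # k nearest per point
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, nn.ravel()] = True
    return A & A.T                                       # keep mutual links only

X = np.random.default_rng(1).random((10, 2))
A = knn_sparsify(X, 3)
print(A.sum() // 2, "mutual-kNN links kept out of", 10 * 9 // 2)
```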


    Characteristics of spatial data sets
    Characteristics of Spatial Data Sets

    • Clusters are defined as densely populated regions of the space

    • Clusters have arbitrary shapes, orientation, and non-uniform sizes

    • Difference in densities across clusters and variation in density within clusters

    • Existence of special artifacts (streaks) and noise

    The clustering algorithm must address the above characteristics and also require minimal supervision.



    Optics a cluster ordering method 1999
    OPTICS: A Cluster-Ordering Method (1999)

    • OPTICS: Ordering Points To Identify the Clustering Structure

      • Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)

      • Produces a special order of the database with respect to its density-based clustering structure

      • This cluster-ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings

      • Good for both automatic and interactive cluster analysis, including finding intrinsic clustering structure

      • Can be represented graphically or using visualization techniques
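
    As a usage illustration, scikit-learn's OPTICS (an assumption; the library is not part of the slides) exposes exactly this cluster ordering and the reachability values behind it:

```python
import numpy as np
from sklearn.cluster import OPTICS

# Two synthetic Gaussian blobs as a stand-in for a spatial database
X = np.vstack([np.random.default_rng(0).normal(loc, 0.3, (50, 2))
               for loc in ([0, 0], [5, 5])])

opt = OPTICS(min_samples=5).fit(X)   # MinPts = 5
order = opt.ordering_                # the cluster-ordering of the objects
reach = opt.reachability_[order]     # valleys in this plot are clusters
print(opt.labels_[:5], reach[:5])
```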

    Data Mining: Concepts and Techniques


    Optics some extension from dbscan
    OPTICS: Some Extensions from DBSCAN

    • Index-based:

      • k = number of dimensions

      • N = 20

      • p = 75%

      • M = N(1-p) = 5

    • Complexity: O(kN²)

  • Core distance of an object o: the smallest ε' that makes o a core object, i.e., the distance from o to its MinPts-th nearest neighbor (undefined if o is not a core object w.r.t. ε)

  • Reachability distance of an object p from o: Max(core-distance(o), d(o, p))

    [Figure: with MinPts = 5 and ε = 3 cm, the reachability distances of p1 and p2 from the core object o are r(p1, o) = 2.8 cm and r(p2, o) = 4 cm.]
    Data Mining: Concepts and Techniques


    [Figure: reachability plot — the reachability-distance of each object (undefined for some objects) plotted in the cluster-order of the objects; valleys correspond to clusters.]

    Data Mining: Concepts and Techniques


    Density based clustering optics its applications
    Density-Based Clustering: OPTICS & Its Applications

    Data Mining: Concepts and Techniques


    Denclue technical essence
    Denclue: Technical Essence

    • Uses grid cells but only keeps information about grid cells that do actually contain data points and manages these cells in a tree-based access structure

    • Influence function: describes the impact of a data point within its neighborhood

    • Overall density of the data space can be calculated as the sum of the influence function of all data points

    • Clusters can be determined mathematically by identifying density attractors

    • Density attractors are local maxima of the overall density function
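
    A toy sketch of these ideas, assuming a Gaussian influence function and a simple gradient-ascent hill climb (σ, the step size, and the iteration cap are arbitrary choices):

```python
import numpy as np

def density(x, X, sigma=1.0):
    """Overall density at x: sum of Gaussian influence functions of all points."""
    return np.exp(-((X - x) ** 2).sum(axis=1) / (2 * sigma ** 2)).sum()

def hill_climb(x, X, sigma=1.0, step=0.2, iters=300):
    """Gradient ascent from x toward a density attractor (a local maximum)."""
    for _ in range(iters):
        w = np.exp(-((X - x) ** 2).sum(axis=1) / (2 * sigma ** 2))
        grad = (w[:, None] * (X - x)).sum(axis=0)   # direction of steepest ascent
        if np.linalg.norm(grad) < 1e-8:
            break
        x = x + step * grad
    return x

X = np.array([[0, 0], [0.5, 0], [0, 0.5], [5, 5], [5.5, 5]], float)
print(hill_climb(np.array([1.0, 1.0]), X))  # converges near the left cluster's attractor
print(density(np.array([0.17, 0.17]), X))   # density is highest near that attractor
```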

    Data Mining: Concepts and Techniques


    Density attractor
    Density Attractor

    Data Mining: Concepts and Techniques


    Center defined and arbitrary
    Center-Defined and Arbitrary

    Data Mining: Concepts and Techniques


    Different aspects of cluster validation
    Different Aspects of Cluster Validation

    1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.

    2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.

    3. Evaluating how well the results of a cluster analysis fit the data without reference to external information.

      - Use only the data

    4. Comparing the results of two different sets of cluster analyses to determine which is better.

    5. Determining the ‘correct’ number of clusters.

      For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.



    Framework for cluster validity
    Framework for Cluster Validity

    • Need a framework to interpret any measure.

      • For example, if our measure of evaluation has the value 10, is that good, fair, or poor?

    • Statistics provide a framework for cluster validity

      • The more “atypical” a clustering result is, the more likely it represents valid structure in the data

      • Can compare the values of an index that result from random data or clusterings to those of a clustering result.

        • If the value of the index is unlikely to arise by chance, then the cluster results are valid

      • These approaches are more complicated and harder to understand.

    • For comparing the results of two different sets of cluster analyses, a framework is less necessary.

      • However, there is the question of whether the difference between two index values is significant
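
    One illustrative way to build such a baseline, assuming scikit-learn's KMeans with SSE (inertia) as the index (both assumptions, not prescribed by the slides): compare the index on the actual data against its distribution on uniformly random data spanning the same range.

```python
import numpy as np
from sklearn.cluster import KMeans

def sse(X, k):
    """SSE (within-cluster sum of squares) of a k-means clustering."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in ([0, 0], [4, 4], [0, 4])])

real = sse(X, 3)
# Baseline: the same index on random data spanning the same range
baseline = [sse(rng.uniform(X.min(0), X.max(0), X.shape), 3) for _ in range(20)]
print(real, np.mean(baseline))   # the real SSE sits far below the random baseline
```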


    Denclue using statistical density functions
    DENCLUE: Using Statistical Density Functions

    • DENsity-based CLUstEring by Hinneburg & Keim (KDD’98)

    • Using statistical density functions, e.g., a Gaussian influence function and the overall density it induces:

      f_Gaussian(x, y) = e^(−d(x, y)² / (2σ²))

      f^D_Gaussian(x) = Σi e^(−d(x, xi)² / (2σ²)),   ∇f^D_Gaussian(x, xi) = Σi (xi − x) · e^(−d(x, xi)² / (2σ²))

    • Major features

      • Solid mathematical foundation

      • Good for data sets with large amounts of noise

      • Allows a compact mathematical description of arbitrarily shaped clusters in high-dimensional data sets

      • Significantly faster than existing algorithms (e.g., DBSCAN)

      • But needs a large number of parameters

    Data Mining: Concepts and Techniques


    Clustering high dimensional data
    Clustering High-Dimensional Data

    • Clustering high-dimensional data

      • Many applications: text documents, DNA micro-array data

      • Major challenges:

        • Many irrelevant dimensions may mask clusters

        • Distance measure becomes meaningless—due to equi-distance

        • Clusters may exist only in some subspaces

    • Methods

      • Feature transformation: only effective if most dimensions are relevant

        • PCA & SVD useful only when features are highly correlated/redundant

      • Feature selection: wrapper or filter approaches

        • useful to find a subspace where the data have nice clusters

      • Subspace-clustering: find clusters in all the possible subspaces

        • CLIQUE, ProClus, and frequent pattern-based clustering

    Data Mining: Concepts and Techniques


    The curse of dimensionality graphs adapted from parsons et al kdd explorations 2004
    The Curse of Dimensionality (graphs adapted from Parsons et al. KDD Explorations 2004)

    • Data in only one dimension is relatively packed

    • Adding a dimension “stretches” the points across that dimension, making them further apart

    • Adding more dimensions will make the points further apart—high dimensional data is extremely sparse

    • Distance measure becomes meaningless—due to equi-distance
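
    A quick simulation of this effect (sample sizes and dimensions chosen arbitrarily): the ratio of the smallest to the largest pairwise distance climbs toward 1 as dimensionality grows, which is the equi-distance phenomenon above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
for d in (2, 10, 100, 1000):
    X = rng.random((n, d))                     # uniform points in [0,1]^d
    sq = (X ** 2).sum(axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0)
    D = np.sqrt(D2[np.triu_indices(n, k=1)])   # all pairwise distances
    print(d, round(D.min() / D.max(), 3))      # ratio approaches 1 as d grows
```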

    Data Mining: Concepts and Techniques


    Why subspace clustering adapted from parsons et al sigkdd explorations 2004
    Why Subspace Clustering? (adapted from Parsons et al. SIGKDD Explorations 2004)

    • Clusters may exist only in some subspaces

    • Subspace-clustering: find clusters in all the subspaces

    Data Mining: Concepts and Techniques


    Clique clustering in quest
    CLIQUE (Clustering In QUEst)

    • Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98)

    • Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space

    • CLIQUE can be considered as both density-based and grid-based

      • It partitions each dimension into the same number of equal-length intervals

      • It partitions an m-dimensional data space into non-overlapping rectangular units

      • A unit is dense if the fraction of total data points contained in the unit exceeds the input model parameter

      • A cluster is a maximal set of connected dense units within a subspace

    Data Mining: Concepts and Techniques


    Clique the major steps
    CLIQUE: The Major Steps

    • Partition the data space and find the number of points that lie inside each cell of the partition.

    • Identify the subspaces that contain clusters using the Apriori principle

    • Identify clusters

      • Determine dense units in all subspaces of interest

      • Determine connected dense units in all subspaces of interest

    • Generate a minimal description for the clusters

      • Determine maximal regions that cover a cluster of connected dense units for each cluster

      • Determine the minimal cover for each cluster
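
    A toy sketch of the first two steps, with hypothetical parameters bins and tau standing in for the grid resolution and the density threshold:

```python
import numpy as np
from itertools import combinations

def dense_units(X, bins=10, tau=0.1):
    """CLIQUE idea: 1-D dense intervals first, then Apriori-style 2-D candidates."""
    n, m = X.shape
    # Step 1: partition each dimension into `bins` equal-length intervals
    cells = np.floor(bins * (X - X.min(0)) / (np.ptp(X, axis=0) + 1e-12)).astype(int)
    dense1 = {(j, c) for j in range(m) for c in range(bins)
              if np.mean(cells[:, j] == c) > tau}
    # Step 2: a 2-D unit can only be dense if both of its 1-D projections are
    dense2 = set()
    for (j1, c1), (j2, c2) in combinations(sorted(dense1), 2):
        if j1 < j2:
            mask = (cells[:, j1] == c1) & (cells[:, j2] == c2)
            if mask.mean() > tau:
                dense2.add(((j1, c1), (j2, c2)))
    return dense1, dense2

X = np.random.default_rng(0).random((300, 3))
d1, d2 = dense_units(X)
print(len(d1), "dense 1-D units,", len(d2), "dense 2-D units")
```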

    Data Mining: Concepts and Techniques


    [Figure: CLIQUE example with density threshold = 3; dense intervals on the salary (×$10,000, 0–7), vacation (weeks, 0–7), and age (20–60) axes are found first, and their intersections form candidate dense units in the (age, salary) and (age, vacation) subspaces.]

    Data Mining: Concepts and Techniques


    Strength and weakness of clique
    Strength and Weakness of CLIQUE

    • Strength

      • automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces

      • insensitive to the order of records in the input and does not presume any canonical data distribution

      • scales linearly with the size of input and has good scalability as the number of dimensions in the data increases

    • Weakness

      • The accuracy of the clustering result may be degraded, a price paid for the simplicity of the method

    Data Mining: Concepts and Techniques


    Frequent pattern based approach
    Frequent Pattern-Based Approach

    • Clustering high-dimensional space (e.g., clustering text documents, microarray data)

      • Projected subspace-clustering: which dimensions to be projected on?

        • CLIQUE, ProClus

      • Feature extraction: costly and may not be effective?

      • Using frequent patterns as “features”

        • “Frequent” patterns are inherent features

        • Mining freq. patterns may not be so expensive

    • Typical methods

      • Frequent-term-based document clustering

      • Clustering by pattern similarity in micro-array data (pClustering)

    Data Mining: Concepts and Techniques


    Clustering by pattern similarity p clustering
    Clustering by Pattern Similarity (p-Clustering)

    • Right: The micro-array “raw” data shows 3 genes and their values in a multi-dimensional space

      • Difficult to find their patterns

    • Bottom: Some subsets of dimensions form nice shift and scaling patterns

    Data Mining: Concepts and Techniques


    Why p clustering
    Why p-Clustering?

    • Microarray data analysis may need to

      • Cluster on thousands of dimensions (attributes)

      • Discover both shift and scaling patterns

    • Clustering with Euclidean distance measure? — cannot find shift patterns

    • Clustering on derived attribute Aij = ai – aj? — introduces N(N-1) dimensions

    • Bi-cluster using the mean-squared residue score of a submatrix (I, J):

      H(I, J) = (1 / (|I| |J|)) Σ_{i∈I, j∈J} (d_ij − d_iJ − d_Ij + d_IJ)²

      where d_iJ, d_Ij, and d_IJ are the row mean, column mean, and overall mean of the submatrix

    • A submatrix is a δ-cluster if H(I, J) ≤ δ for some δ > 0

    • Problems with bi-cluster

      • No downward closure property

      • Due to averaging, it may contain outliers and yet stay within the δ-threshold

    Data Mining: Concepts and Techniques


    P clustering clustering by pattern similarity
    p-Clustering: Clustering by Pattern Similarity

    • Given objects x, y in O and features a, b in T, a pCluster is a 2 × 2 submatrix, scored by pScore([[d_xa, d_xb], [d_ya, d_yb]]) = |(d_xa − d_xb) − (d_ya − d_yb)|

    • A pair (O, T) is in a δ-pCluster if, for any 2 × 2 submatrix X in (O, T), pScore(X) ≤ δ for some δ > 0

    • Properties of δ-pCluster

      • Downward closure

      • Clusters are more homogeneous than bi-cluster (thus the name: pair-wise Cluster)

    • Pattern-growth algorithm has been developed for efficient mining

    • For scaling patterns, one can observe that taking the logarithm of the values turns the scaling condition (d_xa / d_ya) / (d_xb / d_yb) ≤ δ into the pScore (shift) form (see the sketch below)
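
    A minimal sketch of the pScore computation, including the log trick for scaling patterns (the numeric values are made up):

```python
import numpy as np

def pscore(d_xa, d_xb, d_ya, d_yb):
    """pScore of the 2x2 submatrix [[d_xa, d_xb], [d_ya, d_yb]]."""
    return abs((d_xa - d_xb) - (d_ya - d_yb))

# A shift pattern: gene y tracks gene x with a constant offset of +2
print(pscore(10, 14, 12, 16))                  # 0 -> perfect shift pattern
# A scaling pattern (y = 3x) becomes a shift pattern after taking logarithms
print(pscore(*np.log([2.0, 8.0, 6.0, 24.0])))  # 0 -> perfect scaling pattern
```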

    Data Mining: Concepts and Techniques


    Chapter 6 cluster analysis
    Chapter 6. Cluster Analysis

    • What is Cluster Analysis?

    • Types of Data in Cluster Analysis

    • A Categorization of Major Clustering Methods

    • Partitioning Methods

    • Hierarchical Methods

    • Density-Based Methods

    • Grid-Based Methods

    • Model-Based Methods

    • Clustering High-Dimensional Data

    • Constraint-Based Clustering

    • Outlier Analysis

    • Summary

    Data Mining: Concepts and Techniques


    Grid based clustering method
    Grid-Based Clustering Method

    • Using multi-resolution grid data structure

    • Several interesting methods

      • STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz (1997)

      • WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB’98)

        • A multi-resolution clustering approach using wavelet method

      • CLIQUE: Agrawal, et al. (SIGMOD’98)

        • On high-dimensional data (thus put in the section on clustering high-dimensional data)

    Data Mining: Concepts and Techniques


    Sting a statistical information grid approach
    STING: A Statistical Information Grid Approach

    • Wang, Yang and Muntz (VLDB’97)

    • The spatial area is divided into rectangular cells

    • There are several levels of cells corresponding to different levels of resolution

    Data Mining: Concepts and Techniques


    The sting clustering method
    The STING Clustering Method

    • Each cell at a high level is partitioned into a number of smaller cells in the next lower level

    • Statistical info of each cell is calculated and stored beforehand and is used to answer queries

    • Parameters of higher-level cells can easily be calculated from the parameters of the lower-level cells

      • count, mean, standard deviation (s), min, max

      • type of distribution—normal, uniform, etc.

    • Use a top-down approach to answer spatial data queries

    • Start from a pre-selected layer—typically with a small number of cells

    • For each cell in the current level, compute the confidence interval (a minimal aggregation sketch follows below)
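
    A minimal sketch of the bottom-up parameter computation, assuming only count and mean are stored (the real structure also keeps s, min, max, and the distribution type):

```python
from dataclasses import dataclass

@dataclass
class Cell:
    n: int        # count of points in the cell
    mean: float   # mean of the attribute values in the cell

def parent_stats(children):
    """Parent cell parameters computed from its lower-level child cells."""
    n = sum(c.n for c in children)
    mean = sum(c.n * c.mean for c in children) / n if n else 0.0
    return Cell(n, mean)

kids = [Cell(10, 2.0), Cell(30, 4.0)]
print(parent_stats(kids))   # Cell(n=40, mean=3.5)
```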

    Data Mining: Concepts and Techniques


    Comments on sting
    Comments on STING

    • Remove the irrelevant cells from further consideration

    • When finished examining the current layer, proceed to the next lower level

    • Repeat this process until the bottom layer is reached

    • Advantages:

      • Query-independent, easy to parallelize, incremental update

      • O(K), where K is the number of grid cells at the lowest level

    • Disadvantages:

      • All the cluster boundaries are either horizontal or vertical, and no diagonal boundary is detected

    Data Mining: Concepts and Techniques


    Wavecluster clustering by wavelet analysis 1998
    WaveCluster: Clustering by Wavelet Analysis (1998)

    • Sheikholeslami, Chatterjee, and Zhang (VLDB’98)

    • A multi-resolution clustering approach which applies wavelet transform to the feature space

    • How to apply wavelet transform to find clusters

      • Summarizes the data by imposing a multidimensional grid structure onto the data space

      • These multidimensional spatial data objects are represented in an n-dimensional feature space

      • Apply the wavelet transform on the feature space to find the dense regions in the feature space

      • Apply the wavelet transform multiple times, which results in clusters at different scales, from fine to coarse

    Data Mining: Concepts and Techniques


    Wavelet transform
    Wavelet Transform

    • Wavelet transform: a signal processing technique that decomposes a signal into different frequency sub-bands (can be applied to n-dimensional signals)

    • Data are transformed to preserve the relative distances between objects at different levels of resolution

    • Allows natural clusters to become more distinguishable
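
    A toy illustration of one decomposition level with the Haar wavelet, applied to made-up grid-cell counts along one dimension: the low-frequency band keeps the dense regions while averaging away scattered noise.

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar transform: averages (low band) + details (high band)."""
    s = np.asarray(signal, dtype=float)
    avg = (s[0::2] + s[1::2]) / 2.0   # coarse approximation of the signal
    det = (s[0::2] - s[1::2]) / 2.0   # fine detail (noise, boundaries)
    return avg, det

# Grid-cell counts along one dimension: two dense regions plus scattered noise
counts = np.array([9, 8, 1, 0, 0, 1, 7, 9])
avg, det = haar_step(counts)
print(avg)   # [8.5 0.5 0.5 8. ] -> the dense regions stand out at the coarser scale
```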

    Data Mining: Concepts and Techniques


    The wavecluster algorithm
    The WaveCluster Algorithm

    • Input parameters

      • # of grid cells for each dimension

      • the wavelet, and the # of applications of wavelet transform

    • Why is wavelet transformation useful for clustering?

      • Use hat-shaped filters to emphasize regions where points cluster, while simultaneously suppressing weaker information at their boundaries

      • Effective removal of outliers, multi-resolution, cost effective

    • Major features:

      • Complexity O(N)

      • Detect arbitrary shaped clusters at different scales

      • Not sensitive to noise, not sensitive to input order

      • Only applicable to low dimensional data

    • Both grid-based and density-based

    Data Mining: Concepts and Techniques


    Quantization transformation
    Quantization & Transformation

    • First, quantize data into m-D grid structure, then wavelet transform

      • a) scale 1: high resolution

      • b) scale 2: medium resolution

      • c) scale 3: low resolution

    Data Mining: Concepts and Techniques


    Chapter 7 cluster analysis8
    Chapter 7. Cluster Analysis

    • What is Cluster Analysis?

    • Types of Data in Cluster Analysis

    • A Categorization of Major Clustering Methods

    • Partitioning Methods

    • Hierarchical Methods

    • Density-Based Methods

    • Grid-Based Methods

    • Model-Based Methods

    • Clustering High-Dimensional Data

    • Constraint-Based Clustering

    • Outlier Analysis

    • Summary

    Data Mining: Concepts and Techniques


    Why constraint based cluster analysis
    Why Constraint-Based Cluster Analysis?

    • Need user feedback: Users know their applications the best

    • Fewer parameters but more user-desired constraints, e.g., an ATM allocation problem: obstacles & desired clusters

    Data Mining: Concepts and Techniques


    A classification of constraints in cluster analysis
    A Classification of Constraints in Cluster Analysis

    • Clustering in applications: desirable to have user-guided (i.e., constrained) cluster analysis

    • Different constraints in cluster analysis:

      • Constraints on individual objects (do selection first)

        • Cluster on houses worth over $300K

      • Constraints on distance or similarity functions

        • Weighted functions, obstacles (e.g., rivers, lakes)

      • Constraints on the selection of clustering parameters

        • # of clusters, MinPts, etc.

      • Semi-supervised: giving small training sets as “constraints” or hints

        • Pair-wise constraints

    Data Mining: Concepts and Techniques


    Clustering with user specified constraints
    Clustering with User-Specified Constraints

    • Example: Locating k delivery centers, each serving at least m valued customers and n ordinary ones

    • Proposed approach

      • Find an initial “solution” by partitioning the data set into k groups and satisfying user-constraints

      • Iteratively refine the solution by micro-clustering relocation (e.g., moving δμ-clusters from cluster Ci to Cj) and “deadlock” handling (break the microclusters when necessary)

      • Efficiency is improved by micro-clustering

    • How to handle more complicated constraints?

      • E.g., having approximately the same number of valued customers in each cluster?! — Can you solve it?

    Data Mining: Concepts and Techniques


    Clustering with obstacle objects
    Clustering With Obstacle Objects

    • K-medoids is preferable, since k-means may locate an ATM center in the middle of a lake

    • Visibility graph and shortest path

    • Triangulation and micro-clustering

    • Two kinds of join indices (shortest-paths) worth pre-computation

      • VV index: indices for any pair of obstacle vertices

      • MV index: indices for any pair of micro-cluster and obstacle vertex

    Data Mining: Concepts and Techniques


    An example clustering with obstacle objects
    An Example: Clustering With Obstacle Objects

    Not Taking obstacles into account

    Taking obstacles into account

    Data Mining: Concepts and Techniques


    Internal measures based on proximity graph
    Internal Measures Based on Proximity Graph

    • A proximity graph based approach can also be used for cohesion and separation.

      • Cluster cohesion is the sum of the weight of all links within a cluster.

      • Cluster separation is the sum of the weights between nodes in the cluster and nodes outside the cluster.
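
    A minimal sketch of both sums over a weighted adjacency matrix (the example graph is made up):

```python
import numpy as np

def cohesion_separation(W, labels, c):
    """Sum of link weights within cluster c vs. from c to the rest of the graph."""
    inside = labels == c
    cohesion = W[np.ix_(inside, inside)].sum() / 2   # count each within-link once
    separation = W[np.ix_(inside, ~inside)].sum()    # links crossing the boundary
    return cohesion, separation

W = np.array([[0, 2, 1, 0],
              [2, 0, 3, 1],
              [1, 3, 0, 0],
              [0, 1, 0, 0]], float)   # symmetric proximity graph
labels = np.array([0, 0, 0, 1])
print(cohesion_separation(W, labels, 0))   # (6.0, 1.0)
```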

    [Figure: a proximity graph with within-cluster links (cohesion) and links crossing cluster boundaries (separation).]

