
Clustering Algorithms

Padhraic Smyth

Department of Computer Science

CS 175, Fall 2007


Timeline

  • Today

    • Discussion of project presentations and final report

    • Overview of clustering algorithms and how they can be used with image data

  • Tuesday December 4th

    • No lecture (out of town)

  • Thursday Dec 6th: Student Presentations:

    • About 4 minutes per student, + questions

  • Wednesday Dec 12th: Final project reports due 12 noon to EEE

    • Instructions on format provided on the class Web site


Project Presentations

  • Thursday next week

    • Each student will make a 4 minute presentation + 1 minute for questions from the professor and/or students

    • 13 students * 5 minutes = 65 minutes + setup time

  • IMPORTANT:

    • We will go in a fixed order (alphabetical by last name – next slide)

    • Be here on time!

    • Your slides must be uploaded BEFORE 9am THURSDAY (day of presentations)

    • Powerpoint or PDF is acceptable


Order of Student Presentations

  • Austgen

  • Duran

  • Hall

  • Hooper

  • Kong

  • Lipeles

  • Newton

  • Nguyen (Nam)

  • Nguyen (Son)

  • Nilsen

  • Salanga

  • Schmitt

  • Sheldon

  • Rodriguez


Guidelines for Presentations

  • Your slides should at least contain the following elements:

    • Clear statement of what task/problem you are addressing

    • Outline of the technical approach you are taking

      • You will not have time to go into details

      • Provide a high-level description of your methods

        • e.g., a figure or flow chart

      • Show an example (e.g., of template matching, an edge map, etc.)

    • Describe your results so far

      • Visual examples, tables of accuracy numbers, etc.

      • It's OK if your project is not yet finished: describe what you have

  • General tips

    • Speak clearly and loudly – face the audience

    • Practice beforehand – know what is in your slides

    • Be creative – use figures rather than text where possible


Presentation Grading

  • 5% of your grade

  • You will get at least 2.5% for just showing up 

  • Remainder of the grade will depend on

    • How much work/effort did you put into your slides?

    • How clear are your slides and your presentation?

    • Creativity, e.g., a clever way to illustrate visually how your feature-extractor/detector/classifier is working

  • Questions?


Final Project Reports

  • Due noon Wednesday December 12th (to EEE)

  • See class Web page for detailed instructions

    • You will submit a 5 to 10 page report and your code

  • Worth 35% of your total grade

    • Make sure to spend time on the report

    • Much of your grade will depend on how well and how clearly your report is written

    • Lower grades will go to

      • Poorly written reports and/or poorly executed projects

    • A high grade will need

      • A well-written report AND a well-executed project

    • If your system is not performing accurately, don’t panic! Carefully describe what you did, and try to identify why it is not performing well (look at the errors, etc.). If you write a good report and document what you did, you can still get a high grade.



Putting 2 Vectors on the same Scale

  • Two vectors x and y

  • mx = mean(x), sx = standard_deviation(x)

  • Let x’ = (x – mx)/sx

    • x’ values now have mean 0 and standard deviation 1

    • Why?

      • Mean(x’) = mean( (x-mx)/sx ) = (mean(x) – mx)/sx = 0

      • Same type of argument for standard deviation

  • Can apply the same normalization to y to get y’

    • y’ = (y – my)/sy
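
For concreteness, here is a minimal MATLAB sketch of this normalization (an illustration written for these notes, not code from the class):

x  = [2 4 6 8 10];            % any feature vector
mx = mean(x);                 % mx = mean(x)
sx = std(x);                  % sx = standard_deviation(x)
xprime = (x - mx) ./ sx;      % x' = (x - mx)/sx
% mean(xprime) is 0 and std(xprime) is 1 (up to floating-point error);
% applying the same two lines to y gives y' = (y - my)/sy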


Applying this Idea to Template Matching

  • x = template -> x’ = normalized template

  • y = patch of image being matched to the template -> y’ = normalized patch

  • Normalized template matching

    • Replace template x by normalized template x’

      • Has mean pixel intensity 0 and standard deviation 1

      • Only needs to be done once (at start of function)

    • Replace each image patch y by normalized image patch y’

      • Has mean pixel intensity 0 and standard deviation 1

      • Likely to lead to better matching

      • However: the patch normalization has to be done at every patch in the image (so will slow down the template-matching code)






Modified Template-Matching Code

% Reshape template
reshtemp = reshape(template,1,tmrows*tmcols);

% Remove mean of template and divide by standard deviation:
reshtemp = (reshtemp - mean(reshtemp))./std(reshtemp);
% Template now has mean 0 and standard deviation 1

.......

for x=1:xspan
    for y=1:yspan

        % Take a piece of the image where the template is
        bite = image(y:y+tmrows-1,x:x+tmcols-1);

        % Reshape
        reshbite = reshape(bite,1,tmrows*tmcols);

        % Remove mean of "bite" and divide by standard deviation
        % (only if the standard deviation is nonzero):
        sbite = std(reshbite);
        if sbite > 0
            reshbite = (reshbite - mean(reshbite))./sbite;
        end
        % the patch (reshbite) now has mean 0 and standard deviation 1
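
As a toy check of why this helps (an illustrative example, not part of the class code), a patch that differs from the template only in overall brightness and contrast becomes identical to the template after normalization:

t  = [1 2 3 4];                      % "template" values
p  = 10 + 3*[1 2 3 4];               % same pattern, brighter and higher contrast
tn = (t - mean(t)) ./ std(t);        % normalized template
pn = (p - mean(p)) ./ std(p);        % normalized patch
disp(sum((tn - pn).^2))              % prints (essentially) 0: a perfect match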



Unsupervised Learning or Clustering

  • In “supervised learning” each data point had a class label

  • in many problems there are no class labels

    • this is “unsupervised learning”

    • human learning: how do we form categories of objects?

      • Humans are good at creating groups/categories/clusters from data

    • in image analysis finding groups in data is very useful

      • e.g., can find pixels with similar intensities

        • -> automatically finds regions in images

      • e.g., can find images that are similar -> can automatically find classes/clusters of images


Example: Data in 2 Clusters

[Scatter plot: data points forming two groups; axes: Feature 1 (horizontal) and Feature 2 (vertical)]


The Clustering Problem

  • Let x = (x1, x2, …, xd) be a d-dimensional feature vector

  • Let D be a set of x vectors,

    • D = { x(1), x(2), …, x(N) }

  • Given data D, group the N vectors into K groups such that the grouping is “optimal”

  • One definition of “optimal”:

    • Let mean_k be the mean (centroid) of the kth group

    • Let d_i be the distance from vector x(i) to the closest mean

      • so each data point x(i) is assigned to one of the K means


Optimal Clustering

  • Let mean_k be the mean (centroid) of the kth cluster

    • mean_k is the average vector of all vectors x “assigned to” cluster k

      • mean_k = (1/n_k) Σ x(i),

      • where n_k = the number of vectors assigned to cluster k and the sum is over those x(i)

  • One definition of “optimal”:

    • Let d_i be the distance from vector x(i) to the closest mean

      • so each data point x(i) is assigned to one of the K means

      • Q_k = quality of cluster k = Σ d_i,

        • where the sum is over x(i) assigned to cluster k

        • the Q_k’s measure how “compact” each cluster is

      • We want to minimize the total sum of the Q_k’s


    The Total Squared Error Objective function

    • Let d_i = distance from feature vector x(i) to the closest mean = squared Euclidean distance between x(i) and mean_k

    • Now Q_k = sum of squared distances for points in cluster k

    • Total Squared Error (TSE)

      • TSE = Total Squared Error = Σ Q_k

        • where sum is over all K clusters (and each Q_k is itself a sum)

      • TSE measures how “compact” a clustering is
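
    As a small sketch, the quantities from the last two slides can be computed in MATLAB as follows (the toy data and cluster assignments are made up for illustration):

    D          = [0 0; 1 1; 0 1; 8 8; 9 9; 8 9];   % N = 6 two-dimensional feature vectors
    membership = [1 1 1 2 2 2];                    % assumed cluster assignments
    K          = 2;
    TSE = 0;
    for k = 1:K
        Xk     = D(membership == k, :);                            % vectors assigned to cluster k
        mean_k = mean(Xk, 1);                                      % centroid of cluster k
        d      = sum((Xk - repmat(mean_k, size(Xk,1), 1)).^2, 2);  % squared distances d_i
        Q_k    = sum(d);                                           % compactness of cluster k
        TSE    = TSE + Q_k;                                        % TSE = sum of the Q_k's
    end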


    Example: Data in 2 Clusters

    [Scatter plot: the same two-cluster data; axes: Feature 1 and Feature 2]


    “Compact” Clustering: Low TSE

    [Scatter plot: two tight clusters, each with its center marked (Cluster Center 1, Cluster Center 2); axes: Feature 1 and Feature 2]


    “Compact” Clustering: Low TSE

    [Same scatter plot, with Cluster Center 1 and Cluster Center 2 marked; axes: Feature 1 and Feature 2]

    Here we have 2 clusters, and TSE = Q1 + Q2


    “Non-Compact” Clustering: High TSE

    [Scatter plot: the same data with poorly placed cluster centers (Cluster Center 1, Cluster Center 2); axes: Feature 1 and Feature 2]

    TSE = Q1 + Q2 would be much higher now, so we want to find the cluster centers that minimize TSE.


    The Clustering Problem

    • Let D be a set of x vectors,

      • D = { x(1), x(2), …, x(N) }

    • Fix a value for K, e.g., K = 2

    • Find the locations of the K means that minimize the TSE

      • no direct solution

        • Exhaustive search: how many possible clusterings of N objects into K subsets?

          • O(K^N) -> way too many to search directly

      • can use an iterative greedy search algorithm to minimize TSE


    The K-means Algorithm for Clustering

    Inputs: data D, with N feature vectors

    K = number of clusters

    Outputs: K mean vectors (centers of K clusters)

    memberships for each of the N feature vectors


    The K-means Algorithm for Clustering

    kmeans(D, K)

        choose K initial means randomly (e.g., pick K points randomly from D)
        means_are_changing = 1

        while means_are_changing

            % assign each point to a cluster
            for i = 1:N
                membership[x(i)] = cluster with mean closest to x(i)
            end

            % update the means
            for k = 1:K
                mean_k = average of vectors x(i) assigned to cluster k
            end

            % check for convergence
            if (new means are the same as old means) then means_are_changing = 0
            else means_are_changing = 1
            end

        end
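
    A minimal runnable MATLAB version of this pseudocode might look as follows (a sketch written for these notes; it is not the Kmeans.m distributed with the class code):

    function [means, membership] = kmeans_sketch(D, K)
    % D: N x d matrix of feature vectors (one row per vector); K: number of clusters
    N = size(D, 1);
    p = randperm(N);
    means = D(p(1:K), :);                      % pick K data points as the initial means
    membership = zeros(N, 1);
    means_are_changing = 1;
    while means_are_changing
        % assign each point to the cluster with the closest mean
        for i = 1:N
            d2 = sum((means - repmat(D(i,:), K, 1)).^2, 2);   % squared distance to each mean
            [dmin, membership(i)] = min(d2);                  % dmin is unused
        end
        % update the means (keep the old mean if a cluster is empty)
        new_means = means;
        for k = 1:K
            if any(membership == k)
                new_means(k, :) = mean(D(membership == k, :), 1);
            end
        end
        % check for convergence: stop when the means no longer move
        means_are_changing = any(new_means(:) ~= means(:));
        means = new_means;
    end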











    Comments on the K-means algorithm

    • Time Complexity

      • per iteration = O(KNd)

    • Can prove that TSE decreases (or converges) at each iteration

    • Does it find the global minimum of TSE?

      • No, not necessarily

      • in a sense it is doing “steepest descent” from a random initial starting point

      • thus, results will be sensitive to the starting point

        • in practice, we can run it from multiple starting points and pick the solution with the lowest TSE (the most “compact” solution)
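
    A quick sketch of this multi-start strategy, assuming D, K, and the kmeans_sketch function from the sketch above:

    best_TSE = Inf;
    for run = 1:10                                     % try 10 random starting points
        [m_run, memb_run] = kmeans_sketch(D, K);
        TSE = 0;                                       % total squared error of this run
        for i = 1:size(D, 1)
            TSE = TSE + sum((D(i,:) - m_run(memb_run(i), :)).^2);
        end
        if TSE < best_TSE                              % keep the most "compact" solution
            best_TSE = TSE;
            best_means = m_run;
            best_membership = memb_run;
        end
    end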


    Clustering Pixels in an Image

    • We can use K-means to cluster pixel intensities in an image into K clusters

      • this provides a simple way to “segment” an image into K regions of similar “compact” image intensities

      • more automated than manual thresholding of an image

    • How to do this?

      • Size(image pixel matrix) = m x n

      • convert to a vector with (m x n) rows and 1 column

        • this is a 1-dimensional feature vector of pixel intensities

      • run the k-means algorithm with input = vector of intensities

      • assign each pixel the “grayscale” of the cluster it is assigned to

      • Note: with color images we can use a 3-dimensional feature vector per pixel, i.e., R, G, B values at each pixel
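
    A hedged sketch of this recipe (it uses the kmeans_sketch function above rather than the class's Segmentimage.m, and the filename is a placeholder):

    im = double(imread('my_gray_image.png'));        % any m x n grayscale image
    [m, n] = size(im);
    D = reshape(im, m*n, 1);                         % (m x n) rows, 1 column: one intensity per pixel
    K = 5;
    [means, membership] = kmeans_sketch(D, K);       % cluster the 1-d intensities
    segmented = reshape(means(membership), m, n);    % give each pixel the "grayscale" of its cluster
    imagesc(segmented); colormap(gray); axis image;  % display the K-region segmentation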


    Clustering in RGB (color) space

    [Figure: an image and its "clusters on color": K-means clustering of RGB (3-value) pixel color intensities, K = 11 segments (courtesy of David Forsyth, UC Berkeley)]
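
    For color images the same idea applies with a 3-dimensional feature vector per pixel; a sketch under the same assumptions (placeholder filename, kmeans_sketch from above):

    rgb = double(imread('my_color_image.png'));          % m x n x 3 color image
    [m, n, c] = size(rgb);
    D = reshape(rgb, m*n, 3);                            % N = m*n rows, 3 columns (R, G, B)
    K = 11;
    [means, membership] = kmeans_sketch(D, K);           % cluster in RGB space
    segmented = reshape(means(membership, :), m, n, 3);  % recolor each pixel with its cluster's mean color
    image(uint8(segmented)); axis image;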





    Segmentation with K=5

    Note: what K-means is doing, in effect, is finding 4 threshold intensities (based on the data) and assigning each intensity to 1 of 5 “bins” (clusters) based on these thresholds.
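
    In other words, with 1-d intensities, nearest-mean assignment is equivalent to thresholding: the implied thresholds sit halfway between adjacent sorted cluster means (a sketch, assuming 'means' from a K = 5 run of kmeans_sketch):

    sorted_means = sort(means);
    thresholds = (sorted_means(1:end-1) + sorted_means(2:end)) / 2;   % the 4 threshold intensities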





    Segmentation with K=8 (with pseudocolor display)

    colormap('hsv')


    Using pixel clustering for region finding

    • How could you use K-means in your project?

      • K-means puts pixels into K groups based on intensity similarities

      • The result is a set of regions in an image, where each region is relatively homogeneous in terms of pixel intensity

      • => K-means can be used as a simple technique for region-finding

      • Note that K-means clustering knows nothing about the spatial aspects of the image

        • Other region-finding algorithms can operate spatially (more on this in a later lecture)


    Using pixel clustering for region finding

    • How could you use K-means in your project?

      • K-means puts pixels into K groups based on intensity similarities

      • The result is a set of regions in an image, where each region is relatively homogeneous in terms of pixel intensity

      • => K-means can be used as a simple technique for region-finding

      • Note that K-means clustering knows nothing about the spatial aspects of the image

        • Other region-finding algorithms can operate spatially

    • Note that regions and edges are “duals”: edges <-> regions

      • So one could find regions given edges

      • Or one could find edges given regions (e.g., boundaries between clusters of pixels produced by K-means); see the sketch below
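
    A sketch of "edges given regions" (an illustration, not class code): mark pixels where the K-means cluster label differs from the neighbor to the right or below, assuming membership, m, and n from the pixel-clustering sketch earlier:

    labels = reshape(membership, m, n);              % cluster label for each pixel
    edges  = false(m, n);
    edges(1:end-1, :) = edges(1:end-1, :) | (labels(1:end-1, :) ~= labels(2:end, :));   % vertical neighbors
    edges(:, 1:end-1) = edges(:, 1:end-1) | (labels(:, 1:end-1) ~= labels(:, 2:end));   % horizontal neighbors
    imagesc(edges); colormap(gray); axis image;      % boundary pixels between regions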


    Clustering Images

    • We can also cluster sets of images into groups

      • now each vector = a full image (dimensions 1 x (m x n))

      • N images of size m x n

        • convert to a matrix with N rows and (m x n) columns

          • just use image_to_matrix.m

      • call kmeans with D = this matrix

        • kmeans is now clustering in an (m x n) dimensional space

      • kmeans will group the images into K groups
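
    A sketch of this image-clustering recipe (it does not use the class's image_to_matrix.m; the filenames are placeholders, and all images are assumed to be the same size):

    files = {'img1.png', 'img2.png', 'img3.png'};    % N image files, each m x n
    N = numel(files);
    first = double(imread(files{1}));
    [m, n] = size(first);
    D = zeros(N, m*n);                               % N rows, (m x n) columns
    for i = 1:N
        im = double(imread(files{i}));
        D(i, :) = reshape(im, 1, m*n);               % each image becomes one row vector
    end
    K = 2;
    [means, membership] = kmeans_sketch(D, K);       % membership(i) = group of image i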





    Matlab Code

    • Code for k-means on Web page: kmeans_clustering.zip

      • Kmeans.m

        • Does the basic clustering

      • Segmentimage.m

        • Uses k-means to cluster pixel intensities

      • Clusterimage.m

        • Uses k-means to cluster images


    Summary

    • Clustering

      • automated methods to assign feature vectors to K clusters

      • K-means algorithm

      • With images, can use K-means to

        • Cluster pixels into groups of pixels

        • Cluster images into groups of images


    Timeline

    • Tuesday December 4th

      • No lecture (out of town)

    • Thursday Dec 6th: Student Presentations:

      • About 3-5 slides, 4 minutes per student + questions

      • IMPORTANT: upload your slides to EEE before 9am Thursday!

    • Wednesday Dec 12th: Final project reports due 12 noon to EEE

      • Instructions on format provided on the class Web site

