Machine Learning: k-Nearest Neighbor and Support Vector Machines
CMSC 471. Reading: skim sections 20.4 and 20.6-20.7.

Revised End-of-Semester Schedule
  • Wed 11/21 Machine Learning IV
  • Mon 11/26 Philosophy of AI (You must read the three articles!)
  • Wed 11/28 Special Topics
  • Mon 12/3 Special Topics
  • Wed 12/5 Review / Tournament dry run #2 (HW6 due)
  • Mon 12/10 Tournament
  • Wed 12/19 FINAL EXAM (1:00pm - 3:00pm) (Project and final report due)

NO LATE SUBMISSIONS ALLOWED!

  • Special Topics
    • Robotics
    • AI in Games
    • Natural language processing
    • Multi-agent systems
k-Nearest Neighbor Instance-Based Learning

Some material adapted from slides by Andrew Moore, CMU. Visit http://www.autonlab.org/tutorials/ for Andrew’s repository of Data Mining tutorials.

1-Nearest Neighbor
  • One of the simplest of all machine learning classifiers
  • Simple idea: label a new point the same as the closest known point

Label it red.

1-Nearest Neighbor
  • A type of instance-based learning
    • Also known as “memory-based” learning
  • Forms a Voronoi tessellation of the instance space
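As a concrete sketch of the 1-NN rule in plain Python (illustrative only, with made-up toy points, not code from the slides):

```python
import math

def one_nn(train, query):
    """Label a query point the same as its single closest training point.

    train: list of (point, label) pairs, where point is a tuple of floats.
    """
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    _, nearest_label = min(train, key=lambda pl: dist(pl[0], query))
    return nearest_label

# Hypothetical toy data: two red points, two blue points.
data = [((0.0, 0.0), "red"), ((0.2, 0.1), "red"),
        ((5.0, 5.0), "blue"), ((5.1, 4.9), "blue")]
print(one_nn(data, (0.3, 0.2)))  # closest known point is red
```

Each training point implicitly "owns" the Voronoi cell of queries for which it is the nearest stored instance.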
Distance Metrics
  • Different metrics can change the decision surface
  • Standard Euclidean distance metric:
    • Two-dimensional: Dist(a,b) = sqrt((a1 – b1)² + (a2 – b2)²)
    • Multivariate: Dist(a,b) = sqrt(∑i (ai – bi)²)

Dist(a,b) = sqrt((a1 – b1)² + (a2 – b2)²)

Dist(a,b) = sqrt((a1 – b1)² + (3a2 – 3b2)²)
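Both formulas fit one pattern: a Euclidean distance with an optional per-dimension weight (the second metric weights dimension 2 by 3). A minimal sketch, assuming tuples of floats:

```python
import math

def weighted_euclidean(a, b, weights=None):
    """Euclidean distance with optional per-dimension weights.
    Rescaling a dimension changes which stored points are 'closest',
    and hence changes the nearest-neighbor decision surface."""
    if weights is None:
        weights = [1.0] * len(a)
    return math.sqrt(sum((w * (ai - bi)) ** 2
                         for w, ai, bi in zip(weights, a, b)))

print(weighted_euclidean((0, 0), (3, 4)))          # 5.0
print(weighted_euclidean((0, 0), (3, 4), [1, 3]))  # sqrt(9 + 144), about 12.37
```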

Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.

Four Aspects of an Instance-Based Learner:
  • A distance metric
  • How many nearby neighbors to look at?
  • A weighting function (optional)
  • How to fit with the local points?


1-NN’s Four Aspects as an Instance-Based Learner:
  • A distance metric
    • Euclidean
  • How many nearby neighbors to look at?
    • One
  • A weighting function (optional)
    • Unused
  • How to fit with the local points?
    • Just predict the same output as the nearest neighbor.


Zen Gardens

Mystery of renowned zen garden revealed [CNN Article]

Thursday, September 26, 2002 Posted: 10:11 AM EDT (1411 GMT)

LONDON (Reuters) -- For centuries visitors to the renowned Ryoanji Temple garden in Kyoto, Japan have been entranced and mystified by the simple arrangement of rocks.

The five sparse clusters on a rectangle of raked gravel are said to be pleasing to the eyes of the hundreds of thousands of tourists who visit the garden each year.

Scientists in Japan said on Wednesday they now believe they have discovered its mysterious appeal.

"We have uncovered the implicit structure of the Ryoanji garden's visual ground and have shown that it includes an abstract, minimalist depiction of natural scenery," said Gert Van Tonder of Kyoto University.

The researchers discovered that the empty space of the garden evokes a hidden image of a branching tree that is sensed by the unconscious mind.

"We believe that the unconscious perception of this pattern contributes to the enigmatic appeal of the garden," Van Tonder added.

He and his colleagues believe that whoever created the garden during the Muromachi era between 1333-1573 knew exactly what they were doing and placed the rocks around the tree image.

By using a concept called medial-axis transformation, the scientists showed that the hidden branched tree converges on the main area from which the garden is viewed.

The trunk leads to the prime viewing site in the ancient temple that once overlooked the garden. It is thought that abstract art may have a similar impact.

"There is a growing realisation that scientific analysis can reveal unexpected structural features hidden in controversial abstract paintings," Van Tonder said.

Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.

k-Nearest Neighbor
  • Generalizes 1-NN to smooth away noise in the labels
  • A new point is now assigned the most frequent label of its k nearest neighbors

Label it red when k = 3.

Label it blue when k = 7.
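The k = 3 versus k = 7 example can be reproduced with a majority vote over the k nearest stored points (a sketch on hypothetical data, arranged so that 3 red points sit nearer the query than 4 blue ones):

```python
import math
from collections import Counter

def knn(train, query, k):
    """Assign the most frequent label among the k nearest training points."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    neighbors = sorted(train, key=lambda pl: dist(pl[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy data: 3 red points near the query, 4 blue points farther away.
data = ([((i * 0.1, 0.0), "red") for i in range(3)] +
        [((1.0 + i * 0.1, 0.0), "blue") for i in range(4)])
print(knn(data, (0.0, 0.0), k=3))  # all 3 neighbors are red
print(knn(data, (0.0, 0.0), k=7))  # 4 blue votes outnumber 3 red
```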

k-Nearest Neighbor (k = 9)

[Figure: k-NN regression fits on noisy datasets, with per-panel commentary:]

Appalling behavior! Loses all the detail that 1-nearest neighbor would give. The tails are horrible!

A magnificent job of noise smoothing. Three cheers for 9-nearest-neighbor. But the lack of gradients and the jerkiness isn’t good.

Fits much less of the noise, captures trends. But still, frankly, pathetic compared with linear regression.

Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.

Support Vector Machines and Kernels

Doing Really Well with Linear Decision Surfaces

Adapted from slides by Tim Oates, Cognition, Robotics, and Learning (CORAL) Lab, University of Maryland Baltimore County.


Outline

  • Prediction
    • Why might predictions be wrong?
  • Support vector machines
    • Doing really well with linear models
  • Kernels
    • Making the non-linear linear
Supervised ML = Prediction
  • Given training instances (x,y)
  • Learn a model f
  • Such that f(x) = y
  • Use f to predict y for new x
  • Many variations on this basic theme
Why might predictions be wrong?
  • True Non-Determinism
    • Flip a biased coin
    • p(heads) = θ
    • Estimate θ
    • If θ > 0.5 predict heads, else tails
    • Lots of ML research on problems like this
      • Learn a model
      • Do the best you can in expectation
Why might predictions be wrong?
  • Partial Observability
    • Something needed to predict y is missing from observation x
    • N-bit parity problem
      • x contains N-1 bits (hard PO)
      • x contains N bits but learner ignores some of them (soft PO)
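The hard-partial-observability claim can be checked exhaustively for a small N (an illustrative sketch; N = 4 is chosen arbitrarily):

```python
import itertools

def parity(bits):
    """N-bit parity label: 1 if an odd number of bits are set."""
    return sum(bits) % 2

# Hard partial observability: the learner sees only N-1 of the N bits.
# Every visible pattern is then consistent with BOTH labels, so no
# learner can do better than chance on this problem.
N = 4
for visible in itertools.product([0, 1], repeat=N - 1):
    labels = {parity(visible + (hidden,)) for hidden in (0, 1)}
    assert labels == {0, 1}
print("every visible pattern is consistent with both labels")
```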
Why might predictions be wrong?
  • True non-determinism
  • Partial observability
    • hard, soft
  • Representational bias
  • Algorithmic bias
  • Bounded resources
Representational Bias
  • Having the right features (x) is crucial

[Figure: the same X and O points plotted under two different feature representations]

Support Vector Machines

Doing Really Well with Linear Decision Surfaces

Strengths of SVMs
  • Good generalization in theory
  • Good generalization in practice
  • Work well with few training instances
  • Find globally best model
  • Efficient algorithms
  • Amenable to the kernel trick
Linear Separators
  • Training instances
    • x ∈ ℜⁿ
    • y ∈ {-1, 1}
  • w ∈ ℜⁿ
  • b ∈ ℜ
  • Hyperplane
    • <w, x> + b = 0
    • w1x1 + w2x2 + … + wnxn + b = 0
  • Decision function
    • f(x) = sign(<w, x> + b)
  • Math Review
  • Inner (dot) product:
    • <a, b> = a · b = ∑i ai bi = a1b1 + a2b2 + … + anbn
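The decision function is direct to write down (a minimal sketch; the hyperplane w, b below is a made-up example, not a learned one):

```python
def dot(a, b):
    """Inner product <a, b> = sum_i a_i * b_i."""
    return sum(ai * bi for ai, bi in zip(a, b))

def f(w, b, x):
    """Decision function f(x) = sign(<w, x> + b), returning +1 or -1."""
    return 1 if dot(w, x) + b >= 0 else -1

# Hypothetical hyperplane x1 + x2 - 1 = 0, i.e. w = (1, 1), b = -1.
w, b = (1.0, 1.0), -1.0
print(f(w, b, (2.0, 2.0)))  # <w, x> + b = 3 > 0, so +1
print(f(w, b, (0.0, 0.0)))  # <w, x> + b = -1 < 0, so -1
```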
Intuitions

[Figure: a sequence of slides drawing several candidate linear separators through the same X/O point set]
A “Good” Separator

[Figure: a single linear separator cleanly dividing the X and O points]
Lots of Noise

[Figure: the same kind of X/O point set, now with noisy, overlapping points]
Maximizing the Margin

[Figure: a separator chosen to maximize the margin between the two classes]
“Fat” Separators

[Figure: a “fat” separator, a slab of maximal width fit between the two classes]
Support Vectors

[Figure: the maximum-margin separator with the support vectors lying on the margin boundaries]
The Math
  • Training instances
    • x ∈ ℜⁿ
    • y ∈ {-1, 1}
  • Decision function
    • f(x) = sign(<w,x> + b)
    • w ∈ ℜⁿ
    • b ∈ ℜ
  • Find w and b that
    • Perfectly classify training instances
      • Assuming linear separability
    • Maximize margin
The Math
  • For perfect classification, we want
    • yi (<w,xi> + b) ≥ 0 for all i
    • Why? The sign of <w,xi> + b then agrees with the label yi on every training instance.
  • To maximize the margin, we want
    • w that minimizes |w|², subject to the rescaled constraint yi (<w,xi> + b) ≥ 1
Dual Optimization Problem
  • Maximize over α
    • W(α) = Σi αi - 1/2 Σi,j αiαj yi yj <xi, xj>
  • Subject to
    • αi ≥ 0
    • Σi αi yi = 0
  • Decision function
    • f(x) = sign(Σi αi yi <x, xi> + b)
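For intuition, the dual can be solved by brute force on a tiny hypothetical dataset of two 1-D points; this is an illustrative sketch, not a practical QP solver:

```python
# Hypothetical dataset: x1 = +1 labeled +1, x2 = -1 labeled -1.
# The constraint sum_i alpha_i * y_i = 0 forces alpha_1 = alpha_2 = a,
# so W reduces to a function of the single scalar a.
xs, ys = [1.0, -1.0], [1.0, -1.0]

def W(a):
    alphas = [a, a]
    linear = sum(alphas)
    quad = sum(alphas[i] * alphas[j] * ys[i] * ys[j] * xs[i] * xs[j]
               for i in range(2) for j in range(2))
    return linear - 0.5 * quad   # W(a) = 2a - 2a^2 for this data

# Brute-force maximizer over a grid (respecting alpha_i >= 0).
best_a = max((i / 1000 for i in range(1001)), key=W)

# Recover the primal weight: w = sum_i alpha_i * y_i * x_i.
w = sum(best_a * y * x for y, x in zip(ys, xs))
print(best_a, w)   # a = 0.5, w = 1.0: the maximum-margin separator (b = 0)
```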
Strengths of SVMs
  • Good generalization in theory
  • Good generalization in practice
  • Work well with few training instances
  • Find globally best model
  • Efficient algorithms
  • Amenable to the kernel trick …
What if Surface is Non-Linear?

[Figure: a cluster of X points enclosed by a ring of O points; no linear separator exists]

Image from http://www.atrandomresearch.com/iclass/

Kernel Methods

Making the Non-Linear Linear

When Linear Separators Fail

[Figure: points ordered X X O O X X along the x1 axis are not linearly separable in (x1, x2), but become separable after mapping x1 to (x1, x1²)]
Mapping into a New Feature Space
  • Rather than run SVM on xi, run it on Φ(xi)
  • Find non-linear separator in input space
  • What if Φ(xi) is really big?
  • Use kernels to compute it implicitly!

Φ : x → X = Φ(x)

Φ(x1,x2) = (x1, x2, x1², x2², x1x2)

Image from http://web.engr.oregonstate.edu/~afern/classes/cs534/

Kernels
  • Find kernel K such that
    • K(x1,x2) = <Φ(x1), Φ(x2)>
  • Computing K(x1,x2) should be efficient, much more so than computing Φ(x1) and Φ(x2)
  • Use K(x1,x2) in SVM algorithm rather than <x1,x2>
  • Remarkably, this is possible
The Polynomial Kernel
  • K(x1,x2) = <x1, x2>²
    • x1 = (x11, x12)
    • x2 = (x21, x22)
  • <x1, x2> = (x11x21 + x12x22)
  • <x1, x2>² = (x11²x21² + x12²x22² + 2x11x12x21x22)
  • Φ(x1) = (x11², x12², √2 x11x12)
  • Φ(x2) = (x21², x22², √2 x21x22)
  • K(x1,x2) = <Φ(x1), Φ(x2)>
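The identity on this slide can be verified numerically (a quick check with two arbitrary 2-D points):

```python
import math

def K(x1, x2):
    """Degree-2 polynomial kernel: <x1, x2>^2."""
    return (x1[0] * x2[0] + x1[1] * x2[1]) ** 2

def phi(x):
    """Explicit feature map: (x1^2, x2^2, sqrt(2)*x1*x2)."""
    return (x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1])

x1, x2 = (3.0, 1.0), (2.0, 4.0)
lhs = K(x1, x2)                                      # one 2-D dot product, squared
rhs = sum(a * b for a, b in zip(phi(x1), phi(x2)))   # dot product in 3-D feature space
print(lhs, round(rhs, 9))  # both equal 100.0 (up to float rounding)
```

The kernel evaluates one inner product in the input space; the explicit map does the same computation in the higher-dimensional feature space.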
The Polynomial Kernel
  • Φ(x) contains all monomials of degree d
  • Useful in visual pattern recognition
  • Number of monomials
    • 16x16 pixel image
    • 10^10 monomials of degree 5
  • Never explicitly compute Φ(x)!
  • Variation: K(x1,x2) = (<x1, x2> + 1)²
A Few Good Kernels
  • Dot product kernel
    • K(x1,x2) = <x1,x2>
  • Polynomial kernel
    • K(x1,x2) = <x1,x2>^d (monomials of degree d)
    • K(x1,x2) = (<x1,x2> + 1)^d (all monomials of degree 1,2,…,d)
  • Gaussian kernel
    • K(x1,x2) = exp(-|x1-x2|²/2σ²)
    • Radial basis functions
  • Sigmoid kernel
    • K(x1,x2) = tanh(η<x1,x2> + ν)
    • Neural networks
  • Establishing “kernel-hood” from first principles is non-trivial
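Each of these kernels is a one-liner in plain Python (a sketch; η and ν below are the sigmoid kernel's parameters from the list above):

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def poly_kernel(x1, x2, d):
    """<x1, x2>^d: monomials of degree exactly d (add 1 inside for degrees 1..d)."""
    return dot(x1, x2) ** d

def gaussian_kernel(x1, x2, sigma):
    """RBF kernel: exp(-|x1 - x2|^2 / (2 sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-sq / (2 * sigma ** 2))

def sigmoid_kernel(x1, x2, eta, nu):
    """tanh(eta * <x1, x2> + nu); not positive definite for all parameter choices."""
    return math.tanh(eta * dot(x1, x2) + nu)

x1, x2 = (1.0, 2.0), (2.0, 1.0)
print(poly_kernel(x1, x2, 2))                  # 4^2 = 16.0
print(round(gaussian_kernel(x1, x2, 1.0), 4))  # exp(-1), about 0.3679
```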
The Kernel Trick

“Given an algorithm which is formulated in terms of a positive definite kernel K1, one can construct an alternative algorithm by replacing K1 with another positive definite kernel K2”

  • SVMs can use the kernel trick
Using a Different Kernel in the Dual Optimization Problem
  • For example, using the polynomial kernel with d = 4 (including lower-order terms).
  • Maximize over α
    • W(α) = Σi αi - 1/2 Σi,j αiαj yi yj (<xi, xj> + 1)⁴
  • Subject to
    • αi ≥ 0
    • Σi αi yi = 0
  • Decision function
    • f(x) = sign(Σi αi yi (<x, xi> + 1)⁴ + b)
  • The inner products <xi, xj> and <x, xi> are themselves kernels, so by the kernel trick we simply replace them with (<xi, xj> + 1)⁴.
Conclusion
  • SVMs find the optimal (maximum-margin) linear separator
  • The kernel trick makes SVMs non-linear learning algorithms