
Support Vector Machines

This presentation will guide you through Support Vector Machines: properties of an SVM, linear separability, the linear discriminant, selecting the hyperplane, support vectors, non-linearly separable data, and multi-category classification.


Presentation Transcript


  1. Support Vector Machines

  2. Support Vector Machines Support vector machines (SVMs) are supervised learning models used for classification and regression analysis; their associated learning algorithms analyze data to study and identify patterns.
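As a quick illustration of that classification/regression duality, here is a minimal sketch assuming scikit-learn is available; the toy data and parameters are made up for demonstration:

```python
# Minimal sketch: the same SVM machinery covers classification (SVC)
# and regression (SVR). The data below is a made-up toy example.
from sklearn.svm import SVC, SVR

X = [[0.0], [1.0], [2.0], [3.0]]   # four 1-feature samples
y_class = [0, 0, 1, 1]             # class labels (classification)
y_reg = [0.1, 1.1, 1.9, 3.2]       # continuous targets (regression)

clf = SVC(kernel='linear').fit(X, y_class)
reg = SVR(kernel='linear').fit(X, y_reg)
print(clf.predict([[1.5]]))  # predicted class for a new sample
print(reg.predict([[1.5]]))  # predicted value for a new sample
```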

  3. Properties of an SVM Non-probabilistic binary linear classifier. Support for non-linear classification using the 'kernel trick'.

  4. Linear separability If points in three-dimensional space can be separated by a two-dimensional hyperplane, they are linearly separable; in general, points in n-dimensional space are linearly separable if some (n-1)-dimensional hyperplane divides them. Example - the two sets of 2D data in the image are separated by a single straight hyperplane (a 1D line), and hence are linearly separable.
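One way to see this in code: generate two well-separated 2D clusters and check that a linear SVM separates them perfectly. This is a hedged sketch assuming scikit-learn; the make_blobs parameters are arbitrary illustrative choices.

```python
# Sketch: two well-separated 2-D clusters are linearly separable, so a
# linear SVM with a very large C (approximating a hard margin) should
# classify the training set perfectly.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.5, random_state=0)
clf = SVC(kernel='linear', C=1e6).fit(X, y)
print(clf.score(X, y))  # expected 1.0: a separating line (1-D hyperplane) exists
```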

  5. Linear Discriminant The hyperplane that separates the two sets of data is called the linear discriminant. Equation: W^T X = C, where W = [w1, w2, ..., wn] and X = [x1, x2, ..., xn] for n dimensions. An SVM is simply a linear discriminant that tries to build a hyperplane with a large margin. It classifies a new sample by simply computing its signed distance from the hyperplane.
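To make that decision rule concrete, here is a small sketch; the weight vector w and threshold c are hypothetical values standing in for a trained model:

```python
# Sketch of classifying with a linear discriminant W^T X = C:
# the predicted class is the sign of w.x - c, and |w.x - c| / ||w||
# is the sample's distance from the hyperplane.
import numpy as np

w = np.array([2.0, -1.0])  # hypothetical learned weights
c = 0.5                    # hypothetical learned threshold

def classify(x):
    score = float(w @ x) - c
    distance = abs(score) / np.linalg.norm(w)
    return (1 if score >= 0 else -1), distance

print(classify(np.array([1.0, 0.0])))  # positive side of the hyperplane
```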

  6. Selecting the hyperplane Every linearly separable data set, no matter how large, admits infinitely many separating hyperplanes. Therefore, a decision must be made about which one is most appropriate for classification: the SVM picks the hyperplane with the largest margin.
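For reference, the textbook way this choice is formalized (not spelled out on the slide) is margin maximization, which in the notation of slide 5 reads:

```latex
% Hard-margin SVM: among all hyperplanes W^T X = C separating the classes,
% pick the one with the largest margin 2 / ||W||, equivalently:
\min_{W,\,C} \; \tfrac{1}{2}\lVert W \rVert^2
\quad \text{subject to} \quad
y_i \left( W^\top X_i - C \right) \ge 1 \;\; \text{for all } i
```

Maximizing the margin 2/||W|| is the same as minimizing ||W||^2, which turns hyperplane selection into a well-defined optimization problem.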

  7. Support Vectors Observations (represented as vectors) that lie at the marginal distance from the hyperplane are called support vectors. They are important because shifting them even slightly can change the position of the hyperplane to a great extent. Example - [Figure: two classes of '+' points plotted against axes X1 and X2; the vectors lying on the green lines are the support vectors, and the green lines are the support planes.]
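In practice, a fitted model exposes these points directly. A hedged sketch assuming scikit-learn (the dataset parameters are illustrative):

```python
# Sketch: after fitting, an SVC stores the support vectors -- the
# training points on the margin that determine the hyperplane.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.8, random_state=0)
clf = SVC(kernel='linear', C=1.0).fit(X, y)
print(clf.support_vectors_)  # coordinates of the support vectors
print(clf.n_support_)        # number of support vectors per class
```

Only these few points matter: refitting after deleting any sample that is not a support vector leaves the hyperplane unchanged.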

  8. Non linearly separable In this case, an SVM would not be able to linearly classify the data, so it uses what is known as the 'kernel trick': the feature space is enlarged, and the enlarged feature space may admit a linear boundary that corresponds to a non-linear boundary in the original feature space. The enlargement can be done using various kernel functions.
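A common illustration: concentric circles have no separating line in the original 2D space, but an RBF kernel enlarges the feature space enough for a linear boundary to exist there. A sketch assuming scikit-learn (parameters are illustrative):

```python
# Sketch: linearly inseparable circles defeat a linear kernel but are
# handled by the RBF kernel, which implicitly enlarges the feature space.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
print(SVC(kernel='linear').fit(X, y).score(X, y))  # roughly 0.5 (chance level)
print(SVC(kernel='rbf').fit(X, y).score(X, y))     # close to 1.0
```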

  9. Multi-Category Classification One-Versus-One classification. One-Versus-All classification.
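Both strategies can be sketched with scikit-learn's wrappers around a binary SVC; the iris dataset and linear kernel are illustrative choices, not from the slides:

```python
# Sketch: One-Versus-One trains a classifier per pair of classes;
# One-Versus-All (called One-Vs-Rest in scikit-learn) trains one
# classifier per class against all the others.
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # 3 classes -> 3 OvO pairs, 3 OvR models
ovo = OneVsOneClassifier(SVC(kernel='linear')).fit(X, y)
ovr = OneVsRestClassifier(SVC(kernel='linear')).fit(X, y)
print(ovo.predict(X[:3]))
print(ovr.predict(X[:3]))
```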

  10. Topics for next post Linear regression, logistic regression, Naive Bayes. Stay tuned!
