
Lecture 13: SVM Again


Presentation Transcript


  1. Lecture 13: SVM Again. Machine Learning, Queens College

  2. Today • Completion of Support Vector Machines • Project Description and Topics

  3. Support Vectors. Support Vectors are those input points (vectors) closest to the decision boundary: 1. they are vectors; 2. they “support” the decision hyperplane.

  4. Support Vectors. Define this as a decision problem. The decision hyperplane: no fancy math, just the equation of a hyperplane.
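
A standard form of the hyperplane equation the slide refers to (notation assumed here: weight vector w, bias b, labels y_i in {-1, +1}):

w^\top x + b = 0, \qquad \hat{y}(x) = \operatorname{sign}(w^\top x + b)

Points with w^\top x + b > 0 fall on one side of the boundary and points with w^\top x + b < 0 on the other.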

  5. Support Vectors The decision hyperplane: Scale invariance

  6. Support Vectors. The decision hyperplane: scale invariance. This scaling does not change the decision hyperplane or the support vector hyperplanes, but it lets us eliminate a variable from the optimization.
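
Written out (same assumed notation), the scale invariance is that for any \kappa > 0 the equation (\kappa w)^\top x + \kappa b = 0 defines exactly the same hyperplane as w^\top x + b = 0. We use this freedom to fix the scale so that the points closest to the boundary (the support vectors) satisfy y_i (w^\top x_i + b) = 1; the two support-vector hyperplanes are then w^\top x + b = +1 and w^\top x + b = -1.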

  7. What are we optimizing? • We will represent the size of the margin in terms of w. • This will allow us to simultaneously • Identify a decision boundary • Maximize the margin
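
A standard way to make this concrete (same assumed notation): the distance from a point x_i to the hyperplane is |w^\top x_i + b| / \|w\|, so with the canonical scaling above the margin is 1/\|w\| on each side, 2/\|w\| in total. Maximizing the margin is therefore equivalent to

\min_{w,b} \; \tfrac{1}{2}\|w\|^2 \quad \text{subject to} \quad y_i (w^\top x_i + b) \ge 1 \;\; \text{for all } i,

which identifies the decision boundary and maximizes the margin in a single optimization.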

  8. Max Margin Loss Function. Since this is a constrained optimization, use Lagrange multipliers and optimize the “primal”.
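
The primal Lagrangian the slide refers to, in its standard form (multipliers \alpha_i \ge 0 assumed):

L(w, b, \alpha) = \tfrac{1}{2}\|w\|^2 - \sum_i \alpha_i \left[ y_i (w^\top x_i + b) - 1 \right]

Setting the derivatives with respect to w and b to zero gives w = \sum_i \alpha_i y_i x_i and \sum_i \alpha_i y_i = 0, which leads to the dual problem

\max_\alpha \; \sum_i \alpha_i - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i^\top x_j \quad \text{subject to} \quad \alpha_i \ge 0, \;\; \sum_i \alpha_i y_i = 0.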

  9. Visualization of Support Vectors

  10. Interpretability of SVM parameters • What else can we tell from alphas? • If alpha is large, then the associated data point is quite important. • It’s either an outlier, or incredibly important. • But this only gives us the best solution for linearly separable data sets…
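
As a minimal illustration of inspecting the alphas (assumed: Python with scikit-learn, not part of the lecture), SVC exposes y_i * alpha_i for each support vector in dual_coef_, so large magnitudes flag exactly the influential points the slide describes.

import numpy as np
from sklearn.svm import SVC

# Toy separable 2-D data, purely for illustration.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + 2.0, rng.randn(20, 2) - 2.0])
y = np.array([1] * 20 + [-1] * 20)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# dual_coef_[0] holds y_i * alpha_i for each support vector;
# support_ holds the indices of those points in X.
for idx, coef in zip(clf.support_, clf.dual_coef_[0]):
    print(f"point {idx}: y * alpha = {coef:+.3f}")
# Every other training point has alpha = 0 and does not affect the boundary.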

  11. Basis of Kernel Methods. The decision process doesn’t depend on the dimensionality of the data, so we can map the data to a higher-dimensional space. Note: data points only appear within a dot product. The error is based on the dot product of data points, not the data points themselves.

  12. Basis of Kernel Methods • Data points only appear within a dot product, so we can map to another space by replacing that dot product. • The error is based on the dot product of data points, not the data points themselves.
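
Written out in a standard form, the replacement is: wherever the dot product x_i^\top x_j appears, substitute a kernel K(x_i, x_j) = \phi(x_i)^\top \phi(x_j). The dual becomes

\max_\alpha \; \sum_i \alpha_i - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, K(x_i, x_j)

and the decision function becomes f(x) = \operatorname{sign}\big( \sum_i \alpha_i y_i K(x_i, x) + b \big). Common examples (assumed here, not from the slide) are the polynomial kernel (x_i^\top x_j + c)^d and the RBF kernel \exp(-\gamma \|x_i - x_j\|^2); the mapping \phi is never computed explicitly.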

  13. Learning Theory basis of SVMs • Theoretical bounds on testing error. • The upper bound on testing error doesn’t depend on the dimensionality of the space. • The bound is minimized by maximizing the margin, γ, associated with the decision boundary.

  14. Why we like SVMs • They work • Good generalization • Easily interpreted. • Decision boundary is based on the data in the form of the support vectors. • Not so in multilayer perceptron networks • Principled bounds on testing error from Learning Theory (VC dimension)

  15. SVM vs. MLP • SVMs have many fewer parameters • SVM: maybe just a kernel parameter • MLP: the number and arrangement of nodes plus the learning rate (eta) • SVM: convex optimization task • MLP: the likelihood is non-convex, so training can get stuck in local minima

  16. Soft margin classification • There can be outliers on the wrong side of the decision boundary, or outliers that force a small margin. • Solution: introduce a penalty term to the constraint function
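
The penalized problem the slide refers to, in its standard soft-margin form (slack variables \xi_i and penalty weight C assumed):

\min_{w,b,\xi} \; \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i \quad \text{subject to} \quad y_i (w^\top x_i + b) \ge 1 - \xi_i, \;\; \xi_i \ge 0

Each \xi_i measures how far point i is allowed inside the margin (or past the boundary), and C trades margin width against those violations.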

  17. Soft-Margin Dual. Still Quadratic Programming!
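
For reference, the soft-margin dual in its standard form:

\max_\alpha \; \sum_i \alpha_i - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i^\top x_j \quad \text{subject to} \quad 0 \le \alpha_i \le C, \;\; \sum_i \alpha_i y_i = 0

The only change from the hard-margin dual is the upper bound \alpha_i \le C, so the problem is still a quadratic program.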

  18. Soft margin example: hinge loss. Points are allowed within the margin, but a cost is introduced.
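
A standard way to write the hinge loss the slide names (same assumed notation): each point contributes \max(0,\, 1 - y_i (w^\top x_i + b)), which is zero for points beyond the margin on the correct side and grows linearly as a point moves into the margin or across the boundary. The soft-margin problem above is equivalent to minimizing

\sum_i \max\!\big(0,\, 1 - y_i (w^\top x_i + b)\big) + \lambda \|w\|^2, \qquad \lambda = \tfrac{1}{2C}.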

  19. Probabilities from SVMs • Support Vector Machines are discriminant functions • Discriminant functions: f(x) = c • Discriminative models: f(x) = argmax_c p(c|x) • Generative models: f(x) = argmax_c p(x|c) p(c) / p(x) • No (principled) probabilities from SVMs • SVMs are not based on probability distribution functions of class instances.

  20. Efficiency of SVMs • Not especially fast. • Training: O(n^3), the cost of the quadratic programming step • Evaluation: O(n), need to evaluate against each support vector (potentially n)
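
A minimal sketch of why evaluation scales with the number of support vectors (assumed: Python with scikit-learn, not part of the lecture): prediction requires one kernel evaluation against every support vector.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# Toy non-linear problem, purely for illustration.
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)

clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)

x_new = np.array([[0.5, -0.5]])
# Manual evaluation: one kernel value per support vector, weighted by y_i * alpha_i.
K = rbf_kernel(x_new, clf.support_vectors_, gamma=1.0)   # shape (1, n_SV)
manual = K @ clf.dual_coef_[0] + clf.intercept_[0]
assert np.allclose(manual, clf.decision_function(x_new))
print(f"{len(clf.support_vectors_)} kernel evaluations per prediction")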

  21. Research Projects • Run a machine learning experiment • Identify a problem/task. • Find appropriate data • Implement one or more ML algorithms • Evaluate the performance. • Write a report of the experiment • 4 pages including references • Abstract • One paragraph describing the experiment • Introduction • Describe the problem/task • Data • Describe the data set, features extracted, cleaning processes • Method • Describe the algorithm/approach • Results • Present and discuss results • Conclusion • Summarize the experiment and results • Teams of two people are acceptable. • Requires a report from each participant (written independently) describing who was responsible for the components of the work.

  22. Sample Problems/Tasks • Vision/Graphics • Object Classification • Facial Recognition • Fingerprint Identification • Handwriting recognition • Non-English languages? • Language • Topic classification • Sentiment analysis • Speech recognition • Speaker identification • Punctuation restoration • Semantic Segmentation • Recognition of Emotion, Sarcasm, etc. • SMS Text normalization • Chat participant ID • Twitter classification • Twitter threading

  23. Sample Problems/Tasks • Games • Chess • Checkers • Poker • Blackjack • Go • Recommenders (Collaborative Filtering) • Netflix • Courses • Jokes • Books • Facebook • Video Classification • Motion classification • Segmentation

  24. ML Topics to explore in the project • L1-regularization • Non-linear kernels • Loopy belief propagation • Non-parametric Belief propagation • Soft-decision trees • Analysis of Neural Network Hidden Layers • Structured Learning • Generalized Expectation • One-class learning • Evaluation Measures • Cluster Evaluation • Semi-supervised evaluation • Skewed Data • Graph Embedding • Dimensionality Reduction • Feature Selection • Graphical Model Construction • Non-parametric Bayesian Methods • Latent Dirichlet Allocation • Deep Learning: Boltzmann Machines • SVM Regression

  25. Data • UCI Machine Learning Repository • http://archive.ics.uci.edu/ml/ • Ask Me • Collect some of your own

  26. Next Time: Kernel Methods
