
More Machine Learning



  1. More Machine Learning. Perceptron. Support Vector Machines and Margins. The Kernel Trick. K-Nearest Neighbor.

  2. Recall: Key Components of Intelligent Agents. Representation Language: graphs, Bayes nets, linear functions. Inference Mechanism: A*, variable elimination, Gibbs sampling. Learning Mechanism: Maximum Likelihood, Laplace Smoothing, gradient descent, and many more: perceptron, k-Nearest Neighbor, … Evaluation Metric: likelihood, quadratic loss (a.k.a. squared error), regularized loss, and many more: margins, 0-1 loss, conditional likelihood, precision/recall, …

  3. Linear Separability. [Figure: data points with two labels, blue and red, plotted in the X1–X2 plane and separated by a line labeled “Linear Separator”.] Data has two features: X1 and X2. Two possible labels: blue and red.

  4. Linear Classification. Suppose there are N input variables, X1, …, XN (all real numbers). A linear classifier is a function of the form: f(X1, …, XN) = 1 if w0 + w1·X1 + … + wN·XN ≥ 0, and 0 otherwise. The wi variables are called weights or parameters. Each one is a real number. The set of all functions of this form (one function for each choice of weights w0 through wN) is called the Hypothesis Class for linear classification.
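A minimal sketch of such a classifier in Python (NumPy is assumed; the threshold-at-zero convention matches the 0/1 labels used in the perceptron slides below):

```python
import numpy as np

def linear_classify(w, x):
    """Predict 1 if w0 + w1*X1 + ... + wN*XN >= 0, else 0.

    w: array of N+1 weights, where w[0] is the bias weight w0.
    x: array of N input values X1, ..., XN.
    """
    score = w[0] + np.dot(w[1:], x)
    return 1 if score >= 0 else 0
```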

  5. Hypotheses. [Figure: several candidate linear separators drawn through the same data in the X1–X2 plane.]

  6. Quiz: Making predictions. [Figure: a linear separator in the X1–X2 plane, with three unlabeled points A, B, and C.] A: Which label? B: Which label? C: Which label?

  7. Answer: Making predictions. [Figure: the same plot, with A, B, and C each assigned the label of the side of the separator on which they fall.]

  8. The Perceptron Algorithm. Input: training data (Xi1, …, XiN, Yi), where each Yi is either 0 or 1. • Set each wj to a random initial guess. • For each training example i, for each weight wj: wj ← wj + α (Yi – f(Xi1, …, XiN)) · Xij (taking Xi0 = 1, so the bias weight w0 is updated too). Output: weights wj. Here α is the learning rate, and Yi – f(Xi1, …, XiN) is the error.
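A runnable sketch of this loop (NumPy assumed; the number of passes and the value of α are illustrative choices, since the slide does not specify a stopping rule):

```python
import numpy as np

def train_perceptron(X, Y, alpha=0.1, epochs=100):
    """Perceptron training as described on the slide.

    X: (M, N) array of training inputs.
    Y: length-M array of 0/1 labels.
    Returns N+1 weights; w[0] is the bias weight w0.
    """
    M, N = X.shape
    w = np.random.randn(N + 1) * 0.01       # random initial guess
    Xb = np.hstack([np.ones((M, 1)), X])    # prepend Xi0 = 1 for the bias
    for _ in range(epochs):
        for i in range(M):                  # one example at a time (online)
            pred = 1 if np.dot(w, Xb[i]) >= 0 else 0
            w += alpha * (Y[i] - pred) * Xb[i]  # no change when the prediction is correct
    return w
```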

  9. Properties of Perceptron. Convergence: if the data set is linearly separable, then the Perceptron algorithm converges to a linear separator (amazingly enough). (If there is no linear separator, then Perceptron will keep moving the line around forever.) Online: unlike batch gradient descent, MLE, etc., the Perceptron algorithm can train by looking at one example at a time, rather than processing all of the data in a batch. This is called an online training algorithm.

  10. Quiz. [Figure: three linear separators labeled a, b, and c, each of which separates the data in the X1–X2 plane.] Which classifier would you prefer?

  11. Answer. [Figure: the same three separators a, b, and c.] It’s an opinion question, so any answer is acceptable. But machine learning people prefer b. Intuitively, b has the best chance of classifying a new data point correctly. a and c are overfitting.

  12. Margin. [Figure: separator b with its margin marked.] Margin: the distance between the linear separator and the nearest data point.

  13. Maximum Margin Learning A very popular approach to combating overfitting is to select hypotheses with large margins. This is called “maximum margin” learning. Two very popular techniques: • Support Vector Machines • Boosting These techniques are beyond the scope of this class.
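For the curious, here is a minimal illustration of maximum-margin learning using scikit-learn (an assumed dependency; the class itself treats SVMs as out of scope). LinearSVC fits a linear separator that trades margin size off against training errors:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Tiny linearly separable data set: two points per class.
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [2.0, 3.0]])
Y = np.array([0, 0, 1, 1])

clf = LinearSVC(C=1.0).fit(X, Y)   # larger C = fewer margin violations tolerated
print(clf.coef_, clf.intercept_)   # weights of the learned separator
```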

  14. Quiz: Margins. [Figure: the three separators a, b, and c again.] Which classifier has the largest margin?

  15. Answer: Margins. [Figure: the three separators a, b, and c.] Answer: b is farthest from the data, so it has the largest margin.

  16. Non-linear (or non-linearly-separable) data. [Figure: two classes arranged in the X1–X2 plane so that no straight line separates them.] No line can separate these two classes.

  17. The “Kernel Trick”. The Kernel Trick is to add a new input variable that is computed from the existing ones. Let X3 = X1² + X2². Now there’s a linear separator in the (X1, X2, X3) space! In the original feature space, the linear separator looks like a circle.

  18. The “Kernel Trick” SVMs use automatic methods (called “kernels”) to add new features to a learning problem. We won’t go into these in detail. The important lesson: it’s possible to apply linear classifiers to non-linearly-separable data, by extending the feature space.
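A sketch of the explicit feature map from the previous slide (NumPy assumed). Real kernels avoid building the new features explicitly, but the effect is the same:

```python
import numpy as np

def add_circle_feature(X):
    """Map each row (X1, X2) to (X1, X2, X3) with X3 = X1**2 + X2**2.

    A circle X1**2 + X2**2 = r**2 in the original space becomes the
    plane X3 = r**2 in the extended space, so a linear separator on
    (X1, X2, X3) can separate circularly arranged classes.
    """
    x3 = X[:, 0] ** 2 + X[:, 1] ** 2
    return np.column_stack([X, x3])
```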

  19. Parametric vs. Nonparametric models. Almost all models for machine learning have “parameters” or “weights” that need to be learned. A model is parametric if the number of parameters is fixed in advance, independent of the amount of training data; it is nonparametric if the number of parameters can grow with the training set.

  20. Parametric Model Examples. Linear regression: each training example has N inputs, X1, …, XN. It doesn’t matter how many examples are in the training data: the regression model will always have N+1 weights. This number is independent of the number of training examples (M). So linear regression is parametric.

  21. Parametric Model Examples. Naïve Bayes (with fixed vocabulary): each training example has a 1 or 0 for every word in the vocabulary. No matter how many training examples there are, we only need parameters for the words in the vocabulary, and the vocabulary size is fixed. So this number is independent of the number of training examples (M), and Naïve Bayes (with a fixed vocabulary size) is parametric.

  22. Quiz: Nonparametric Model: k-Nearest Neighbor Classifier. [Figure: labeled training points plus three blank points a, b, and c.] Color each blank point with the color of its closest neighbor.

  23. Answer: Nonparametric Model: k-Nearest Neighbor Classifier. [Figure: the same plot, with a, b, and c colored to match their closest neighbors.] Color each blank point with the color of its closest neighbor.

  24. Quiz: k-Nearest Neighbor, k=3. [Figure: the same blank points a, b, and c.] Color each blank point with the majority color of its three closest neighbors.

  25. Answer: k-Nearest Neighbor, k=3. [Figure: a, b, and c colored by the majority vote of their three closest neighbors.] Color each blank point with the majority color of its three closest neighbors.

  26. The k-Nearest Neighbor Classifier Learning algorithm: memorize the X and Y components of each training example. Inference algorithm: For each new point X, find the k nearest points from the training data, and select the most common Y value from those training data points. Use that Y value as the prediction.
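A direct translation of both steps into Python (NumPy assumed; this is the brute-force version, with none of the indexing tricks mentioned on the next slide):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, Y_train, x, k=3):
    """Predict the majority label among the k training points nearest to x.

    "Learning" is just storing X_train and Y_train; all of the work
    happens here, at inference time.
    """
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]              # indices of the k closest points
    return Counter(Y_train[nearest]).most_common(1)[0][0]
```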

  27. Properties of k-NN. Convergence: as the number of training examples grows, the expected accuracy on test data points approaches 100% (assuming the classes do not overlap, so that a perfect classifier exists). Smoothing: higher values of k can be used to combat overfitting. Typically, only odd values of k are used in binary classification, to ensure that there are no ties during prediction. Complexity: training k-NN is very simple: just memorize each training data point. However, finding the nearest neighbors at test time can be an expensive operation. All sorts of hashing and indexing techniques have been invented to improve the time complexity of inference, but this remains an active area of study.

  28. Quiz: Learning model types

  29. Answers: Learning model types

  30. Quiz: Learning algorithm types

  31. Answers: Learning algorithm types

  32. Quiz: Preventing overfitting

  33. Answers: Preventing overfitting
