
CS 188: Artificial Intelligence Spring 2007



Presentation Transcript


  1. CS 188: Artificial Intelligence, Spring 2007 • Lecture 19: Perceptrons • 4/2/2007 • Srini Narayanan – ICSI and UC Berkeley

  2. Topic • Last time: naïve Bayes classification • Today: Perceptrons • Mistake-driven learning • Data separation, margins, and convergence

  3. Naïve Bayes: Recap • Bayes rule lets us do diagnostic queries with causal probabilities • The naïve Bayes assumption makes all effects independent given the cause • We can build classifiers out of a naïve Bayes model using training data • Smoothing estimates is important in real systems

  4. Errors, and What to Do • Examples of errors Dear GlobalSCAPE Customer, GlobalSCAPE has partnered with ScanSoft to offer you the latest version of OmniPage Pro, for just $99.99* - the regular list price is $499! The most common question we've received about this offer is - Is this genuine? We would like to assure you that this offer is authorized by ScanSoft, is genuine and valid. You can get the . . . . . . To receive your $30 Amazon.com promotional certificate, click through to http://www.amazon.com/apparel and see the prominent link for the $30 offer. All details are there. We hope you enjoyed receiving this message. However, if you'd rather not receive future e-mails announcing new store launches, please click . . .

  5. What to Do About Errors? • Need more features – words aren’t enough! • Have you emailed the sender before? • Have 1K other people just gotten the same email? • Is the sending information consistent? • Is the email in ALL CAPS? • Do inline URLs point where they say they point? • Does the email address you by (your) name? • Can add these information sources as new variables in the NB model • Use classifiers that let you add arbitrary features more easily

  6. Features • A feature is a function which signals a property of the input • Examples: • ALL_CAPS: value is 1 iff email is in all caps • HAS_URL: value is 1 iff email has a URL • NUM_URLS: number of URLs in email • VERY_LONG: 1 iff email is longer than 1K • SUSPICIOUS_SENDER: 1 iff reply-to domain doesn’t match originating server • Features are anything you can write code to evaluate on an input • Some are cheap to compute, some very expensive • Can even be the output of another classifier • Domain knowledge goes here! • In naïve Bayes, how did we encode features?
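As a rough Python sketch (not from the slides; the function names and the URL regex are my own), features like ALL_CAPS and NUM_URLS are just small functions evaluated on the raw email text:

    import re

    def all_caps(email_text):
        # 1 iff every alphabetic character in the email is upper-case
        letters = [c for c in email_text if c.isalpha()]
        return 1 if letters and all(c.isupper() for c in letters) else 0

    def num_urls(email_text):
        # count of http/https URLs in the body (a deliberately crude regex)
        return len(re.findall(r"https?://\S+", email_text))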

  7. Feature Extractors • A feature extractor maps inputs to feature vectors • Many classifiers take feature vectors as inputs • Feature vectors usually very sparse, use sparse encodings (i.e. only represent non-zero keys) Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. … W=dear : 1 W=sir : 1 W=this : 2 ... W=wish : 0 ... MISSPELLED : 2 NAMELESS : 1 ALL_CAPS : 0 NUM_URLS : 0 ...
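A minimal sketch of a feature extractor under these assumptions: the feature vector is a Python dict keyed by feature name, and only non-zero entries are stored (the feature names mirror the slide; the rest is illustrative):

    def extract_features(email_text):
        # sparse feature vector: only non-zero keys are represented
        f = {}
        f["BIAS"] = 1.0                              # always-on bias feature
        for word in email_text.lower().split():
            key = "W=" + word
            f[key] = f.get(key, 0.0) + 1.0           # word-count features
        if email_text.isupper():
            f["ALL_CAPS"] = 1.0                      # indicator feature
        return f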

  8. Generative vs. Discriminative • Generative classifiers: • E.g. naïve Bayes • We build a causal model of the variables • We then query that model for causes, given evidence • Discriminative classifiers: • E.g. perceptron (next) • No causal model, no Bayes rule, often no probabilities • Try to predict output directly • Loosely: mistake driven rather than model driven

  9. Some (Vague) Biology • Very loose inspiration: human neurons

  10. Perceptrons abstract from the details of real neurons • Conductivity delays are neglected • An output signal is either discrete (e.g., 0 or 1) or it is a real-valued number (e.g., between 0 and 1) • Net input is calculated as the weighted sum of the input signals • Net input is transformed into an output signal via a simple function (e.g., a threshold function)

  11. Different Activation Functions • Threshold Activation Function (step) • Piecewise Linear Activation Function • Sigmoid Activation Function • Gaussian Activation Function • Radial Basis Function • Bias unit: X0 = 1

  12. Types of Activation Functions

  13. The Perceptron • (Diagram: feature inputs feeding a linear threshold unit (LTU) or a sigmoid output unit.)

  14. The Binary Perceptron • Inputs are features • Each feature has a weight • Sum is the activation: activation(x) = w1·f1 + w2·f2 + w3·f3 + … • If the activation is: • Positive, output 1 • Negative, output 0
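A minimal sketch of this decision rule in Python, assuming sparse feature vectors and weights are both dicts keyed by feature name (the helper names are my own):

    def activation(weights, features):
        # sparse dot product: sum w_i * f_i over the features that are present
        return sum(weights.get(f, 0.0) * v for f, v in features.items())

    def classify_binary(weights, features):
        # output 1 if the activation is positive, otherwise 0
        return 1 if activation(weights, features) > 0 else 0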

  15. Example: Spam • Imagine 4 features: • free (number of occurrences of “free”) • money (occurrences of “money”) • the (occurrences of “the”) • BIAS (always has value 1) • The input “free money” gives the feature vector BIAS : 1, free : 1, money : 1, the : 0, ... • The weight vector is BIAS : -3, free : 4, money : 2, the : 0, ...
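As a quick check of the decision rule, the activation on “free money” with those weights is 1·(-3) + 1·4 + 1·2 + 0·0 = 3, which is positive, so the email is classified as spam.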

  16. Binary Decision Rule • In the space of feature vectors • Any weight vector is a hyperplane • One side will be class 1 • Other will be class 0 • (Figure: the (free, money) plane with the weights BIAS : -3, free : 4, money : 2, the : 0; the line where the activation is zero separates the 1 = SPAM region from the 0 = HAM region.)
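With those weights the boundary itself is the line where the activation equals zero: -3 + 4·free + 2·money = 0.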

  17. Linearly Separable Patterns • The perceptron is an architecture that can solve this type of decision-boundary problem: an "on" response in the output node represents one class, and an "off" response represents the other.

  18. The Multiclass Perceptron • If we have more than two classes: • Have a weight vector for each class • Calculate an activation for each class • Highest activation wins
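A sketch of the multiclass decision rule in Python, assuming the per-class weight vectors are stored in a dict mapping each class label to a weight dict (names are my own):

    def classify_multiclass(weight_vectors, features):
        # compute an activation for each class and return the highest-scoring one
        def score(label):
            w = weight_vectors[label]
            return sum(w.get(f, 0.0) * v for f, v in features.items())
        return max(weight_vectors, key=score)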

  19. Example • The input “win the vote” gives the feature vector BIAS : 1, win : 1, game : 0, vote : 1, the : 1, ... • The three classes have weight vectors: • Class 1: BIAS : -2, win : 4, game : 4, vote : 0, the : 0, ... • Class 2: BIAS : 1, win : 2, game : 0, vote : 4, the : 0, ... • Class 3: BIAS : 2, win : 0, game : 2, vote : 0, the : 0, ...
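Working out the activations: class 1 scores -2 + 4 + 0 + 0 + 0 = 2, class 2 scores 1 + 2 + 0 + 4 + 0 = 7, and class 3 scores 2 + 0 + 0 + 0 + 0 = 2, so class 2 has the highest activation and wins.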

  20. The Perceptron Update Rule • Start with zero weights • Pick up training instances one by one • Try to classify • If correct, no change! • If wrong: lower score of wrong answer, raise score of right answer
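A sketch of that update in Python, again with dict-based weight and feature vectors (function names are my own; only the weights of the guessed and true classes change, and only on a mistake):

    def perceptron_update(weight_vectors, features, true_label):
        # classify with the current weights
        def score(label):
            w = weight_vectors[label]
            return sum(w.get(f, 0.0) * v for f, v in features.items())
        guess = max(weight_vectors, key=score)
        if guess != true_label:
            # wrong: raise the right answer's score, lower the wrong answer's
            for f, v in features.items():
                weight_vectors[true_label][f] = weight_vectors[true_label].get(f, 0.0) + v
                weight_vectors[guess][f] = weight_vectors[guess].get(f, 0.0) - v
        return guess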

  21. Example • Training instances: “win the vote”, “win the election”, “win the game” • (The slide shows three blank per-class weight vectors over BIAS, win, game, vote, the, ..., filled in step by step by applying the update rule to each instance.)

  22. Mistake-Driven Classification • In naïve Bayes, parameters: • From data statistics • Have a causal interpretation • One pass through the data • For the perceptron, parameters: • From reactions to mistakes • Have a discriminative interpretation • Go through the data until held-out accuracy maxes out • (Data is split into Training Data, Held-Out Data, and Test Data.)
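A rough sketch of that training loop, reusing the hypothetical perceptron_update and classify_multiclass helpers from the earlier sketches; here “maxing out” is simplified to keeping the weights from the pass with the best held-out accuracy:

    def train_perceptron(train_data, heldout_data, labels, max_passes=20):
        # train_data / heldout_data: lists of (features, label) pairs
        weights = {y: {} for y in labels}              # start with zero weights
        best_acc, best_weights = -1.0, None
        for _ in range(max_passes):
            for features, label in train_data:
                perceptron_update(weights, features, label)
            correct = sum(classify_multiclass(weights, f) == y for f, y in heldout_data)
            acc = correct / len(heldout_data)
            if acc > best_acc:                         # remember the best pass so far
                best_acc = acc
                best_weights = {y: dict(w) for y, w in weights.items()}
        return best_weights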

  23. Properties of Perceptrons • Separability: some parameters get the training set perfectly correct • Convergence: if the training data are separable, the perceptron will eventually converge (binary case) • Mistake Bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability • (Figures: a separable point set and a non-separable one.)
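The bound itself is not spelled out on the slide; the standard statement is that if every example has norm at most R and the data are separable with margin γ, then the perceptron makes at most

\[
\text{mistakes} \;\le\; \left(\frac{R}{\gamma}\right)^{2}
\]

mistakes, regardless of the order in which the examples are presented.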

  24. Issues with Perceptrons • Overtraining: test / held-out accuracy usually rises, then falls • Overtraining isn’t quite as bad as overfitting, but is similar • Regularization: if the data isn’t separable, weights might thrash around • Averaging weight vectors over time can help (averaged perceptron) • Mediocre generalization: finds a “barely” separating solution
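A minimal sketch of the averaged perceptron under the same dict-based assumptions as before (it reuses the hypothetical perceptron_update helper; real implementations average lazily rather than summing every weight after every example, which this naive version does for clarity):

    def train_averaged_perceptron(train_data, labels, passes=10):
        weights = {y: {} for y in labels}
        totals = {y: {} for y in labels}
        steps = 0
        for _ in range(passes):
            for features, label in train_data:
                perceptron_update(weights, features, label)
                steps += 1
                # accumulate the current weights into the running totals
                for y in labels:
                    for f, v in weights[y].items():
                        totals[y][f] = totals[y].get(f, 0.0) + v
        # return the time-averaged weight vectors
        return {y: {f: v / steps for f, v in t.items()} for y, t in totals.items()}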

  25. Linear Separators • Binary classification can be viewed as the task of separating classes in feature space: the separator is the hyperplane w · x = 0, with w · x > 0 on one side and w · x < 0 on the other

  26. What if the data is not linearly separable? • Multi-layer perceptrons • Also called feed-forward neural networks • Max-margin methods and SVMs

  27. The Input Pattern Space

  28. The XOR Decision Planes
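(XOR, which outputs 1 iff exactly one of its two inputs is 1, is the classic pattern set that no single linear separator can classify; a multi-layer perceptron handles it by combining two decision planes, one per hidden unit.)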

  29. Multi-layer Perceptron

  30. Pattern Separation and NN architecture

  31. Non-Linear Separators • Data that is linearly separable (with some noise) works out great: • But what are we going to do if the dataset is just too hard? • How about… mapping data to a higher-dimensional space: • (Figures: points on a 1-D line x that are separable; a harder 1-D dataset that is not; the same data mapped into the (x, x²) plane, where it becomes separable.) • This and the next few slides are adapted from Ray Mooney, UT

  32. Non-Linear Separators • General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x→φ(x)
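A toy illustration of the idea in Python (my own example, not from the slides): 1-D points labeled by whether they fall in [-1, 1] are not separable on the line, but become separable after the map x → (x, x²):

    def phi(x):
        # map a 1-D point to the 2-D feature vector (x, x^2)
        return (x, x * x)

    # toy 1-D dataset: label 1 iff the point lies inside [-1, 1]
    points = [-3.0, -2.0, -0.5, 0.0, 0.5, 2.0, 3.0]
    labels = [1 if -1.0 <= x <= 1.0 else 0 for x in points]

    # in the mapped space the horizontal line x2 = 1 separates the two classes
    for x, y in zip(points, labels):
        x1, x2 = phi(x)
        assert (x2 < 1.0) == (y == 1)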

  33. Classification Margin • Distance from example xi to the separator is r • Examples closest to the hyperplane are support vectors • Margin of the separator is the distance between the support vectors
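The slide does not write the distance out; the standard formula for a separator w · x + b = 0 is

\[
r \;=\; \frac{\lvert \mathbf{w} \cdot \mathbf{x}_i + b \rvert}{\lVert \mathbf{w} \rVert}.
\]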

  34. Support Vector Machines • Maximizing the margin: good according to intuition and theory • Only support vectors matter; other training examples are ignorable • Support vector machines (SVMs) find the separator with the maximum margin • Mathematically, this gives a quadratic program that calculates the alphas (the weights on the data points, which are zero except for the support vectors) • Basically, SVMs are perceptrons with smarter update counts!
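For reference (standard form, not given on the slide), the hard-margin quadratic program is

\[
\min_{\mathbf{w},\,b}\; \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2}
\quad \text{subject to} \quad
y_i \,(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1 \;\;\text{for all } i,
\]

whose margin is 2 / ||w||; the alphas are the Lagrange multipliers of these constraints and are non-zero only for the support vectors.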

  35. Summary • Naïve Bayes: • Build classifiers using a model of the training data • Smoothing estimates is important in real systems • Gives probabilistic class estimates • Perceptrons: • Make fewer assumptions about the data • Mistake-driven learning • Multiple passes through the data

  36. Next • Unsupervised Learning • Clustering using k-means • Other unsupervised techniques.
