
Machine Learning


  1. Machine Learning, Lecture 4: Greedy Local Search (Hill Climbing). Adapted by Doug Downey from Bryan Pardo. Fall 2007, Machine Learning EECS 349

  2. Local search algorithms • We’ve discussed ways to select a hypothesis h that performs well on training examples, e.g. • Candidate-Elimination • Decision Trees • Another technique that is quite general: • Start with some (perhaps random) hypothesis h • Incrementally improve h • Known as local search

  3. Example: n-queens • Put n queens on an n × n board with no two queens on the same row, column, or diagonal

  4. Hill-climbing search • "Like climbing Everest in thick fog with amnesia"

    h = initialState
    loop:
        h' = highest-valued Successor(h)
        if Value(h) >= Value(h')
            return h
        else
            h = h'
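The pseudocode above can be turned into a short runnable sketch. The toy value and successor functions below are illustrative assumptions, not from the slides:

```python
def hill_climb(initial, successors, value):
    """Greedy local search: repeatedly move to the best-valued
    successor until no successor improves on the current state."""
    h = initial
    while True:
        best = max(successors(h), key=value, default=h)
        if value(best) <= value(h):  # no improving neighbor: local maximum
            return h
        h = best

# Toy example (an assumption): maximize f(x) = -(x - 3)^2
# over the integers, stepping by +/- 1.
value = lambda x: -(x - 3) ** 2
successors = lambda x: [x - 1, x + 1]
print(hill_climb(0, successors, value))  # climbs 0 -> 1 -> 2 -> 3
```

Because the toy objective is unimodal, greedy ascent reaches the global maximum here; on multimodal objectives the same loop stops at whatever local maximum it first reaches.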

  5. Hill-climbing search • Problem: depending on initial state, can get stuck in local maxima

  6. Underfitting • Overfitting: performance on test examples is much lower than on training examples • Underfitting: performance on training examples is low • Two leading causes of underfitting: • Hypothesis space is too small/simple • Training algorithm (i.e., hypothesis search algorithm) stuck in a local maximum

  7. Hill-climbing search: 8-queens problem • v = number of pairs of queens that are attacking each other, either directly or indirectly • In the pictured state, v = 17
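The value v on this slide is easy to compute directly. In the sketch below, the board encoding (one queen per column, `board[c]` = row of the queen in column c) is an assumption, though it is the standard one for this problem:

```python
def attacking_pairs(board):
    """Count pairs of queens that attack each other (same row or
    same diagonal). board[c] = row of the queen in column c, so no
    two queens ever share a column."""
    v = 0
    n = len(board)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = board[i] == board[j]
            same_diag = abs(board[i] - board[j]) == j - i
            if same_row or same_diag:
                v += 1
    return v

# One queen per column, all in row 0: every pair shares a row.
print(attacking_pairs([0, 0, 0, 0, 0, 0, 0, 0]))  # 28 = C(8, 2)
```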

  8. Hill-climbing search: 8-queens problem • A local minimum with v = 1

  9. Simulated annealing search • Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency

    h = initialState
    T = initialTemperature
    loop:
        h' = random Successor(h)
        ΔV = Value(h') - Value(h)
        if ΔV > 0
            h = h'
        else
            h = h' with probability e^(ΔV/T)
        decrease T; if T == 0, return h
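A minimal runnable sketch of the loop above. The geometric cooling schedule, its parameters, and the toy objective are illustrative assumptions; only the acceptance rule (always accept improvements, accept worsening moves with probability e^(ΔV/T)) comes from the slide:

```python
import math
import random

def simulated_annealing(initial, random_successor, value,
                        t0=1.0, cooling=0.995, t_min=1e-4):
    """Accept every improving move; accept a worsening move with
    probability exp(dv / T), where dv < 0. The geometric cooling
    schedule is an assumption, not from the slides."""
    h, t = initial, t0
    while t > t_min:
        h2 = random_successor(h)
        dv = value(h2) - value(h)
        if dv > 0 or random.random() < math.exp(dv / t):
            h = h2
        t *= cooling
    return h

# Toy example (an assumption): maximize f(x) = -(x - 3)^2 over integers.
random.seed(0)
value = lambda x: -(x - 3) ** 2
step = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, step, value))
```

As T shrinks, e^(ΔV/T) vanishes for any worsening move, so the loop degenerates into plain hill climbing near the end of the schedule.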

  10. Properties of simulated annealing • One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1 • Widely used in VLSI layout, airline scheduling, etc.

  11. Local beam search • Keep track of k states rather than just one • Start with k randomly generated states • At each iteration, all the successors of all k states are generated • If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
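The four steps above can be sketched as follows; the toy objective and the iteration cap are illustrative assumptions:

```python
import random

def beam_search(k, random_state, successors, value, is_goal, iters=100):
    """Local beam search: keep the k best states; each round, expand
    all of them and keep the k best of the pooled successors."""
    states = [random_state() for _ in range(k)]
    for _ in range(iters):
        for s in states:
            if is_goal(s):
                return s
        pool = [s2 for s in states for s2 in successors(s)]
        states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)

# Toy example (an assumption): find x maximizing f(x) = -(x - 7)^2.
random.seed(1)
value = lambda x: -(x - 7) ** 2
print(beam_search(3, lambda: random.randint(0, 20),
                  lambda x: [x - 1, x + 1], value,
                  is_goal=lambda x: value(x) == 0))  # 7
```

Note the difference from running k independent hill climbs: the k survivors are chosen from the *pooled* successor list, so the beam concentrates on the most promising region.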

  12. Gradient Descent • Hill Climbing and Simulated Annealing are “generate and test” algorithms • The Successor function generates candidates; the Value function helps select • In some cases, we can do much better: • Define: Error(training data D, hypothesis h) • If h is represented by parameters w1, …, wn and dError/dwi is known, we can compute the error gradient, and descend in the direction that is (locally) steepest
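A minimal sketch of the update this describes. The quadratic error function, its gradient, the learning rate, and the step count are all illustrative assumptions:

```python
def gradient_descent(w, grad_error, lr=0.1, steps=100):
    """Repeatedly move the parameters opposite the error gradient,
    i.e. in the locally steepest descent direction."""
    for _ in range(steps):
        g = grad_error(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

# Toy example (an assumption): Error(w) = (w0 - 2)^2 + (w1 + 1)^2,
# so dError/dw0 = 2(w0 - 2) and dError/dw1 = 2(w1 + 1).
grad = lambda w: [2 * (w[0] - 2), 2 * (w[1] + 1)]
w = gradient_descent([0.0, 0.0], grad)
print([round(x, 3) for x in w])  # converges toward [2.0, -1.0]
```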

  13. (figure slide; no transcript text)

  14. About distance… • Clustering requires distance measures • Local methods require a measure of “locality” • Search engines require a measure of similarity • So… when are two things close to each other?

  15. Euclidean Distance • What people intuitively think of as “distance” • (figure: points plotted against dimension 1: x and dimension 2: y)

  16. Generalized Euclidean Distance
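The formula on this slide survives only as an image in the transcript; the standard n-dimensional Euclidean distance it refers to is:

```latex
d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}
```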

  17. Weighting Dimensions • Apparent clusters at one scaling of X are not so apparent at another scaling

  18. Weighted Euclidean Distance • You can, of course, compensate by weighting your dimensions…
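This slide's formula is also image-only in the transcript; the usual weighted form, with a per-dimension weight w_i, is:

```latex
d_{\mathbf{w}}(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} w_i \,(x_i - y_i)^2}
```

Choosing w_i = 1/σ_i² (inverse variance of dimension i) is one common way to undo the scaling problem from the previous slide.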

  19. More Generalization: Minkowski metric • My three favorites are special cases of this:
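The Minkowski formula appears only as an image in the transcript; the standard form, for order p ≥ 1, is:

```latex
d_p(\mathbf{x}, \mathbf{y}) = \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p}
```

The slide does not name the "three favorites", but the usual special cases are p = 1 (Manhattan distance), p = 2 (Euclidean distance), and p → ∞ (Chebyshev, i.e. maximum-coordinate, distance).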

  20. What is a “metric”? • A metric has these four qualities. • …otherwise, call it a “measure”
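The four qualities are shown only as an image in the transcript; the standard metric axioms, which this slide almost certainly lists, are:

```latex
\begin{aligned}
& d(x, y) \ge 0                     && \text{(non-negativity)} \\
& d(x, y) = 0 \iff x = y            && \text{(identity of indiscernibles)} \\
& d(x, y) = d(y, x)                 && \text{(symmetry)} \\
& d(x, z) \le d(x, y) + d(y, z)     && \text{(triangle inequality)}
\end{aligned}
```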

  21. Metric, or not? • Driving distance with 1-way streets • Categorical stuff: • Is the distance Jazz -> Blues -> Rock no less than the distance Jazz -> Rock?

  22. What about categorical variables? • Consider feature vectors for genre & vocals • Genre: {Blues, Jazz, Rock, Zydeco} • Vocals: {vocals, no vocals} • s1 = {rock, vocals} s2 = {jazz, no vocals} s3 = {rock, no vocals} • Which two songs are more similar?

  23. Binary Features + Hamming distance • Hamming distance = number of bits different between binary vectors • Encode each song over the binary features (Blues, Jazz, Rock, Zydeco, Vocals):

        Blues  Jazz  Rock  Zydeco  Vocals
    s1    0     0     1     0       1        {rock, vocals}
    s2    0     1     0     0       0        {jazz, no vocals}
    s3    0     0     1     0       0        {rock, no vocals}
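A short sketch of Hamming distance on these vectors; the one-hot encoding of the three songs is taken from the previous slide's definitions:

```python
def hamming(a, b):
    """Number of positions at which two equal-length binary vectors differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# One-hot encodings over (Blues, Jazz, Rock, Zydeco, Vocals):
s1 = [0, 0, 1, 0, 1]  # {rock, vocals}
s2 = [0, 1, 0, 0, 0]  # {jazz, no vocals}
s3 = [0, 0, 1, 0, 0]  # {rock, no vocals}
print(hamming(s1, s2), hamming(s1, s3), hamming(s2, s3))  # 3 1 2
```

Under this encoding s1 and s3 are the closest pair, answering the question on the previous slide.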

  24. Hamming Distance

  25. Other approaches… • Define your own distance: f(a, b) • (figure axes: quote, frequency)

  26. Missing data • What if, for some category, on some examples, there is no value given? • Approaches: • Discard all examples missing the category • Fill in the blanks with the mean value • Only use a category in the distance measure if both examples give a value
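The third approach listed above can be sketched as follows. Marking missing values with `None` and normalizing by the number of shared features are added assumptions, not from the slides:

```python
def distance_ignoring_missing(a, b):
    """Squared-distance variant of the third approach above: use a
    feature only when both examples supply a value (None marks a
    missing value). Normalizing by the number of shared features is
    an assumption, added so sparser pairs are not unfairly 'closer'."""
    shared = [(x, y) for x, y in zip(a, b)
              if x is not None and y is not None]
    if not shared:
        return float("inf")  # no overlapping features at all
    return sum((x - y) ** 2 for x, y in shared) / len(shared)

print(distance_ignoring_missing([1.0, None, 3.0], [2.0, 5.0, 3.0]))  # 0.5
```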

  27. Dealing with missing data
