Machine Learning in Practice Lecture 3


  1. Machine Learning in Practice Lecture 3 Carolyn Penstein Rosé Language Technologies Institute / Human-Computer Interaction Institute

  2. Plan for Today • Announcements • Assignment 2 • Quiz 1 • Weka helpful hints • Topic of the day: Input and Output • More on cross-validation • ARFF format

  3. Weka Helpful Hints

  4. Increase Heap Size

  5. Weka Helpful Hint: Documentation!! Click on More button!

  6. Output Predictions Option

  7. Output Predictions Option • Important note: because of the way Weka randomizes the data for cross-validation, the only circumstance under which you can match instance numbers to positions in your data is when you use separate train and test sets, since then the order is preserved!

  8. View Classifier Errors

  9. Input and Output

  10. Representations • Concept: the rule you want to learn • Instance: one data point from your training or testing data (row in table) • Attribute: one of the features that an instance is composed of (column in table)
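The plan for today lists the ARFF format, which encodes exactly the representation just described: each @attribute line declares one attribute (a column), and each row after @data is one instance. A minimal sketch in the style of Weka's classic weather dataset:

```
% A small ARFF file: nominal attributes list their legal values in
% braces; numeric attributes are declared with the keyword numeric.
@relation weather

@attribute outlook {sunny, overcast, rainy}
@attribute temperature numeric
@attribute humidity numeric
@attribute windy {TRUE, FALSE}
@attribute play {yes, no}

@data
sunny,85,85,FALSE,no
overcast,83,86,FALSE,yes
rainy,70,96,FALSE,yes
```

The last attribute (play) is conventionally the class to be predicted, though Weka lets you designate any attribute as the class.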

  11. Numeric versus Nominal Attributes • What kind of reasoning does your representation enable? • Numeric attributes allow instances to be ordered • Numeric attributes allow you to measure distance between instances • Sometimes numeric attributes make too fine-grained a distinction (example values: .2 .25 .28 .31 .35 .45 .47 .52 .6 .63)

  12. Numeric versus Nominal Attributes • Numeric attributes can be discretized into nominal values • But then you lose ordering and distance • Another option is applying a function that maps a range of values onto a single numeric attribute • Nominal attributes can be mapped into numbers • e.g., decide that blue=1 and green=2 • But are inferences made from those numbers valid? (example values: .2 .25 .28 .31 .35 .45 .47 .52 .6 .63)

  13. Numeric versus Nominal Attributes (continued) • Example: the values .2 .25 .28 .31 .35 .45 .47 .52 .6 .63 discretized into the coarser values .2 .3 .5 .6
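The discretization idea on the slides above can be sketched in a few lines of Python (the cut points .3 and .5 and the bin labels are made up for illustration):

```python
# Map each numeric value to a nominal bin using ordered cut points.
# Note what is lost: "low" < "mid" no longer means anything to a
# learner that treats the labels as unordered symbols.
def discretize(value, cutpoints, labels):
    """Return the label of the first bin whose upper cut point exceeds value."""
    for cut, label in zip(cutpoints, labels):
        if value < cut:
            return label
    return labels[-1]  # value is at or above the last cut point

values = [.2, .25, .28, .31, .35, .45, .47, .52, .6, .63]
nominal = [discretize(v, [.3, .5], ["low", "mid", "high"]) for v in values]
print(nominal)  # 3 lows, 4 mids, 3 highs
```

Going the other way (blue=1, green=2) is just as easy mechanically, but the resulting numbers carry an ordering and distances that the original nominal values never had, which is the validity question the slide raises.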

  14. Example! • Problem: Learn a rule that predicts how much time a person spends doing math problems each day • Attributes: You know gender, age, socio-economic status of parents, chosen field if any • How would you represent age, and why? What would you expect the target rule to look like?

  15. Styles of Learning • Classification – learn rules from labeled instances that allow you to assign new instances to a class • Association – look for relationships between features, not just rules that predict a class from an instance (more general) • Clustering – look for instances that are similar (involves comparisons of multiple features) • Numeric Prediction – predict a numeric value rather than a class (regression models)

  16. Food Web http://www.cas.psu.edu/DOCS/WEBCOURSE/WETLAND/WET1/identify.html

  17. What else would be affected if wheat were to disappear? Food Web http://www.cas.psu.edu/DOCS/WEBCOURSE/WETLAND/WET1/identify.html

  18. How would you represent this data? Food Web http://www.cas.psu.edu/DOCS/WEBCOURSE/WETLAND/WET1/identify.html

  19.–20. What would the learned rule look like? Food Web http://www.cas.psu.edu/DOCS/WEBCOURSE/WETLAND/WET1/identify.html

  21. Food Web http://www.cas.psu.edu/DOCS/WEBCOURSE/WETLAND/WET1/identify.html

  22.–23. What if you wanted a more general rule, i.e., Affects(Entity1, Entity2)? Food Web http://www.cas.psu.edu/DOCS/WEBCOURSE/WETLAND/WET1/identify.html

  24.–25. 122 rows altogether! Now let’s look at the learned rule… Food Web http://www.cas.psu.edu/DOCS/WEBCOURSE/WETLAND/WET1/identify.html

  26. Does it have to be this complicated? Food Web http://www.cas.psu.edu/DOCS/WEBCOURSE/WETLAND/WET1/identify.html

  27.–29. What would your representation for Affects(Entity1, Entity2) look like? Food Web http://www.cas.psu.edu/DOCS/WEBCOURSE/WETLAND/WET1/identify.html
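One way to realize the representation the slides are asking about is one instance per ordered pair of entities, labeled with whether the first affects the second. A minimal sketch, assuming a toy food web (the entities and who-eats-whom links below are made up for illustration; the slides' real dataset had 122 rows):

```python
# Hypothetical food-web links: each key eats the entities in its list.
eats = {
    "hawk": ["snake", "frog"],
    "snake": ["frog"],
    "frog": ["insect"],
    "insect": ["wheat"],
}

def affects(a, b, links):
    """a affects b if a eats b directly or through a chain of prey."""
    seen, stack = set(), [a]
    while stack:
        for prey in links.get(stack.pop(), []):
            if prey == b:
                return True
            if prey not in seen:
                seen.add(prey)
                stack.append(prey)
    return False

# One instance per ordered pair of distinct entities: (Entity1, Entity2, label).
entities = sorted(eats) + ["wheat"]
instances = [(a, b, affects(a, b, eats)) for a in entities for b in entities if a != b]
print(len(instances))  # 5 entities -> 20 ordered pairs
```

Whether the single generic Affects rule learned from pairs ends up simpler than per-entity rules depends on which features you attach to each pair, which is exactly the representation choice the slides pose.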

  30. More on Cross-Validation

  31. Cross Validation Exercise (trees from folds 1–5) • What is the same? • What is different? • What surprises you?
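Behind the exercise above, k-fold cross-validation partitions the instances into k disjoint test folds, and each model is trained on the other k−1 folds. A minimal sketch of the splitting step (round-robin assignment, which is one simple scheme; Weka additionally randomizes the order and, by default, stratifies by class):

```python
def kfold(indices, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    folds = [indices[i::k] for i in range(k)]  # round-robin test folds
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

for train, test in kfold(list(range(10)), 5):
    print(test)  # each instance lands in exactly one test fold
```

Note the connection to the earlier caveat about output predictions: because real implementations shuffle before splitting, fold membership does not line up with the original row order.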

  32. Compare Folds with Tree Trained on Whole Set (trees from folds 1–5)

  33. Train Versus Test • Performance on training data • Performance on testing data

  34. Which Model Do You Think Will Perform Best on the Test Set? (trees from folds 1–5)

  35. Fold 1

  36. Fold 2

  37. Fold 3

  38. Fold 4

  39. Fold 5

  40. Total Performance What do you notice?

  41. Total Performance Average Kappa = .5
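The slide reports an average kappa of .5. Kappa corrects raw agreement for the agreement expected by chance given the class distribution. A minimal sketch of Cohen's kappa computed from a confusion matrix (the matrix below is made up so that kappa comes out to .5):

```python
def kappa(confusion):
    """Cohen's kappa for a square confusion matrix (rows = actual, cols = predicted)."""
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(k)) / n  # raw accuracy
    row_totals = [sum(row) for row in confusion]
    col_totals = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    expected = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    return (observed - expected) / (1 - expected)

print(kappa([[30, 10],
             [10, 30]]))  # 0.75 observed, 0.5 expected by chance -> kappa = 0.5
```

This is also why the confusion matrix is the natural starting point for error analysis: the same table yields both the chance-corrected score and the pattern of errors.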

  42. Starting to think about Error Analyses • Step 1: Look at the confusion matrix • Where are most of the errors occurring? • What are possible explanations for systematic errors you see? • Are the instances in the confusable classes too similar to each other? If so, how can we distinguish them? • Are we paying attention to the wrong features? • Are we missing features that would let us see commonalities within classes?

  43. What went wrong on Fold 3? (trees from folds 1–5)

  44.–45. What went wrong on Fold 3? • Training set performance vs. testing set performance • Hypotheses?

  46. What’s the difference?

  47. Hypothesis: Problem with first cut

  48. Some Examples

  49. What do you conclude?

  50. What do you conclude? The problem with Fold 3 was probably just a sampling fluke: the distribution of classes differed between the training and testing sets.
