
Evaluating Hypotheses Reading: Coursepack: Learning From Examples, Section 4 (pp. 16-21)


Presentation Transcript


  1. Evaluating Hypotheses Reading: Coursepack: Learning From Examples, Section 4 (pp. 16-21)

  2. Evaluating Hypotheses • What we want: hypothesis that best predicts unseen data • Assumption: Data is “iid” (independently and identically distributed)

  3. Accuracy and Error • Accuracy = fraction of correct classifications on unseen data (test set) • Error rate = 1 − Accuracy
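A minimal sketch (not from the slides) of computing accuracy and error rate on a test set; the spam/ham labels below are made-up illustration data.

```python
# Minimal sketch: accuracy and error rate on a held-out test set.

def accuracy(predicted, actual):
    """Fraction of test examples whose predicted label matches the true label."""
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual)

# Made-up illustration labels.
y_true = ["spam", "ham", "spam", "ham", "spam"]
y_pred = ["spam", "ham", "ham",  "ham", "spam"]

acc = accuracy(y_pred, y_true)
err = 1 - acc                      # Error rate = 1 - Accuracy
print(f"accuracy = {acc:.2f}, error rate = {err:.2f}")   # accuracy = 0.80, error rate = 0.20
```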

  4. How to use available data to best measure accuracy?

  5. How to use available data to best measure accuracy? Split data into training and test sets.

  6. How to use available data to best measure accuracy? Split data into training and test sets. But how to split?

  7. How to use available data to best measure accuracy? Split data into training and test sets. But how to split? Too little training data: Too little test data:

  8. How to use available data to best measure accuracy? Split data into training and test sets. But how to split? Too little training data: you don’t learn the optimal classifier. Too little test data: the measured accuracy is an unreliable estimate.
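As a rough illustration of the train/test split, here is a hedged sketch in Python; the 80/20 split ratio and the toy data are assumptions, not prescribed by the slides.

```python
# Sketch of a random train/test split (the 80/20 ratio is an arbitrary choice).
import random

def train_test_split(examples, test_fraction=0.2, seed=0):
    """Shuffle the examples and split them into disjoint training and test sets."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

data = [(f"x{i}", i % 2) for i in range(10)]      # toy (feature, label) pairs
train, test = train_test_split(data)
print(len(train), "training examples,", len(test), "test examples")
```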

  9. One solution: “k-fold cross validation” • Each example is used both as a training instance and as a test instance. • Split the data into k disjoint parts: S_1, S_2, ..., S_k. • For i = 1 to k: select S_i to be the test set, train on the remaining data, and test on S_i to obtain accuracy A_i. • Report the average accuracy, A = (1/k) Σ_{i=1..k} A_i, as the final accuracy.
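A sketch of the k-fold procedure described above; `train_fn` and `accuracy_fn` are hypothetical placeholders for whatever learner and accuracy measure are in use, not names from the slides.

```python
# Sketch of k-fold cross-validation. `train_fn(train_set)` returns a trained
# classifier; `accuracy_fn(model, test_set)` returns its accuracy on test_set.

def k_fold_cross_validation(examples, k, train_fn, accuracy_fn):
    """Split the data into k disjoint folds; each fold serves once as the test set.
    Returns the average accuracy A = (1/k) * sum_i A_i."""
    folds = [examples[i::k] for i in range(k)]          # k disjoint parts S_1..S_k
    accuracies = []
    for i in range(k):
        test_set = folds[i]                             # S_i is the test set
        train_set = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = train_fn(train_set)                     # train on the remaining data
        accuracies.append(accuracy_fn(model, test_set)) # A_i
    return sum(accuracies) / k                          # final reported accuracy
```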

  10. Avoid “peeking” at test data when training. Example from the readings: Split the data into training and test sets. Train a model with one setting of a learning parameter (e.g., “gain” vs. “gain ratio”). Test on the test set. Repeat with the other parameter setting and test on the test set. Return the accuracy of the model with the best performance. What’s wrong with this procedure?

  11. Avoid “peeking” at test data when training. Example from the readings: Split the data into training and test sets. Train a model with one setting of a learning parameter (e.g., “gain” vs. “gain ratio”). Test on the test set. Repeat with the other parameter setting and test on the test set. Return the accuracy of the model with the best performance. Problem: the test set was used to select the best model, but model selection is part of the learning process! This risks overfitting to a particular test set. The final learned model must be evaluated on previously unseen data.

  12. Can also solve this problem by using k-fold cross-validation to select model parameters, and then evaluating the resulting model on unseen test data that was set aside prior to training.
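A sketch of this two-stage procedure, assuming the same hypothetical `train_fn` and `accuracy_fn` placeholders and reusing the `k_fold_cross_validation` helper sketched under slide 9; the “gain” / “gain ratio” candidates come from the earlier slides.

```python
# Sketch: parameter selection by cross-validation, then a final evaluation on
# held-out data. Relies on the k_fold_cross_validation sketch above.

def select_and_evaluate(train_data, holdout_test, candidate_params,
                        train_fn, accuracy_fn, k=10):
    # 1. Model selection: pick the parameter with the best cross-validated
    #    accuracy, using ONLY the training data (no peeking at holdout_test).
    best_param = max(
        candidate_params,
        key=lambda p: k_fold_cross_validation(
            train_data, k,
            lambda s: train_fn(s, p),
            accuracy_fn))
    # 2. Final evaluation: retrain with the chosen parameter on all training
    #    data and report accuracy on the untouched held-out test set.
    final_model = train_fn(train_data, best_param)
    return best_param, accuracy_fn(final_model, holdout_test)

# Example parameter candidates from the slides:
# select_and_evaluate(train, test, ["gain", "gain ratio"], train_fn, accuracy_fn)
```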

  13. Evaluating classification algorithms: “Confusion matrix” for a given class c

                                          Predicted True      Predicted False
                                          (in class c)        (not in class c)
      Actual True  (in class c)           TruePositive        FalseNegative
      Actual False (not in class c)       FalsePositive       TrueNegative

  14. Evaluating classification algorithms: “Confusion matrix” for a given class c, with error types labeled

                                          Predicted True                 Predicted False
                                          (in class c)                   (not in class c)
      Actual True  (in class c)           TruePositive                   FalseNegative (Type 2 error)
      Actual False (not in class c)       FalsePositive (Type 1 error)   TrueNegative
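A small sketch of computing the four confusion-matrix counts for a class c from parallel lists of predicted and actual labels; the example labels are made up.

```python
# Sketch: confusion-matrix counts for a single class c ("positive" = in class c).

def confusion_counts(predicted, actual, c):
    """Return (TP, FN, FP, TN) for class c given parallel label lists."""
    tp = sum(1 for p, a in zip(predicted, actual) if a == c and p == c)
    fn = sum(1 for p, a in zip(predicted, actual) if a == c and p != c)  # Type 2 error
    fp = sum(1 for p, a in zip(predicted, actual) if a != c and p == c)  # Type 1 error
    tn = sum(1 for p, a in zip(predicted, actual) if a != c and p != c)
    return tp, fn, fp, tn

y_true = ["c", "c", "c", "other", "other"]
y_pred = ["c", "other", "c", "c", "other"]
print(confusion_counts(y_pred, y_true, "c"))   # -> (2, 1, 1, 1)
```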

  15. • Precision: Fraction of true positives out of all predicted positives: Precision = TP / (TP + FP) • Recall: Fraction of true positives out of all actual positives: Recall = TP / (TP + FN)
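A sketch of precision and recall computed from confusion-matrix counts; the counts used in the example are hypothetical numbers, not results from the slides.

```python
# Sketch: precision and recall from confusion-matrix counts.

def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of predicted positives, how many are correct
    recall    = tp / (tp + fn) if (tp + fn) else 0.0  # of actual positives, how many were found
    return precision, recall

tp, fp, fn = 40, 10, 20        # hypothetical counts for class c
p, r = precision_recall(tp, fp, fn)
print(f"precision = {p:.2f}, recall = {r:.2f}")   # precision = 0.80, recall = 0.67
```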

  16. Error vs. Loss • Loss functions

  17. Regularization
