Evaluating Hypotheses Reading: Coursepack : Learning From Examples, Section 4 (pp. 16-21)


### Evaluating Hypotheses (Reading: Coursepack: Learning From Examples, Section 4, pp. 16-21)

Evaluating Hypotheses
• What we want: hypothesis that best predicts unseen data
• Assumption: Data is “iid” (independently and identically distributed)
Accuracy and Error
• Accuracy = fraction of correct classifications on unseen data (test set)
• Error rate = 1 − Accuracy
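The two definitions above can be sketched in a few lines; the function names `accuracy` and `error_rate` are my own, not from the coursepack:

```python
def accuracy(predicted, actual):
    """Fraction of correct classifications on the (unseen) test set."""
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual)

def error_rate(predicted, actual):
    """Error rate = 1 - Accuracy."""
    return 1.0 - accuracy(predicted, actual)
```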
How to use available data to best measure accuracy?

Split data into training and test sets.

But how to split?

Too little training data: the learner can’t find the optimal classifier.

Too little test data: the measured accuracy is an unreliable estimate.

One solution: “k-fold cross validation”
• Each example is used both as a training instance and as a test instance.
• Split data into k disjoint parts: S1, S2, ..., Sk.
• For i = 1 to k

Select Si to be the test set. Train on the remaining data and test on Si to obtain accuracy Ai.

• Report the average (A1 + A2 + ... + Ak) / k as the final accuracy.
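The k-fold procedure above can be sketched as follows; the `train` and `accuracy` callables are hypothetical placeholders for whatever learner and scoring function are being evaluated:

```python
def k_fold_accuracy(data, k, train, accuracy):
    """Split `data` into k disjoint parts S1..Sk; for each i, train on the
    remaining data and test on Si to get accuracy Ai; report the average."""
    # k disjoint folds (striding partitions the data exactly once)
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test_set = folds[i]
        # every example outside fold i is training data
        train_set = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = train(train_set)
        scores.append(accuracy(model, test_set))
    return sum(scores) / k
```

Each example serves as a test instance exactly once and as a training instance k-1 times.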
Avoid “peeking” at test data when training

• Split data into training and test sets.

• Train model with one learning parameter (e.g., “gain” vs “gain ratio”).

• Test on the test set.

• Repeat with the other learning parameter.

• Test on the test set.

• Return accuracy of the model with the best performance.

What’s wrong with this procedure?


Problem: You used the test set to select the best model, but model selection is part of the learning process! Risk of overfitting to a particular test set.

Need to evaluate final learned model on previously unseen data.

This problem can also be solved by using k-fold cross-validation to select model parameters, and then evaluating the resulting model on unseen test data that was set aside prior to training.
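One way to sketch the “set the test data aside before training” protocol; the function name and split fractions here are illustrative assumptions, not from the coursepack:

```python
import random

def split_three_ways(data, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle and split data into train / validation / test sets.
    The test set is set aside before any model selection; the validation
    set is what you use to compare learning parameters (e.g. "gain" vs
    "gain ratio"), and the test set is touched only once at the end."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```

Accuracy on the held-out test set is then an honest estimate, because the test examples played no role in either training or model selection.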

Evaluating classification algorithms

“Confusion matrix” for a given class c:

|                               | Predicted True (in class c)   | Predicted False (not in class c) |
|-------------------------------|-------------------------------|----------------------------------|
| Actual True (in class c)      | True Positive                 | False Negative (Type 2 error)    |
| Actual False (not in class c) | False Positive (Type 1 error) | True Negative                    |

• Recall: Fraction of true positives out of all actual positives: Recall = TP / (TP + FN)
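The confusion-matrix counts and the recall formula can be sketched as follows (the function names are illustrative, assuming binary in-class/not-in-class labels):

```python
def confusion_counts(actual, predicted):
    """Count TP, FP, FN, TN for one class c (labels are truthy = in class c)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)      # Type 1 error
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)      # Type 2 error
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)
    return tp, fp, fn, tn

def recall(actual, predicted):
    """Fraction of actual positives that were correctly classified."""
    tp, fp, fn, tn = confusion_counts(actual, predicted)
    return tp / (tp + fn)
```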
Error vs. Loss
• Loss functions