Ling 570 Day 7: Classifiers

Ling 570 Day 7: Classifiers


Outline

  • Open questions

  • One last bit on POS taggers: Evaluation

  • Classifiers!


Evaluating Taggers



Evaluation

  • How can we evaluate a POS tagger?

    • Overall error rate w.r.t. gold-standard test set.


Evaluation metric

  • A common single metric is accuracy

  • Defined as: accuracy = (# of correctly tagged tokens) / (total # of tokens)

  • Why not precision and recall?
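As an illustrative sketch (not from the slides), the accuracy definition above can be computed directly from a predicted and a gold tag sequence; the tags below are made-up examples:

```python
def tagging_accuracy(predicted, gold):
    """Fraction of tokens whose predicted tag matches the gold tag."""
    assert len(predicted) == len(gold)
    correct = sum(1 for p, g in zip(predicted, gold) if p == g)
    return correct / len(gold)

gold = ["DT", "NN", "VBD", "DT", "NN"]
pred = ["DT", "NN", "VBN", "DT", "NN"]
print(tagging_accuracy(pred, gold))  # 0.8 (4 of 5 tokens correct)
```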


Evaluation metric

  • Precision: of the tags the tagger assigned, what fraction were correct?

  • Recall: of the gold-standard tags, what fraction did the tagger recover?


Tagging Scenarios

  • But tagging doesn’t have to result in one tag

    • It might be possible that we don’t want to resolve tag ambiguities at tag time

      • E.g., save them for some other process

  • Or there may not necessarily be one answer for any given word/sequence (e.g., the gold standard contains multiple answers)

  • In either case, evaluation is not so cut and dried


Evaluation

  • How can we evaluate the POS tagger?

    • Overall error rate w.r.t. gold-standard test set.

    • Error rates on particular tags

    • Error rates on particular words

    • Tag confusions...


Error Analysis

  • Confusion matrix (contingency table)

  • Identify primary contributors to error rate

    • Noun (NN) vs. Proper Noun (NNP) vs. Adjective (JJ)

    • Preterite (VBD) vs. Participle (VBN) vs. Adjective (JJ)
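A confusion matrix of the kind described here can be sketched as a simple count over (gold, predicted) pairs; the example tags below are illustrative, not from the slides:

```python
from collections import Counter

def confusion_matrix(gold, predicted):
    """Count (gold_tag, predicted_tag) pairs; off-diagonal cells are errors."""
    return Counter(zip(gold, predicted))

gold = ["NN", "NNP", "JJ", "VBD", "VBN"]
pred = ["NN", "NN",  "NN", "VBN", "VBN"]
cm = confusion_matrix(gold, pred)

# The largest off-diagonal counts identify the primary confusions,
# e.g. NNP and JJ both mistagged as NN in this toy example.
errors = {pair: n for pair, n in cm.items() if pair[0] != pair[1]}
print(errors)
```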



Evaluation

  • Result is compared to manually coded “Gold Standard”

    • Typically accuracy reaches 96-97%

    • Compare with result for a baseline tagger (no context).

      • E.g., most common class

        • Assign the most frequent POS to each word

  • Important:

    • 100% is impossible even for human annotators.
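The most-common-class baseline mentioned above is easy to sketch: for each word, remember its most frequent tag in training, and fall back to the overall most frequent tag for unseen words. The toy corpus here is made up for illustration:

```python
from collections import Counter, defaultdict

def train_baseline(tagged_corpus):
    """Most-common-class baseline: each word's most frequent training tag."""
    counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        counts[word][tag] += 1
    overall = Counter(tag for _, tag in tagged_corpus)
    default = overall.most_common(1)[0][0]  # fallback tag for unseen words
    model = {w: c.most_common(1)[0][0] for w, c in counts.items()}
    return model, default

def tag(words, model, default):
    return [model.get(w, default) for w in words]

corpus = [("the", "DT"), ("dog", "NN"), ("runs", "VBZ"), ("the", "DT")]
model, default = train_baseline(corpus)
print(tag(["the", "dog", "cat"], model, default))  # ['DT', 'NN', 'DT']
```

Note that "cat" is unseen, so it gets the corpus-wide most frequent tag (DT here).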


Text Classification and Machine Learning



Example: Sentiment detection

Don't waste your money. This company is a joke. They were and have been backordered for quite some time, but they just kept advertising on TV all through the holiday season.I agree with another reviewer that it's too easy to make a mistake when ordering on the company webite. I thought I was doing a mock order to see the final price before actually placing the order, but no. It placed the order immedia

  • Input: Customer reviews

  • Output: Did the customer like the product?

  • Setup:

    • Get a bunch of labeled data

    • SOMEBODY HAS TO DO THIS BY HAND!

    • Want to predict the sentiment given new reviews

  • Features:

    • Information about the reviews

    • Words: “love”, “happy”, “warm”; “waste”, “joke”, “mistake”

I love my snuggie! It is warm. I wear it with the opening to the back.The length is long but keeps your feet warm when you set down. If I need to walk around I put the opening to the front. It fits great either way and keeps me very warm. Try it before you judge. You will be very happy and warm. I have gave some to family and friends and the all loved them after they tryed them.
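A minimal sketch of the word features suggested above, using the slide's cue words plus an assumed extra cue ("great"); a real feature set would be learned or much larger:

```python
# Hand-picked cue lists (from the slide, plus "great" as an assumed extra cue).
POSITIVE = {"love", "happy", "warm", "great"}
NEGATIVE = {"waste", "joke", "mistake"}

def word_features(review):
    """Count positive and negative cue words in a review."""
    tokens = review.lower().split()
    return {
        "pos_cues": sum(t.strip(".,!?") in POSITIVE for t in tokens),
        "neg_cues": sum(t.strip(".,!?") in NEGATIVE for t in tokens),
    }

print(word_features("Don't waste your money. This company is a joke."))
# → {'pos_cues': 0, 'neg_cues': 2}
```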


Example: Sentiment Analysis

  • Task:

    • Given a review, predict whether the customer liked the product

  • Outcomes:

    • Positive / negative, or 1-5 scale

    • Split by aspect (liked the comfort, hated the price)

  • What information would help us predict this?


Formalizing the task

  • Task:

    • inputs, e.g., a review

    • outcomes, i.e., the set of possible labels

  • Data set: input/output pairs

  • Split into training, dev, test

    • Experimentation cycle

      • Learn a classifier on train

      • Tune free parameters on dev data

      • Evaluate on test

    • Need to keep these separate!

[Diagram: the data set divided into Training data, Development data, and Test data]
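The train/dev/test split above can be sketched as follows; the 80/10/10 proportions and fixed seed are illustrative choices, not prescribed by the slides:

```python
import random

def split_data(pairs, dev_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle once, carve off dev and test sets; the rest is training data."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n_dev = int(len(pairs) * dev_frac)
    n_test = int(len(pairs) * test_frac)
    dev = pairs[:n_dev]
    test = pairs[n_dev:n_dev + n_test]
    train = pairs[n_dev + n_test:]
    return train, dev, test

data = [(f"review {i}", i % 2) for i in range(100)]  # toy input/output pairs
train, dev, test = split_data(data)
print(len(train), len(dev), len(test))  # 80 10 10
```

Keeping the three sets disjoint is the point: tune on dev, touch test only once.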


Classification Examples

  • Spam filtering

  • Call routing

  • Text classification



POS Tagging

  • Task: Given a sentence, predict tag of each word

  • Is this a classification problem?

  • Categories: N, V, Adj,…

  • What information is useful?

  • How do POS tagging and text classification differ?

    • Sequence labeling problem



Word Segmentation

  • Task: Given a string, break into words

  • Categories:

    • B(reak), NB (no break)

    • B(eginning), I(nside), E(nd)

  • e.g. c1 c2 || c3 c4 c5

    • c1/NB c2/B c3/NB c4/NB c5/B

    • c1/B c2/E c3/B c4/I c5/E

  • What type of task?

    • Also sequence labeling
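The two labeling schemes on the slide can be sketched directly; the example uses a 2-character word followed by a 3-character word, matching c1 c2 || c3 c4 c5:

```python
def bie_labels(words):
    """B(eginning), I(nside), E(nd) label for each character of each word."""
    labels = []
    for w in words:
        if len(w) == 1:
            labels.append("B")  # single-character word: just a beginning here
        else:
            labels.extend(["B"] + ["I"] * (len(w) - 2) + ["E"])
    return labels

def break_labels(words):
    """B(reak) after the last character of each word, NB (no break) otherwise."""
    labels = []
    for w in words:
        labels.extend(["NB"] * (len(w) - 1) + ["B"])
    return labels

print(bie_labels(["ab", "cde"]))    # ['B', 'E', 'B', 'I', 'E']
print(break_labels(["ab", "cde"]))  # ['NB', 'B', 'NB', 'NB', 'B']
```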


The Structure of a Classification Problem


Classification Problem Steps

  • Input processing:

    • Split data into training/dev/test

    • Convert data into an Attribute-Value Matrix

      • Identify candidate features

      • Perform feature selection

      • Create AVM representation

  • Training

  • Testing

  • Evaluation



Classifiers in practice

[Pipeline diagram: Training data feeds Model learning to produce a Model; Test data goes through Model testing against that Model to yield Predictions, which are then Evaluated; Preprocessing and Post-processing steps wrap the pipeline]



Representing Input

  • Potentially infinite values to represent

  • Represent input as feature vector

    • x=<v1,v2,v3,…,vn>

    • x=<f1=v1,f2=v2,…,fn=vn>

  • What are good features?
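Mapping raw input to the feature-vector form x = <v1, v2, …, vn> can be sketched as below, given a fixed feature vocabulary (the vocabulary here is a made-up example):

```python
def to_feature_vector(text, vocabulary, binary=True):
    """Fixed-length vector over a vocabulary: presence/absence or counts."""
    tokens = text.lower().split()
    if binary:
        return [1 if f in tokens else 0 for f in vocabulary]
    return [tokens.count(f) for f in vocabulary]

vocab = ["money", "loan", "meeting", "vote"]
print(to_feature_vector("apply for loan money money", vocab, binary=False))
# → [2, 1, 0, 0]
```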


Example I

  • Spam Tagging

    • Classes: Spam/Not Spam

    • Input:

      • Email messages


Doc1

Western Union Money Transfer [email protected] Bishops Square Akpakpa E1 6AO, Cotonou Benin Republic Website: http://www.westernunion.com/ info/selectCountry.asP Phone: +229 99388639 Attention Beneficiary, This to inform you that the federal ministry of finance Benin Republic has started releasing scam victim compensation fund mandated by United Nation Organization through our office. I am contacting you because our agent have sent you the first payment of $5,000 for your compensation funds total amount of $500 000 USD (Five hundred thousand united state dollar) We need your urgent response so that we shall release your payment information to you. You can call our office hot line for urgent attention (+22999388639)


Doc2

  • Hello! my dear. How are you today and your family? I hope all is good, kindly pay Attention and understand my aim of communicating you today through this Letter, My names is Saif al-Islam al-Gaddafi the Son of former Libyan President. i was born on 1972 in Tripoli Libya, By Gaddafi’s second wive. I want you to help me clear this fund in your name which i deposited in Europe please i would like this money to be transferred into your account before they find it. the amount is 20.300,000 million GBP British Pounds sterling through a


Doc3

  • from: [email protected]

  • Apply for loan at 3% interest Rate..Contact us for details.


Doc4

  • from: [email protected]

  • REMINDER: If you have not received a PIN number to vote in the elections and have not already contacted us, please contact either Drago Radev ([email protected]) or Priscilla Rasmussen ([email protected]) right away. Everyone who has not received a pin but who has contacted us already will get a new pin over the weekend. Anyone who still wants to join for 2011 needs to do this by Monday (November 7th) in order to be eligible to vote. And, if you do have your PIN number and have not voted yet, remember every vote counts!


What are good features?



Possible Features

  • Words!

    • Feature for each word

      • Binary: presence/absence

      • Integer: occurrence count

      • Particular word types: money/sex terms; patterns like [Vv].*gr.* (obfuscated “Viagra”)

  • Errors:

    • Spelling, grammar

  • Images

  • Header info
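A sketch of a few of these spam features in code. The cue-word list, the word-boundary form of the slide's [Vv].*gr.* pattern, and the all-caps heuristic are all illustrative assumptions:

```python
import re

# Word-boundary variant of the slide's pattern for obfuscated drug names.
DRUG_PATTERN = re.compile(r"\b[Vv]\S*gr\S*")

def spam_features(message):
    """A few toy spam features: money cues, drug-name pattern, shouting."""
    tokens = message.split()
    return {
        "has_money_word": any(t.lower() in {"money", "fund", "payment"}
                              for t in tokens),
        "drug_pattern_hits": sum(bool(DRUG_PATTERN.search(t)) for t in tokens),
        "all_caps_words": sum(t.isupper() and len(t) > 2 for t in tokens),
    }

print(spam_features("URGENT payment of V1agra funds"))
# → {'has_money_word': True, 'drug_pattern_hits': 1, 'all_caps_words': 1}
```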


Representing Input: Attribute-Value Matrix





Classifier

  • Result of training on input data

    • With or without class labels

  • Formal perspective:

    • f(x) = y: x is the input; y is in C

    • More generally:

      • f(x) = {(c_i, score_i)}, where

        • x is the input,

        • c_i is in C,

        • score_i is the score for assigning category c_i
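The general form f(x) = {(c_i, score_i)} can be sketched with a toy linear scorer; the categories, features, and weights below are all made up for illustration:

```python
def classify(features, weights):
    """Score each category as a dot product of the feature vector with
    that category's weight vector; returns {category: score}."""
    scores = {}
    for category, w in weights.items():
        scores[category] = sum(w.get(f, 0.0) * v for f, v in features.items())
    return scores

weights = {"spam": {"money": 2.0, "meeting": -1.0},
           "ham":  {"money": -0.5, "meeting": 1.5}}

features = {"money": 1, "meeting": 0}
scores = classify(features, weights)
print(max(scores, key=scores.get))  # spam
```

Taking the argmax over the scores recovers the simpler f(x) = y form.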



Testing

  • Input:

    • Test data:

      • e.g. AVM

    • Classifier

  • Output:

    • Decision matrix

    • Can assign highest scoring class to each input





Evaluation

  • Confusion matrix:

  • Precision: TP/(TP+FP)

  • Recall: TP/(TP+FN)

  • F-score: 2PR/(P+R)

  • Accuracy = (TP+TN)/(TP+TN+FP+FN)

  • Why F-score? Accuracy?



Evaluation Example

  • Confusion matrix:

  • Precision: 1/(1+4)=1/5

  • Recall: TP/(TP+FN)=1/6

  • F-score: 2PR/(P+R) = 2 · (1/5) · (1/6) / (1/5 + 1/6) = 2/11

  • Accuracy = 91%
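The example's arithmetic can be checked with exact fractions. The slide's confusion matrix is not reproduced in this transcript, so the counts below are reconstructed from the stated results: TP=1 and FP=4 (precision 1/5), FN=5 (recall 1/6), and TN=90 is an assumption that makes accuracy come out to 91% over 100 instances:

```python
from fractions import Fraction

# Reconstructed counts (TN=90 assumed to match the 91% accuracy figure).
TP, FP, FN, TN = 1, 4, 5, 90

precision = Fraction(TP, TP + FP)                        # 1/5
recall = Fraction(TP, TP + FN)                           # 1/6
f_score = 2 * precision * recall / (precision + recall)  # 2/11
accuracy = Fraction(TP + TN, TP + TN + FP + FN)          # 91/100
print(precision, recall, f_score, accuracy)
```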



Classification Problem Steps

  • Input processing:

    • Split data into training/dev/test

    • Convert data into an Attribute-Value Matrix

      • Identify candidate features

      • Perform feature selection

      • Create AVM representation

  • Training

  • Testing

  • Evaluation


Classification Algorithms

Will be covered in detail in 572

  • Nearest Neighbor

  • Naïve Bayes

  • Decision Trees

  • Neural Networks

  • Maximum Entropy



Feature Design & Representation

  • What sorts of information do we want to encode?

    • words, frequencies, ngrams, morphology, sentence length, etc

  • Issue: Learning algorithms work on numbers

    • Many work only on binary values (0/1)

    • Others work on any real-valued input

  • How can we represent these different kinds of information

    • Numerically?

    • As binary values?



Representation

  • Words/tags/ngrams/etc

    • One feature per item:

      • Binary: presence/absence

      • Real: counts

  • Binarizing numeric features:

    • Single threshold

    • Multiple thresholds

    • Binning: 1 binary feature/bin
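Both binarization strategies above can be sketched in a few lines; the threshold and bin boundaries below are arbitrary illustrative values:

```python
def threshold_feature(value, threshold):
    """Single threshold: 1 if the value reaches the threshold, else 0."""
    return 1 if value >= threshold else 0

def bin_features(value, boundaries):
    """Binning: one binary feature per bin, defined by sorted boundaries
    as half-open intervals; exactly one feature fires."""
    bins = [0] * (len(boundaries) + 1)
    i = sum(value >= b for b in boundaries)
    bins[i] = 1
    return bins

print(threshold_feature(7, 5))      # 1
print(bin_features(7, [0, 5, 10]))  # [0, 0, 1, 0]  (7 falls in [5, 10))
```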

