
Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers


Presentation Transcript


  1. Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers. Panos Ipeirotis, Stern School of Business, New York University. Joint work with Victor Sheng, Foster Provost, and Jing Wang

  2. Motivation • Many tasks rely on high-quality labels for objects: • relevance judgments for search engine results • identification of duplicate database records • image recognition • song categorization • videos • Labeling can be relatively inexpensive, using Mechanical Turk, the ESP game, …

  3. Micro-Outsourcing: Mechanical Turk • Requesters post micro-tasks, paying a few cents each

  4. Motivation • Labels can be used to train predictive models • But: labels obtained through such sources are noisy • This directly affects the quality of the learned models

  5. Quality and Classification Performance • Labeling quality increases → classification quality increases • [Plot: classification accuracy vs. training set size, for labeling quality Q = 0.5, 0.6, 0.8, 1.0]

  6. How to Improve Labeling Quality • Find better labelers • Often expensive, or beyond our control • Use multiple noisy labelers: repeated-labeling • Our focus

  7. Majority Voting and Label Quality • Ask multiple labelers, keep the majority label as the “true” label • Quality is the probability of the majority label being correct • [Plot: majority-vote quality vs. number of labelers, where P is the probability of an individual labeler being correct, for P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
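As an illustration of the curves above, a minimal sketch (mine, not from the slides) of how majority-vote quality can be computed for an odd number of independent, equally accurate labelers:

```python
from math import comb

def majority_quality(p: float, num_labelers: int) -> float:
    """Probability that the majority vote of an odd number of independent
    labelers, each correct with probability p, matches the true binary label."""
    needed = num_labelers // 2 + 1  # smallest possible majority
    return sum(comb(num_labelers, k) * p**k * (1 - p)**(num_labelers - k)
               for k in range(needed, num_labelers + 1))

print(majority_quality(0.7, 11))  # ~0.92: 11 labelers at 70% accuracy
print(majority_quality(0.5, 11))  # 0.5: random labelers do not help
```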

  8. Tradeoffs for Modeling • Get more examples → improve classification • Get more labels per example → improve label quality → improve classification • [Plot: classification accuracy vs. number of examples, for labeling quality Q = 0.5, 0.6, 0.8, 1.0]

  9. Basic Labeling Strategies • Single Labeling • Get as many data points as possible • One label each • Round-robin Repeated Labeling • Repeatedly label data points, giving the next label to the example with the fewest labels so far
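A minimal sketch of the round-robin selection rule described above (the example names and data structure are illustrative, not from the paper):

```python
def next_to_label(label_counts: dict) -> str:
    """Round-robin repeated labeling: the example with the fewest labels
    collected so far receives the next label."""
    return min(label_counts, key=label_counts.get)

counts = {"example_a": 3, "example_b": 1, "example_c": 2}
print(next_to_label(counts))  # -> "example_b"
counts["example_b"] += 1      # record the new label, then repeat
```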

  10. Repeated Labeling vs. Single Labeling • [Plot: single vs. repeated labeling, for labeling quality P = 0.8 and K = 5 labels per example] • With low noise, more (singly labeled) examples is better

  11. Repeated Labeling vs. Single Labeling • [Plot: repeated vs. single labeling, for labeling quality P = 0.6 and K = 5 labels per example] • With high noise, repeated labeling is better

  12. Selective Repeated-Labeling • We have seen: • With enough examples and noisy labels, getting multiple labels is better than single-labeling • Can we do better than the basic strategies? • Key observation: we have additional information to guide selection of data for repeated labeling • the current multiset of labels • Example: {+,-,+,+,-,+} vs. {+,+,+,+}

  13. Natural Candidate: Entropy • Entropy is a natural measure of label uncertainty: • E({+,+,+,+,+,+}) = 0 • E({+,-,+,-,+,-}) = 1 • Strategy: get more labels for high-entropy label multisets
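A minimal sketch (mine) of the entropy of a label multiset; it reproduces the two values above and also shows the scale-invariance problem discussed two slides later:

```python
from math import log2

def label_entropy(pos: int, neg: int) -> float:
    """Shannon entropy (in bits) of the empirical label distribution."""
    total = pos + neg
    entropy = 0.0
    for count in (pos, neg):
        if count:
            p = count / total
            entropy -= p * log2(p)
    return entropy

print(label_entropy(6, 0))      # {+,+,+,+,+,+}  -> 0.0
print(label_entropy(3, 3))      # {+,-,+,-,+,-}  -> 1.0
print(label_entropy(3, 2))      # (3+, 2-)       -> ~0.971
print(label_entropy(600, 400))  # (600+, 400-)   -> ~0.971 (identical)
```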

  14. What Not to Do: Use Entropy • Improves at first, but hurts in the long run

  15. Why Not Entropy • In the presence of noise, entropy will be high even with many labels • Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-)

  16. Estimating Label Uncertainty (LU) • Observe the +’s and –’s and compute Pr{+|obs} and Pr{-|obs} • Label uncertainty = tail of the Beta distribution below 0.5 • [Plot: Beta probability density function over [0.0, 1.0], with the shaded tail below 0.5 giving the score S_LU]

  17. Label Uncertainty • p = 0.7 • 5 labels (3+, 2-) • Entropy ~ 0.97 • CDFb = 0.34

  18. Label Uncertainty • p = 0.7 • 10 labels (7+, 3-) • Entropy ~ 0.88 • CDFb = 0.11

  19. Label Uncertainty • p = 0.7 • 20 labels (14+, 6-) • Entropy ~ 0.88 • CDFb = 0.04
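The CDFb numbers on slides 17–19 can be reproduced with a short sketch (mine), assuming a uniform Beta(1,1) prior so that pos positive and neg negative labels give a Beta(pos+1, neg+1) posterior; the score is the posterior mass on the losing side of 0.5:

```python
from scipy.stats import beta

def label_uncertainty(pos: int, neg: int) -> float:
    """Posterior probability that the observed majority label is wrong,
    under a uniform Beta(1,1) prior on the true positive rate."""
    majority, minority = max(pos, neg), min(pos, neg)
    return beta.cdf(0.5, majority + 1, minority + 1)

print(label_uncertainty(3, 2))   # 5 labels   -> ~0.34
print(label_uncertainty(7, 3))   # 10 labels  -> ~0.11
print(label_uncertainty(14, 6))  # 20 labels  -> ~0.04
```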

  20. Quality Comparison • [Plot: label quality for Label Uncertainty vs. round-robin selection] • Round robin is already better than single labeling

  21. Model Uncertainty (MU) • [Illustration: example sets, learned models, and a “?” instance near the decision boundary] • Learning a model of the data provides an alternative source of information about label certainty • Model uncertainty: get more labels for instances that cause model uncertainty • Intuition? • For data quality: low-certainty “regions” may be due to incorrect labeling of the corresponding instances • For modeling: why improve training-data quality where the model is already certain? • Self-healing process
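A minimal sketch (an assumed formulation, not the paper's exact scoring) of a model-uncertainty score: an instance whose predicted class probability is near 0.5 is the one the model is least certain about:

```python
def model_uncertainty(prob_pos: float) -> float:
    """0 when the model is certain (probability near 0 or 1),
    maximal (0.5) at the decision boundary."""
    return 0.5 - abs(prob_pos - 0.5)

# predicted P(+) from a model trained on the current noisy labels (made-up values)
probs = [0.95, 0.52, 0.10]
scores = [model_uncertainty(p) for p in probs]  # [0.05, 0.48, 0.10]
print(scores.index(max(scores)))                # instance 1 gets the next label
```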

  22. Label + Model Uncertainty • Label and model uncertainty (LMU): avoid examples where either strategy is certain
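A sketch of how the two scores might be combined; the slides do not give the formula, so the geometric mean below is my assumption of one reasonable choice that stays low whenever either score is low:

```python
from math import sqrt

def lmu_score(s_lu: float, s_mu: float) -> float:
    """High only when BOTH the observed labels and the model are uncertain
    (assumed combination rule, not necessarily the paper's exact formula)."""
    return sqrt(s_lu * s_mu)

print(lmu_score(0.34, 0.48))  # uncertain on both counts -> high priority
print(lmu_score(0.34, 0.01))  # model already certain    -> low priority
```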

  23. Quality • Model Uncertainty alone also improves quality • [Plot: label quality for Label + Model Uncertainty, Label Uncertainty, and uniform round robin]

  24. Comparison: Model Quality (I) • Across 12 domains, LMU (Label & Model Uncertainty) is always better than GRR. • LMU is statistically significantly better than LU and MU.

  25. Comparison: Model Quality (II) • Across 12 domains, LMU is always better than GRR. • LMU is statistically significantly better than LU and MU.

  26. Summary of results • Micro-outsourcing (e.g., MTurk, RentaCoder, the ESP game) changes the landscape for data acquisition • Repeated labeling improves data quality and model quality • With noisy labels, repeated labeling can be preferable to single labeling • When labels are relatively cheap, repeated labeling can do much better than single labeling • Round-robin repeated labeling works well • Selective repeated labeling improves substantially

  27. Example: Build an Adult Web Site Classifier • Need a large number of hand-labeled sites • Get people to look at sites and classify them as: G (general), PG (parental guidance), R (restricted), X (porn) • Cost/Speed Statistics • Undergrad intern: 200 websites/hr, cost: $15/hr • MTurk: 2500 websites/hr, cost: $12/hr

  28. Bad news: Spammers! • Worker ATAMRO447HWJQ • labeled X (porn) sites as G (general audience)

  29. Solution: Repeated Labeling • 1 worker: 70% correct • 11 workers: 93% correct • Probability of correctness increases with the number of workers • Probability of correctness increases with the quality of workers

  30. But Majority Voting can be Expensive • Single-Vote Statistics • MTurk: 2500 websites/hr, cost: $12/hr • Undergrad: 200 websites/hr, cost: $15/hr • 11-Vote Statistics • MTurk: 227 websites/hr (2500 / 11), cost: $12/hr • Undergrad: 200 websites/hr, cost: $15/hr

  31. Spammer among 9 workers • We can compute error rates for each worker • Error rates for ATAMRO447HWJQ • P[X → X] = 9.847%  P[X → G] = 90.153% • P[G → X] = 0.053%  P[G → G] = 99.947% • Our “friend” ATAMRO447HWJQ mainly marked sites as G. Obviously a spammer…

  32. Rejecting spammers and Benefits • Random answers error rate = 50% • Average error rate for ATAMRO447HWJQ: 45.2% • P[X → X] = 9.847%  P[X → G] = 90.153% • P[G → X] = 0.053%  P[G → G] = 99.947% • Action: REJECT and BLOCK • Results: • Over time you block all spammers • Spammers learn to avoid your HITs • You can decrease redundancy, as the quality of the remaining workers is higher
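The average error rate quoted above can be approximated from the worker's confusion matrix; a minimal sketch (mine), assuming roughly equal class prevalences, which is why it gives ~45.1% rather than the slide's exact 45.2%:

```python
# P[true -> assigned] for worker ATAMRO447HWJQ (rates from the slide)
error_rates = {
    ("X", "X"): 0.09847, ("X", "G"): 0.90153,
    ("G", "X"): 0.00053, ("G", "G"): 0.99947,
}
priors = {"X": 0.5, "G": 0.5}  # assumed class prevalences

avg_error = sum(priors[true] * p
                for (true, assigned), p in error_rates.items()
                if true != assigned)
print(avg_error)  # ~0.451: close to the 50% of a random guesser
```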

  33. After rejecting spammers, quality goes up • With spam, 1 worker: 70% correct • With spam, 11 workers: 93% correct • Without spam, 1 worker: 80% correct • Without spam, 5 workers: 94% correct

  34. Correcting biases • Sometimes workers are careful but biased • Example: a worker who classifies G → P and P → R • Average error rate for ATLJIK76YH1TF: 45.0% • Error Rates for Worker ATLJIK76YH1TF: • P[G → G] = 20.0%  P[G → P] = 80.0%  P[G → R] = 0.0%  P[G → X] = 0.0% • P[P → G] = 0.0%  P[P → P] = 0.0%  P[P → R] = 100.0%  P[P → X] = 0.0% • P[R → G] = 0.0%  P[R → P] = 0.0%  P[R → R] = 100.0%  P[R → X] = 0.0% • P[X → G] = 0.0%  P[X → P] = 0.0%  P[X → R] = 0.0%  P[X → X] = 100.0% • Is ATLJIK76YH1TF a spammer?

  35. Correcting biases • Error Rates for Worker ATLJIK76YH1TF: • P[G → G] = 20.0%  P[G → P] = 80.0%  P[G → R] = 0.0%  P[G → X] = 0.0% • P[P → G] = 0.0%  P[P → P] = 0.0%  P[P → R] = 100.0%  P[P → X] = 0.0% • P[R → G] = 0.0%  P[R → P] = 0.0%  P[R → R] = 100.0%  P[R → X] = 0.0% • P[X → G] = 0.0%  P[X → P] = 0.0%  P[X → R] = 0.0%  P[X → X] = 100.0% • For ATLJIK76YH1TF, we simply need to compute the “non-recoverable” error rate (technical details omitted) • Non-recoverable error rate for ATLJIK76YH1TF: 9% • The “condition number” of the matrix (how easy it is to invert the matrix) is a good indicator of spamminess
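A minimal sketch (mine) of the condition-number idea: treating a worker's error rates as a row-stochastic matrix, a nearly non-invertible matrix means the worker's labels carry little recoverable information. The 2x2 matrix below is the spammer's from slide 31; using NumPy's 2-norm condition number is my own framing, not necessarily the paper's exact measure:

```python
import numpy as np

perfect = np.eye(2)                        # ideal worker: identity matrix
spammer = np.array([[0.09847, 0.90153],    # rows: true X, G
                    [0.00053, 0.99947]])   # cols: assigned X, G (ATAMRO447HWJQ)

print(np.linalg.cond(perfect))  # 1.0  -> labels fully recoverable
print(np.linalg.cond(spammer))  # ~19  -> ill-conditioned, spammer-like
```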

  36. Too much theory? • Open-source implementation available at: http://code.google.com/p/get-another-label/ • Input: • Labels from Mechanical Turk • Cost of incorrect labelings (e.g., X → G costlier than G → X) • Output: • Corrected labels • Worker error rates • Ranking of workers according to their quality • Alpha version, more improvements to come! • Suggestions and collaborations welcome!

  37. Many new directions… • Strategies using “learning-curve gradient” • Increased compensation vs. labeler quality • Multiple “real” labels • Truly “soft” labels • Selective repeated tagging

  38. Other Projects • SQoUT project: Structured Querying over Unstructured Text (http://sqout.stern.nyu.edu) • Faceted Interfaces • EconoMining project: The Economic Value of User-Generated Content (http://economining.stern.nyu.edu)

  39. SQoUT: Structured Querying over Unstructured Text • Information extraction applications extract structured relations from unstructured text • Example, from alliances covered in The New York Times: “July 8, 2008: Intel Corporation and DreamWorks Animation today announced they have formed a strategic alliance aimed at revolutionizing 3-D filmmaking technology, …” • An information extraction system (e.g., OpenCalais) turns such articles into structured tuples • Alliances and strategic partnerships before 1990 are sparsely covered in databases such as SDC Platinum

  40. In an ideal world… (SIGMOD’06, TODS’07, ICDE’09, TODS’09) • Pipeline: retrieve documents from a database / the web / an archive → process the documents with extraction system(s) → extract output tuples • Declarative query: SELECT Date, Company1, Company2 FROM Alliances USING OpenCalais OVER NYT_archive [WITH recall > 0.2 AND precision > 0.9]

  41. SQoUT: The Questions (SIGMOD’06 best paper, TODS’07, ICDE’09, TODS’09) • Pipeline: retrieve documents from a database / the web / an archive → process the documents with extraction system(s) → extract output tuples • Questions: • How do we retrieve the documents? (Scan all? Specific websites? Query Google?) • How to configure the extraction systems? • What is the execution time? • What is the output quality?

  42. EconoMining Project: Show me the Money! • Basic idea: opinion mining is an important application of information extraction • Opinions of users are reflected in some economic variable (price, sales) • Applications (in increasing order of difficulty): • Buyer feedback and seller pricing power in online marketplaces (ACL 2007) • Product reviews and product sales (KDD 2007) • Importance of reviewers based on economic impact (ICEC 2007) • Hotel ranking based on “bang for the buck” (WebDB 2008) • Political news (MSM, blogs), prediction markets, and news importance

  43. Some Indicative Dollar Values • [Plot: words and phrases placed on a negative-to-positive scale with associated dollar values; e.g., “good packaging” is associated with -$0.56] • A natural method for extracting sentiment strength and polarity • Captures misspellings as well • Naturally captures the pragmatic meaning within the given context

  44. Thanks! Q & A?

  45. So… • Multiple noisy labelers improve quality • (Sometimes) the quality of multiple noisy labelers is better than the quality of the best labeler in the set • So, should we always get multiple labels?

  46. Optimal Label Allocation
