
Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers


Presentation Transcript


  1. Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers. Victor Sheng, Assistant Professor, University of Central Arkansas. Joint work with Foster Provost & Panos Ipeirotis, New York University Stern School.

  2. Outsourcing KDD Preprocessing • Traditionally, data mining teams invest in data formulation, information extraction, cleaning, etc. • Now, preprocessing tasks such as labeling and feature extraction can be outsourced via Mechanical Turk, Rent-a-Coder, etc. • Quality may be lower than expert labeling, but the low costs allow massive scale • Raghu Ramakrishnan, SIGKDD Innovation Award Lecture (2008): “the best you can expect are noisy labels”

  3. Mechanical Turk Example

  4. ESP Game (by Luis von Ahn)

  5. Other “Free” Labeling Schemes • Open Mind initiative (www.openmind.org) • Other GWAP (games with a purpose): Tag a Tune, Verbosity (tag words), Matchin (image ranking) • Web 2.0 systems? • Can/should tagging be directed?

  6. Noisy Labels Can Be Problematic • Many tasks rely on high-quality labels: learning predictive models, searching for relevant information, duplicate detection (i.e., entity resolution), image recognition/labeling, song categorization (e.g., Pandora), sentiment analysis • Noisy labels can degrade task performance

  7. Noisy Labels Impact Model Quality (here, labels are values of the target variable) • Labeling quality increases → classification quality increases • [Figure: model accuracy vs. number of training examples, one curve per labeling quality P = 1.0, 0.8, 0.6, 0.5]

  8. Summary of Results • Repeated labeling can improve data quality and model quality (but not always) • Repeated labeling can be preferable to single labeling when labels aren’t particularly cheap • When labels are relatively cheap, repeated labeling can do much better • Round-robin repeated labeling does well • Selective repeated labeling performs better

  9. Setting for This Talk (I) • Some process provides data points to be labeled, drawn from some fixed probability distribution • Data points <X, Y>, or “examples”; Y is a multiset of labels, e.g., {+,-,+} • Each example has a correct label • Set L of labelers L1, L2, …, potentially unbounded • Labeler Li has “quality” pi, the probability of labeling any given example correctly • Same quality for all labelers, i.e., pi = pj • Some strategies will acquire k labels for each example

  10. Setting for This Talk (II) • Total acquisition cost includes CU and CL • CU: the cost of acquiring the unlabeled “feature portion” of an example • CL: the cost of acquiring one label for an example • Same CU and CL for all examples • ρ = CU/CL is the cost ratio; some analyses ignore CU (ρ = 0) • A process (majority voting) generates an integrated label from a set of labels, e.g., {+,-,+} → + • We care about: the quality of the integrated labels, and the quality of predictive models induced from the data + integrated labels (e.g., accuracy on test data)
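A minimal sketch of the majority-voting integration step described on this slide (illustrative only; the function and variable names are not from the talk):

from collections import Counter

def integrate_labels(labels):
    # Majority-vote integration: collapse a multiset of noisy labels,
    # e.g. ['+', '-', '+'], into a single integrated label.
    # Ties are broken arbitrarily by Counter ordering.
    return Counter(labels).most_common(1)[0][0]

print(integrate_labels(['+', '-', '+']))  # -> '+'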

  11. Repeated Labeling Improves Label Quality • Ask multiple labelers, keep the majority label as the “true” label • Quality is the probability of being correct • [Figure: integrated-label quality vs. number of labels, one curve per individual labeler quality P = 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4; P is the probability of an individual labeler being correct]
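The curves on this slide follow from a standard binomial calculation: with k independent labelers of individual quality p, the majority-vote label is correct with the probability computed below. This is a sketch under the talk's assumptions (equal labeler quality, independent errors); k is taken odd to avoid ties.

from math import comb

def integrated_quality(p, k):
    # Probability that the majority vote of k independent labelers,
    # each correct with probability p, gives the correct label.
    assert k % 2 == 1, "use an odd k so there are no ties"
    return sum(comb(k, i) * p ** i * (1 - p) ** (k - i)
               for i in range(k // 2 + 1, k + 1))

for p in (0.4, 0.6, 0.8, 1.0):
    print(p, round(integrated_quality(p, 5), 3))
# Quality improves with k when p > 0.5, stays flat at p = 0.5,
# and actually degrades when p < 0.5.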

  12. Tradeoffs for Modeling • Get more labels → improve label quality → improve classification • Get more examples → improve classification • [Figure: model accuracy vs. number of training examples for labeling quality P = 1.0, 0.8, 0.6, 0.5]

  13. Basic Labeling Strategies • Single Labeling (SL): get as many examples as possible, one label each • Round-robin Repeated Labeling: give the next label to the example with the fewest labels so far • Fixed Round Robin (FRR): keep labeling the same fixed set of examples in some order • Generalized Round Robin (GRR): also acquire new examples
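A hypothetical sketch of how the two round-robin strategies spend a labeling budget (names such as get_label are placeholders for a call to a noisy labeler; none of this code is from the talk):

def fixed_round_robin(examples, budget, get_label):
    # FRR: keep cycling over the same fixed set of examples, so each new
    # label always goes to an example with the fewest labels so far.
    labels = {x: [] for x in examples}
    for i in range(budget):
        x = examples[i % len(examples)]
        labels[x].append(get_label(x))
    return labels

def generalized_round_robin(example_stream, budget, k, get_label):
    # GRR: keep acquiring new examples, giving each one k labels,
    # until the labeling budget runs out.
    labels = {}
    while budget >= k:
        x = next(example_stream)
        labels[x] = [get_label(x) for _ in range(k)]
        budget -= k
    return labels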

  14. Fixed Round Robin vs. Single Labeling • [Figure: accuracy vs. number of labels for FRR (100 examples) and SL; labeling quality p = 0.6, #examples = 100] • With high noise, repeated labeling is better than single labeling

  15. Tradeoffs for Modeling • Get more labels → improve label quality → improve classification • Get more examples → improve classification • [Figure: model accuracy vs. number of training examples for labeling quality P = 1.0, 0.8, 0.6, 0.5]

  16. Fixed Round Robin vs. Single Labeling • [Figure: accuracy vs. number of labels for SL and FRR (50 examples); labeling quality p = 0.8, #examples = 50] • With low noise and few examples, getting more (single-labeled) examples is better

  17. Tradeoffs for Modeling • Get more labels → improve label quality → improve classification • Get more examples → improve classification • [Figure: model accuracy vs. number of training examples for labeling quality P = 1.0, 0.8, 0.6, 0.5]

  18. Gen. Round Robin vs. Single Labeling • ρ = CU/CL = 0 (i.e., CU = 0), k = 10 (ρ: cost ratio; k: #labels per example) • Use up all examples • Repeated labeling is better than single labeling when labels are not particularly cheap
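For the cost comparisons on slides 18–20, a simple way to see the tradeoff is to count how many examples a fixed budget buys when each example costs CU + k·CL. A sketch, with costs expressed in units of CL (so CU = ρ); the budget of 1200 below is an arbitrary illustrative number:

def examples_affordable(budget, rho, k):
    # Number of examples a budget buys when each example costs
    # C_U + k * C_L, with costs in units of C_L (so C_U = rho).
    # rho = C_U / C_L (cost ratio), k = labels acquired per example.
    return int(budget // (rho + k))

# With rho = 0, single labeling buys 12x as many examples as 12-fold
# repeated labeling; with rho = 10 the gap shrinks to roughly 2x.
print(examples_affordable(1200, rho=0, k=1), examples_affordable(1200, rho=0, k=12))
print(examples_affordable(1200, rho=10, k=1), examples_affordable(1200, rho=10, k=12))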

  19. Gen. Round Robin vs. Single Labeling • ρ = CU/CL = 3, k = 5 (ρ: cost ratio; k: #labels per example) • Repeated labeling is better than single labeling when labels are relatively cheap

  20. Gen. Round Robin vs. Single Labeling • ρ = CU/CL = 10, k = 12 (ρ: cost ratio; k: #labels per example) • Repeated labeling is better still when labels are relatively cheaper

  21. Selective Repeated-Labeling • We have seen: with noisy labels, getting multiple labels is better than single labeling, and the benefit is magnified when preprocessing is costly • Can we do better than the basic strategies? • Key observation: the current multiset of labels provides additional information to guide the selection of data for repeated labeling • Example: {+,-,+,+,-,+} vs. {+,+,+,+}

  22. Natural Candidate: Entropy • Entropy is a natural measure of label uncertainty: E({+,+,+,+,+,+}) = 0; E({+,-,+,-,+,-}) = 1 • Strategy: get more labels for examples with high-entropy label multisets
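A small sketch of the entropy score on this slide, computed over the empirical label distribution of a multiset (illustrative code, not from the talk):

from collections import Counter
from math import log2

def label_entropy(labels):
    # Shannon entropy (bits) of the empirical label distribution
    # in a multiset of labels, e.g. ['+', '-', '+', '+'].
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

print(label_entropy(['+'] * 6))                         # 0.0
print(label_entropy(['+', '-'] * 3))                    # 1.0
print(round(label_entropy(['+'] * 3 + ['-'] * 2), 2))   # ~0.97 (cf. slide 26)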

  23. What Not to Do: Use Entropy • Entropy-based selection improves label quality at first, but hurts in the long run

  24. Why Not Entropy • In noisy scenarios, label sets that truly reflect the truth have high entropy even with many labels, while some sets accidentally get low entropy and never receive more labels • Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-) • Fundamental problem: entropy measures the mixture of labels, not our uncertainty about the correct label

  25. Estimating Label Uncertainty (LU) • Uncertainty: the estimated probability that the integrated label is wrong • Model the example’s true label probability as a Beta distribution, based on the observed +’s and –’s • Label uncertainty SLU = tail probability of the Beta distribution • [Figure: Beta probability density function over [0.0, 1.0], with SLU the shaded tail at 0.5]
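A sketch of the label-uncertainty score: treat the example's true label probability as Beta-distributed given the observed +'s and -'s, and take the tail on the wrong side of 0.5. Assuming a uniform Beta(1, 1) prior (the exact prior is an assumption here, not quoted from the talk), this reproduces the SLU values on the next three slides:

from scipy.stats import beta

def label_uncertainty(pos, neg):
    # S_LU: posterior probability that the majority label is wrong,
    # i.e. the Beta-posterior mass on the wrong side of 0.5.
    # Assumes a uniform Beta(1, 1) prior on the true '+' probability.
    a, b = pos + 1, neg + 1
    if pos >= neg:
        return beta.cdf(0.5, a, b)   # majority '+': mass below 0.5
    return beta.sf(0.5, a, b)        # majority '-': mass above 0.5

print(round(label_uncertainty(3, 2), 2))    # 0.34 (slide 26)
print(round(label_uncertainty(7, 3), 2))    # 0.11 (slide 27)
print(round(label_uncertainty(14, 6), 2))   # 0.04 (slide 28)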

  26. Label Uncertainty • p = 0.7 • 5 labels (3+, 2-) • Entropy ≈ 0.97 • SLU = 0.34

  27. Label Uncertainty • p = 0.7 • 10 labels (7+, 3-) • Entropy ≈ 0.88 • SLU = 0.11

  28. Label Uncertainty • p = 0.7 • 20 labels (14+, 6-) • Entropy ≈ 0.88 • SLU = 0.04

  29. Label Uncertainty vs. Round Robin • Similar results across a dozen data sets

  30. Gen. Round Robin vs. Single Labeling • ρ = CU/CL = 10, k = 12 (ρ: cost ratio; k: #labels per example) • Repeated labeling is better than single labeling here

  31. Label Uncertainty vs. Round Robin • Similar results across a dozen data sets • Up to this point, LU is the best strategy

  32. Another Strategy: Model Uncertainty (MU) • [Figure: examples labeled +/- in feature space, and models induced from them] • Learning a model of the data provides an alternative source of information about label certainty • Model uncertainty: get more labels for instances on which the model is uncertain • Intuition: for data quality, low-certainty “regions” may be due to incorrect labeling of the corresponding instances; for modeling, why improve training-data quality where the model is already certain? • A self-healing process
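The slide does not give a formula for the model-uncertainty score, so the following is only one plausible reading: train a model on the current integrated labels and score each example by how unsure the model is about it (the predicted probability of its less likely class). Everything here, including the choice of logistic regression, is an illustrative assumption:

import numpy as np
from sklearn.linear_model import LogisticRegression

def model_uncertainty_scores(X, integrated_labels):
    # S_MU sketch: fit a model to the current integrated labels and score
    # each example by the predicted probability of its *less* likely class
    # (0.5 = maximally uncertain, 0.0 = fully certain). Illustrative only;
    # the paper's estimator (e.g. cross-validated or bagged models) may differ.
    model = LogisticRegression(max_iter=1000).fit(X, integrated_labels)
    return model.predict_proba(X).min(axis=1)

# Hypothetical usage: pick the examples whose labels to re-acquire next.
X = np.random.rand(100, 5)
y = (X[:, 0] > 0.5).astype(int)              # stand-in for integrated labels
to_relabel = np.argsort(-model_uncertainty_scores(X, y))[:10]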

  33. Yet Another Strategy: Label & Model Uncertainty (LMU) • Label and model uncertainty (LMU): avoid examples where either strategy is certain
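One natural way to combine the two scores so that an example is selected only when neither source is confident is a geometric mean; whether this is the exact combination used in the paper is an assumption, so treat this as a sketch:

from math import sqrt

def label_and_model_uncertainty(s_lu, s_mu):
    # S_LMU sketch: small whenever either score is small, so examples that
    # either the label multiset or the model is confident about are avoided.
    return sqrt(s_lu * s_mu)

print(round(label_and_model_uncertainty(0.34, 0.45), 2))  # hypothetical scores
print(round(label_and_model_uncertainty(0.04, 0.45), 2))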

  34. Comparison • [Figure: label quality vs. number of labels for Label & Model Uncertainty, Label Uncertainty, and GRR] • Model Uncertainty alone also improves quality

  35. Comparison: Model Quality (I) • [Figure: model accuracy curves, with Label & Model Uncertainty highlighted] • Across 12 domains, LMU is always better than GRR • LMU is statistically significantly better than LU and MU

  36. Comparison: Model Quality (II) • Across 12 domains, LMU is always better than GRR • LMU is statistically significantly better than LU and MU

  37. Summary of Results • Micro-task outsourcing (e.g., MTurk, the ESP game) has changed the landscape for data formulation • Repeated labeling can improve data quality and model quality (but not always) • Repeated labeling can be preferable to single labeling when labels aren’t particularly cheap • When labels are relatively cheap, repeated labeling can do much better • Round-robin repeated labeling does well • Selective repeated labeling performs better

  38. Thanks! Q & A?
