
Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Foster Provost, New York University. Joint work with Panos Ipeirotis, Victor Sheng, and Jing Wang.


Presentation Transcript


  1. Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers. Foster Provost, New York University. Joint work with Panos Ipeirotis, Victor Sheng, and Jing Wang.

  3. Outsourcing machine learning preprocessing • Traditionally, modeling teams have invested substantial internal resources in data formulation, information extraction, cleaning, and other preprocessing • Raghu Ramakrishnan, SIGKDD Innovation Award Lecture (2008): "the best you can expect are noisy labels" • Now we can outsource preprocessing tasks (labeling, feature extraction, verifying information extraction, etc.) using Mechanical Turk, Rent-a-Coder, etc. • Quality may be lower than expert labeling (much?), but low costs can allow massive scale • The ideas may also apply to focusing user-generated tagging, crowdsourcing, etc.

  4. Example: Build a web-page classifier for inappropriate content • Need a large number of hand-labeled web pages • Get people to look at pages and classify them • For example, for adult content: G (general), PG (parental guidance), R (restricted), X (porn) • Cost/Speed Statistics • Undergrad intern: 200 web pages/hr, cost: $15/hr • MTurk: 2500 web pages/hr, cost: $12/hr

  5. Noisy labels can be problematic • Many tasks rely on high-quality labels for objects: webpage classification for safe advertising, learning predictive models, searching for relevant information, finding duplicate database records, image recognition/labeling, song categorization, sentiment analysis • Noisy labels can lead to degraded task performance

  6. Quality and Classification Performance (here, labels are values for the target variable) • Labeling quality increases → classification quality increases • [Figure: classification-quality learning curves, one per labeling quality P = 1.0, 0.8, 0.6, 0.5]

  7. Summary of results • Repeated labeling can improve data quality and model quality (but not always) • When labels are noisy, repeated labeling can be preferable to single labeling • When labels are relatively cheap, repeated labeling can do much better • Round-robin repeated labeling does well • Selective repeated labeling improves significantly

  8. I won’t talk about …

  9. Related topic • Estimating (and using) the labeler quality • for multilabeled data: Dawid & Skene 1979; Raykar et al. JMLR 2010; Donmez et al. KDD09 • for single-labeled data with variable-noise labelers: Donmez & Carbonell 2008; Dekel & Shamir 2009a,b • to eliminate/down-weight poor labelers: Dekel & Shamir, Donmez et al.; Raykar et al. (implicitly) • and correct labeler biases: Ipeirotis et al. HCOMP-10 • Example-conditional labeler performance • Yan et al. 2010a,b • Using learned model to find bad labelers/labels: Brodley & Friedl 1999; Dekel & Shamir, Us (I’ll discuss)

  10. Setting for this talk (I) • An unknown process provides data points to be labeled, drawn randomly from some fixed probability distribution • Data points, or "examples", comprise a vector of features or descriptive attributes • We sometimes consider a fixed subset S of examples • Labels are binary • Set L of labelers, L1, L2, …, (potentially unbounded for this talk) • Each Li has "quality" pi, which is the probability that Li will label any given example correctly • pi = pj for most of this talk (sometimes called q) • Some subset of L will label each example • Some strategies will acquire k labels for each example • Total acquisition cost includes the cost CU of acquiring the unlabeled "feature portion" and the cost CL of acquiring the "label" of an example • For most of the talk I'll ignore CU • ρ = CU/CL gives the cost ratio
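As an aside, a minimal sketch of this cost accounting, using only the quantities defined above (the function and argument names are mine):

```python
def acquisition_cost(num_examples: int, num_labels: int, c_u: float, c_l: float) -> float:
    """Total acquisition cost: C_U per unlabeled feature vector plus C_L per label.
    The cost ratio rho = c_u / c_l; rho = 0 means the feature portion is free."""
    return num_examples * c_u + num_labels * c_l

# Example: with rho = C_U/C_L = 3 and k = 5 labels per example (as on slide 21),
# 100 examples cost acquisition_cost(100, 500, 3*c_l, c_l) = 800 * c_l,
# measured in units of the label cost c_l.
```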

  11. Setting for this talk (II) • We select a fixed process for producing an integrated label from a set of labels (e.g., majority voting) • We care about: • the quality of the labeling, i.e., the probability that an integrated label is correct • the generalization performance of predictive models induced from the data + integrated labels, e.g., measured on hold-out data (accuracy, AUC, etc.)

  12. Majority Voting and Label Quality • Ask multiple labelers, keep the majority label as the "true" label • Quality is the probability of the integrated label being correct • [Figure: integrated label quality vs. number of labelers, one curve per individual labeler quality P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, where P is the probability of an individual labeler being correct]
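A minimal sketch of the computation behind these curves, assuming k independent labelers who are each correct with probability p, and an odd k so the majority vote never ties (the function name is mine):

```python
from math import comb

def majority_vote_quality(p: float, k: int) -> float:
    """Probability that the majority vote of k independent labelers,
    each correct with probability p, equals the true (binary) label."""
    assert k % 2 == 1, "use an odd k to avoid ties"
    # Correct whenever more than half of the k labelers are correct.
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

# majority_vote_quality(0.8, 5)  ~= 0.94
# majority_vote_quality(0.6, 11) ~= 0.75
# majority_vote_quality(0.4, 11) ~= 0.25  (below-chance labelers make things worse)
```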

  13. Tradeoffs for Modeling • Get more examples → improve classification • Get more labels → improve label quality → improve classification • [Figure: learning curves for labeling quality P = 1.0, 0.8, 0.6, 0.5]

  14. Basic Labeling Strategies • Single Labeling: get as many data points as possible, one label each • Round-robin Repeated Labeling: • Fixed Round Robin (FRR): keep labeling the same set of points in some order • Generalized Round Robin (GRR): repeatedly label data points, giving the next label to the one with the fewest labels so far (see the sketch below)
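A minimal sketch of GRR, assuming a hypothetical get_label(example_id) callback that stands in for asking one more noisy labeler:

```python
import heapq

def generalized_round_robin(example_ids, get_label, label_budget):
    """GRR: repeatedly give the next label to the example that currently
    has the fewest labels, until the labeling budget is spent.
    Returns {example_id: [noisy labels collected so far]}."""
    labels = {eid: [] for eid in example_ids}
    heap = [(0, eid) for eid in example_ids]  # (label count, example id)
    heapq.heapify(heap)
    for _ in range(label_budget):
        count, eid = heapq.heappop(heap)
        labels[eid].append(get_label(eid))    # hypothetical labeler call
        heapq.heappush(heap, (count + 1, eid))
    return labels
```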

  15. Fixed Round Robin vs. Single Labeling • [Figure: learning curves for FRR (100 examples) vs. single labeling (SL); labeling quality p = 0.6, #examples = 100] • With high noise or many examples, repeated labeling is better than single labeling

  16. Tradeoffs for Modeling • Get more labels → improve label quality → improve classification • Get more examples → improve classification • [Figure: learning curves for labeling quality P = 1.0, 0.8, 0.6, 0.5]

  17. Fixed Round Robin vs. Single Labeling • [Figure: learning curves for single labeling vs. FRR (50 examples); labeling quality p = 0.8, #examples = 50] • With low noise and few examples, more (single-labeled) examples is better

  18. Tradeoffs for Modeling • Get more labels → improve label quality → improve classification • Get more examples → improve classification • [Figure: learning curves for labeling quality P = 1.0, 0.8, 0.6, 0.5]

  20. Generalized Round Robin vs. Single Labeling • CU = 0 (i.e., ρ = 0), k = 10 (ρ: cost ratio; k: #labels per example) • Use up all examples • [Figure: learning curves] • Repeated labeling is better than single labeling for this setting

  21. Generalized Round Robin vs. Single Labeling • ρ = CU/CL = 3, k = 5 (ρ: cost ratio; k: #labels per example) • [Figure: learning curves] • Repeated labeling is better than single labeling for this setting

  22. Generalized Round Robin vs. Single Labeling • ρ = CU/CL = 10, k = 12 (ρ: cost ratio; k: #labels per example) • [Figure: learning curves] • Repeated labeling is better than single labeling for this setting

  23. Selective Repeated-Labeling • We have seen so far: with enough examples and noisy labels, getting multiple labels is better than single labeling; when we consider costly preprocessing, the benefit is magnified • Can we do better than the basic strategies? • Key observation: we have additional information to guide selection of data for repeated labeling, namely the current multiset of labels • Example: {+,-,+,-,-,+} vs. {+,+,+,+,+,+}

  24. Natural Candidate: Entropy • Entropy is a natural measure of label uncertainty: • E({+,+,+,+,+,+}) = 0 • E({+,-,+,-,-,+}) = 1 • Strategy: get more labels for high-entropy label multisets (see the sketch below)
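A minimal sketch of the entropy score for a binary label multiset (the function name is mine); it reproduces the two values above and, as the next slides point out, cannot tell a small multiset from a large one with the same +/- ratio:

```python
from math import log2

def label_entropy(pos: int, neg: int) -> float:
    """Entropy (bits) of the empirical label distribution of a multiset
    with pos positive and neg negative labels."""
    total = pos + neg
    return -sum((c / total) * log2(c / total) for c in (pos, neg) if c > 0)

# label_entropy(6, 0) == 0.0                       # {+,+,+,+,+,+}
# label_entropy(3, 3) == 1.0                       # {+,-,+,-,-,+}
# label_entropy(3, 2) ~= label_entropy(600, 400)   # ~0.97: scale invariant
```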

  25. What Not to Do: Use Entropy • Improves at first, but hurts in the long run

  26. Why not Entropy • In the presence of noise, entropy will be high even with many labels • Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-)

  27. Estimating Label Uncertainty (LU) • Observe the +'s and -'s and compute Pr{+|obs} and Pr{-|obs} • Label uncertainty SLU = tail of the beta distribution below the decision threshold • [Figure: beta probability density function over [0.0, 1.0], with the tail below 0.5 shaded as SLU]

  28. Label Uncertainty • p = 0.7, 5 labels (3+, 2-) • Entropy ≈ 0.97 • CDFb = 0.34

  29. Label Uncertainty • p = 0.7, 10 labels (7+, 3-) • Entropy ≈ 0.88 • CDFb = 0.11

  30. Label Uncertainty • p = 0.7, 20 labels (14+, 6-) • Entropy ≈ 0.88 • CDFb = 0.04
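A minimal sketch that reproduces the CDFb values on slides 28-30, assuming the label uncertainty score SLU is the Beta-posterior mass on the wrong side of the 0.5 threshold, with a Beta(pos + 1, neg + 1) posterior (i.e., a uniform prior); that parameterization is my reading of the quoted numbers:

```python
from scipy.stats import beta

def label_uncertainty(pos: int, neg: int) -> float:
    """S_LU: posterior probability that the true positive rate lies on the
    opposite side of 0.5 from the current majority, using a Beta(pos+1, neg+1)
    posterior over the positive rate (uniform prior assumed)."""
    tail = beta(pos + 1, neg + 1).cdf(0.5)   # posterior mass below 0.5
    return min(tail, 1.0 - tail)             # mass on the minority side

# label_uncertainty(3, 2)  ~= 0.34   (slide 28)
# label_uncertainty(7, 3)  ~= 0.11   (slide 29)
# label_uncertainty(14, 6) ~= 0.04   (slide 30)
```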

  31. Label Uncertainty vs. Round Robin • [Figure: learning curves for LU vs. round robin] • Similar results across a dozen data sets

  32. Generalized Round Robin vs. Single Labeling • ρ = CU/CL = 10, k = 12 (ρ: cost ratio; k: #labels per example) • Repeated labeling is better than single labeling here

  33. Label Uncertainty vs. Round Robin • [Figure: learning curves for LU vs. round robin] • Similar results across a dozen data sets

  34. More sophisticated label uncertainty?

  35. Estimating Label Uncertainty (LU) • Observe the +'s and -'s and compute Pr{+|obs} and Pr{-|obs} • Label uncertainty SLU = tail of the beta distribution below the decision threshold • [Figure: beta probability density function over [0.0, 1.0], with the tail below 0.5 shaded as SLU]

  36. More sophisticated label uncertainty? (using estimated instance-specific label quality)

  37. More sophisticated LU improves labeling quality under class imbalance and fixes some pesky LU learning-curve glitches • Both techniques perform essentially optimally with balanced classes

  38. Another strategy: Model Uncertainty (MU) • Learning models of the data provides an alternative source of information about label certainty (a random forest for the results to come) • Model uncertainty: get more labels for instances that cause model uncertainty (see the sketch below) • Intuition? • For modeling: why improve training data quality if the model already is certain there? • For data quality: low-certainty "regions" may be due to incorrect labeling of the corresponding instances • A self-healing process • [Figure: examples in feature space labeled +/-, with the models learned from them]
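A minimal sketch of one way to compute MU scores, assuming scikit-learn's random forest (the slides mention a random forest, but the exact scoring here is my stand-in, not necessarily what the original experiments used):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def model_uncertainty_scores(X_train, y_train, X_pool, n_trees=50, seed=0):
    """S_MU sketch: fit a model on the current integrated labels, then score
    each example by how close its predicted positive-class probability is
    to 0.5 (1.0 = maximally uncertain, 0.0 = the model is certain)."""
    model = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    model.fit(X_train, y_train)
    p_pos = model.predict_proba(X_pool)[:, 1]
    return 1.0 - 2.0 * np.abs(p_pos - 0.5)

# Strategy: score the currently labeled examples (X_pool can be X_train itself)
# and request additional labels for those with the highest scores.
```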

  39. Yet another strategy: Label & Model Uncertainty (LMU) • Combine label and model uncertainty (LMU): avoid examples where either strategy is certain (a combination sketch follows below)
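A minimal sketch of the combination, assuming a geometric mean of the two scores so that an example is only selected when neither source is confident; treat the exact formula as my assumption rather than a quote from the talk:

```python
import numpy as np

def lmu_scores(s_lu: np.ndarray, s_mu: np.ndarray) -> np.ndarray:
    """S_LMU sketch: geometric mean of label uncertainty and model uncertainty.
    The score is near zero as soon as either component is near zero, which
    implements "avoid examples where either strategy is certain"."""
    return np.sqrt(s_lu * s_mu)

# Examples with the highest combined score receive the next round of labels.
```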

  40. Quality • [Figure: labeling quality curves for Label + Model Uncertainty, Model Uncertainty, Label Uncertainty, and uniform round robin] • Model Uncertainty alone also improves quality

  41. Comparison: Model Quality (I) • Across 12 domains, LMU is always better than GRR • LMU is statistically significantly better than LU and MU • [Figure: model quality learning curves, with the Label & Model Uncertainty curve highlighted]

  42. Comparison: Model Quality (II) • Across 12 domains, LMU is always better than GRR • LMU is statistically significantly better than LU and MU

  43. Why does Model Uncertainty (MU) work? • [Figure: MU score distributions for correctly labeled (blue) and incorrectly labeled (purple) cases]

  44. Why does Model Uncertainty (MU) work? • Self-healing MU vs. "active learning" MU • [Figure: examples in feature space and the learned models, illustrating the self-healing process]

  45. Adult content classification

  46. Summary of results • Micro-task outsourcing (e.g., MTurk, Rent-a-Coder, the ESP game) changes the landscape for data formulation • Repeated labeling improves data quality and model quality (but not always) • With noisy labels, repeated labeling can be preferable to single labeling even when labels aren't particularly cheap • When labels are relatively cheap, repeated labeling can do much better • Round-robin repeated labeling works well • Selective repeated labeling improves substantially • The best performance comes from combining model-based and label-set-based indications of uncertainty
