
Text Correction using Domain Dependent Bigram Models from Web Crawls

Presentation Transcript


  1. Text Correction using Domain Dependent Bigram Models from Web Crawls
  Christoph Ringlstetter, Max Hadersbeck, Klaus U. Schulz, and Stoyan Mihov

  2. Two recent goals of text correction

  3. Two recent goals of text correction
  • Use of powerful language models: word frequencies, n-gram models, HMMs, probabilistic grammars, etc. (Keenan et al. 91, Srihari 93, Hong & Hull 95, Golding & Schabes 96, ...)

  4. Two recent goals of text correction
  • Use of powerful language models: word frequencies, n-gram models, HMMs, probabilistic grammars, etc. (Keenan et al. 91, Srihari 93, Hong & Hull 95, Golding & Schabes 96, ...)
  • Document centric and adaptive text correction: prefer words of the text as correction suggestions for unknown tokens (Taghva & Stofsky 2001, Nartker et al. 2003, Rong Jin 2003, ...)

  5. Two recent goals of text correction
  • Use of powerful language models: word frequencies, n-gram models, HMMs, probabilistic grammars, etc. (Keenan et al. 91, Srihari 93, Hong & Hull 95, Golding & Schabes 96, ...)
  • Document centric and adaptive text correction: prefer words of the text as correction suggestions for unknown tokens (Taghva & Stofsky 2001, Nartker et al. 2003, Rong Jin 2003, ...)

  6. Two recent goals of text correction
  • Use of powerful language models: word frequencies, n-gram models, HMMs, probabilistic grammars, etc. (Keenan et al. 91, Srihari 93, Hong & Hull 95, Golding & Schabes 96, ...)
  • Document centric and adaptive text correction: prefer words of the text as correction suggestions for unknown tokens (Taghva & Stofsky 2001, Nartker et al. 2003, Rong Jin 2003, ...)
  Here: use of document centric language models (bigrams).

  7. Use of document centric bigram models: Idea

  8. Use of document centric bigram models: Idea
  Text T:  .............  W(k-1)  W(k)  W(k+1)  .............

  9. Use of document centric bigram models: Idea
  Text T:  .............  W(k-1)  W(k)  W(k+1)  .............   (W(k) is ill-formed)

  10. Use of document centric bigram models: Idea
  Text T:  .............  W(k-1)  W(k)  W(k+1)  .............   (W(k) is ill-formed)
  Correction candidates for W(k):  V1, V2, ..., Vn

  11. Use of document centric bigram models: Idea
  Text T:  .............  W(k-1)  W(k)  W(k+1)  .............   (W(k) is ill-formed)
  Correction candidates for W(k):  V1, V2, ..., Vn
  Prefer those correction candidates V where the bigrams W(k-1) V and V W(k+1) "are natural, given the text T".

  12. Use of document centric bigram models: Idea
  Text T:  .............  W(k-1)  ___  W(k+1)  .............
  Correction candidates:  V1, V2, ..., Vn  (Vi selected for the ill-formed W(k))
  Prefer those correction candidates V where the bigrams W(k-1) V and V W(k+1) "are natural, given the text T".

  13. Use of document centric bigram models: Idea
  Text T:  .............  W(k-1)  Vi  W(k+1)  .............   (candidate Vi in place of the ill-formed W(k))
  Correction candidates:  V1, V2, ..., Vn
  Prefer those correction candidates V where the bigrams W(k-1) V and V W(k+1) "are natural, given the text T".

  14. Use of document centric bigram models: Idea
  Text T:  .............  W(k-1)  Vi  W(k+1)  .............   (candidate Vi in place of the ill-formed W(k))
  Correction candidates:  V1, V2, ..., Vn
  Prefer those correction candidates V where the bigrams W(k-1) V and V W(k+1) "are natural, given the text T".
  Problem: How to measure "naturalness of a bigram, given a text"?
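
  A minimal sketch of this ranking idea (assuming a precomputed bigram score table s as a plain Python dict; all names and the toy example are illustrative, not taken from the paper):

```python
# Sketch: rank correction candidates for an ill-formed token W(k) by how
# "natural" the bigrams (W(k-1), V) and (V, W(k+1)) are, i.e. by their
# frequency in the domain-dependent corpus C. The score table `s` maps a
# word pair to its corpus frequency and is assumed to exist already.
from typing import Dict, List, Tuple

BigramScores = Dict[Tuple[str, str], int]

def rank_candidates(left: str, right: str,
                    candidates: List[str],
                    s: BigramScores) -> List[Tuple[str, int]]:
    """Sort candidates by the summed frequency of their left and right bigrams."""
    scored = [(v, s.get((left, v), 0) + s.get((v, right), 0)) for v in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Toy example: for the OCR error in "... of tbe patient ...", the neighbours
# "of" and "patient" favour the candidate "the" over "tee" or "toe".
s_example = {("of", "the"): 120, ("the", "patient"): 45}
print(rank_candidates("of", "patient", ["the", "tee", "toe"], s_example))
```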

  15. How to derive "natural" bigram models for a text?

  16. How to derive "natural" bigram models for a text?
  • Counting bigram frequencies in text T?

  17. How to derive "natural" bigram models for a text?
  • Counting bigram frequencies in text T?
    Sparseness of bigrams: low chance to find bigrams repeated in T.

  18. How to derive "natural" bigram models for a text?
  • Counting bigram frequencies in text T?
    Sparseness of bigrams: low chance to find bigrams repeated in T.
  • Using a fixed background corpus (British National Corpus, Brown Corpus)?

  19. How to derive "natural" bigram models for a text?
  • Counting bigram frequencies in text T?
    Sparseness of bigrams: low chance to find bigrams repeated in T.
  • Using a fixed background corpus (British National Corpus, Brown Corpus)?
    Sparseness problem partially solved - but models not document centric!

  20. How to derive "natural" bigram models for a text?
  • Counting bigram frequencies in text T?
    Sparseness of bigrams: low chance to find bigrams repeated in T.
  • Using a fixed background corpus (British National Corpus, Brown Corpus)?
    Sparseness problem partially solved - but models not document centric!
  Our suggestion: Using domain dependent terms from T, crawl a corpus C on the web that reflects domain and vocabulary of T. Count bigram frequencies in C.
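
  A minimal sketch of the counting step, assuming the crawled pages of C have already been saved as plain-text files in one directory (the file layout and the simple tokenization are assumptions for illustration, not the paper's implementation):

```python
# Sketch: build the bigram frequency table s(U, V) from the crawled corpus C.
import re
from collections import Counter
from pathlib import Path

def count_bigrams(corpus_dir: str) -> Counter:
    """Count word-bigram frequencies over all .txt files of the crawled corpus."""
    bigrams = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        tokens = re.findall(r"[A-Za-z]+", text)
        # every adjacent token pair (U, V) adds one count to s(U, V)
        bigrams.update(zip(tokens, tokens[1:]))
    return bigrams

# s = count_bigrams("crawled_corpus/")   # s[("heart", "failure")] -> frequency of that bigram in C
```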

  21. Correction Experiments
  Text T

  22. Correction Experiments
  Text T
  1. Extract domain specific terms (compounds).

  23. Correction Experiments
  Text T
  1. Extract domain specific terms (compounds).
  2. Crawl a corpus C that reflects domain and vocabulary of T.

  24. Correction Experiments
  Text T
  1. Extract domain specific terms (compounds).
  2. Crawl a corpus C that reflects domain and vocabulary of T.
  Dictionary D

  25. Correction Experiments
  Text T
  1. Extract domain specific terms (compounds).
  2. Crawl a corpus C that reflects domain and vocabulary of T.
  Dictionary D
  3. For each pair of dictionary words UV, store the frequency of UV in C as a score s(U,V).

  26. Correction Experiments
  Text T
  1. Extract domain specific terms (compounds).
  2. Crawl a corpus C that reflects domain and vocabulary of T.
  Dictionary D
  3. For each pair of dictionary words UV, store the frequency of UV in C as a score s(U,V).
  First experiment ("in isolation"): What is the correction accuracy reached when using s(U,V) as the only information for ranking correction suggestions?

  27. Correction Experiments
  Text T
  1. Extract domain specific terms (compounds).
  2. Crawl a corpus C that reflects domain and vocabulary of T.
  Dictionary D
  3. For each pair of dictionary words UV, store the frequency of UV in C as a score s(U,V).
  First experiment ("in isolation"): What is the correction accuracy reached when using s(U,V) as the only information for ranking correction suggestions?
  Second experiment ("in combination"): What gain is obtained when adding s(U,V) as a new parameter to a sophisticated correction system using other scores as well?
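
  One plausible way to realize step 1, extracting domain-specific terms of T to seed the crawl, is to keep the words that are strongly over-represented in T relative to a general frequency list. This ratio-based selection is only an illustrative assumption; the paper's own extraction of terms and compounds may differ:

```python
# Sketch of step 1: pick crawl seed terms that are over-represented in T
# compared with a general background word list (relative frequencies).
import re
from collections import Counter

def seed_terms(text: str, background_relfreq: dict, n_terms: int = 20) -> list:
    """Return the n_terms words of T most over-represented w.r.t. the background list."""
    tokens = re.findall(r"[A-Za-z]+", text.lower())
    tf = Counter(tokens)
    total = sum(tf.values())
    def overrepresentation(w: str) -> float:
        # relative frequency in T divided by (smoothed) general relative frequency
        return (tf[w] / total) / (background_relfreq.get(w, 0.0) + 1e-9)
    return sorted((w for w in tf if len(w) > 3), key=overrepresentation, reverse=True)[:n_terms]

# The top-ranked terms (and compounds built from them) would then serve as
# search queries for crawling the domain-dependent corpus C.
```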

  28. Experiment 1: bigram scores "in isolation"
  • Set of ill-formed output tokens of a commercial OCR system.
  • Candidate sets for ill-formed tokens: dictionary entries with edit distance < 3.
  • Using s(U,V) as the only information for ranking correction suggestions.
  • Measured the percentage of correctly top-ranked correction suggestions.
  • Comparing bigram scores from web crawls, from the BNC, and from the Brown Corpus.
  Texts from 6 domains.

  29. Experiment 1: bigram scores "in isolation"
  • Set of ill-formed output tokens of a commercial OCR system.
  • Candidate sets for ill-formed tokens: dictionary entries with edit distance < 3.
  • Using s(U,V) as the only information for ranking correction suggestions.
  • Measured the percentage of correctly top-ranked correction suggestions.
  • Comparing bigram scores from web crawls, from the BNC, and from the Brown Corpus.
  Texts from 6 domains.
  Summary: crawled bigram frequencies are clearly better than those from static corpora.
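
  A minimal sketch of this Experiment 1 setup, reusing rank_candidates from the earlier sketch (the dynamic-programming edit distance and the toy dictionary are illustrative; the paper's dictionary and distance implementation are not reproduced here):

```python
# Sketch: candidate sets are dictionary entries within edit distance 2
# (i.e. < 3) of the ill-formed OCR token; they are then ranked by s(U,V) alone.
def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming edit distance (insert, delete, substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute / match
        prev = cur
    return prev[-1]

def candidate_set(token: str, dictionary, max_dist: int = 2) -> list:
    """All dictionary entries within edit distance max_dist of the token."""
    return [w for w in dictionary if levenshtein(token, w) <= max_dist]

# cands = candidate_set("tbe", {"the", "tee", "toe", "tube"})
# best  = rank_candidates("of", "patient", cands, s)[0]   # ranked by the bigram score only
```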

  30. Experiment 2: adding bigram scores to a fully-fledged correction system
  • Baseline: correction with a length-sensitive Levenshtein distance and crawled word frequencies as two scores.
  • Then adding bigram frequencies as a third score.
  • Measuring the correction accuracy (percentage of correct tokens) reached with fully automated correction (optimized parameters).
  • Corrected output of commercial OCR 1 and open source OCR 2.
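
  The slides do not spell out how the two baseline scores are combined or which weights the optimization yields, so the weighted sum and log-scaling below are purely illustrative assumptions; the sketch only shows how a third, bigram-based score slots in next to the distance and word-frequency scores (levenshtein as in the sketch above):

```python
import math

def combined_score(candidate: str, token: str, left: str, right: str,
                   word_freq: dict, bigram_freq: dict,
                   w_dist: float = 1.0, w_freq: float = 1.0, w_bigram: float = 1.0) -> float:
    """Illustrative weighted combination of the three scores; not the paper's exact formula."""
    dist = levenshtein(token, candidate) / max(len(token), 1)      # length-sensitive distance
    freq = math.log1p(word_freq.get(candidate, 0))                 # crawled word frequency
    bi = math.log1p(bigram_freq.get((left, candidate), 0)
                    + bigram_freq.get((candidate, right), 0))      # crawled bigram frequency
    return -w_dist * dist + w_freq * freq + w_bigram * bi

# With w_bigram = 0 this reduces to the two-score baseline; in the experiment the
# weights are tuned ("optimized parameters") before fully automated correction.
```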

  31. Experiment 2: adding bigram scores to a fully-fledged correction system

  32. Experiment 2: adding bigram scores to a fully-fledged correction system
  Output highly accurate.

  33. Experiment 2: adding bigram scores to a fully-fledged correction system
  Baseline correction adds significant improvement.

  34. Experiment 2: adding bigram scores to a fully-fledged correction system
  Small additional gain by adding bigram score.

  35. Experiment 2: adding bigram scores to a fully-fledged correction system

  36. Experiment 2: adding bigram scores to a fully-fledged correction system
  Reduced output accuracy.

  37. Experiment 2: adding bigram scores to a fully-fledged correction system
  Baseline correction adds drastic improvement.

  38. Experiment 2: adding bigram scores to a fully-fledged correction system
  Considerable additional gain by adding bigram score.

  39. Additional experiments: comparing language models
  Experiment: Compare word frequencies in the input text with
  (1) word frequencies retrieved from "general" standard corpora, and
  (2) word frequencies retrieved from crawled domain dependent corpora.
  Result: Using the same large word list (dictionary) D, the top-k segment of D ordered by frequencies of type (2) covers many more tokens of the input text than the top-k segment ordered by frequencies of type (1).
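
  A minimal sketch of this coverage comparison (the tokenization and the choice of k are illustrative assumptions):

```python
import re

def coverage(text: str, dictionary, freq: dict, k: int) -> float:
    """Fraction of the text's tokens that fall into the top-k dictionary entries under `freq`."""
    top_k = set(sorted(dictionary, key=lambda w: freq.get(w, 0), reverse=True)[:k])
    tokens = re.findall(r"[A-Za-z]+", text.lower())
    return sum(t in top_k for t in tokens) / len(tokens) if tokens else 0.0

# coverage(text_T, D, crawled_freq, k)  vs.  coverage(text_T, D, standard_freq, k)
# The crawled, domain-dependent frequencies give a much higher coverage for the same k.
```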

  40. Additional experiments: comparing language models
  [Chart: coverage of the input text's tokens and types, crawled frequencies vs. standard frequencies]

  41. Summing up

  42. Summing up
  • Bigram scores represent a useful additional score for correction systems.

  43. Summing up
  • Bigram scores represent a useful additional score for correction systems.
  • Bigram scores obtained from text-centered domain dependent crawled corpora are more valuable than uniform bigram scores from general corpora.

  44. Summing up
  • Bigram scores represent a useful additional score for correction systems.
  • Bigram scores obtained from text-centered domain dependent crawled corpora are more valuable than uniform bigram scores from general corpora.
  • Sophisticated crawling strategies developed. Special techniques for keeping arbitrary bigram scores in main memory (see paper).

  45. Summing up
  • Bigram scores represent a useful additional score for correction systems.
  • Bigram scores obtained from text-centered domain dependent crawled corpora are more valuable than uniform bigram scores from general corpora.
  • Sophisticated crawling strategies developed. Special techniques for keeping arbitrary bigram scores in main memory (see paper).
  • The additional gain in accuracy reached with bigram scores depends on the baseline.

  46. Summing up
  • Bigram scores represent a useful additional score for correction systems.
  • Bigram scores obtained from text-centered domain dependent crawled corpora are more valuable than uniform bigram scores from general corpora.
  • Sophisticated crawling strategies developed. Special techniques for keeping arbitrary bigram scores in main memory (see paper).
  • The additional gain in accuracy reached with bigram scores depends on the baseline.
  • Language models obtained from text-centered domain dependent corpora retrieved from the web reflect the language of the input document much more closely than those obtained from general corpora.

  47. Thanks for your attention!
