
Ngram Models


Presentation Transcript


  1. Ngram Models Bahareh Sarrafzadeh Winter 2010

  2. Agenda • Ngrams • Language Modeling • Evaluation of LMs • Markov Models • Stochastic Process • Markov Chain • Text Classification • Ngram-based Approach

  3. NGram

  4. What is an N-Gram? • A subsequence of n items from a given sequence • Items: • Phonemes • Syllables • Letters • Words • Number of Items: • Unigram, Bigram, Trigram, ...

  5. N-Gram - Examples • 3-Grams • ceramics collectables collectibles (55) • ceramics collectables fine (130) • ceramics collected by (52) • ceramics collectible pottery (50) • ceramics collectibles cooking (45) • 4-Grams • serve as the incoming (92) • serve as the incubator (99) • serve as the independent (794) • serve as the index (223) • serve as the indication (72) • serve as the indicator (120)

  6. N-Gram Model • A Probabilistic Model for Predicting the next Item in such a sequence. • Why do we want to Predict Words? • Chatbots • Speech recognition • Handwriting recognition/OCR • Spelling correction • Author attribution • Plagiarism detection • ...

  7. N-Gram Model • Models Sequences, esp. NL, using the Statistical Properties of N-Grams • Idea: Shannon • given a sequence of letters (e.g. "for ex"), what is the likelihood of the next letter? • From training data, derive a probability distribution for the next letter given a history of size n.
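
A minimal sketch of Shannon's idea, assuming a character-level model with a fixed history length n (the function name and sample text are illustrative, not from the slides):

```python
from collections import Counter, defaultdict

def next_char_distribution(text, n=3):
    """Estimate P(next letter | previous n letters) from raw counts."""
    counts = defaultdict(Counter)
    for i in range(len(text) - n):
        history, nxt = text[i:i + n], text[i + n]
        counts[history][nxt] += 1
    # Normalize each history's counts into a probability distribution.
    return {h: {ch: k / sum(ctr.values()) for ch, k in ctr.items()}
            for h, ctr in counts.items()}

dist = next_char_distribution("for example, for expert use, for extra effect", n=3)
print(dist.get("for"))  # distribution over the letter following "for"
```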

  8. N-Gram Model • Predicts xi based on the previous n – 1 items xi–(n–1), ..., xi–1 (see below) • N-Gram Independence Assumption: • a word is affected only by its “prior local context” (the last few words) • Advantages: • Massively simplifies the problem of learning the language model • Because of the open nature of language, it is common to group words unknown to the language model together
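
Written out, the independence assumption behind an n-gram model is, in standard notation (the slide's own equation is not in the transcript, so this is a reconstruction):

```latex
P(x_i \mid x_1, \ldots, x_{i-1}) \approx P(x_i \mid x_{i-(n-1)}, \ldots, x_{i-1})
```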

  9. Language Models • A statistical language model assigns a probability to a sequence of m words by means of a probability distribution • Applications in NLP: • speech recognition, • machine translation, • part-of-speech tagging, • parsing, • information retrieval.

  10. The goal of Statistical Language Modeling is to build a statistical language model that can estimate the distribution of natural language as accurately as possible.

  11. – 14. A bad language model (four example slides; the figures are not included in the transcript)

  15. What happened? • A language model is a probability distribution over word sequences • P(“And nothing but the truth”) ≈ 0.001 • P(“And nuts sing on the roof”) ≈ 0

  16. How do language models work? • It is hard to compute P(“And nothing but the truth”) directly • Step 1: Decompose the probability with the chain rule (see below)
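
The decomposition meant here is the chain rule of probability (standard notation; the slide's equation image is not in the transcript):

```latex
P(w_1, w_2, \ldots, w_m) = \prod_{i=1}^{m} P(w_i \mid w_1, \ldots, w_{i-1})
```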

  17. Language Models - Simplification • Estimating the probability of whole word sequences directly from a corpus is difficult: • Arbitrarily long phrases or sentences • Data sparseness • Overfitting • Solution: models are often approximated using smoothed N-gram models.

  18. Ngram Modeling of a Language • In an n-gram model, the probability of observing the sentence w1, ..., wm is approximated as P(w1, ..., wm) ≈ ∏i P(wi | wi–(n–1), ..., wi–1) • The conditional probability can be calculated from n-gram frequency counts: P(wi | wi–(n–1), ..., wi–1) = count(wi–(n–1), ..., wi) / count(wi–(n–1), ..., wi–1) • The conditioning words wi–(n–1), ..., wi–1 are the history; wi is the prediction

  19. Example • Assume each word depends only on the previous two words (Trigram Assumption): P(w1, ..., wm) ≈ ∏i P(wi | wi–2, wi–1) • A count-based sketch of this estimate follows below
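
A minimal count-based sketch of the trigram estimate, assuming a pre-tokenized corpus of sentences (function and variable names are illustrative):

```python
from collections import Counter

def trigram_prob(corpus, w1, w2, w3):
    """Maximum-likelihood estimate of P(w3 | w1, w2) from counts."""
    trigrams, bigram_histories = Counter(), Counter()
    for sentence in corpus:
        for a, b, c in zip(sentence, sentence[1:], sentence[2:]):
            trigrams[(a, b, c)] += 1
            bigram_histories[(a, b)] += 1
    if bigram_histories[(w1, w2)] == 0:
        return 0.0
    return trigrams[(w1, w2, w3)] / bigram_histories[(w1, w2)]

corpus = [["and", "nothing", "but", "the", "truth"]]
print(trigram_prob(corpus, "nothing", "but", "the"))  # 1.0 in this tiny corpus
```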

  20. Smoothing • It is useful to assign small, non-zero probabilities to unseen n-grams. • For example, for 3-grams, if we add 2 “dummy” words (such as ‘.’) to the beginning of each sentence, we have P(w1, ..., wm) ≈ ∏i=1..m P(wi | wi–2, wi–1) with w–1 = w0 = ‘.’ • One simple smoothing scheme is sketched below
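
One simple way to give unseen trigrams a small probability is add-one (Laplace) smoothing over the counts; the slides do not name a particular scheme, so this choice is an assumption. Sentences are padded with two dummy start words as described above:

```python
from collections import Counter

def pad(sentence, start="."):
    """Prepend two dummy start words, as in the slide's 3-gram example."""
    return [start, start] + sentence

def smoothed_trigram_prob(trigrams, bigrams, vocab_size, w1, w2, w3):
    """Add-one (Laplace) smoothed estimate of P(w3 | w1, w2)."""
    return (trigrams[(w1, w2, w3)] + 1) / (bigrams[(w1, w2)] + vocab_size)

# Tiny usage example with hand-built counts.
trigrams = Counter({(".", ".", "and"): 1, (".", "and", "nothing"): 1})
bigrams = Counter({(".", "."): 1, (".", "and"): 1})
print(smoothed_trigram_prob(trigrams, bigrams, 10000, ".", ".", "hello"))  # small but non-zero
```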

  21. Graphical Representation • [Figure: 1-gram, 2-gram, ..., previous (n–1)-gram, n-gram contexts]

  22. Use of Log Probabilities • Multiplying a large number of probabilities gives a very small result (close to zero) • To avoid floating-point underflow, we use logarithms of the probabilities in the model and add them instead of multiplying (see the example below)
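
A small illustration of the underflow problem and the log-space fix (the numbers are made up):

```python
import math

probs = [0.001] * 400           # 400 word probabilities of 1/1000 each

product = 1.0
for p in probs:
    product *= p
print(product)                  # 0.0 -- the product underflows to zero

log_prob = sum(math.log(p) for p in probs)
print(log_prob)                 # about -2763.1, easily representable
```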

  23. Evaluation • Extrinsic • The language model is embedded in a wider application: • Slow • Specific to the application • Intrinsic • The language model is evaluated directly using some measure, such as Perplexity

  24. Perplexity • Perplexity measures the effective size of the set of words from which the next word is chosen, given the history of words observed so far. • The perplexity of an LM depends on the domain of discourse. • The standard definition is given below
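
The standard definition, for a test sequence of N words (the slide itself gives no formula in the transcript):

```latex
\mathrm{PP}(w_1, \ldots, w_N)
  = P(w_1, \ldots, w_N)^{-1/N}
  = 2^{-\frac{1}{N} \sum_{i=1}^{N} \log_2 P(w_i \mid w_1, \ldots, w_{i-1})}
```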

  25. Perplexity: Intuition • Ask a speech recognizer to recognize the digits “0, 1, 2, 3, 4, 5, 6, 7, 8, 9” – easy – perplexity 10 • Ask a speech recognizer to recognize one of 30,000 names at Microsoft – hard – perplexity 30,000 • Perplexity is a weighted equivalent branching factor.

  26. Perplexity: Is lower better? • Remarkable fact: the true model for the data has the lowest possible perplexity • The lower the perplexity, the closer we are to the true model.

  27. Markov Model

  28. Markov Property – Markov Process • “The future is independent of the past given the present.” • A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state. • A process with this property is called a Markov process.

  29. Markov Chain • We have a set of states, S = {s1, s2, ... , sr}. • The process starts in one of these states and moves successively from one state to another. • Each move is called a step. • If the chain is currently in state si, then it moves to state sj at the next step with a probability denoted by pij. • This probability does not depend upon which states the chain was in before the current state (a small simulation is sketched below)
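
A minimal simulation sketch, using a hypothetical two-state weather chain with the transition probabilities pij stored as a matrix:

```python
import random

states = ["sunny", "rainy"]
# p[i][j] = probability of moving from states[i] to states[j]; each row sums to 1.
p = [[0.8, 0.2],
     [0.4, 0.6]]

def step(current):
    """Choose the next state using only the current one (Markov property)."""
    return random.choices(range(len(states)), weights=p[current])[0]

s = 0  # start in "sunny"
for _ in range(5):
    s = step(s)
    print(states[s])
```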

  30. Order m – Markov Chain • A Markov chain of order m (or a Markov chain with memory m), where m is finite, is a process in which the next state depends on the past m states.

  31. Text Generation using Markov Chains • Markov processes can also be used to generate superficially "real-looking" text given a sample document • These processes are also used by spammers to inject real-looking hidden paragraphs into emails to get these messages past spam filters.
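
A small sketch of this kind of generation with a word-level (first-order) Markov chain built from a sample document; the training text below is just the Shannon sample quoted on a later slide:

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=15):
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:          # dead end: no observed successor
            break
        out.append(random.choice(followers))
    return " ".join(out)

sample = "the head and in frontal attack on an english writer that the character of this point".split()
print(generate(build_chain(sample), start="the"))
```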

  32. REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE. • Shannon considers a series of Markov chain approximations to English prose. • For example, he presents first a simulation where the words are chosen independently but with appropriate frequencies.

  33. THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED. • He then notes the increased similarity to ordinary English text when the words are chosen as a Markov chain, in which case he obtains the sample shown above.

  34. Garkov!

  35. Text Classification using NGram

  36. Text Classification • A fundamental kind of document processing • A content-based assignment of one or more predefined categories to free texts. • Approaches: • Supervised • Unsupervised • Semi-supervised

  37. Main Tasks • Feature Construction / Selection • Extracting Representative Features • Words – Frequency • Context of Words – Set of Words • Sparse Phrases – Neighbouring Words • Word N-grams – Frequency • Learning Phase • Binary Classifiers • M-ary Classifiers

  38. Learning Algorithms • Decision Trees • Naive Bayes • KNN • Neural Networks • Support Vector Machines

  39. Ngram based Text Classification • Features: • N-grams • Values: • N-gram frequencies • Similarity measure • Of various types

  40. Classifier’s Characteristics • The categorization must work reliably in spite of textual errors. • The categorization must be efficient, consuming as little storage and processing time as possible. • The categorization must be able to recognize when a given document does not match any category, or when it falls between two categories.

  41. Overall Approach • Start with a set of pre-existing text categories (such as subject domains) • Generate a set of N-gram frequency profiles to represent each of the categories. • When a new document arrives for classification, the system first computes its N-gram frequency profile. • It then compares this profile against the profiles for each of the categories using an easily calculated distance measure. • The system classifies the document as belonging to the category having the smallest distance.
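
A sketch of the comparison step, assuming each profile is a list of N-grams sorted by descending frequency and using a simple rank-based ("out-of-place") distance; the slides only say "an easily calculated distance measure", so this particular measure is an assumption:

```python
def profile_distance(doc_profile, cat_profile):
    """Sum of rank differences between document and category profiles."""
    max_penalty = len(cat_profile)          # penalty for N-grams missing from the category
    cat_rank = {gram: r for r, gram in enumerate(cat_profile)}
    return sum(abs(r - cat_rank.get(gram, max_penalty))
               for r, gram in enumerate(doc_profile))

def classify(doc_profile, category_profiles):
    """Pick the category whose profile has the smallest distance to the document's."""
    return min(category_profiles,
               key=lambda c: profile_distance(doc_profile, category_profiles[c]))

categories = {"english": ["th", "he", "in", "er"], "other": ["xx", "yy", "zz"]}
print(classify(["th", "in", "he"], categories))   # -> "english"
```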

  42. N-gram Frequency Statistics • Each word occurs in human languages with a different frequency. • One of the most common ways of expressing this idea: Zipf’s Law

  43. Zipf’s Law • The nth most common word in a human language text occurs with a frequency inversely proportional to n: • there is always a set of words which dominates most of the other words of the language in terms of frequency of use.
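
A quick empirical check of the law, assuming some plain-text corpus is available: under a Zipfian distribution, rank × frequency should stay roughly constant across the top-ranked words.

```python
from collections import Counter
import re

def rank_frequency(text, top=10):
    """Print the top words with their rank, frequency, and rank * frequency."""
    words = re.findall(r"[a-z']+", text.lower())
    for rank, (word, freq) in enumerate(Counter(words).most_common(top), start=1):
        print(f"{rank:>2}  {word:<12} freq={freq:<8} rank*freq={rank * freq}")

# Usage (the file path is hypothetical):
# rank_frequency(open("brown_corpus_sample.txt").read())
```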

  44. Zipf’s Law • The most frequent word will occur approximately twice as often as the second most frequent word, which occurs twice as often as the fourth most frequent word ... • This is true for: • Languages, • Subject – specific words

  45. Zipf’s Law: Example • For example, in the Brown Corpus "the" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences, • The second-place word "of" accounts for slightly over 3.5% of words, • Followed by "and" (about 2%) • Only 135 vocabulary items are needed to account for half the Brown Corpus.

  46. Zipf’s Law Applies to Lots of Things • frequency of accesses to web pages • sizes of settlements • income distribution amongst individuals • size of earthquakes • words in the English language

  47. Word frequency in Wikipedia [figure]

  48. Zipf’s Law: Classification • Zipf’s Law implies that classifying documents with N-gram frequency statistics will not be very sensitive to cutting off the distributions at a particular rank. • It also implies that if we are comparing documents from the same category they should have similar N-gram frequency distributions.

  49. Document Representation • Documents are represented by their N-gram frequency profiles: • the list of N-grams ordered by the number of occurrences in the given document. • The profile simply describes the Zipfian distribution of N-grams in the document.

  50. Generating N-Gram Frequency Profiles • Split the text into separate tokens • Scan down each token, generating all possible N-grams • Hash into a table to find the counter for the N-gram, and increment it • When done, output all N-grams and their counts • Sort the N-grams into descending order by number of occurrences (a minimal sketch follows below)
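
A minimal sketch of those steps, using character N-grams padded with spaces and a plain dictionary in place of the hash table (the parameter choices, such as N = 1..3 and keeping the top 300 N-grams, are illustrative):

```python
from collections import Counter

def ngram_profile(text, n_values=(1, 2, 3), top=300):
    """Return the document's N-grams sorted by descending frequency."""
    counts = Counter()
    for token in text.lower().split():          # 1. split the text into tokens
        padded = f" {token} "                   # pad so word boundaries become N-grams too
        for n in n_values:                      # 2. generate all possible N-grams
            for i in range(len(padded) - n + 1):
                counts[padded[i:i + n]] += 1    # 3. find the N-gram's counter and increment it
    # 4.-5. output the N-grams sorted in descending order of count
    return [gram for gram, _ in counts.most_common(top)]

print(ngram_profile("the quick brown fox jumps over the lazy dog")[:10])
```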
