
Maximum Entropy Model


Presentation Transcript


  1. Maximum Entropy Model ELN – Natural Language Processing Slides by: Fei Xia

  2. History • The concept of Maximum Entropy can be traced back along multiple threads to Biblical times • Introduced to NLP by Berger et al. (1996) • Used in many NLP tasks: MT, Tagging, Parsing, PP attachment, LM, …

  3. Outline • Modeling: Intuition, basic concepts, … • Parameter training • Feature selection • Case study

  4. Reference papers • (Ratnaparkhi, 1997) • (Ratnaparkhi, 1996) • (Berger et al., 1996) • (Klein and Manning, 2003) → Different notations.

  5. Modeling

  6. The basic idea • Goal: estimate p • Choose p with maximum entropy (or “uncertainty”) subject to the constraints (or “evidence”).

  7. Setting • From training data, collect (a, b) pairs: • a: thing to be predicted (e.g., a class in a classification problem) • b: the context • Ex: POS tagging: • a=NN • b=the words in a window and previous two tags • Learn the probability of each (a, b): p(a, b)

  8. Features in POS tagging (Ratnaparkhi, 1996) [Table from the slide: example features defined over the context (a.k.a. history) and the allowable classes]

  9. Maximum Entropy • Why maximum entropy? • Maximize entropy = Minimize commitment • Model all that is known and assume nothing about what is unknown. • Model all that is known: satisfy a set of constraints that must hold • Assume nothing about what is unknown: choose the most “uniform” distribution → choose the one with maximum entropy

  10. (Maximum) Entropy • Entropy: the uncertainty of a distribution • Quantifying uncertainty (“surprise”): • Event x • Probability px • “surprise”: log(1/px) • Entropy: expected surprise (over p): H(p) = Σx px log(1/px) [Plot from the slide: H against p(HEADS) for a coin flip]
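
To make the definition concrete, here is a minimal Python sketch (the function name entropy is illustrative, not from the slides) computing H(p) as the expected surprise Σx px log(1/px):

```python
import math

def entropy(probs):
    """Expected surprise H(p) = sum_x p_x * log2(1 / p_x), in bits."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # fair coin: 1.0 bit (maximal uncertainty)
print(entropy([0.9, 0.1]))  # biased coin: ~0.47 bits (less uncertain)
```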

  11. Ex1: Coin-flip example (Klein & Manning 2003) • Toss a coin: p(H)=p1, p(T)=p2 • Constraint: p1 + p2 = 1 • Question: what’s your estimate of p=(p1, p2)? • Answer: choose the p that maximizes H(p) [Plot from the slide: H as a function of p1, with the case p1 = 0.3 marked]
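
The claim that p1 = p2 = 1/2 maximizes H can be checked directly; a short derivation under the single constraint p2 = 1 – p1:

```latex
H(p_1) = -p_1 \log p_1 - (1 - p_1)\log(1 - p_1), \qquad
\frac{dH}{dp_1} = -\log p_1 + \log(1 - p_1) = 0
\;\Rightarrow\; p_1 = 1 - p_1 \;\Rightarrow\; p_1 = p_2 = \tfrac{1}{2}
```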

  12. Convexity • Constrained H(p) = – Σ x log x is concave: • – x log x is concave • – Σ x log x is concave (a sum of concave functions is concave) • The feasible region of constrained H is a linear subspace (which is convex) • Maximizing a concave function over a convex region is a convex optimization problem, so the constrained entropy surface has a single global maximum • The maximum likelihood exponential model (dual) formulation is also a convex problem (the log-likelihood is concave in the parameters)

  13. Ex2: An MT example (Berger et al., 1996) Possible translations for the word “in”: {dans, en, à, au cours de, pendant} Constraint: p(dans) + p(en) + p(à) + p(au cours de) + p(pendant) = 1 Intuitive answer: p(dans) = 1/5, p(en) = 1/5, p(à) = 1/5, p(au cours de) = 1/5, p(pendant) = 1/5

  14. An MT example (cont) Constraints: p(dans) + p(en) = 1/5; p(dans) + p(en) + p(à) + p(au cours de) + p(pendant) = 1 Intuitive answer: p(dans) = 1/10, p(en) = 1/10, p(à) = 8/30, p(au cours de) = 8/30, p(pendant) = 8/30

  15. An MT example (cont) Constraints: p(dans) + p(en) = 1/5; p(dans) + p(à) = 1/2; p(dans) + p(en) + p(à) + p(au cours de) + p(pendant) = 1 Intuitive answer: p(dans) = ? p(en) = ? p(à) = ? p(au cours de) = ? p(pendant) = ? (no longer obvious; see the numerical sketch below)
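
With three constraints there is no obvious uniform split left, which is where the maximum entropy machinery helps. A minimal numerical sketch (assuming NumPy/SciPy are available; all variable names are illustrative, not from the slides) that maximizes H(p) subject to the constraints above:

```python
import numpy as np
from scipy.optimize import minimize

words = ["dans", "en", "à", "au cours de", "pendant"]

def neg_entropy(p):
    # Minimize -H(p) = sum_x p_x log p_x  (clip to avoid log(0)).
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p[0] + p[1] - 1/5},  # p(dans) + p(en) = 1/5
    {"type": "eq", "fun": lambda p: p[0] + p[2] - 1/2},  # p(dans) + p(à) = 1/2
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},    # probabilities sum to 1
]
result = minimize(neg_entropy, x0=np.full(5, 0.2),
                  bounds=[(0.0, 1.0)] * 5, constraints=constraints)

for w, prob in zip(words, result.x):
    print(f"p({w}) = {prob:.4f}")
```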

  16. Ex3: POS tagging (Klein and Manning, 2003) • Let’s say we have the following event space: {NN, NNS, NNP, NNPS, VBZ, VBD} • … and the following empirical data (a table of counts on the slide) • Maximize H: • … want probabilities: E[NN, NNS, NNP, NNPS, VBZ, VBD] = 1

  17. Ex3 (cont) • Too uniform! • N* are more common than V*, so we add the feature fN = {NN, NNS, NNP, NNPS}, with E[fN] = 32/36 • … and proper nouns are more frequent than common nouns, so we add fP = {NNP, NNPS}, with E[fP] = 24/36 • … we could keep refining the model, e.g., by adding a feature to distinguish singular vs. plural nouns, or verb types.

  18. Ex4: overlapping features (Klein and Manning, 2003) • Maxent models handle overlapping features • Unlike an NB model, there is no double counting! • But they do not automatically model feature interactions.

  19. Modeling the problem • Objective function: H(p) • Goal: Among all the distributions that satisfy the constraints, choose the one, p*, that maximizes H(p). • Question: How to represent constraints?

  20. Features • A feature (a.k.a. feature function, indicator function) is a binary-valued function on events: fj : ε → {0, 1}, where ε = A × B • A: the set of possible classes (e.g., tags in POS tagging) • B: the space of contexts (e.g., neighboring words/tags in POS tagging) • Example: (see the sketch below)
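
The “Example” on the slide would be an indicator along the lines of the following sketch (the particular word/tag pair and the dict-based context are illustrative choices, not taken from the slide):

```python
def f_j(a, b):
    """Binary feature on an event (a, b): fires when the class a is the tag IN
    and the current word in the context b is 'about'."""
    return 1 if a == "IN" and b.get("current_word") == "about" else 0

print(f_j("IN", {"current_word": "about", "prev_tag": "VBD"}))  # 1
print(f_j("NN", {"current_word": "about", "prev_tag": "VBD"}))  # 0
```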

  21. Some notation • Finite training sample of events: S • Observed probability of x in S: p̃(x) • The model p’s probability of x: p(x) • The jth feature: fj • Observed expectation of fj (its empirical count, normalized): Ep̃[fj] = Σx p̃(x) fj(x) • Model expectation of fj: Ep[fj] = Σx p(x) fj(x)

  22. Constraints • Model feature expectation = observed feature expectation: Ep[fj] = Ep̃[fj] for j = 1, …, k • How to calculate these expectations?
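
Read concretely, the two sides of the constraint are computed as below (a small sketch with illustrative names; the training sample is a list of (a, b) events and the model p is a dict over the full event space):

```python
def observed_expectation(f, sample):
    """E_p~[f] = (1/N) * sum of f(a, b) over the N training events."""
    return sum(f(a, b) for a, b in sample) / len(sample)

def model_expectation(f, p, event_space):
    """E_p[f] = sum over all events (a, b) of p(a, b) * f(a, b)."""
    return sum(p[(a, b)] * f(a, b) for a, b in event_space)
```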

  23. Training data → observed events

  24. Restating the problem The task: find p* s.t. p* = argmax p∈P H(p), where P = {p : Ep[fj] = Ep̃[fj], j = 1, …, k} Objective function: –H(p) Constraints: Ep[fj] = Ep̃[fj] for each j; add a feature f0 with f0(x) = 1 for all x, so the constraint Ep[f0] = Ep̃[f0] = 1 encodes Σx p(x) = 1

  25. Questions • Is P empty? • Does p* exist? • Is p* unique? • What is the form of p*? • How to find p*?

  26. What is the form of p*? (Ratnaparkhi, 1997) Let Q be the set of models of exponential form: Q = {p : p(x) = π Πj αj^fj(x)}, with αj > 0 and π a normalizing constant Theorem: if p* ∈ P ∩ Q, then p* = argmax p∈P H(p). Furthermore, p* is unique.

  27. Using Lagrange multipliers Minimize A(p) = –H(p) + Σj λj (Ep[fj] – Ep̃[fj]) + μ (Σx p(x) – 1) Setting the derivative ∂A/∂p(x) = 0 and solving for p(x) gives the exponential form of p*.

  28. Two equivalent forms p(x) = π Πj αj^fj(x) and p(x) = (1/Z) exp(Σj λj fj(x)), with αj = e^λj and Z = 1/π
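
A minimal sketch of the second (exponential) form over a finite event space, computing p(x) = exp(Σj λj fj(x)) / Z (names are illustrative):

```python
import math

def loglinear_prob(x, features, lambdas, event_space):
    """p(x) = exp(sum_j lambda_j * f_j(x)) / Z, with Z summing over all events."""
    def score(e):
        return math.exp(sum(l * f(e) for l, f in zip(lambdas, features)))
    Z = sum(score(e) for e in event_space)
    return score(x) / Z
```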

  29. Relation to Maximum Likelihood The log-likelihood of the empirical distribution p̃ as predicted by a model q is defined as L(q) = Σx p̃(x) log q(x) Theorem: if p* ∈ P ∩ Q, then p* = argmax q∈Q L(q). Furthermore, p* is unique.

  30. Summary (so far) Goal: find p* in P, which maximizes H(p). It can be proved that when p* exists it is unique. The model p* in P with maximum entropy is the model in Q that maximizes the likelihood of the training sample

  31. Summary (cont) • Adding constraints (features): (Klein and Manning, 2003) • Lower maximum entropy • Raise maximum likelihood of data • Bring the distribution further from uniform • Bring the distribution closer to data

  32. Parameter estimation

  33. Algorithms • Generalized Iterative Scaling (GIS): • (Darroch and Ratcliff, 1972) • Improved Iterative Scaling (IIS): • (Della Pietra et al., 1995)

  34. GIS: setup Requirements for running GIS: • Obey form of model and constraints: p(x) = π Πj αj^fj(x) with Ep[fj] = Ep̃[fj] • An additional constraint: Σj fj(x) = C for all x, for some constant C • If this does not hold, set C = maxx Σj fj(x) and add a new feature fk+1: fk+1(x) = C – Σj=1..k fj(x)

  35. GIS algorithm • Compute dj = Ep̃[fj], j = 1, …, k+1 • Initialize λj(0) (any values, e.g., 0) • Repeat until convergence: • for each j • compute Ep(n)[fj], where p(n)(x) = (1/Z) exp(Σj λj(n) fj(x)) • update λj(n+1) = λj(n) + (1/C) log(dj / Ep(n)[fj]) (a runnable sketch follows below)
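
A compact sketch of this loop over a finite event space, assuming every feature has a non-zero observed expectation and that Σj fj(x) = C for all x (all names are illustrative):

```python
import math

def gis(event_space, p_tilde, features, C, iterations=100):
    """Generalized Iterative Scaling.

    event_space: list of events x
    p_tilde:     dict x -> empirical probability p~(x)
    features:    list of functions f_j(x) -> {0, 1}, with sum_j f_j(x) = C for all x
    """
    # d_j: observed feature expectations E_p~[f_j]
    d = [sum(p_tilde[x] * f(x) for x in event_space) for f in features]
    lambdas = [0.0] * len(features)

    for _ in range(iterations):
        # Current model p(x) = exp(sum_j lambda_j f_j(x)) / Z.
        scores = {x: math.exp(sum(l * f(x) for l, f in zip(lambdas, features)))
                  for x in event_space}
        Z = sum(scores.values())
        p = {x: s / Z for x, s in scores.items()}
        # Model feature expectations E_p[f_j].
        e = [sum(p[x] * f(x) for x in event_space) for f in features]
        # GIS update, written additively in log space:
        # alpha_j <- alpha_j * (d_j / e_j)^(1/C), with alpha_j = exp(lambda_j).
        lambdas = [l + (1.0 / C) * math.log(dj / ej)
                   for l, dj, ej in zip(lambdas, d, e)]
    return lambdas
```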

  36. Approximation for calculating feature expectation Ep[fj] = Σa,b p(a, b) fj(a, b) ≈ (1/N) Σi=1..N Σa p(a | bi) fj(a, bi), i.e., sum over the contexts bi observed in training rather than over all possible contexts b

  37. Properties of GIS • L(p(n+1)) >= L(p(n)) • The sequence is guaranteed to converge to p*. • Convergence can be very slow. • The running time of each iteration is O(NPA): • N: the training set size • P: the number of classes • A: the average number of features that are active for a given event (a, b).

  38. Feature selection

  39. Feature selection • Throw in many features and let the machine select the weights • Manually specify feature templates • Problem: too many features • An alternative: greedy algorithm • Start with an empty set S • Add a feature at each iteration

  40. Notation With the feature set S: the model pS, with training-data log-likelihood L(pS) After adding a feature f (with weight α): the model pS∪f The gain in the log-likelihood of the training data: GS(f) = L(pS∪f) – L(pS)

  41. Feature selection algorithm (Berger et al., 1996) • Start with S being empty; thus pS is uniform • Repeat until the gain is small enough: • For each candidate feature f: • Compute the model using IIS • Calculate the log-likelihood gain • Choose the feature with maximal gain, and add it to S → Problem: too expensive (see the sketch below)
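
The loop above, written as a runnable sketch; train_model and log_likelihood stand in for the IIS trainer and the training-data log-likelihood (both are assumptions of this sketch, not part of the slides):

```python
def greedy_feature_selection(candidates, train_model, log_likelihood, min_gain=1e-3):
    """Repeatedly add the candidate feature with the largest log-likelihood gain,
    stopping when the best gain falls below min_gain."""
    candidates = list(candidates)
    selected = []
    current_ll = log_likelihood(train_model(selected))
    while candidates:
        gains = {}
        for f in candidates:
            model = train_model(selected + [f])  # expensive: a full retraining per candidate
            gains[f] = log_likelihood(model) - current_ll
        best = max(gains, key=gains.get)
        if gains[best] < min_gain:
            break
        selected.append(best)
        candidates.remove(best)
        current_ll += gains[best]
    return selected
```

This is exactly the “too expensive” variant: every candidate triggers a full retraining. Slide 42 replaces that inner call with a one-dimensional optimization over only the new feature’s weight.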

  42. Approximating gains (Berger et al., 1996) • Instead of recalculating all the weights, calculate only the weight of the new feature.

  43. Training a MaxEnt Model Scenario #1: • Define feature templates • Create the feature set • Determine the optimum feature weights via GIS or IIS Scenario #2: • Define feature templates • Create candidate feature set S • At every iteration, choose the feature from S (with max gain) and determine its weight (or choose the top-n features and their weights).

  44. Case study

  45. POS tagging (Ratnaparkhi, 1996) • Notation variation: • fj(a, b): a: class, b: context • fj(hi, ti): hi: history for the ith word, ti: tag for the ith word • History: hi = {wi, wi-1, wi-2, wi+1, wi+2, ti-2, ti-1} • Training data: • Treat it as a list of (hi, ti) pairs • How many pairs are there?

  46. Using a MaxEnt Model • Modeling: • Training: • Define feature templates • Create the feature set • Determine the optimum feature weights via GIS or IIS • Decoding:

  47. Modeling

  48. Training step 1: define feature templates [Table from the slide: feature templates defined over the history hi and the tag ti]

  49. Step 2: Create feature set • Collect all the features from the training data • Throw away features that appear fewer than 10 times
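
A small sketch of these two steps (the template representation and helper names are illustrative, not from the slides):

```python
from collections import Counter

def build_feature_set(pairs, templates, cutoff=10):
    """Instantiate every template on every (h, t) training pair and keep only
    the features observed at least `cutoff` times."""
    counts = Counter()
    for h, t in pairs:
        for template in templates:
            counts[template(h, t)] += 1
    return {feat for feat, c in counts.items() if c >= cutoff}

# One illustrative template: current word paired with the current tag.
templates = [lambda h, t: ("w_i=" + h["w_i"], "t_i=" + t)]
```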

  50. Step 3: determine the feature weights • GIS • Training time: • Each iteration: O(NTA): • N: the training set size • T: the number of allowable tags • A: average number of features that are active for a (h, t). • How many features?
