
Natural Language Processing via Learning and Global Inference with Constraints


Presentation Transcript


  1. Natural Language Processing via Learning and Global Inference with Constraints Dan Roth Department of Computer Science University of Illinois at Urbana-Champaign • With thanks to: • Collaborators: Ming-Wei Chang, Vasin Punyakanok, Lev Ratinov, Mark Sammons, Scott Yih, Dav Zimak • Funding: ARDA, under the AQUAINT program • NSF: ITR IIS-0085836, ITR IIS-0428472, ITR IIS-0085980 • A DOI grant under the Reflex program • DASH Optimization (Xpress-MP) November 2007 Beckman Institute

  2. Nice to Meet You

  3. Learning and Inference • Global decisions in which several local decisions play a role but there are mutual dependencies on their outcome. • (Learned) models/classifiers for different sub-problems • Incorporate classifiers’ information, along with constraints, in making coherent decisions – decisions that respect the local models as well as domain & context specific constraints. • Global inference for the best assignment to all variables of interest.

  4. Inference

  5. A process that maintains and updates a collection of propositions about the state of affairs. Comprehension (ENGLAND, June, 1989) - Christopher Robin is alive and well. He lives in England. He is the same person that you read about in the book, Winnie the Pooh. As a boy, Chris lived in a pretty home called Cotchfield Farm. When Chris was three years old, his father wrote a poem about him. The poem was printed in a magazine for others to read. Mr. Robin then wrote a book. He made up a fairy tale land where Chris lived. His friends were animals. There was a bear called Winnie the Pooh. There was also an owl and a young pig, called a piglet. All the animals were stuffed toys that Chris owned. Mr. Robin made them come to life with his words. The places in the story were all near Cotchfield Farm. Winnie the Pooh was written in 1925. Children still love to read about Christopher Robin and his animal friends. Most people don't know he is a real person who is grown now. He has written two books of his own. They tell what it is like to be famous. 1. Christopher Robin was born in England. 2. Winnie the Pooh is a title of a book. 3. Christopher Robin’s dad was a magician. 4. Christopher Robin must be at least 65 now.

  6. What we Know: Stand-Alone Ambiguity Resolution Illinois’ bored of education board …Nissan Car and truck plant is … …divide life into plant and animal kingdom (This Art) (can N) (will MD) (rust V) V,N,N The dog bit the kid. He was taken to a veterinarian / a hospital Learn a function f: X → Y that maps observations in a domain to one of several categories or <

  7. Classification is Well Understood • Theoretically: generalization bounds • How many examples does one need to see in order to guarantee good behavior on previously unobserved examples? • Algorithmically: good learning algorithms for linear representations. • Can deal with very high dimensionality (10^6 features) • Very efficient in terms of computation and # of examples. On-line. • Key issues remaining: • Learning protocols: how to minimize interaction (supervision); how to map domain/task information to supervision; semi-supervised learning; active learning; ranking; sequences; adaptation • What are the features? No good theoretical understanding here. • Programming systems that have multiple classifiers

  8. A process that maintains and updates a collection of propositions about the state of affairs. Comprehension (ENGLAND, June, 1989) - Christopher Robin is alive and well. He lives in England. He is the same person that you read about in the book, Winnie the Pooh. As a boy, Chris lived in a pretty home called Cotchfield Farm. When Chris was three years old, his father wrote a poem about him. The poem was printed in a magazine for others to read. Mr. Robin then wrote a book. He made up a fairy tale land where Chris lived. His friends were animals. There was a bear called Winnie the Pooh. There was also an owl and a young pig, called a piglet. All the animals were stuffed toys that Chris owned. Mr. Robin made them come to life with his words. The places in the story were all near Cotchfield Farm. Winnie the Pooh was written in 1925. Children still love to read about Christopher Robin and his animal friends. Most people don't know he is a real person who is grown now. He has written two books of his own. They tell what it is like to be famous. 1. Christopher Robin was born in England. 2. Winnie the Pooh is a title of a book. 3. Christopher Robin’s dad was a magician. 4. Christopher Robin must be at least 65 now. This is an Inference Problem

  9. This Talk • Global Inference over Local Models/Classifiers + Expressive Constraints • Model • Generality of the framework • Training Paradigms • Global vs. Local training • Semi-Supervised Learning • Examples • Semantic Parsing • Information Extraction • Pipeline processes

  10. Sequential Constraints Structure (Figure: two sequence models over states s1…s6 and observations o1…o6; this structure allows dynamic-programming-based inference.) • Three models for sequential inference with classifiers [Punyakanok & Roth NIPS’01] • HMM; HMM with Classifiers • Sufficient for easy problems • Conditional Models (PMM) • Allow direct modeling of states as a function of input • Classifiers may vary; SNoW (Winnow; Perceptron); MEMM: MaxEnt; SVM based • Constraint Satisfaction Models (CSCL: more general constraints) • The inference problem is modeled as weighted 2-SAT • With sequential constraints: shown to have an efficient solution • Recent work – viewed as multi-class classification; emphasis on global training [Collins’02, CRFs, SVMs]; efficiency and performance issues By far the most popular in applications What if the structure of the problem/constraints is not sequential?

  11. Pipeline • Pipelining is a crude approximation; interactions occur across levels and downstream decisions often interact with previous decisions. • Leads to propagation of errors • Occasionally, later-stage problems are easier but upstream mistakes will not be corrected. • Most problems are not single classification problems (typical pipeline: Raw Data → POS Tagging → Phrases → Semantic Entities → Relations; also Parsing, WSD, Semantic Role Labeling) • Looking for: • Global inference over the outcomes of different (learned) predictors as a way to break away from this paradigm [between pipeline & fully global] • Allows a flexible way to incorporate linguistic and structural constraints.

  12. Inference with General Constraint Structure Example: “Dole’s wife, Elizabeth, is a native of N.C.” (Figure: entities E1, E2, E3 and relations R12, R23 over the sentence.) Improvement over no inference: 2-5%

  13. Problem Setting (Figure: a constraint graph over variables y1…y8 and observations, with constraints such as C(y1,y4) and C(y2,y3,y6,y7,y8), possibly weighted: + WC.) • Random Variables Y • Conditional Distributions P (learned by models/classifiers) • Constraints C – any Boolean function defined on partial assignments (possibly with weights W) • Goal: Find the “best” assignment – the assignment that achieves the highest global performance: Y* = argmax_Y P(Y) subject to constraints C • This is an Integer Programming Problem • Other, more general ways to incorporate soft constraints exist [ACL’07]
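
To make the setting concrete, here is a small illustrative sketch (not from the talk): local classifier scores for a few Boolean variables, hard Boolean constraints over partial assignments, and an exhaustive search for the best consistent global assignment. All names, scores, and constraints below are invented.

```python
from itertools import product

# Hypothetical local-classifier scores for each variable taking value 0 or 1.
scores = {
    "y1": [0.2, 1.1], "y2": [0.7, 0.3], "y3": [0.1, 0.9], "y4": [0.8, 0.4],
}
variables = sorted(scores)

# Boolean constraints over (partial) assignments, treated as hard constraints here.
constraints = [
    lambda a: not (a["y1"] == 1 and a["y4"] == 1),   # C(y1, y4): not both active
    lambda a: a["y2"] == 1 or a["y3"] == 1,          # C(y2, y3): at least one active
]

def best_assignment():
    """Exhaustive search for the highest-scoring assignment that satisfies all constraints."""
    best, best_score = None, float("-inf")
    for values in product([0, 1], repeat=len(variables)):
        a = dict(zip(variables, values))
        if all(c(a) for c in constraints):
            s = sum(scores[v][a[v]] for v in variables)
            if s > best_score:
                best, best_score = a, s
    return best, best_score

print(best_assignment())
```

Exhaustive search is only for illustration; the point of the slides that follow is that the same argmax can be posed as an integer linear program and handed to an off-the-shelf solver.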

  14. A General Inference Setting • Essentially all complex models studied today can be viewed as optimizing a linear objective function: HMMs/CRFs [Roth’99; Collins’02; Lafferty et al. ’02] • Linear objective functions can be derived from a probabilistic perspective: Markov Random Field → [standard; Kleinberg & Tardos] Optimization Problem (Metric Labeling) → [Chekuri et al. ’01] Linear Programming Problems → Inference as Constrained Optimization [Yih & Roth CoNLL’04] [Punyakanok et al. COLING’04]… • The probabilistic perspective supports finding the most likely assignment • Not necessarily what we want • Our Integer Linear Programming (ILP) formulation • Allows the incorporation of more general cost functions • General (non-sequential) constraint structure • Better exploitation (computationally) of hard constraints • Can find the optimal solution if desired

  15. Formal Model • Objective: a weight vector for the “local” models (a collection of classifiers; log-linear models such as HMM/CRF, or a combination), minus a (soft) constraints component – a penalty for violating each constraint, scaled by how far y is from a “legal” assignment • Subject to constraints • How to solve? This is an Integer Linear Program. Solving with ILP packages gives an exact solution; search techniques are also possible. • How to train? How do we incorporate constraints into the learning process?
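
The components named on this slide suggest an objective of roughly the following form. This is a hedged reconstruction following the soft-constraint formulation of [Chang, Ratinov, Roth, ACL’07]; the exact notation on the original slide may differ:

```latex
y^{*} \;=\; \arg\max_{y}\;
\underbrace{\,w^{\top}\phi(x,y)\,}_{\text{local models (classifiers / log-linear models)}}
\;-\;
\underbrace{\sum_{i}\rho_{i}\, d\!\left(y,\, \mathbf{1}_{C_{i}(x)}\right)}_{\text{(soft) constraints component}}
```

Here each ρ_i is the penalty for violating constraint C_i, and d measures how far y is from an assignment that satisfies C_i.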

  16. Example: Semantic Role Labeling Who did what to whom, when, where, why, … I left my pearls to my daughter in my will. [I]_A0 left [my pearls]_A1 [to my daughter]_A2 [in my will]_AM-LOC . • A0: Leaver • A1: Things left • A2: Benefactor • AM-LOC: Location • Special case (structured output problem): here, all the data is available at one time; in general, classifiers might be learned from different sources, at different times, in different contexts. • Implications on training paradigms • Example constraints: overlapping arguments; if A2 is present, A1 must also be present.

  17. Semantic Role Labeling (2/2) • PropBank [Palmer et al. ’05] provides a large human-annotated corpus of semantic verb-argument relations. • It adds a layer of generic semantic labels to Penn Treebank II. • (Almost) all the labels are on the constituents of the parse trees. • Core arguments: A0-A5 and AA • different semantics for each verb • specified in the PropBank Frame files • 13 types of adjuncts labeled as AM-arg • where arg specifies the adjunct type

  18. Algorithmic Approach (Example sentence: I left my nice pearls to her; the figure brackets candidate arguments.) • Identify argument candidates (identify the vocabulary) • Pruning [Xue & Palmer, EMNLP’04] • Argument Identifier: binary classification (SNoW) • Classify argument candidates • Argument Classifier: multi-class classification (SNoW) • Inference (over the old and new vocabulary) • Use the estimated probability distribution given by the argument classifier • Use structural and linguistic constraints • Infer the optimal global output

  19. Inference (Example: I left my nice pearls to her.) • The output of the argument classifier often violates some constraints, especially when the sentence is long. • Finding the best legitimate output is formalized as an optimization problem and solved via Integer Linear Programming. [Punyakanok et al. ’04; Roth & Yih ’04, ’05] • Input: • The probability estimation (by the argument classifier) • Structural and linguistic constraints • Allows incorporating expressive (non-sequential) constraints on the variables (the argument types).

  20. Integer Linear Programming Inference • For each argument a_i, set up a Boolean variable a_{i,t} indicating whether a_i is classified as t • Goal is to maximize Σ_{i,t} score(a_i = t) · a_{i,t} • Subject to the (linear) constraints • Any Boolean constraint can be encoded as linear constraint(s). • If score(a_i = t) = P(a_i = t), the objective is to find the assignment that maximizes the expected number of arguments that are correct and satisfies the constraints. (A small solver-based sketch follows the next slide.)

  21. Constraints Any Boolean rule can be encoded as a linear constraint. • No duplicate argument classes: Σ_{a ∈ POTARG} x_{a=A0} ≤ 1 • R-ARG: ∀ a2 ∈ POTARG, x_{a2=R-A0} ≤ Σ_{a ∈ POTARG} x_{a=A0} (if there is an R-ARG phrase, there is an ARG phrase) • C-ARG: ∀ a2 ∈ POTARG, x_{a2=C-A0} ≤ Σ_{a ∈ POTARG, a before a2} x_{a=A0} (if there is a C-ARG phrase, there is an ARG before it) • These are universally quantified rules • Many other possible constraints: • Unique labels • No overlapping or embedding • Relations between number of arguments • If verb is of type A, no argument of type B • Joint inference can also be used to combine different SRL systems.
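
The promised sketch: a toy version of this ILP using the PuLP modeling library. The candidates, labels, and scores are invented, and only two of the slide’s constraints are encoded; it is meant to show the encoding pattern, not the actual system.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

candidates = ["a1", "a2", "a3"]            # hypothetical argument candidates
labels = ["A0", "A1", "R-A0", "null"]      # hypothetical label set ("null" = not an argument)

# score[i, t]: the argument classifier's estimate that candidate i has label t (made up)
score = {(i, t): 0.05 for i in candidates for t in labels}
score["a1", "A0"], score["a2", "A1"], score["a3", "R-A0"] = 0.9, 0.8, 0.6

prob = LpProblem("srl_inference", LpMaximize)
# Boolean indicator x[i, t] = 1 iff candidate i is classified as t ("-" avoided in solver names)
x = {(i, t): LpVariable(f"x_{i}_{t}".replace("-", "_"), cat=LpBinary)
     for i in candidates for t in labels}

# Objective: sum_{i,t} score(a_i = t) * x_{i,t}
prob += lpSum(score[i, t] * x[i, t] for i in candidates for t in labels)

# Each candidate takes exactly one label (possibly "null")
for i in candidates:
    prob += lpSum(x[i, t] for t in labels) == 1

# "No duplicate argument classes": at most one candidate labeled A0
prob += lpSum(x[i, "A0"] for i in candidates) <= 1

# "R-ARG": a referential R-A0 anywhere requires some A0 in the sentence
for i in candidates:
    prob += x[i, "R-A0"] <= lpSum(x[j, "A0"] for j in candidates)

prob.solve()
print({i: t for (i, t) in x if x[i, t].value() == 1})
```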

  22. ILP as a Unified Algorithmic Scheme (Figure: an HMM/CRF trellis over states A/B/C for y1…y5, with source s and sink t.) • Consider a common model for sequential inference: HMM/CRF • Inference in this model is done via the Viterbi algorithm. • Viterbi is a special case of Linear Programming based inference: it is a shortest-path problem, which is an LP with a canonical constraint matrix that is totally unimodular, so integral solutions come for free. • One can now incorporate non-sequential / expressive / declarative constraints by modifying this canonical matrix • The extension reduces to a polynomial scheme under some conditions (e.g., when constraints are sequential, when the solution space does not change, etc.) • Does not necessarily increase complexity, and is very efficient in practice [Roth & Yih, ICML’05] • So far we have shown the use of only (deterministic) constraints; statistical constraints can also be used. • Learn a rather simple model; make decisions with a more expressive model
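
For comparison with the general ILP view, here is the sequential special case as plain dynamic programming: a standard Viterbi decoder written from scratch (not the LP formulation itself). The scores are whatever a local model supplies, assumed to be in log space; the example inputs are random placeholders.

```python
import numpy as np

def viterbi(emission, transition, initial):
    """emission: T x S log-scores, transition: S x S, initial: S.
    Returns the highest-scoring state sequence."""
    T, S = emission.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = initial + emission[0]
    for t in range(1, T):
        for s in range(S):
            prev = score[t - 1] + transition[:, s]   # best way to reach state s at time t
            back[t, s] = int(np.argmax(prev))
            score[t, s] = prev[back[t, s]] + emission[t, s]
    path = [int(np.argmax(score[-1]))]               # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return list(reversed(path))

# Example: 4 observations, 3 states, random log-scores
rng = np.random.default_rng(0)
print(viterbi(rng.normal(size=(4, 3)), rng.normal(size=(3, 3)), rng.normal(size=3)))
```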

  23. Extracting Relations via Semantic Analysis (Screenshot from a CCG demo: http://L2R.cs.uiuc.edu/~cogcomp) Semantic parsing reveals several relations in the sentence along with their arguments. • This approach produces a very good semantic parser: F1 ≈ 90% • Easy and fast: ~7 sentences/sec (using Xpress-MP) • Top-ranked system in the CoNLL’05 shared task • The key difference is the inference

  24. This Talk • Global Inference over Local Models/Classifiers + Expressive Constraints • Model • Generality of the framework • Training Paradigms • Global vs. Local training • Semi-Supervised Learning • Examples • Semantic Parsing • Information Extraction • Pipeline processes

  25. Textual Entailment • Given: Q: Who acquired Overture? • Determine: A: Eyeing the huge market potential, currently led by Google, Yahoo took over search company Overture Services Inc last year. • Is it true that? (Textual Entailment) Eyeing the huge market potential, currently led by Google, Yahoo took over search company Overture Services Inc. last year → Yahoo acquired Overture; Overture is a search company; Google is a search company; Google owns Overture; … • Components involved: phrasal verb paraphrasing [Connor & Roth ’07], entity matching [Li et al., AAAI’04, NAACL’04], Semantic Role Labeling

  26. Training Paradigms that Support Global Inference • Incorporating general constraints (Algorithmic Approach) • Allow both statistical and expressive declarative constraints • Allow non-sequential constraints (generally difficult) • Coupling vs. Decoupling Training and Inference. • Incorporating global constraints is important but • Should it be done only at evaluation time or also at training time? • Issues related to: • modularity, efficiency and performance, availability of training data May not be relevant in some problems.

  27. Phrase Identification Problem • Use classifiers’ outcomes to identify phrases (in the figure, two classifiers propose opening “[” and closing “]” brackets over the input o1 o2 o3 o4 o5 o6 o7 o8 o9 o10) • The final outcome is determined by optimizing the classifiers’ outcomes together with the constraints • Infer the best consistent bracketing • Did this classifier make a mistake? How should we train it?

  28. Training in the Presence of Constraints • General training paradigm: • First term: learning from data • Second term: guiding the model by constraints • We can choose whether constraints are included in training or only in evaluation.

  29. Learning the Components Together! (Cartoon: models f1(x) … f5(x) over inputs X and outputs Y; each model can be more complex and may have a view on a set of output variables.) • L+I: Learning plus Inference – training without constraints; testing: inference with constraints • IBT: Inference-Based Training

  30. Perceptron-Based Global Learning (Figure: inputs x1…x7 feed models f1(x)…f5(x) producing Y. Local predictions: Y’ = (-1, 1, 1, 1, 1); after applying constraints: Y’ = (-1, 1, -1, 1, 1); true global labeling: Y = (-1, 1, -1, -1, 1).) Which one is better? When and why?
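
A rough sketch of the regime contrasted here, using a structured perceptron (my own simplification, not the talk’s code): in IBT, the constrained global prediction drives every update, whereas in L+I one would train the local models independently and call `constrained_inference` only at test time. `phi`, `candidates`, and `constraints` are caller-supplied and hypothetical.

```python
import numpy as np

def constrained_inference(w, phi, x, candidates, constraints):
    """Pick the highest-scoring candidate structure that satisfies all constraints."""
    feasible = [y for y in candidates(x) if all(c(x, y) for c in constraints)]
    return max(feasible, key=lambda y: float(w @ phi(x, y)))

def ibt_perceptron(data, phi, candidates, constraints, dim, epochs=10):
    """Inference-Based Training: constrained inference sits inside the training loop.

    data: list of (x, y_gold); structures y are assumed comparable (e.g., tuples of labels);
    phi(x, y) returns a feature vector of length `dim`.
    """
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, y_gold in data:
            y_hat = constrained_inference(w, phi, x, candidates, constraints)
            if y_hat != y_gold:                       # mistake-driven update
                w += phi(x, y_gold) - phi(x, y_hat)   # standard perceptron step
    return w
```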

  31. Claims • When the local classification problems are “easy”, L+I outperforms IBT. • In many applications, the components are identifiable and easy to learn (e.g., argument, open-close, PER). • Only when the local problems become difficult to solve in isolation does IBT outperform L+I, but it needs a larger number of training examples. • When data is scarce, problems are not easy, and constraints can be used, along with a “weak” model, to label unlabeled data and improve the model. • Experimental results and theoretical intuition support our claims. • L+I: cheaper computationally; modular • IBT is better in the limit, and in other extreme cases. • Combinations are possible: L+I, and then IBT.

  32. L+I vs. IBT: Bounds and Simulated Data • Bound prediction: local error ε ≤ ε_opt + O( ( (d log m + log 1/δ) / m )^{1/2} ); global error ε ≤ ε_0 + O( ( (c·d log m + c²d + log 1/δ) / m )^{1/2} ) — an indication of the hardness of the problem. • Simulated data (curves for ε_opt = 0, 0.1, 0.2): the more identifiable the individual problems are, the better the overall performance is with L+I.

  33. Relative Merits: SRL (Plot: performance vs. difficulty of the learning problem (# features), from easy to hard.) • For easy problems L+I is better; when the problem is artificially made harder, the tradeoff is clearer. • In some cases problems are hard due to lack of training data → semi-supervised learning.

  34. Information Extraction with Background Knowledge (Constraints) Citation: Lars Ole Andersen. Program analysis and specialization for the C Programming language. PhD thesis. DIKU, University of Copenhagen, May 1994. • Prediction result of a trained HMM: the tokens are split across the fields [AUTHOR] [TITLE] [EDITOR] [BOOKTITLE] [TECH-REPORT] [INSTITUTION] [DATE] incorrectly – it violates lots of constraints!

  35. Examples of Constraints • Each field must be a consecutive list of words, and can appear at most once in a citation. • State transitions must occur on punctuation marks. • The citation can only start with AUTHOR or EDITOR. • The words pp., pages correspond to PAGE. • Four digits starting with 20xx or 19xx are DATE. • Quotations can appear only in TITLE. • …
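
These declarative constraints are easy to state as predicates over a labeling. A small illustrative sketch follows; the field names match the slide, but the representation of a labeling (a list of token/field pairs) is my own.

```python
# A labeling is a list of (token, field) pairs, e.g. [("Lars", "AUTHOR"), ...].

def starts_with_author_or_editor(labeling):
    """The citation can only start with AUTHOR or EDITOR."""
    return labeling[0][1] in {"AUTHOR", "EDITOR"}

def fields_are_consecutive(labeling):
    """Each field forms one consecutive block and appears at most once."""
    seen, prev = set(), None
    for _, field in labeling:
        if field != prev and field in seen:
            return False
        seen.add(field)
        prev = field
    return True

def transitions_on_punctuation(labeling):
    """Field changes are only allowed right after punctuation marks."""
    for (tok, f1), (_, f2) in zip(labeling, labeling[1:]):
        if f1 != f2 and tok not in {".", ",", ";", ":"}:
            return False
    return True

example = [("Lars", "AUTHOR"), ("Ole", "AUTHOR"), ("Andersen", "AUTHOR"), (".", "AUTHOR"),
           ("Program", "TITLE"), ("analysis", "TITLE")]
print(starts_with_author_or_editor(example),
      fields_are_consecutive(example),
      transitions_on_punctuation(example))
```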

  36. Information Extraction with Constraints • Adding constraints, we get correct results! • [AUTHOR] Lars Ole Andersen. [TITLE] Program analysis and specialization for the C Programming language. [TECH-REPORT] PhD thesis. [INSTITUTION] DIKU, University of Copenhagen, [DATE] May, 1994. • If incorporated into the training, better results mean better feedback!

  37. Semi-Supervised Learning with Constraints • In traditional semi-supervised learning the model can drift away from the correct one. • Constraints can be used: • At decision time, to bias the objective function towards favoring constraint satisfaction. • At training time, to improve the labeling of unlabeled data (and thus improve the model). (Diagram: the model labels un-labeled data; constraints are applied at decision time and feed back into the model.)

  38. Constraint-Driven Learning (CODL) [Chang, Ratinov, Roth, ACL’07] • λ = learn(Tr) — any supervised learning algorithm, parametrized by λ • For N iterations do: • T = ∅ • For each x in the unlabeled dataset: • y = Inference(x, C, λ) — any inference algorithm (with constraints) • T = T ∪ {(x, y)} — augmenting the training set (feedback)

  39. Constraint-Driven Learning (CODL) [Chang, Ratinov, Roth, ACL’07] • λ = λ0 = learn(Tr) — any supervised learning algorithm, parametrized by λ • For N iterations do: • T = ∅ • For each x in the unlabeled dataset: • {y1, …, yK} = Top-K-Inference(x, C, λ) — any inference algorithm (with constraints) • T = T ∪ {(x, yi)}, i = 1…K — augmenting the training set (feedback) • λ = γ λ0 + (1 − γ) learn(T) — learn from the new training data; weight the supervised and unsupervised models (Nigam et al., 2000)
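
A compact Python rendering of the loop above. This is a sketch under my own interfaces: `learn` is assumed to return a dict of feature weights and `constrained_top_k` to perform inference with constraints; both are supplied by the caller and are not from the talk.

```python
def codl(labeled, unlabeled, learn, constrained_top_k, gamma=0.9, iterations=5, k=5):
    """Sketch of Constraint-Driven Learning (CODL) [Chang, Ratinov, Roth, ACL'07].

    learn(dataset) -> model (dict of feature weights);
    constrained_top_k(x, model, k) -> the K best constraint-respecting labelings of x.
    """
    model0 = learn(labeled)                 # supervised starting point
    model = model0
    for _ in range(iterations):
        pseudo = []
        for x in unlabeled:
            for y in constrained_top_k(x, model, k):   # inference with constraints
                pseudo.append((x, y))                  # feedback: augment the training set
        model = combine(model0, learn(pseudo), gamma)  # weight supervised vs. self-labeled
    return model

def combine(m0, m1, gamma):
    """Convex combination of two weight dicts (Nigam-style weighting)."""
    return {f: gamma * m0.get(f, 0.0) + (1 - gamma) * m1.get(f, 0.0)
            for f in set(m0) | set(m1)}
```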

  40. Token-based accuracy (inference with constraints)

  41. Semi-Supervised Learning with Constraints (Plot: performance vs. # of available labeled examples; curves for learning with constraints and learning w/o constraints [300 examples]. The objective function is as above.) • Constraints are used to bootstrap a semi-supervised learner: a poor model + constraints are used to annotate unlabeled data, which in turn is used to keep training the model.

  42. Conclusions • Discussed a general paradigm for learning and inference in the context of natural language understanding tasks • A general constrained-optimization approach for integrating learned models with additional (declarative or statistical) expressivity. • A paradigm for making machine learning practical – allow domain/task specific constraints. • How to train? • Learn locally and make use globally (via global inference) [Punyakanok et al., IJCAI’05] • Ability to make use of domain knowledge & constraints to drive supervision [Klementiev & Roth, ACL’06; Chang, Ratinov, Roth, ACL’07] • LBJ (Learning Based Java): http://L2R.cs.uiuc.edu/~cogcomp – a modeling language that supports programming along with building learned models, and allows incorporating constraints and doing inference with them

  43. Problem Setting (Figure: a constraint graph over variables y1…y8 with constraints such as C(y1,y4) and C(y2,y3,y6,y7,y8), possibly weighted: + WC.) • Random Variables Y • Conditional Distributions P (learned by models/classifiers) • Constraints C – any Boolean function defined on partial assignments (possibly with weights W) • Goal: Find the “best” assignment – the assignment that achieves the highest global accuracy: Y* = argmax_Y P(Y) subject to constraints C • This is an Integer Programming Problem • Other, more general ways to incorporate soft constraints exist [ACL’07]
