
Modeling Semantic Containment and Exclusion in Natural Language Inference



Presentation Transcript


1. Modeling Semantic Containment and Exclusion in Natural Language Inference
Bill MacCartney and Christopher D. Manning
NLP Group, Stanford University
22 August 2008

2. Introduction: natural language inference (NLI)
• Aka recognizing textual entailment (RTE)
• Does premise P justify an inference to hypothesis H?
• An informal, intuitive notion of inference: not strict logic
• Emphasis on variability of linguistic expression
• Example: P: Every firm polled saw costs grow more than expected, even after adjusting for inflation. H: Every big company in the poll reported cost increases. Answer: yes
• Necessary to the goal of natural language understanding (NLU)
• Can also enable semantic search, question answering, …

3. NLI: a spectrum of approaches
• Robust, but shallow: lexical/semantic overlap (Jijkoun & de Rijke 2005), patterned relation extraction (Romano et al. 2006), semantic graph matching (Hickl et al. 2006; MacCartney et al. 2006)
  • Problem: imprecise; easily confounded by negation, quantifiers, conditionals, factive & implicative verbs, etc.
• Deep, but brittle: FOL & theorem proving (Bos & Markert 2006)
  • Problem: hard to translate NL to FOL: idioms, anaphora, ellipsis, intensionality, tense, aspect, vagueness, modals, indexicals, reciprocals, propositional attitudes, scope ambiguities, anaphoric adjectives, non-intersective adjectives, temporal & causal relations, unselective quantifiers, adverbs of quantification, donkey sentences, generic determiners, comparatives, phrasal verbs, …
• Solution? Natural logic (this work), in between on the spectrum

4. Outline
• Introduction
• A Theory of Natural Logic
• The NatLog System
• Experiments with FraCaS
• Experiments with RTE
• Conclusion

5. What is natural logic? (≠ natural deduction)
• Characterizes valid patterns of inference via surface forms
  • precise, yet sidesteps the difficulties of translating to FOL
• A long history
  • traditional logic: Aristotle's syllogisms, scholastics, Leibniz, …
  • modern natural logic begins with Lakoff (1970)
  • van Benthem & Sánchez Valencia (1986-91): monotonicity calculus
  • Nairn et al. (2006): an account of implicatives & factives
• We introduce a new theory of natural logic
  • extends the monotonicity calculus to account for negation & exclusion
  • incorporates elements of Nairn et al.'s model of implicatives

6. 7 basic entailment relations
• The seven relations: equivalence (=), forward entailment (⊏), reverse entailment (⊐), negation (^), alternation (|), cover (_), independence (#)
• Relations are defined for all semantic types: tiny ⊏ small, hover ⊏ fly, kick ⊏ strike, this morning ⊏ today, in Beijing ⊏ in China, everyone ⊏ someone, all ⊏ most ⊏ some
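To keep the symbols straight in the sketches that follow, here is a minimal encoding of the seven relations; the illustrative word pairs in the comments are assumptions in the spirit of the slides, not taken verbatim from them.

```python
# The seven basic entailment relations used throughout these slides.
EQUIVALENCE = "="          # e.g., couch = sofa
FORWARD_ENTAILMENT = "⊏"   # e.g., crow ⊏ bird
REVERSE_ENTAILMENT = "⊐"   # e.g., bird ⊐ crow
NEGATION = "^"             # exclusive and exhaustive, e.g., human ^ nonhuman
ALTERNATION = "|"          # exclusive but not exhaustive, e.g., cat | dog
COVER = "_"                # exhaustive but not exclusive, e.g., animal _ nonhuman
INDEPENDENCE = "#"         # none of the above, e.g., hungry # hippo

RELATIONS = {EQUIVALENCE, FORWARD_ENTAILMENT, REVERSE_ENTAILMENT,
             NEGATION, ALTERNATION, COVER, INDEPENDENCE}
```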

7. Entailment & semantic composition
• Ordinarily, semantic composition preserves entailment relations: eat pork ⊏ eat meat, big bird | big fish
• But many semantic functions behave differently: tango ⊏ dance, yet refuse to tango ⊐ refuse to dance; French | German, yet not French _ not German
• We categorize functions by how they project entailment
  • a generalization of monotonicity classes and implication signatures
  • e.g., not has projectivity {=:=, ⊏:⊐, ⊐:⊏, ^:^, |:_, _:|, #:#}
  • e.g., refuse has projectivity {=:=, ⊏:⊐, ⊐:⊏, ^:|, |:#, _:#, #:#}
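A minimal sketch of how these signatures can be used, assuming the relation symbols above; the two dictionaries simply transcribe the signatures given for not and refuse on this slide.

```python
# Projectivity signatures: how a function maps the entailment relation of its
# argument onto the entailment relation of the whole expression.
PROJECTIVITY = {
    "not":    {"=": "=", "⊏": "⊐", "⊐": "⊏", "^": "^", "|": "_", "_": "|", "#": "#"},
    "refuse": {"=": "=", "⊏": "⊐", "⊐": "⊏", "^": "|", "|": "#", "_": "#", "#": "#"},
}

def project(function_word: str, relation: str) -> str:
    """Project an argument-level entailment relation through one function."""
    return PROJECTIVITY[function_word][relation]

# tango ⊏ dance, hence refuse to tango ⊐ refuse to dance
assert project("refuse", "⊏") == "⊐"
# French | German, hence not French _ not German
assert project("not", "|") == "_"
```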

8. Projecting entailment relations upward
• If two compound expressions differ by a single atom, their entailment relation can be determined compositionally
• Assume idealized semantic composition trees
• Propagate the entailment relation between the atoms upward, according to the projectivity class of each node on the path to the root
• [Diagram: composition trees for "nobody can enter without a shirt" vs. "nobody can enter without clothes", with the entailment relation propagated upward through each node]
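A minimal sketch of this upward propagation, assuming (as in the slide's diagram) that without and nobody are downward monotone in the relevant argument while the other nodes on the path preserve containment; the node names and the flat path are an idealization of the composition tree.

```python
# Propagate a lexical entailment relation up an idealized composition tree by
# applying the projectivity of each node on the path from the edited atom to
# the root. Only the containment fragment {=, ⊏, ⊐} is covered in this sketch.
DOWNWARD = {"=": "=", "⊏": "⊐", "⊐": "⊏"}   # swaps the containment relations
UPWARD   = {"=": "=", "⊏": "⊏", "⊐": "⊐"}   # preserves them

NODE_PROJECTIVITY = {
    "without": DOWNWARD,   # downward monotone in its object
    "nobody":  DOWNWARD,   # downward monotone in its scope
    "enter":   UPWARD,
    "can":     UPWARD,
}

def project_up(relation: str, path_to_root: list) -> str:
    """Apply each node's projectivity, from the edited atom up to the root."""
    for node in path_to_root:
        relation = NODE_PROJECTIVITY[node][relation]
    return relation

# a shirt ⊏ clothes, and the relation flips at "without", then again at
# "nobody", so: nobody can enter without a shirt ⊏ nobody can enter without clothes
assert project_up("⊏", ["without", "enter", "can", "nobody"]) == "⊏"
```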

9. A (weak) inference procedure
• Find a sequence of edits connecting P and H
  • insertions, deletions, substitutions, …
• Determine the lexical entailment relation for each edit
  • Substitutions: depends on the meaning of the substituends: cat | dog
  • Deletions: ⊏ by default: red socks ⊏ socks
  • But some deletions are special: not ill ^ ill, refuse to go | go
  • Insertions are symmetric to deletions: ⊐ by default
• Project up to find the entailment relation across each edit
• Compose the entailment relations across the sequence of edits, à la Tarski's relation algebra
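A minimal sketch of the composition step. Only a handful of join-table entries that follow directly from the relation definitions are filled in; the real system defines the join for every pair of relations, and anything not listed here conservatively falls back to independence (#).

```python
# Compose (join) entailment relations across a sequence of edits.
# Partial join table: (first relation, second relation) -> joined relation.
JOIN = {
    ("⊏", "⊏"): "⊏",   # containment chains compose
    ("⊐", "⊐"): "⊐",
    ("^", "^"): "=",   # double negation
    ("⊏", "^"): "|",   # e.g., fish ⊏ animal, animal ^ nonanimal  =>  fish | nonanimal
    ("|", "^"): "⊏",   # e.g., fish | human, human ^ nonhuman     =>  fish ⊏ nonhuman
}

def join(r1: str, r2: str) -> str:
    if r1 == "=":
        return r2
    if r2 == "=":
        return r1
    # Conservative fallback for pairs not covered by this sketch.
    return JOIN.get((r1, r2), "#")

def compose(relations: list) -> str:
    """Left-to-right composition of per-edit relations into a final relation."""
    result = "="
    for r in relations:
        result = join(result, r)
    return result

# Example used later in these slides: fish | human with human ^ nonhuman gives ⊏.
assert compose(["|", "^"]) == "⊏"
```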

10. The NatLog system
• Pipeline: NLI problem → 1. linguistic analysis → 2. alignment → 3. lexical entailment classification → 4. entailment projection → 5. entailment composition → prediction
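The pipeline as a code skeleton, with the five stages passed in as placeholder functions; the answer mapping at the end (= and ⊏ to yes, ^ and | to no, otherwise unknown) is an assumption of this sketch, not a transcription of the system.

```python
# A skeletal NatLog-style pipeline. The five stage functions are hypothetical
# placeholders standing in for the components sketched on the following slides.
def natlog_predict(premise: str, hypothesis: str,
                   analyze, align, classify_edit, project_edit, compose) -> str:
    analysis = analyze(premise, hypothesis)          # 1. linguistic analysis
    edits = align(analysis)                          # 2. alignment (atomic edits)
    relations = [classify_edit(e) for e in edits]    # 3. lexical entailment classification
    projected = [project_edit(e, r)                  # 4. entailment projection
                 for e, r in zip(edits, relations)]
    final = compose(projected)                       # 5. entailment composition
    # Map the final relation to a three-way answer (yes / no / unknown).
    return {"=": "yes", "⊏": "yes", "^": "no", "|": "no"}.get(final, "unknown")
```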

11. Running example
P: Jimmy Dean refused to move without blue jeans.
H: James Dean didn't dance without pants.
Answer: yes
• OK, the example is contrived, but it compactly exhibits containment, exclusion, and implicativity

  12. refuse without JimmyDean move blue jeans Introduction • A Theory of Natural Logic • The NatLog System • Experiments with FraCaS • Experiments with RTE • Conclusion + + + – – – + + Step 1: Linguistic analysis • Tokenize & parse input sentences (future: & NER & coref & …) • Identify items w/ special projectivity & determine scope • Problem: PTB-style parse tree  semantic structure! S category: –/o implicatives examples: refuse, forbid, prohibit, … scope: S complement pattern: __ > (/VB.*/ > VP $. S=arg) projectivity: {=:=, ⊏:⊐, ⊐:⊏, ^:|, |:#, _:#, #:#} VP S VP VP PP NP NP NNP NNP VBD TO VB IN JJ NNS Jimmy Dean refused to move without blue jeans • Solution: specify scope in PTB trees using Tregex [Levy & Andrew 06]

13. Step 2: Alignment
• Alignment as a sequence of atomic phrase edits
• The ordering of edits defines a path through intermediate forms
  • need not correspond to sentence order
• Decomposes the problem into atomic inference problems
• We haven't (yet) invested much effort here
• Experimental results use alignments from other sources
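For the running example, one plausible edit sequence; the particular segmentation is an illustrative assumption, not necessarily the alignment the system would use.

```python
# An alignment as a sequence of atomic phrase edits connecting P and H.
# P: Jimmy Dean refused to move without blue jeans.
# H: James Dean didn't dance without pants.
EDITS = [
    ("SUB", "Jimmy Dean", "James Dean"),
    ("DEL", "refused to", None),
    ("INS", None, "didn't"),
    ("SUB", "move", "dance"),
    ("SUB", "blue jeans", "pants"),
]
```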

14. Step 3: Lexical entailment classification
• Goal: predict the entailment relation for each edit, based solely on lexical features, independent of context
• Approach: use lexical resources & machine learning
• Feature representation:
  • WordNet features: synonymy (=), hyponymy (⊏/⊐), antonymy (|)
  • other relatedness features: Jiang-Conrath (WordNet-based), NomBank
  • fallback: string similarity (based on Levenshtein edit distance)
  • also lexical category, quantifier category, implication signature
• Decision tree classifier
  • trained on 2,449 hand-annotated lexical entailment problems
  • e.g., SUB(gun, weapon): ⊏, SUB(big, small): |, DEL(often): ⊏
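A toy sketch of this classifier, assuming NLTK's WordNet interface and scikit-learn; the feature set and the four-example training set are illustrative stand-ins for the features and the 2,449 annotated problems described above.

```python
# A toy lexical entailment classifier: WordNet flags plus a string-similarity
# fallback, fed to a decision tree. Requires the WordNet data
# (nltk.download("wordnet")) and scikit-learn.
from nltk.corpus import wordnet as wn
from sklearn.tree import DecisionTreeClassifier

def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance (the fallback similarity)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def features(w1: str, w2: str) -> list:
    """Synonymy / hyponymy / hypernymy / antonymy flags plus string similarity."""
    s1, s2 = wn.synsets(w1), wn.synsets(w2)
    synonym = bool(set(s1) & set(s2))
    hyponym = any(y in x.closure(lambda s: s.hypernyms()) for x in s1 for y in s2)
    hypernym = any(x in y.closure(lambda s: s.hypernyms()) for x in s1 for y in s2)
    antonym_names = {a.name() for x in s1 for l in x.lemmas() for a in l.antonyms()}
    antonym = any(l.name() in antonym_names for y in s2 for l in y.lemmas())
    similarity = 1.0 - levenshtein(w1, w2) / max(len(w1), len(w2), 1)
    return [int(synonym), int(hyponym), int(hypernym), int(antonym), similarity]

# Tiny illustrative training set; the real classifier is trained on 2,449
# hand-annotated lexical entailment problems.
train = [("gun", "weapon", "⊏"), ("big", "small", "|"),
         ("cat", "dog", "|"), ("couch", "sofa", "=")]
clf = DecisionTreeClassifier().fit([features(a, b) for a, b, _ in train],
                                   [r for _, _, r in train])
print(clf.predict([features("hover", "fly")]))   # relation predicted for SUB(hover, fly)
```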

15. Step 3: Lexical entailment classification (continued) [figure]

16. Step 4: Entailment projection [figure, labeled "inversion"]

17. Step 5: Entailment composition (yields the final answer)
• For example: human ^ nonhuman and fish | human together yield fish ⊏ nonhuman

18. The FraCaS test suite
• FraCaS: a project in computational semantics [Cooper et al. 96]
• 346 "textbook" examples of NLI problems
• 3 possible answers: yes, no, unknown (not balanced!)
• 55% single-premise, 45% multi-premise (excluded)

19. Results on FraCaS [results table; 27% error reduction]

20. Results on FraCaS (continued)
• 27% error reduction
• In the largest category, all but one problem answered correctly
• High accuracy in the sections most amenable to natural logic
• High precision even outside areas of expertise

21. The RTE3 test suite
• Somewhat more "natural", but not ideal for NatLog
• Many kinds of inference not addressed by NatLog: paraphrase, temporal reasoning, relation extraction, …
• Big edit distance → propagation of errors from the atomic model

22. Results on RTE3: NatLog (each data set contains 800 problems)
• Accuracy is unimpressive, but precision is relatively high
• Strategy: hybridize with the Stanford RTE system
  • as in Bos & Markert 2006
  • but NatLog makes a positive prediction far more often (~25% vs. 4%)

23. Results on RTE3: hybrid system (each data set contains 800 problems)
• 4% accuracy gain (significant, p < 0.05)

24. Conclusion: what natural logic can't do
• Not a universal solution for NLI
• Many types of inference not amenable to natural logic:
  • paraphrase: Eve was let go = Eve lost her job
  • verb/frame alternation: he drained the oil ⊏ the oil drained
  • relation extraction: Aho, a trader at UBS, … ⊏ Aho works for UBS
  • common-sense reasoning: the sink overflowed ⊏ the floor got wet
  • etc.
• Also has a weaker proof theory than FOL
  • can't explain, e.g., de Morgan's laws for quantifiers: Not all birds fly = Some birds don't fly

25. Conclusion: what natural logic can do
Natural logic enables precise reasoning about containment, exclusion, and implicativity, while sidestepping the difficulties of translating to FOL. The NatLog system successfully handles a broad range of such inferences, as demonstrated on the FraCaS test suite. Ultimately, open-domain NLI is likely to require combining disparate reasoners, and a facility for natural logic is a good candidate to be a component of such a system.
Thanks! Questions?
