
SIMS 290-2: Applied Natural Language Processing


Presentation Transcript


  1. SIMS 290-2: Applied Natural Language Processing Marti Hearst December 1, 2004

  2. Today • Discourse Processing • Going beyond the sentence • Characteristics • Cohesion / coherence • Given / new • Rhetorical structure • Issues: • Segmentation • Linear • Hierarchical • Text vs. Dialogue • Discourse cues vs. content change • Co-reference / anaphora resolution • Dialogue Processing

  3. What makes a text/dialogue coherent? “Consider, for example, the difference between passages (18.71) and (18.72). Almost certainly not. The reason is that these utterances, when juxtaposed, will not exhibit coherence. Do you have a discourse? Assume that you have collected an arbitrary set of well-formed and independently interpretable utterances, for instance, by randomly selecting one sentence from each of the previous chapters of this book.” vs…. Adapted from slide by Julia Hirschberg

  4. What makes a text/dialogue coherent? “Assume that you have collected an arbitrary set of well-formed and independently interpretable utterances, for instance, by randomly selecting one sentence from each of the previous chapters of this book. Do you have a discourse? Almost certainly not. The reason is that these utterances, when juxtaposed, will not exhibit coherence. Consider, for example, the difference between passages (18.71) and (18.72).” (J&M:695) Adapted from slide by Julia Hirschberg

  5. What makes a text coherent? • Discourse/topic structure • Appropriate sequencing of subparts of the discourse • Rhetorical structure • Appropriate use of coherence relations between subparts of the discourse • Referring expressions • Words or phrases, the semantic interpretation of which is a discourse entity Adapted from slide by Julia Hirschberg

  6. Information Status • Contrast • John wanted a poodle but Becky preferred a corgi. • Topic/comment • The corgi they bought turned out to have fleas. • Theme/rheme • The corgi they bought turned out to have fleas. • Focus/presupposition • It was Becky who took him to the vet. • Given/new • Some wildcats bite, but this wildcat turned out to be a sweetheart. • Contrast Speaker (S) and Hearer (H) Adapted from slide by Julia Hirschberg

  7. Determining Given vs. New • Entities when first introduced are new • Brand-new (H must create a new entity) I saw a dinosaur today. • Unused (H already knows of this entity) I saw your mother today. • Evoked entities are old -- already in the discourse • Textually evoked The dinosaur was scaly and gray. • Situationally evoked The light was red when you went through it. • Inferrables • Containing I bought a carton of eggs. One of them was broken. • Non-containing A bus pulled up beside me. The driver was a monkey. Adapted from slide by Julia Hirschberg

  8. Given/New and Definiteness/Indefiniteness • Subject NPs tend to be syntactically definite and old • Object NPs tend to be indefinite and new I saw a black cat yesterday. The cat looked hungry. • Definite articles, demonstratives, possessives, personal pronouns, proper nouns, quantifiers like all, every • Indefinite articles, quantifiers like some, any, one signal indefiniteness…but…. This guy came into the room Adapted from slide by Julia Hirschberg
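
These surface cues lend themselves to a crude heuristic. The sketch below is an illustration only (not from the slides); the cue lists are deliberately tiny, and as the slide's last example shows, indefinite "this" will fool it.

```python
# Toy definiteness heuristic based on the cues listed above (illustrative only;
# it will mislabel indefinite "this", bare plurals, etc.).
DEFINITE_CUES = {"the", "this", "that", "these", "those", "my", "your", "his",
                 "her", "its", "our", "their", "all", "every"}
INDEFINITE_CUES = {"a", "an", "some", "any", "one"}

def guess_definiteness(np_tokens):
    """Return 'definite', 'indefinite', or 'unknown' for a noun phrase (list of words)."""
    first = np_tokens[0].lower()
    if first in INDEFINITE_CUES:
        return "indefinite"
    if first in DEFINITE_CUES:
        return "definite"
    if np_tokens[0][0].isupper():      # proper nouns tend to pattern with the definites
        return "definite"
    return "unknown"

print(guess_definiteness(["a", "black", "cat"]))   # indefinite
print(guess_definiteness(["The", "cat"]))          # definite
```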

  9. Discourse/Topic Structure • Text Segmentation: • Linear • TextTiling • Look for changes in content words • Hierarchical • Grosz & Sidner’s theory of discourse structure • Morris & Hirst’s algorithm • Lexical chaining through Roget’s thesaurus • Hierarchical + Relations • Mann et al.’s Rhetorical Structure Theory • Marcu’s algorithm

  10. TextTiling • Goal: find multi-paragraph topics • Example: 21-paragraph article called Stargazers

  11. TextTiling • Goal: find multi-paragraph topics • But … it’s difficult to define topic (Brown & Yule) • Focus instead on topic shift or change • Change in content, by contrast with setting, scene, characters • Mechanism: • compare adjacent blocks of text • look for shifts in vocabulary

  12. Intuition behind TextTiling

  13. TextTiling Algorithm • Tokenization • Lexical Score Determination • Blocks • Vocabulary Introductions • Chains • Boundary Identification

  14. Tokenization • Convert text stream into terms (words) • Remove “stop words” • Reduce to root (inflectional morphology) • Subdivide into “token-sequences” (substitute for sentences) • Find potential boundary points (paragraph breaks)
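
A minimal sketch of this stage, under stated assumptions: a tiny illustrative stop-word list, crude suffix stripping in place of real inflectional morphology, and 20-word token-sequences.

```python
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "that"}  # tiny illustrative list

def tokenize(text):
    """Lower-case the text, drop stop words, crudely strip inflectional suffixes."""
    terms = []
    for w in re.findall(r"[a-z]+", text.lower()):
        if w not in STOPWORDS:
            terms.append(re.sub(r"(ing|ed|es|s)$", "", w))   # stand-in for real morphology
    return terms

def token_sequences(terms, n=20):
    """Group the term stream into fixed-length pseudo-sentences ('token-sequences')."""
    return [terms[i:i + n] for i in range(0, len(terms), n)]
```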

  15. Determining Scores • Compute a score at each token-sequence gap • Score based on lexical occurrences • Block algorithm:
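
The block-comparison formula shown on the original slide is not reproduced in this transcript. The sketch below shows the idea, assuming the token_sequences from the previous step and an illustrative block size k: the score at each gap is a cosine-style similarity between the term counts of the k token-sequences on either side, so low scores mark vocabulary shifts.

```python
from collections import Counter
from math import sqrt

def block_score(token_seqs, gap, k=10):
    """Similarity between the k token-sequences before and after a gap:
    inner product of term counts, normalized by the vector lengths."""
    left = Counter(t for seq in token_seqs[max(0, gap - k):gap] for t in seq)
    right = Counter(t for seq in token_seqs[gap:gap + k] for t in seq)
    dot = sum(left[t] * right[t] for t in left)
    norm = sqrt(sum(v * v for v in left.values()) * sum(v * v for v in right.values()))
    return dot / norm if norm else 0.0

def gap_scores(token_seqs, k=10):
    """One lexical score per token-sequence gap."""
    return [block_score(token_seqs, g, k) for g in range(1, len(token_seqs))]
```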

  16. Boundary Identification • Smooth the plot (average smoothing) • Assign depth score at each token-sequence gap • “Deeper” valleys score higher • Order boundaries by depth score • Choose the boundary cutoff (avg − sd/2)
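
Continuing the sketch (an illustration, not the reference implementation): smooth the gap scores, turn each valley into a depth score by measuring the rise to the nearest peak on each side, and keep the gaps whose depth exceeds the mean-minus-half-a-standard-deviation cutoff from the slide.

```python
from statistics import mean, stdev

def smooth(scores, window=3):
    """Simple moving-average smoothing of the gap-score plot."""
    half = window // 2
    return [mean(scores[max(0, i - half):i + half + 1]) for i in range(len(scores))]

def depth_scores(scores):
    """Depth at each gap: how far the score climbs to the nearest peak on each side."""
    depths = []
    for i, s in enumerate(scores):
        left = i
        while left > 0 and scores[left - 1] >= scores[left]:
            left -= 1
        right = i
        while right < len(scores) - 1 and scores[right + 1] >= scores[right]:
            right += 1
        depths.append((scores[left] - s) + (scores[right] - s))
    return depths

def boundaries(scores):
    """Indices of gaps chosen as topic boundaries, using the avg - sd/2 cutoff.
    Assumes at least two gap scores."""
    d = depth_scores(smooth(scores))
    cutoff = mean(d) - stdev(d) / 2
    return [i for i, depth in enumerate(d) if depth > cutoff]
```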

  17. Evaluation • DATA • Twelve news articles from Dialog • Seven human judges per article • “major” boundaries: chosen by >= 3 judges • Avg number of paragraphs: 26.75 • Avg number of boundaries: 10 (39%) • RESULTS • Between upper and lower bounds • Upper bound: judges’ averages • Lower bound: reasonable simple algorithm

  18. Assessing Agreement Among Judges • KAPPA Coefficient • Measures pairwise agreement • Takes expected chance agreement into account • P(A) = proportion of times judges agree • P(E) = proportion of agreement expected by chance • .43 to .68 (Isard & Carletta 95, boundaries) • .65 to .90 (Rose 95, sentence segmentation) • Here, K = .647
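
The coefficient itself is just chance-corrected agreement, K = (P(A) − P(E)) / (1 − P(E)). The numbers below are hypothetical, chosen only to show the arithmetic behind a value in the reported range.

```python
def kappa(p_agree, p_chance):
    """Chance-corrected agreement: K = (P(A) - P(E)) / (1 - P(E))."""
    return (p_agree - p_chance) / (1 - p_chance)

# Hypothetical proportions, chosen only to land near the reported K = .647
print(round(kappa(0.80, 0.43), 3))   # 0.649
```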

  19. TextTiling Conclusions • First computational investigation into multi-paragraph discourse units • Simple Discourse Cue: position-sensitive term repetition • Acceptable performance for some tasks • Has been reproduced/used by many researchers • Multi-lingual (applied by others to French, German, Arabic)

  20. What Can Hierarchical Structure Tell Us? Welcome to word processing. That’s using a computer to type letters and reports. Make a typo? No problem. Just back up, type over the mistake, and it’s gone. And, it eliminates retyping. And, it eliminates retyping. Adapted from slide by Julia Hirschberg

  21. Theory of Discourse Structure (Grosz & Sidner ‘86) • A prominent theory of discourse structure • Provides for multiple levels of analysis: S’s purpose as well as content of utterances and S and H’s attentional state • Identifies only a few, general relations that hold among intentions • Often leads to a hierarchical structure • Three components: • Linguistic structure • Intentional structure • Attentional structure Adapted from slide by Julia Hirschberg

  22. Example of Hierarchical Analysis (Morris and Hirst ’91)

  23. Rhetorical Structure Theory (Mann, Matthiessen, and Thompson ‘89) • One theory of discourse structure, based on identifying relations between parts of the text • Identify meaningful units and the relations between them • Clauses and clause-like units that are unequivocally the nucleus or satellite of a rhetorical relation. • [Only the midday sun at tropical latitudes is warm enough] [to thaw ice on occasions,] [but any liquid water formed in this way would evaporate almost instantly] [because of the low atmospheric pressure.] • Nucleus/satellite notion encodes asymmetry Adapted from slide by Julia Hirschberg
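
As a data structure, an RST analysis is a tree whose internal nodes carry a relation and mark each child as nucleus or satellite. The sketch below encodes the Mars example this way; the particular relation labels used (Purpose, Cause, Contrast) are assumptions for illustration, not the slide's analysis.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RSTNode:
    relation: str = None                    # relation among the children; None for a leaf
    text: str = None                        # leaf text (an elementary discourse unit)
    children: List["RSTNode"] = field(default_factory=list)
    nuclearity: List[str] = field(default_factory=list)   # 'N' or 'S' for each child

def leaf(text):
    return RSTNode(text=text)

# The Mars example from the slide, with illustrative (assumed) relation labels
tree = RSTNode(
    relation="Contrast", nuclearity=["N", "N"],            # Contrast is multinuclear
    children=[
        RSTNode(relation="Purpose", nuclearity=["N", "S"],
                children=[leaf("Only the midday sun at tropical latitudes is warm enough"),
                          leaf("to thaw ice on occasions,")]),
        RSTNode(relation="Cause", nuclearity=["N", "S"],
                children=[leaf("but any liquid water formed in this way would evaporate almost instantly"),
                          leaf("because of the low atmospheric pressure.")]),
    ],
)
```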

  24. Rhetorical Structure Theory • Some rhetorical relations: • Elaboration (set/member, class/instance, whole/part…) • Contrast: multinuclear • Condition: Sat presents precondition for N • Purpose: Sat presents goal of the activity in N • Sequence: multinuclear • Result: N results from something presented in Sat • Evidence: Sat provides evidence for something claimed in N Adapted from slide by Julia Hirschberg

  25. Determining high-level relations [Smart cards are not a new phenomenon.1] [They have been in development since the late 1970s and have found major applications in Europe, with more than a quarter of a billion cards made so far.2] [The vast majority of chips have gone into prepaid, disposable telephone cards, but even so the experience gained has reduced manufacturing costs, improved reliability and proved the viability of smart cards.3] [International and national standards for smart cards are well under development to ensure that cards, readers and the software for the many different applications that may reside on them can work together seamlessly and securely.4] [Standards set by the International Organization for Standardization (ISO), for example, govern the placement of contacts on the face of a smart card so that any card and reader will be able to connect.5] Adapted from slide by Daniel Marcu

  26. Representing implicit relations [Smart cards are becoming more attractive2] [as the price of microcomputing power and storage continues to drop.3] [They have two main advantages over magnetic-stripe cards.4] [First, they can carry 10 or even 100 times as much information5] [- and hold it much more robustly.6] [Second, they can execute complex tasks in conjunction with a terminal.7] Adapted from slide by Daniel Marcu

  27. What’s the Rhetorical Structure? • System: Hello. How may I help you? • User: I would like to find out why I was charged for a call? • System: What call would you like to inquire about? • User: My bill says I made a call to Syncamaloo, Texas, but I’ve never even heard of this town. • System: May I have the date of the call that appears on your bill? Adapted from slide by Julia Hirschberg

  28. Issues for RST • Many variations in expression • [I have not read this book.] [It was written by Bertrand Russell.] • [I have not read this book,] [which was written by Bertrand Russell.] • [I have not read this book written by Bertrand Russell.] • [I have not read this Bertrand Russell book.] • Rhetorical relations are ambiguous • [He caught a bad fever] [while he was in Africa.] • Circumstance > Temporal-Same-Time • [With its distant orbit, Mars experiences frigid weather conditions.] [Surface temperatures typically average about –60 degrees Celsius at the equator and can dip to –123 degrees C near the poles. ] • Evidence > Elaboration Adapted from slide by Daniel Marcu

  29. Identifying RS Automatically (Marcu ’99) • Train a parser on a discourse treebank • 90 RS trees, hand-annotated for rhetorical relations • Elementary discourse units (edu’s) linked by RR • Parser learns to identify N and S and their RR • Features: Wordnet-based similarity, lexical, structural • Uses discourse segmenter to identify discourse units • Trained to segment on hand-labeled corpus (C4.5) • Features: 5-word POS window, presence of discourse markers, punctuation, whether a verb has been seen,… • Eval: 96–98% accuracy Adapted from slide by Julia Hirschberg
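
To make the segmenter concrete, here is a sketch of the kind of per-token feature vector such a boundary classifier could use (POS tags in a 5-word window, a discourse-marker flag, punctuation, whether a verb has been seen). The marker list and feature names are illustrative assumptions, not Marcu's actual feature set.

```python
# Hypothetical feature extractor for a "does an edu boundary follow this token?" classifier.
DISCOURSE_MARKERS = {"because", "but", "although", "however", "while", "and", "so", "since"}

def segmentation_features(tokens, pos_tags, i):
    feats = {}
    for offset in range(-2, 3):                      # POS tags in a 5-word window around token i
        j = i + offset
        feats[f"pos_{offset}"] = pos_tags[j] if 0 <= j < len(pos_tags) else "PAD"
    feats["is_discourse_marker"] = tokens[i].lower() in DISCOURSE_MARKERS
    feats["followed_by_punct"] = i + 1 < len(tokens) and tokens[i + 1] in {",", ".", ";"}
    feats["verb_seen_so_far"] = any(t.startswith("VB") for t in pos_tags[:i + 1])  # Penn tags
    return feats
```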

  30. Identifying RS Automatically (Marcu ’99) • Evaluation of parser: • Id edu’s: Recall 75%, Precision 97% • Id hierarchical structure (2 edu’s related): Recall 71%, Precision 84% • Id nucleus/satellite labels: Recall 58%, Precision 69% • Id RR: Recall 38%, Precision 45% • Errors in the later stages are due mostly to edu mis-identification • Id of hierarchical structure and n/s status comparable to human when hand-labeled edu’s used • Hierarchical structure is easier to id than RR Adapted from slide by Julia Hirschberg

  31. Some Problems with RST (cf. Moore & Pollack ‘92) • How many Rhetorical Relations are there? • How can we use RST in dialogue as well as monologue? • RST does not allow for multiple relations holding between parts of a discourse • RST does not model overall structure of the discourse Adapted from slide by Julia Hirschberg

  32. Referring Expressions • Referring expressions are words or phrases, the semantic interpretation of which is a discourse entity (also called referent) • Discourse entities are semantic objects. • Can have multiple syntactic realizations within a text • Discourse entities exist in the domain D, in which a text is interpreted Adapted from slide by Ani Nenkova

  33. Referring Expressions: Example A pretty woman entered the restaurant. She sat at the table next to mine and only then I recognized her. This was Amy Garcia, my next door neighbor from 10 years ago. The woman has totally changed! Amy was at the time shy… Adapted from slide by Ani Nenkova

  34. Pronouns vs. Full NP A pretty woman entered the restaurant. She sat at the table next to mine and only then I recognized her. This was Amy Garcia, my next door neighbor from 10 years ago. The woman has totally changed! Amy was at the time shy… Adapted from slide by Ani Nenkova

  35. Definite vs. Indefinite NPs A pretty woman entered the restaurant. She sat at the table next to mine and only then I recognized her. This was Amy Garcia, my next door neighbor from 10 years ago. The woman has totally changed! Amy was at the time shy… Adapted from slide by Ani Nenkova

  36. Common Noun vs. Proper Noun A pretty woman entered the restaurant. She sat at the table next to mine and only then I recognized her. This was Amy Garcia, my next door neighbor from 10 years ago. The woman has totally changed! Amy was at the time shy… Adapted from slide by Ani Nenkova

  37. Modified vs. Bare head NP A pretty woman entered the restaurant. She sat at the table next to mine and only then I recognized her. This was Amy Garcia, my next door neighbor from 10 years ago. The woman has totally changed! Amy was at the time shy… Adapted from slide by Ani Nenkova

  38. Premodified vs. Postmodified A pretty woman entered the restaurant. She sat at the table next to mine and only then I recognized her. This was Amy Garcia, my next door neighbor from 10 years ago. The woman has totally changed! Amy was at the time shy… Adapted from slide by Ani Nenkova

  39. Anaphora resolution • Finding in a text all the referring expressions that have one and the same denotation • Pronominal anaphora resolution • Anaphora resolution between named entities • Full noun phrase anaphora resolution Adapted from slide by Ani Nenkova

  40. Anaphora Resolution A pretty woman entered the restaurant. She sat at the table next to mine and only then I recognized her. This was Amy Garcia, my next door neighbor from 10 years ago. The woman has totally changed! Amy was at the time shy… Adapted from slide by Ani Nenkova

  41. Pronominal anaphora resolution • Rule-based vs. statistical • (Kennedy & Boguraev 1996), (Lappin & Leass 1994) vs. (Ge et al. 1998) • Performed on full syntactic parse vs. on shallow syntactic parse • (Lappin & Leass 1994), (Ge et al. 1998) vs. (Kennedy & Boguraev 1996) • Type of text used for the evaluation • (Lappin & Leass 1994) computer manual texts (86% accuracy) • (Ge et al. 1998) WSJ articles (83% accuracy) • (Kennedy & Boguraev 1996) different genres (75% accuracy) Adapted from slide by Ani Nenkova
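
To make the rule-based family concrete, below is a heavily simplified salience-style sketch in the spirit of Lappin & Leass: candidate antecedents are filtered by gender/number agreement, and recency plus a subject-position bonus picks the winner. The weights and the Mention representation are assumptions for illustration, not any of the cited systems.

```python
from dataclasses import dataclass

@dataclass
class Mention:
    text: str
    gender: str       # 'f', 'm', or 'n'
    number: str       # 'sg' or 'pl'
    sentence: int     # index of the sentence the mention appears in
    is_subject: bool

PRONOUNS = {"she": ("f", "sg"), "her": ("f", "sg"), "he": ("m", "sg"),
            "him": ("m", "sg"), "it": ("n", "sg"), "they": (None, "pl")}

def resolve(pronoun, pron_sentence, candidates):
    """Return the agreeing candidate antecedent with the highest salience score."""
    gender, number = PRONOUNS[pronoun.lower()]
    best, best_score = None, float("-inf")
    for c in candidates:
        if c.sentence > pron_sentence:        # antecedent must precede the pronoun
            continue
        if (gender and c.gender != gender) or c.number != number:
            continue
        # Illustrative salience: recency plus a bonus for subject position
        score = -2 * (pron_sentence - c.sentence) + (1 if c.is_subject else 0)
        if score > best_score:
            best, best_score = c, score
    return best

# The "pretty woman" example from the earlier slides:
cands = [Mention("a pretty woman", "f", "sg", 0, True),
         Mention("the restaurant", "n", "sg", 0, False),
         Mention("the table", "n", "sg", 1, False)]
print(resolve("She", 1, cands).text)          # -> a pretty woman
```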

  42. Pronominal anaphora resolution • Generic vs specific reference 1. The Vice-President of the United States is also President of the Senate. 2. Historically, he is the President’s key person in negotiations with Congress 3a. He is required to be 35 years old. 3b. As Ambassador to China, he handled many tricky negotiations, so he is well prepared for the job Adapted from slide by Ani Nenkova

  43. Talking to a Machine….and (often) Getting an Answer • Today’s spoken dialogue systems make it possible to accomplish real tasks without talking to a person • Key advances • Stick to goal-directed interactions in a limited domain • Prime users to adopt the vocabulary you can recognize • Partition the interaction into manageable stages • Judicious use of system vs. mixed initiative

  44. Acoustic and Prosodic Cues to Discourse Structure • Intuition: • Speakers vary acoustic and prosodic cues to convey variation in discourse structure • Systematic? In read or spontaneous speech? • Evidence: • Observations from recorded corpora • Laboratory experiments • Machine learning of discourse structure from acoustic/prosodic features Adapted from slide by Julia Hirschberg

  45. Boston Directions Corpus (Hirschberg & Nakatani ’96) • Experimental Design • 12 speakers: 4 used • Spontaneous and read versions of 9 direction-giving tasks • Corpus: 50 min. read; 67 min. spontaneous • Labeling • Prosodic: ToBI intonational labeling • Discourse: Grosz & Sidner • Features used in analysis Adapted from slide by Julia Hirschberg

  46. Boston Directions Corpus: Describe how to get to MIT from Harvard • ds1: step 1, enter and get token first enter the Harvard Square T stop and buy a token • ds2: inbound on red line then proceed to get on the inbound um Red Line uh subway Adapted from slide by Julia Hirschberg

  47. ds3: take subway from hs, to cs to ks and take the subway from Harvard Square to Central Square and then to Kendall Square • ds4: describe ks station you’ll see a music sculpture there which will tell you it’s Kendall Square it’s very nice • ds5: get off T. then get off the T Adapted from slide by Julia Hirschberg

  48. Dialogue vs. Monologue • Monologue and dialogue both involve interpreting • Information status • Coherence issues • Reference resolution • Speech acts, implicature, intentionality • Dialogue involves managing • Turn-taking • Grounding and repairing misunderstandings • Initiative and confirmation strategies
