
Computational Models of Discourse Analysis

Carolyn Penstein Rosé, Language Technologies Institute / Human-Computer Interaction Institute

Presentation Transcript


  1. Computational Models of Discourse Analysis Carolyn Penstein Rosé Language Technologies Institute/ Human-Computer Interaction Institute

  2. Warm-Up • Get ready to discuss the following: • What did you notice was similar and different between the DA/Speech-Act paradigm of Levinson and the Negotiation framework of Martin & Rose? • What answers would Martin & Rose propose to the objections Levinson presented about DA, such as non-falsifiability or the looseness of conditional relevance constraints? To help you get started, here's a quote from Levinson about DA theorists. Do these all apply to the Negotiation framework? * No one posted. Are you still ready for the discussion?

  3. How is conversation locally managed? • Two main issues: • When do we speak? • Monday we'll look at a computational approach to modeling this (paper published last year at the most prestigious language technologies conference) • Given what was spoken last, what types of contributions are conditionally relevant now? • Today we'll hear Elijah talk about a computational approach to this problem (paper submitted this year to the most prestigious language technologies conference) • What should you get out of this? • Learn to see how state-of-the-art approaches computationalize constructs from theory more or less faithfully • Learn to evaluate computationalizations in light of theory

  4. Chicken and Egg… Main issue for this week: exploring sequencing and linking between speech acts in conversation (slide diagram labels: Operationalization, Computationalization) * Where do the ordering constraints come from? Is it the language? Or is it what is behind the language (e.g., intentions, task structure)? If the latter, how do we computationalize that?

  5. The nature of what we are modeling • What we can know about it and how certain we can be • How we learn what we know • SFL fits here • Qualitative observations, anthropology style • Rules, like speech acts

  6. The Negotiation System * This indicates an adjacency pair. * They consider the 13 leaf nodes as speech acts.

  7. How do we recognize speech acts? • Form-function correspondences: mood, modality markers, tense/finiteness, temporal adverbials, person of subject • Discourse markers • Or at least linguistic tests (could a discourse marker indicating a speech act be added?) • Indirect speech acts are a form of grammatical metaphor
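To make the form-function idea concrete, here is a minimal, hypothetical sketch of how surface cues such as mood and short response tokens could drive a first-pass speech act guesser. The rules, labels, and the rough mapping to negotiation moves in the comments are illustrative assumptions, not Martin & Rose's actual linguistic tests.

```python
# Sketch only: mapping surface-form cues to rough speech-act guesses.
# The rules and labels are illustrative, not the framework's actual tests.

def guess_speech_act(utterance: str) -> str:
    """Very rough form-function heuristics for a first-pass label."""
    text = utterance.strip().lower()
    if text.startswith(("please ", "could you", "would you")):
        # Politeness/modality markers: a request for action (roughly A2-like),
        # even when the surface form is interrogative (indirect speech act).
        return "request"
    if text.endswith("?"):
        # Interrogative mood: a request for information (roughly K2-like).
        return "question"
    if text.startswith(("ok", "okay", "sure", "yes", "no")):
        # Short response tokens often realize second pair parts.
        return "response"
    # Declarative mood as the default: giving information (roughly K1-like).
    return "statement"

if __name__ == "__main__":
    for u in ["Where is the chocolate?", "Could you pass the salt?",
              "Sure.", "It's in the fridge."]:
        print(u, "->", guess_speech_act(u))
```

A real tagger would combine many more cues and a trained model, but the sketch shows where form-function correspondences enter computationally.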

  8. More Insight into the Grammar of a Speech Act and Responses

  9. The Structure of an Exchange * Indicates simultaneous choices.

  10. The Structure of an Exchange • Adjacency Pairs: first pair part followed by second pair part, possibly with embedded pairs • Negotiation: one core move, possibly preceded by a secondary move • There can also be preparatory and follow-up moves – these are related to the core move, not embedded exchanges
  The example exchange on the slide, annotated both ways (adjacency-pair label / negotiation move):
  Q / dK2: S1: Can I ask you a question?
  Q / tr: S2: What did you say?
  A / rtr: S1: I said, "Can I ask you a question?"
  A / dK1: S2: Oh, sure.
  Q / K2: S1: Where is the chocolate?
  A / K1: S2: In the fridge.
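As a small illustration (not from the slides), the same exchange could be encoded as data carrying both annotation layers side by side; the alignment of labels to turns follows the order in which they appear above and should be read as an assumption.

```python
# Sketch: the example exchange under the two annotation schemes.
# Each tuple is (speaker, utterance, adjacency-pair label, negotiation move).
exchange = [
    ("S1", "Can I ask you a question?",           "Q (first pair part)",      "dK2"),
    ("S2", "What did you say?",                   "Q (embedded first part)",  "tr"),
    ("S1", 'I said, "Can I ask you a question?"', "A (embedded second part)", "rtr"),
    ("S2", "Oh, sure.",                           "A (second pair part)",     "dK1"),
    ("S1", "Where is the chocolate?",             "Q (first pair part)",      "K2"),
    ("S2", "In the fridge.",                      "A (second pair part)",     "K1"),
]

for speaker, utt, ap, neg in exchange:
    print(f"{speaker}: {utt:45s} {ap:28s} {neg}")
```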

  11. What would this look like in the Negotiation Framework?

  12. What would this look like in the Negotiation Framework? (The slide annotates the exchange with negotiation moves, including A2, A1/K2, K2, dK1, f, K1, tr, rtr, and dA1, with one move left on hold.) Do we get embedding, or do we abandon the K2 in T4? Do you think there is a linguistic test that can tell us?

  13. Tips for next time

  14. Tips for next time • We will look at a paper about turn taking • When perplexity is high, the model is having a harder time predicting what is next • For turn taking perplexity, we have a state representation that specifies at one time point which participants are talking and which are not • The model takes the current state into account and measures how surprised it is at the next state • If the next state is surprising given the current state, the perplexity at that time point is high
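Here is a minimal, self-contained sketch (not from the paper) of what that computation could look like: a toy bigram model over vocalization states, where each state records who is speaking in a given time frame, and perplexity measures how surprised the model is by the observed transitions. All function names and data below are illustrative.

```python
import math
from collections import Counter, defaultdict

# Toy turn-taking perplexity. A "state" records, at one time frame,
# which participants are vocalizing (1) and which are silent (0).

def train_bigram(state_seq):
    """Estimate P(next_state | current_state) from a training sequence."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(state_seq, state_seq[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

def perplexity(model, state_seq, floor=1e-6):
    """Per-transition perplexity on a test sequence; higher means more surprise."""
    logs = []
    for cur, nxt in zip(state_seq, state_seq[1:]):
        p = model.get(cur, {}).get(nxt, floor)  # unseen transitions get a tiny floor
        logs.append(-math.log2(p))
    return 2 ** (sum(logs) / len(logs))

# Two-party example: each state is (A speaking?, B speaking?).
train = [(1, 0), (1, 0), (0, 0), (0, 1), (0, 1), (1, 0), (0, 0), (0, 1)]
test  = [(1, 0), (0, 0), (0, 1), (0, 1), (1, 0)]

model = train_bigram(train)
print("test perplexity:", round(perplexity(model, test), 2))
```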

  15. Tips for next time • If you compare models based on turn taking perplexity, the one with lower perplexity probably has more of the information needed to account for transitions between states • Differences between models: • Whose behavior is contingent on whose behavior • Which data is used to build the model, and which data is used to test
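And a hedged sketch of the comparison idea: two toy models trained on the same data and evaluated on held-out frames, one predicting a participant's next state from her own current state only, the other also conditioning on the other participant's state. The helper avg_perplexity, the context functions, and the data are hypothetical, not Laskowski's actual models.

```python
import math
from collections import Counter, defaultdict

# Toy comparison under a train/test split: does predicting B's next state
# improve when the model also conditions on A's current state?

def avg_perplexity(context_fn, train, test, floor=1e-6):
    """Train P(B_next | context) on `train`, report perplexity on `test`."""
    counts = defaultdict(Counter)
    for i in range(len(train) - 1):
        counts[context_fn(train[i])][train[i + 1][1]] += 1
    logs = []
    for i in range(len(test) - 1):
        ctx, nxt = context_fn(test[i]), test[i + 1][1]
        total = sum(counts[ctx].values())
        p = counts[ctx][nxt] / total if total and counts[ctx][nxt] else floor
        logs.append(-math.log2(p))
    return 2 ** (sum(logs) / len(logs))

# Frames of (A speaking?, B speaking?); B tends to start once A stops.
train = [(1, 0), (1, 0), (0, 0), (0, 1), (0, 1), (1, 0), (0, 0), (0, 1), (0, 1), (1, 0)]
test  = [(1, 0), (0, 0), (0, 1), (0, 1), (1, 0), (0, 0), (0, 1)]

solo  = avg_perplexity(lambda s: s[1], train, test)  # B's own state only
joint = avg_perplexity(lambda s: s,    train, test)  # both A's and B's states
print(f"B-only context: {solo:.2f}   A+B context: {joint:.2f}")
```

In this toy example the model that also sees A's behavior gets lower test perplexity, which is the sense in which it "has more of the information needed to account for transitions between states."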

  16. Tips for next time • What do the results say about how conversation is locally managed? • Considering that we're really good at deciding when we can start talking, what must we be paying attention to? • Connects back to the discussion in Levinson, pp. 296-303 • Based on your reading of Levinson, what other experiments would you propose that Laskowski run?

  17. Questions?
