
Context-sensitive description of objects



  1. Context-sensitive description of objects
     Mariët Theune (joint work with Emiel Krahmer)

  2. Introduction
     An important question: how to generate distinguishing descriptions of objects?
     State of the art: the Incremental Algorithm, Dale & Reiter (1995)
     Today's aims:
     • Show that Dale and Reiter's algorithm can be refined by taking salience into account
     • … which opens the way for several interesting extensions

  3. Overview
     • Dale & Reiter's Incremental Algorithm
     • A modified Incremental Algorithm
     • Extensions:
       • Pronouns
       • Relational descriptions
     • Implementation
     • Evaluation
     • Concluding remarks
     • Related work

  4. The Incremental Algorithm: Terminology
     Distinguishing description: an accurate description of the intended referent r, but not of any other object in the current context set
     Distractors: the objects from which r has to be distinguished (= all objects other than r)

  5. Terminology (cont.)
     Preferred attributes: the properties that human speakers and hearers prefer for a specific domain
     Best value: the value that is closest to the basic-level value of a property, and that still rules out the maximal number of distractors

  6. The Incremental Algorithm: Strategy
     Iterate through the list of preferred attributes,
     • adding the best value of an attribute if:
       - it rules out any distractors not previously ruled out
       - or the attribute is 'type'
     • terminating when a distinguishing description has been constructed (all distractors have been ruled out)
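To make the loop concrete, here is a minimal Python sketch of this strategy (not from the original slides; the domain encoding is an assumption, and the 'best value' computation over the type hierarchy is simplified to using the object's literal value — the optional distractors argument anticipates the salience-based modification discussed later):

```python
def incremental_algorithm(r, domain, preferred, distractors=None):
    """Select <attribute, value> pairs that (try to) distinguish r.

    domain maps object ids to {attribute: value} dicts.  For brevity the
    'best value' step (choosing the basic-level value in the type
    hierarchy) is omitted: the object's own value is used instead.
    """
    if distractors is None:
        distractors = {d for d in domain if d != r}
    distractors = set(distractors)
    description = []
    for attr in preferred:
        value = domain[r].get(attr)
        if value is None:
            continue
        ruled_out = {d for d in distractors if domain[d].get(attr) != value}
        # Add the attribute if it rules out new distractors, or if it is
        # 'type' (the type is kept so the description has a head noun).
        if ruled_out or attr == "type":
            description.append((attr, value))
            distractors -= ruled_out
        if not distractors:
            break  # distinguishing description constructed
    return description
```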

  7. The Incremental Algorithm: Example
     Domain (Dale & Reiter 1995:258):
     d1: <type, chihuahua>, <size, small>, <colour, black>
     d2: <type, chihuahua>, <size, large>, <colour, white>
     d3: <type, siamese cat>, <size, small>, <colour, black>
     Preferred attributes: < type, colour, size >

  8. Example (cont.)
     [Type hierarchy: animal → dog (chihuahua, poodle) and cat (siamese cat)]
     Describe d2:
     • Property 'type', best value = 'dog'
     • Property 'colour', best value = 'white'
     Result: < white, dog >
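Tracing the sketch above on this domain reproduces the result (with the types written at the basic level 'dog'/'cat' by hand, since the hierarchy step is not implemented in the sketch):

```python
domain = {
    "d1": {"type": "dog", "size": "small", "colour": "black"},
    "d2": {"type": "dog", "size": "large", "colour": "white"},
    "d3": {"type": "cat", "size": "small", "colour": "black"},
}
preferred = ["type", "colour", "size"]

print(incremental_algorithm("d2", domain, preferred))
# -> [('type', 'dog'), ('colour', 'white')], i.e. "the white dog"
```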

  9. The Incremental Algorithm
     Good points:
     • Fast and efficient due to lack of backtracking
     • Psychologically realistic
     Still lacking:
     • Construction of natural language expressions
     • Context sensitivity

  10. A modified algorithm: Adding salience
     Intuition: a definite description refers to the most salient object which has the properties expressed by it
     Salience (Lewis 1979): "The dog got in a fight with another dog."

  11. Adding salience (cont.)
     Salience weights (sw): in each state, every object is assigned a natural number between 0 (not salient) and 10 (maximally salient)
     Salience weight assignment:
     • In the initial state, the salience weight is 0 for all objects
     • If an object is mentioned, its salience weight increases
     • If an object is not mentioned, its salience weight decreases (to a minimum of 0)
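One simple way to realise this bookkeeping (a sketch; the step size of 1 and the clipping to the 0–10 range are assumptions — the slides only fix the range, and the weight actually assigned on mention may well depend on the grammatical role of the mention):

```python
def update_salience(salience, mentioned, step=1, max_weight=10):
    """Return new salience weights after a sentence has been generated.

    salience: dict mapping object ids to weights in [0, max_weight].
    mentioned: set of object ids referred to in the current sentence.
    """
    return {
        obj: min(w + step, max_weight) if obj in mentioned else max(w - step, 0)
        for obj, w in salience.items()
    }
```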

  12. A modified algorithm: Strategy
     Same as the Incremental Algorithm, except:
     • The distractors are those objects in the domain with a salience weight that is equal to or higher than that of the intended referent
     • An NP tree is built within the algorithm, to check the expressibility of properties (cf. Horacek 1997)
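The first point only changes how the initial distractor set is computed; a sketch (the NP-tree construction for expressibility checking is not shown here):

```python
def salience_filtered_distractors(r, domain, salience):
    """Distractors: objects at least as salient as the intended referent r."""
    return {d for d in domain if d != r and salience[d] >= salience[r]}
```

The resulting set can then be handed to the incremental sketch above through its optional distractors argument.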

  13. A modified algorithm: Example (1)
     Input:
     • Object r = d2
     • State s0: sw(d1) = sw(d2) = sw(d3) = 0
     • P = < type, colour, size, … >
     • L = <>
     Result: the2 white dog

  14. A modified algorithm: Example (2)
     Context: "The2 white dog and the3 cat …"
     Input:
     • Object r = d2
     • State s: sw(d2) = sw(d3) > sw(d1)
     • P = < type, colour, size, … >
     • L = <>
     Result: The2 dog …
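Example (2) can be traced with the sketches above (the domain and preferred list are the ones from the earlier example; the concrete salience weights are illustrative values consistent with sw(d2) = sw(d3) > sw(d1)):

```python
salience = {"d1": 0, "d2": 5, "d3": 5}   # after "The white dog and the cat ..."
distractors = salience_filtered_distractors("d2", domain, salience)   # {"d3"}
print(incremental_algorithm("d2", domain, preferred, distractors))
# -> [('type', 'dog')], i.e. simply "the dog": d1 is no longer a distractor,
#    so 'colour' is not needed
```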

  15. A modified algorithm: Example (3)
     Context: "The2 white chihuahua …"
     Input:
     • Object r = d2
     • State s: sw(d2) > sw(d1), sw(d3), sw(d4)
     • …
     Result: The2 dog …

  16. Extensions: Pronouns
     If
     • r is currently the single most salient object
     • and there is an antecedent for r,
     then pronominalise the reference to r.
     Example: "The2 white chihuahua was fast asleep. It2 was dreaming of tasty bones."
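As a condition, this could be checked along the following lines (a sketch; has_antecedent stands in for whatever discourse model the generator keeps):

```python
def can_pronominalise(r, salience, has_antecedent):
    """Pronominalise r iff it is the unique most salient object and has an antecedent."""
    top = max(salience.values())
    uniquely_most_salient = (
        salience[r] == top
        and sum(1 for w in salience.values() if w == top) == 1
    )
    return uniquely_most_salient and has_antecedent(r)
```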

  17. Extensions: Relational descriptions
     • Add relations as attributes
     • Add a hierarchy of relations [Figure: spatial, with NEXT_TO (left_of, right_of) and IN]
     • If a relation with object r' is included when describing r, then recursively call the algorithm to describe r'

  18. Relational descriptions (cont.)
     Input: d2, s0, P = < type, …, spatial >, L = <>
     • The first property ('type') rules out d1 and d4; best value is 'dog'
     • The next properties ('colour', 'size') rule nothing out
     • The spatial relation rules out d3; best value 'next to'

  19. Relational descriptions (cont.)
     Recursive call, input: d1, s0, P, L = < next_to(d2, d1) >
     • The spatial relation in L rules out all distractors
     • The 'type' property is included by default
     • Resulting description of d1: the snowman

  20. Relational descriptions (cont.)
     Result: The2 dog next to the1 snowman
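A rough sketch of how the recursive call could be wired up, building on the earlier functions (the relation encoding is an assumption, and unlike the real algorithm the relation hierarchy, the L list, and the best-value choice among relations are not modelled):

```python
def describe_with_relations(r, domain, relations, preferred, described=None):
    """Describe r; if a relation to another object is used, describe that object too.

    relations maps (subject, object) pairs to a relation label, e.g.
    {("d2", "d1"): "next_to"}.  'described' tracks objects already under
    description, blocking the regress "the dog next to the snowman next
    to the dog ...".
    """
    described = (described or set()) | {r}
    # Property part of the description (the plain incremental step).
    description = incremental_algorithm(r, domain, preferred)
    # Distractors that the selected properties alone do not rule out.
    remaining = {
        d for d in domain
        if d != r and all(domain[d].get(a) == v for a, v in description)
    }
    if remaining:
        for (subj, obj), rel in relations.items():
            if subj == r and obj not in described:
                # Include the relation and recursively describe the related object.
                related = describe_with_relations(obj, domain, relations,
                                                  preferred, described)
                description.append((rel, related))
                break
    return description
```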

  21. Implementation
     The modified algorithm has been implemented in LGM, IPO's data-to-speech system. Applications of LGM are:
     • DYD: information about Mozart compositions
     • GoalGetter: soccer reports
     • OVIS: train time information
     • VODIS: in-car route descriptions
     • Pavlov: toy system, testing the modified algorithm

  22. Evaluation
     The basic assumptions underlying the modified algorithm are that an anaphoric reference:
     1. Contains fewer properties
     2. Uses more general phrasing
     3. Is pronominalised whenever possible
     4. Obeys 1 and 2 also after an intervening sentence
     All hypotheses were experimentally confirmed, except 2 (which turns out to depend on wording).

  23. Concluding remarks
     Use of salience allows for the generation of context-sensitive descriptions:
     the2 white dog … the2 dog … it2

  24. Related work
     Krahmer, van Erk & Verleg (2001):
     • Domain represented as a labeled, directed graph
     • Property selection is subgraph construction
     [Figure: example domain as a labeled directed graph, with nodes d1, d2, d3 and edge labels 'dog', 'white', 'snowman', 'next-to']
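For illustration, such a domain graph can be written down directly, with properties as loops on a node and relations as edges between nodes (a sketch; the particular labels and node assignments follow the snowman example above and are otherwise assumptions, and the subgraph-construction search itself is not shown):

```python
# Nodes are the domain objects; edges are (from, label, to) triples.
# A property is a loop (from == to); a relation connects two different nodes.
nodes = {"d1", "d2", "d3"}
edges = [
    ("d1", "snowman", "d1"),
    ("d2", "dog", "d2"),
    ("d2", "white", "d2"),
    ("d3", "dog", "d3"),
    ("d2", "next_to", "d1"),
]
# Describing d2 then amounts to finding a connected subgraph containing d2
# (e.g. just the 'dog' and 'white' loops, or the 'next_to' edge plus d1's
# 'snowman' loop) that can only be matched onto the domain graph at d2 itself.
```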

  25. Related work
     Van der Sluis & Krahmer (in progress): generating referring expressions in a multi-modal context
     • Three kinds of salience: linguistic, inherent, and focus space salience
     • Pointing decision and determiner choice are added
     [Figure: pointing gestures with the expressions "this blue one" and "that white one left of the blue one"]
