Presentation Transcript


  1. Capturing Salience with a Trainable Cache Model for Zero-anaphora Resolution Ryu Iida (Tokyo Institute of Technology) ryu-i@cl.cs.titech.ac.jp Kentaro Inui, Yuji Matsumoto (Nara Institute of Science and Technology) {inui,matsu}@is.naist.jp

  2. Introduction • Many researchers have focused on the research area of anaphora (coreference) • Important for NLP applications such as IE and MT • Anaphora resolution: search for the antecedent of a given anaphor within the search space • Example from the slide (the search space of a news text, with an antecedent and the anaphor that refers back to it marked): NTSB Chairman Jim Hall is to address a briefing on the investigation in Seattle Thursday, but board spokesman Mike Benson said Hall isn't expected to announce any findings. Benson said investigators are simulating air loads on the 737's rudder. ``It's a slow, methodical job since we don't have adequate black boxes,'' he said. Newer models of flight data recorders, or ``black boxes,'' would record the angle of the rudder and the pedal controlling it.

  3. Problem • A large search space makes practical anaphora resolution difficult • Task: reducing the search space • The full example search space from the slide: The National Transportation Safety Board is borrowing a Boeing 737 from Seattle's Museum of Flight as part of its investigation into why a similar jetliner crashed near Pittsburgh in 1994. The museum's aircraft, ironically enough, was donated by USAir, which operated the airplane that crashed, killing 132 people on board. The board is testing the plane's rudder controls to learn why Flight 427 suddenly rolled and crashed while on its approach to the Pittsburgh airport Sept. 8, 1994. Aviation safety investigators say a sharp movement of the rudder (the movable vertical piece in the plane's tail) could have caused the jet's deadly roll. NTSB Chairman Jim Hall is to address a briefing on the investigation in Seattle Thursday, but board spokesman Mike Benson said Hall isn't expected to announce any findings. Benson said investigators are simulating air loads on the 737's rudder. ``It's a slow, methodical job since we don't have adequate black boxes,'' he said. Newer models of flight data recorders, or ``black boxes,'' would record the angle of the rudder and the pedal controlling it.

  4. Previous work • Machine learning-based approaches (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Soon et al., 2001; Ng and Cardie, 2002; Seki et al., 2002; Isozaki and Hirao, 2003; Iida et al., 2005; Iida et al., 2007a; Yang et al., 2008) • Pay little attention to the search space problem • Heuristically limit the search space • e.g. the system considers only candidates occurring in the N previous sentences (Yang et al., 2008) • Problem: an antecedent is excluded when it is located further than N sentences from its anaphor

  5. Previous work (Cont'd) • Rule-based approaches (e.g. approaches based on Centering Theory (Grosz et al., 1995)) • Deal only with the salient discourse entities at each point of the discourse • Drawback: Centering Theory only retains information about the previous sentence • Exceptions: Suri & McCoy (1994), Hahn & Strube (1997) • Overcome this drawback • But are still limited by the restrictions fundamental to the notion of Centering Theory

  6. Our solution • Reduce the search space for a given anaphor by applying the notion of ``caching'' introduced by Walker (1996) • (The slide shows the same example search space as in the Introduction.)

  7. Our solution (Cont'd) • Extract the most salient candidates from the search space into a cache; in the example, the cache holds NTSB Chairman Jim Hall, investigators, the rudder • Search for the antecedent within the cache rather than in the full search space

  8. Implementation of cache models • Walker (1996)'s cache model • Two devices • Cache: holds the most salient discourse entities • Main memory: retains all other entities • Not fully specified for implementation • Our approach • Specify how to retain salient candidates, using machine learning to capture both the local and the global foci of discourse • Dynamic cache model (DCM)

  9. Dynamic cache model (DCM) • Dynamically update the cache information in a sentence-wise manner • Take into account local transitions of salience • (Figure: the entities e_i1 … e_iN of sentence S_i and the entries c_i1 … c_iM of the current cache C_i are fed to the dynamic cache model; each candidate is either retained in the next cache C_i+1 or discarded.)

  10. Dynamic cache model (DCM) • (Same figure as the previous slide: sentence entities and current cache entries go into the dynamic cache model, which retains some candidates in the next cache and discards the rest; a minimal sketch of this update step follows.) • It is difficult to create training instances for the problem of directly retaining the N most salient candidates
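As a rough illustration of the sentence-wise update described on slides 9 and 10, here is a minimal Python sketch. The salience scorer is a placeholder for the trained ranker introduced on the following slides, and the toy scores in the usage example are invented for illustration.

```python
def update_cache(cache, sentence_entities, cache_size, score):
    """One DCM update: merge the current cache with the entities of the new
    sentence, rank all candidates by a salience score, and retain the top
    `cache_size`; everything else is discarded.

    `score(candidate)` stands in for the trained ranker (a Ranking SVM in
    this work); any callable returning a number works for this sketch.
    """
    candidates = list(cache) + list(sentence_entities)
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[:cache_size]


# Toy usage with an invented scorer (purely illustrative):
cache = update_cache(["Tom", "kouen"], ["John"], cache_size=2,
                     score=lambda c: {"Tom": 0.9, "John": 0.7, "kouen": 0.2}.get(c, 0.0))
# cache -> ["Tom", "John"]
```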

  11. DCM: ranking candidates • Recast candidate selection as a ranking problem in machine learning • Training instances are created from the anaphoric relations annotated in a corpus • For a given candidate C in the current context (i.e. C is either in the current cache or appears in the current sentence): if C is referred to by an anaphor appearing in the following context, label it ``retained'' (1st place); otherwise label it ``discarded'' (2nd place)

  12. DCM: creating training instances • (Figure: from an annotated corpus with sentences S1–S3, candidates C and anaphors A, training instances are built sentence by sentence, e.g. retained (1st): C1 / discarded (2nd): C2 after S1, and retained (1st): C1, C4 / discarded (2nd): C3, C5, C6 after S3.) • C1 is referred to by anaphor A_i in S2, so it is labeled retained • C2 is not referred to by any anaphor appearing in the following context, so it is labeled discarded
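A minimal sketch of how the retained/discarded labels of slides 11 and 12 could be derived from annotated data; the (sentence_index, antecedent_id) representation of the annotation is an assumption made for illustration, not the actual NAIST Text Corpus format.

```python
def label_candidates(candidates, current_index, anaphor_antecedents):
    """Label each candidate visible at sentence `current_index` as
    'retained' (1st place) if some anaphor in a *later* sentence refers to it,
    otherwise 'discarded' (2nd place).

    `anaphor_antecedents` is a list of (sentence_index, antecedent_id) pairs,
    one per annotated anaphor -- a hypothetical stand-in for the corpus annotation.
    """
    labels = {}
    for cand_id in candidates:
        referred_later = any(sent_idx > current_index and ante_id == cand_id
                             for sent_idx, ante_id in anaphor_antecedents)
        labels[cand_id] = "retained" if referred_later else "discarded"
    return labels


# Toy example mirroring the slide: C1 is referred to by an anaphor in S2,
# C2 is never referred to later, so C1 is retained and C2 discarded.
print(label_candidates(["C1", "C2"], current_index=1,
                       anaphor_antecedents=[(2, "C1")]))
# {'C1': 'retained', 'C2': 'discarded'}
```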

  13. Zero-anaphora resolution process • (Figure: a worked example with cache size 2; φ denotes a zero-pronoun.)

  14. Zero-anaphora resolution process • (Figure, continued: the cache of size 2 first holds Tom (Tom), kouen (park), then Tom (Tom), John (John); φ denotes a zero-pronoun.)

  15. Zero-anaphora resolution process • (Figure, continued: the cache is updated once more, to Tom (Tom), kekka (result).)

  16. Zero-anaphora resolution process • (Figure, continued: final step of the worked example; the cache holds Tom (Tom), kekka (result), and the zero-pronoun φ is resolved against the cached candidates.)

  17. Evaluating the caching mechanism on Japanese zero-anaphora resolution • Investigate how each cache model contributes to candidate reduction • Explore the candidate reduction ratio of each cache model and its coverage • Coverage = (# of antecedents retained in the cache model) / (# of all antecedents) • Create a ranker using Ranking SVM (Joachims, 2002)
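The coverage measure written out as a tiny helper; the numbers in the usage example are illustrative, not results from the paper.

```python
def coverage(antecedents_in_cache, all_antecedents):
    """Coverage = (# of antecedents retained in the cache model) / (# of all antecedents)."""
    return antecedents_in_cache / all_antecedents


# e.g. if 630 of the 699 annotated antecedents were still in the cache when
# their zero-pronoun is reached, coverage would be about 0.9 (invented numbers).
print(round(coverage(630, 699), 2))  # -> 0.9
```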

  18. Data set • NAIST Text Corpus (Iida et al., 2007) • Data set for cross-validation: 287 articles • 699 zero-pronouns • Conduct 5-fold cross-validation

  19. Baseline cache models • Centering-based cache model • Stores the preceding ``wa'' (topic)-marked or ``ga'' (subject)-marked candidate antecedents • An approximation of the model proposed by Nariyama (2002) • Sentence-based cache model (Soon et al., 2001; Yang et al., 2008, etc.) • Stores the candidate antecedents in the N previous sentences of a zero-pronoun • Static cache model • Does not capture the dynamics of the text • Ranks candidates all at once according to the global focus of the text
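Rough sketches of the two simpler baselines, assuming each sentence is represented as a list of (entity, case_marker) pairs; this representation is our own simplification for illustration, not the corpus format.

```python
def sentence_based_cache(sentences, current_index, n_prev):
    """Sentence-based cache: all candidate antecedents occurring in the
    N previous sentences of the zero-pronoun."""
    start = max(0, current_index - n_prev)
    return [entity
            for sent in sentences[start:current_index]
            for entity, _marker in sent]


def centering_based_cache(sentences, current_index):
    """Centering-based cache (an approximation of Nariyama (2002)): keep the
    preceding candidates marked with 'wa' (topic) or 'ga' (subject)."""
    return [entity
            for sent in sentences[:current_index]
            for entity, marker in sent
            if marker in ("wa", "ga")]
```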

  20. Feature set for cache models • Default features • Part-of-speech, whether the candidate is located in a quoted sentence, whether it is located at the beginning of the text, case marker (i.e. wa, ga), whether it syntactically depends on the last bunsetsu unit (the basic phrasal unit in Japanese) of a sentence • Features used only in the DCM • The set of connectives intervening between C_i and the beginning of the current sentence S • The length of the anaphoric chain • Whether C_i is currently stored in the cache • The distance between S and C_i in sentences
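To make the feature list concrete, a hypothetical feature-extraction sketch; the candidate and sentence attribute names are placeholders, not the actual implementation.

```python
def extract_features(candidate, current_sentence, cache, connectives):
    """Feature dict for one candidate C_i.  The first group lists the default
    features shared by all cache models; the second group lists the DCM-only
    features.  All attribute names (`pos`, `in_quote`, ...) are hypothetical."""
    return {
        # default features
        "pos": candidate.pos,
        "in_quoted_sentence": candidate.in_quote,
        "at_beginning_of_text": candidate.sentence_index == 0,
        "case_marker": candidate.case_marker,                     # e.g. 'wa', 'ga'
        "depends_on_last_bunsetsu": candidate.depends_on_last_bunsetsu,
        # DCM-only features
        "intervening_connectives": tuple(connectives),            # between C_i and start of S
        "anaphoric_chain_length": candidate.chain_length,
        "currently_in_cache": candidate in cache,
        "sentence_distance": current_sentence.index - candidate.sentence_index,
    }
```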

  21. Results: caching mechanism • (Figure: search space reduction results for each cache model. CM: centering-based model, SM: sentence-based model.)

  22. Evaluating antecedent identification • Antecedent identification task of inter-sentential zero-anaphora resolution • Cache size: from 5 up to all candidates • Compare the three cache models • Centering-based cache model • Sentence-based cache model • Dynamic cache model • Also investigate computational time

  23. Antecedent identification and anaphoricity determination models • Antecedent identification model • Tournament model (Iida et al., 2003) • Selects the most likely candidate antecedent by conducting a series of matches in which candidates compete with each other • Anaphoricity determination model • Selection-then-classification model (Iida et al., 2005) • Determines anaphoricity by judging an anaphor as anaphoric only if its most likely candidate is judged to be its antecedent
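A minimal sketch of the two models as described on this slide; `beats` and `antecedent_score` are placeholders for the trained pairwise classifier and its confidence score, and the thresholding step is our simplified reading of the selection-then-classification decision.

```python
def tournament_select(candidates, anaphor, beats):
    """Tournament model: the most likely candidate antecedent is selected by a
    series of matches; the winner of each match meets the next candidate, and
    the final winner is returned.  `beats(a, b, anaphor)` is a placeholder for
    the pairwise classifier and returns True if `a` defeats `b`."""
    winner = candidates[0]
    for challenger in candidates[1:]:
        if beats(challenger, winner, anaphor):
            winner = challenger
    return winner


def selection_then_classification(candidates, anaphor, beats,
                                  antecedent_score, threshold):
    """Selection-then-classification: first select the most likely candidate,
    then judge the anaphor as anaphoric only if that candidate is accepted as
    its antecedent (here, if a placeholder confidence score clears a threshold)."""
    best = tournament_select(candidates, anaphor, beats)
    if antecedent_score(best, anaphor) >= threshold:
        return best   # anaphoric: return the antecedent
    return None       # judged non-anaphoric
```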

  24. Results of antecedent identification • (Figure: antecedent identification results. CM: centering-based model, SM: sentence-based model, DCM: dynamic cache model.)

  25. Results of antecedent identification • (Figure, continued: antecedent identification results. CM: centering-based model, SM: sentence-based model, DCM: dynamic cache model.)

  26. Conclusion • Proposed a machine learning-based cache model in order to reduce the computational cost of anaphora resolution • Recast discourse status updates as a ranking problem over discourse entities, using the anaphoric relations annotated in a corpus as clues • Our learning-based cache model drastically reduces the search space while preserving accuracy

  27. Future work • The current procedure for zero-anaphora resolution is carried out linearly • i.e. each antecedent is selected independently, without taking any other zero-pronouns into account • Trends in anaphora resolution have shifted toward more sophisticated approaches that globally optimize the interpretation of all referring expressions in a text • e.g. Poon & Domingos (2008): Markov Logic Networks • Next step: incorporate our caching mechanism into such global approaches

  28. Thank you for your kind attention

  29. Feature set used in antecedent identification models

  30. Overall zero-anaphora resolution • Investigate the effects of introducing the cache model on overall zero-anaphora resolution, including intra-sentential zero-anaphora resolution • Compare zero-anaphora resolution models with different cache sizes • Based on Iida et al. (2006)'s model • Exploits syntactic patterns as features

  31. Results of overall zero-anaphora resolution • All models achieved almost the same performance

  32. Static cache model (SCM) • Grosz & Sidner (1995)'s global focus • An entity or set of entities salient throughout the entire discourse • Characteristics of the SCM • Does not capture the dynamics of the text • Selects the N most salient candidates according to a rank based on the global focus of the text

  33. SCM: training and test phases • (Figure: in the training phase, a ranker is trained from instances built over sentences S1–S4, e.g. 1st place: C1, C4, C7 and 2nd place: C2, C3, C5, C6, C8, C9, C10, where C_i are candidate antecedents and φ_j are zero-pronouns; in the test phase the ranker outputs the N most salient candidates, e.g. 1st: C'1, 2nd: C'6, …, Nth: C'3.)
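For contrast with the dynamic model, a one-function sketch of the SCM: all candidates of a text are ranked once by a (placeholder) global-focus scorer, and the N most salient are kept for the whole text.

```python
def static_cache(all_candidates, cache_size, score):
    """Static cache model: rank every candidate in the text once by a
    global-focus salience score (placeholder for the trained ranker) and keep
    the N most salient candidates for the entire text."""
    return sorted(all_candidates, key=score, reverse=True)[:cache_size]
```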

  34. Zero-anaphora resolution process • For a given zero-pronoun φ in sentence S: • 1. Intra-sentential anaphora resolution • Search for an antecedent A_i in S • If A_i is found, return A_i; otherwise go to step 2 • 2. Inter-sentential anaphora resolution • Search for an antecedent A_j in the cache • If A_j is found, return A_j; otherwise φ is judged to be exophoric • 3. Cache update • Take into account the candidates in S as well as the candidates already retained in the cache
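A compact sketch tying the three steps together; the search and cache-update components are placeholders for the models sketched earlier.

```python
def resolve_zero_pronoun(zero_pronoun, sentence_entities, cache,
                         search_intra, search_inter, update_cache):
    """Steps from the slide: (1) intra-sentential search, (2) inter-sentential
    search restricted to the cache, otherwise the zero-pronoun is judged
    exophoric; (3) the cache is updated with the current sentence's candidates.
    All three components are placeholders for the models sketched earlier."""
    antecedent = search_intra(zero_pronoun, sentence_entities)   # step 1
    if antecedent is None:
        antecedent = search_inter(zero_pronoun, cache)           # step 2
    if antecedent is None:
        antecedent = "exophoric"
    new_cache = update_cache(cache, sentence_entities)           # step 3
    return antecedent, new_cache
```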

  35. Zero-anaphora • Zero-anaphor: a gap with an anaphoric function • Zero-anaphora resolution is becoming important in many applications • In Japanese, even obligatory arguments of predicates are often omitted when they are inferable from the context • 45% of nominative arguments are omitted in newspaper articles

  36. Zero-anaphora (Cont'd) • Example: Mary_i-wa John_j-ni (φ_j-ga) tabako-o yameru-youni it-ta. Mary_i-TOP John_j-DAT (φ_j-NOM) smoking-OBJ quit-COMP say-PAST PUNC ``Mary_i told John_j to quit smoking.'' (φ_i-ga) tabako-o kirai-dakarada. (φ_i-NOM) smoking-OBJ hate-BECAUSE PUNC ``Because (she_i) hates people smoking.'' • Two sub-tasks • Anaphoricity determination: determine whether a zero-pronoun is anaphoric • Antecedent identification: select an antecedent for a given zero-pronoun
