
Japanese Abbreviation Expansion with Query and Clickthrough Logs






Presentation Transcript


  1. Japanese Abbreviation Expansion with Query and Clickthrough Logs Kei Uchiumi†, Mamoru Komachi‡, Keigo Machinaga, Toshiyuki Maezawa†, Toshinori Satou†, Yoshinori Kobayashi† †: Yahoo Japan Corporation ‡: Nara Institute of Science and Technology

  2. Query expansion improves recall for search engines • Example: “cod” → “Call of Duty”

  3. Previously: using a handmade dictionary • Lexicographers identified pairs of queries and their expansions

  4. Recently: hard to compile manually • Time-consuming to construct a dictionary • Requires domain knowledge • The web grows rapidly • Even harder to maintain an up-to-date dictionary

  5. Our purpose: generating an abbreviation dictionary from web search logs • Web search logs are an excellent resource for many NLP applications in the web domain • Clickthrough logs • Learning semantic categories [Komachi et al. 2009] • Named entity extraction [Jain et al. 2010] • Search query logs • Query alteration [Hagiwara et al. 2009] • Acquiring semantic categories [Sekine et al. 2007]

  6. The main contributions • This method is used as an assistant tool for dictionary construction at Yahoo! Japan • A novel re-ranking method that combines web query and clickthrough logs • The first attempt to automatically recognize full spellings given a Japanese abbreviation

  7. Agenda • Introduction • Query reformulation based on noisy channel model • Query Abbreviation model • Query Language Model • Evaluation • Related work • Conclusion

  8. Agenda • Introduction • Query reformulation based on noisy channel model • Query Abbreviation model • Query Language Model • Evaluation • Related work • Conclusion

  9. Noisy channel model for query reformulation • Rank expansion candidates by the product of a query abbreviation model P(q|c) and a query language model P(c)
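
A minimal reconstruction of the decomposition this slide illustrates, inferred from slides 10, 17 and 41 (where P(q|c) is the query abbreviation model and P(c) the query language model):

```latex
c^{*} = \operatorname*{argmax}_{c} P(c \mid q)
      = \operatorname*{argmax}_{c} \frac{P(q \mid c)\,P(c)}{P(q)}
      = \operatorname*{argmax}_{c} \underbrace{P(q \mid c)}_{\text{query abbreviation model}} \; \underbrace{P(c)}_{\text{query language model}}
```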

  10. Reformulation flow • Offline part: clickthrough logs → clickthrough graph (query abbreviation model); search query logs → query language model • Online part: query q → candidates c1, c2, c3, … extracted from the clickthrough graph → re-ranking with the query language model → outputs ca, cb, cc, …
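
A minimal sketch of the online part of this flow, assuming the offline part has already produced a candidate table (query → candidates with abbreviation-model scores P(q|c)) and a language model exposing a log_prob method; the names below are hypothetical illustrations, not the authors' code:

```python
import math

def rerank(query, candidate_table, language_model, top_k=10):
    """Rerank expansion candidates by the noisy channel score
    log P(q|c) + log P(c)."""
    # candidate_table maps a query to [(candidate, p_q_given_c), ...]
    candidates = candidate_table.get(query, [])
    scored = []
    for cand, p_q_given_c in candidates:
        score = math.log(p_q_given_c) + language_model.log_prob(cand)
        scored.append((score, cand))
    scored.sort(reverse=True)                  # highest score first
    return [cand for _, cand in scored[:top_k]]
```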

  11. Label propagation on a clickthrough graph [Figure: a bipartite query–URL clickthrough graph around the seed query "abc", connecting URLs such as www.abc-tokyo.com, abcnews.go.com, www.alphabetsong.org and en.wikipedia.org with queries such as "american broadcasting corporation", "alphabet song" and "austrian ballet company". The depth of the color of the edges indicates relatedness between nodes; the depth of the color of the nodes represents relatedness to the seed.]

  12. Problems of applying [Komachi et al. 2009] to our query reformulation task • Preliminary experiments showed that [Komachi et al. 2009] cannot be directly applied to our task • It extracted not only synonymous expressions but also semantically related, non-synonymous ones • It failed to alleviate semantic drift because it uses normalized frequency

  13. One-step approximation prevents extraction of non-synonymous expressions • The one-step approximation extracts queries landing on the same URLs by one hop of label propagation • These queries are likely synonyms of the seed, so the query can be corrected without semantic transformation
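
A rough sketch of the one-step (1-hop) extraction described above: collect the queries that share clicked URLs with the seed. The data layout is an assumption, not the paper's implementation:

```python
from collections import defaultdict

def one_hop_candidates(seed, clickthrough_pairs):
    """clickthrough_pairs: iterable of (query, url) pairs from the logs.
    Returns queries that share at least one clicked URL with the seed."""
    url_to_queries = defaultdict(set)
    query_to_urls = defaultdict(set)
    for query, url in clickthrough_pairs:
        url_to_queries[url].add(query)
        query_to_urls[query].add(url)

    candidates = set()
    for url in query_to_urls[seed]:           # hop out: seed -> clicked URLs
        candidates |= url_to_queries[url]     # hop back: URLs -> co-clicking queries
    candidates.discard(seed)
    return candidates
```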

  14. Using normalized PMI [Bouma, 2009] as a countermeasure against semantic drift • PMI assigns high scores to low-frequency events • Using it naively makes the clickthrough graph dense

  15. Cutting off the negative values • Edges are represented as the (i,j)-th elements of a matrix W • Cut off the values lower than a threshold θ (θ ≥ 0) • The range of Wij can then be normalized to [0,1] • This prevents W from being dense • This reduces the noise in the data
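
A sketch of an NPMI edge weight with the non-negative cutoff described above, assuming simple co-click counts; the exact weighting used in the paper may differ:

```python
import math

def npmi_weight(c_qu, c_q, c_u, total, theta=0.1):
    """Normalized PMI between a query q and a URL u, cut off below theta.
    c_qu: co-click count, c_q / c_u: marginal counts, total: all clicks."""
    if c_qu == 0:
        return 0.0
    p_qu = c_qu / total
    p_q, p_u = c_q / total, c_u / total
    if p_qu >= 1.0:                       # degenerate case: only one pair in the logs
        return 1.0
    npmi = math.log(p_qu / (p_q * p_u)) / (-math.log(p_qu))   # range [-1, 1]
    return npmi if npmi >= theta else 0.0   # keep only values >= theta (theta >= 0)
```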

  16. Reformulation flow • Offline part: clickthrough logs → clickthrough graph (query abbreviation model); search query logs → query language model • Online part: query q → candidates c1, c2, c3, … extracted from the clickthrough graph → re-ranking with the query language model → outputs ca, cb, cc, …

  17. Character n-gram query language model • A candidate c is a contiguous sequence of n characters: c = (x0, x1, …, xn−1) • A language model estimated from search query logs • P(c) represents the likelihood of c as a query

  18. Character n-grams are robust for Japanese web NLP • It is hard to compute the likelihood of neologisms with a word n-gram language model • Characters themselves carry essential semantic information in Chinese and Japanese [Asahara and Matsumoto, 2004][Huang and Zhao, 2006] • We use character 5-grams for the query language model
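
A toy character 5-gram language model in the spirit of slides 17–18, with naive add-one smoothing and the length normalization mentioned on slide 36; this is an illustrative sketch under assumed details, not the authors' implementation:

```python
from collections import defaultdict
import math

class CharNgramLM:
    def __init__(self, n=5):
        self.n = n
        self.ngram = defaultdict(int)     # counts of character n-grams
        self.context = defaultdict(int)   # counts of (n-1)-character contexts

    def train(self, queries):
        for q in queries:
            s = "^" * (self.n - 1) + q + "$"    # pad with start/end markers
            for i in range(len(s) - self.n + 1):
                self.ngram[s[i:i + self.n]] += 1
                self.context[s[i:i + self.n - 1]] += 1

    def log_prob(self, query, vocab_size=10000):
        """Length-normalized log-likelihood with add-one smoothing."""
        s = "^" * (self.n - 1) + query + "$"
        logp = 0.0
        for i in range(len(s) - self.n + 1):
            num = self.ngram[s[i:i + self.n]] + 1
            den = self.context[s[i:i + self.n - 1]] + vocab_size
            logp += math.log(num / den)
        return logp / max(len(query), 1)   # divide by the candidate string length
```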

  19. Agenda • Introduction • Query reformulation based on noisy channel model • Query Abbreviation model • Query Language Model • Evaluation • Related work • Conclusion

  20. Japanese abbreviation expansion data set • Test set • 1,916 ‘acronym’, ‘kanji’ and ‘kana’ abbreviations • Collected from the Japanese version of Wikipedia • Single letters and duplicates were removed • Training set • Clickthrough logs • 2009/10/22 – 2009/11/9, 2010/1/1 – 2010/1/16 • About 17,000,000 pairs of queries and URLs • Pairs that occurred fewer than 10 times were cut off • Web search query logs • 2009/8/1 – 2010/1/27 • About 52,000,000 unique queries • Queries that occurred fewer than 10 times were cut off

  21. Judgment guideline • Table 1: correction patterns for abbreviation expansion • Table 2: examples of abbreviation and correction pairs

  22. Evaluation measure • The agreement rate of the abbreviation/expansion pair judgments: 47.0% • Cohen’s kappa: κ = 0.63

  23. Comparison methods • Evaluated the reranking performance on 50 candidates extracted from the clickthrough logs • Candidates are extracted by the one-step approximation • Compared three reranking methods • Ranking using the abbreviation model (AM) only • Reranking using the language model (LM) only • Reranking using both AM and LM

  24. Reranking with the query language model improves both precision and coverage at top-10 • The result of using only the QAM is equivalent to the method of Komachi et al. (2009) using NPMI instead of raw frequency

  25. Examples of inputs and their candidate corrections • Blue: correct • Red: incorrect

  26. Error analysis • Table 3: types of errors • Besides the above reasons, 280 out of 1,916 queries did not appear in the clickthrough logs

  27. A partially correct query • The likelihood of a partial query can become higher than that of its correct full spelling • Although the likelihood was divided by the length of the candidate string, this still fails to filter out query fragments

  28. A correct query but with an additional attribute word • Includes combinations of correct queries and commonly used attribute words • e.g. “* 意味” (“* meaning”), “* とは” (“what does * mean?”), etc. • 857 queries that co-occurred with these attribute words were classified as incorrect

  29. A related but non-abbreviated term • A number of abbreviations coincide with other general nouns • e.g. “dog” (DOG: Disk Original Group) • These abbreviations are hard to expand correctly at present

  30. Agenda • Introduction • Query reformulation based on noisy channel model • Query Abbreviation model • Query Language Model • Evaluation • Related work • Conclusion

  31. Related Work • Spelling correction based on edit distance • Noisy channel model with a language model built from query logs [Cucerzan and Brill, 2004] • Reranking by applying a neural network to the spelling correction candidates obtained from Cucerzan’s method [Gao et al. 2010][Sun et al. 2010] • Synonym extraction • Similarity based on the JS divergence of the commonly clicked URL distributions between queries [Wei et al. 2009] • Query expansion • A unified approach using CRFs with extended feature functions [Guo et al. 2008]

  32. Agenda • Introduction • Query reformulation based on noisy channel model • Query Abbreviation model • Query Language Model • Evaluation • Related work • Conclusion

  33. Conclusion • We have proposed a query expansion method using web search logs • In our experiments, a combination of label propagation and a language model outperformed methods using either label propagation or a language model alone • In the future, we will address this task with discriminative learning as a ranking problem

  34. Any Questions?

  35. PageRank on a query graph • Example queries: 国際労働機関 (International Labour Organization), 国際労働機関とは (what is *), 国際労働機関 意味 (* meaning), 国際労働機関 役割 (* role) • Partial queries do not co-occur with attribute words frequently • Edges represent common co-occurring words between queries • Expected to assign higher scores to correct queries than the QLM and QAM do
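
A minimal power-iteration PageRank sketch over such a query graph, assuming edge weights count shared co-occurring attribute words; this illustrates the idea only and is not the authors' setup:

```python
def pagerank(adjacency, damping=0.85, iterations=50):
    """adjacency: dict mapping every query node to {neighbor: edge_weight};
    every node must appear as a key. Returns a score per node."""
    nodes = list(adjacency)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            out_weight = sum(adjacency[n].values()) or 1.0
            for m, w in adjacency[n].items():
                # distribute rank proportionally to edge weights
                new_rank[m] += damping * rank[n] * w / out_weight
        rank = new_rank
    return rank
```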

  36. Parameters • Construction of the clickthrough graph • The threshold θ for the elements Wij was set to 0.1 • The parameter α for label propagation was set to 0.0001 • Construction of the language model • Character 5-grams • The likelihood was divided by the length of the candidate string

  37. Correct candidate types • Table: correct candidate types

  38. Cohen’s kappa • κ = 0.63
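
For reference, the standard definition of Cohen's kappa behind this figure, where p_o is the observed agreement rate and p_e the agreement expected by chance:

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```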

  39. [Komachi et al. 2009] • Suggested that normalized frequency causes semantic drift • Suggested using relative frequency as a countermeasure against semantic drift

  40. P-values of Wilcoxon’s signed-rank test • Comparison of the harmonic mean of precision and coverage of each model, with k ranging from 1 to 50

  41. Query abbreviation model • Uses the label propagation method on a clickthrough graph (based on [Komachi et al. 2009]) • The probability from label propagation can be regarded as the conditional probability P(q|c) • Label propagation is mathematically identical to the random walk with restart [Tong and Faloutsos, KDD 06]
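
One common closed form of label propagation with a regularization parameter α, in the style of the graph-based method that Komachi et al. (2009) build on; conventions for α vary between papers, so the α quoted on slide 36 may correspond to a different parameterization:

```latex
F^{(t+1)} = \alpha\, S\, F^{(t)} + (1-\alpha)\, Y
\;\;\Rightarrow\;\;
F^{*} = (1-\alpha)\,(I - \alpha S)^{-1} Y,
\qquad S = D^{-1/2} W D^{-1/2}
```

Here W would be the (NPMI-weighted) query–URL adjacency matrix, D its degree matrix, and Y the seed label vector.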
