
Automatic Question Answering



Presentation Transcript


  1. Automatic Question Answering • Introduction • Factoid-Based Question Answering

  2. Uses of Automatic Question Answering • FAQ (Frequently Asked Questions) • Help Desks / Customer Service Phone Centers • Accessing Complex Sets of Technical Maintenance Manuals • Integrating QA in Knowledge Management and Portals • A Wide Variety of Other E-Business Applications

  3. Introduction

  4. Evaluating Extracted Answers

  5. A Brief Overview of QA systems

  6. General architecture • Pipeline: Question Classification/Preprocessing → Information Retrieval → Answer Extraction (question in, answer out) • Example: the question "What is Calvados?" matches the pattern /Q is /A, with /Q = "Calvados" • Query = "Calvados is" • Retrieved text = "…Calvados is often used in cooking…Calvados is a dry apple brandy made in…" • Pattern matching gives /A = "a dry apple brandy" • Answer: "Calvados" is "a dry apple brandy"
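To make the pipeline concrete, here is a minimal sketch of the three stages applied to the Calvados example. The helper names (classify_question, build_query, retrieve, extract_answer) and the toy corpus are illustrative assumptions, not part of any particular system described in the slides.

```python
# A minimal sketch of the three-stage pipeline above, applied to the Calvados
# example. Helper names and the toy corpus are illustrative assumptions.
import re

def classify_question(question):
    """Very rough question classification: 'What is X?' -> definition type."""
    return "DEFINITION" if re.match(r"(?i)what is\b", question) else "OTHER"

def question_focus(question):
    """Strip 'What is' and the trailing '?' to get /Q, e.g. 'Calvados'."""
    return re.sub(r"(?i)^what is\s+|\?$", "", question).strip()

def build_query(question):
    """Turn 'What is Calvados?' into the retrieval query 'Calvados is'."""
    return f"{question_focus(question)} is"

def retrieve(query, corpus):
    """Toy retrieval: return sentences containing the query string."""
    return [s for s in corpus if query.lower() in s.lower()]

def extract_answer(question, passages):
    """Apply the pattern '/Q is /A' to the retrieved passages."""
    focus = question_focus(question)
    for passage in passages:
        m = re.search(rf"{re.escape(focus)} is (a [^.,]+)", passage, re.I)
        if m:
            return m.group(1)
    return None

if __name__ == "__main__":
    corpus = ["Calvados is often used in cooking.",
              "Calvados is a dry apple brandy made in Normandy."]
    q = "What is Calvados?"
    if classify_question(q) == "DEFINITION":
        passages = retrieve(build_query(q), corpus)
        print(extract_answer(q, passages))  # -> a dry apple brandy made in Normandy
```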

  7. Approaches: Structured Knowledge-Base Approach • Create comprehensive Knowledge Base(s) or other Structured Database(s) • At the 10K Axiom Level: capable of answering factual questions within the domain • At the 100K Axiom Level: answer cause & effect / capability questions • At the 1000K Axiom Level: answer novel questions; identify alternatives • Deepest QA, but limited to the given subject domain. Ref: Dr. John D. Prange; AAAI Symposium 2002

  8. Different Approaches. Question: What is the largest city in England? • Text Match • Find text that says "London is the largest city in England" (or a paraphrase). Confidence is the confidence of the NL parser * the confidence of the source. Finding multiple instances pushes the source confidence toward 1. • "Superlative" Search • Find a table of English cities and their populations, and sort. • Find a list of the 10 largest cities in the world, and see which are in England. • Uses logic: if L > all objects in set R, then L > all objects in any set E ⊆ R. • Find the population of as many individual English cities as possible, and choose the largest. • Heuristics • London is the capital of England. (Not guaranteed to imply it is the largest city, but this is very frequently the case.) • Complex Inference • E.g. "Birmingham is England's second-largest city"; "Paris is larger than Birmingham"; "London is larger than Paris"; "London is in England". Reference: IBM TJ Watson AQUAINT Briefing
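As a rough illustration of the "superlative search" strategy, the sketch below sorts a hypothetical table of city populations and also applies the subset argument (if L outranks everything in R, it outranks everything in any E ⊆ R). The figures and helper names are placeholders for illustration only, not real data.

```python
# A minimal sketch of the "superlative search" ideas above, using placeholder
# population figures and hypothetical helper names for illustration only.

def largest_city(population_table):
    """Sort a table of city -> population entries and return the largest."""
    return max(population_table, key=population_table.get)

def largest_in_subset(world_cities_by_size, english_cities):
    """If L outranks every city in a globally sorted list R, it also outranks
    every city in any subset E of R: the first list entry that is English is
    therefore England's largest city (among those listed)."""
    for city in world_cities_by_size:          # sorted largest -> smallest
        if city in english_cities:
            return city
    return None

if __name__ == "__main__":
    # Placeholder figures, not real population data.
    table = {"London": 9_000_000, "Birmingham": 1_100_000, "Leeds": 800_000}
    print(largest_city(table))                                        # -> London
    print(largest_in_subset(["Tokyo", "Delhi", "London"], {"London"}))  # -> London
```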

  9. Problem Statements and supporting approaches

  10. Named-Entity (NE) based approach • The idea is that factoid questions fall into several distinctive types, such as "location", "date", "person", etc. Assuming we can recognize the question type correctly, the potential answer candidates can be narrowed down to the few NE types that correspond to that question type. • Intuitively, if the question is asking for a date, then an answer string identified as a location-type named entity is unlikely to be the correct answer. • However, it is important to bear in mind that neither question type classification nor NE recognition is perfect in the real world. Such filtering can be harmful when classification or recognition errors occur. • The aforementioned two types of information – sentence surface text patterns and answer candidate NE type – both come from the answer sentence side. The only information extracted from the question side is the question type, which is used for selecting patterns and NE types for matching. • The structural information in the question sentence, which is not available in the inputs to many other tasks (e.g. ad-hoc document retrieval), has not yet been fully utilized. • Recognizing this unique source of input information, there has been much recent work on finding better ways to utilize structural information, which we review next.
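A minimal sketch of the NE-filtering idea follows: map the predicted question type to the NE labels that are plausible answers and discard candidates with other labels. The type inventory and the mapping are illustrative assumptions; as noted above, a real system must cope with classification and tagging errors.

```python
# Sketch of NE-type filtering: keep only candidates whose NE label is
# compatible with the predicted question type. The mapping is an assumption.
EXPECTED_NE_TYPES = {
    "WHEN": {"DATE", "TIME"},
    "WHERE": {"LOCATION", "GPE"},
    "WHO": {"PERSON", "ORGANIZATION"},
}

def filter_candidates(question_type, candidates):
    """candidates: list of (answer_string, ne_label) pairs from an NE tagger."""
    allowed = EXPECTED_NE_TYPES.get(question_type)
    if allowed is None:
        # Unknown question type: filtering could discard the true answer,
        # so fall back to keeping every candidate.
        return candidates
    return [(text, label) for text, label in candidates if label in allowed]

# A "when" question should not be answered with a location.
print(filter_candidates("WHEN", [("Paris", "GPE"), ("14 July 1789", "DATE")]))
# -> [('14 July 1789', 'DATE')]
```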

  11. Models for combining multiple sources of information in answer scoring • Nearly all QA systems use evidence from multiple sources to decide what the best answer is for a given question. Lee et al. (2005) first decided the NE type of answer candidates based on the question type, and then ranked answer candidates based on four types of evidence: • The count of NE types from the question that also occur in the answer sentence, normalized by the number of NE types in the question. • The count of constraints (other than the NE type constraint) from the question that also occur in the answer sentence, normalized by the number of constraints in the question. • A pre-set score when the answer candidate string contains the identified question focus string. • A pre-set score when the answer candidate string appears in the answer sentence next to a term that contains the question focus string. Each listed condition contributes a normalized score if satisfied, and the final scoring function is the sum of these scores.
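The sketch below approximates this additive combination. The two pre-set scores and the set-based representation of the evidence are assumptions made for illustration; the slide only specifies that each satisfied condition contributes a (normalized) score and that the final score is their sum.

```python
# A minimal sketch of the additive evidence combination described for
# Lee et al. (2005). Pre-set score values and the set-based inputs are
# illustrative assumptions.
FOCUS_MATCH_SCORE = 0.5      # pre-set score (assumed value)
FOCUS_ADJACENT_SCORE = 0.25  # pre-set score (assumed value)

def score_candidate(question_ne_types, answer_ne_types,
                    question_constraints, answer_terms,
                    candidate, focus, candidate_neighbors):
    """Sum the four evidence scores for one answer candidate."""
    score = 0.0
    # 1. NE types shared by question and answer sentence,
    #    normalized by the number of NE types in the question.
    if question_ne_types:
        score += len(question_ne_types & answer_ne_types) / len(question_ne_types)
    # 2. Non-NE constraints from the question found in the answer sentence,
    #    normalized by the number of constraints in the question.
    if question_constraints:
        score += len(question_constraints & answer_terms) / len(question_constraints)
    # 3. Pre-set score when the candidate string contains the question focus.
    if focus in candidate:
        score += FOCUS_MATCH_SCORE
    # 4. Pre-set score when the candidate appears next to a term that
    #    contains the question focus.
    if any(focus in term for term in candidate_neighbors):
        score += FOCUS_ADJACENT_SCORE
    return score

print(score_candidate({"PERSON", "DATE"}, {"DATE"},
                      {"first", "president"}, {"first", "president", "was"},
                      candidate="George Washington", focus="president",
                      candidate_neighbors=["the", "first", "president"]))  # -> 1.75
```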

  12. Ad-Hoc Feature Weighting • Another example of ad-hoc feature weighting is the TREC-9 system described in (Ittycheriah et al. 2001). In the answer selection module, the following distance metrics are computed for each 3-sentence window: • Matching words: the sum of TF-IDF scores of the question words that appear in the 3-sentence window. • Thesaurus match: the sum of TF-IDF scores of the question words whose WordNet thesaurus matches appear in the 3-sentence window. • Mis-matching words: the sum of TF-IDF scores of the question words that do not appear in the 3-sentence window. • Dispersion: the number of words in the answer sentence that occur between matched question words. • Cluster words: the number of words that occur next to each other in both the answer sentence window and the question. • Having computed these distance metrics, they combined them using an ad-hoc scoring function that is not detailed in the paper.
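The sketch below computes the five distance metrics for a single 3-sentence window. TF-IDF weights and WordNet-style synonym sets are passed in rather than computed, and the combination step is omitted because the original paper does not detail its scoring function; the dispersion and cluster-word computations are approximations of the descriptions above.

```python
# Approximate computation of the per-window distance metrics listed above.
# TF-IDF weights and synonym sets are supplied by the caller (assumptions).

def window_metrics(question_words, window_words, tfidf, synonyms):
    """question_words / window_words: token lists (window in document order)
    tfidf: dict mapping a word to its TF-IDF weight
    synonyms: dict mapping a question word to its WordNet-style synonym set"""
    window_set = set(window_words)
    matched = [w for w in question_words if w in window_set]
    missed = [w for w in question_words if w not in window_set]

    matching_words = sum(tfidf.get(w, 0.0) for w in matched)
    thesaurus_match = sum(tfidf.get(w, 0.0) for w in missed
                          if synonyms.get(w, set()) & window_set)
    mismatching_words = sum(tfidf.get(w, 0.0) for w in missed)

    # Dispersion: window words lying between the matched question words.
    positions = sorted(i for i, w in enumerate(window_words) if w in matched)
    dispersion = (positions[-1] - positions[0] + 1 - len(positions)
                  if len(positions) > 1 else 0)

    # Cluster words: adjacent word pairs shared by window and question.
    question_bigrams = set(zip(question_words, question_words[1:]))
    cluster_words = sum(1 for pair in zip(window_words, window_words[1:])
                        if pair in question_bigrams)

    return {"matching_words": matching_words,
            "thesaurus_match": thesaurus_match,
            "mismatching_words": mismatching_words,
            "dispersion": dispersion,
            "cluster_words": cluster_words}
```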

  13. References • M. Wang. A Survey of Answer Extraction Techniques in Factoid Question Answering. Association for Computational Linguistics, 2006. • M. W. Bilotti. Linguistic and Semantic Passage Retrieval Strategies for Question Answering. PhD thesis, Carnegie Mellon University, 2009. • M. W. Bilotti, P. Ogilvie, J. Callan, and E. Nyberg. Structured retrieval for question answering. In Proc. SIGIR, 2007. • H. Cui, R. Sun, K. Li, M.-Y. Kan, and T.-S. Chua. Question answering passage retrieval using dependency relations. In Proc. SIGIR, 2005. • X. Geng, T. Liu, T. Qin, and H. Li. Feature selection for ranking. In Proc. SIGIR, 2007. • M. Kaisser and J. B. Lowe. Creating a research collection of question answer sentence pairs with Amazon's Mechanical Turk. In Proc. LREC, 2008. • D. Shen and M. Lapata. Using semantic roles to improve question answering. In Proc. EMNLP, 2007.
