
MACHINE READING AND QUESTION ANSWERING

Presentation Transcript


  1. MACHINE READING AND QUESTION ANSWERING Heng Ji jih@rpi.edu April 8, 2019 Acknowledgement: Many slides from Ruiqi Yang, Julia Hirschberg, Niranjan Balasubramanian, Chris Manning

  2. Google now

  3. Wolfram Alpha

  4. Introduction to Machine Reading Comprehension What is reading comprehension? • Provided with one or more pieces of evidence, give the answer to a question.

  5. Introduction to Machine Reading Comprehension Difference from other QA settings • General QA: use everything • rules/templates, knowledge bases/dictionaries, raw text, ... • Reading comprehension: a single knowledge source • forces the model to "comprehend"

  6. Introduction to Machine Reading Comprehension 4 Phases • Small real data / large synthetic data: bAbI, ... • Case study: MemNN • Cloze-style reading comprehension: CNN/DailyMail, CBT, ... • Case study: AoA Reader • Single evidence, a continuous span as the answer: SQuAD, ... • Case study: BiDAF • Multiple pieces of evidence, no restriction on answers: MS MARCO, TriviaQA, ... • Case study: DrQA

  7. Introduction to Machine Reading Comprehension bAbI Dataset • Synthetic data • 20 tasks • Single Supporting Fact • Two Supporting Facts • Counting • Simple Negation • Time Reasoning • ... Weston, Jason, et al. "Towards AI-complete question answering: A set of prerequisite toy tasks." (2015).

  8. Introduction to Machine Reading Comprehension Case study: MemNN

  9. Introduction to Machine Reading Comprehension CNN/DailyMail & CBT • Real data, automatically generated • CNN/DailyMail: fill in the blank with an entity in the summary • Children's Book Test: select a candidate to fill in the blank in the next sentence • Large in scale • CNN/DailyMail: 1.4M Q-A pairs • Children's Book Test: 688K Q-A pairs • Not natural

  10. Introduction to Machine Reading Comprehension CNN/DailyMail - example

  11. Introduction to Machine Reading Comprehension Children's Book Test - example

  12. Introduction to Machine Reading Comprehension Case study: AoA Reader

  13. Introduction to Machine Reading Comprehension SQuAD • "ImageNet in NLP" • Hidden test set with leaderboard • From Wikipedia, human-labeled • Human-generated questions, a continuous span as the answer • Evaluation: EM & F1 • multiple reference answers in dev & test set • Human performance: EM 82%, F1 91% Rajpurkar, Pranav, et al. "SQuAD: 100,000+ questions for machine comprehension of text." (2016).
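For concreteness, here is a minimal sketch of how SQuAD-style EM and token-level F1 can be computed. The normalization below is a simplified version of what the official evaluation script does, and with multiple reference answers the score is the maximum over references.

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace
    (simplified version of the official SQuAD normalization)."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction, reference):
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# With several reference answers, take the max over references.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0 after normalization
print(round(token_f1("in Paris, France", "Paris"), 2))   # 0.5
```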

  14. Introduction to Machine Reading Comprehension Case study: BiDAF

  15. Introduction to Machine Reading Comprehension MS MARCO • More diverse passages • 100K queries, the same as SQuAD • #documents: 200K+ vs. 536 • Natural QA • Questions from user logs • Answers not limited to a span • Evaluation: Rouge-L & Bleu-1 • Human performance: 47, 46 Nguyen, Tri, et al. "MS MARCO: A human generated machine reading comprehension dataset." (2016).

  16. Introduction to Machine Reading Comprehension Case study: DrQA

  17. Introduction to Machine Reading Comprehension Case study: DrQA - Retriever • Baseline • Wikipedia Search API: TF-IDF weighted BOW vectors • Improvement • Hashed bigram features
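A rough sketch of a hashed-bigram TF-IDF retriever in the spirit of the DrQA retriever. The bucket count, the hashing function (Python's built-in hash rather than murmurhash), and the scoring details are simplified assumptions for illustration.

```python
import math
import re
from collections import Counter

NUM_BUCKETS = 2 ** 20   # DrQA uses 2**24 buckets with murmurhash; smaller here

def features(text):
    """Hash unigrams and bigrams into a fixed number of buckets (the hashing
    trick), so no explicit bigram vocabulary has to be stored."""
    tokens = re.findall(r"\w+", text.lower())
    grams = tokens + [" ".join(tokens[i:i + 2]) for i in range(len(tokens) - 1)]
    return Counter(hash(g) % NUM_BUCKETS for g in grams)

def score(query_vec, doc_vec, doc_freq, num_docs):
    """TF-IDF weighted bag-of-(hashed)-ngrams dot product."""
    s = 0.0
    for h, q_tf in query_vec.items():
        if h in doc_vec:
            idf = math.log((num_docs + 1) / (doc_freq[h] + 1))
            s += q_tf * doc_vec[h] * idf
    return s

docs = ["The Louvre Museum is located in Paris.",
        "Mount Everest is the highest mountain above sea level."]
doc_vecs = [features(d) for d in docs]
df = Counter(h for vec in doc_vecs for h in vec)
q = features("Where is the Louvre Museum located?")
print(max(range(len(docs)), key=lambda i: score(q, doc_vecs[i], df, len(docs))))  # 0
```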

  18. Introduction to Machine Reading Comprehension Case study: DrQA - Reader (1) • Paragraph embedding • GloVe vectors • exact match • POS, NER, normalized TF • aligned question embedding: project - dot - softmax - weighted average • Paragraph encoding • 3-layer BiLSTM • Question encoding • 3-layer BiLSTM + attention pooling
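A numpy sketch of the aligned question embedding feature (project, dot, softmax over question words, weighted average). The dimensions and the random projection parameters below are illustrative placeholders, not DrQA's actual configuration.

```python
import numpy as np

def aligned_question_embedding(P, Q, W, b):
    """Soft-alignment feature: project each word embedding through a shared
    dense + ReLU layer, compute paragraph-question dot products, softmax over
    question words, and average question embeddings with those weights.
    P: (m, d) paragraph embeddings, Q: (n, d) question embeddings,
    W: (d, h), b: (h,) projection parameters."""
    proj_p = np.maximum(P @ W + b, 0.0)           # (m, h)
    proj_q = np.maximum(Q @ W + b, 0.0)           # (n, h)
    scores = proj_p @ proj_q.T                    # (m, n) alignment scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over question words
    return attn @ Q                               # (m, d) aligned embeddings

rng = np.random.default_rng(0)
d, h, m, n = 300, 128, 40, 8                      # dims and sequence lengths (illustrative)
P, Q = rng.normal(size=(m, d)), rng.normal(size=(n, d))
W, b = rng.normal(size=(d, h)) * 0.01, np.zeros(h)
print(aligned_question_embedding(P, Q, W, b).shape)  # (40, 300)
```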

  19. Introduction to Machine Reading Comprehension Case study: DrQA - Reader (2) • Answer prediction • bilinear score (unnormalized exponential): P_start(i) ∝ exp(p_i W_s q), P_end(i') ∝ exp(p_i' W_e q) • find the answer span: take the argmax of P_start(i) × P_end(i') over all considered paragraph spans (i ≤ i' ≤ i + 15) for the final prediction.
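A numpy sketch of this span prediction step: bilinear start/end scores followed by an argmax over spans up to a maximum length (15 tokens in the DrQA paper). The random inputs are placeholders standing in for trained encodings and weights.

```python
import numpy as np

def best_span(P, q, W_start, W_end, max_len=15):
    """Bilinear span scoring sketch. P: (m, h) paragraph token encodings,
    q: (h,) question encoding, W_start, W_end: (h, h) bilinear weights.
    Returns (start, end, score) maximizing start_score * end_score."""
    start_scores = np.exp(P @ W_start @ q)   # unnormalized exponentials
    end_scores = np.exp(P @ W_end @ q)
    best = (0, 0, -1.0)
    for i in range(len(P)):
        # only consider spans of at most max_len tokens beyond the start
        for j in range(i, min(i + max_len + 1, len(P))):
            s = start_scores[i] * end_scores[j]
            if s > best[2]:
                best = (i, j, s)
    return best

rng = np.random.default_rng(1)
m, h = 30, 64
P, q = rng.normal(size=(m, h)), rng.normal(size=h)
W_s, W_e = rng.normal(size=(h, h)) * 0.05, rng.normal(size=(h, h)) * 0.05
start, end, _ = best_span(P, q, W_s, W_e)
print(start, end)
```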

  20. Introduction to Machine Reading Comprehension Implementation of DrQA: https://github.com/hitvoice/DrQA

  21. Question-Answering Systems • Beyond retrieving relevant documents -- Do people want answers to particular questions? • Three kinds of systems • Finding answers in document collections • Interfaces to relational databases • Mixed initiative dialog systems • What kinds of questions do people want to ask? • Reading-based QA • AskMSR • FALCON • Watson

  22. Factoid Questions

  23. Typical Q/A Architecture

  24. UT Dallas System Architecture: Question Processing (captures the semantics of the question, selects keywords for passage retrieval) → Passage Retrieval (extracts and ranks passages using surface-text techniques) → Answer Extraction (extracts and ranks answers using NL techniques). Supporting resources: document retrieval, WordNet, parser, NER.

  25. Question Processing - example. Input: "What is the tallest mountain in the world?" → Question Type: WHAT; Answer Type: MOUNTAIN; Keywords: tallest, mountain, world

  26. Question Type Identification Knowing the question type helps in many ways: -- It provides a way to filter out many candidates. -- Type-specific matching and handling are often implemented.

  27. Question Type Identification • Hand-generated patterns: When … => Date/Time question; Where … => Location question • Supervised classification
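A minimal sketch of the pattern-based side of this, assuming a small hand-written table of regular expressions. The type labels and patterns are illustrative; a real system backs them off to a supervised classifier trained on labeled questions.

```python
import re

# Hand-written surface patterns (illustrative, not exhaustive).
PATTERNS = [
    (r"^\s*when\b", "DATE/TIME"),
    (r"^\s*where\b", "LOCATION"),
    (r"^\s*who\b", "PERSON"),
    (r"^\s*how (many|much)\b", "NUMBER"),
    (r"^\s*why\b", "REASON"),
]

def question_type(question):
    q = question.lower()
    for pattern, qtype in PATTERNS:
        if re.search(pattern, q):
            return qtype
    return "OTHER"   # in practice, fall back to a learned classifier

print(question_type("When did the Berlin Wall fall?"))       # DATE/TIME
print(question_type("Where is the Louvre Museum located?"))  # LOCATION
```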

  28. Question Processing • Two main tasks • Question classification: Determine the type of the answer • Query formulation: Extract keywords from the question and formulate a query

  29. Answer Types • Factoid questions… • Who, where, when, how many… • Answers fall into limited, fairly predictable set of categories • Who questions will be answered by… • Where questions will be answered by … • Generally, systems select answer types from a set of Named Entities, augmented with other types that are relatively easy to extract

  30. Answer Types Can Be More Complicated • Who questions can have organizations or countries as answers • Who sells the most hybrid cars? • Who exports the most wheat? • Which questions can have people as answers • Which president went to war with Mexico?

  31. Taxonomy of Answer Types • Contains ~9,000 concepts reflecting expected answer types • Merges NEs with the WordNet hierarchy

  32. Answer Type Detection • Use combination of hand-crafted rules and supervised machine learning to determine the right answer type for a question • But how do we make use of this answer type once we hypothesize it?

  33. Query Formulation: Extract Terms from Query • Questions approximated by sets of unrelated words (lexical terms) • Similar to bag-of-words IR models

  34. Passage Retrieval (same architecture diagram, Passage Retrieval module highlighted): extracts and ranks passages using surface-text techniques, given the keywords selected by question processing.

  35. Answer Extraction (same architecture diagram, Answer Extraction module highlighted): extracts and ranks answers using NL techniques, guided by the question semantics.

  36. Ranking Candidate Answers Q066: Name the first private citizen to fly in space. • Answer type: Person • Text passage: "Among them was Christa McAuliffe, the first private citizen to fly in space. Karen Allen, best known for her starring role in 'Raiders of the Lost Ark', plays McAuliffe. Brian Kerwin is featured as shuttle pilot Mike Smith..."

  37. Ranking Candidate Answers Q066: Name the first private citizen to fly in space. • Answer type: Person • Text passage: "Among them was Christa McAuliffe, the first private citizen to fly in space. Karen Allen, best known for her starring role in 'Raiders of the Lost Ark', plays McAuliffe. Brian Kerwin is featured as shuttle pilot Mike Smith..." • Best candidate answer: Christa McAuliffe • How is this determined?

  38. Features Used in Answer Ranking • Number of question terms matched in the answer passage • Number of question terms matched in the same phrase as the candidate answer • Number of question terms matched in the same sentence as the candidate answer • Flag set to 1 if the candidate answer is followed by a punctuation sign • Number of question terms matched, separated from the candidate answer by at most three words and one comma • Number of terms occurring in the same order in the answer passage as in the question • Average distance from candidate answer to question term matches SIGIR ‘01
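A sketch of how a few of these surface features might be computed for one candidate answer. The feature definitions here are simplified illustrations of the list above, not the SIGIR '01 system's implementation.

```python
def ranking_features(question_terms, passage_tokens, cand_start, cand_end):
    """Compute a subset of surface ranking features for the candidate answer
    occupying passage_tokens[cand_start:cand_end]."""
    qset = {t.lower() for t in question_terms}
    toks = [t.lower() for t in passage_tokens]
    match_positions = [i for i, t in enumerate(toks) if t in qset]

    # number of distinct question terms matched in the passage
    num_matched = len(set(toks) & qset)

    # average token distance from the candidate to the question-term matches
    if match_positions:
        center = (cand_start + cand_end) / 2.0
        avg_dist = sum(abs(i - center) for i in match_positions) / len(match_positions)
    else:
        avg_dist = float(len(toks))

    # number of question terms occurring in the same order as in the question
    ordered, last = 0, -1
    for term in (t.lower() for t in question_terms):
        if term in toks[last + 1:]:
            last = toks.index(term, last + 1)
            ordered += 1

    return {"num_matched": num_matched, "avg_dist": avg_dist, "in_order": ordered}

passage = "Among them was Christa McAuliffe , the first private citizen to fly in space .".split()
print(ranking_features(["first", "private", "citizen", "fly", "space"], passage, 3, 5))
```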

  39. Answer Extraction - example. Question Type: WHAT; Answer Type: MOUNTAIN; Keywords: tallest, mountain, world. Passage: [Mount Everest] is called the world's highest mountain because it has the highest elevation above sea level. [Mauna Kea] is over 10,000 meters tall compared to 8,848 meters for Mount Everest - making it the "world's tallest mountain". However, [Chimborazo] has the distinction of being the "highest mountain above Earth's center".

  40. Answer Extraction • Question type and answer type guide extraction. • For "Where …" questions, candidates are typically locations. • The question-type-to-answer-type mapping can be learned from data. • A standard back-off is to consider all noun phrases in the passage or sentence • with additional pruning tricks (e.g., removing pronouns). • Sequence labeling problem • i.e., predict a sequence of ANSWER labels over sentences. • Over-generation is the norm. • We don't want to miss answers. • We've already pruned the search space to a small set of passages.

  41. Answer Scoring - example. Question Type: WHAT; Answer Type: MOUNTAIN; Keywords: tallest, mountain, world. Passage: [Mount Everest] is called the world's highest mountain because it has the highest elevation above sea level. [Mauna Kea] is over 10,000 meters tall compared to 8,848 meters for Mount Everest - making it the "world's tallest mountain". However, [Chimborazo] has the distinction of being the "highest mountain above Earth's center". Scores: 0.9 Mauna Kea, 0.5 Mount Everest, 0.3 Chimborazo

  42. Answer Scoring • Most critical component of most QA systems. • Frequency based solutions when using large scale collections such as web. • State-of-the-art is to use supervised learning over many features. • BOW similarity between question and context around answer. • Syntactic similarity • Semantic similarity • Graph-matching • Learning to Rank approach used by Watson.
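A toy illustration of the supervised approach: hand-crafted features for each candidate are fed to a learned scorer. The feature values and training rows below are made-up illustrative numbers, and scikit-learn's LogisticRegression stands in for the far richer feature sets and learning-to-rank setups used by systems such as Watson.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [num question terms matched, avg distance to matches, terms in order]
# (toy data, fabricated purely for illustration)
X_train = np.array([[5, 2.0, 5], [1, 9.0, 1], [4, 3.5, 3], [0, 12.0, 0]], dtype=float)
y_train = np.array([1, 0, 1, 0])   # 1 = correct answer candidate

scorer = LogisticRegression().fit(X_train, y_train)

candidates = {"Christa McAuliffe": [5, 5.6, 5], "Karen Allen": [1, 11.0, 1]}
scores = {name: scorer.predict_proba([feats])[0, 1] for name, feats in candidates.items()}
print(max(scores, key=scores.get))   # highest-scoring candidate
```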

  43. AskMSR • Rewrite questions to turn them into statements and search for the statements • Simple rewrite rules to rewrite original question into form of a statement • Must detect answer type • Do IR on statement • Extract answers of right type based on frequency of occurrence

  44. AskMSR Example

  45. Question-Rewriting • Intuition: the user's question is often syntactically close to sentences containing the answer • Where is the Louvre Museum located? • The Louvre Museum is located in Paris • Who created the character of Scrooge? • Charles Dickens created the character of Scrooge

  46. Question Classification • Classify question into one of seven categories • Who is/was/are/were…? • When is/did/will/are/were …? • Where is/are/were …? • Hand-crafted category-specific transformation rules, e.g.: for Where questions, move 'is' to all possible locations and look to the right of the query terms for the answer. "Where is the Louvre Museum located?" → "is the Louvre Museum located" → "the is Louvre Museum located" → "the Louvre is Museum located" → "the Louvre Museum is located" → "the Louvre Museum located is"
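A sketch of the "move is" rewrite for Where-questions described above; the function name and the exact-phrase quoting of each rewrite are illustrative choices.

```python
def rewrite_where_question(question):
    """Generate AskMSR-style rewrites for 'Where is X ...?' questions by
    moving 'is' to every position in the remaining words, emitting each
    variant as an exact-phrase search string."""
    words = question.rstrip("?").split()
    if len(words) < 2 or words[0].lower() != "where" or words[1].lower() != "is":
        return []
    rest = words[2:]
    return ['"' + " ".join(rest[:pos] + ["is"] + rest[pos:]) + '"'
            for pos in range(len(rest) + 1)]

for rewrite in rewrite_where_question("Where is the Louvre Museum located?"):
    print(rewrite)
# "is the Louvre Museum located", "the is Louvre Museum located", ...,
# "the Louvre Museum located is"
```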

  47. Query the Search Engine • Send all rewrites to Web search engine • Retrieve top N answers (100-200) • For speed, rely just on search engine’s snippets, not full text of the actual document

  48. Gather Ngrams • Enumerate all Ngrams (N=1,2,3) in all retrieved snippets • Weight of ngrams: occurrence count, each weighted by reliability (weight) of rewrite rule that fetched the document • Example: “Who created the character of Scrooge?” • Dickens 117 • Christmas Carol 78 • Charles Dickens 75 • Disney 72 • Carl Banks 54 • A Christmas 41 • Christmas Carol 45 • Uncle 31
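A sketch of the n-gram gathering step, assuming each snippet arrives tagged with the weight of the rewrite rule that retrieved it. The snippets and weights below are made up for illustration.

```python
from collections import Counter

def gather_ngrams(snippets_with_weights, max_n=3):
    """Count all 1- to max_n-grams across retrieved snippets, weighting each
    occurrence by the reliability of the rewrite rule that fetched it."""
    scores = Counter()
    for snippet, rewrite_weight in snippets_with_weights:
        tokens = snippet.lower().split()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                scores[" ".join(tokens[i:i + n])] += rewrite_weight
    return scores

snippets = [
    ("Charles Dickens created the character of Scrooge", 5.0),  # precise rewrite, high weight
    ("Scrooge appears in A Christmas Carol by Dickens", 1.0),   # loose bag-of-words query
]
for ngram, weight in gather_ngrams(snippets).most_common(5):
    print(f"{weight:5.1f}  {ngram}")
```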

  49. Filter Ngrams • Each question type associated with one or more data-type filters (regular expressions for answer types) • Boost score of ngrams that match expected answer type • Lower score of ngrams that don't match • E.g. • Filter for how-many queries prefers a number • How many dogs pull a sled in the Iditarod? • So… disprefer candidate ngrams like • Dog race, run, Alaskan, dog racing • Prefer candidate ngrams like • Pool of 16 dogs
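A sketch of this answer-type filtering step. The regular expressions, question-type names, and boost/penalty constants are illustrative choices, not AskMSR's actual values.

```python
import re

# Data-type filters per question type (regular expressions over candidate n-grams).
TYPE_FILTERS = {
    "how-many": re.compile(r"\b\d+\b"),                  # prefer n-grams containing a number
    "when": re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b"),   # prefer a year-like token
}

def filter_ngrams(scored_ngrams, question_type, boost=2.0, penalty=0.5):
    """Boost candidate n-grams matching the expected answer type, down-weight the rest."""
    pattern = TYPE_FILTERS.get(question_type)
    if pattern is None:
        return dict(scored_ngrams)
    return {ng: s * (boost if pattern.search(ng) else penalty)
            for ng, s in scored_ngrams.items()}

candidates = {"pool of 16 dogs": 10.0, "dog race": 12.0, "alaskan": 8.0}
print(filter_ngrams(candidates, "how-many"))
# "pool of 16 dogs" is boosted; "dog race" and "alaskan" are down-weighted
```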

  50. Tiling the Answers: Concatenate Overlaps • Example: "Charles Dickens" (score 20), "Dickens" (15), and "Mr Charles" (10) are merged into "Mr Charles Dickens" (score 45); the old n-grams are discarded.
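A sketch of greedy answer tiling consistent with the example above. The merge policy (suffix/prefix word overlap, scores summed, parts discarded) is a simplified reading of the slide, not AskMSR's exact algorithm.

```python
def overlap_concat(left, right):
    """If the end of `left` overlaps the start of `right` (including the case
    where `right` is wholly contained at the end), return the tiled string."""
    lw, rw = left.split(), right.split()
    for k in range(min(len(lw), len(rw)), 0, -1):
        if lw[-k:] == rw[:k]:
            return " ".join(lw + rw[k:])
    return None

def tile(ngram_scores):
    """Greedily merge any pair of candidates whose word sequences overlap,
    summing their scores and discarding the merged parts."""
    answers = dict(ngram_scores)
    merged = True
    while merged:
        merged = False
        items = list(answers.items())
        for i, (a, sa) in enumerate(items):
            for b, sb in items[i + 1:]:
                combined = overlap_concat(a, b) or overlap_concat(b, a)
                if combined:
                    answers.pop(a)
                    answers.pop(b)
                    answers[combined] = answers.get(combined, 0) + sa + sb
                    merged = True
                    break
            if merged:
                break
    return answers

print(tile({"Charles Dickens": 20, "Dickens": 15, "Mr Charles": 10}))
# {'Mr Charles Dickens': 45}
```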
