
Overview of the KBP 2012 Slot-Filling Tasks

Hoa Trang Dang (National Institute of Standards and Technology), Javier Artiles (Rakuten Institute of Technology), James Mayfield (Johns Hopkins University), Joe Ellis, Xuansong Li, Kira Griffitt, Stephanie Strassel, Jonathan Wright (Linguistic Data Consortium)


Presentation Transcript


  1. Overview of the KBP 2012 Slot-Filling Tasks Hoa Trang Dang (National Institute of Standards and Technology), Javier Artiles (Rakuten Institute of Technology), James Mayfield (Johns Hopkins University), Joe Ellis, Xuansong Li, Kira Griffitt, Stephanie Strassel, Jonathan Wright (Linguistic Data Consortium)

  2. Slot-Filling Tasks
  • Goal: Augment a reference knowledge base (KB) with information about target entities as found in a diverse collection of documents
  • Reference KB: Oct 2008 Wikipedia snapshot. Each KB node corresponds to a Wikipedia article and contains:
    • Infobox
    • Wiki_text (free text not in the infobox)
  • English source documents:
    • 2.3 M news docs (1.2 M docs in 2011)
    • 1.5 M Web and other docs (0.5 M docs in 2011)
  • [Spanish source documents]
  • Diagnostic task: Slot Filler Validation

  3. Slots derived from Wikipedia infobox

  4. Slot-Filling Task Requirements
  • Task: given a target entity and predefined slots for each entity type (PER, ORG), return all new slot fillers for that entity that can be found in the source documents, and a supporting document for each filler
  • Non-redundant
    • Don't return a slot filler if it's already in the KB
    • Don't return more than one instance of a slot filler
  • Exact boundaries of the filler string, as found in the supporting document
    • Text is complete (e.g., "John Doe" rather than "John")
    • No extraneous text (e.g., "John Doe" rather than "John Doe's house")
  • Evaluation based on the TREC-QA pooling methodology, combining:
    • Candidate slot fillers from a non-exhaustive manual search
    • Candidate slot fillers from fully automatic systems
  • The answer "key" is incomplete; coverage depends on the number, quality, and diversity of contributing systems.
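
  To make the pooling step concrete, here is a minimal Python sketch of TREC-QA-style pooling as described above: candidate [docid, answer-string] pairs from all automatic runs and from the manual search are merged, per target entity and slot, into a single assessment pool. The function and structure names are illustrative assumptions, not the official assessment tooling.

  from collections import defaultdict

  def build_pool(system_runs, manual_responses):
      """Merge responses into one assessment pool.

      system_runs: iterable of runs, each a list of
                   (query_id, slot, docid, answer_string) tuples
      manual_responses: a list of the same tuples from manual search
      Returns {(query_id, slot): set of (docid, answer_string)}.
      """
      pool = defaultdict(set)
      for responses in list(system_runs) + [manual_responses]:
          for query_id, slot, docid, answer in responses:
              # duplicate pairs across runs collapse automatically in the set
              pool[(query_id, slot)].add((docid, answer))
      return pool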

  5. Differences from KBP 2011
  • Offsets provided for the target entity mention in the query
  • Increased number of submissions allowed (up to 5)
  • Require normalization of slot fillers that are dates ("yesterday" -> "2012-11-04"; see the sketch after this slide)
  • Request each proposed slot filler to include:
    • A confidence value
    • Offsets for the justification (usually a sentence)
    • Offsets for the raw (unnormalized) slot filler in the document
  • Move toward more precise justifications
  • Improved usability (for humans) in end applications
  • Improved training data for systems
  • Offsets and confidence values did not affect official scores
    • But confidence values were used to rank and truncate extremely lengthy submissions
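
  The date-normalization requirement above can be illustrated with a small Python sketch that resolves a relative expression against the supporting document's date and renders it in ISO format. Real systems typically relied on a full temporal tagger such as SUTime; the helper below handles only a few relative expressions, and its name and scope are assumptions for illustration.

  from datetime import date, timedelta

  def normalize_date(expression, doc_date):
      """Resolve a relative date expression against the document date."""
      expr = expression.strip().lower()
      if expr == "yesterday":
          return (doc_date - timedelta(days=1)).isoformat()
      if expr == "today":
          return doc_date.isoformat()
      if expr == "tomorrow":
          return (doc_date + timedelta(days=1)).isoformat()
      return None  # anything else would go to a full temporal tagger

  # e.g. normalize_date("yesterday", date(2012, 11, 5)) -> "2012-11-04"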

  6. Slot-Filling Evaluation
  • Pool responses from submitted runs and from manual search ->
    • A set of [docid, answer-string] pairs for each target entity and slot
  • Assessment:
    • Each pair is judged as one of correct, redundant, inexact, or wrong (credit given only for correct responses)
    • Correct pairs are grouped into equivalence classes (entities); each single-valued slot has at most one equivalence class for a given target entity
  • Scoring:
    • Recall: number of correct equivalence classes returned / number of known equivalence classes
    • Precision: number of correct equivalence classes returned / number of [docid, answer-string] pairs returned
    • F1 = 2*P*R / (P+R)
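
  A minimal Python sketch of the scoring above, assuming the assessments are available as a mapping from correct (docid, answer-string) pairs to equivalence-class ids; the names and data structures are illustrative, not the official scorer.

  def score_run(responses, assessments, known_classes):
      """
      responses: list of (docid, answer_string) pairs returned by one run
      assessments: dict mapping pairs judged correct to an equivalence-class id
                   (redundant, inexact, and wrong pairs are simply absent)
      known_classes: set of all equivalence-class ids in the answer key
      """
      returned = {assessments[pair] for pair in responses if pair in assessments}
      recall = len(returned) / len(known_classes) if known_classes else 0.0
      precision = len(returned) / len(responses) if responses else 0.0
      f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
      return precision, recall, f1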

  7. Slot-Filling Participants (* marks a first-time slot-filling team)

  8. Top 6 KBP 2012 Slot-Filling teams

  9. Top 4 KBP 2012 Slot-Filling teams

  10. Slot-Filling Approaches
  • IIRG: (+ling, -ML)
    • Stanford CoreNLP for POS, NER, parse
    • Sentence retrieval by exact match with a named mention of the target entity
    • Rule-based pattern matching and keyword matching to identify slot fillers
  • lsv: (-ling, +ML)
    • Shallow approach – no parse or coref
    • Query expansion via Wikipedia redirect links
    • SVM and Freebase for distant supervision
  • NYU: (+ling, +ML)
    • POS, parse, NER, time expression tagging, coref
    • Query expansion via a small set of handcrafted rules and Wikipedia redirect links
    • MaxEnt and Freebase for distant supervision
    • Combination of hand-coded rules, patterns generated by bootstrapping and then manually reviewed, and a classifier trained by distant supervision
  • PRIS: (+ling, +ML)
    • Stanford CoreNLP for POS, NER, SUTime, parse, coref
    • Query expansion via a small set of handcrafted rules and coref'd names
    • AdaBoost for finding new extraction patterns (word sequence patterns and dependency path patterns)
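
  Several of the teams above (lsv, NYU, PRIS) rely on distant supervision. The idea, in a deliberately naive Python sketch: sentences that mention both entities of a known KB or Freebase fact are harvested as noisy positive training examples for that relation. Real systems add NER, coreference, and query expansion before matching; the function below uses plain string containment and is illustrative only.

  def distant_supervision_examples(kb_facts, sentences):
      """
      kb_facts: iterable of (subject, relation, object) triples
      sentences: iterable of sentence strings
      Yields (sentence, subject, object, relation) candidate training examples.
      """
      for subj, rel, obj in kb_facts:
          for sent in sentences:
              if subj in sent and obj in sent:  # naive entity matching
                  yield sent, subj, obj, rel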

  11. Distribution of slots in answer key

  12. Slot productivity

  13. Slot Filler Validation (SFV)
  • Goals:
    • Improve precision of full slot-filling systems (without reducing recall)
    • Allow teams without a full slot-filling system to participate, focusing on answer validation rather than document retrieval
  • SFV input:
    • All input to the slot-filling task
    • Submission files from all slot-filling runs, containing candidate slot fillers
    • No information about "past performance" of each slot-filling system
  • SFV output:
    • Binary classification (Correct / Incorrect) of each candidate slot filler
  • Evaluation:
    • Filter out "Incorrect" slot fillers from each run and score; compare to the score for the original run
  • Submissions: 1 team (Blender_CUNY)
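
  The SFV evaluation above can be sketched in a few lines of Python: the validator's binary judgments filter a slot-filling run, and the filtered run is rescored with the same scorer (score_run from the slide 6 sketch) for comparison with the original. The sfv_labels mapping and the function names are assumptions for illustration.

  def apply_sfv(run_responses, sfv_labels):
      """Drop the (docid, answer_string) pairs the validator marked Incorrect
      (pairs without a judgment are kept)."""
      return [pair for pair in run_responses if sfv_labels.get(pair, True)]

  def compare_runs(run_responses, sfv_labels, assessments, known_classes):
      original = score_run(run_responses, assessments, known_classes)
      filtered = score_run(apply_sfv(run_responses, sfv_labels),
                           assessments, known_classes)
      return {"original": original, "filtered": filtered}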

  14. Filtering candidate slot fillers

  15. Answer Justification
  • Goals:
    • Improve training data for systems – narrow down the location of answer patterns
    • Reduce assessment effort (for correct answers with correct justifications)
    • Improve usability (for humans) in end applications
  • Task guidelines:
    • For each slot filler, provide start and end offsets for the sentence or clause that provides justification for the relation. For example, for a per:spouse query on "Michelle Obama" and the sentence "He is married to Michelle Obama" ("He" referring to Barack Obama, mentioned earlier in the document), the filler … should be "Barack Obama", the offsets for the filler must point to "He", and the offsets for the justification must point to "He is married to Michelle Obama".
  • Slight mismatch with LDC assessment guidelines (which require the antecedent of relevant pronouns in the justification; otherwise the response is judged inexact)
  • Need additional discussion/refinement of the guidelines
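
  To make the guideline concrete, here is an illustrative record for a single proposed slot filler carrying the fields requested in 2012 (filler string, offsets for the raw filler mention, offsets for the justifying sentence, and a confidence value). The field names, ids, and offset values are invented for illustration and do not reproduce the official submission format.

  example_response = {
      "query_id": "SF_QUERY_001",             # hypothetical query id (per:spouse of "Michelle Obama")
      "slot": "per:spouse",
      "docid": "EXAMPLE_DOC_0001",            # hypothetical supporting document
      "filler": "Barack Obama",               # canonical filler string
      "filler_offsets": (1042, 1043),         # raw mention in the document ("He")
      "justification_offsets": (1042, 1072),  # "He is married to Michelle Obama"
      "confidence": 0.87,
  }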

  16. LDC Data, Annotation, and Assessment
