
Evaluation and STS


Presentation Transcript


  1. Evaluation and STS
     Workshop on a Pipeline for Semantic Text Similarity (STS)
     March 12, 2012
     Sherri Condon, The MITRE Corporation

  2. Evaluation and STS Wish-list
     • Valid: measure what we think we’re measuring (definition)
     • Replicable: same results for same inputs (annotator agreement; one common agreement check is sketched after this list)
     • Objective: no confounding biases (from language or annotator)
     • Diagnostic
       • Not all evaluations achieve this
       • Understanding factors and relations
     • Generalizable
       • Makes true predictions about new cases
     • Functional: if not perfect, good enough
     • Understandable
       • Meaningful to stakeholders
       • Interpretable components
     • Cost effective
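The slide does not name a specific agreement measure; for graded similarity judgments of the kind used in STS, Pearson correlation between two annotators' score vectors is a common choice. The sketch below is a minimal, self-contained illustration of that check under that assumption; the annotator score lists are hypothetical.

```python
# Minimal sketch: Pearson correlation as a replicability / annotator-agreement
# check for graded similarity scores. The slide does not specify a metric;
# Pearson correlation is assumed here.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length score vectors."""
    n = len(xs)
    assert n == len(ys) and n > 1
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 0-5 similarity judgments from two annotators on the same pairs.
annotator_a = [5.0, 3.5, 1.0, 4.0, 2.5]
annotator_b = [4.5, 3.0, 0.5, 4.0, 3.0]
print(f"agreement (Pearson r) = {pearson(annotator_a, annotator_b):.3f}")
```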

  3. Quick Foray into Philosophy
     • Meaning as extension: same/similar denotation
       • Anaphora/coreference and time/date resolution
       • The evening star happens to be the morning star
       • “Real world” knowledge = true in this world
     • Meaning as intension
       • Truth (extension) in the same/similar possible worlds
       • Compositionality: inference and entailment
     • Meaning as use
       • Equivalence for all the same purposes in all the same contexts
       • “Committee on Foreign Affairs, Human Rights, Common Security and Defence Policy” vs. “Committee on Foreign Affairs”
       • Salience, application specificity, implicature, register, metaphor
     • Yet remarkable agreement in intuitions about meaning

  4. DARPA Mind’s Eye Evaluation
     • Computer vision through a human lens
       • Recognize events in video as verbs
       • Produce text descriptions of events in video
     • Comparing human descriptions to system descriptions raises all the STS issues
       • Salience/importance/focus (A gave B a package. They were standing.)
       • Granularity of description (car vs. red car, woman vs. person)
       • Knowledge and inference (standing with motorcycle vs. sitting on motorcycle: motorcycle is stopped)
       • Unequal text lengths
     • Demonstrates the value of, and need for, understanding these factors

  5. Mind’s Eye Text Similarity
     • Basic similarity scores based on dependency parses (the general idea is sketched after this slide)
       • Scores increase for matching predicates and arguments
       • Scores decrease for non-matching predicates and arguments
       • Accessible syn-sets and ontological relations expand matches
     • Salience/importance
       • Obtain many “reference” descriptions
       • Weight predicates and arguments based on frequency
     • Granularity of description
       • Demonstrates influence of application context
       • Program focus is on verbs, so nouns match loosely
     • Regularities in evaluation efforts that depend on semantic similarity promise common solutions
     (This is work with Evelyne Tzoukermann and Dan Parvaz)
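The slide describes the scoring only at a high level. The sketch below illustrates the general idea (frequency-weighted credit for matched predicate-argument frames, a penalty for unmatched ones, with syn-set expansion stubbed out). It is not the Mind's Eye implementation, and every name in it (Frame, synonyms, score_description, the toy descriptions) is hypothetical.

```python
# Sketch of frequency-weighted predicate-argument matching, assuming each
# description has already been reduced (e.g., from a dependency parse) to
# (predicate, argument-set) frames. Not the actual Mind's Eye scorer.

from collections import Counter
from typing import FrozenSet, List, Set, Tuple

Frame = Tuple[str, FrozenSet[str]]   # (predicate, set of argument heads)

def synonyms(word: str) -> Set[str]:
    """Stub for syn-set / ontology expansion; a real system would query
    WordNet or a domain ontology here."""
    return {word}

def frame_weights(reference_descs: List[Set[Frame]]) -> Counter:
    """Weight each frame by how often it appears across the reference
    descriptions, approximating the salience weighting on the slide."""
    counts = Counter()
    for desc in reference_descs:
        counts.update(desc)
    return counts

def score_description(system_desc: Set[Frame],
                      reference_descs: List[Set[Frame]],
                      mismatch_penalty: float = 0.5) -> float:
    weights = frame_weights(reference_descs)
    reference_union = set().union(*reference_descs)
    score = 0.0
    for pred, args in system_desc:
        # A frame matches if some reference frame has a (synonym-expanded)
        # matching predicate and overlapping arguments.
        matched = any(
            r_pred in synonyms(pred) and (args & r_args or not args)
            for r_pred, r_args in reference_union
        )
        if matched:
            # Use the reference frequency as the weight, defaulting to 1.0
            # when the match came only through synonym expansion.
            score += weights[(pred, args)] or 1.0
        else:
            score -= mismatch_penalty
    return score

# Hypothetical toy example: "A person carries a box" vs. two references.
system = {("carry", frozenset({"person", "box"}))}
references = [{("carry", frozenset({"person", "box"}))},
              {("hold", frozenset({"man", "box"}))}]
print(score_description(system, references))
```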

  6. Test Sentences
