
TREC-2006 at Maryland: Blog, Enterprise and QA Tracks

Douglas Oard, Tamer Elsayed, Yejun Wu, Pengyi Zhang, Eileen Abels, Jimmy Lin, and Dagobert Soergel
College of Information Studies / Computer Science Department / UMIACS, University of Maryland, College Park, USA

Presentation Transcript


The poster covers three tasks: the Blog track's opinion retrieval task, the Enterprise track's expert search task, and the QA track's ciQA task.

Blog Track: Opinion Retrieval Task

Task Goal: locating blog posts that express an opinion about a target.
Retrieval Unit: permalinks (postings plus comments): 3,215,171 documents.

Pipeline
• Cleaning: permalink documents were cleaned with rules based on the top 5 blog hosting sites.
• Segmentation and indexing: cleaned documents were split into paragraphs and, separately, into fixed-size passages (window = 50 words, overlap = 10 words); each segmentation was indexed with Indri.
• Topic retrieval: for each query, the top 1000 paragraphs (or passages) were retrieved and merged into a document-level ranked list for topic relevance evaluation.

Opinion lexicon
• Lemmatize; remove stop words, words not in the dictionary (as judged by "spell"), and words with DF <= 40.
• Compute the semantic orientation (SO) of each remaining word (Turney & Littman, 2002):
  SO-PMI(w) = PMI(w, {positive paradigms}) - PMI(w, {negative paradigms})
• Keep words with SO in (-3, -0.05) as negative and words with SO in (0.05, 5) as positive, yielding 16,773 lemmas; Wilson & Wiebe's lexicon contributed another 8,221 lemmas.

Opinion scoring
• Each retrieved paragraph receives a normalized opinion score (e.g., Par0001 = 0.21, Par0002 = 0.12, Par0003 = 0.44, Par0004 = 0.56, Par0005 = 0.32).
• Documents whose normalized opinion score falls below 0.15 are demoted by 2 or 3 times (the Dmt2 and Dmt3 runs) before opinion relevance evaluation.

Example cleaned document (BLOG06-20051224-029-0001622821.cln): a short post titled "Blue Planet" from Neal Romanek's "rabbit + crow" blog (December 20, 2005), recording his reaction to the first episode of Sir David Attenborough's "The Blue Planet"; the cleaned text still carries Blogger navigation links, an archive list, a subscription link, and a weekly poll, illustrating the noise that survives cleaning.

Example topic:
<num> Number: 851
<title> "March of the Penguins"
<desc> Description: Provide opinion of the film documentary "March of the Penguins".
<narr> Narrative: Relevant documents should include opinions concerning the film documentary "March of the Penguins". Articles or comments about penguins outside the context of this film documentary are not relevant.

Minimal sketches of the passage segmentation, the SO-PMI computation, and the demotion step follow.
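First, the fixed-size passage segmentation: 50-word windows where consecutive passages share 10 words, so each new window starts 40 words after the previous one. The function name and the toy document are illustrative; the poster does not show the actual segmentation code.

```python
def fixed_size_passages(text, window=50, overlap=10):
    """Split a document into overlapping fixed-size word windows."""
    words = text.split()
    step = window - overlap          # start a new passage every 40 words
    passages = []
    for start in range(0, max(len(words) - overlap, 1), step):
        chunk = words[start:start + window]
        if chunk:
            passages.append(" ".join(chunk))
    return passages

# Toy check: a 120-word document yields passages of 50, 50, and 40 words.
doc = " ".join("w%d" % i for i in range(120))
print([len(p.split()) for p in fixed_size_passages(doc)])   # [50, 50, 40]
```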
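Next, the SO-PMI computation behind the opinion lexicon. The paradigm words below are the ones Turney & Littman (2002) used; the PMI estimate from raw co-occurrence counts, the 0.01 smoothing constant, and the function names are assumptions, since the poster gives only the SO-PMI formula and the polarity cutoffs.

```python
import math

# Turney & Littman's (2002) paradigm words.
POSITIVE = ["good", "nice", "excellent", "positive", "fortunate", "correct", "superior"]
NEGATIVE = ["bad", "nasty", "poor", "negative", "unfortunate", "wrong", "inferior"]

def so_pmi(word, cooccur, count, total):
    """SO-PMI(w) = PMI(w, positive paradigms) - PMI(w, negative paradigms).

    cooccur(a, b): co-occurrence count of a and b in the collection;
    count(a): collection frequency of a; total: collection size N.
    """
    def pmi_sum(paradigms):
        s = 0.0
        for p in paradigms:
            joint = cooccur(word, p) + 0.01      # smooth zero counts (assumed)
            s += math.log2(joint * total / (count(word) * count(p)))
        return s
    return pmi_sum(POSITIVE) - pmi_sum(NEGATIVE)

def polarity(so):
    """Apply the poster's cutoffs: (0.05, 5) positive, (-3, -0.05) negative."""
    if 0.05 < so < 5:
        return "positive"
    if -3 < so < -0.05:
        return "negative"
    return None                                  # discard as neutral
```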
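Last of the blog-track sketches, the demotion step. The poster says documents with a normalized opinion score below 0.15 are demoted "by 2 or 3 times"; this sketch reads that as dividing the retrieval score by a constant factor (2 for Dmt2, 3 for Dmt3). That reading, and all names here, are assumptions.

```python
def demote(ranked, opinion_score, factor=2, threshold=0.15):
    """ranked: list of (doc_id, retrieval_score), best first;
    opinion_score: doc_id -> normalized opinion score in [0, 1]."""
    rescored = []
    for doc_id, score in ranked:
        if opinion_score.get(doc_id, 0.0) < threshold:
            score /= factor                # push weakly opinionated docs down
        rescored.append((doc_id, score))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

ranked = [("Par0004", 9.1), ("Par0002", 8.9), ("Par0003", 8.7)]
opinions = {"Par0004": 0.56, "Par0002": 0.12, "Par0003": 0.44}
# Par0002 (opinion score 0.12 < 0.15) drops from rank 2 to rank 3.
print(demote(ranked, opinions))
```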
Blog Track Results

Comparison of Topic Relevance Runs
Run            MAP     Bpref   P@10    R-Prec
ParTitDesDef   0.2849  0.3998  0.6200  0.3490
ParTiDesDmt2   0.2845  0.4040  0.6200  0.3501
ParTiDesDmt3   0.2812  0.4034  0.6200  0.3542
ParTiDef       0.2362  0.3580  0.5280  0.3162
PasTiDesDef    0.2733  0.3866  0.5800  0.3516

Comparison of Opinion Relevance Runs
Run            MAP     Bpref   P@10    R-Prec
ParTitDesDef   0.1882  0.2521  0.3780  0.2441
ParTiDesDmt2   0.1887  0.2573  0.3780  0.2421
ParTiDesDmt3   0.1873  0.2568  0.3780  0.2417
ParTiDef       0.1547  0.2256  0.3360  0.2106
PasTiDesDef    0.1631  0.2274  0.3460  0.2264

Conclusions
• Paragraphs were better than fixed-size passages for both topic and opinion retrieval.
• Title + Description queries beat title-only queries.
• Demoting non-opinionated documents had little effect.

Future Work
• Parameter tuning for low-frequency words, paragraph detection, and passage size.
• Aggregation of opinion scores.
• Thresholds for opinion scores.

Enterprise Track: Expert Search Task

Participation Goals
• Building an expert search baseline system.
• Applying models of identity to public mailing lists.
• Building a reference-resolution infrastructure.

Pipeline
• W3C mailing lists pass through duplicate removal into an email and thread index; queries are scored both against individual emails (email-based scoring) and against whole threads (thread-based scoring).
• Candidate list methods produce an initial candidate set; models of identity enrich each candidate model with email addresses, full names, and nicknames.
• Reference recognition locates mentions of candidates in messages, and candidate scoring turns the supported retrieval results into a ranked list of experts (a sketch of this step closes the transcript).

Results and Analysis
• How much email support is there across topics?
• Performance relative to email support.

Conclusions
• Average performance overall.
• Threads help for short queries.
• More email support leads to more accurate results.

Future Work
• Improved reference resolution.
• Parameter tuning for weighted-field credit.
• Learning from reply features.

QA Track: ciQA Task

Participation Goals
• To explore the effectiveness of single-iteration written clarification dialogs.
• To explore different strategies for clarifying user needs in question answering.
• To better understand the nature of complex, template-based questions.

Example question (Topic 26): What evidence is there for transport of [smuggled VCDs] from [Hong Kong] to [China]? Narrative: The analyst is particularly interested in knowing the volume of smuggled VCDs and also the ruses used by smugglers to hide their efforts.

Three types of interaction
1. Clarification questions (Topic 026): "What types of smuggled disks are you interested in? Check all that apply: □ VCDs □ CDs □ DVDs □ Other. Please specify: ..."
2. Importance of answer types (Topic 042): "Please rate the importance of the following types of evidence. 1. General claim of effects of aspirin. ○ Important ○ Somewhat important ○ Not needed at all. 2. Guideline of how aspirin can be used to treat heart diseases. ○ Important ○ Somewhat important ○ Not needed at all. ..."
3. Relevance feedback (Topic 055): "Please indicate the relevance of the following answers. 1. Most of Sierra Leone's diamonds were and still are smuggled into neighboring Liberia for sale, according to several human rights groups and diamond industry experts. ○ Relevant ○ Somewhat relevant ○ Not relevant. ..."

Pipeline
• The topic retrieval engine draws on external resources: CIA World Fact Book, Google, WordNet, Roget's Thesaurus, and Wikipedia.
• Interaction forms are generated from the topics; the analysis of interaction responses feeds back into the system.
• Document retrieval returns the top 20 relevant documents; answer generation produces unordered answers, answer ranking orders them, and the interaction responses refine the final answers (a relevance-feedback sketch follows this section).

Results and Analysis
• Analysis 1: interaction performance by type of interaction (clarification questions, importance of answer types, sample relevance feedback).
• Analysis 2: consistency in judgment.
• Analysis 3: relevant sentences vs. answer nuggets.

Conclusions
• Relevance feedback does not always work for QA.
• The error margin of nugget judgments is roughly 10%.
• A relevant sentence is not the same as an answer nugget.

Future Work
• Examination of possible systematic errors in nugget judgments.
• Exploration of the relationship between relevant sentences and answer nuggets.
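To make the ciQA relevance-feedback step concrete, here is a minimal sketch of one plausible reading of "refined answers": sentences the analyst judged relevant boost similar candidate answers, and judged-non-relevant sentences penalize them. The similarity measure, the weights, and every name here are illustrative assumptions, not the actual UMD method.

```python
def jaccard(a, b):
    """Word-overlap similarity between two sentences (a stand-in measure)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def rerank(answers, feedback, similarity=jaccard, boost=0.5, penalty=0.5):
    """answers: list of (text, score); feedback: list of (text, judgment)
    with judgment +1 (relevant), 0 (somewhat relevant), or -1 (not relevant)."""
    rescored = []
    for text, score in answers:
        for fb_text, judgment in feedback:
            sim = similarity(text, fb_text)
            if judgment > 0:
                score += boost * sim      # pull toward judged-relevant answers
            elif judgment < 0:
                score -= penalty * sim    # push away from judged-irrelevant ones
        rescored.append((text, score))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)
```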

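Finally, a minimal sketch of the Enterprise track's models of identity and candidate scoring: each candidate's model is enriched with observed email addresses, full names, and nicknames; recognized references in messages are mapped back to candidates; and each candidate accumulates the retrieval scores of the messages that mention them. The classes and the simple sum are assumptions, since the poster does not give the scoring formula.

```python
from collections import defaultdict

class Candidate:
    """A model of identity: all known forms of one person's name."""
    def __init__(self, candidate_id, emails, full_names, nicknames):
        self.candidate_id = candidate_id
        self.aliases = set(emails) | set(full_names) | set(nicknames)

def score_candidates(candidates, message_scores, mentions):
    """message_scores: msg_id -> retrieval score from the email/thread index;
    mentions: msg_id -> set of name/address strings recognized in the message."""
    totals = defaultdict(float)
    for msg_id, score in message_scores.items():
        for cand in candidates:
            if cand.aliases & mentions.get(msg_id, set()):
                totals[cand.candidate_id] += score   # credit each mentioned candidate
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```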