
Web Search – Summer Term 2006 II. Information Retrieval (Basics Cont.)


Presentation Transcript


1. Web Search – Summer Term 2006, II. Information Retrieval (Basics Cont.) (c) Wolfgang Hürst, Albert-Ludwigs-University

2. Organizational Remarks
Exercises: Please register for the exercises by sending me (huerst@informatik.uni-freiburg.de) an email by Friday, May 5th, with:
- your name,
- Matrikelnummer (student ID number),
- Studiengang (degree program: BA, MSc, Diploma, …),
- plans for the exam (yes, no, undecided).
This is just to organize the exercises and has no effect if you decide to drop this course later.

3. Recap: IR System & Tasks Involved
[Diagram of the IR system and tasks involved: information need, user interface, query, query processing (parsing & term processing), logical view of the information need, searching, ranking, result representation, results; documents, select data for indexing, parsing & term processing, index, docs; performance evaluation.]

4. Evaluation of IR Systems
Standard approaches for algorithm and computer system evaluation:
- Speed / processing time
- Storage requirements
- Correctness of the algorithms used and their implementation
But most importantly: performance, effectiveness.
Another important issue: usability, the users' perception.
Questions: What is a good / better search engine? How to measure search engine quality? How to perform evaluations? Etc.

5. What does Performance/Effectiveness of IR Systems mean?
Typical questions:
- How good is the quality of a system?
- Which system should I buy? Which one is better?
- How can I measure the quality of a system?
- What does quality mean for me? Etc.
The answers depend on users, applications, … There are very different views and perceptions: user vs. search engine provider, developer vs. manager, seller vs. buyer, …
And remember: queries can be ambiguous, unspecific, etc. Hence, in practice, restrictions and idealizations are used, e.g. only binary relevance decisions.

6. Precision & Recall
[Figure: a document collection (DOC. A–J) and the result list 1. DOC. B, 2. DOC. E, 3. DOC. F, 4. DOC. G, 5. DOC. D, 6. DOC. H]
PRECISION = (# FOUND & RELEVANT) / (# FOUND)
RECALL = (# FOUND & RELEVANT) / (# RELEVANT)
Restrictions: 0/1 relevance, set instead of order/ranking. But: we can use this for the evaluation of rankings, too (via the top N docs).
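As a small illustration of these definitions, here is a minimal Python sketch that computes precision and recall for a single query under 0/1 relevance. The result list is the one from the slide; which documents count as relevant is an assumption, since the transcript does not preserve the figure's relevance markings.

```python
def precision_recall(found, relevant):
    """Return (precision, recall) for a result set under 0/1 relevance."""
    found_and_relevant = len(found & relevant)
    precision = found_and_relevant / len(found) if found else 0.0
    recall = found_and_relevant / len(relevant) if relevant else 0.0
    return precision, recall

# Result list from the slide; the relevant set is an assumed example.
found = {"B", "E", "F", "G", "D", "H"}
relevant = {"B", "D", "E", "G"}
print(precision_recall(found, relevant))  # -> (0.666..., 1.0)
```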

7. Calculating Precision & Recall
Precision: can be calculated directly from the result.
Recall: requires relevance ratings for the whole (!) data collection.
In practice: approaches to estimate recall:
1.) Use a representative sample instead of the whole data collection
2.) Document-source method
3.) Expanding queries
4.) Compare the result with external sources
5.) Pooling method (sketched below)
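The pooling method (5.) can be sketched roughly as follows: merge the top-k results of several systems into a pool, obtain relevance judgments only for the pooled documents, and use the judged relevant documents as the recall base. This is only an illustration; the function names, the cut-off k, and the toy data are all made up.

```python
def build_pool(rankings, k=100):
    """Union of the top-k documents from each system's ranking."""
    pool = set()
    for ranking in rankings:
        pool.update(ranking[:k])
    return pool

def estimate_recall(result, judged_relevant):
    """Recall relative to the pooled relevance judgments."""
    if not judged_relevant:
        return 0.0
    return len(set(result) & judged_relevant) / len(judged_relevant)

# Rankings from three hypothetical systems; judgments cover only the pool.
rankings = [["d1", "d2", "d3"], ["d2", "d4"], ["d5", "d1"]]
pool = build_pool(rankings, k=2)          # {"d1", "d2", "d4", "d5"}
judged_relevant = {"d1", "d4"}            # assessor decisions on the pool
print(estimate_recall(["d1", "d2", "d3"], judged_relevant))  # 0.5
```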

8. Precision & Recall – Special Cases
Special treatment is necessary if no document is found or no relevant documents exist (division by zero).
NO RELEVANT DOC. EXISTS: A = C = 0; 1st case: B = 0; 2nd case: B > 0
EMPTY RESULT SET: A = B = 0; 1st case: C = 0; 2nd case: C > 0
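One possible way to handle these division-by-zero cases in code is shown below, assuming the usual contingency reading A = found & relevant, B = found & not relevant, C = relevant but not found. The 0/0 := 1 convention used here is one frequent choice, not necessarily the one taken on the slide.

```python
def precision(a, b):
    # Empty result set (A = B = 0): nothing wrong was returned -> define as 1.
    return a / (a + b) if (a + b) > 0 else 1.0

def recall(a, c):
    # No relevant document exists (A = C = 0): nothing was missed -> define as 1.
    return a / (a + c) if (a + c) > 0 else 1.0

print(precision(0, 0), recall(0, 0))  # both special cases -> 1.0 1.0
print(precision(0, 3), recall(0, 2))  # only irrelevant found / all missed -> 0.0 0.0
```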

9. Precision & Recall Graphs
Comparing 2 systems:
System 1: Prec 1 = 0.6, Rec 1 = 0.3
System 2: Prec 2 = 0.4, Rec 2 = 0.6
Which one is better? Precision-recall graph: [plot of precision over recall for both systems]
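A quick sketch of how such a precision-recall plot could be produced (matplotlib is assumed to be available). Each system contributes one point; which point is "better" depends on whether the application values precision or recall more.

```python
import matplotlib.pyplot as plt

# (recall, precision) pairs for the two systems from the slide
systems = {"System 1": (0.3, 0.6), "System 2": (0.6, 0.4)}

for name, (rec, prec) in systems.items():
    plt.scatter(rec, prec, label=name)

plt.xlabel("Recall")
plt.ylabel("Precision")
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.legend()
plt.show()
```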

10. The F Measure
Alternative measures exist, including ones combining precision p and recall r in one single value. Example: the F measure
F = ((β² + 1) · p · r) / (β² · p + r)
(β = relative weight for recall, manually set)
[Figure: example curves for different β]
SOURCE: N. FUHR (UNIV. DUISBURG), SKRIPTUM ZUR VORLESUNG INFORMATION RETRIEVAL, SS 2006
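A direct implementation of this formula as a sketch; beta is the relative weight for recall, with beta = 1 giving the harmonic mean of precision and recall (the common F1 score).

```python
def f_measure(p, r, beta=1.0):
    """F measure combining precision p and recall r with recall weight beta."""
    if p == 0.0 and r == 0.0:
        return 0.0
    return (beta**2 + 1) * p * r / (beta**2 * p + r)

print(f_measure(0.6, 0.3))            # F1 for System 1: 0.4
print(f_measure(0.4, 0.6))            # F1 for System 2: 0.48
print(f_measure(0.6, 0.3, beta=2.0))  # recall weighted higher
```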

11. Calculating Average Precision Values
1. Macro assessment: estimates the expected value of the precision for a randomly chosen query (query- or user-oriented). Problem: queries with an empty result set.
2. Micro assessment: estimates the likelihood of a randomly chosen document being relevant (document- or system-oriented). Problem: does not satisfy monotonicity.
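The following toy computation illustrates the difference between the two averaging styles for precision; the per-query counts are made up for illustration.

```python
queries = [
    {"found_and_relevant": 3, "found": 5},
    {"found_and_relevant": 1, "found": 10},
]

# Macro: average the per-query precision values (query-oriented).
macro = sum(q["found_and_relevant"] / q["found"] for q in queries) / len(queries)

# Micro: pool the counts over all queries first (document-oriented).
micro = sum(q["found_and_relevant"] for q in queries) / sum(q["found"] for q in queries)

print(macro)  # (0.6 + 0.1) / 2 = 0.35
print(micro)  # 4 / 15 ≈ 0.267
```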

12. Monotonicity of Precision & Recall
Monotonicity: adding a query that delivers the same results for both systems does not change their quality assessment. Example (precision):
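Since the slide's example table is not reproduced in the transcript, here is a small made-up numerical illustration of the property for macro-averaged precision.

```python
def macro_precision(per_query_precisions):
    return sum(per_query_precisions) / len(per_query_precisions)

system1 = [0.8, 0.4]   # precision of system 1 on queries q1, q2
system2 = [0.6, 0.2]   # precision of system 2 on the same queries

print(macro_precision(system1) > macro_precision(system2))   # True

# Add query q3 on which both systems deliver the same result (precision 0.5):
# the relative assessment of the two systems must not change.
print(macro_precision(system1 + [0.5]) > macro_precision(system2 + [0.5]))  # still True
```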

13. Precision & Recall for Rankings
Distinguish between linear and weak rankings. Basic idea: evaluate precision and recall by looking at the top n results for different n. Generally: precision decreases and recall increases with growing n. [Figure: precision and recall as functions of n]
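A minimal sketch of precision and recall at cut-off n for a ranked result list under binary relevance; the ranking and the relevance judgments below are illustrative assumptions.

```python
def precision_recall_at_n(ranking, relevant, n):
    """Precision and recall computed over the top-n documents of a ranking."""
    top_n = ranking[:n]
    hits = sum(1 for doc in top_n if doc in relevant)
    precision = hits / n
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

ranking = ["B", "E", "F", "G", "D", "H"]      # ranked result list
relevant = {"B", "D", "E", "G"}               # assumed relevance judgments

for n in (1, 3, 6):
    print(n, precision_recall_at_n(ranking, relevant, n))
# Typically precision falls and recall rises as n grows.
```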

  14. Precision & Recall for Rankings (Cont.)

15. Realizing Evaluations
Now we have a system to evaluate and:
- measures to quantify performance
- methods to calculate them
What else do we need?
- documents d_j (test set)
- tasks (information needs) and respective queries q_i
- relevance judgments r_ij (normally binary)
- results (delivered by the system)
Evaluation = comparison of the given, perfect result (q_i, d_j, r_ij) with the result from the system (q_i, d_j, r_ij(S1)).
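A rough sketch of this comparison: given relevance judgments r_ij and the results delivered by a system S1, compute per-query precision and recall. The data structures and names are assumptions for illustration, not a real test-collection format.

```python
judgments = {            # q_i -> set of relevant documents d_j (r_ij = 1)
    "q1": {"d1", "d3"},
    "q2": {"d2"},
}
system_results = {       # q_i -> documents returned by system S1
    "q1": ["d1", "d2"],
    "q2": ["d2", "d3", "d4"],
}

for q, relevant in judgments.items():
    found = set(system_results.get(q, []))
    hits = len(found & relevant)
    precision = hits / len(found) if found else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    print(q, round(precision, 2), round(recall, 2))
# q1 -> precision 0.5, recall 0.5; q2 -> precision 0.33, recall 1.0
```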

16. The TREC Conference Series
In the old days, IR evaluation was problematic because:
- no good (i.e. big) test sets
- no comparability because of different test sets
Motivation for initiatives such as TREC: Text REtrieval Conference (TREC), since 1992, see http://trec.nist.gov/
Goals of TREC:
- create realistic, significant test sets
- achieve comparability of different systems
- establish common basics for IR evaluation
- increase technology transfer between industry and research

17. The TREC Conf. Series (Cont.)
TREC offers:
- various collections of test data
- standardized retrieval tasks (queries & topics)
- related relevance measures
- different tasks (tracks) for certain problems
Examples of tracks targeted by TREC:
- traditional text retrieval
- spoken document retrieval
- non-English or multilingual retrieval
- information filtering
- user interactions
- web search, SPAM (since 2005), Blog (since 2005)
- video retrieval
- etc.

18. Advantages and Disadvantages of TREC
TREC (and other IR initiatives): very successful; enabled progress that otherwise would probably not have happened.
But disadvantages exist as well, e.g.:
- only compares performance, but not the actual reasons for different behavior
- unrealistic data (e.g. still too small, not representative enough)
- often just batch-mode evaluation, no interactivity or user experience (note: there are interactivity tracks!)
- often no analysis of significance
Note: most of these arguments are general problems of IR evaluation and not necessarily TREC-specific.

19. TREC Home Page
Visit the TREC site at http://trec.nist.gov and browse the different tracks (gives you an idea about what is going on in the IR community).

20. Recap: IR System & Tasks Involved
[Diagram repeated from slide 3: information need, user interface, query, query processing (parsing & term processing), logical view of the information need, searching, ranking, result representation, results; documents, select data for indexing, parsing & term processing, index, docs; performance evaluation.]
