
Information Retrieval

CSE 8337 (Part D)

Spring 2009

Some material for these slides was obtained from:

Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto

http://www.sims.berkeley.edu/~hearst/irbook/

Data Mining Introductory and Advanced Topics by Margaret H. Dunham

http://www.engr.smu.edu/~mhd/book

Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze

http://informationretrieval.org



CSE 8337 Outline

  • Introduction

  • Simple Text Processing

  • Boolean Queries

  • Web Searching/Crawling

  • Indexes

  • Vector Space Model

  • Matching

  • Evaluation



Why System Evaluation?

  • There are many retrieval models/algorithms/systems; which one is the best?

  • What does best mean?

  • IR evaluation may not actually look at traditional CS metrics of space/time.

  • What is the best component for:

    • Ranking function (dot-product, cosine, …)

    • Term selection (stopword removal, stemming…)

    • Term weighting (TF, TF-IDF,…)

  • How far down the ranked list will a user need to look to find some/all relevant documents?



Measures for a search engine

  • How fast does it index

    • Number of documents/hour

    • (Average document size)

  • How fast does it search

    • Latency as a function of index size

  • Expressiveness of query language

    • Ability to express complex information needs

    • Speed on complex queries

  • Uncluttered UI

  • Is it free?



Measures for a search engine

  • All of the preceding criteria are measurable: we can quantify speed/size; we can make expressiveness precise

  • The key measure: user happiness

    • What is this?

    • Speed of response/size of index are factors

    • But blindingly fast, useless answers won’t make a user happy

  • Need a way of quantifying user happiness



Happiness: elusive to measure

  • Most common proxy: relevance of search results

  • But how do you measure relevance?

  • We will detail a methodology here, then examine its issues

  • Relevance measurement requires 3 elements:

    • A benchmark document collection

    • A benchmark suite of queries

    • A usually binary assessment of either Relevant or Nonrelevant for each query and each document



Difficulties in Evaluating IR Systems

  • Effectiveness is related to the relevancy of retrieved items.

  • Relevancy is not typically binary but continuous.

  • Even if relevancy is binary, it can be a difficult judgment to make.

  • Relevancy, from a human standpoint, is:

    • Subjective: Depends upon a specific user’s judgment.

    • Situational: Relates to user’s current needs.

    • Cognitive: Depends on human perception and behavior.

    • Dynamic: Changes over time.



How to perform evaluation

  • Start with a corpus of documents.

  • Collect a set of queries for this corpus.

  • Have one or more human experts exhaustively label the relevant documents for each query.

  • Typically assumes binary relevance judgments.

  • Requires considerable human effort for large document/query corpora.
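
To make this setup concrete, here is a minimal sketch (in Python) of how such a test collection can be organized; all document and query identifiers are hypothetical placeholders:

    # Minimal sketch of a test collection with binary relevance judgments.
    # The identifiers (d1, q1, ...) are hypothetical placeholders.

    corpus = {
        "d1": "text of document one ...",
        "d2": "text of document two ...",
        "d3": "text of document three ...",
    }

    queries = {
        "q1": "information retrieval evaluation",
        "q2": "precision and recall",
    }

    # Exhaustive binary judgments: for each query, the set of relevant doc ids.
    qrels = {
        "q1": {"d1", "d3"},
        "q2": {"d2"},
    }

    def judged_relevant(query_id, doc_id):
        """Gold-standard binary relevance judgment."""
        return doc_id in qrels.get(query_id, set())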



IR Evaluation Metrics

  • Precision/Recall

    • P/R graph

      • Regular

      • Smoothing

        • Interpolating

        • Averaging

      • ROC Curve

    • MAP

    • R-Precision

    • P/R points

  • F-Measure

  • E-Measure

  • Fallout

  • Novelty

  • Coverage

  • Utility

  • ….


Precision and Recall

[Figure: Venn diagram over the entire document collection, split into relevant vs. irrelevant and retrieved vs. not retrieved, giving four regions: retrieved & relevant, retrieved & irrelevant, not retrieved but relevant, not retrieved & irrelevant.]



Precision and Recall

  • Precision

    • The ability to retrieve top-ranked documents that are mostly relevant.

  • Recall

    • The ability of the search to find all of the relevant items in the corpus.
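
A minimal sketch of these two definitions in Python, assuming the retrieved and relevant document ids are available as sets:

    def precision(retrieved, relevant):
        """Fraction of the retrieved documents that are relevant."""
        if not retrieved:
            return 0.0   # precision is undefined when nothing is retrieved
        return len(retrieved & relevant) / len(retrieved)

    def recall(retrieved, relevant):
        """Fraction of the relevant documents that were retrieved."""
        if not relevant:
            return 0.0   # recall is undefined when the collection has no relevant docs
        return len(retrieved & relevant) / len(relevant)

    # Example: 4 docs retrieved, 6 relevant in the corpus, 3 in the intersection:
    # precision = 3/4 = 0.75 and recall = 3/6 = 0.5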



Determining Recall is Difficult

  • Total number of relevant items is sometimes not available:

    • Sample across the database and perform relevance judgment on these items.

    • Apply different retrieval algorithms to the same database for the same query. The aggregate of relevant items is taken as the total relevant set.


Trade-off between Recall and Precision

[Figure: precision (vertical axis, 0 to 1) plotted against recall (horizontal axis, 0 to 1). One extreme returns relevant documents but misses many useful ones too; the other returns most relevant documents but includes lots of junk. The ideal and the desired operating areas lie toward the upper right.]



Recall-Precision Graph Example



A precision-recall curve



Recall-Precision Graph Smoothing

  • Avoid sawtooth lines by smoothing

  • Interpolate for one query

  • Average across queries



Interpolating a Recall/Precision Curve

  • Interpolate a precision value for each standard recall level:

    • rj {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}

    • r0 = 0.0, r1 = 0.1, …, r10=1.0

  • The interpolated precision at the j-th standard recall level is the maximum known precision at any recall level between the j-th and (j + 1)-th level:

    P(rj) = max { P(r) : rj ≤ r ≤ rj+1 }



Interpolated precision

  • Idea: If locally precision increases with increasing recall, then you should get to count that…

  • So take the max of the precisions to the right of that recall value. (Need not be at only the standard levels.)
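
A short sketch of this interpolation rule (maximum precision at any recall level to the right), assuming the raw curve for one query is given as a list of (recall, precision) pairs:

    def interpolate(points, levels=None):
        """Interpolated precision at the standard recall levels for one query.

        points: list of (recall, precision) pairs.
        For each standard level r_j, take the maximum precision observed at
        any recall >= r_j (the "max of precisions to the right" rule).
        """
        if levels is None:
            levels = [j / 10 for j in range(11)]   # 0.0, 0.1, ..., 1.0
        interpolated = []
        for r_j in levels:
            candidates = [p for r, p in points if r >= r_j]
            interpolated.append(max(candidates) if candidates else 0.0)
        return interpolated

    # interpolate([(0.25, 1.0), (0.5, 0.67), (0.75, 0.4)])
    # -> [1.0, 1.0, 1.0, 0.67, 0.67, 0.67, 0.4, 0.4, 0.0, 0.0, 0.0]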



Precision across queries

  • Recall and Precision are calculated for a specific query.

  • Generally want a value for many queries.

  • Calculate the average precision/recall over a set of queries.

  • Average precision at recall level r:

    Pavg(r) = (1 / Nq) Σ Pi(r)   (sum over all Nq queries)

  • Nq – number of queries

  • Pi(r) – precision at recall level r for the i-th query
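
A minimal sketch of this macro-average, assuming each query's precision values at the standard recall levels have already been computed (e.g., by the interpolation sketch above):

    def average_precision_at_levels(per_query_precisions):
        """Average Pi(r) over Nq queries at each standard recall level.

        per_query_precisions: a list of Nq lists, one per query, each holding
        the precision of that query at the standard recall levels.
        """
        n_q = len(per_query_precisions)
        n_levels = len(per_query_precisions[0])
        return [sum(query[j] for query in per_query_precisions) / n_q
                for j in range(n_levels)]

    # average_precision_at_levels([[1.0, 0.5, 0.0], [1.0, 1.0, 0.5]]) -> [1.0, 0.75, 0.25]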



Average Recall/Precision Curve

  • Typically average performance over a large set of queries.

  • Compute average precision at each standard recall level across all queries.

  • Plot average precision/recall curves to evaluate overall system performance on a document/query corpus.



Compare Two or More Systems

  • The curve closest to the upper right-hand corner of the graph indicates the best performance



Operating Characteristic Curve (ROC Curve)



ROC Curve Data

  • False Positive Rate vs True Positive Rate

  • True Positive Rate

    • Sensitivity

    • Recall

    • tp/(tp+fn)

  • False Positive Rate

    • fp/(fp+tn)

    • 1-Specificity

    • Specificity = tn/(fp+tn)
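
A short sketch of these two rates from raw counts, where tp, fp, tn, fn are assumed to come from comparing a retrieved set against the gold standard:

    def true_positive_rate(tp, fn):
        """Sensitivity / recall: tp / (tp + fn)."""
        return tp / (tp + fn) if (tp + fn) else 0.0

    def false_positive_rate(fp, tn):
        """1 - specificity: fp / (fp + tn)."""
        return fp / (fp + tn) if (fp + tn) else 0.0

    # Sweeping a rank cutoff through the result list and plotting
    # (false positive rate, true positive rate) at each cutoff traces the ROC curve.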



Yet more evaluation measures…

  • Mean average precision (MAP)

    • Average of the precision value obtained for the top k documents, each time a relevant doc is retrieved

    • Avoids interpolation, use of fixed recall levels

    • MAP for a query collection is the arithmetic average.

      • Macro-averaging: each query counts equally

  • R-precision

    • If we have a known (though perhaps incomplete) set of relevant documents of size Rel, then calculate the precision of the top Rel docs returned.

    • Perfect system could score 1.0.
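
A sketch of both measures, assuming each query's result is a ranked list of doc ids and qrels maps a query to its set of known relevant docs:

    def average_precision(ranking, relevant):
        """Mean of the precision values at the rank of each relevant doc retrieved;
        relevant docs that are never retrieved contribute zero."""
        hits, precisions = 0, []
        for rank, doc in enumerate(ranking, start=1):
            if doc in relevant:
                hits += 1
                precisions.append(hits / rank)     # precision at this rank
        return sum(precisions) / len(relevant) if relevant else 0.0

    def mean_average_precision(run, qrels):
        """Macro-average of per-query average precision (each query counts equally)."""
        return sum(average_precision(run[q], qrels[q]) for q in run) / len(run)

    def r_precision(ranking, relevant):
        """Precision of the top Rel documents, where Rel = number of known relevant docs."""
        rel = len(relevant)
        return sum(1 for d in ranking[:rel] if d in relevant) / rel if rel else 0.0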



Variance

  • For a test collection, it is usual that a system does crummily on some information needs (e.g., MAP = 0.1) and excellently on others (e.g., MAP = 0.7)

  • Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query.

  • That is, there are easy information needs and hard ones!



Evaluation

  • Graphs are good, but people want summary measures!

    • Precision at fixed retrieval level

      • Precision-at-k: Precision of top k results

      • Perhaps appropriate for most of web search: all people want are good matches on the first one or two results pages

      • But: averages badly and has an arbitrary parameter of k

    • 11-point interpolated average precision

      • The standard measure in the early TREC competitions: you take the precision at 11 levels of recall varying from 0 to 1 in tenths, using interpolation (the value for 0 is always interpolated!), and average them

      • Evaluates performance at all recall levels
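
A short sketch of precision-at-k, assuming a ranked list of doc ids and the gold-standard relevant set:

    def precision_at_k(ranking, relevant, k=10):
        """Precision of the top k results (k is an arbitrary but common cutoff)."""
        return sum(1 for doc in ranking[:k] if doc in relevant) / k

    # precision_at_k(["d3", "d7", "d1", "d9", "d2"], {"d3", "d1", "d2"}, k=5) -> 0.6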



Typical (good) 11 point precisions

  • SabIR/Cornell 8A1 11pt precision from TREC 8 (1999)



Computing Recall/Precision Points

  • For a given query, produce the ranked list of retrievals.

  • Adjusting a threshold on this ranked list produces different sets of retrieved documents, and therefore different recall/precision measures.

  • Mark each document in the ranked list that is relevant according to the gold standard.

  • Compute a recall/precision pair for each position in the ranked list that contains a relevant document.



Computing Recall/Precision Points: An Example (modified from [Salton83])

Let total # of relevant docs = 6

Check each new recall point:

R = 1/6 = 0.167; P = 1/1 = 1.0   (relevant doc at rank 1)

R = 2/6 = 0.333; P = 2/2 = 1.0   (relevant doc at rank 2)

R = 3/6 = 0.5;   P = 3/4 = 0.75  (relevant doc at rank 4)

R = 4/6 = 0.667; P = 4/6 = 0.667 (relevant doc at rank 6)

R = 5/6 = 0.833; P = 5/13 = 0.38 (relevant doc at rank 13)

One relevant document is never retrieved, so we never reach 100% recall.
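
The walk-through above can be reproduced with a short sketch that scans the ranked list and records a recall/precision pair at every relevant document; the relevant ranks 1, 2, 4, 6 and 13 are read off the precision values above:

    def recall_precision_points(relevance_flags, total_relevant):
        """(recall, precision) at each rank that holds a relevant document."""
        hits, points = 0, []
        for rank, is_relevant in enumerate(relevance_flags, start=1):
            if is_relevant:
                hits += 1
                points.append((hits / total_relevant, hits / rank))
        return points

    # Relevant documents at ranks 1, 2, 4, 6 and 13; one of the 6 is never retrieved.
    flags = [rank in (1, 2, 4, 6, 13) for rank in range(1, 14)]
    print(recall_precision_points(flags, total_relevant=6))
    # -> the five (R, P) pairs listed above, ending with (0.833, 0.385)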



F-Measure

  • One measure of performance that takes into account both recall and precision.

  • Harmonic mean of recall and precision: F = 2PR / (P + R)

  • Calculated at a specific document in the ranking.

  • Compared to arithmetic mean, both need to be high for harmonic mean to be high.

  • Compromise between precision and recall



A combined measure: F

  • The combined measure that assesses the precision/recall tradeoff is the F measure (weighted harmonic mean):

    F = 1 / (α/P + (1 - α)/R) = (β² + 1)PR / (β²P + R), where β² = (1 - α)/α

  • People usually use the balanced F1 measure

    • i.e., with β = 1 or α = ½
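
A minimal sketch of the balanced and weighted forms, following the formula above:

    def f1(precision, recall):
        """Balanced F: harmonic mean of precision and recall."""
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    def f_beta(precision, recall, beta=1.0):
        """Weighted harmonic mean: (beta^2 + 1) P R / (beta^2 P + R)."""
        if precision == 0 or recall == 0:
            return 0.0
        b2 = beta ** 2
        return (b2 + 1) * precision * recall / (b2 * precision + recall)

    # Both must be high for the harmonic mean to be high:
    # f1(0.5, 0.5) = 0.5, but f1(0.9, 0.1) = 0.18 (the arithmetic mean is 0.5 in both cases).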



F1 and other averages



E Measure (parameterized F Measure)

  • A variant of F measure that allows weighting emphasis on precision or recall:

  • Value of β controls the trade-off:

    • β = 1: Equally weight precision and recall (E = F).

    • β > 1: Weight precision more.

    • β < 1: Weight recall more.



Fallout Rate

  • Problems with both precision and recall:

    • Number of irrelevant documents in the collection is not taken into account.

    • Recall is undefined when there is no relevant document in the collection.

    • Precision is undefined when no document is retrieved.



Fallout

  • Want fallout to be close to 0.

  • In general want to maximize recall and minimize fallout.

  • Examine the fallout-recall graph. More systems-oriented than recall-precision.
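
A minimal sketch, assuming the standard definition of fallout as the fraction of nonrelevant documents in the collection that get retrieved:

    def fallout(retrieved, relevant, collection):
        """Fraction of the nonrelevant documents that were retrieved."""
        nonrelevant = collection - relevant
        if not nonrelevant:
            return 0.0     # fallout is undefined if every document is relevant
        return len(retrieved & nonrelevant) / len(nonrelevant)

    # An ideal system has recall close to 1 and fallout close to 0.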



Subjective Relevance Measure

  • Novelty Ratio: The proportion of items retrieved and judged relevant by the user that they were previously unaware of.

    • Ability to find new information on a topic.

  • Coverage Ratio: The proportion of relevant items retrieved out of the total relevant documents known to a user prior to the search.

    • Relevant when the user wants to locate documents which they have seen before (e.g., the budget report for Year 2000).



Utility

  • Subjective measure

  • Cost-Benefit Analysis for retrieved documents

    • Cr – Benefit of retrieving relevant document

    • Cnr – Cost of retrieving a nonrelevant document

    • Crn – Cost of not retrieving a relevant document

    • Nr – Number of relevant documents retrieved

    • Nnr – Number of nonrelevant documents retrieved

    • Nrn – Number of relevant documents not retrieved
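
The slide lists the ingredients but not the combining formula; one common linear cost-benefit form, shown here purely as an assumption, is:

    def utility(n_r, n_nr, n_rn, c_r, c_nr, c_rn):
        """Linear cost-benefit utility (one possible formulation, not from the slides).

        n_r  - number of relevant documents retrieved      (benefit c_r each)
        n_nr - number of nonrelevant documents retrieved   (cost c_nr each)
        n_rn - number of relevant documents not retrieved  (cost c_rn each)
        """
        return c_r * n_r - c_nr * n_nr - c_rn * n_rn

    # Example: utility(4, 9, 2, c_r=1.0, c_nr=0.2, c_rn=0.5) = 4.0 - 1.8 - 1.0 = 1.2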



Other Factors to Consider

  • User effort: Work required from the user in formulating queries, conducting the search, and screening the output.

  • Response time: Time interval between receipt of a user query and the presentation of system responses.

  • Form of presentation: Influence of search output format on the user’s ability to utilize the retrieved materials.

  • Collection coverage: Extent to which any/all relevant items are included in the document corpus.



Experimental Setup for Benchmarking

  • Analytical performance evaluation is difficult for document retrieval systems because many characteristics such as relevance, distribution of words, etc., are difficult to describe with mathematical precision.

  • Performance is measured by benchmarking. That is, the retrieval effectiveness of a system is evaluated on a given set of documents, queries, and relevance judgments.

  • Performance data is valid only for the environment under which the system is evaluated.


Benchmarks

  • A benchmark collection contains:

    • A set of standard documents and queries/topics.

    • A list of relevant documents for each query.

  • Standard collections for traditional IR:

    TREC: http://trec.nist.gov/

[Figure: benchmarking setup. A standard document collection and standard queries are fed to the algorithm under test; the retrieved result is compared against the standard result in an evaluation step that measures precision and recall.]



Benchmarking: The Problems

  • Performance data is valid only for a particular benchmark.

  • Building a benchmark corpus is a difficult task.

  • Benchmark web corpora are just starting to be developed.

  • Benchmark foreign-language corpora are just starting to be developed.



The TREC Benchmark

  • TREC: Text REtrieval Conference (http://trec.nist.gov/)

  • Originated from the TIPSTER program sponsored by the Defense Advanced Research Projects Agency (DARPA).

  • Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA.

  • Participants are given parts of a standard set of documents and TOPICS (from which queries have to be derived) in different stages for training and testing.

  • Participants submit the P/R values for the final document and query corpus and present their results at the conference.



The TREC Objectives

  • Provide a common ground for comparing different IR techniques.

    • Same set of documents and queries, and same evaluation method.

  • Sharing of resources and experiences in developing the benchmark.

    • With major sponsorship from government to develop large benchmark collections.

  • Encourage participation from industry and academia.

  • Development of new evaluation techniques, particularly for new applications.

    • Retrieval, routing/filtering, non-English collections, web-based collections, question answering.



Test Collections



From document collections to test collections

  • Still need

    • Test queries

    • Relevance assessments

  • Test queries

    • Must be germane to docs available

    • Best designed by domain experts

    • Random query terms generally not a good idea

  • Relevance assessments

    • Human judges, time-consuming

    • Are human panels perfect?



Unit of Evaluation

  • We can compute precision, recall, F, and ROC curve for different units.

  • Possible units

    • Documents (most common)

    • Facts (used in some TREC evaluations)

    • Entities (e.g., car companies)

  • May produce different results. Why?



Kappa measure for inter-judge (dis)agreement

  • Kappa measure

    • Agreement measure among judges

    • Designed for categorical judgments

    • Corrects for chance agreement

  • Kappa = [ P(A) – P(E) ] / [ 1 – P(E) ]

  • P(A) – proportion of time judges agree

  • P(E) – what agreement would be by chance

  • Kappa = 0 for chance agreement, 1 for total agreement.



Kappa Measure: Example

Two judges each assess the relevance of 400 documents:

                          Judge 2: Relevant    Judge 2: Nonrelevant
  Judge 1: Relevant             300                     20
  Judge 1: Nonrelevant           10                     70

P(A)? P(E)?



Kappa Example

  • P(A) = 370/400 = 0.925

  • P(nonrelevant) = (10+20+70+70)/800 = 0.2125

  • P(relevant) = (10+20+300+300)/800 = 0.7875

  • P(E) = 0.2125^2 + 0.7875^2 = 0.665

  • Kappa = (0.925 – 0.665)/(1-0.665) = 0.776

  • Kappa > 0.8 = good agreement

  • 0.67 < Kappa < 0.8 -> “tentative conclusions” (Carletta ’96)

  • Depends on purpose of study

  • For >2 judges: average pairwise kappas
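
A short sketch that reproduces the numbers above from the 2x2 judgment table:

    def kappa_two_judges(both_rel, judge1_only, judge2_only, both_nonrel):
        """Kappa for two judges making binary relevant/nonrelevant judgments."""
        total = both_rel + judge1_only + judge2_only + both_nonrel
        p_a = (both_rel + both_nonrel) / total                     # observed agreement
        # Pooled marginals, as in the worked example above.
        p_rel = (2 * both_rel + judge1_only + judge2_only) / (2 * total)
        p_nonrel = (2 * both_nonrel + judge1_only + judge2_only) / (2 * total)
        p_e = p_rel ** 2 + p_nonrel ** 2                           # chance agreement
        return (p_a - p_e) / (1 - p_e)

    print(round(kappa_two_judges(300, 20, 10, 70), 3))             # -> 0.776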



Interjudge Agreement: TREC 3



Impact of Inter-judge Agreement

  • Impact on absolute performance measure can be significant (0.32 vs 0.39)

  • Little impact on ranking of different systems or relative performance

  • Suppose we want to know if algorithm A is better than algorithm B

  • A standard information retrieval experiment will give us a reliable answer to this question.



Critique of pure relevance

  • Relevance vs. Marginal Relevance

    • A document can be redundant even if it is highly relevant

    • Duplicates

    • The same information from different sources

    • Marginal relevance is a better measure of utility for the user.

  • Using facts/entities as evaluation units more directly measures true relevance.

  • But harder to create evaluation set



Can we avoid human judgment?

  • No

  • Makes experimental work hard

    • Especially on a large scale

  • In some very specific settings, can use proxies

    • E.g., for approximate vector space retrieval, we can measure how close (in cosine distance) the docs found by an approximate retrieval algorithm are to the truly closest docs

  • But once we have test collections, we can reuse them (so long as we don’t overtrain too badly)



Evaluation at large search engines

  • Search engines have test collections of queries and hand-ranked results

  • Recall is difficult to measure on the web

  • Search engines often use precision at top k, e.g., k = 10

  • . . . or measures that reward you more for getting rank 1 right than for getting rank 10 right.

    • NDCG (Normalized Discounted Cumulative Gain); see the sketch after this list

  • Search engines also use non-relevance-based measures.

    • Clickthrough on first result

      • Not very reliable if you look at a single clickthrough … but pretty reliable in the aggregate.

    • Studies of user behavior in the lab

    • A/B testing
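
As a sketch of the rank-sensitive idea behind NDCG (mentioned above), one common formulation discounts each result's graded relevance by the log of its rank and normalizes by the ideal ordering:

    import math

    def dcg(gains):
        """Discounted cumulative gain of a list of graded relevance values, in rank order."""
        return sum(g / math.log2(rank + 1) for rank, g in enumerate(gains, start=1))

    def ndcg(gains, k=10):
        """DCG of the top-k results divided by the DCG of the ideal (sorted) ranking."""
        ideal = dcg(sorted(gains, reverse=True)[:k])
        return dcg(gains[:k]) / ideal if ideal > 0 else 0.0

    # Rewarding rank 1 more than rank 10:
    # ndcg([3, 1, 0, 2]) > ndcg([0, 1, 2, 3]) even though both lists contain the same gains.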



A/B testing

  • Purpose: Test a single innovation

  • Prerequisite: You have a large search engine up and running.

  • Have most users use old system

  • Divert a small proportion of traffic (e.g., 1%) to the new system that includes the innovation

  • Evaluate with an “automatic” measure like clickthrough on first result

  • Now we can directly see if the innovation does improve user happiness.

  • Probably the evaluation methodology that large search engines trust most

  • In principle less powerful than doing a multivariate regression analysis, but easier to understand
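
A minimal sketch of the traffic split and the clickthrough comparison; the hash-based bucket assignment and the 1% fraction are illustrative assumptions, not a description of any particular engine:

    import hashlib

    def bucket(user_id, new_system_fraction=0.01):
        """Deterministically divert a small fraction of users to the new system."""
        h = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 10000
        return "new" if h < new_system_fraction * 10000 else "old"

    def first_result_clickthrough(logs):
        """logs: iterable of (bucket_name, clicked_first_result) pairs, one per query."""
        totals = {"old": 0, "new": 0}
        clicks = {"old": 0, "new": 0}
        for name, clicked in logs:
            totals[name] += 1
            clicks[name] += int(clicked)
        return {name: clicks[name] / totals[name] if totals[name] else 0.0
                for name in totals}

    # If the "new" bucket shows a reliably higher first-result clickthrough rate
    # in the aggregate, the innovation is judged to improve user happiness.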

