
Evaluation of IR Systems

Adapted from lectures by Prabhakar Raghavan (Yahoo and Stanford) and Christopher Manning (Stanford)

This lecture
  • Results summaries:
    • Making our good results usable to a user
  • How do we know if our results are any good?
    • Evaluating a search engine
      • Benchmarks
      • Precision and recall

Result Summaries
  • Having ranked the documents matching a query, we wish to present a results list.
  • Most commonly, a list of the document titles plus a short summary, aka “10 blue links”.

Summaries
  • The title is typically automatically extracted from document metadata. What about the summaries?
    • This description is crucial.
    • User can identify good/relevant hits based on description.
  • Two basic kinds:
    • A static summary of a document is always the same, regardless of the query that hit the doc.
    • A dynamic summary is a query-dependent attempt to explain why the document was retrieved for the query at hand.

Static summaries
  • In typical systems, the static summary is a subset of the document.
    • Simplest heuristic: the first 50 (or so – this can be varied) words of the document
      • Summary cached at indexing time
    • More sophisticated: extract from each document a set of “key” sentences
      • Simple NLP heuristics to score each sentence
      • Summary is made up of top-scoring sentences.
    • Most sophisticated: NLP used to synthesize a summary
      • Seldom used in IR (cf. text summarization work)

Dynamic summaries
  • Present one or more “windows” within the document that contain several of the query terms
    • “KWIC” snippets: Keyword in Context presentation
    • Generated in conjunction with scoring
      • If query found as a phrase, all or some occurrences of the phrase in the doc
      • If not, document windows that contain multiple query terms
  • The summary itself gives the entire content of the window – all terms, not only the query terms.

Generating dynamic summaries
  • If we have only a positional index, we cannot (easily) reconstruct context window surrounding hits.
  • If we cache the documents at index time, then we can find windows in them, cueing from hits found in the positional index.
    • E.g., positional index says “the query is a phrase in position 4378” so we go to this position in the cached document and stream out the content
  • Most often, cache only a fixed-size prefix of the doc.
    • Note: Cached copy can be outdated

Dynamic summaries
  • Producing good dynamic summaries is a tricky optimization problem
    • The real estate for the summary is normally small and fixed
    • Want snippets to be long enough to be useful
    • Want linguistically well-formed snippets
    • Want snippets maximally informative about doc
  • But users really like snippets, even if they complicate IR system design

Alternative results presentations?
  • An active area of HCI research
  • An alternative: http://www.searchme.com/ copies the idea of Apple’s Cover Flow for search results

Measures for a search engine
  • How fast does it index
    • Number of documents/hour
    • (Average document size)
  • How fast does it search
    • Latency as a function of index size
  • Expressiveness of query language
    • Ability to express complex information needs
    • Speed on complex queries
  • Uncluttered UI
  • Is it free?

Measures for a search engine
  • All of the preceding criteria are measurable: we can quantify speed/size; we can make expressiveness precise
  • The key measure: user happiness
    • What is this?
      • Speed of response/size of index are factors
      • But blindingly fast, useless answers won’t make a user happy
    • Need a way of quantifying user happiness

Data Retrieval vs Information Retrieval
  • DR Performance Evaluation (after establishing correctness)
    • Response time
    • Index space
  • IR Performance Evaluation
    • How relevant is the answer set? (required to establish functional correctness, e.g., through benchmarks)

Measuring user happiness
  • Issue: who is the user we are trying to make happy?
    • Depends on the setting/context
  • Web engine: user finds what they want and returns to the engine
    • Can measure rate of return users
  • eCommerce site: user finds what they want and makes a purchase
    • Is it the end-user, or the eCommerce site, whose happiness we measure?
    • Measure time to purchase, or fraction of searchers who become buyers?

Measuring user happiness
  • Enterprise (company/govt/academic): Care about “user productivity”
    • How much time do my users save when looking for information?
    • Many other criteria having to do with breadth of access, secure access, etc.

Happiness: elusive to measure
  • Most common proxy: relevance of search results
      • But how do you measure relevance?
    • We will detail a methodology here, then examine its issues
  • Relevance measurement requires 3 elements:
      • A benchmark document collection
      • A benchmark suite of queries
      • A usually binary assessment of either Relevant or Nonrelevant for each query and each document
      • Some work on more-than-binary, but not the standard

Evaluating an IR system
  • Note: the information need is translated into a query
  • Relevance is assessed relative to the information need, not the query
    • E.g., Information need: I'm looking for information on whether drinking red wine is more effective at reducing heart attack risks than white wine.
    • Query: wine red white heart attack effective
  • You evaluate whether the doc addresses the information need, not whether it has these words

Difficulties with gauging Relevancy
  • Relevancy, from a human standpoint, is:
    • Subjective: Depends upon a specific user’s judgment.
    • Situational: Relates to user’s current needs.
    • Cognitive: Depends on human perception and behavior.
    • Dynamic: Changes over time.

Standard relevance benchmarks
  • TREC - National Institute of Standards and Technology (NIST) has run a large IR test bed for many years
  • Reuters and other benchmark doc collections used
  • “Retrieval tasks” specified
    • sometimes as queries
  • Human experts mark, for each query and for each doc, Relevant or Nonrelevant
    • or at least for subset of docs that some system returned for that query

Unranked retrieval evaluation: Precision and Recall
  • Precision: fraction of retrieved docs that are relevant = P(relevant|retrieved)
  • Recall: fraction of relevant docs that are retrieved = P(retrieved|relevant)
  • Precision P = tp/(tp + fp)
  • Recall R = tp/(tp + fn)
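
As an illustration (not part of the original slides), a minimal Python sketch of these two definitions, assuming the retrieved and relevant results are given as sets of document ids:

    # Hypothetical helper: precision and recall from sets of doc ids.
    def precision_recall(retrieved, relevant):
        retrieved, relevant = set(retrieved), set(relevant)
        tp = len(retrieved & relevant)   # relevant docs that were retrieved
        fp = len(retrieved - relevant)   # retrieved but not relevant
        fn = len(relevant - retrieved)   # relevant but not retrieved
        precision = tp / (tp + fp) if retrieved else 0.0
        recall = tp / (tp + fn) if relevant else 0.0
        return precision, recall

    # precision_recall({1, 2, 3, 4}, {2, 4, 5}) -> (0.5, 0.666...)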

Precision and Recall in Practice
  • Precision
    • The ability to retrieve top-ranked documents that are mostly relevant.
      • The fraction of the retrieved documents that are relevant.
  • Recall
    • The ability of the search to find all of the relevant items in the corpus.
      • The fraction of the relevant documents that are retrieved.

Should we instead use the accuracy measure for evaluation?
  • Given a query, an engine classifies each doc as “Relevant” or “Nonrelevant”
  • The accuracy of an engine: the fraction of these classifications that are correct
    • Accuracy is a commonly used evaluation measure in machine learning classification work
  • Why is this not a very useful evaluation measure in IR?

Why not just use accuracy?
  • How to build a 99.9999% accurate search engine on a low budget: return nothing for every query. Since almost every document in a large collection is nonrelevant to any given query, classifying everything as “Nonrelevant” is almost always correct.
  • People doing information retrieval want to find something and have a certain tolerance for junk.

Snoogle.com

Search for:

0 matching results found.

Precision/Recall
  • You can get high recall (but low precision) by retrieving all docs for all queries!
  • Recall is a non-decreasing function of the number of docs retrieved
  • In a good system, precision decreases as either the number of docs retrieved or recall increases
    • This is not a theorem, but a result with strong empirical confirmation

Trade-offs
  • Think of a plot of precision (y-axis, 0 to 1) against recall (x-axis, 0 to 1):
    • High precision, low recall: returns relevant documents but misses many useful ones too.
    • High recall, low precision: returns most relevant documents but includes a lot of junk.
    • The ideal is the top-right corner: precision = 1 and recall = 1.

Difficulties in using precision/recall
  • Should average over large document collection/query ensembles
  • Need human relevance assessments
    • People aren’t reliable assessors
  • Assessments have to be binary
    • Nuanced assessments?
  • Heavily skewed by collection/authorship
    • Results may not translate from one domain to another

A combined measure: F
  • Combined measure that assesses the precision/recall tradeoff is the F measure, the harmonic mean of precision P and recall R: F = 2PR / (P + R)
  • Harmonic mean is a conservative average: it is pulled toward the smaller of P and R
    • See CJ van Rijsbergen, Information Retrieval
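
A small Python sketch (an assumption of mine, not from the slides) of this measure; the beta parameter anticipates the parameterized form on the next slide:

    def f_measure(p, r, beta=1.0):
        # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); beta = 1 is the harmonic mean
        if p == 0.0 and r == 0.0:
            return 0.0
        b2 = beta * beta
        return (1 + b2) * p * r / (b2 * p + r)

    # f_measure(0.5, 0.667) -> ~0.571, pulled toward the smaller of P and R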

Aka E Measure (parameterized F Measure)
  • Variants of the F measure allow weighting emphasis on precision over recall: F_β = (1 + β²)PR / (β²P + R) (van Rijsbergen’s E measure is E = 1 - F)
  • Value of β controls the trade-off:
    • β = 1: equally weight precision and recall (the balanced F above)
    • β < 1: weight precision more
    • β > 1: weight recall more

Breakeven Point
  • Breakeven point is the point where precision equals recall.
  • Alternative single measure of IR effectiveness.
  • How do you compute it?

Evaluating ranked results
  • Evaluation of ranked results:
    • The system can return any number of results
    • By taking various numbers of the top returned documents (levels of recall), the evaluator can produce a precision-recall curve

Computing Recall/Precision Points: An Example

Let total # of relevant docs = 6. Check each new recall point as relevant documents appear in the ranking (the relevant documents sit at ranks 1, 2, 4, 6 and 13, as the precision values show):

  • R = 1/6 = 0.167; P = 1/1 = 1
  • R = 2/6 = 0.333; P = 2/2 = 1
  • R = 3/6 = 0.5; P = 3/4 = 0.75
  • R = 4/6 = 0.667; P = 4/6 = 0.667
  • R = 5/6 = 0.833; P = 5/13 = 0.38
  • One relevant document is never retrieved, so the ranking never reaches 100% recall.
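
A Python sketch of this computation (the ranking, with relevant documents at ranks 1, 2, 4, 6 and 13, is inferred from the precision values above):

    def recall_precision_points(relevance_flags, total_relevant):
        # relevance_flags[i] is 1 if the document at rank i+1 is relevant
        points, hits = [], 0
        for rank, rel in enumerate(relevance_flags, start=1):
            if rel:
                hits += 1
                points.append((hits / total_relevant, hits / rank))  # (recall, precision)
        return points

    flags = [1 if r in (1, 2, 4, 6, 13) else 0 for r in range(1, 14)]
    recall_precision_points(flags, total_relevant=6)
    # -> [(0.167, 1.0), (0.333, 1.0), (0.5, 0.75), (0.667, 0.667), (0.833, 0.385)]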

Interpolating a Recall/Precision Curve
  • Interpolate a precision value for each standard recall level:
    • r_j ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
    • r_0 = 0.0, r_1 = 0.1, …, r_10 = 1.0
  • The interpolated precision at the j-th standard recall level is the maximum known precision at any recall level at or above r_j: P_interp(r_j) = max { P(r) : r ≥ r_j }
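
A sketch of this interpolation rule in Python, reusing the (recall, precision) points computed in the earlier example:

    def interpolated_precision(points, levels=None):
        # points: (recall, precision) pairs; interpolated P at r_j is the
        # maximum precision at any recall level >= r_j (0 if none exists)
        if levels is None:
            levels = [i / 10 for i in range(11)]   # 0.0, 0.1, ..., 1.0
        result = []
        for r in levels:
            candidates = [p for rec, p in points if rec >= r]
            result.append(max(candidates) if candidates else 0.0)
        return result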

Average Recall/Precision Curve
  • Typically average performance over a large set of queries.
  • Compute average precision at each standard recall level across all queries.
  • Plot average precision/recall curves to evaluate overall system performance on a document/query corpus.

Evaluation Metrics (cont’d)
  • Graphs are good, but people want summary measures!
    • Precision at fixed retrieval level
      • Precision-at-k: Precision of top k results
      • Perhaps appropriate for most of web search: all people want are good matches on the first one or two results pages
      • But: averages badly and has an arbitrary parameter of k
    • 11-point interpolated average precision
      • The standard measure in the early TREC competitions: take the precision at 11 recall levels varying from 0 to 1 by tenths, using interpolation (the value for recall 0 is always interpolated), and average them
      • Evaluates performance at all recall levels
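
Two short sketches of these summary measures (assuming a 0/1 relevance list for the ranking, and the interpolated_precision helper sketched above):

    def precision_at_k(relevance_flags, k):
        # fraction of the top k results that are relevant
        return sum(relevance_flags[:k]) / k

    def eleven_point_average(points):
        # mean of interpolated precision at recall 0.0, 0.1, ..., 1.0
        return sum(interpolated_precision(points)) / 11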

Typical (good) 11 point precisions
  • SabIR/Cornell 8A1 11pt precision from TREC 8 (1999)

Receiver Operating Characteristics (ROC) Curve
  • True positive rate = tp/(tp + fn) = recall = sensitivity
  • False positive rate = fp/(tn + fp). Related to precision: fpr = 0 <-> p = 1
  • Why is the diagonal line of an ROC plot (a random ranking) “worthless”?

Mean average precision (MAP)
  • MAP for a query
    • Average of the precision value for each (of the k top) relevant document retrieved
      • This approach weights early appearance of a relevant document over later appearance
  • MAP for query collection is the arithmetic average of MAP for each query
    • Macro-averaging: each query counts equally

Mean Average Precision (MAP)
  • Mean Average Precision (MAP)
    • summarize rankings from multiple queries by averaging average precision
    • most commonly used measure in research papers
    • assumes user is interested in finding many relevant documents for each query
    • requires many relevance judgments in text collection

Summarize a Ranking: MAP
  • Given that n docs are retrieved
    • Compute the precision (at rank) where each (new) relevant document is retrieved => p(1),…,p(k), if we have k rel. docs
      • E.g., if the first rel. doc is at the 2nd rank, then p(1)=1/2.
    • If a relevant document never gets retrieved, we assume the precision corresponding to that rel. doc to be zero
  • Compute the average over all the relevant documents
    • Average precision = (p(1)+…p(k))/k

(cont’d)
  • This gives us (non-interpolated) average precision, which captures both precision and recall and is sensitive to the rank of each relevant document
  • Mean Average Precision (MAP)
    • MAP = arithmetic mean average precision over a set of topics
    • gMAP = geometric mean average precision over a set of topics (more affected by difficult topics)
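
A Python sketch of (non-interpolated) average precision and MAP, following the convention above that a relevant document that is never retrieved contributes precision 0:

    def average_precision(relevance_flags, total_relevant):
        # precision at the rank of each retrieved relevant doc, averaged
        # over ALL relevant docs (missed ones count as 0)
        hits, precisions = 0, []
        for rank, rel in enumerate(relevance_flags, start=1):
            if rel:
                hits += 1
                precisions.append(hits / rank)
        return sum(precisions) / total_relevant if total_relevant else 0.0

    def mean_average_precision(queries):
        # queries: list of (relevance_flags, total_relevant) pairs, one per query
        return sum(average_precision(f, n) for f, n in queries) / len(queries)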

Discounted Cumulative Gain
  • Popular measure for evaluating web search and related tasks
  • Two assumptions:
    • Highly relevant documents are more useful than marginally relevant documents
    • the lower the ranked position of a relevant document, the less useful it is for the user, since it is less likely to be examined

Discounted Cumulative Gain
  • Uses graded relevance as a measure of usefulness, or gain, from examining a document
  • Gain is accumulated starting at the top of the ranking and may be reduced, or discounted, at lower ranks
  • Typical discount is 1/log(rank)
    • With base 2, the discount at rank 4 is 1/2, and at rank 8 it is 1/3

Summarize a Ranking: DCG
  • What if relevance judgments are in a scale of [1,r]? r>2
  • Cumulative Gain (CG) at rank n
    • Let the ratings of the n documents be r1, r2, …, rn (in ranked order)
    • CG = r1 + r2 + … + rn
  • Discounted Cumulative Gain (DCG) at rank n
    • DCG = r1 + r2/log2(2) + r3/log2(3) + … + rn/log2(n)
      • We may use any base for the logarithm, e.g., base = b

Discounted Cumulative Gain
  • DCG is the total gain accumulated at a particular rank p: DCG_p = rel_1 + Σ_{i=2..p} rel_i / log2(i)
  • Alternative formulation: DCG_p = Σ_{i=1..p} (2^rel_i - 1) / log2(1 + i)
    • used by some web search companies
    • emphasis on retrieving highly relevant documents

DCG Example
  • 10 ranked documents judged on 0-3 relevance scale:

3, 2, 3, 0, 0, 1, 2, 2, 3, 0

  • discounted gain:

3, 2/1, 3/1.59, 0, 0, 1/2.59, 2/2.81, 2/3, 3/3.17, 0

= 3, 2, 1.89, 0, 0, 0.39, 0.71, 0.67, 0.95, 0

  • DCG:

3, 5, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61
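
A sketch in Python of the DCG formulation used above (discount 1/log2(rank) from rank 2 onwards), reproducing the final value of this example:

    import math

    def dcg(gains):
        # gain of the top document plus discounted gains for ranks 2..n
        return gains[0] + sum(g / math.log2(i) for i, g in enumerate(gains[1:], start=2))

    dcg([3, 2, 3, 0, 0, 1, 2, 2, 3, 0])   # -> 9.61 (rounded), as in the example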

Summarize a Ranking: NDCG
  • Normalized Discounted Cumulative Gain (NDCG) at rank n
    • Normalize DCG at rank n by the DCG value at rank n of the ideal ranking
    • The ideal ranking first returns the documents with the highest relevance level, then the next highest relevance level, etc.
  • NDCG is now quite popular in evaluating Web search

NDCG - Example

4 documents: d1, d2, d3, d4
  • Graded ranking/ordering (relevance grades of the ranked documents): 4, 2, 0, 1
  • DCG = 4 + 2/log2(2) + 0/log2(3) + 1/log2(4) = 6.5
  • IDCG (ideal ordering 4, 2, 1, 0) = 4 + 2/log2(2) + 1/log2(3) + 0/log2(4) = 6.63
  • NDCG = DCG/IDCG = 6.5/6.63 = 0.98

R-Precision
  • Precision at the R-th position in the ranking of results for a query that has R relevant documents.

R = # of relevant docs = 6; in the example ranking used earlier, 4 of the top 6 retrieved documents are relevant, so:

R-Precision = 4/6 = 0.67
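
A sketch of R-precision, consistent with that example (relevant documents at ranks 1, 2, 4 and 6 among the top R = 6):

    def r_precision(relevance_flags, total_relevant):
        # precision after exactly R = total_relevant documents have been retrieved
        return sum(relevance_flags[:total_relevant]) / total_relevant

    r_precision([1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1], total_relevant=6)   # -> 0.67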

Variance
  • For a test collection, it is usual that a system does crummily on some information needs (e.g., MAP = 0.1) and excellently on others (e.g., MAP = 0.7)
  • Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query.
    • That is, there are easy information needs and hard ones!

From document collections to test collections
  • Still need
    • Test queries
    • Relevance assessments
  • Test queries
    • Must be germane to docs available
    • Best designed by domain experts
    • Random query terms generally not a good idea
  • Relevance assessments
    • Human judges, time-consuming
    • Are human panels perfect?

Can we avoid human judgment?
  • Not really
  • Makes experimental work hard
    • Especially on a large scale
  • In some very specific settings, can use proxies
  • But once we have test collections, we can reuse them (so long as we don’t overtrain too badly)
  • Example below, approximate vector space retrieval

Approximate vector retrieval
  • Let G(q) be the “ground truth” of the actual k closest docs on query q
  • Let A(q) be the k docs returned by approximate algorithm A on query q
  • For performance we would measure A(q) G(q)
    • Is this the right measure?

Alternative proposal
  • Focus instead on how A(q) compares to G(q).
  • Goodness can be measured here in cosine proximity to q: we sum up q·d over d ∈ A(q).
  • Compare this to the sum of q·d over d ∈ G(q).
    • Yields a measure of the relative “goodness” of A vis-à-vis G.
      • Thus A may be 90% “as good as” the ground-truth G, without finding 90% of the docs in G.
      • For scored retrieval, this may be acceptable:
    • Most web engines don’t always return the same answers for a given query.

Kappa measure for inter-judge (dis)agreement
  • Kappa measure
    • Agreement measure among judges
    • Designed for categorical judgments
    • Corrects for chance agreement
  • Kappa = [ P(A) – P(E) ] / [ 1 – P(E) ]
  • P(A) – proportion of time judges agree
  • P(E) – what agreement would be by chance
  • Kappa = 0 for chance agreement, 1 for total agreement.

Kappa Example
  • Two judges each assess the same 400 documents: both mark 300 Relevant and 70 Nonrelevant, and they disagree on the remaining 30 (20 + 10).
  • P(A) = 370/400 = 0.925
  • P(nonrelevant) = (10 + 20 + 70 + 70)/800 = 0.2125
  • P(relevant) = (10 + 20 + 300 + 300)/800 = 0.7875
  • P(E) = 0.2125² + 0.7875² = 0.665
  • Kappa = (0.925 - 0.665)/(1 - 0.665) = 0.776
    • Kappa > 0.8 : good agreement
    • 0.67 < Kappa < 0.8 : “tentative conclusions” (Carletta ’96)
    • Depends on purpose of study
  • For > 2 judges: average pairwise kappas
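
A Python sketch of this computation; the 2x2 agreement counts (300 both-relevant, 70 both-nonrelevant, 20 and 10 disagreements) are taken from the figures above:

    def kappa(both_rel, both_nonrel, only_judge1_rel, only_judge2_rel):
        n = both_rel + both_nonrel + only_judge1_rel + only_judge2_rel
        p_agree = (both_rel + both_nonrel) / n
        # marginals pooled over both judges (2n judgments in total)
        p_rel = (2 * both_rel + only_judge1_rel + only_judge2_rel) / (2 * n)
        p_chance = p_rel ** 2 + (1 - p_rel) ** 2
        return (p_agree - p_chance) / (1 - p_chance)

    kappa(300, 70, 20, 10)   # -> 0.776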

Other Evaluation Measures

Adapted from slides attributed to Prof. Dik Lee (Univ. of Science and Tech, Hong Kong)

Fallout Rate
  • Problems with both precision and recall:
    • Number of irrelevant documents in the collection is not taken into account.
    • Recall is undefined when there is no relevant document in the collection.
    • Precision is undefined when no document is retrieved.
  • Fallout addresses the first point: the fraction of nonrelevant documents in the collection that are retrieved, fallout = fp/(fp + tn).

Subjective Relevance Measure
  • Novelty Ratio: The proportion of items retrieved and judged relevant by the user and of which they were previously unaware.
    • Ability to find new information on a topic.
  • Coverage Ratio: The proportion of relevant items retrieved out of the total relevant documents known to a user prior to the search.
    • Relevant when the user wants to locate documents which they have seen before (e.g., the budget report for Year 2000).

Other Factors to Consider
  • User effort: Work required from the user in formulating queries, conducting the search, and screening the output.
  • Response time: Time interval between receipt of a user query and the presentation of system responses.
  • Form of presentation: Influence of search output format on the user’s ability to utilize the retrieved materials.
  • Collection coverage: Extent to which any/all relevant items are included in the document corpus.

SKIP DETAILS

Early Test Collections
  • Previous experiments were based on the SMART collection which is fairly small. (ftp://ftp.cs.cornell.edu/pub/smart)

Collection   Number of    Number of    Raw Size
Name         Documents    Queries      (Mbytes)
CACM         3,204        64           1.5
CISI         1,460        112          1.3
CRAN         1,400        225          1.6
MED          1,033        30           1.1
TIME         425          83           1.5

  • Different researchers used different test collections and evaluation techniques.

Critique of pure relevance
  • Relevance vs Marginal Relevance
    • A document can be redundant even if it is highly relevant
    • Duplicates
    • The same information from different sources
    • Marginal relevance is a better measure of utility for the user.
  • Using facts/entities as evaluation units more directly measures true relevance.
  • But harder to create evaluation set

Evaluation at large search engines
  • Search engines have test collections of queries and hand-ranked results
  • Recall is difficult to measure on the web
  • Search engines often use precision at top k, e.g., k = 10
  • . . . or measures that reward you more for getting rank 1 right than for getting rank 10 right.
    • NDCG (Normalized Discounted Cumulative Gain)
  • Search engines also use non-relevance-based measures.
    • Clickthrough on first result
      • Not very reliable if you look at a single clickthrough … but pretty reliable in the aggregate.
    • Studies of user behavior in the lab
    • A/B testing

A/B testing
  • Purpose: Test a single innovation
  • Prerequisite: You have a large search engine up and running.
    • Have most users use old system
    • Divert a small proportion of traffic (e.g., 1%) to the new system that includes the innovation
    • Evaluate with an “automatic” measure like clickthrough on first result
    • Now we can directly see if the innovation does improve user happiness.
    • Probably the evaluation methodology that large search engines trust most
    • In principle less powerful than doing a multivariate regression analysis, but easier to understand

TREC Benchmarks

The TREC Benchmark
  • TREC: Text REtrieval Conference (http://trec.nist.gov/)
  • Originated from the TIPSTER program sponsored by the Defense Advanced Research Projects Agency (DARPA).
  • Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA.
  • Participants are given parts of a standard set of documents and TOPICS (from which queries have to be derived) in different stages for training and testing.
  • Participants submit the P/R values for the final document and query corpus and present their results at the conference.

The TREC Objectives
  • Provide a common ground for comparing different IR techniques.
    • Same set of documents and queries, and same evaluation method.
  • Sharing of resources and experiences in developing the benchmark.
    • With major sponsorship from government to develop large benchmark collections.
  • Encourage participation from industry and academia.
  • Development of new evaluation techniques, particularly for new applications.
    • Retrieval, routing/filtering, non-English collection, web-based collection, question answering.

TREC Advantages
  • Large scale (compared to a few MB in the SMART Collection).
  • Relevance judgments provided.
  • Under continuous development with support from the U.S. Government.
  • Wide participation:
    • TREC 1: 28 papers, 360 pages.
    • TREC 4: 37 papers, 560 pages.
    • TREC 7: 61 papers, 600 pages.
    • TREC 8: 74 papers.

TREC Tasks
  • Ad hoc: New questions are being asked on a static set of data.
  • Routing: Same questions are being asked, but new information is being searched. (news clipping, library profiling).
  • New tasks added after TREC 5 - Interactive, multilingual, natural language, multiple database merging, filtering, very large corpus (20 GB, 7.5 million documents), question answering.

TREC
  • TREC Ad Hoc task from first 8 TRECs is standard IR task
    • 50 detailed information needs a year
    • Human evaluation of pooled results returned
    • More recently other related things: Web track, HARD
  • A TREC query (TREC 5)

Number: 225

Description:

What is the main function of the Federal Emergency Management Agency (FEMA) and the funding level provided to meet emergencies? Also, what resources are available to FEMA such as people, equipment, facilities?

Standard relevance benchmarks: Others
  • GOV2
    • Another TREC/NIST collection
    • 25 million web pages
    • Largest collection that is easily available
    • But still 3 orders of magnitude smaller than what Google/Yahoo/MSN index
  • NTCIR
    • East Asian language and cross-language information retrieval
  • Cross Language Evaluation Forum (CLEF)
    • This evaluation series has concentrated on European languages and cross-language information retrieval.

Characteristics of the TREC Collection
  • Both long and short documents (from a few hundred to over one thousand unique terms in a document).
  • Test documents consist of:

WSJ    Wall Street Journal articles (1986-1992)          550 MB
AP     Associated Press Newswire (1989)                  514 MB
ZIFF   Computer Select Disks (Ziff-Davis Publishing)     493 MB
FR     Federal Register (1989)                           469 MB
DOE    Abstracts from Department of Energy reports       190 MB

More Details on Document Collections
  • Volume 1 (Mar 1994) - Wall Street Journal (1987, 1988, 1989), Federal Register (1989), Associated Press (1989), Department of Energy abstracts, and Information from the Computer Select disks (1989, 1990)
  • Volume 2 (Mar 1994) - Wall Street Journal (1990, 1991, 1992), the Federal Register (1988), Associated Press (1988) and Information from the Computer Select disks (1989, 1990)
  • Volume 3 (Mar 1994) - San Jose Mercury News (1991), the Associated Press (1990), U.S. Patents (1983-1991), and Information from the Computer Select disks (1991, 1992)
  • Volume 4 (May 1996) - Financial Times Limited (1991, 1992, 1993, 1994), the Congressional Record of the 103rd Congress (1993), and the Federal Register (1994).
  • Volume 5 (Apr 1997) - Foreign Broadcast Information Service (1996) and the Los Angeles Times (1989, 1990).

TREC Disk 4,5

Sample Document (with SGML)

<DOC>
<DOCNO> WSJ870324-0001 </DOCNO>
<HL> John Blair Is Near Accord To Sell Unit, Sources Say </HL>
<DD> 03/24/87 </DD>
<SO> WALL STREET JOURNAL (J) </SO>
<IN> REL TENDER OFFERS, MERGERS, ACQUISITIONS (TNM) MARKETING, ADVERTISING (MKT) TELECOMMUNICATIONS, BROADCASTING, TELEPHONE, TELEGRAPH (TEL) </IN>
<DATELINE> NEW YORK </DATELINE>
<TEXT>
John Blair & Co. is close to an agreement to sell its TV station advertising representation operation and program production unit to an investor group led by James H. Rosenfield, a former CBS Inc. executive, industry sources said. Industry sources put the value of the proposed acquisition at more than $100 million. ...
</TEXT>
</DOC>

Sample Query (with SGML)

<top>
<head> Tipster Topic Description
<num> Number: 066
<dom> Domain: Science and Technology
<title> Topic: Natural Language Processing
<desc> Description: Document will identify a type of natural language processing technology which is being developed or marketed in the U.S.
<narr> Narrative: A relevant document will identify a company or institution developing or marketing a natural language processing technology, identify the technology, and identify one or more features of the company's product.
<con> Concept(s): 1. natural language processing; 2. translation, language, dictionary
<fac> Factor(s):
<nat> Nationality: U.S.
</fac>
<def> Definition(s):
</top>

TREC Properties
  • Both documents and queries contain many different kinds of information (fields).
  • Generation of the formal queries (Boolean, Vector Space, etc.) is the responsibility of the system.
    • A system may be very good at querying and ranking, but if it generates poor queries from the topic, its final P/R would be poor.

Two more TREC Document Examples

Another Example of TREC Topic/Query

Evaluation
  • Summary table statistics: Number of topics, number of documents retrieved, number of relevant documents.
  • Recall-precision average: Average precision at 11 recall levels (0 to 1 at 0.1 increments).
  • Document level average: Average precision when 5, 10, …, 100, …, 1000 documents are retrieved.
  • Average precision histogram: Difference of the R-precision for each topic and the average R-precision of all systems for that topic.

Cystic Fibrosis (CF) Collection
  • 1,239 abstracts of medical journal articles on CF.
  • 100 information requests (queries) in the form of complete English questions.
  • Relevant documents determined and rated by 4 separate medical experts on a 0-2 scale:
    • 0: Not relevant.
    • 1: Marginally relevant.
    • 2: Highly relevant.

CF Document Fields
  • MEDLINE access number
  • Author
  • Title
  • Source
  • Major subjects
  • Minor subjects
  • Abstract (or extract)
  • References to other documents
  • Citations to this document