
Search Results Need to be Diverse

Presentation Transcript


  1. Search Results Need to be Diverse Mark Sanderson University of Sheffield

  2. Mark Sanderson University of Sheffield How to have fun while running an evaluation campaign

  3. Aim • Tell you about our test collection work in Sheffield • How we’ve been having fun building test collections

  4. Organising this is hard • TREC • Donna, Ellen • CLEF • Carol • NTCIR • Noriko • Make sure you enjoy it

  5. ImageCLEF • Cross language image retrieval • Running for 6 years • Photo • Medical • And other tasks • Imageclef.org

  6. How do we do it? • Organise and conduct research • imageCLEFPhoto 2008 • Study diversity in search results • Diversity?

  7. SIGIR

  8. ACL

  9. Mark Sanderson

  10. Cranfield model

  11. Operational search engine • Ambiguous queries • What is the correct interpretation? • Don’t know • Serve as diverse a range of results as possible

  12. Diversity is studied • Carbonell, J. and Goldstein, J. (1998) The use of MMR, diversity-based reranking for reordering documents and producing summaries. In ACM SIGIR, 335-336. • Zhai, C. (2002) Risk Minimization and Language Modeling in Text Retrieval, PhD thesis, Carnegie Mellon University. • Chen, H. and Karger, D. R. (2006) Less is more: probabilistic models for retrieving fewer relevant documents. In ACM SIGIR, 429-436.
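
A minimal sketch of the MMR idea from the Carbonell and Goldstein reference above: greedily pick each next result by trading off relevance to the query against similarity to the results already chosen. The vector representations, cosine similarity, and λ = 0.7 below are illustrative assumptions, not details from the talk.

```python
# Maximal Marginal Relevance (MMR) reranking, sketched with NumPy.
# Assumes the query and documents are already embedded as vectors.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mmr_rerank(query_vec, doc_vecs, k=10, lam=0.7):
    """Greedily select k documents, trading query relevance (weight lam)
    against redundancy with documents already selected (weight 1 - lam)."""
    candidates = list(range(len(doc_vecs)))
    selected = []
    while candidates and len(selected) < k:
        def mmr_score(i):
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max((cosine(doc_vecs[i], doc_vecs[j])
                              for j in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected  # indices into doc_vecs, in selection order
```

With lam = 1.0 this degenerates to a plain relevance ranking; lowering lam pushes near-duplicate results down the list.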

  13. Cluster hypothesis • “closely associated documents tend to be relevant to the same requests” • Van Rijsbergen (1979)

  14. Most test collections • Focussed topic • Relevance judgments • Who says what is relevant? • (almost always) one person • Consideration of interpretations • Little or none • A gap between test collections and operational search

  15. Few test collections • Hersh, W. R. and Over, P. (1999) TREC-8 Interactive Track Report. TREC-8 • Over, P. (1997) TREC-5 Interactive Track Report. TREC-5, 29-56 • Clarke, C. L., Kolla, M., Cormack, G. V., Vechtomova, O., Ashkan, A., Büttcher, S., and MacKinnon, I. (2008) Novelty and diversity in information retrieval evaluation. In ACM SIGIR.

  16. Study diversity • What sorts of diversity are there? • Ambiguous query words • How often is it a feature of search? • How often are queries ambiguous? • How can we add it into test collections?

  17. Extent of diversity? • “Ambiguous queries: test collections need more sense”, SIGIR 2008 • How do you define ambiguity? • Wikipedia • WordNet

  18. Disambiguation page

  19. Wikipedia stats • enwiki-20071018-pages-articles.xml • (12.7 GB) • Disambiguation pages easy to spot • “_(disambiguation)” in the title, e.g. Chicago • “{{disambig}}” template, e.g. George_bush
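
The two markers on this slide translate into a short scan over the dump. The sketch below is an assumption about how one might do it with Python's standard library, not the code used in the study; real MediaWiki dumps namespace their XML tags, hence the suffix handling.

```python
# Count disambiguation pages in a MediaWiki XML dump using the two
# markers from the slide: "(disambiguation)" in the title, or the
# {{disambig}} template in the page text.
import xml.etree.ElementTree as ET

def count_disambiguation_pages(dump_path):
    total = disambig = 0
    title = text = ''
    for _, elem in ET.iterparse(dump_path):
        tag = elem.tag.rsplit('}', 1)[-1]  # strip the XML namespace
        if tag == 'title':
            title = elem.text or ''
        elif tag == 'text':
            text = elem.text or ''
        elif tag == 'page':
            total += 1
            if '(disambiguation)' in title or '{{disambig}}' in text.lower():
                disambig += 1
            elem.clear()  # keep memory bounded on a 12.7 GB file
    return disambig, total
```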

  20. Conventional source • Downloaded WordNet v3.0 • 88K words
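
Under the WordNet definition, a word is ambiguous when it has more than one sense. A hedged sketch using NLTK's WordNet reader as a stand-in for the WordNet 3.0 files mentioned on the slide:

```python
# A word counts as ambiguous if WordNet lists more than one synset for it.
from nltk.corpus import wordnet as wn

def is_ambiguous(word):
    return len(wn.synsets(word)) > 1

print(is_ambiguous('bank'))  # True: river bank, financial bank, ...
```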

  21. Query logs

  22. Fraction of ambiguous queries
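
One way to compute the fraction reported on this slide, reusing is_ambiguous from above; restricting to single-word queries is an illustrative simplification, not the study's actual method.

```python
def fraction_ambiguous(queries):
    """Fraction of single-word log queries with more than one WordNet sense."""
    words = [q.strip().lower() for q in queries if ' ' not in q.strip()]
    return sum(is_ambiguous(w) for w in words) / len(words)

print(fraction_ambiguous(['bank', 'bass', 'oxygen', 'photosynthesis']))
```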

  23. Conclusions • Ambiguity is a problem • Ambiguity is present in query logs • Not just Web search • Where ambiguity is present, IR systems need to produce diverse results

  24. Test collections • Don’t test for diversity • Do search systems deal with it?

  25. ImageCLEFPhoto • Build a test collection • Encourage the study of diversity • Study how others deal with diversity • Have some fun

  26. Collection • IAPR TC-12 • 20,000 travel photographs • Text captions • 60 existing topics • Used in two previous studies • 39 used for diversity study

  27. Diversity needs in topic • “Images of typical Australian animals”

  28. Types of diversity • 22 geographical • “Churches in Brazil” • 17 other • “Australian animals”

  29. Relevance judgments • Clustered existing qrels • Multiple assessors • Good level of agreement on clusters

  30. Evaluation • Precision at 20 • P(20) • Fraction of relevant documents in the top 20 • Cluster recall at 20 • CR(20) • Fraction of a topic’s clusters represented in the top 20
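
A minimal sketch of the two measures, assuming a run is a ranked list of document ids and the qrels map each relevant document to its cluster (both data structures are illustrative):

```python
# P(20): fraction of the top 20 results that are relevant.
# CR(20): fraction of the topic's clusters represented in the top 20.
def precision_at(ranking, relevant_clusters, k=20):
    return sum(doc in relevant_clusters for doc in ranking[:k]) / k

def cluster_recall_at(ranking, relevant_clusters, k=20):
    all_clusters = set(relevant_clusters.values())
    found = {relevant_clusters[doc] for doc in ranking[:k]
             if doc in relevant_clusters}
    return len(found) / len(all_clusters)

# Toy example: three relevant docs spread over two clusters.
qrels = {'d1': 'koala', 'd2': 'koala', 'd3': 'kangaroo'}
run = ['d1', 'd9', 'd2', 'd3'] + ['dX'] * 16
print(precision_at(run, qrels))       # 3/20 = 0.15
print(cluster_recall_at(run, qrels))  # 2/2  = 1.0
```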

  31. Track was popular • 24 groups • 200 runs in total

  32. Submitted runs

  33. Compare with past years • Same 39 topics used in 2006, 2007 • But without clustering • Compare cluster recall on past runs • Pairing runs with identical P(20) • Cluster recall increased • Substantially • Significantly
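
A sketch of how the comparison might be run, with made-up numbers: pair each 2008 run with an earlier run of identical P(20), then apply a paired test to the CR(20) values (scipy's ttest_rel here; the slide does not say which test the study used).

```python
# Paired significance test on CR(20) for runs matched on P(20).
# All values below are hypothetical, for illustration only.
from scipy.stats import ttest_rel

cr20_past = [0.31, 0.28, 0.35, 0.30, 0.26, 0.33]  # 2006/2007 runs
cr20_2008 = [0.44, 0.41, 0.47, 0.39, 0.38, 0.45]  # matched 2008 runs

stat, p = ttest_rel(cr20_2008, cr20_past)
print(f't = {stat:.2f}, p = {p:.4f}')  # small p: the rise is significant
```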

  34. Meta-analysis • This was fun • We experimented on participants’ outputs • Not by design • A lucky accident

  35. Not first to think of this • Buckley and Voorhees • SIGIR 2000, 2002 • Use submitted runs to generate new research

  36. Conduct user experiment • Do users prefer diversity? • Experiment • Build a system to do this • Show users • Your system • A baseline system • Measure user preferences

  37. Why bother… • …when others have done the work for you • Pair up randomly sampled runs • High CR(20) • Low CR(20) • Show to users

  38. Animals swimming

  39. Numbers • 25 topics • 31 users • 775 result pairs compared

  40. User preferences • 54.6% preferred the more diversified results • 19.7% preferred the less diversified results • 17.4% found both equal • 8.3% preferred neither

  41. Conclusions • Diversity appears to be important • Systems don’t do diversity by default • Users prefer diverse results • Test collections don’t support diversity • But they can be adapted

  42. And • Organising evaluation campaigns is rewarding • And it can generate novel research
