
Your Search Returned 0 Results: Improving Digital Library Search Tools


Presentation Transcript


  1. Your Search Returned 0 Results: Improving Digital Library Search Tools. Paul Aumer-Ryan, School of Information, The University of Texas at Austin. November 29, 2006

  2. 1 Foreword
  • “No Results Found” can have several meanings:
    • “The explicit assemblage of characters you submitted does not occur anywhere in our index of items in our collection.”
    • “We don’t understand what you just typed.”
    • “We understand some of the things you typed, but not all of them.”
    • “We have what you are looking for, but we call it something else.”
    • “We don’t have what you are looking for.”
    • “Go away.”

  3. 1 Foreword
  • How is a patron supposed to determine which meaning is being conveyed?
  • “No Results Found” seems pretty authoritative and final; it’s a statement of fact, and it’s coming from a computer.
  • In a world where information overload has become cliché, how do we react to the opposite?

  4. 1 Let’s Waste Some Time…
  • http://www.lib.utexas.edu/
  • Does it know acronyms? (JCDL)
  • Does it deal with misspellings? (digitul)
  • Can it search on subsets of terms?
  • Does it understand singular/plural?

  5. 2 Introduction
  • Overview of related work:
    • Searcher Behaviors, Collection “Behaviors”
    • Suggestions
    • Social Computing
    • Meta Search Engines
    • Visualizing Search Results
  • Experiment
    • Design
    • Expected Findings
  • Contributions

  6. 3 Searcher Behaviors
  • Models of Search Behavior:
    • Deep Divers vs. Broad Scanners vs. Fast Surfers
    • Query refiners vs. “I’m Feeling Lucky!”-ers
    • Expert vs. Novice
    • Seeking vs. Encountering vs. Exploring
    • Digital Libraries vs. The Web

  7. 3 Collection Behaviors
  • Different searchers have different wants, and different collection types call for different search tools
  • Models of collection “behavior”:
    • Small vs. Large
    • Homogeneous vs. Heterogeneous
    • Interrelated vs. Distinct
    • Single medium vs. Many media

  8. 3 The Helping Hand: Suggestions
  • Misspelled word suggestions (see the sketch below)
  • Automatic permutation suggestions
  • Acronym recognition
  [Slide image: Ebay.com]
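  A minimal sketch of how a misspelled-word suggestion might be produced, assuming the search tool can fuzzy-match a failing query term against the terms that actually occur in the collection’s index (the term list, cutoff, and function name here are illustrative, not the presentation’s own implementation):

    import difflib

    def suggest_terms(query_word, index_terms, max_suggestions=3):
        """Return 'Did you mean ...?' candidates for a possibly misspelled word.

        index_terms is assumed to be an iterable of terms that actually occur
        in the collection's index; get_close_matches ranks them by similarity.
        """
        return difflib.get_close_matches(
            query_word, index_terms, n=max_suggestions, cutoff=0.6
        )

    # Example: the misspelling from slide 4 that would otherwise return 0 results.
    index_terms = ["digital", "library", "libraries", "metadata", "archive"]
    print(suggest_terms("digitul", index_terms))  # ['digital']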

  9. 3 The Helping Hand: Suggestions
  • Avoiding the back button
  • Maintaining a consistent direction of flow
  • Minimizing swapping between the keyboard and mouse

  10. 3 Social Computing in Digital Libraries
  • Personalization
    • Search results are tailored based on the patron’s history…
    • With obvious privacy implications
  • Peer Recommendations
    • At the very least, links that were followed and/or rated highly by searchers using the same search terms will be preferred
    • More involved: results from peers with similar interests will be preferred...
    • With obvious privacy implications

  11. 3 Social Computing in Digital Libraries
  • Patron Tagging
    • Objects in the DL can be tagged by patrons, and these tags can be searched
  • Thumbs Up / Thumbs Down
    • A simple, patron-driven measure of the applicability of a document to a given search term
  • Popularity Rankings
    • “Popular” documents ranked higher; could be measured in many ways (one possible scoring sketch follows below)
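  A minimal sketch of how such signals might be blended into ranking, assuming each document already carries a normalized text-relevance score plus thumbs-up/down counts and a view count (the weights and field names are illustrative assumptions, not part of the proposed study):

    def blended_score(relevance, thumbs_up, thumbs_down, views,
                      w_relevance=0.8, w_social=0.2):
        """Combine text relevance with simple patron-driven popularity signals.

        relevance is assumed to be normalized to [0, 1]; the social component
        mixes the thumbs-up ratio with a crude view-count signal. The weights
        are illustrative and would need tuning against real patron behavior.
        """
        votes = thumbs_up + thumbs_down
        vote_ratio = thumbs_up / votes if votes else 0.5  # neutral when unrated
        view_signal = min(views / 1000.0, 1.0)            # cap the popularity boost
        social = 0.7 * vote_ratio + 0.3 * view_signal
        return w_relevance * relevance + w_social * social

    # Example: a slightly less relevant but well-liked document can outrank
    # a marginally more relevant document that no patron has rated or viewed.
    print(blended_score(0.70, thumbs_up=40, thumbs_down=5, views=900))
    print(blended_score(0.75, thumbs_up=0, thumbs_down=0, views=10))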

  12. 3 Meta Search Engines
  • If one search engine returns no results, how about three or five?
  [Slide image: MetaCrystal]

  13. 3 Meta Search Engines
  • Problems with aggregators:
    • Always done by a 3rd party
    • Relies on all engines being available and up-to-date
    • Only as fast as the slowest member (see the sketch below)
    • Adds a layer of complexity (Schwartz’s “The Paradox of Choice”)
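  A minimal sketch of how an aggregator might sidestep the “slowest member” problem, assuming each engine is wrapped in a callable that takes a query string and returns a list of results (the engine interface and the two-second budget are assumptions for illustration; cancel_futures requires Python 3.9+):

    import concurrent.futures as cf

    def aggregate(engines, query, timeout=2.0):
        """Query several engines in parallel; keep whatever answers in time.

        engines maps an engine name to a callable taking the query string and
        returning a list of results. Engines slower than the timeout are simply
        skipped, so the aggregator degrades instead of waiting for the slowest.
        """
        pool = cf.ThreadPoolExecutor(max_workers=len(engines))
        futures = {pool.submit(fn, query): name for name, fn in engines.items()}
        results = {}
        try:
            for done in cf.as_completed(futures, timeout=timeout):
                try:
                    results[futures[done]] = done.result()
                except Exception:
                    pass  # an engine that errored contributes nothing
        except cf.TimeoutError:
            pass  # some engines did not answer within the budget
        pool.shutdown(wait=False, cancel_futures=True)  # don't block on stragglers
        return results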

  14. 3 Visualizing Search Results • Relational Maps

  15. 3 Visualizing Search Results • Topic Maps

  16. 3 Visualizing Search Results • Concept Maps

  17. 3 Visualizing Search Results • Maps, Maps, Maps, and Complexity

  18. 3 Visualizing Search Results
  • In general, visualization tends to address the problem of too much complexity rather than too little (though it can help in certain circumstances)

  19. 3 Keep the Baby, Not the Bathwater
  • Rather than performing an end-run around our problem (e.g., visualization maps), the focus here is on classic textual search and retrieval
  • “No Results Found” is applicable to all types of searches, but visualization adds another layer of complexity that we don’t need to deal with now

  20. 4 Experiment: Reaction to Ø
  • Ø = No Results Found
  • Broad Questions:
    • What are the affective implications of encountering a null result set?
    • What impact does the digital library interface have on the interpretation of its contents?
  • Focused Question:
    • After encountering a null result set, how do participants’ emotional responses affect further searches on the same topic?

  21. 4 Experimental Design
  • A mock digital library will be created:
    • Participants will interact with it via a simple search tool, which they will be told they are evaluating;
    • Participants will be given a topic to search for and several questions to answer regarding that topic;
    • The digital library will contain a small set of results pertaining to that topic.

  22. 4 Experimental Design
  • Participants will be divided into 3 groups:
    • Control Group: Get appropriate results from their first search term;
    • Experimental Group 1: Encounter Ø once, then subsequent search will return appropriate results;
    • Experimental Group 2: Encounter inappropriate results.
  • There will ideally be at least 50 people in each group.

  23. 4 Experimental Design
  • Before searching the digital library, Participants will:
    • Answer a set of demographic questions;
    • Rate their general mood (affect);
    • Rate their familiarity with computers, digital libraries, and research.

  24. 4 Experimental Design
  • After evaluating the results in their own fashion, Participants will:
    • Answer a set of questions confirming comprehension;
    • Rate the authoritativeness of the results they found;
    • Rate their impressions of the digital library and the search tool;
    • Rate their general mood (affect).
  • (Would behavioral measures, e.g., skin conductivity and heart rate, be worthwhile during the seeking process?)

  25. 4 Experimental Design
  • Data Collected (a possible per-session record is sketched below):
    • Pre-test questionnaires (demographics, baseline affect, familiarity measures);
    • Experimental data (time-on-task, search queries, number of mouse clicks, back button presses, etc., and possible behavioral measures);
    • Post-test questionnaires (authoritativeness, opinion of DL, affect).
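  Purely as an illustration of the record each participant session might produce, assuming the measures listed above (the field names and scales are assumptions for this write-up, not the study’s actual instruments):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SessionRecord:
        """One participant's data across the pre-test, search task, and post-test."""
        participant_id: str
        group: str                      # "control", "null_result", or "inappropriate"
        # Pre-test questionnaire
        demographics: dict = field(default_factory=dict)
        baseline_affect: int = 0        # e.g., self-rated mood on a Likert scale
        familiarity: dict = field(default_factory=dict)
        # Experimental data logged during the search task
        queries: List[str] = field(default_factory=list)
        time_on_task_seconds: float = 0.0
        mouse_clicks: int = 0
        back_button_presses: int = 0
        # Post-test questionnaire
        authoritativeness_rating: int = 0
        library_opinion_rating: int = 0
        post_affect: int = 0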

  26. 4 Expected Findings
  • Participants who encounter Ø will:
    • Take more time completing the task (of course);
    • Rank the results as less authoritative;
    • Have a lower opinion of the search tool;
    • Exhibit more negative affect (frustration, anger, distress).
  • Participants who encounter inappropriate results are expected to be similar.
  • Novice users are expected to be more susceptible than expert users (see Chesney’s First Monday paper).

  27. 4 Contribution to the Field
  • This study hopes to elucidate the dangers of “no results found” responses by showing the actual effects on digital library users;
  • If Participants do indeed see results following Ø as less authoritative, it means the contents of a digital library are being evaluated not on their own merit, but by the interface’s effect on them;
  • If Participants have a lower opinion of a digital library because it returns Ø, then they are likely to go elsewhere;
  • If Participants exhibit more negative affect because of Ø, that’s just generally bad.

  28. 5 Conclusions
  • Empty search result pages tend to get ignored in the design and testing process:
    • Because they are not destinations;
    • They are just fleeting error messages;
    • They have little impact other than saying, “Try Again”; and our captive users have no choice, right?
    • Patrons won’t be spending any time there anyway;
    • Testers are so familiar with the interface that they hardly ever see them.
  • Ignore them no more!

  29. 5 Conclusions • In short, it is no longer enough to simply “put a digital library out there” for consumption; we need to make sure that we aren’t misleading patrons by saying we don’t have what we actually do have.
