Evaluation of Google Coop and Social Bookmarking at the Overseas Development Institute


  1. Evaluation of Google Coop and Social Bookmarking at the Overseas Development Institute By Paul Matthews (p.matthews@odi.org.uk) and Arne Wunder (a.wunder@lse.ac.uk)

  2. Background • Web 2.0 approaches: Communities of Practice share recommended sources and bookmarks • Focuss.eu: an initiative of European development think tanks • Growing popularity of social bookmarking and interest in its use within organisations • Folksonomy over taxonomy, and serendipity in addition to traditional search and retrieval

  3. Objective 1 • Comparative relevance assessment of specialised international development search engine Focuss.eu (using Google Coop) against Google web search

  4. Objective 2 • Investigate how staff use bookmarking and test a pilot intranet-based bookmarking system

  5. Overseas Development Institute • ODI is a registered charity and Britain's leading independent think tank on international development and humanitarian issues. • Main task: policy-focused research and dissemination, mainly for the Department for International Development (DFID). • 127 staff members, most of them researchers.

  6. Search engines: research design

  7. Search engines: application

  8. Search engines: findings (1): Mean overall relevance. Interpretation: Overall, Focuss significantly outperforms Google web search.
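The slides do not show the underlying numbers, so as context here is a minimal sketch of how a comparison like this could be scored. The relevance scale (0-3), the per-query scores, and the choice of a paired t-test are all illustrative assumptions, not the study's data or method.

```python
# Sketch of a mean-relevance comparison between two engines.
# Assumptions: assessors grade each result 0-3, and we average
# per query, then test the paired per-query means for significance.
from statistics import mean
from scipy import stats

# Hypothetical per-query mean relevance scores: the same queries
# were run on both engines, one entry per query.
focuss_scores = [2.4, 1.8, 2.9, 2.1, 2.6, 1.9, 2.7, 2.2]
google_scores = [1.9, 1.7, 2.1, 1.5, 2.4, 1.8, 2.0, 1.6]

print(f"Focuss mean relevance: {mean(focuss_scores):.2f}")
print(f"Google mean relevance: {mean(google_scores):.2f}")

# Paired test, since both engines answered the same queries.
t_stat, p_value = stats.ttest_rel(focuss_scores, google_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```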

  9. Search engines: findings (2): Term-sensitive relevance. Interpretation: The true strength of Focuss lies in handling relatively ambiguous terms; in other words, it avoids the noise of unrelated results that ambiguous queries produce.

  10. Search engines: findings (3): Direct case-by-case comparison. Interpretation: Focuss outperforms Google web search in a significant number of searches, although this advantage is less clear for searches using strictly development-related terms.

  11. Search engines: findings (4): High relevance per search. Interpretation: Focuss is slightly more likely than Google web search to produce at least one highly relevant result per search.

  12. Search engines: findings (5): Interviews • Search engines are used for less complex research tasks or for getting quick results. • Search engines were criticised for failing to include the most relevant and authoritative knowledge contained in databases and books. • Google Scholar was praised for including some relevant scholarly journals but criticised for its weak coverage and degree of noise. • For more complex research tasks, online journals and library catalogues are the preferred research sources. Interpretation: Even specialised search engines are far from a panacea, as they do not solve the “invisible web” problem.

  13. Search engines: Conclusion • Focuss’s strength is its context-specificity. • Here, Focuss achieves better overall relevance and a better likelihood of producing at least one highly relevant result per search. • However, both engines still have structural limitations. Doing good development research is therefore not about choosing the “right search engine” but about choosing the right tools for each individual research task.

  14. Bookmarking: Design • Survey of user requirements and behaviour • Creation of bookmarking module for intranet (MS SharePoint) • Usability testing • Preliminary analysis
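The slides do not describe the SharePoint module's internals, so the following is only a minimal sketch of the kind of bookmark record such an intranet module might store. Every field name here is hypothetical.

```python
# Hypothetical bookmark record for an intranet bookmarking module.
# The slides do not specify the schema; this is an illustration only.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Bookmark:
    url: str
    title: str
    owner: str                      # intranet user who saved the link
    tags: list[str] = field(default_factory=list)
    shared: bool = False            # public vs. private (see slide 19)
    added: datetime = field(default_factory=datetime.now)

b = Bookmark(
    url="https://www.odi.org/",
    title="Overseas Development Institute",
    owner="pmatthews",
    tags=["development", "think-tank"],
    shared=True,
)
print(b.title, b.tags)
```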

  15. Bookmarking: Survey (n=18)

  16. Bookmarking: Survey (n=18)

  17. Bookmarking: Application

  18. Bookmarking: testing, task completion 1) Manual add (100%) 2) Favourites upload (60%) • Non-standard characters in links • Wrong destination URL 3) Bookmarklet (46%) • Pop-up blockers • IE security zones
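For context on the favourites-upload step: Internet Explorer favourites are stored as INI-style .url files, and a minimal import sketch like the one below shows where non-standard characters can break the process. The folder layout, encoding, and error handling here are assumptions, not the pilot's actual code.

```python
# Sketch of a favourites upload: walk an exported IE Favourites
# folder and pull the target URL out of each .url file.
import configparser
from pathlib import Path

def read_favourites(folder: str) -> list[tuple[str, str]]:
    bookmarks = []
    for f in Path(folder).rglob("*.url"):
        # interpolation=None stops configparser from choking on
        # '%' escapes, which are common in URLs.
        parser = configparser.ConfigParser(interpolation=None)
        try:
            # .url files are INI-style with an [InternetShortcut] section.
            parser.read(f, encoding="utf-8")
            url = parser["InternetShortcut"]["URL"]
            bookmarks.append((f.stem, url))
        except (configparser.Error, KeyError, UnicodeDecodeError):
            # Non-standard characters or malformed entries: the kind
            # of failure behind the 60% completion rate above.
            print(f"skipped {f}")
    return bookmarks
```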

  19. Bookmarking: testing, feedback • What are the incentives for and advantages of sharing? • Preference for structured over free tagging • Public vs. private bookmarking: tedious to sort out which bookmarks to share

  20. Bookmarking: analysis • Emergence of a long-tail folksonomy
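A minimal sketch of how such a distribution can be surfaced: count how often each tag is used and inspect the frequency curve. The tags below are illustrative, not the study's data.

```python
# Count tag frequencies and print a crude frequency curve.
from collections import Counter

tags = (["aid"] * 14 + ["poverty"] * 9 + ["climate"] * 5
        + ["trade", "trade", "health", "water", "gender",
           "rss", "ngo", "sudan", "microfinance"])

for tag, count in Counter(tags).most_common():
    print(f"{tag:12} {'#' * count}")
# A few tags dominate, followed by a long tail of once-used tags:
# the classic folksonomy distribution.
```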

  21. Bookmarking: conclusions • Use of implicit taxonomy useful & time-saving • User base unsophisticated • Users want both order (taxonomy) and flexibility (free tagging) • We need to prove the value of sharing & reuse (maybe by harnessing interest in RSS)
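One way to reconcile that order-versus-flexibility tension, sketched here as an illustration rather than the pilot's actual design, is to accept any tag but flag which ones belong to a controlled vocabulary so the taxonomy stays visible. The vocabulary contents are hypothetical.

```python
# Hybrid tagging sketch: free tags are allowed, but tags matching a
# controlled vocabulary are reported separately as taxonomy terms.
CONTROLLED_VOCAB = {"aid", "poverty", "climate", "trade", "health"}

def classify_tags(tags: list[str]) -> dict[str, list[str]]:
    """Split a user's tags into taxonomy terms and free-text tags."""
    out: dict[str, list[str]] = {"taxonomy": [], "free": []}
    for t in (t.strip().lower() for t in tags):
        out["taxonomy" if t in CONTROLLED_VOCAB else "free"].append(t)
    return out

print(classify_tags(["Aid", "budget-support", "Climate", "DFID"]))
# {'taxonomy': ['aid', 'climate'], 'free': ['budget-support', 'dfid']}
```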

  22. References • Brophy, J. and Bawden, D. (2005) ‘Is Google enough? Comparison of an internet search engine with academic library resources’. Aslib Proceedings 57(6): 498-512. • Kesselman, M. and Watstein, S.B. (2005) ‘Google Scholar™ and libraries: point/counterpoint’. Reference Services Review 33(4): 380-387. • Mathes, A. (2004) ‘Folksonomies: Cooperative Classification and Communication Through Shared Metadata’. • Millen, D., Feinberg, J. and Kerr, B. (2005) ‘Social bookmarking in the enterprise’. ACM Queue 3(9): 28-35.
