
An evaluation of enhancing social tagging with a knowledge organization system


Presentation Transcript


  1. An evaluation of enhancing social tagging with a knowledge organization system Brian Matthews, STFC K. Golub, C. Jones, J. Moon, M. L. Nielsen, B. Puzoń, D. Tudhope

  2. Science and Technology Facilities Council • Provide large-scale scientific facilities for UK Science • particularly in physics and astronomy • E-Science Centre – at RAL and DL • Provides advanced IT development and services to the STFC Science Programme

  3. STFC e-Science and KOS • Remit includes • Library and institutional publication repository • Management of scientific data • Data interoperability • Digital preservation • Keep the results alive and available • Requires semantically rich metadata • Research into the Semantic Web • SKOS • Involved early on in the SKOS activity in the W3C • In the SWAD-Europe project • Then led by Alistair Miles, STFC • Through the recommendation process • Now at Proposed Recommendation phase (15/06/09) • Out to vote – last chance to comment
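For readers unfamiliar with SKOS, the minimal sketch below (not part of the original slides) shows how two concepts, their preferred labels and a broader/narrower link are expressed, using the rdflib Python library; the example vocabulary and URIs are illustrative.

```python
# Minimal SKOS example built with rdflib; the vocabulary and URIs are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/vocab/")
g = Graph()
g.bind("skos", SKOS)

# Two concepts with preferred labels, linked by a broader relation.
g.add((EX.Thesauri, RDF.type, SKOS.Concept))
g.add((EX.Thesauri, SKOS.prefLabel, Literal("Thesauri", lang="en")))
g.add((EX.ControlledVocabularies, RDF.type, SKOS.Concept))
g.add((EX.ControlledVocabularies, SKOS.prefLabel, Literal("Controlled vocabularies", lang="en")))
g.add((EX.Thesauri, SKOS.broader, EX.ControlledVocabularies))

print(g.serialize(format="turtle"))
```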

  4. EnTag Project Enhanced tagging for discovery • JISC funded project • Partners • UKOLN – Koraljka Golub • University of Glamorgan – Doug Tudhope, Jim Moon • STFC – Brian Matthews, Cathy Jones, Bartek Puzoń • Intute • Non-funded • OCLC Office of Research, USA • Royal School of Library and Information Science, Denmark – Marianne Lykke Nielsen Period: 1 Sep 2007 to 30 Sep 2008 http://www.ukoln.ac.uk/projects/enhanced-tagging/

  5. Controlled Vocabulary • Traditional way of providing subject classification • For shelf-marking • For searching • For association of resources • Different types used, such as • Subject Classification • Keyword lists • Thesaurus

  6. HASSET (I) • UK Data Archive, Univ of Essex • Humanities and Social Science Electronic Thesaurus • Several thousand terms • Structure based on British Standard 5723:1987 / ISO 2788:1986 (Establishment and development of monolingual thesauri) • preferred terms, broader-narrower relations, associated terms http://www.data-archive.ac.uk/search/hassetSearch.asp
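The relations listed above (preferred terms, broader/narrower relations, associated terms) can be modelled very simply; the sketch below is illustrative only, and the sample entry is invented rather than taken from HASSET.

```python
# Plain-Python model of a thesaurus entry; the sample term is invented, not HASSET.
from dataclasses import dataclass, field

@dataclass
class ThesaurusTerm:
    preferred: str                                  # preferred form of the term
    broader: list = field(default_factory=list)     # BT: more general terms
    narrower: list = field(default_factory=list)    # NT: more specific terms
    related: list = field(default_factory=list)     # RT: associated terms
    use_for: list = field(default_factory=list)     # UF: non-preferred synonyms

terms = {
    "ELECTIONS": ThesaurusTerm(
        preferred="ELECTIONS",
        broader=["POLITICAL PROCESSES"],
        narrower=["GENERAL ELECTIONS", "LOCAL ELECTIONS"],
        related=["VOTING BEHAVIOUR"],
        use_for=["POLLS (ELECTIONS)"],
    ),
}
print(terms["ELECTIONS"].narrower)   # ['GENERAL ELECTIONS', 'LOCAL ELECTIONS']
```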

  7. HASSET (II)

  8. HASSET (III)

  9. Observations on controlled vocabularies • Precise classification of resources • Good for precision and recall • Can exploit the hierarchy to modify query • Using the broader/narrower/related terms • Expensive • Requires investment in specialist expertise to devise the vocabulary • Requires investment in specialist expertise to classify resources. • Hard to maintain currency
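As a rough illustration of how the hierarchy can be exploited to modify a query, the sketch below expands a search term with its narrower and related terms; the tiny thesaurus is invented for the example.

```python
# Expand a query term with narrower and related terms; the thesaurus is invented.
thesaurus = {
    "elections": {
        "broader": ["political processes"],
        "narrower": ["general elections", "local elections"],
        "related": ["voting behaviour"],
    },
}

def expand_query(term, relations=("narrower", "related")):
    """Return the original term plus the chosen related entries, if any."""
    entry = thesaurus.get(term, {})
    expanded = [term]
    for rel in relations:
        expanded.extend(entry.get(rel, []))
    return expanded

print(expand_query("elections"))
# ['elections', 'general elections', 'local elections', 'voting behaviour']
```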

  10. Social Tagging • The Web 2.0 way of providing search terms • People “tag” resources with free-text terms of their own choosing • Tags used to associate resources together • del.icio.us, flickr • “Folksonomy” • the terms a community chooses to use to tag its resources.
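A folksonomy can be thought of as the aggregate of individual tagging acts; the sketch below models each act as a (user, resource, tag) triple and derives the community's tags for one resource. The data and names are illustrative.

```python
# Each tagging act as a (user, resource, tag) triple; data is invented.
from collections import Counter

assignments = [
    ("alice", "doc1", "preservation"),
    ("bob",   "doc1", "preservation"),
    ("bob",   "doc2", "digital library"),
]

# The tags most often applied to a resource form its community description.
doc1_tags = Counter(tag for user, res, tag in assignments if res == "doc1")
print(doc1_tags.most_common())   # [('preservation', 2)]
```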

  11. Connotea

  12. Connotea – sharing tags

  13. Connotea – Tag Cloud

  14. Observations on Social Tagging • People often use the same tags or keywords (e.g. Preservation, Digital Library) • this makes things which mean the same thing to people easier to find • Cheap way of getting a very large number of resources classified • Represents the “community consensus” in some sense • “The Wisdom of Crowds” • Has currency as people update • Tag clouds of popular tags • However, people often use similar but not identical tags: • e.g. Semantic Web, SemanticWeb, SemWeb, SWeb • People make mistakes in tags • misspellings, using spaces incorrectly • Some tags are more specific than others – a tendency to be shallow? • e.g. controlled vocabulary, thesaurus, HASSET • Personal meaning • mine, favourite, useful • People often associate the same words together with particular ideas • these are captured in clusters
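One way to reconcile near-identical tags such as "Semantic Web", "SemanticWeb" and "SemWeb" is simple normalisation; the sketch below (with an assumed abbreviation map) illustrates the idea, though misspellings would need fuzzier matching.

```python
# Normalise tag variants; the abbreviation map is an assumed, illustrative mapping.
import re

ABBREVIATIONS = {"semweb": "semantic web", "sweb": "semantic web"}

def normalise_tag(tag):
    """Lower-case, split CamelCase, collapse whitespace, expand known abbreviations."""
    tag = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", tag)      # "SemanticWeb" -> "Semantic Web"
    tag = re.sub(r"\s+", " ", tag).strip().lower()
    return ABBREVIATIONS.get(tag.replace(" ", ""), tag)

for raw in ["Semantic Web", "SemanticWeb", "SemWeb", "SWeb"]:
    print(raw, "->", normalise_tag(raw))
```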

  15. EnTag Purpose Investigate the combination of controlled and social tagging approaches to support resource discovery in repositories and digital collections Aim to investigate • whether use of an established controlled vocabulary can help move social tagging beyond personal bookmarking to aid resource discovery To Improve tagging • Relevance of tags • Consistency • Efficiency To Improve retrieval • Effectiveness (degree of match between user and system terminology) In two different contexts: • Tagging by readers • Tagging by authors

  16. Testing Approach Main focus: • free tagging with no instructions Versus • tagging using a combined system and guidance for users Two demonstrators • Intute digital collection http://www.intute.ac.uk • Major development • Tagging by reader • Using a cohort of students to evaluate STFC repository http://epubs.stfc.ac.uk/ • Complementary development • Tagging by author • A more qualitative approach

  17. Intute Study http://www.intute.ac.uk

  18. Intute demonstrator: Enhanced

  19. Intute study Demonstrator • 11,042 stripped records in Politics • Free tagging or DDC / LCSH / Relative Index • Searching, simple and enhanced interfaces Questions • Choice of tag • Retrieval implication Participants • 28 UK politics students with little tagging experience • Thus the subjects were searchers, not authors Data collection • Logging • Three questionnaires Four tagging tasks • Two controlled, two free • Tag 15 documents in each task • 5 to 10 min per document • Open document but focus • Asked to consider the enhanced suggestions where appropriate • Paper at JCDL 2009

  20. STFC Author study • A study of authors of papers • Smaller number, c. 10–12 • Regular depositors (> 10 papers each) • Subject experts • Expectation that they would want their papers accurately tagged so that they can be found precisely • A more qualitative study

  21. Study Approach • Questions • Do authors appreciate the purpose and use of tags? • Value of controlled vocabulary – better tags? • User interface and ease of use • Supervised sessions • 40-minute observed trial • Statistics logging • Task worksheet • Tagging own papers – a number of their choice • Tag cloud, own tags, controlled vocabulary • Questions afterwards • ACM Computing Classification System used • Imported into SKOS
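Assuming the classification has been converted to a SKOS file, a tagger can look up candidate terms by preferred label as in the sketch below; the file name acm_ccs.ttl is hypothetical and the lookup is illustrative, not the project's actual implementation.

```python
# Look up concepts by preferred label in a SKOS file; "acm_ccs.ttl" is hypothetical.
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("acm_ccs.ttl", format="turtle")   # assumed local SKOS conversion of the scheme

def find_concepts(query):
    """Return (concept URI, label) pairs whose prefLabel contains the query string."""
    query = query.lower()
    return [(concept, str(label))
            for concept, label in g.subject_objects(SKOS.prefLabel)
            if query in str(label).lower()]

print(find_concepts("information retrieval"))
```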

  22. Limitations • A number of limitations of this approach: • Small sample size • Small number of papers tagged • Inappropriate controlled vocabulary • Computing and IT specialists too familiar with the concept of semantic annotation. • Single, observed use of the tool – not real life. • Nevertheless, it was felt that the results of the study were illuminating and useful.

  23. STFC Case Study: EPubs

  24. ePubs – a single entry

  25. The Tagger

  26. Browse the Thesaurus

  27. Browse the Thesaurus

  28. Picking terms

  29. Global vs Personal Tag Cloud

  30. Pick terms from Tag Cloud
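Global and personal tag clouds are typically just frequency counts scaled into font sizes, computed over all users' tags or a single user's tags respectively; the sketch below illustrates this with invented data.

```python
# Scale tag frequencies into font sizes for global and personal clouds; data invented.
from collections import Counter

assignments = [("alice", "skos"), ("alice", "thesauri"), ("bob", "skos"), ("bob", "skos")]

def cloud(counts, min_px=10, max_px=30):
    """Map each tag's count linearly into a font-size range (pixels)."""
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1
    return {tag: min_px + (c - lo) * (max_px - min_px) / span for tag, c in counts.items()}

global_counts = Counter(tag for user, tag in assignments)                       # all users
personal_counts = Counter(tag for user, tag in assignments if user == "alice")  # one user
print(cloud(global_counts))
print(cloud(personal_counts))
```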

  31. Some statistics (Figures 4, 5 and 6) • Average of 6 terms per item, 2/3 being free text • Little correlation with experience • Tagging time reduces with practice.

  32. Term Choice • Chose terms from the bottom of the hierarchy if possible. • Often preferred an appropriate term from the thesaurus over their own • Appreciated the better IR properties • Would like definitions of terms to be available • Would like automatic suggestions • Very little use of the Tag Cloud • Presentation of cloud? • Unfamiliarity? • Limited population?
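One simple way to provide the automatic suggestions users asked for is to match thesaurus preferred labels against the document's title or abstract; the sketch below is a naive illustration with invented data, not the approach taken in EnTag.

```python
# Suggest vocabulary terms whose labels occur in the document text; all data invented.
labels = ["information retrieval", "digital libraries", "thesauri", "social tagging"]

def suggest_terms(text, vocabulary):
    """Return vocabulary labels that occur verbatim in the document text."""
    text = text.lower()
    return [label for label in vocabulary if label in text]

abstract = "We evaluate social tagging against thesauri for resource discovery."
print(suggest_terms(abstract, labels))   # ['thesauri', 'social tagging']
```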

  33. User interface • Tool generally (though not universally) thought to be easy to use • Some wanted it to be simpler • More suited for a library professional? • Wanted more automation • Tag cloud interface not right • Would be willing to use • Especially if benefit in improved retrieval could be established.

  34. Preferred style Most depositors had a strong preference for the way they interacted with the system. • Free-text taggers: • enter tags, don't really use the vocabulary • Thesaurus browsers: • systematically browse the controlled vocabulary • Thesaurus searchers: • use the vocabulary search tool for preference • only enter a free-text term when they can't find an appropriate one. Speculate that there would also be those who prefer to start from the tag cloud? Contrast with the Intute study here.

  35. ACM Computing Classification System • General recognition of this scheme • Used in journals to classify papers • Meant that there was acceptance of its authority • Willingness to use it • Feeling that it was abstract and academic • Feeling that it was not up to date and that much was missing.

  36. Comparison of Intute and STFC Results Different user groups and approaches to the studies Similarities between the Intute and STFC users could be identified • Users appreciated the benefits of consistency and vocabulary control • willing to engage with the tagging system; • Support for automated suggestions • Appropriateness of the controlled vocabulary important • Tag cloud hard to use effectively • The user interface and interaction important.

  37. Observations Users are willing to add tags using a controlled vocabulary in conjunction with free text • By and large they understand why it's useful • Good search terms = good retrieval • But they need help • Automation, suggestions, good interfaces • Support for different styles of interaction • Produce “better” tags (?) • Also need flexible and targeted controlled vocabularies • “Web 2.0” features need to be thought through very carefully • “tag clouds” not a success • Need much better structuring and presentation • Integrated interaction between tag clouds and structured vocabularies needs further investigation • Develop a flexible, user-focussed vocabulary from tags • “structured folksonomy”
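As a rough illustration of the "structured folksonomy" idea, the sketch below promotes frequently used free tags and maps them, where possible, onto controlled-vocabulary terms; the mapping and tag data are invented.

```python
# Promote frequent free tags and map them onto vocabulary terms; data and mapping invented.
from collections import Counter

free_tags = ["semweb", "skos", "skos", "preservation", "preservation", "preservation"]
vocab_lookup = {"preservation": "DIGITAL PRESERVATION", "skos": "SKOS"}   # assumed mapping

tag_counts = Counter(free_tags)
structured = {tag: vocab_lookup.get(tag, "candidate new concept: " + tag)
              for tag, n in tag_counts.items() if n >= 2}
print(structured)   # {'skos': 'SKOS', 'preservation': 'DIGITAL PRESERVATION'}
```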

  38. Conclusions • Controlled vocabulary and tags complement each other • Controlled vocabulary suggestions valued if appropriate • Future work: • Qualitative analysis • Enhancements • Controlled vocabulary • Auto suggestions • Interface • Motivation for tagging • Would users (enhanced) tag in “real life” ?

  39. Questions? brian.matthews@stfc.ac.uk
