Testing Your Taxonomy
Ron Daniel, Jr.

Presentation Transcript


  1. Testing Your Taxonomy Ron Daniel, Jr.

  2. Testing Your Taxonomy
     • Your taxonomy will not be perfect or complete, and it will need to be modified based on changing content, user needs, and other practical considerations.
     • Developing a taxonomy incrementally requires measuring how well it is working in order to plan how to modify it.
     • In this session, you will learn qualitative and quantitative taxonomy testing methods, including:
       • Tagging representative content to see if the taxonomy works, and determining how much content is enough for validation.
       • Card sorting, task-based scenario testing, and focus groups to determine whether the taxonomy makes sense to your target audiences and to provide clues about how to fix it.
       • Benchmarks and metrics to evaluate usability test results, identify coverage gaps, and provide guidance for changes.

  3. Qualitative taxonomy testing methods

  4. Walk-through method—Show & explain (example: ABC Computers.com)
     • Content Type: Award; Case Study; Contract & Warranty; Demo; Magazine; News & Event; Product Information; Services; Solution; Specification; Technical Note; Tool; Training; White Paper; Other Content Type
     • Competency: Business & Finance; Interpersonal Development; IT Professionals Technical Training; IT Professionals Training & Certification; PC Productivity; Personal Computing Proficiency
     • Industry: Banking & Finance; Communications; E-Business; Education; Government; Healthcare; Hospitality; Manufacturing; Petrochemicals; Retail / Wholesale; Technology; Transportation; Other Industries
     • Service: Assessment, Design & Implementation; Deployment; Enterprise Support; Client Support; Managed Lifecycle; Asset Recovery & Recycling; Training
     • Product Family: Desktops; MP3 Players; Monitors; Networking; Notebooks; Printers; Projectors; Servers; Services; Storage; Televisions; Other Brands
     • Audience: All; Business; Employee; Education; Gaming Enthusiast; Home; Investor; Job Seeker; Media; Partner; Shopper (First Time, Experienced, Advanced); Supplier
     • Line of Business: All; Home & Home Office; Gaming; Government, Education & Healthcare; Medium & Large Business; Small Business
     • Region-Country: All; Asia-Pacific; Canada; EMEA; Japan; Latin America & Caribbean; United States

  5. Qualitative taxonomy testing methods

  6. Walk-through method—Editorial rules consistency check
     • Abbreviations
     • Ampersands
     • Capitalization
     • General…, More…, Other…
     • Languages & character sets
     • Length limits
     • Multiple parents
     • Plural vs. singular form
     • Scope notes
     • Serial comma
     • Sources of terms
     • Spaces
     • Synonyms & acronyms
     • Term order (alphabetic or …)
     • Term label order (direct vs. inverted)
     • …

  7. Qualitative taxonomy testing methods

  8. Usability testing method—Closed card sort
     • For an alpha test of a grocery site.
     • 15 testers put each of 71 best-selling product types into one of 10 pre-defined categories.
     • Categories where fewer than 14 of 15 testers put a product into the same category were flagged. (A scripted version of this check is sketched below.)
     • Example: "Cocoa Drinks – Powder" is best categorized in both "Beverages" and "Grocery". How to improve? Allow products in multiple categories.
     • (Results are for minimum size = 4 votes.)
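The flagging rule above is easy to script once the sort results are tabulated. A minimal sketch in Python; the `results` data and threshold are hypothetical stand-ins, not from the deck:

```python
from collections import Counter

# Hypothetical closed-card-sort results: product type -> the category
# each of the 15 testers chose for it.
results = {
    "Cocoa Drinks - Powder": ["Beverages"] * 8 + ["Grocery"] * 7,
    "Orange Juice":          ["Beverages"] * 15,
    "Frozen Peas":           ["Frozen"] * 14 + ["Grocery"] * 1,
}

AGREEMENT_THRESHOLD = 14  # flag anything below 14-of-15 agreement

for product, votes in results.items():
    top_category, top_votes = Counter(votes).most_common(1)[0]
    if top_votes < AGREEMENT_THRESHOLD:
        print(f"FLAG: {product!r}: best category {top_category!r} "
              f"got only {top_votes}/{len(votes)} votes")
```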

  9. Usability testing method—Task-based card sorting (1)
     • 15 representative questions were selected:
       • Perspective of various organizational units
       • Most frequent website searches
       • Most frequently accessed website content
     • Correct answers to the questions were agreed in advance by the team.
     • 15 users were tested:
       • Did not work for the organization
       • Represented target audiences
     • Testers were asked "Where would you look for …?"
       • "Under which facet: Topic, Commodity, or Geography?"
       • Then, "… under which category?"
       • Then, "… under which sub-category?"
     • Tester choices were recorded.
     • Testers were asked to "think aloud"; notes were taken on what they said.
     • Pre- and post-test questions were asked, and tester answers were recorded.

  10. Usability testing method—Task-based card sorting (2)
      Sample question 3: What is the average farm income level in your state?
      Facets: 1. Topics; 2. Commodities; 3. Geographic Coverage
      1. Topics
         1.1 Agricultural Economy
         1.2 Agriculture-Related Policy
         1.3 Diet, Health & Safety
         1.4 Farm Financial Conditions
         1.5 Farm Practices & Management
         1.6 Food & Agricultural Industries
         1.7 Food & Nutrition Assistance
         1.8 Natural Resources & Environment
         1.9 Rural Economy
         1.10 Trade & International Markets
      Drilling into 1.4 Farm Financial Conditions:
         1.4.1 Costs of Production
         1.4.2 Commodity Outlook
         1.4.3 Farm Financial Management & Performance
         1.4.4 Farm Income
         1.4.5 Farm Household Financial Well-being
         1.4.6 Lenders & Financial Markets
         1.4.7 Taxes

  11. Analysis of task-based card sorting (1)

  12. Analysis of task-based card sorting (2)
      • In 80% of the trials, users looked for information under the categories where we expected them to look.
      • Breaking up topics into facets makes it easier to find information, especially information related to commodities.

  13. Analysis of task-based card sorting (3)
      • "Possible change required" / "Change required" flags: on these trials, only 50% of testers looked in the right category, and only 27-36% agreed on any one category.
      • The placement of "Traceability" under Policy needs to be clarified; use quasi-synonyms.
      • Possible error in the categorization of one question: 64% of testers thought the answer should be under "Commodity Trade."
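The percentages quoted in slides 12 and 13 come from straightforward tallies over the recorded trials. A minimal sketch of that tally; the trial records (question IDs, chosen and expected categories) are made up for illustration:

```python
from collections import Counter

# Hypothetical trials: (question_id, category_chosen, expected_category)
trials = [
    (3, "Farm Financial Conditions", "Farm Financial Conditions"),
    (3, "Agricultural Economy",      "Farm Financial Conditions"),
    (7, "Commodity Trade",           "Trade & International Markets"),
    (7, "Commodity Trade",           "Trade & International Markets"),
]

# Overall hit rate: how often users looked where we expected.
hits = sum(1 for _, chosen, expected in trials if chosen == expected)
print(f"Expected-category hit rate: {hits / len(trials):.0%}")

# Per-question agreement: the share of testers choosing the modal category.
by_question = {}
for qid, chosen, _ in trials:
    by_question.setdefault(qid, []).append(chosen)
for qid, choices in sorted(by_question.items()):
    top_cat, top_n = Counter(choices).most_common(1)[0]
    print(f"Q{qid}: modal category {top_cat!r}, "
          f"agreement {top_n / len(choices):.0%}")
```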

  14. User satisfaction method—Card Sort Questionnaire (1)
      • Was it easy, medium, or difficult to choose the appropriate Topic? (Easy / Medium / Difficult)
      • Was it easy, medium, or difficult to choose the appropriate Commodity? (Easy / Medium / Difficult)
      • Was it easy, medium, or difficult to choose the appropriate Geographic Coverage? (Easy / Medium / Difficult)

  15. User satisfaction method—Card Sort Questionnaire (2) [Chart: questionnaire responses plotted on a scale from "More Difficult" to "Easier"]

  16. Task-Based Card Sorting "Bakeoff"
      • Goal: compare two different sets of headings, "Blue" and "Orange".
      • Method:
        • Scenarios were written for 8 general tasks.
        • 15 users used one set of headings, then the other, to accomplish each task.
        • Users were surveyed on satisfaction after each task, then again at the end.
      • Be aware of test design: be sure to counterbalance the order in which people see the different schemes! (A counterbalancing sketch follows below.)
        • This is easier with an even number of participants.
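Counterbalancing here just means alternating which heading scheme each participant sees first, so order effects cancel out. A minimal sketch, assuming the 15 participants and the "Blue"/"Orange" schemes from this slide:

```python
# Assign each participant an ordering of the two heading schemes.
# With an odd number of participants (15 here), one ordering is
# unavoidably used one more time than the other -- hence the slide's
# note that an even number of participants is easier.
participants = [f"P{i:02d}" for i in range(1, 16)]
orders = [("Blue", "Orange"), ("Orange", "Blue")]

for i, p in enumerate(participants):
    first, second = orders[i % 2]
    print(f"{p}: {first} first, then {second}")
```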

  17. Strengths and Weaknesses of Task-Based Card Sorts
      • A task-based card sort is a test of the navigation headings alone, without additional context from the viewed pages.
      • Strengths:
        • Due to the low-fidelity interface, it is easy to create and conduct.
        • As a pure navigation test, it provides concentrated information about navigation alone, which makes it particularly appropriate for comparing two navigation schemes.
        • It provides concentrated information about the wording of headings and spotlights any confusion they may cause; these findings appear in the qualitative analysis more than in the quantitative.
        • It is a tightly focused method for gathering this type of information.
      • Weaknesses:
        • Due to the lack of context, it is a difficult test of navigation.
        • Due to the lack of content, users will have limited confidence that they have reached the right spot. This will be reflected in lower satisfaction scores than for the fully implemented navigation.

  18. Qualitative taxonomy testing methods

  19. User interface survey—Which search UI is "better"?
      • Criteria:
        • User satisfaction
        • Success completing tasks
        • Confidence in results
        • Fewer dead ends
      • Methodology:
        • Design tasks from specific to general
        • Time performance
        • Calculate success rates
        • Survey subjective criteria
      • Pay attention to survey hygiene: participant selection, counterbalancing, t-scores.
      Source: Yee, Swearingen, Li, & Hearst
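For the "calculate success rates" and "t-scores" steps, a paired test is the usual choice because each participant attempts the same tasks under both interfaces. A minimal sketch with made-up timings and outcomes (not the cited study's data); `ttest_rel` is SciPy's paired t-test:

```python
from scipy.stats import ttest_rel

# Hypothetical task-completion times (seconds) for the same 8 tasks,
# one list per interface; paired because each task is measured under
# both UIs.
baseline_ui = [52, 41, 63, 38, 55, 47, 60, 44]
faceted_ui  = [40, 35, 50, 36, 42, 39, 48, 41]

t_stat, p_value = ttest_rel(baseline_ui, faceted_ui)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")

# Success rate: fraction of tasks completed correctly under one UI
# (1 = success, 0 = failure; illustrative values).
baseline_success = [1, 1, 0, 1, 0, 1, 1, 0]
print(f"baseline success rate: "
      f"{sum(baseline_success) / len(baseline_success):.0%}")
```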

  20. User interface survey — Results (1) Source: Yee, Swearingen, Li, & Hearst

  21. User interface survey—Results (2) [Charts comparing the Google-like baseline UI with the faceted category UI] Source: Yee, Swearingen, Li, & Hearst

  22. Qualitative taxonomy testing methods

  23. Tagging samples—How many items?
      • Quantitative methods require large amounts of tagged content, which requires specialists, or software, to do the tagging.
      • Caveat: the results may be very different from how "real" users would categorize the content.

  24. Tagging samples—Manually tagged metadata sample

  25. Tagging samples—Spreadsheet for tagging 10s-100s of items
      1) Clickable URLs for the sample content
      2) Review a small sample and describe it
      3) Drop-downs for tagging (including an "Other" entry for the unexpected)
      4) Flag questions
      (A sketch for generating such a spreadsheet follows below.)
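A tagging spreadsheet like this can be generated rather than hand-built. A minimal sketch using openpyxl; the column layout, URLs, and the small content-type vocabulary are illustrative assumptions, not from the deck:

```python
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

# Illustrative sample: URLs to tag and a small content-type vocabulary.
urls = ["http://example.com/doc1", "http://example.com/doc2"]
content_types = ["Award", "Case Study", "White Paper", "Other"]

wb = Workbook()
ws = wb.active
ws.append(["URL", "Description", "Content Type", "Questions / Flags"])

for row, url in enumerate(urls, start=2):
    cell = ws.cell(row=row, column=1, value=url)
    cell.hyperlink = url          # clickable URL for the sample content
    cell.style = "Hyperlink"

# Drop-down for tagging, including an "Other" entry for the unexpected.
dv = DataValidation(type="list",
                    formula1='"' + ",".join(content_types) + '"',
                    allow_blank=True)
ws.add_data_validation(dv)
dv.add(f"C2:C{len(urls) + 1}")

wb.save("tagging_sample.xlsx")
```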

  26. Rough Bulk Tagging—Facet Demo (1)
      • Collections: 4 content sources
        • NTRS, SIRTF, Webb, Lessons Learned
      • Taxonomy:
        • Converted MultiTes format into RDF for Seamark
      • Metadata:
        • Converted from existing metadata on web pages, or
        • Created using a simple automatic classifier (string matching with terms & synonyms)
      • 250k items, ~12 metadata fields, 1.5 weeks of effort
      • Out-of-the-box (OOTB) Seamark user interface, plus logo

  27. Rough Bulk Tagging—OOTB Facet Demo (2)

  28. Quantitative Methods
      • Quantitative methods are possible when:
        • You have a large quantity of tagged data
        • You have logs of how people are using your site

  29. Best quantitative process—Query log & click trail examination
      • How can we characterize users and what they are looking for? Query log & click trail examination.
      • Only 30-40% of organizations interested in taxonomy governance examine query logs.*
      • Basic reports provide plenty of real value.
      • The greatest value comes from:
        • Identifying a person as responsible for search quality
        • Starting a "measure & improve" mindset
      • The greatest challenges:
        • Getting a person assigned (≥ 10% of their time)
        • Getting logs turned back on
      • UltraSeek reporting: top queries; queries with no results; queries with no click-through; most requested documents; query trend analysis; complete server usage summary. (A minimal roll-your-own version is sketched below.)
      • Click trail packages: iWebTrack, NetTracker, OptimalIQ, SiteCatalyst, Visitorville, WebTrends.
      * Source: Metadata Maturity Model presentation, Ron Daniel, ESS '05
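If your search engine lacks built-in reporting like UltraSeek's, the basic reports can be derived from a raw query log. A minimal sketch, assuming a hypothetical tab-separated log format with query, result-count, and clicked-URL columns:

```python
import csv
import io
from collections import Counter

# Hypothetical log excerpt: query \t num_results \t clicked_url (may be
# empty). In practice this would be read from the search engine's log file.
LOG = """taxonomy\t120\thttp://example.com/tax-overview
taxonomny\t0\t
card sort\t15\t
card sort\t15\thttp://example.com/card-sort-howto
"""

top_queries, no_results, no_clickthrough = Counter(), Counter(), Counter()
for query, num_results, clicked_url in csv.reader(io.StringIO(LOG),
                                                  delimiter="\t"):
    top_queries[query] += 1
    if int(num_results) == 0:
        no_results[query] += 1        # candidate synonyms / coverage gaps
    elif not clicked_url:
        no_clickthrough[query] += 1   # results shown, but nothing looked right

print("Top queries:", top_queries.most_common(5))
print("No results:", no_results.most_common(5))
print("No click-through:", no_clickthrough.most_common(5))
```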

  30. Early quantitative method: How evenly does it divide the content?
      • Documents do not distribute uniformly across categories.
      • A Zipf (1/x) distribution is the expected behavior.
      • The 80/20 rule in action (actually closer to a 70/20 rule here).
      [Chart annotations: the largest category is the leading candidate for splitting; the smallest categories are leading candidates for merging.]

  31. Early quantitative method: How evenly does it divide the content? (2)
      • Methodology: 115 randomly selected URLs from the corporate intranet search index were manually categorized. Inaccessible files and "junk" were removed.
      • Results: slightly more uniform than a Zipf distribution. Above the curve is better than expected.
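Comparing observed category counts against the Zipf expectation is a one-screen calculation. A minimal sketch with made-up counts; categories far above the curve are split candidates, those far below are merge candidates:

```python
# Category document counts, sorted descending (made-up numbers).
counts = [40, 19, 12, 9, 7, 6, 5, 4, 2, 1]
total = sum(counts)

# Zipf expectation: the count at rank r is proportional to 1/r,
# normalized so the expected counts sum to the observed total.
harmonic = sum(1 / r for r in range(1, len(counts) + 1))
for rank, observed in enumerate(counts, start=1):
    expected = total * (1 / rank) / harmonic
    status = "above curve" if observed > expected else "below curve"
    print(f"rank {rank:2d}: observed {observed:3d}, "
          f"Zipf-expected {expected:5.1f} ({status})")
```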

  32. Late quantitative method—How does taxonomy "shape" match that of the content?
      • Background: hierarchical taxonomies allow comparison of the "fit" between content and taxonomy areas.
      • Methodology:
        • 25,380 resources were tagged with a taxonomy of 179 terms (an average of 2 terms per resource).
        • Counts of terms and documents were summed within the taxonomy hierarchy.
      • Results:
        • Roughly Zipf distributed (top 20 terms: 79%; top 30 terms: 87%).
        • Mismatches between term % and document % were flagged.
      Source: Courtesy Keith Stubbs, U.S. Dept. of Ed.
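The mismatch flagging in this method reduces to comparing each branch's share of taxonomy terms against its share of tagged documents. A minimal sketch with illustrative branch names and counts (not the cited data); the 10-point threshold is an arbitrary assumption:

```python
# Per-branch counts (illustrative): (terms in branch, documents tagged there).
branches = {
    "Farm Financial Conditions": (7, 5100),
    "Rural Economy":             (12, 800),
    "Trade & Intl. Markets":     (9, 2400),
}

total_terms = sum(t for t, _ in branches.values())
total_docs = sum(d for _, d in branches.values())

for name, (terms, docs) in branches.items():
    term_pct = terms / total_terms
    doc_pct = docs / total_docs
    # Flag branches whose content share is far out of line with term share.
    flag = " <-- mismatch" if abs(term_pct - doc_pct) > 0.10 else ""
    print(f"{name}: terms {term_pct:.0%}, docs {doc_pct:.0%}{flag}")
```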

  33. Conclusion
      • Simple walkthroughs are only the start of how to test a taxonomy.
      • Tagging modest amounts of content, and usability tests such as task-based card sorts, provide strong information about problems within the taxonomy.
        • Caveat: they may tell you which headings need to be changed, but they won't tell you what they should be changed to.
      • If you are not looking at query logs and click trails, you don't know what site visitors are doing.
      • Taxonomy changes do not stand alone:
        • Search system improvements
        • Navigation improvements
        • Content improvements
        • Process improvements

  34. Questions?
      Ron Daniel, Jr.
      rdaniel@taxonomystrategies.com
      http://www.taxonomystrategies.com

  35. Bibliography
      • K. Yee, K. Swearingen, K. Li, and M. Hearst. "Searching and organizing: Faceted metadata for image search and browsing." Proceedings of the Conference on Human Factors in Computing Systems (April 2003). http://bailando.sims.berkeley.edu/papers/flamenco-chi03.pdf
      • R. Daniel and J. Busch. "Benchmarking Your Search Function: A Maturity Model." http://www.taxonomystrategies.com/presentations/maturity-2005-05-17%28as-presented%29.ppt
      • Donna Maurer. "Card-Based Classification Evaluation." Boxes and Arrows, April 7, 2003. http://www.boxesandarrows.com/view/card_based_classification_evaluation
