An Introduction to Treejack
Out on a limb with your IA

Dave O’Brien, Optimal Usability
Wellington, New Zealand

Webinar: 22 Jan 2010, 36 attendees (USA, CA, UK, NZ, AU, BR, CO)

Quickie Treejack tour:
What is tree testing?
Planning a tree test
Setting up Treejack
Running a test
Can users find particular items in the tree?
Can they find them directly, without having to backtrack?
Could they choose between topics quickly, without having to think too much?
Which parts of your tree work well?
Which fall down?
Labeling

What is tree testing, really?
Improving organisation of your site
Improving top-down navigation
Improving your structure’s terminology (labels)
Comparing structures (before/after, or A vs. B)
Isolating the structure itself
Getting user data early (before site is built)
Making it cheap & quick to try out ideas
NOT testing other navigation routes
NOT testing page layout
NOT testing visual design
NOT a substitute for full user testing
NOT a replacement for card sorting
Paper tree testing
“card-based classification” – Donna Spencer
Show lists of topics on index cards
In person, score manually, analyse in Excel
Create a web tool for remote testing
Quick for a designer to learn and use
Simple for participants to do the test
Able to handle a large sample of users
Able to present clear results
Quick turnaround for iterating
Open card sorting is generative
Suggests how your users mentally group content
Helps you create new structures
Closed card sorting – almost evaluative, but not quite
Tree testing is evaluative
Tests a given site structure
Shows you where the structure is strong & weak
Lets you compare alternative structures
Run a baseline tree test (existing structure)
What works? What doesn’t?
Run an open card sort on the content
How do your users classify things?
Come up with some new structures
Run tree tests on them (same tasks)
Compare to each other
Compare to the baseline results
Planning a tree test

Find out who, what, when, etc.
Fill in the “planning questions” template
Get the tree(s) in digital format
Use the Excel tree-import template, etc.
Import a digital format
Or enter in Treejack
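Because the tree usually starts life as an outline in Excel or a text file, it can help to sanity-check it in code before entering it. A minimal sketch, assuming a tab-indented outline with one topic per line — an illustration only, not Treejack's actual import format:

```python
# Hypothetical sketch: parse a tab-indented outline (one topic per line,
# one tab per level) into nested (label, children) tuples for review
# before entering the tree into Treejack.

def parse_outline(text):
    """Parse tab-indented lines into a single (label, children) tree."""
    root = ("Top", [])
    stack = [(-1, root)]                 # (depth, node) pairs
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = len(line) - len(line.lstrip("\t"))
        node = (line.strip(), [])
        while stack and stack[-1][0] >= depth:
            stack.pop()                  # climb back up to the parent level
        stack[-1][1][1].append(node)     # attach to current parent's children
        stack.append((depth, node))
    return root

outline = "Products\n\tLaptops\n\tPhones\nSupport\n\tContact Us"
tree = parse_outline(outline)
```

Counting the parsed nodes against the source spreadsheet is a quick way to catch indentation slips before they become mystery topics in the test.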
How big are your trees?
Small (less than 50 items) = 25%
Medium (50 - 150 items) = 39%
Large (150 - 250 items) = 22%
Huge (more than 250 items) = 14%
Recommend <1000 items
Bigger? Cut it down by:
Using top N levels (e.g. 3 or 4)
Testing subtrees separately*
Pruning branches that are unlikely to be visited
Remove “helper” topics
e.g. Search, Site Map, Help, Contact Us
Watch for implicit topics!
Create your tree based on the content, not just the page structure.
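The cutting-down steps above (top N levels, drop helper topics) can be sketched as a small recursive prune. The nested-tuple shape and the helper list are assumptions for illustration:

```python
# Hypothetical sketch: cut a large tree down for testing by keeping only
# the top N levels and dropping "helper" topics such as Search or Help.
# The (label, children) tuple shape is an assumption, not a Treejack format.

HELPERS = {"Search", "Site Map", "Help", "Contact Us"}

def prune(node, max_depth, depth=0):
    label, children = node
    if depth >= max_depth:
        return (label, [])               # cut everything below level N
    kept = [prune(child, max_depth, depth + 1)
            for child in children if child[0] not in HELPERS]
    return (label, kept)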
Setting up a Treejack project

Entering your tree
Entering the tasks and answers
Less on mechanics, more on tips
New vs. Duplicate
Survey name vs. address
The “Other” option
Passing an argument in the URL:
https://demo.optimalworkshop.com/treejack/survey/test1?i=12345
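The URL argument lets you match each set of results back to a panel or mailing-list ID. A small sketch of generating one invitation link per participant; the base URL and the "i" parameter follow the example above, and the ID values are made up:

```python
# Hypothetical sketch: build one invitation link per participant so
# sessions can later be matched back to a panel ID. The "i" query
# parameter follows the slide's example URL; the IDs are illustrative.

from urllib.parse import urlencode

base = "https://demo.optimalworkshop.com/treejack/survey/test1"
participant_ids = ["12345", "12346", "12347"]

links = [f"{base}?{urlencode({'i': pid})}" for pid in participant_ids]
```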
Paste from Excel, Word, text file, etc.
“Top” – how to replace it
Not the same as randomising tasks
Changing the tree after entering answers
Edit/review/finalise the tree elsewhere before putting it into Treejack
Preview is surprisingly useful
Multiple correct answers
The “main” answer is usually not enough
Check the entire tree yourself
Must choose bottom-level topics
Workaround: Mark all subtopics correct
Workaround: Remove the subtopics
Choose answers LAST
Randomising tasks – almost always
Limiting the # of tasks
e.g. 20–30 tasks total, with 10 shown per participant
Increase the # of participants to get enough results per task
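The randomise-and-limit idea amounts to each participant seeing a random subset of the task pool, which is why you need proportionally more participants to get enough results per task. Treejack handles this itself; this sketch only illustrates the sampling:

```python
# Hypothetical sketch of "randomise and limit the # of tasks": each
# participant is shown a random subset (here 10 of 30). Treejack does
# this server-side; this is purely an illustration of the idea.

import random

def assign_tasks(all_tasks, per_participant=10, seed=None):
    """Pick a random, non-repeating subset of tasks for one participant."""
    rng = random.Random(seed)
    return rng.sample(all_tasks, per_participant)

tasks = [f"task-{n}" for n in range(1, 31)]      # a pool of 30 tasks
shown = assign_tasks(tasks, per_participant=10, seed=1)
```

With 30 tasks and 10 shown each, every task reaches roughly a third of participants, so a target of 30 results per task implies around 90 participants.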
Eliminate users who didn’t really try
Defaults to 50%
Not previewing/piloting is just plain dumb
Spot mistakes before launch
Preview the entire test yourself
Pilot it with stakeholders and sample users
Launch it, get feedback, duplicate, revise
Task wording (unclear, ambiguous, typos)
Unexpected “correct” answers
Misc. problems (e.g. instructions)
Poll

1 – 20 = 44%
21 – 40 = 20%
41 – 100 = 24%
Over 100 = 12%
Recommend >30 users per user group/test
Monitor early results for problems
Low # of surveys started
Email invitation not clear? Subject = spam? Not engaging?
Low completion rate
Email didn’t set expectations? Test too long? Too hard?
Generally less taxing than card sorting
Skimming high-level results

Middling overall score
Often many highs with a few lows
Inspect tasks with low scores (low total or low sub-scores)
Inspect the pie charts
Success score: % who chose a correct answer (directly or indirectly)
Low Success score?
check the spreadsheet to see where they went wrong
Directness score: % of successful users who did not backtrack
Coming soon: making this independent of success
Low Directness score?
check the spreadsheet for patterns in their wandering
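The two scores above can be recomputed from exported click paths when you want to slice the data your own way. A sketch under assumed data shapes (a path is the ordered list of topics a participant visited; a "backtrack" is modelled here as revisiting any topic) — Treejack's real export format may differ:

```python
# Hypothetical sketch of the Success and Directness scores, computed
# from per-participant click paths. The list-of-paths shape and the
# "revisited topic = backtrack" rule are assumptions for illustration.

def success_score(paths, correct):
    """% of participants whose final choice was a correct answer."""
    hits = sum(1 for p in paths if p and p[-1] in correct)
    return 100.0 * hits / len(paths)

def directness_score(paths, correct):
    """% of successful participants who never revisited a topic."""
    successes = [p for p in paths if p and p[-1] in correct]
    if not successes:
        return 0.0
    direct = sum(1 for p in successes if len(p) == len(set(p)))
    return 100.0 * direct / len(successes)

paths = [
    ["Home", "Products", "Laptops"],                      # direct success
    ["Home", "Support", "Home", "Products", "Laptops"],   # indirect success
    ["Home", "Support", "Contact Us"],                    # failure
]
correct = {"Laptops"}
```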
Speed score: % who completed this task at about the same speed as their other tasks, i.e. within 2 standard deviations of their average task time across all tasks
e.g. a 70% Speed score means:
7/10 users went their “normal” speed
3/10 users took substantially longer than normal for them
A low Speed score indicates that users hesitated when making choices
e.g. choices are not clear or not mutually distinguishable
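The "within 2 standard deviations of their own average" definition is easy to recompute if you have raw times. A sketch with made-up task times in seconds; the per-user data shape is an assumption:

```python
# Hypothetical sketch of the Speed score as defined above: the share of
# participants who finished this task within 2 standard deviations of
# their own mean task time. Times here are illustrative seconds.

from statistics import mean, pstdev

def within_normal_speed(task_time, all_times):
    """True if task_time is within 2 SD of this user's mean task time."""
    mu, sigma = mean(all_times), pstdev(all_times)
    return abs(task_time - mu) <= 2 * sigma

def speed_score(per_user):
    """per_user: list of (time on this task, all task times for that user)."""
    ok = sum(1 for t, times in per_user if within_normal_speed(t, times))
    return 100.0 * ok / len(per_user)

per_user = [
    (10, [10, 11, 12, 10, 11, 12]),      # went their normal speed
    (300, [10, 11, 12, 10, 11, 300]),    # far slower than their norm
]
```

Note that comparing each user to their own baseline, rather than to a global average, is what lets fast and slow readers be scored on equal terms.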
Wish: add the raw times to the spreadsheet, so you can do your own crunching as needed.
Overall score uses a grid to combine these scores in a semi-intelligent fashion
Detailed results – destinations

# who chose a given topic as the answer
High totals – problem with that topic (perhaps in relation to its siblings)
Clusters of totals – problem with the parent level
For >30 sessions, ignore topics that get <3 clicks.
Look for high “indirect success” rates (>20%)
Check paths for patterns of wandering
Look for high “failure” rates (>25%)
Check the wrong answers above
Look for high skip rates (>10%)
Check paths for where they bailed out.
Look for "evil attractors"
Topics that get clicks across several seemingly unrelated tasks.
Usually a vague term that needs tightening up
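Spotting evil attractors is just a matter of counting how many distinct tasks each topic draws clicks from. A sketch with made-up click data; the task-to-topics mapping and the threshold of 3 tasks are illustrative assumptions, not Treejack rules:

```python
# Hypothetical sketch: flag "evil attractors" -- topics clicked across
# several seemingly unrelated tasks. Input maps each task to the topics
# participants clicked during it; all data here is made up.

from collections import defaultdict

def evil_attractors(clicks_by_task, min_tasks=3):
    """Return topics that drew clicks in at least min_tasks distinct tasks."""
    tasks_hit = defaultdict(set)
    for task, topics in clicks_by_task.items():
        for topic in topics:
            tasks_hit[topic].add(task)
    return sorted(t for t, hit in tasks_hit.items() if len(hit) >= min_tasks)

clicks_by_task = {
    "find opening hours":     ["About", "Resources", "Contact"],
    "download annual report": ["Resources", "Publications"],
    "book a course":          ["Resources", "Training"],
}
attractors = evil_attractors(clicks_by_task)
```

A topic that surfaces here across unrelated tasks is usually the vague label that needs tightening up.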
Detailed results – first clicks

Important for task success
Which sections they visited overall
Did they visit the right section but back out?
Detailed results – paths

Useful when asking:
How the heck did they get way over there?
Did a lot of them take the same detour?
No web UI for removing participants.
Email Support and we’ll fix you up.
Better scoring for Directness, Speed
Improved results (10/100/1000)
General enhancements across Treejack, OptimalSort, and Chalkmark
Whatever you yell loudest for…
GetSatisfaction lets you “vote” for issues
Tree testing – more resources

Boxes & Arrows article on tree testing:
http://www.boxesandarrows.com/view/tree-testing

Donna Spencer’s article on paper tree testing:
http://www.boxesandarrows.com/view/card_based_classification_evaluation

Treejack website (webinars, slides, articles, user forum):
http://www.optimalworkshop.com