Usability with Project • Lecture 3 – 16/9/09 • Dr. Simeon Keates
Friday’s exercise – part 1 • Look at your product selection page • Confirm that your products are in the correct families • Done for most groups – I will check the last 2 groups today… • Develop three (brief) personas/descriptions for users of your site. Explain why each person would want to visit your site and complete this task.
A note on your personas/user descriptions • The final stage of the project will be to compare the usability of your original and final designs • In the final report you will have to explain any design changes that you made • These should be rationalised in terms of the personas that you developed • Note: you can modify your personas/descriptions as the project progresses
Interesting persona attributes • Try to identify persona features that affect how the site is used • For example: • “Taste for the high life” or “Get as much as you can”? • “Time to browse” or “Time is money”? • “Children” or “No children”? • “Experienced computer shopper” or “Novice”?
Notes on different search techniques • “Expert” • Knows exactly what product they are looking for • “Novice” • Probably wants to see everything first before deciding • “Forager” • Keeps looking until they find something worth having • [Picture a wild animal hunting for food – keeps going until something worth eating is found] • Need to reflect these approaches in the personas…
Friday’s exercise – part 2 • Estimate the minimum, maximum and expected number of button/key presses for the user to select their desired products • Minimum • Select the 5*, a 4* and a 1* product, or 2 x 4* and a 2*, etc. • 3 presses for a simple list (2 presses if repeat selection is allowed) • 6 presses for product clusters (assuming none in the same category) • 5 presses more likely for product clusters (low * product in the same category as one of the high * ones) • e.g. 5* category, 5* product, 4* category, 4* product, 1* product • Could be 4 if the 5*, 4* and 1* (or 4*, 3*, 3* or 4*, 4*, 2* …) are in the same category … or if there is a repeat selection (i.e. the same 3* or 4* product chosen twice) • … and 3 if that is the default category (or 2 if that includes the 5* product)
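A minimal sketch of this press-counting logic (the two-level category/product layout and the function name are assumptions for illustration, not part of the exercise): one press opens a category, one press selects a product, and an already-open or default category costs nothing extra.

```python
def count_presses(selections, default_category=None):
    """Count presses in a two-level clustered layout: one press to
    open a category, one press to select a product. A category that
    is already open (or is the default) costs nothing extra."""
    presses = 0
    current = default_category
    for category, product in selections:
        if category != current:
            presses += 1      # open the category
            current = category
        presses += 1          # select the product
    return presses

# 5*, 4* and 1* products in three different categories: 6 presses
print(count_presses([("A", "5*"), ("B", "4*"), ("C", "1*")]))
# 1* product shares a category with the 5* product: 5 presses
print(count_presses([("A", "5*"), ("A", "1*"), ("B", "4*")]))
# ...and only 4 if that shared category is the default
print(count_presses([("A", "5*"), ("A", "1*"), ("B", "4*")],
                    default_category="A"))
```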
Friday’s exercise – part 2 • Estimate the minimum, maximum and expected number of button/key presses for the user to select their desired products • Maximum • Assume no exploring or changes of mind • Select 10 x 1* products • 10 presses for a simple list • 20 presses for 10 product clusters (assuming none in the same category) • If <10 product clusters: • 2 x no_of_product_clusters + (10 – no_of_product_clusters)
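The slide's worst-case formula, as a small sketch (function name assumed): each cluster used must be opened once, every product costs one selection press, and with ten or more clusters each product sits in its own cluster at 2 presses apiece.

```python
def max_presses(n_clusters, n_products=10):
    """Worst case: 10 x 1* products, no exploring or changes of mind.
    Each cluster used is opened once (1 press); every product costs
    one selection press."""
    n_clusters = min(n_clusters, n_products)
    return 2 * n_clusters + (n_products - n_clusters)

print(max_presses(10))  # 20 presses (one cluster per product)
print(max_presses(4))   # 2*4 + (10 - 4) = 14 presses
```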
Friday’s exercise – part 2 • Estimate the minimum, maximum and expected number of button/key presses for the user to select their desired products • Expected • Assume no exploring or changes of mind • Very complex calculation because of the 10* limit!!! • Need to map all of the possible combinations totalling 10* and then calculate their probabilities… • Need a computer program for this
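A brute-force sketch of that program, under two loud assumptions: the split of the 62 products across star ratings below is hypothetical (only the 62-product and 107-star totals come from the exercise), and every distinct 10-star basket is taken to be equally likely.

```python
from itertools import product as cartesian
from math import comb

# Hypothetical catalogue: how many products carry each star rating.
# Only the totals (62 products, 107 stars) come from the exercise;
# this particular split is an assumption.
CATALOGUE = {5: 2, 4: 3, 3: 8, 2: 12, 1: 37}
TARGET = 10   # the 10-star budget

total_weight = weighted_products = 0
ranges = [range(min(n, TARGET // s) + 1) for s, n in CATALOGUE.items()]
for counts in cartesian(*ranges):
    if sum(c * s for c, s in zip(counts, CATALOGUE)) != TARGET:
        continue
    # Number of distinct baskets with this mix of ratings;
    # assume every distinct basket is equally likely.
    weight = 1
    for c, n in zip(counts, CATALOGUE.values()):
        weight *= comb(n, c)
    total_weight += weight
    weighted_products += weight * sum(counts)

# Expected basket size = expected presses for a simple list
print(weighted_products / total_weight)
```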
Friday’s exercise – part 2 • Estimate the minimum, maximum and expected number of button/key presses for the user to select their desired products • Average • 107 stars / 62 products = 1.7258 stars / product • 10 stars => 5.794 products on average • 5.794 presses (on average) for a simple list
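The same average, reproduced as a quick check:

```python
stars_per_product = 107 / 62      # = 1.7258 stars per product
print(10 / stars_per_product)     # = 5.794 products, i.e. list presses
```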
Friday’s exercise – part 3 • Sketch at least one other layout for the 62 products • Suggestions: Simple list, menus, product families, etc. • Compare and discuss the merits of your original design and the alternative one sketched here
Layout options • All groups have tried to cluster products • Many groups have included pictures • Is this necessarily good? Tesco.com vs. tesco.com/access It depends on your selected personas!!!
Friday’s exercise – part 3 • Q: How would your answer differ for 5 products and for 500 products? • Button press calculations suggest a single long list works best for 62 products • Also works best for 5 products • Would not work well for 500 products • Too long to read
The need for critical thinking… “There are three kinds of lies: lies, damned lies, and statistics.” – Benjamin Disraeli • Before we look at methods of usability assessment… • Need to consider what we are doing • …and how we are doing it • See “Stable boy” example from last week • Bad experimental design leads to bad data and wrong conclusions
How can we get “bad” data? Examples: • Not using a balanced experimental design • Not using the right participants • Asking the wrong questions
Critical thinking exercise – 1 Read the factsheet on DHMO • Q – Do you support a ban on DHMO? Now read the opposing factsheet • Q – Do you support a ban on DHMO?
Critical thinking exercise – 1 • What is DHMO? • Dihydrogen monoxide • Hydrogen = H • Dihydrogen = H2 • Monoxide = O • Dihydrogen monoxide = H2O
How can we get “bad” data? Examples: • Not using a balanced experimental design • Not using the right participants • Asking the wrong questions • Asking the right questions in the wrong way http://www.youtube.com/watch?v=2yhN1IDLQjo
Critical thinking - summary • Never take results at face value • Always ask how the data was collected and from whom • Then ask if you believe a sound methodology was used • Have the results been interpreted appropriately? • Have any obvious counter-theories been addressed?
Critical thinking in practice – Speed cameras Wigantoday.net reports: Speed cameras cut road fatalities • Published Date: 09 February 2009 “Speed cameras have helped reduce road deaths in Wigan by half, police have revealed. Six people died on the borough's roads over the past 12 months, compared with a dozen between February 2006 and January 2007. Countywide, say police, the death toll has also been slashed over the past five years.” Q – Do you believe these claims?
Speed cameras – the proof! • The UK has the highest concentration of speed cameras per mile of highway in the world • Reductions in road deaths are often attributed to speed cameras • The European Transport Safety Council’s first road safety report states that UK deaths decreased by 7% in recent years • Speed camera numbers have increased significantly over the same period: => the reduction is due to increased use of cameras
Speed cameras – the counter arguments • In the same time period there was a 25% drop in Sweden and the Netherlands and a 35% drop in France • … and no increase in speed camera use • 5 other countries are safer to drive in: the Netherlands, Sweden, Switzerland, Norway and Malta • …and minimal use of cameras
Speed cameras – contributing factors • Cars are getting safer • Cameras are often used as a cost-cutting tool • Reduced police presence • Cannot deal with unregistered drivers • Differences in driver behaviour
Speed cameras – local data • “But the council put that speed camera in where there were 3 accidents last year and there have been none this year. Surely that shows that it is working!” • No – not necessarily • If you assume that accidents are randomly distributed, any reduction could be pure chance • “Regression to the mean” • Any extreme score, high or low, at one point in time will probably be less extreme the next time it is measured, for purely statistical reasons • If you are at the top of a list of accident hotspots, there is only one way to go, and that is down
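A small simulation of the regression-to-the-mean point (all numbers here are hypothetical): every site shares the same underlying accident rate, we pick out the worst site after year one, and it "improves" the next year almost every time even though nothing changed.

```python
import math
import random

random.seed(1)
SITES, MEAN_RATE, RUNS = 100, 3.0, 2000   # hypothetical numbers

def poisson(lam):
    """Sample a Poisson(lam) count (Knuth's algorithm)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

improved = 0
for _ in range(RUNS):
    worst = max(poisson(MEAN_RATE) for _ in range(SITES))  # "hotspot"
    if poisson(MEAN_RATE) < worst:   # same rate: nothing actually changed
        improved += 1

print(f"Hotspot 'improved' in {100 * improved / RUNS:.0f}% of runs")
```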
Simple list is best? • Button press calculation is very simplistic • No allowance for: • Effects of download time • User decision time • User uncertainty, etc. • We will look at this further later in the course
Better methods of evaluating interfaces • “Button press”-type calculations are too simplistic • Need more comprehensive methods of evaluation • Most common: user observation / user trials / user testing* • However, we also need to be fast and fit in with the iterative development cycle • cf. the inclusive design knowledge loop * Note: try not to use this name… The “t” word is generally best avoided
Q: Why not user observation sessions? A: Users are not always available A: User observation sessions are resource-intensive • Time to find users • Time to find location • Time to set up location • Time to brief users • Time for sessions themselves • Time for debrief • Time for data analysis
So they invented “discount usability”… One interpretation: • How to do usability without users… Can you think of methods of testing a product that do not involve users? • User models/avatars • Self-assessment • Simulation • Expert assessment
User models/avatars Method: • Use a simulated model of the user to test the product design Advantages: • Great for ergonomic issues Disadvantages: • Hard to correct for user cognitive effects, deviations in behavioural patterns, etc.
Self-assessment Method: • Design team tests its own design Advantages: • Very fast • Very cheap • Can be done at any time Disadvantages: • Highly unreliable • Results extremely variable If you can find the problem with a simple test, why did you design it that way in the first place?
Simulation Method: • Assessor physically recreates the context of use and user characteristics Advantages: • More reliable than self-assessment • More repeatable than self-assessment • Can be done at any time, quickly and cheaply Disadvantages: • Only as good as the simulation • Recreates “symptoms” not “causes”
Discount (DIY) simulation • i.e. discount discount usability
Expert assessment Method: • Find and employ an appropriate “expert” to examine the usability of your product Advantages: • Perceived to be cheap (no need to acquire expensive skills in-house) • Can be very cost-effective Disadvantages: • How do you know you have the “right” expert? • Depends on the availability of the expert
Adding rigour to assessments • Need frameworks for ensuring that testing is as reliable, repeatable and robust as possible Examples: • Performance estimates • Checklists
Performance estimates Method: • Estimate user performance based on the critical path (typically) + known user performance parameters Advantages: • Great for verifying the underlying information architecture Disadvantages: • As discussed with the PIP exercise
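One widely used framework of this kind is the Keystroke-Level Model (KLM, Card, Moran and Newell); the operator times below are the standard published averages, while the task breakdown itself is an assumption for illustration.

```python
# Standard Keystroke-Level Model operator times (seconds).
KLM = {"K": 0.2,    # keystroke or button press
       "P": 1.1,    # point at a target with the mouse
       "H": 0.4,    # home hands between keyboard and mouse
       "M": 1.35}   # mental preparation

def estimate(critical_path):
    """Sum operator times along the task's critical path."""
    return sum(KLM[op] for op in critical_path)

# Assumed path for "pick one product from a cluster":
# think, point at category, click, think, point at product, click.
print(estimate("MPKMPK"))   # = 5.3 seconds
```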
Checklists Method: • Use a pre-prepared list of questions and ensure that each item is met Advantages: • Fast • Can be completed by a non-expert (maybe!) Disadvantages: • Culture of “minimum compliance” • Only as good as the list
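A checklist reduces naturally to data plus a loop. A toy sketch (the questions are illustrative, not from any published checklist); note how the pass/fail return invites exactly the "minimum compliance" culture the slide warns about.

```python
CHECKLIST = [
    "Does every page offer a visible way back?",
    "Is the running total always on screen?",
    "Can any product be selected in two presses or fewer?",
]

def run_checklist(answers):
    """Report unmet items; answers maps each question to True/False."""
    unmet = [q for q in CHECKLIST if not answers.get(q, False)]
    for q in unmet:
        print("UNMET:", q)
    return not unmet    # passes only on full compliance

run_checklist({CHECKLIST[0]: True, CHECKLIST[1]: False})
```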
Other methods of discount usability • Another interpretation: • Discount usability methods are about simplifying the methods of data collection Examples: • Low-fidelity prototypes • Scenarios • Thinking aloud usability tests • Heuristic evaluation
Low fidelity prototypes • Example: paper prototypes from last week • [Sketch of a paper prototype screen: Scroll up / Scroll down buttons, a Select button, a category list (Category 1, Category 2, Category 3, …), and Help, <Back and Quit buttons]
Low fidelity prototypes – Why use them? • Discount usability is good for checking “obvious” problems • Best to identify them early • …and fix them early • …then test again… • …and fix again… • …and test again… • [Figure: case studies of design iterations]
Scenarios • Scenarios are “extreme” prototyping • They reduce level of functionality and number of features to a bare minimum • As a result, they are cheap to design and implement • … but can only simulate the UI as long as the user follows the chosen path Source: http://www.useit.com/papers/guerilla_hci.html