
Cost-Effective Usability Evaluation






Presentation Transcript


  1. Cost-Effective Usability Evaluation Mark W. Newman, School of Information CHCR Seminar Mar. 15, 2010

  2. Outline • What is usability? • Why evaluate usability? • When should I evaluate? • How should I evaluate?

  3. What is Usability? “[Usability refers to] the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” - ISO 9241-11

  4. What’s Not Usability? The “User Experience Honeycomb,” Peter Morville http://semanticstudios.com/publications/semantics/000029.php

  5. Why Evaluate? • Can users accomplish key tasks using the system? • Where do problems occur? • What changes should I make? • Your system will be evaluated whether you like it or not!

  6. When Should I Evaluate? • Early and often! • Adopt an Iterative User-Centered Design approach

  7. User-Centered Design • “a design philosophy and a process in which the needs, wants, and limitations of end users … are given extensive attention at each stage of the design process.” [User-Centered Design, Wikipedia]

  8. Know Thy User • Who are the users? • What are the users’ tasks and goals? • What are the users’ • expectations? • skills? • limitations? • relevant prior experiences? • Key insight: it’s not you!

  9. Iterative Design [diagram contrasting Traditional Software Development with Iterative Design]

  10. Why Iterate? Traditional Software Development: The “Waterfall” method

  11. Why Iterate? Traditional Software Development: The “Waterfall” method. Trying to understand users’ needs happens only at the start, and assessing whether the system meets requirements only at the end.

  12. Iterative User-Centered Design

  13. Iterative User-Centered Design: build lots of prototypes

  14. Iterative User-Centered Design, from the users’ perspective

  15. Iterative Design. Formative evaluation: the purpose is to impact design

  16. Iterative Design Lifecycle: Design → Build → Evaluate, cycling over time

  17. Iterative Design Lifecycle: with each Design → Build → Evaluate cycle, the system becomes more refined and the tests become more exhaustive

  18. How Should I Evaluate? • Usability Testing • Usability Inspection (a.k.a. Discount Evaluation) • Heuristic Evaluation • (Cognitive Walkthrough)

  19. Usability Testing

  20. What is a Usability Test? • Real (or representative) users perform specific tasks • User behavior is observed and captured • User reaction/feedback is elicited and captured • Results are analyzed • Task completion, time, errors • Critical incidents: where something notable happened • User perceptions and feedback • Results are reported • What went wrong (and right)? • What do we need to fix?
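
The analysis step can be lightweight. Below is a minimal sketch (the data format, names, and numbers are hypothetical, not from the talk) of tabulating per-session records into completion rate, time on task, and error counts:

```python
# Minimal sketch (hypothetical data format): summarizing usability-test sessions.
from statistics import mean

# One record per (participant, task): completed?, seconds taken, error count.
sessions = [
    {"user": "P1", "task": "enroll", "completed": True,  "seconds": 142, "errors": 1},
    {"user": "P2", "task": "enroll", "completed": False, "seconds": 300, "errors": 4},
    {"user": "P3", "task": "enroll", "completed": True,  "seconds": 98,  "errors": 0},
]

completion_rate = mean(1 if s["completed"] else 0 for s in sessions)
mean_time = mean(s["seconds"] for s in sessions)
total_errors = sum(s["errors"] for s in sessions)

print(f"Completion rate: {completion_rate:.0%}")   # e.g. 67%
print(f"Mean time on task: {mean_time:.0f}s")      # e.g. 180s
print(f"Total errors observed: {total_errors}")    # e.g. 5
```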

  21. Planning a Usability Test • Define target population • Recruit and screen • Develop script: intro, tasks, questionnaires, debrief • Stabilize prototype, develop test data • Develop data collection plan: recording, logging • Incentives

  22. Planning a Usability Test: What really matters. Recruit and screen: the main criterion is that your participants will make the same mistakes your users would make.

  23. Planning a Usability Test: What really matters. Develop the script and tasks: the tasks should be the things the system really needs to support.

  24. Planning a Usability Test: What really matters. Questionnaires and debrief: what really happened, according to the user?

  25. Planning a Usability Test: What really matters. Develop a data collection plan (recording, logging): are you sure you didn’t miss anything important? Do you have the “evidence” you’ll need?

  26. Do I need a fancy lab?

  27. Do I need a fancy lab? • What you get: • User isolation • Better recording equipment • More space for observers

  28. A more reasonable setup

  29. A more reasonable setup • What you probably need • Moderator + Note taker • Screen capture & audio recording • Critical incident logging
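
Critical incident logging needs no special tooling either; what matters is that each note carries a timestamp that can be lined up with the screen capture and audio recording afterward. A minimal sketch, with hypothetical file and function names:

```python
# Minimal critical-incident logger (hypothetical): timestamps each note so it
# can be lined up with the screen capture and audio recording afterward.
import csv
import time

def log_incidents(path="incidents.csv"):
    """Prompt for incident notes; write each with seconds elapsed since start."""
    start = time.time()
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "note"])
        print("Type one note per incident; empty line to stop.")
        while True:
            note = input("> ").strip()
            if not note:
                break
            writer.writerow([round(time.time() - start, 1), note])

# Start this when the recording starts; later, elapsed_s lets you jump the
# recording to each incident (e.g. "U03 can't find Advanced Search").
# log_incidents()
```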

  30. Winter 2008: Engage Evaluation

  31. Finding 1: Navigation is difficult • Problems: • Text heaviness • Links to outside pages • Recommendation: • Reduce text to minimum • Provide clear visual cues • Priority: High • Time span: Short/medium-range U04: “If you keep rerouting people through various links, they are not going to read. They are going to get bored. You know very few people are committed to signing up for any research study. They are going to think, why [the hell] do I have to do this and leave.”

  32. Finding 2: Great potential for search • Problems: • Advanced search option is hidden • Recommendation: • Rename “Keyword search” to “Advanced search” • Rename other searches to “Browse studies by…” • Priority: High • Time span: Short-term U03: “If I could search it for what’s required of me...in terms of procedures and time frames and clinic visits, compensation...”

  33. Finding 3: Users are confused by the Registry • Problems: • Long enrollment process • Unclear organization • Purpose unclear • Recommendation: • Simplify enrollment process • Priority: Medium • Time span: Long-term

  34. Finding 3, cont’d U04: “The content is not well organized for the purpose of reading. Some of the content is not helpful. The medical terms are not helpful to figure out if you qualify. And it doesn’t say who benefits or why it’s needed...It would be nice if the title were explained like an abstract.”

  35. How many users should I test? Sweet spot: 5-7 users. Problems found by n users ≈ N(1 - (1 - L)^n), where N is the total number of usability problems and L is the proportion found by a single user (~31%). J. Nielsen, Why You Only Need to Test With 5 Users, Mar. 2000. http://www.useit.com/alertbox/20000319.html
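
Plugging Nielsen’s numbers into the formula makes the diminishing returns concrete; the short sketch below (assuming N = 100 total problems purely for illustration) tabulates the curve:

```python
# Nielsen's model: problems found by n users = N * (1 - (1 - L)**n),
# with L ~ 0.31 (share of problems a single user uncovers).
N, L = 100, 0.31   # N = 100 total problems is an assumption for illustration

for n in range(1, 11):
    found = N * (1 - (1 - L) ** n)
    print(f"{n:2d} users: ~{found:.0f} problems found")

# Output climbs steeply, then flattens: ~31 at n=1, ~84 at n=5, ~98 at n=10,
# which is why 5-7 users per iteration is the sweet spot.
```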

  36. How many users? (Iterative Design) J. Nielsen, Why You Only Need to Test With 5 Users, Mar. 2000. http://www.useit.com/alertbox/20000319.html The most striking truth of the curve is that zero users give zero insights. One user yields almost a third (31%) of all there is to know about the usability of the design. The second user does some of the same things as the first and adds some new insight (19%), but not nearly as much as the first user did. The nth user does many things you have already observed and generates only a small amount of new data. At some point there is no real need to keep observing the same thing over and over, and you will be very motivated to go back to the drawing board and redesign the site to eliminate the usability problems.

  37. Quick and Dirty: The Hallway Usability Test • Grab a coworker • Ask them to perform a couple of key tasks • Ask them what they thought • Write it down before you forget

  38. Close-Coupled Usability Testing B. Tognazzini, Close-coupled Usability Testing, June 1998. http://www.asktog.com/columns/001closecoupledtesting.html “Run a test subject through the product, figure out what's wrong, change it, and repeat until everything works. Using this technique, I've gone through seven design iterations in three-and-a-half days, testing in the morning, changing the prototype at noon, testing in the afternoon, and making more elaborate changes at night.” “Particularly in the early stages, you are not looking for subtleties. You're lucky if a user can actually get from one end of the application to the other. Problems that may have escaped your attention before a user tried the product now are glaringly obvious once the first user sits down. So fix them! And then move on to the next layer of problems.”

  39. Discount Usability • Evaluate usability without users • Quicker to plan & execute • Turnaround in a few days • Finds fewer and different issues than usability testing

  40. Heuristic Evaluation • A “checklist” inspection method • Inspect each element of a UI to determine whether it complies with known best practices

  41. The Checklist: Nielsen’s Usability Heuristics • Visibility of system status • Match between system and the real world • User control and freedom • Consistency and standards • Error prevention • Recognition rather than recall • Flexibility and efficiency of use • Aesthetic and minimalist design • Help users recognize, diagnose, and recover from errors • Help and documentation

  42. Nielsen’s Usability Heuristics: Visibility of system status. The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

  43. Nielsen’s Usability Heuristics: Match between system and the real world. The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

  44. Nielsen’s Usability Heuristics: User control and freedom. Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

  45. Nielsen’s Usability Heuristics: Consistency and standards. Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

  46. Example Findings: Engage Staff UI Finding 2: Horizontal tags and vertical list do not match Heuristics Violated: Consistency and Standards, Minimalist Design

  47. Example Findings: Engage Staff UI Finding 3: No way to get from “Post/Edit Studies” to “Registry” Heuristic Violated: User Control and Freedom

  48. How to Do a Heuristic Evaluation • Recruit 3-5 evaluators • Have each • Perform an individual evaluation • Assess the severity of findings • Aggregate and prioritize results • Generate recommendations
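
The aggregation step can be mechanical once evaluators’ findings are matched up. A hypothetical sketch (the finding names are illustrative, loosely echoing the Engage examples above), using Nielsen’s 0-4 severity scale:

```python
# Hypothetical sketch: merge findings from several evaluators and rank by
# average severity (0 = not a problem ... 4 = usability catastrophe).
from collections import defaultdict
from statistics import mean

# Each evaluator's (finding, severity) pairs, after deduplicating wording.
evaluations = {
    "E1": [("Advanced search hidden", 3), ("Tags/list mismatch", 2)],
    "E2": [("Advanced search hidden", 4)],
    "E3": [("Tags/list mismatch", 2), ("No path from Post/Edit to Registry", 3)],
}

severities = defaultdict(list)
for findings in evaluations.values():
    for finding, severity in findings:
        severities[finding].append(severity)

# Prioritize: highest average severity first, then by how many evaluators saw it.
ranked = sorted(severities.items(),
                key=lambda kv: (mean(kv[1]), len(kv[1])), reverse=True)
for finding, scores in ranked:
    print(f"{finding}: severity {mean(scores):.1f}, seen by {len(scores)} of 3")
```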

  49. Why multiple evaluators? No single evaluator finds every problem; good evaluators find both easy and hard problems.

  50. How many evaluators? One evaluator finds roughly 35% of problems; five evaluators find roughly 75%. Beyond five evaluators, the benefit/cost ratio drops. Note: these figures are for one specific case.
