
Chapter 4 – Expert Reviews, Usability Testing, Surveys, and Continuing Assessments



  1. Chapter 4 – Expert Reviews, Usability Testing, Surveys, and Continuing Assessments

  2. 4.1 Introduction • Designers may fail to evaluate adequately • Experienced designers know that extensive testing is necessary • Many factors influence the evaluation plan • Evaluations might range from an ambitious two-year effort down to a few days • The range of costs might run from 10% down to 1% of a project budget • Customers increasingly expect usability

  3. 4.2 Expert Reviews • Formal expert reviews have proven to be effective • Experts may be available on staff or as consultants • Expert reviews may take one-half day to one week, plus training time • There are a variety of expert review methods to choose from • Expert reviews can be scheduled at several points in the development process • Try not to rely on just one expert

  4. Expert Review Methods • Heuristic evaluation • Guidelines review • Consistency inspection • Cognitive walkthrough • Formal usability inspection

  5. Using Expert Reviews • Danger: experts may not have an adequate understanding of the task domain or user communities • Danger: experts may give conflicting opinions • Expert reviewers are not typical users and may not relate completely • It helps to choose experts who are familiar with the project and the organization • Beneficial to do usability testing as well

  6. 4.3 Usability Testing and Laboratories • There is increasing attention to usability testing • Has benefits beyond usability • Not controlled experiments

  7. Usability Laboratories • Might be set up to allow observation via one-way mirror • Staffed by expert in usability testing and interface design • IBM was an early leader • Consultants available

  8. Usability Testing Process • Plan ahead • Pilot test • Choice of participants is important • Other factors to be controlled • Participants should be kept informed and respected • Think Aloud protocols useful • Videotaping is useful • Test can be repeated after significant improvements

  9. Field Tests • Real environments instead of labs • Still useful to log • Beta testing is field testing

  10. Paper Prototypes • Obtain very early feedback, inexpensively • Person plays the role of the computer, displaying screens • Allows capturing difficulties with wording, layout, and sequences involved in tasks

  11. Competitive Usability Testing • Closer to a controlled experiment • Compare the interface to a previous version or a competitor • Ensure tasks are parallel • "Within subjects" design recommended • Counterbalance the order of conditions
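
The counterbalancing bullet above can be sketched in code. This is an illustrative scheme (the function name and condition labels are my own, not from the chapter): in a within-subjects comparison, half the participants should try each interface first so that learning and fatigue effects do not favor one condition.

```python
from itertools import cycle

def assign_orders(participants, conditions=("our_ui", "competitor_ui")):
    """Counterbalance condition order across participants in a
    within-subjects design: alternating participants get the two
    interfaces in opposite orders."""
    orders = cycle([conditions, tuple(reversed(conditions))])
    return {p: order for p, order in zip(participants, orders)}

schedule = assign_orders(["P1", "P2", "P3", "P4"])
# P1 and P3 test our_ui first; P2 and P4 test competitor_ui first.
```

With more than two conditions, a Latin square would be used instead of simple alternation.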

  12. Issues with Usability Testing • Emphasizes first-time usage • Has limited coverage of the interface features. • Also use expert reviews

  13. Heuristic Evaluation and Discount Usability Engineering Taken from the writings of Jakob Nielsen – inventor of both

  14. Heuristic Evaluation • Context – part of iterative design • Goal – find usability problems • Who – small set of evaluators • How – study interface in detail, compare to small set of principles

  15. Ten Usability Heuristics • Visibility of system status • Match between system and the real world • User control and freedom • Consistency and standards • Error prevention • Recognition rather than recall • Flexibility and efficiency of use • Aesthetic and minimalist design • Help users recognize, diagnose, and recover from errors • Help and documentation

  16. How to Conduct a Heuristic Evaluation • More than one evaluator is needed to be effective • Each evaluator inspects the interface independently • The general heuristics may be supplemented with category-specific ones • Results can be reported orally or in writing • Each evaluator spends 1–2 hours with the interface • Each evaluator goes through the interface more than once • Evaluators may follow typical usage scenarios • The interface can be a paper mockup

  17. Different Evaluators Find Different Problems

  18. Number of Evaluators
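
The relationship behind this slide comes from Nielsen and Landauer's model: with a single-evaluator find rate of roughly lambda = 0.31 (Nielsen's reported average; it varies by project), the expected fraction of problems found by i independent evaluators is 1 − (1 − lambda)^i, which is why around five evaluators already uncover most problems.

```python
def fraction_found(i, problem_find_rate=0.31):
    """Expected fraction of usability problems found by i independent
    evaluators, per Nielsen and Landauer's model:
        Found(i) = 1 - (1 - lambda)**i
    The default lambda = 0.31 is Nielsen's reported average; treat it
    as an assumption that varies by project and evaluator skill."""
    return 1 - (1 - problem_find_rate) ** i

for i in (1, 3, 5):
    print(i, round(fraction_found(i), 2))
# 1 -> 0.31, 3 -> 0.67, 5 -> 0.84: diminishing returns past ~5 evaluators
```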

  19. Heuristic Evaluation Results • List of usability problems • With principle violated • With severity • NOT fixes • May have debriefing later to aid fixing • Discount usability

  20. Usability Problem Location • Single Location • Two/Several Locations • Overall Structure • Something Missing

  21. Severity • Helps focus repair efforts • Helps judge system readiness • Factors in severity: • Frequency • Impact • Persistence • Market impact • Scale severity to a number • Severity rating may wait until after the evaluation sessions
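
"Scale severity to a number" can be illustrated as follows. The averaging scheme here is my own assumption for illustration; what the method actually prescribes is Nielsen's 0 (not a usability problem) to 4 (usability catastrophe) scale, informed by frequency, impact, and persistence.

```python
def severity_rating(frequency, impact, persistence):
    """Collapse three factor scores (each 0.0-1.0) into a 0-4 severity
    rating in the style of Nielsen's scale. The equal-weight average
    is an illustrative assumption, not part of the method."""
    score = (frequency + impact + persistence) / 3  # 0.0 .. 1.0
    return round(score * 4)                          # map onto 0..4

# A problem hit often, with high impact, that users cannot work around:
severity_rating(0.9, 1.0, 0.8)  # -> 4 (usability catastrophe)
```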

  22. H.E. Complementary w/ Usability Testing • Each will find problems that the other will miss • H.E. Weakness – finding domain specific problems • Don’t H.E. and Usability Test same prototype version

  23. Discount Usability Engineering • “It is not necessary to change the fundamental way that projects are planned or managed in order to derive substantial benefits from usability inspection” • 6% of project budget on usability • 18% of respondents used usability evaluation methods the way they were taught

  24. More Discount Usability Engineering • The projected cost of focusing on usability can be reduced • "Insisting on only the best methods may result in having no methods used at all" • 35% of respondents used 3–6 users for usability testing • Nielsen and others suggest a 50:1 ROI

  25. Elements of Discount Usability Engineering • Scenarios • Simplified Thinking Aloud • Heuristic Evaluation

  26. Scenarios • Take prototyping to extreme – reduce functionality AND number of features • Small, can afford to change frequently • Get quick and frequent feedback from users • Compatible with interface design methods

  27. Simplified Thinking Aloud • Bring in some users, give them tasks, have them think out loud • Fewer users in user testing

  28. Heuristic Evaluation • Apply a small set of heuristics rather than an exhaustive list of principles

  29. Stages of Views of Usability in Organizations • Usability does not matter. • Usability is important, but good interfaces can surely be designed by the regular development staff as part of their general system design. • The desire to have the interface blessed by the magic wand of a usability engineer. • GUI/WWW panic strikes, causing a sudden desire to learn about user interface issues. • Discount usability engineering sporadically used. • Discount usability engineering systematically used. • Usability group and/or usability lab founded. • Usability permeates lifecycle.

  30. End Nielsen insert for Chapt 4

  31. 4.4 Surveys • Users are familiar with surveys • Surveys can provide lots of responses • Surveys can be inexpensive • Survey results can often be quantified • Surveys can be complementary to usability tests and expert reviews.

  32. Successful Use of Surveys • Clear Goals • Preparation • Don’t forget to gather background info • Other goals concerning learning about the user … • Goals concerning the interface … • reasons for not using an interface • familiarity with features • their feeling state after using an interface

  33. Online Surveys • Online surveys cut costs • Online surveys may boost response rates • Online delivery may bias the sample toward certain kinds of users

  34. Simple Survey • Use a simple scale • Easy for users • Directly quantifiable for use in statistics • Ask a few questions addressing the goals • Few questions lead to a higher response rate • Low cost and quantifiable results make the survey repeatable

  35. More Scaling • Some surveys use bipolar alternatives • E.g. Error messages were • Hostile 1 2 3 4 5 6 7 Friendly
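
Quantifying such a bipolar (semantic-differential) scale is straightforward; a minimal sketch, where the function name and the "leans toward 4 as neutral" convention are my own assumptions:

```python
from statistics import mean

def summarize_bipolar(responses, low_anchor="Hostile", high_anchor="Friendly"):
    """Summarize 1-7 bipolar scale ratings: return the mean and which
    anchor the average leans toward (4 is the neutral midpoint)."""
    avg = mean(responses)
    lean = high_anchor if avg > 4 else low_anchor if avg < 4 else "neutral"
    return avg, lean

avg, lean = summarize_bipolar([6, 7, 5, 6, 4])
# avg = 5.6, leaning toward "Friendly"
```

The same summary works for simple agreement scales; only the anchor labels change.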

  36. Not So Simple Survey • Shneiderman’s Questionnaire for User Interface Satisfaction (QUIS) • Detailed info – gives specific feedback on many things • Response rate will be lower • Response bias to those highly motivated to help, very patient, and/or not that busy • Short form available for less patient • IBM’s Post-Study Usability Questionnaire • Software Usability Measurement Inventory

  37. 4.5 Acceptance Tests • Large (particularly custom) software projects have "acceptance tests" • Usability needs criteria more specific than "user friendly": • Time to learn specific functions • Speed of task performance • Rate of errors by users • Human retention of commands over time • Subjective user satisfaction • Multiple such tests for different components and different user communities • After acceptance, field testing before full distribution • The goal of all usability evaluation is to improve the interface in the prerelease phase, when change is relatively easy

  38. 4.6 Evaluation During Active Use • Must continue to evaluate usability under real use • Improvements are possible and are worth pursuing.

  39. 4.6.1 Interviews and Focus Groups • Interviews with individual users • After individual discussions, group discussions

  40. 4.6.2 Continuous User-Performance Data Logging • Software should enable collecting data about system usage • Logged data provides guidance • E.g. Most frequent error message • E.g. Most frequently used capabilities • Pay attention to user’s privacy

  41. 4.6.3 Online or Telephone Consultants • Online or telephone consultants provide assistance to users • Helpful to users when problems arise. • Consultants can provide info about problems users are having

  42. 4.6.4 Online Suggestion Box • Provide facility to allow users to send messages to the maintainers or designers. • Easy access encourages users to make productive comments

  43. 4.6.5 Online Bulletin Board • Electronic bulletin boards (newsgroups) permit posting of open messages and questions • New items can be added by anyone, but usually someone monitors the bulletin board

  44. 4.6.6 User Newsletters and Conferences • Newsletters can help users, include requests for assistance, promote user satisfaction • Printed newsletters can be carried away from the workstation and have respectability. • Online newsletters are less expensive and more rapidly disseminated • Conferences allow workers to exchange experiences with colleagues • Obtaining feedback in these ways allows gauging attitudes and gathering suggestions (as well as being good PR)

  45. 4.7 Controlled Psychologically Oriented Experiments • Scientific and engineering progress is aided by precise measurement • Interface designs will improve if their quality can be quantified • Scientific method as applied to HCI: • Deal with a practical problem, but within a theoretical framework • State a clear and testable hypothesis • Identify a small number of independent variables • Carefully choose the dependent variables • Carefully select subjects and assign them to groups • Control for biasing factors • Apply statistical methods to data analysis • Resolve the practical problem, refine the theory, and give advice to future researchers • Controlled experiments are useful in fine-tuning the interface
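
As a concrete instance of "apply statistical methods to data analysis": comparing task-completion times under two interface variants across independent groups can use Welch's t statistic. The data below are made up, and the helper is a sketch using only the standard library; a real analysis would also look the statistic up against a t distribution for significance.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (e.g. task times,
    the dependent variable, under two interface designs). Does not
    assume equal variances; compare against a t distribution to judge
    significance."""
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = (va / len(sample_a) + vb / len(sample_b)) ** 0.5
    return (ma - mb) / se

# Task-completion times in seconds for two menu layouts (made-up data):
t = welch_t([41, 38, 44, 40, 43], [52, 49, 55, 50, 51])
# t is strongly negative: layout A's times are reliably shorter here.
```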

  46. End Chapter 4
