
CHAPTER 5: Evaluation and the User Experience

Designing the User Interface: Strategies for Effective Human-Computer Interaction Sixth Edition Ben Shneiderman, Catherine Plaisant, Maxine S. Cohen, Steven M. Jacobs, and Niklas Elmqvist in collaboration with Nicholas Diakopoulos presented by Simon, Hudson, Mounica, Neil, Brianna.



  1. Title slide: Designing the User Interface: Strategies for Effective Human-Computer Interaction, Sixth Edition. CHAPTER 5: Evaluation and the User Experience

  2. Evaluation and the User Experience Topics • Introduction • Expert Reviews and Heuristics • Usability Testing and Laboratories • Survey Instruments • Acceptance Tests • Evaluation During Active Use and Beyond • Controlled Psychologically-Oriented Experiments

  3. Introduction • Designers can become so entranced with their creations that they may fail to evaluate them adequately • Experienced designers have attained the wisdom and humility to know that extensive testing is a necessity • Factors that affect when, where, and how evaluation is performed: • Stage of design (early, middle, late) • Novelty of project (well-defined vs. exploratory) • Number of expected users • Criticality of the interface (life-critical medical system vs. museum exhibit support) • Cost of the product and finances allocated for testing • Time available • Experience of the design and evaluation team

  4. Introduction (concluded) • Usability evaluators must broaden their methods and be open to non-empirical methods, such as user sketches, consideration of design alternatives, and ethnographic studies • Recommendations need to be based on observational findings • The design team needs to be involved with research on the drawbacks of the current system design • Usability testing has become an established and accepted part of the design process

  5. Expert Reviews and Heuristics • Formal expert reviews are more effective than informal demonstrations to customers or colleagues • Expert reviews can happen at any stage of the design process • An expert review can take from half a day to a week • Possible expert review methods: • Heuristic evaluation • Guidelines review • Consistency inspection • Cognitive walkthrough • Formal usability inspection

  6. Heuristic Evaluation • Experts critique interfaces against a list of design heuristics, such as: • The Eight Golden Rules • Playable heuristics • game usability • mobility heuristics • gameplay heuristics

  7. Fallout 3

  8. Guidelines Review • The interface is checked against organizational or other guidelines documents • Standardizes the essential features of interfaces • Takes time to work through the guidelines, depending on the interface • (Figure: "Do" and "Don't" examples)

  9. Consistency Inspection • Checking for consistency across a family of interfaces: • terminology • fonts • color schemes • layout • input and output formats

  10. Cognitive Walkthrough • Walking through the interface, looking at high-frequency tasks and rare critical tasks • Simulating a day in the life of the user • Assessing the user's cognitive and physical interaction while using the interface

  11. Questions for a Cognitive Walkthrough • Blackmon, Polson, et al., in their 2002 paper "Cognitive walkthrough for the Web," offer four questions to be asked during a cognitive walkthrough: • Will the user try to achieve the right outcome? • Will the user notice that the correct action is available to them? • Will the user associate the correct action with the outcome they expect to achieve? • If the correct action is performed, will the user see that progress is being made towards their intended outcome?
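The four questions above lend themselves to a simple per-step checklist. The sketch below is a hypothetical illustration (the slides contain no code; all names and data here are invented): it records yes/no answers to each question for every step of a task and flags the steps where any answer was "no".

```python
# Hypothetical checklist for a cognitive walkthrough: for each task step,
# the evaluator answers the four Blackmon/Polson questions with True/False.
QUESTIONS = (
    "Will the user try to achieve the right outcome?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the expected outcome?",
    "If the correct action is performed, will the user see progress?",
)

def problem_steps(walkthrough):
    """Return (step, failed-question) pairs where any answer was False."""
    problems = []
    for step, answers in walkthrough:
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                problems.append((step, question))
    return problems

# Invented example: walking through a two-step checkout task.
checkout = [
    ("Open the cart page", (True, True, True, True)),
    ("Find the 'apply coupon' field", (True, False, True, True)),
]
for step, question in problem_steps(checkout):
    print(f"{step}: {question}")
```

Steps with no flagged questions pass the walkthrough; each flagged pair points the design team at a specific step and a specific failure mode.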

  12. Formal Usability Inspection • Courtroom-style meeting with a moderator to present the interface and discuss its merits and weaknesses • Use guidelines documentation to form the report • Expert reviewers should be placed in a situation as similar as possible to the one that intended users will experience • Use paper mock-ups • Roles: • Moderator: Runs the meeting. Distributes and collects any materials. Schedules meetings. • Owner: Designer of the product to be inspected. Fixes the defects. • Recorder: Logs defects during the meeting. • Inspectors: Expert reviewers. Inspect the design and report any defects found.

  13. Expert Reviews and Heuristics (concluded) • Expert reviews can be scheduled at several points in the development process when experts are available and when the design team is ready for feedback • Different experts tend to find different problems in an interface, so 3-5 expert reviewers can be highly productive, as can complementary usability testing • Even experienced expert reviewers may have great difficulty knowing how first-time users will really behave.

  14. Usability Testing and Laboratories • The usability lab consists of two areas: the testing room and the observation room • The testing room is typically smaller and accommodates a small number of people • The observation room looks into the testing room, typically via a one-way mirror. The observation room is larger and can hold the usability testing facilitators with ample room to bring in others, such as the developers of the product being tested

  15. Usability Testing and Laboratories (continued) • Usability testing and laboratories emerged in the early 1980s • Grew out of marketing research • Traditional developers resisted due to time pressures and limited resources • Usability testing not only sped up many projects but also produced dramatic cost savings • The movement towards usability testing stimulated the construction of usability laboratories (IBM, Microsoft)

  16. Usability Testing and Laboratories (continued) • Perform a pilot test before the main test • Participants should be chosen to represent the intended user communities, with attention to: • background in computing and experience with the task • motivation, education, and ability with the natural language used in the interface • Make it clear: it is not the user who is being tested, it is the software

  17. Step-by-Step Usability Guide from http://usability.gov/

  18. Usability Testing and Laboratories (continued) • Methods of recording: • typing • mouse movement • video-taping (subtle is better) • eye-tracking • All recordings should be time-stamped for ease of review • Software such as Adobe Prelude Live Logger or Logsquare from Mangold is popular for data-logging
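The core of such data-logging is simply recording every event with a timestamp so the different recordings can be synchronized during review. The minimal sketch below is illustrative only: the class and field names are invented, and it does not represent the API of Adobe Prelude Live Logger or Logsquare.

```python
# Illustrative time-stamped session log: each keystroke, mouse, or
# eye-tracking event is stored with its offset from the session start,
# so video, input, and gaze recordings can be lined up afterwards.
import time

class SessionLog:
    def __init__(self):
        self.events = []                      # (offset_seconds, kind, detail)
        self.start = time.monotonic()

    def record(self, kind, detail):
        # monotonic() avoids jumps if the wall clock is adjusted mid-session.
        self.events.append((time.monotonic() - self.start, kind, detail))

    def of_kind(self, kind):
        return [e for e in self.events if e[1] == kind]

log = SessionLog()
log.record("key", "g")
log.record("mouse", "click (120, 48)")
log.record("key", "o")
print(len(log.of_kind("key")))                # number of keystroke events
```

Because every event carries the same time base, a reviewer can jump from a logged error directly to the matching moment in the video or eye-tracking recording.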

  19. Usability Testing and Laboratories (continued) • A special mobile camera tracks and records activities on a mobile device • Note that the camera is up and out of the way, still allowing the user to use their normal finger gestures to operate the device

  20. Usability Testing and Laboratories (continued) • This shows a picture of glasses worn for eye-tracking • This particular device tracks the participant’s eye movements when using a mobile device • Tobii is one of several manufacturers

  21. Usability Testing and Laboratories (continued) • Eye-tracking software is attached to the airline check-in kiosk • It allows the designer to collect data observing how the user “looks” at the screen • This helps determine if various interface elements (e.g. buttons) are difficult (or easy) to find

  22. Usability Testing and Laboratories (continued) • Stanford has created prototypes of glasses that 'auto-focus' on the user's target (for presbyopia) From https://news.stanford.edu/2019/06/28/smart-glasses-follow-eyes-focus-automatically/

  23. Usability Testing and Laboratories (continued) • Usability testing is human testing; what ethical guidelines need to be followed?

  24. Usability Testing and Laboratories (continued) • Participation should always be voluntary, and informed consent should be obtained • Professional ethics practice is to ask all subjects to read and sign a statement like this: • I have freely volunteered to participate in this experiment. • I have been informed in advance what my task(s) will be and what procedures will be followed. • I have been given the opportunity to ask questions, and have had my questions answered to my satisfaction. • I am aware that I have the right to withdraw consent and to discontinue participation at any time, without prejudice to my future treatment. • My signature below may be taken as affirmation of all the above statements; it was given prior to my participation in this study. • An Institutional Review Board (IRB) often governs the human-subjects testing process

  25. Usability Testing and Laboratories (continued) • Deception of users may be necessary to pursue certain research objectives • Under IRB rules, research using deceptive methods (a hidden 'alteration' to the signed waiver) requires that: • The research contains no more than minimal risk to the subjects • Any alteration will not adversely affect the rights and welfare of the subjects • The research could not practically be carried out without the alteration • If at all possible, the subjects receive a complete debriefing after the experiment

  26. Usability Testing and Laboratories (continued) • Example of deception in usability testing: The Wizard of Oz (the Oz Paradigm) • The user believes they are interacting with a system, when in actuality a trained person (the wizard) is operating the 'system' with a select number of tools • Useful for discovering how the user interacts with a 'speech'-based system

  27. Usability Testing and Laboratories (continued) • Videotaping participants performing tasks is often valuable for later review and for showing designers or managers the problems that users encounter • Take care not to interfere with participants • Invite users to think aloud (sometimes referred to as concurrent think-aloud) about what they are doing as they perform the task • Think-aloud and related techniques • Many variant forms of usability testing have been tried: • Paper mockups and prototyping • Discount usability testing • Competitive usability testing • A/B testing • Universal usability testing • Field tests and portable labs • Remote usability testing • Can-you-break-this tests • Usability test reports

  28. Usability Testing and Laboratories (continued) • Think-aloud and related techniques: • Users are invited to think aloud about what they are doing as they perform the task. • For example, the designer or observer of usability testing may hear comments such as "This webpage text is too small ... so I'm looking for something on the menus to make the text bigger ... maybe it's on the top in the icons ... I can't find it ... so I'll just carry on." • Spontaneous suggestions and comments for improvement are obtained • A related technique is called retrospective think-aloud: after completing a task, users are asked what they were thinking as they performed it. • The drawback is that users may not be able to wholly and accurately recall their thoughts after completing the task; however, this approach allows users to focus all their attention on the tasks they are performing and generates more accurate timings.

  29. Usability Testing and Laboratories (continued) • The spectrum of usability testing • Paper mockups and prototyping: Early usability studies to assess reactions to wording, layout, and sequencing. The test administrator plays the role of the computer by flipping the pages while asking a participant to carry out typical tasks. This informal testing is inexpensive, rapid, and productive.

  30. Usability Testing and Laboratories (continued) • Discount usability testing: This type of testing lowers the barriers for newcomers. Using only 3-6 test participants is controversial, because some serious problems may be missed unless testing is repeated. One resolution is to use discount usability testing as a formative evaluation (while designs are changing substantially) and more extensive usability testing as a summative evaluation (near the end of the design process). Good for small projects. • A/B testing: Testing with two groups of users, assigned to either the control group (no change) or the treatment group (with the change), to measure the differences.
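The analysis step of an A/B test can be as simple as comparing the two groups' success rates. The sketch below is an illustration with invented counts, not a prescribed method from the slides: it applies a standard two-proportion z-test to a control group and a treatment group.

```python
# A/B test analysis sketch: did the treatment group (new design) succeed at
# the task more often than the control group (old design)? The counts are
# made up for illustration.
from math import sqrt, erfc

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)       # pooled success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))                     # two-sided p-value
    return z, p_value

# 70/100 control users succeeded; 85/100 treatment users succeeded.
z, p = two_proportion_z(70, 100, 85, 100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these invented numbers the difference is statistically significant at the conventional 0.05 level; with only 3-6 participants per group (as in discount testing) it rarely would be, which is why A/B tests typically need larger samples.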

  31. Usability Testing and Laboratories (continued) • Competitive usability testing: Compares a new interface to previous versions or similar products. • Universal usability testing: Tests interfaces with highly diverse users, hardware, software platforms, and networks. When a wide range of international users is anticipated, such as for consumer electronics products, web-based information services, or e-government services, ambitious testing is necessary to clean up problems and thereby help ensure success.

  32. Usability Testing and Laboratories (continued) • Field tests and portable labs: This method puts new interfaces to work in realistic or more naturalistic environments in the field for a fixed trial period. The same tests can be repeated over longer periods for longitudinal testing. • Remote usability testing: Web-based testing that avoids bringing participants to the lab and makes it possible to have a larger number of participants from diverse backgrounds. Testing is done either synchronously or asynchronously.

  33. Usability Testing and Laboratories (concluded) • Can-you-break-this tests: Game designers pioneered the can-you-break-this approach to usability testing by providing energetic teenagers with the challenge of trying to beat new games. This destructive testing approach, in which users try to find fatal flaws in the system or otherwise destroy it, has been used in other types of projects as well and should be considered seriously. • Usability test reports: The U.S. National Institute of Standards and Technology (NIST) took a major step toward standardizing usability test reports in 1997 when it convened a group of software manufacturers and large purchasers, who worked for several years to produce the Common Industry Format (CIF) for summative usability testing results. The format describes the testing environment, tasks, participants, and results in a standard way so as to enable consumers to make comparisons.

  34. Survey Instruments • What is a survey? • A structured set of questions for finding out what you want to know • What is a survey instrument? • A tool for gathering the data that you need

  35. Survey Instruments • Types of surveys, by deployment method: • Online surveys • Paper surveys • Telephone surveys • One-to-one interviews (Image sources: questionpro.com, Wikipedia, Flickr, Pixabay) • Keys to successful surveys: • Clear goals in advance • Development of focused items that help attain the goals

  36. Survey Instruments - preparing • A survey should be prepared, reviewed, and tested before a large-scale survey is conducted • What could users be asked? Subjective impressions about specific aspects of the interface
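Subjective impressions are commonly collected on Likert-type rating scales and then summarized per item. The sketch below is purely illustrative (item wording and responses are invented, and it is not taken from any particular survey instrument): it computes the mean and spread of ratings on a 1-5 scale.

```python
# Summarizing subjective ratings of an interface, where each item was
# rated on a 1-5 scale (1 = strongly disagree, 5 = strongly agree).
# Items and responses below are invented sample data.
from statistics import mean, stdev

responses = {
    "The interface is easy to learn": [4, 5, 4, 3, 5],
    "Error messages are helpful":     [2, 3, 2, 1, 3],
}

for item, scores in responses.items():
    print(f"{item}: mean {mean(scores):.1f}, sd {stdev(scores):.2f}")
```

Per-item means quickly surface which aspects of the interface users rate poorly (here, the invented error-message item), while the standard deviation shows whether users agree with each other.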

  37. Survey Instruments (example survey form: "About this interface...")

  38. Acceptance Test - SDLC V-model (from https://www.tutorialspoint.com), pairing each development phase with its test level: • Requirement Analysis - Acceptance Testing • High-Level Design - System Testing • Low-Level Design - Integration Testing • Coding - Unit Testing

  39. Acceptance Test (example: a food-shopping website)

  40. Evaluation During Active Use and Beyond • Successful active use requires constant attention from dedicated managers, user-services personnel, and maintenance staff • Perfection is not attainable, but percentage improvements are possible • Interviews and focus group discussions • Interviews with individual users can be productive because the interviewer can pursue specific issues of concern • Group discussions are valuable to ascertain the universality of comments

  41. Evaluation During Active Use and Beyond (continued) • Interviews and focus groups (advantages): • Direct contact with users • Constructive suggestions • Can reveal problems otherwise hidden, or unexpected usage patterns • Interviews and focus groups (disadvantages): • Time-consuming • Costly • Only a small fraction of the user community is involved • Outspoken individuals can sway the group • Comments from others may be disregarded by the group

  42. Evaluation During Active Use and Beyond (continued) • Continuous user-performance data logging • The software architecture should make it easy for system managers to collect data about: • The patterns of system usage • Speed of user performance • Rate of errors • Frequency of request for online assistance • A major benefit is guidance to system maintainers in optimizing performance and reducing costs for all participants
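The metrics listed above fall out of a simple aggregation over logged events. The sketch below is illustrative only (the event record format and sample data are invented): it tallies completed tasks, errors per task, and help requests from a raw event log.

```python
# Aggregating continuous user-performance logs into the usage metrics
# named above. Each record is (user, event_kind, seconds_into_session);
# the sample data is invented.
from collections import Counter

events = [
    ("u1", "task_done", 40), ("u1", "error", 55), ("u1", "help", 60),
    ("u2", "task_done", 35), ("u2", "task_done", 90), ("u2", "error", 95),
]

counts = Counter(kind for _, kind, _ in events)
tasks = counts["task_done"]
print("tasks completed:", tasks)
print("errors per task:", counts["error"] / tasks)
print("help requests:", counts["help"])
```

The same aggregation run periodically over live logs gives system maintainers the trend data (rising error rates, frequent help requests) that guides optimization.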

  43. Evaluation During Active Use and Beyond (continued) • Maintaining user privacy when data logging • Do not store information linked to a specific user • Allow users to view analytics • Pay users to allow their website visitation patterns to be logged
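One common way to avoid storing information linked to a specific user, sketched below under stated assumptions, is to replace identifiers with a salted one-way hash before logging: sessions can still be grouped per user, but the log itself never contains who the user was. This is an illustration, not a complete privacy solution; in practice the salt must itself be protected and periodically rotated.

```python
# Pseudonymize user identifiers before logging: a salted SHA-256 hash
# yields a stable per-user token without revealing the raw identity.
import hashlib
import secrets

SALT = secrets.token_bytes(16)      # kept in memory, never written to logs

def pseudonym(user_id: str) -> str:
    digest = hashlib.sha256(SALT + user_id.encode()).hexdigest()
    return digest[:12]              # short, stable pseudonym under this salt

log_entry = (pseudonym("alice@example.com"), "clicked_checkout")
print(log_entry[0] != "alice@example.com")   # raw identity is never stored
```

Because the hash is deterministic under a given salt, two events from the same user share a pseudonym, which preserves the usage-pattern analysis from the previous slide while honoring the "do not store information linked to a specific user" guideline.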

  44. Evaluation During Active Use and Beyond (continued) • Maintaining user privacy when data logging • Make sure users know data is being collected • While no laws mandate this in all situations, it is a best practice to follow • COPPA - All online services and commercial websites that collect information about children under 13 must provide a privacy policy: https://www.ftc.gov/enforcement/rules/rulemaking-regulatory-reform-proceedings/childrens-online-privacy-protection-rule • Gramm-Leach-Bliley Act - Institutions that engage in financial activities must provide information-sharing policies, including what data is collected: https://www.ftc.gov/tips-advice/business-center/privacy-and-security/gramm-leach-bliley-act • HIPAA - A notice in writing must be provided regarding the privacy policies of health care services: https://www.hhs.gov/hipaa/for-professionals/security/laws-regulations/index.html

  45. Evaluation During Active Use and Beyond (continued) • Online or chat consultants, e-mail, and online suggestion boxes • Many users feel reassured if they know there is human assistance available • On some network systems, the consultants can monitor the user's computer and see the same displays that the user sees • Online suggestion box or e-mail trouble reporting • Electronic mail to the maintainers or designers • For some users, writing a letter may be seen as requiring too much effort

  46. Evaluation During Active Use and Beyond (continued) • Discussion groups, wikis, and newsgroups • Permit postings of open messages and questions • Some are independent, e.g. America Online and Yahoo! • Topic lists • Sometimes moderators • Social systems • Comments and suggestions should be encouraged

  47. Evaluation During Active Use and Beyond (continued) • Automated evaluation tools • Incorporation of testing apps and widgets to provide feedback for the designer • Allows quick error detection and correction prior to user testing • Commonly used to detect whether all items are labelled and to report the number of pages • More complex evaluation tools can find errors such as menu-tree redundancy
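A toy version of the "are all items labelled?" check can be written in a few lines. The sketch below is illustrative only and far simpler than real evaluation tools: it scans an HTML fragment (invented here) for images with no alt text and counts elements per tag.

```python
# Minimal automated check: flag images without alt text and tally elements,
# illustrating the label-detection step an automated evaluation tool performs.
from html.parser import HTMLParser

class LabelChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.unlabelled = []      # src of images missing alt text
        self.counts = {}          # element counts by tag name

    def handle_starttag(self, tag, attrs):
        self.counts[tag] = self.counts.get(tag, 0) + 1
        if tag == "img" and not dict(attrs).get("alt"):
            self.unlabelled.append(dict(attrs).get("src", "<unknown>"))

page = '<p>Hi</p><img src="logo.png" alt="Logo"><img src="chart.png">'
checker = LabelChecker()
checker.feed(page)
print("unlabelled images:", checker.unlabelled)
```

Run over every page of a site, a check like this reports unlabelled items and page counts automatically, before any human participant is brought in.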

  48. Evaluation During Active Use and Beyond (concluded) • Example output of an automated evaluation tool from TechSmith's Morae • The item being measured is mouse clicks • This shows the view for task 2 (selected in the tabbed bar); the other three tasks could also be displayed. These are the values for participant 4. • The drop-down list box allows the evaluator to choose the mouse clicks for other participants • Time is shown across the horizontal axis

  49. Controlled Psychologically-oriented Experiments • Scientific and engineering progress is often stimulated by improved techniques for precise measurement • Rapid progress in the designs of interfaces will be stimulated as researchers and practitioners evolve suitable human-performance measures and techniques

  50. Controlled Psychologically-oriented Experiments (continued) • The outline of the scientific method as applied to human-computer interaction might comprise these tasks: • Deal with a practical problem and consider the theoretical framework • State a lucid and testable hypothesis • Identify a small number of independent variables that are to be manipulated • Carefully choose the dependent variables that will be measured • Judiciously select subjects and carefully or randomly assign subjects to groups • Control for biasing factors (non-representative sample of subjects or selection of tasks, inconsistent testing procedures) • Apply statistical methods to data analysis • Resolve the practical problem, refine the theory, and give advice to future researchers
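The steps above can be sketched as a tiny controlled experiment: the independent variable is which of two menu designs a subject uses, the dependent variable is task completion time, and a Welch t-statistic summarizes the group difference. The timings below are invented for illustration, and a real analysis would also report degrees of freedom, a p-value, and an effect size.

```python
# Minimal between-subjects analysis: two groups (one per menu design),
# task times in seconds as the dependent variable, Welch's t-statistic
# as the summary. Data is invented sample data.
from statistics import mean, variance
from math import sqrt

design_a = [12.1, 10.8, 13.0, 11.5, 12.7]   # control design, task times (s)
design_b = [ 9.9,  8.7, 10.4,  9.2, 10.1]   # new design, task times (s)

def welch_t(x, y):
    # Welch's form does not assume equal variances in the two groups.
    se = sqrt(variance(x) / len(x) + variance(y) / len(y))
    return (mean(x) - mean(y)) / se

print(f"t = {welch_t(design_a, design_b):.2f}")
```

Random assignment of subjects to the two groups (step five above) is what licenses interpreting a large t-statistic as evidence that the design, not the subjects, caused the difference.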
