
Chapter 7 - Evaluation



Presentation Transcript


  1. Chapter 7 - Evaluation HCI: Developing Effective Organizational Information Systems Dov Te’eni Jane Carey Ping Zhang Copyright 2006 John Wiley and Sons, Inc.

  2. Road Map. Context: 1 Introduction; 2 Org & Business Context; 3 Interactive Technologies. Foundation: 4 Physical Engineering; 5 Cognitive Engineering; 6 Affective Engineering. Application: 7 Evaluation; 8 Principles & Guidelines; 9 Organizational Tasks; 10 Componential Design; 11 Methodology. Additional Context: 12 Relationship, Collaboration & Organization; 13 Social & Global Issues; 14 Changing Needs of IT Development & Use. Copyright 2006 John Wiley and Sons, Inc.

  3. Learning Objectives • Explain what evaluation is and why it is important. • Understand the different types of HCI concerns and their rationales. • Understand the relationships of HCI concerns with various evaluations. • Understand usability, usability engineering, and universal usability. Copyright 2006 John Wiley and Sons, Inc.

  4. Learning Objectives • Understand different evaluation methods and techniques. • Select appropriate evaluation methods for a particular evaluation need. • Carry out effective and efficient evaluations. • Critique reports of studies done by others. • Understand the reasons for setting up industry standards. Copyright 2006 John Wiley and Sons, Inc.

  5. Evaluation • Evaluation: the determination of the significance, worth, condition, or value by careful appraisal and study. Copyright 2006 John Wiley and Sons, Inc.

  6. HCI Methodology and Evaluation Copyright 2006 John Wiley and Sons, Inc.

  7. What to evaluate? Four levels of HCI concerns

  8. Why evaluate? • The goal of evaluation is to provide feedback in software development, thus supporting an iterative development process (Gould and Lewis 1985). Copyright 2006 John Wiley and Sons, Inc.

  9. When to evaluate • Formative Evaluation: conducted during the development of a product in order to form or influence design decisions. • Summative Evaluation: conducted after the product is finished to ensure that it possesses a certain quality, meets certain standards, or satisfies certain requirements set by the sponsors or other agencies. Copyright 2006 John Wiley and Sons, Inc.

  10. When to evaluate Figure 7.1 Evaluation as the Center of Systems Development Copyright 2006 John Wiley and Sons, Inc.

  11. When to evaluate • Use and Impact Evaluation: conducted during the actual use of the product by real users in a real context. • Longitudinal Evaluation: involves the repeated observation or examination of a set of subjects over time with respect to one or more evaluation variables. Copyright 2006 John Wiley and Sons, Inc.

  12. Issues in Evaluation • Evaluation Plan • Stage of design (early, middle, late) • Novelty of product (well defined versus exploratory) • Number of expected users • Criticality of the interface (e.g., life-critical medical system versus museum-exhibit support) • Costs of product and finances allocated for test • Time available • Experience of the design and evaluation team Copyright 2006 John Wiley and Sons, Inc.

  13. Usability and Usability Engineering • Usability: the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Copyright 2006 John Wiley and Sons, Inc.
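The three components of this definition are usually operationalized as measurable quantities: effectiveness as task completion rate, efficiency as time on task, and satisfaction as a rating score. A minimal sketch in Python, assuming hypothetical usability-test session records (the field names and the 1-7 satisfaction scale are illustrative assumptions, not part of the chapter):

```python
from statistics import mean

# Hypothetical records from a usability test: one dict per participant/task.
# Field names and the 1-7 satisfaction scale are illustrative assumptions.
sessions = [
    {"completed": True,  "seconds": 95,  "satisfaction": 6},
    {"completed": True,  "seconds": 120, "satisfaction": 5},
    {"completed": False, "seconds": 300, "satisfaction": 2},
]

# Effectiveness: share of users who achieved the specified goal.
effectiveness = mean(1.0 if s["completed"] else 0.0 for s in sessions)

# Efficiency: average time on task for successful attempts.
efficiency = mean(s["seconds"] for s in sessions if s["completed"])

# Satisfaction: average subjective rating.
satisfaction = mean(s["satisfaction"] for s in sessions)

print(f"effectiveness={effectiveness:.0%}, efficiency={efficiency:.0f}s, satisfaction={satisfaction:.1f}/7")
```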

  14. Usability and Usability Engineering Figure 7.2 System Acceptability and Usability

  15. Table 7.4 Nielsen’s Definitions • Usefulness: the issue of whether the system can be used to achieve some desired goal. • Utility: the question of whether the functionality of the system can, in principle, do what is needed. • Usability: the question of how well users can use that functionality. • Learnability: the system should be easy to learn so that the user can rapidly start getting some work done with the system. • Efficiency: the system should be efficient to use, so that once the user has learned the system, a high level of productivity is possible. Copyright 2006 John Wiley and Sons, Inc.

  16. Table 7.4 Nielsen’s Definitions • Memorability: the system should be easy to remember, so that the casual user is able to return to the system after some period of not having used it, without having to learn everything all over again. • Errors: the system should have a low error rate, so that users make few errors during the use of the system, and so that if they do make errors they can easily recover from them. Further, catastrophic errors must not occur. • Satisfaction: the system should be pleasant to use, so that users are subjectively satisfied when using it; they like it. Copyright 2006 John Wiley and Sons, Inc.
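Nielsen's "errors" attribute is typically quantified from the same test logs as the other attributes: the error rate per task and the share of errors users recover from. A minimal sketch, with the log format as an illustrative assumption:

```python
# Hypothetical per-task log: number of errors made and how many the user recovered from.
task_logs = [
    {"errors": 0, "recovered": 0},
    {"errors": 2, "recovered": 2},
    {"errors": 1, "recovered": 0},  # an unrecovered error flags a potentially serious problem
]

total_errors = sum(t["errors"] for t in task_logs)
error_rate = total_errors / len(task_logs)  # mean errors per task
recovery_rate = (sum(t["recovered"] for t in task_logs) / total_errors) if total_errors else 1.0

print(f"errors per task: {error_rate:.2f}, recovered: {recovery_rate:.0%}")
```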

  17. Usability Engineering • Usability Engineering: a process through which usability characteristics are specified, quantitatively and early in the development process, and measured throughout the process. Copyright 2006 John Wiley and Sons, Inc.
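One way to read "specified quantitatively and early" is as a usability specification: for each attribute, an acceptable level and a planned target are set before development, and measured values are checked against them at each iteration. A minimal sketch, with attribute names and numbers as illustrative assumptions:

```python
# Hypothetical usability specification: attribute -> (worst acceptable level, planned target).
# Attribute names and numbers are illustrative assumptions.
spec = {
    "task_completion_rate": (0.80, 0.95),    # fraction of users completing the task
    "mean_time_on_task_s":  (180.0, 120.0),  # seconds; lower is better
    "mean_satisfaction":    (4.0, 6.0),      # 1-7 rating; higher is better
}

# Measured values from the current iteration's usability test.
measured = {"task_completion_rate": 0.88, "mean_time_on_task_s": 140.0, "mean_satisfaction": 5.2}

def meets_worst_level(attr: str, value: float) -> bool:
    """True if the measured value is at least as good as the worst acceptable level."""
    worst, planned = spec[attr]
    lower_is_better = planned < worst  # infer direction of "better" from the spec itself
    return value <= worst if lower_is_better else value >= worst

for attr, value in measured.items():
    status = "OK" if meets_worst_level(attr, value) else "BELOW WORST ACCEPTABLE"
    print(f"{attr}: measured={value} -> {status}")
```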

  18. Evaluation Methods • Laboratory Experiments • Controlled Experiments • Computer Simulation • Human Information Processing Theory Copyright 2006 John Wiley and Sons, Inc.

  19. Analytical Methods • Heuristic Evaluation • Heuristics: higher-level design principles used in practice to guide designs; also called rules of thumb. • Heuristic evaluation: a group of experts, guided by a set of higher-level design principles or heuristics, evaluates whether interface elements conform to the principles. Copyright 2006 John Wiley and Sons, Inc.
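In practice, each expert records problems against the heuristics independently, and the findings are then merged and prioritized by severity before being handed to the design team. A minimal sketch of that aggregation step, assuming invented findings and a 0-4 severity scale (a common convention, but an assumption here):

```python
from collections import defaultdict

# Hypothetical findings: (evaluator, heuristic violated, interface element, severity 0-4).
findings = [
    ("expert_A", "Visibility of system status", "upload dialog", 3),
    ("expert_B", "Visibility of system status", "upload dialog", 4),
    ("expert_B", "Consistency and standards",   "toolbar icons", 2),
    ("expert_C", "Error prevention",            "delete button", 4),
]

# Merge findings per (heuristic, element), keeping the report count and worst severity.
merged = defaultdict(lambda: {"reports": 0, "max_severity": 0})
for _, heuristic, element, severity in findings:
    key = (heuristic, element)
    merged[key]["reports"] += 1
    merged[key]["max_severity"] = max(merged[key]["max_severity"], severity)

# Prioritize: most severe first, then most frequently reported.
for (heuristic, element), info in sorted(
        merged.items(), key=lambda kv: (-kv[1]["max_severity"], -kv[1]["reports"])):
    print(f"severity {info['max_severity']} ({info['reports']} report(s)): {heuristic} - {element}")
```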

  20. Usability Heuristics Copyright 2006 John Wiley and Sons, Inc.

  21. Usability Heuristics

  22. Eight Golden Rules

  23. Eight Golden Rules

  24. HOMERUN Heuristics for Websites Copyright 2006 John Wiley and Sons, Inc.

  25. Cognitive Walkthrough • The following steps are involved in cognitive walkthroughs: • The characteristics of typical users are identified and documented and sample tasks are developed that focus on the aspects of the design to be evaluated. • A designer and one or more expert evaluators then come together to do the analysis. • The evaluators walk through the action sequences for each task, placing it within the context of a typical scenario, and as they do this they try to answer the following questions: • Will the correct action be sufficiently evident to the user? • Will the user notice that the correct action is available? • Will the user associate and interpret the response from the action correctly? Copyright 2006 John Wiley and Sons, Inc.

  26. Cognitive Walkthrough • As the walkthrough is being done, a record of critical information is compiled in which the assumptions about what would cause problems, and why, are recorded. This involves explaining why users would face difficulties. Notes about side issues and design changes are made. A summary of the results is compiled. • The design is then revised to fix the problems identified. Copyright 2006 John Wiley and Sons, Inc.
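The "record of critical information" can be kept as a simple structured log: one entry per action step, answering the three walkthrough questions and noting assumed causes of difficulty. A minimal sketch, with purely hypothetical task and step content:

```python
from dataclasses import dataclass

@dataclass
class WalkthroughStep:
    """One action in the task sequence and the evaluators' answers to the three questions."""
    action: str
    action_evident: bool        # Will the correct action be sufficiently evident to the user?
    action_noticed: bool        # Will the user notice that the correct action is available?
    feedback_understood: bool   # Will the user interpret the response from the action correctly?
    notes: str = ""             # Assumed causes of difficulty, side issues, design-change ideas.

# Hypothetical walkthrough of a "save a draft" scenario.
steps = [
    WalkthroughStep("Open the File menu", True, True, True),
    WalkthroughStep("Choose 'Save as draft'", False, True, True,
                    notes="Label buried among similar items; users may pick 'Save' instead."),
]

# Summarize: any step with a 'no' answer is a candidate usability problem to fix in revision.
problems = [s for s in steps if not (s.action_evident and s.action_noticed and s.feedback_understood)]
for s in problems:
    print(f"Problem at step '{s.action}': {s.notes or 'see walkthrough notes'}")
```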

  27. Pluralistic Walkthroughs • Pluralistic walkthroughs are “another type of walkthrough in which users, developers and usability experts work together to step through a task scenario, discussing usability issues associated with dialog elements involved in the scenario steps.” (Nielsen and Mack 1994) Copyright 2006 John Wiley and Sons, Inc.

  28. Inspection with Conceptual Frameworks such as the TSSL model • Another structured analytical evaluation method is to use conceptual frameworks as bases for evaluation and inspection. One such framework is the TSSL model we have introduced earlier in the book. Copyright 2006 John Wiley and Sons, Inc.

  29. Example 1 - Evaluating option/configuration specification interfaces Figure 7.3 A Sample Dialog Box Copyright 2006 John Wiley and Sons, Inc.

  30. Evaluating option/configuration specification interfaces Figure 7.4 A Sample Tabbed Dialog Box

  31. Evaluating option/configuration specification interfaces Figure 7.5 The Preferences Dialog Box with Tree Menu Copyright 2006 John Wiley and Sons, Inc.

  32. Evaluating option/configuration specification interfaces Copyright 2006 John Wiley and Sons, Inc.

  33. Example 2: Yahoo, Google, and Lycos web portals and search engines Compare and contrast displays for top searches of 2003. Which uses color most effectively? Layout? Ease of understanding? Why? Copyright 2006 John Wiley and Sons, Inc.

  34. Empirical Methods • Surveys and Questionnaires • Used to collect information from a large group of respondents. • Interviews (including focus groups) • Used to collect information from a small set of key respondents. • Experiments • Used to determine the best design features from many options. • Field studies • Results are more generalizable since they occur in real settings. Copyright 2006 John Wiley and Sons, Inc.
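Questionnaire data from such studies is often collected with a standardized instrument. As one illustration (the choice of instrument is an assumption, not part of the chapter), the System Usability Scale (SUS) is scored by rescaling its ten 1-5 items and multiplying the sum by 2.5 to get a 0-100 score:

```python
def sus_score(responses):
    """Score one completed System Usability Scale (SUS) questionnaire.

    `responses` is a list of ten answers on a 1-5 agreement scale, in item order.
    Odd-numbered items (positively worded) contribute (answer - 1);
    even-numbered items (negatively worded) contribute (5 - answer).
    The sum of contributions (0-40) is multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical respondent: answers to items 1..10.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```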

  35. Standards • Standards: concerned with prescribed ways of discussing, presenting, or doing things, in order to achieve consistency across the same type of products. Figure 7.10 Categories of HCI Related Standards Copyright 2006 John Wiley and Sons, Inc.

  36. Standards • Sources of Standards

  37. Common Industry Format (CIF) • Common Industry Format (CIF): a standard method for reporting summative usability test findings. • The type of information and level of detail required in a CIF report is intended to ensure that: • Good practice in usability evaluation has been adhered to. • There is sufficient information for a usability specialist to judge the validity of the results. • If the test were replicated on the basis of the information given in the CIF, it would produce essentially the same results. • Specific effectiveness and efficiency metrics must be used. • Satisfaction must also be measured. Copyright 2006 John Wiley and Sons, Inc.

  38. Common Industry Format (CIF) • According to NIST, the CIF can be used in the following fashion. For purchased software: • Require that suppliers provide usability test reports in CIF format. • Analyze for reliability and applicability. • Replicate within agency if required. • Use data to select products. • For developed software (in house or subcontract): • Define measurable usability goals. • Conduct formative usability testing as part of user interface design activities. • Conduct summative usability test using the CIF to ensure goals have been met. Copyright 2006 John Wiley and Sons, Inc.

  39. Summary • Evaluations are driven by the ultimate concerns of human–computer interaction. • In this chapter, we presented four types of such concerns along the following four dimensions of human needs: physical (ergonomic), cognitive, affective, and extrinsic motivational (usefulness). • Evaluations should occur during the entire system development process, after the system is finished, and during the period the system is actually used. • This chapter introduced several commonly used evaluation methods; their pros and cons were compared and discussed. • The chapter also provided several useful instruments and heuristics. Standards play an important role in practice and are discussed in the chapter. A particular standard, the Common Industry Format, is described, and the detailed format is given in the appendix. Copyright 2006 John Wiley and Sons, Inc.
