
HCI460: Week 3 Lecture


Presentation Transcript


  1. HCI460: Week 3 Lecture • September 23, 2009

  2. Outline • Project 1 recap • Q & A • Research overview • Defining usability • Usability measures • Testing with users • Preparation for a usability study • Assignment for next week

  3. Project 1 Recap

  4. Project 1 Recap Feedback on Project 1a (Individual Notes) • No deductions for late submissions, but from now on: • In-class students: Projects are due Wed 11:59 pm Central Time • DL students: Projects are due Sun 11:59 pm Central Time • Please submit through COL. • One person should submit for the entire group.

  5. Project 1 Recap Feedback on Project 1a (Individual Notes) • Assignments differed along two dimensions: • Number of issues found • Some found only a few, some found a lot. • Level of polish • Some turned in polished deliverables, some turned in “quick and dirty” work. • Real world: perfection vs. efficiency • Recognize when “quick and dirty” is needed and when professional and final deliverable is needed. • At this stage of the project (individual notes), it was better to have more issues and present them in a “quick and dirty” way than to have fewer issues but a more polished deliverable.

  6. Project 1 Recap Project 1b: Discussion • Project 1b is due today at midnight for in-class students and on Sunday at midnight for DL students. • One person from each group should upload it to COL. • What have you learned? • What was difficult / challenging? • About the evaluation itself • About evaluating as a group • How did you overcome the challenges? • What will you do differently next time? • Did you see any value in conducting the evaluation as a group?

  7. Q & A

  8. Q & A Class Interaction ~ Opportunity • Interaction has been excellent thus far • Clarification will always be provided, just ask • Exercises allow for alternative feedback • Feedback • Opportunities • Push the User eXperience (UX) envelope • Professional development

  9. Q & A Conference Recap • UXalliance.com • mobileHCI • Eating usability • Touch interface debate

  10. Research Overview

  11. Research Overview Why Are You Here? What do we do? What do you want to do?

  12. Research Overview Paths Into UX Research • Research background • Design background

  13. Research Overview Thinking About UX Research • If someone asked, what is it? • What we strive to do is answer questions. • Methodologies and techniques are just tools. • As practitioners, we must stay focused on the questions. • Often, finding the answer requires deeper and more complex methodologies than discount usability testing provides. Or not! • Let’s explore and quite possibly push the boundaries of user experience research tactics.

  14. Research Overview What Can’t We Evaluate? • May have mentioned…. • Web sites • Software applications • Interactive Voice Response systems (IVRs) • Speech recognition systems • Text-to-speech (TTS) interfaces • Voice mail systems • Unified messaging interfaces • Internet applications • Games and gaming systems • Telecommunication products and services • Call center applications • Consumer products • Wireless devices • Packaging

  15. Research Overview What Can We Test? • Consider an iPhone…

  16. Research Overview Three Dimensions of Feedback • How people evaluate objects • Commercials • Packages • Online advertisements • Products • …

  17. Research Overview Attitude • What users “say”… • Influencers • Social status • Emotion • Coolness / Hip • Reveal • Feature importance • Purchase intent

  18. Research Overview Behavior • What users actually “do” • Ultimately, behaviors are what we wish to shape • Give users context, a task, and stimuli, and then • Observe what users do • Behavior drives usage

  19. Research Overview Attention • What users “focus” on… • What happens inside the head • Sometimes users are unaware • Attention is often measured by eye tracking

  20. Defining Usability

  21. Defining Usability Why Are You Here? What do we do? What do you want to do?

  22. Defining Usability Can Usability Be Defined? We want to make things better. But how do you measure “better?” Can usability be defined?

  23. Defining Usability The UX Honeycomb

  24. Defining Usability What Makes Something Usable? Lights are on. Turn off the light.

  25. Defining Usability Are There Universal Rules or Stereotypes? Turn the knob to move the arrow to the right.

  26. Defining Usability Interaction Not Always Universal Make the shower temperature “just right.”

  27. Defining Usability How About More Complicated Interfaces?

  28. Defining Usability Can Interfaces Be Learned?

  29. Defining Usability Emphasis on Usability in Many Fields

  30. Defining Usability Naysayers: You Cannot Measure Usability • No measures exist. • Why? • What can we measure? • Success/Fail • Time • Accuracy • Satisfaction • ...

  31. Defining Usability Definitions of Usability • From UPA: • Usability is an approach to product development that incorporates direct user feedback throughout the development cycle in order to reduce cost and create products and tools that meet user needs. • From Krug’s “Don’t Make Me Think:” • Usability really means making sure that something works well…so that a person of average ability and experience can use the thing for its intended purpose without being hopelessly frustrated.

  32. Defining Usability Definitions of Usability • From ISO 9241-11: • The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. • Effectiveness: Being able to complete the task • Efficiency: Amount of effort required to complete the task • Satisfaction: Degree of happiness or fulfillment while performing the task (or rather the absence of pain and frustration)

  33. Defining Usability Naysayers: Usability Data Are Too Noisy • There is just too much going on to get reliable data—NOT! • Measurement must: • Be observable • Be quantifiable • Have sound control • Have understanding of context

  34. Defining Usability Not All Measures Are Created Equal • (Slide shows a scale: Bad / Inappropriate / Good)

  35. Defining Usability Types of Usability Metrics • Performance metrics • Issues-based metrics • Self-reported metrics • Behavioral and physiological metrics • Combined and comparative metrics

  36. Defining Usability Performance Metrics • Task success • Binary: success/fail • Levels of success: complete success / partial success / … / failure • Time on task • Actual time • Thresholds • Errors • Actual number • Efficiency • Deviations from the optimal path • Lostness (Smith, 1996) • Combination of task success and time (NIST, 2001) • Learnability • The metrics above collected across repeated trials: within the same session, with breaks, or between sessions
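The lostness measure cited above (Smith, 1996) has a concrete formula: with N = the number of unique pages a participant visited, S = the total number of pages visited (counting revisits), and R = the minimum number of pages the task requires, lostness is sqrt((N/S - 1)^2 + (R/N - 1)^2). A minimal Python sketch (the participant values are hypothetical):

```python
import math

def lostness(unique_pages, total_pages, minimum_pages):
    """Lostness (Smith, 1996): 0 means perfectly direct navigation;
    values toward 1 mean the participant wandered badly.
    unique_pages  (N): distinct pages visited
    total_pages   (S): all page visits, counting repeats
    minimum_pages (R): fewest pages the task actually requires"""
    n, s, r = unique_pages, total_pages, minimum_pages
    return math.sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

# Hypothetical participant: the task needs 3 pages, they visited 6 (4 unique).
print(round(lostness(4, 6, 3), 2))  # prints 0.42
```

A perfectly efficient path (N = S = R) scores 0; Smith suggested that scores above roughly 0.5 indicate users who are noticeably lost.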

  37. Defining Usability Issues-Based Metrics • Usability issues • Number of issues found • Percentage of participants who found an issue • Severity ratings • High severity • Medium severity • Low severity
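The issues-based metrics above reduce to simple counting: how many issues at each severity level, and what share of participants hit each one. A small sketch using an invented issue log from a hypothetical five-participant study (issue names and severities are made up for illustration):

```python
# Hypothetical issue log: issue -> (severity, participants who encountered it).
issues = {
    "nav-menu-hidden": ("high", {"P1", "P3", "P5"}),
    "ambiguous-label": ("medium", {"P2", "P3"}),
    "small-body-font": ("low", {"P4"}),
}
n_participants = 5

# Report each issue's severity and the share of participants who found it.
for name, (severity, found_by) in issues.items():
    pct = 100 * len(found_by) / n_participants
    print(f"{name}: {severity} severity, found by {pct:.0f}% of participants")
```

On this toy data the high-severity navigation issue was found by 60% of participants, which is the kind of frequency-by-severity summary these metrics feed into.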

  38. Defining Usability Self-Reported Metrics • Post-task ratings (Likert) • Post-session ratings (Likert) • System Usability Scale (SUS) (Brooke, 1996) (Likert) • NASA TLX (Likert) • Specific attribute (agreement) • Answers to open-ended questions

  39. Defining Usability System Usability Scale (SUS)
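SUS scoring (Brooke, 1996) is mechanical enough to show in a few lines. The standard procedure: odd-numbered items (positively worded) contribute (response - 1), even-numbered items (negatively worded) contribute (5 - response), and the sum is multiplied by 2.5 to land on a 0 to 100 scale. A sketch:

```python
def sus_score(responses):
    """Score one participant's System Usability Scale questionnaire.

    responses: ten integers, 1 (strongly disagree) .. 5 (strongly agree),
    in questionnaire order. Returns a 0-100 score (not a percentage)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    # 0-indexed even positions are the odd-numbered (positive) items.
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

print(sus_score([3] * 10))  # all-neutral answers score the midpoint: 50.0
```

Note that 50 is the scale midpoint, not "average": published industry averages are often cited around 68.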

  40. Defining Usability Behavioral and Physiological Metrics • Verbal and non-verbal • Positive / negative comments • Facial expressions • Computerized analysis • Eye movements • Attention • Pupil diameter • Workload • Skin conductance and heart rate • Stress

  41. Defining Usability Combined and Comparative Metrics • Usability scorecards • Comparison • Harvey balls • Radar charts

  42. Defining Usability Again, Metric Selection is Non-Trivial • (Slide repeats the scale: Bad / Inappropriate / Good)

  43. Defining Usability Consider a Horse Race: Which Measure is Good?

  44. Defining Usability Different Conditions: Yes, That is Snow

  45. Defining Usability Appropriate Metrics Depend on Many Factors

  46. Defining Usability Exercise: Measurement Scale • Objective: • Tons of user complaints that the volume is too soft. • We have three new versions and want to know if the problem has been fixed. • Measure: • Rating of the perceived sound level on a 1–7 scale

  47. Defining Usability Evaluating Usability

  48. Testing with Users

  49. Testing with Users Methods with No Users vs. Methods with Users • Inspection methods = no users, only UX experts • Heuristic evaluation • Expert evaluation • Competitive evaluation • Cognitive walkthrough • Pluralistic walkthrough • Methods involving users • User testing (lab, longitudinal) • Eye tracking • Focus groups • Surveys • Ethnographic research

  50. Testing with Users Limitations of Testing (Rubin … Lab Testing) • Artificial situations • Results cannot prove product will work • Participants not necessarily actual target market • Might not be the right thing to do • Others? (Beyond Rubin?) • Sample size considerations • Feature coverage
