
Analyzing and Presenting Results: Establishing a User Orientation

This guide provides strategies and techniques for analyzing and presenting usability test results to improve the user orientation of a product or service. It covers tabulating and analyzing data, analyzing video and audio recordings, statistical presentation and analysis, identifying usability problems, and communicating the results effectively.


Presentation Transcript


  1. Analyzing and Presenting Results: Establishing a User Orientation
  Alfred Kobsa, University of California, Irvine

  2. Tabulating and analyzing data
  • Tabulate data in spreadsheet(s) per user and per task (“raw data”)
    • both quantitative and qualitative data (e.g., comments)
  • Compute totals per user, and means and medians per task (a spreadsheet-style sketch follows this slide)
  • Find outlier values in the raw data, totals and averages, and try to explain them:
    • go back to the original data source to check for measurement/transcription errors
    • look at the time sheet / protocol and the video recording
  • Outliers may point to infrequent usability problems, or they may derive from “accidental” characteristics of the respective test user (see log book!). In the latter case:
    • disregard outlier values if this can be justified, or use the median instead of the average
    • [remove subjects with many outlier values completely if this can be justified (very few subjects only!)]
  • Look at means/medians and possibly standard deviations of tasks to:
    • determine whether usability concerns are confirmed by the data
    • discover surprises in the data, and determine whether they may point to usability problems
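
A minimal sketch of this tabulation step in Python/pandas, assuming task-completion times in seconds; the column names, sample values, and the 1.5 × IQR outlier rule are illustrative assumptions, not prescribed by the slides.

```python
import pandas as pd

# Hypothetical raw data: one completion time (seconds) per user and task.
raw = pd.DataFrame({
    "user": ["u1", "u2", "u3", "u4", "u5"] * 2,
    "task": ["t1"] * 5 + ["t2"] * 5,
    "secs": [42, 38, 41, 45, 40,      # task t1
             95, 88, 92, 90, 310],    # task t2: u5 looks suspicious
})

# Totals per user, and mean/median/std per task.
totals_per_user = raw.groupby("user")["secs"].sum()
stats_per_task = raw.groupby("task")["secs"].agg(["mean", "median", "std"])

# Flag outliers per task with the 1.5 * IQR rule (one common convention).
# Flagged rows should be re-checked against the time sheet / protocol and
# the recordings, not dropped blindly.
def iqr_outliers(secs: pd.Series) -> pd.Series:
    q1, q3 = secs.quantile(0.25), secs.quantile(0.75)
    iqr = q3 - q1
    return (secs < q1 - 1.5 * iqr) | (secs > q3 + 1.5 * iqr)

raw["outlier"] = raw.groupby("task")["secs"].transform(iqr_outliers)
print(stats_per_task)
print(raw[raw["outlier"]])   # candidates for a second look
```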

  3. Analyzing video and audio recordings
  • It is typically easier to analyze video data with concrete questions in mind rather than merely “watching out for usability problems”
    • this applies less to audio data (interviews and “think aloud”), since subjects often verbalize the problems they encounter
  • Observations should be noted down (with time stamps)
    • categories for observations may already exist, or can be created during the observation process
  • It is often advisable to use two independent observers who afterwards compare their notes (and go back to the recordings to resolve disputes); one way to quantify their agreement is sketched below
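
When two observers independently categorize the same timestamped events, their agreement can be quantified before disputes are resolved. The slide does not prescribe a metric; Cohen's kappa is one common choice, sketched here with invented observation categories.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two observers' category labels."""
    n = len(labels_a)
    assert n == len(labels_b) and n > 0
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in freq_a.keys() | freq_b.keys())
    return (p_observed - p_chance) / (1 - p_chance)

# Category each observer assigned to the same six timestamped events
# (event categories are invented for illustration).
obs_a = ["nav", "nav",   "label", "error", "nav", "label"]
obs_b = ["nav", "label", "label", "error", "nav", "nav"]
print(f"kappa = {cohen_kappa(obs_a, obs_b):.2f}")   # ~0.45: moderate agreement
```

Low agreement is itself a finding: it suggests the observation categories need sharpening before the two sets of notes are merged.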

  4. Statistical presentation and analysis
  • Results of usability tests are usually presented using:
    • tabulated raw values (possibly with outliers marked)
    • verbatim comments
    • descriptive statistics (means, medians, possibly standard deviations, mode)
    • visualizations of raw and statistical values, if this adds value
  • In rare cases, inferential statistics can be used
    • specifically for comparing two competing prototypes, or the “old” and the “new” system (e.g., using Student’s t-test; see the sketch after this slide)
    • this should be done with extreme caution, since:
      • the preconditions for the applicability of statistical tests are often not met (random sampling of subjects and random assignment to conditions, normally distributed data)
      • sample sizes are often very small
      • statistical significance of a difference does not mean that the difference is important
      • decision makers often do not know how to interpret the results of a statistical test (and are not familiar with the preconditions and limits of such tests)
      • testers are often not well trained in statistics and may not know which test is appropriate
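
A sketch of the prototype comparison mentioned above, with invented timing data. Note that it uses Welch's variant of the t-test (SciPy's equal_var=False), a deliberate substitution that drops the equal-variance assumption of the classic Student's test; every caveat in the list above still applies.

```python
from scipy import stats

# Completion times (seconds) for the same task on the old and new design,
# from two independent groups of users (numbers are invented).
old_ui = [95, 88, 102, 110, 91, 97]
new_ui = [72, 80, 69, 85, 78, 74]

t, p = stats.ttest_ind(old_ui, new_ui, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
# Even if p < 0.05, check that the size of the difference matters in
# practice and that the sampling/normality preconditions were plausibly met.
```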

  5. Identifying usability problems
  • Involve the designers / programmers (particularly if they are going to perform the revisions)
  • Focus on global problems, since they often affect many aspects of an interface
    • global problems are more difficult to pinpoint and to correct
  • Rank problems by level of severity (a triage sketch follows this slide):
    • Level 1: problem may prevent the successful completion of a task
    • Level 2: problem may create significant delay and frustration
    • Level 3: problem has a minor effect on usability
    • Level 4: possible enhancement that can be added in the future
  • Recommend changes (and test those changes later)
  • Possibly include some positive results as well
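
A hypothetical structure for this ranking step, using the slide's four severity levels; the field names and the "global before local" tie-break are assumptions for illustration, not part of the slides.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    description: str
    severity: int    # 1 = blocks task completion ... 4 = future enhancement
    scope: str       # "global" or "local"

problems = [
    Problem("Save button label ambiguous", 3, "local"),
    Problem("Navigation model inconsistent across screens", 2, "global"),
    Problem("Login fails with non-ASCII passwords", 1, "local"),
]

# Triage order: most severe first; global problems before local ones
# at equal severity, since they affect many parts of the interface.
for p in sorted(problems, key=lambda p: (p.severity, p.scope != "global")):
    print(f"Level {p.severity} ({p.scope}): {p.description}")
```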

  6. Communicating the results
  “What did we see? What does it mean? What should we do about it?”
  • Preparing a report / reports (see Baxter et al., Chapter 14; Dumas and Redish, Chapter 22)
  • Preparing a PowerPoint presentation
  • Preparing a stand-alone video/multimedia presentation (see Dumas and Redish, Chapter 23)

  7. Changing the product and process
  • Collaborate with designers/developers throughout the evaluation process (and possibly with management)
  • Prioritize and motivate your recommendations for re-design
  • Collaborate on finding feasible ways to fix the problems
  • Make suggestions to improve the design process, such as:
    • earlier involvement of users
    • earlier testing of designs and prototypes
    • hiring HCI staff
    • developing design guidelines
    • planning re-evaluation early
