Chapter 22

Planning who, what, when, and where

Intro
  • We already have the strategy:
    • purpose
    • type of data to collect
    • system
    • constraints
  • Now:
    • choose users (population sample)
    • create timetable (and stick to it)
    • prepare task descriptions (script it!)
    • decide where to evaluate (field or lab)
Choosing participants
  • Each participant should be:
    • a real (actual) user, or
    • a representative user (from requirements), or
    • usability or domain expert
  • Participants should not be:
    • chosen at random
    • (hmm, this is contrary to “traditional” experimental criteria…why?)
Screening or pretesting
  • To gauge whether a user fits the desired subject profile, you may need to screen candidates
  • For example, if testing a Spanish language training program, don’t use fluent Spanish speakers
  • Questionnaires may be used to record users’ experience levels (useful in analysis later, e.g., to discard experts’ data; see the sketch below)
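A minimal sketch of that last bullet, assuming a hypothetical screening.csv exported from the questionnaire, one row per respondent, with a self-rated spanish_fluency column (1 = none … 5 = fluent); for the Spanish-trainer example, fluent speakers get screened out before testing:

```python
import csv

# Hypothetical screening-questionnaire export: one row per respondent,
# with a self-rated "spanish_fluency" column (1 = none .. 5 = fluent).
with open("screening.csv", newline="") as f:
    respondents = list(csv.DictReader(f))

# Screen out fluent speakers: they don't fit the subject profile.
eligible = [r for r in respondents if int(r["spanish_fluency"]) < 4]
print(f"{len(eligible)} of {len(respondents)} respondents fit the profile")
```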
Working alone or in pairs?
  • Usually users are tested alone
  • When may pairs be a good idea?
    • working cooperatively, sharing a computer
    • users from a culture where working in pairs is more comfortable (e.g., Japanese participants)
    • they prefer it (e.g., husband/wife team)
  • Hire a facilitator / caretaker / custodian?
    • when working with children, disabled, etc.
    • when interpreter is needed
How many participants?
  • Depends on the problem and the stage of testing
    • “trivial” or “easy” trouble spots will be identified quickly (and often, when there are many users), so only a few users are needed during early stages
    • OTOH, if a couple of users find no problems, does that mean the UI is acceptable in general?
    • Key issue: generalizability
    • Ideally, you’d conduct a power analysis of the experiment
    • But one typically goes with a rule of thumb: start with 5, go to 10, etc. (see the sketch after this list)
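To put numbers behind that rule of thumb, here’s a sketch based on Nielsen & Landauer’s problem-discovery model: the fraction of usability problems found by n users is 1 - (1 - L)^n, where L ≈ 0.31 was the average per-user discovery rate in their data (treat that constant as an assumption, not a law, for your own system). The last lines assume statsmodels is installed and show a textbook power analysis for comparing two designs:

```python
# Problem-discovery model (Nielsen & Landauer): fraction of usability
# problems found by n users, with L = per-user discovery probability.
def problems_found(n: int, l: float = 0.31) -> float:
    return 1 - (1 - l) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> ~{problems_found(n):.0%} of problems found")
# 5 users -> ~84%, 10 users -> ~98%: hence "start with 5, go to 10"

# For a formal comparison of two designs, a power analysis gives the
# required sample size; e.g., a two-sample t-test detecting a large
# effect (Cohen's d = 0.8) at alpha = 0.05 with 80% power:
from statsmodels.stats.power import TTestIndPower
n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(f"~{n_per_group:.0f} participants per group")   # about 26 per group
```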
University participants
  • The book’s discussion seems to be aimed at practitioners
  • What about at the Uni?
    • use the Psych pool, other students, etc.
    • problems:
      • restricted age group
      • users may not have required expertise (e.g., evaluating a Fortran debugger)
      • motivation may be wanting (e.g., doing it for credit, not so much for science or “good of humanity” :)
Incentive
  • If possible, compensate users:
    • extra credit
    • real credit (e.g., on an e-commerce web site)
    • food
    • knick-knacks (mugs, pens, bla bla)
    • soap (true story :)
    • money is always good, if you have enough to spare
Global Warming App Users
  • How they picked users for the Global Warming App study:
    • email solicitation
    • experienced users (not novices)
    • various disciplines (e.g., not CS necessarily)
    • 10 users
    • no incentives
Create a timetable
  • Timetable:
    • How long do you need per evaluation session?
      • may need to run a quick pilot study to find out
    • How much time will the whole process take?
      • quick “back of the envelope” calculation: 100 subjects × 10 minutes each = 1000 minutes ≈ 17 hours (not counting introductions, filling out questionnaires, lunch, dinner, classes, interruptions, Survivor episodes, etc.)
      • how many sessions can you run per day? Maybe 4-5 hours’ worth?
      • so what’s a realistic estimate for 100 subjects? A week? Two weeks? (see the sketch after this list)
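Here’s that arithmetic as a throwaway script, with assumed per-session overhead (5 minutes for intro, consent, and questionnaires) and 4.5 usable lab hours per day; swap in numbers from your own pilot study:

```python
SUBJECTS      = 100
TASK_MIN      = 10    # minutes of actual evaluation per subject
OVERHEAD_MIN  = 5     # intro, consent, questionnaires (assumed)
LAB_HRS_DAY   = 4.5   # realistic hours of sessions per day (assumed)

total_hours = SUBJECTS * (TASK_MIN + OVERHEAD_MIN) / 60
lab_days = total_hours / LAB_HRS_DAY
print(f"{total_hours:.1f} hours of sessions -> about {lab_days:.0f} lab days")
# 25.0 hours -> about 6 lab days, i.e. well over a working week
```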
Timetable (cont.)
  • Keep each evaluation session short; try not to exceed 1 hour (subjects get bored and tired)
  • Create a timetable “sign-up sheet” (a throwaway generator is sketched after this list)
    • very useful for signing up subjects and reserving lab space (e.g., eye trackers)
  • Allocate time for analysis
    • 80% of time spent in analysis (true? I dunno, just guessing)
    • just like debugging code?
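A sketch of that sign-up sheet generator, assuming half-hour slots, a 9:00 start, and a five-day week; the dates and slot counts are placeholders to adjust:

```python
from datetime import datetime, timedelta

start = datetime(2024, 3, 4, 9, 0)   # placeholder: a Monday, 9:00 AM
slot = timedelta(minutes=30)

# Five days, nine half-hour slots per day (9:00-13:30, about 4.5 hours);
# print it, or paste it into the sign-up sheet on the lab door.
for day in range(5):
    for s in range(9):
        t = start + timedelta(days=day) + s * slot
        print(t.strftime("%a %b %d  %H:%M"), " name: ________  eye tracker? __")
```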
Task Descriptions
  • Create task descriptions
    • similar to the idea of scripts for evaluators (so they know what to say, and say the same thing to each participant, thereby reducing bias)
    • these are scripts for users
    • I think they (task cards) are a good idea, but only so long as they’re not too detailed; a good card states the goal (e.g., “find tomorrow’s forecast for San Diego”), not the click-by-click steps
      • case study: in my Navy usability study, participants read directions from a script; this made the tasks too easy, everyone performed similarly, and no clear performance problems were identified
Where to do evaluation?
  • Field studies
    • observations in the field: most realistic environment, obviously
    • lacks control
  • Controlled studies
    • in a lab, usually
    • or some kind of mock scenario (e.g., “shoot house” for cops, SWAT personnel)
Usability Lab
  • How to build a good usability lab:
    • http://www.stcsig.org/usability/topics/usability-labs.html
    • often a separate room is used for participants (one-way mirror)
    • various logging devices, e.g., cameras, keystroke logging software, etc. (a toy example is sketched below)
    • do we have one at Clemson? We should…
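As a toy stand-in for the logging gear: a minimal sketch of timestamped event logging, using a bare Tkinter window in place of the system under test (real labs use dedicated capture tools, but the idea, stamping every event so it can be synced with the video later, is the same):

```python
import time
import tkinter as tk

log = open("session.log", "w")
t0 = time.monotonic()

def on_key(event):
    # one line per keystroke: seconds since session start, then the key
    log.write(f"{time.monotonic() - t0:8.3f}  key    {event.keysym}\n")

def on_click(event):
    # mouse clicks get window-relative coordinates instead
    log.write(f"{time.monotonic() - t0:8.3f}  click  ({event.x},{event.y})\n")

root = tk.Tk()
root.title("system under test (stand-in)")
root.bind("<Key>", on_key)
root.bind("<Button>", on_click)
root.mainloop()      # run the session; close the window to stop
log.close()
```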