
Process


Presentation Transcript


  1. Process (design process diagram): Analysis: analyze users, analyze tasks, define goals (site visits, contextual inquiry) → Early design: design on paper, evaluate (heuristic evaluations, cognitive walkthroughs) → Late design: prototype, evaluate (user studies) → Develop: evaluate (user studies, function tests) → Implement: evaluate → Ship: evaluate (field studies, call center, interaction logs)

  2. What you need (for a cognitive walkthrough) • User personas • who the users will be and what kind of goals and experience they'll bring to the job • A description or a low-fidelity prototype of the user interface • Task descriptions • A complete, written list of the actions needed to complete the task with the interface • An evaluation team: • Design team • Design team and users together • Design team and other skilled designers

  3. Grading the Cognitive Walkthrough • User personas – 30 pts • Did they observe or analyze actual users doing related activities? Grade _____ (15 pts max) • Are the personas complete with relevant (and observed) personal details and goals? Grade _____ (15 pts max) • Interface Description – 30 pts • Is there a description of a conceptual model? • Grade _____ (15 pts max) • A description or a low-fidelity prototype of the user interface – 15 pts • Is the description detailed enough to evaluate cues, affordances, and feedback at each decision point? Grade _____ (15 pts max) • Task descriptions – 40 pts • Are they concrete, detailed examples of all (or most) of the activities the system should support? • Grade _____ (20 pts max) • A complete, written list of the actions needed to complete the task with the interface – 25 pts • Do you have a step-by-step story of how each persona would accomplish each task with the proposed interface? Grade _____ (20 pts max)

  4. What to do? Tell a story • about why the user would select each action, and critique the story to make sure it's believable with these questions: • Will users be trying to produce whatever effect the action has? • Will users see the control (button, menu, switch, etc.) for the action? • Once users find the control, will they recognize that it produces the effect they want? • After the action is taken, will users understand the feedback they get, so they can go on to the next action with confidence?

  5. Process (design process diagram, repeated): Analysis: analyze users, analyze tasks, define goals (site visits, contextual inquiry) → Early design: design on paper, evaluate (heuristic evaluations, cognitive walkthroughs) → Late design: prototype, evaluate (user studies) → Develop: evaluate (user studies, function tests) → Implement: evaluate → Ship: evaluate (field studies, call center, interaction logs)

  6. How to conduct a usability test • This is the single most important thing you will learn in this class • Watching people try to use the interface answers many of the important questions of user interface design • good visibility, adequate feedback, a good system model, etc. • Usability testing is like conducting an experiment, but has different goals.

  7. Usability testing & research • Usability testing: improve products; few participants; results inform design; usually not completely replicable; conditions controlled as much as possible; procedure planned; results reported to developers • Experiments for research: discover knowledge; many participants; results validated statistically; must be replicable; strongly controlled conditions; experimental design; results reported to the scientific community

  8. Usability testing • Goals & questions focus on how well users perform tasks with the product. • Comparison of products or prototypes is common. • Focus is on • types of errors • time to complete a task • Data collected by • video • good note taking • interaction logging

  9. Basic Elements of Usability Testing • Development of test objectives or problem statement • NOT hypotheses testing • Use of a representative sample of end users • NOT randomly chosen • Representation of the actual work environment • Observation of end users using a representation of the product • Controlled and extensive interrogation and probing of the participants by the test monitor • Collection of quantitative and qualitative performance and preference measures • Recommendation of improvements to the design of the product

  10. Testing conditions • Usability lab or other controlled space. • Emphasis on: • selecting representative users; • developing representative tasks. • 3-5 users typically selected. • Tasks usually last no more than 30 minutes. • The test conditions should be the same for every participant. • Informed consent form explains procedures and deals with ethical issues.

  11. Limitations of Testing • Testing is in an artificial situation • Test results do not prove that a product works • Participants are rarely fully representative of the target population. • You can’t test everything. • Must be combined with other evaluation techniques.

  12. Observation: A Critical Difference • Observing seems easy but is very complicated • Requires careful consideration and skill • Types of observation: • direct observation • video recording • data logging software • Disadvantages of observation? • experimenter effect • Hawthorne effect (1939) • A distortion of research results caused by the response of subjects to the special attention they receive from researchers
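
A minimal sketch of the kind of data-logging software mentioned above, assuming plain Python; the class name, event names, and log file path are hypothetical, not something from the slides.

```python
# Minimal interaction-logging sketch (hypothetical event names and file path).
# Each record is timestamped so task times and error counts can be computed later.
import csv
import time

class InteractionLogger:
    def __init__(self, path="session_log.csv"):
        self.file = open(path, "w", newline="")
        self.writer = csv.writer(self.file)
        self.writer.writerow(["timestamp", "participant", "event", "detail"])

    def log(self, participant, event, detail=""):
        self.writer.writerow([time.time(), participant, event, detail])
        self.file.flush()  # keep data even if the session script crashes

    def close(self):
        self.file.close()

# Example usage during a test session:
log = InteractionLogger()
log.log("P01", "task_start", "Task 3: add a contact record")
log.log("P01", "error", "opened Schedule dialog instead of Contacts")
log.log("P01", "task_end", "Task 3: add a contact record")
log.close()
```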

  13. Planning a User Test: User Test Proposal • Problem statement or test objective • Participant profile • Scenarios/Tasks to perform • Measures to collect • Data collection methods • Testing environment • Roles of design team members

  14. Choosing Users to Test • The best test users will be people who are representative of the people you expect to have as users • if you can't find any, worry. • surrogates are OK for finding big problems • but listen to real users more

  15. How many participants is enough for user testing? • The number is a practical issue. • Depends on: • schedule for testing; • availability of participants; • cost of running tests. • Typically 3-5 participants. • Some experts argue that testing should continue until no new insights are gained.
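
The 3-5 participant rule of thumb is often justified with a simple problem-discovery model that is not on the slide: if each participant reveals any given problem with probability L, then n participants uncover about 1 - (1 - L)^n of the problems. A quick sketch under that assumption (the value L = 0.3 is a commonly quoted estimate, not a figure from this deck):

```python
# Problem-discovery curve: proportion of usability problems found by n participants,
# assuming each participant reveals any given problem with probability L.
# L = 0.3 is an assumed, commonly quoted value, not something from the slides.
def proportion_found(n, L=0.3):
    return 1 - (1 - L) ** n

for n in range(1, 9):
    print(f"{n} participants: about {proportion_found(n):.0%} of problems found")

# With L = 0.3, five participants already uncover roughly 83% of the problems,
# which is one argument for running small tests often rather than one big test.
```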

  16. Treat your test users carefully and professionally • you are putting them in an uncomfortable situation • guidelines: • voluntary, informed consent • free to stop at any time (don't ever pressure them) • avoid friends, co-workers, subordinates • monitor their attitude during the test, stressing that the software is being tested, not them • protect their privacy

  17. Selecting Tasks for Testing • They should be like real tasks. • But they need to be simplified. • They need to be specific • e.g. “Find a book about Mars exploration” • Select tasks that exercise all aspects of your interface • especially the weak ones

  18. It’s better to do more tests than to have more users

  19. Test Objectives • What is the focus of each user test? (evaluation) • easy to learn, easy to remember, efficient to use, few errors, aesthetically pleasing • General objective example • will new users be able to navigate through the menus quickly and easily? [Learnability] • Specific objective example • will new users be able to find the right menu path to read, write, send, respond to, forward, save, and delete a message? • What you want to learn from the test determines • who the participants are, what tasks they will perform during the evaluation, and what measures to collect

  20. Measures to Collect • Two types of data • process data • observations of what users are doing and thinking • bottom-line data (i.e., performance measures) • counts of actions / behaviors that you see • time, errors, successes
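
One way to keep the two kinds of data separate in practice is to record them per participant and per task; a sketch in Python, with illustrative field names and values (nothing here is prescribed by the slides).

```python
# Sketch of a per-task record separating process data (observations) from
# bottom-line data (performance measures). Field names and values are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskResult:
    participant: str
    task: str
    completed: bool               # bottom-line: success / failure
    time_seconds: float           # bottom-line: time on task
    errors: int                   # bottom-line: error count
    observations: List[str] = field(default_factory=list)  # process data: what the user did and said

result = TaskResult(
    participant="P02",
    task="Schedule a meeting",
    completed=True,
    time_seconds=412.0,
    errors=2,
    observations=["Expected weekends to appear in the week view",
                  "Hesitated at the unlabeled toolbar icons"],
)
```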

  21. The Thinking Aloud Method • the best way to get process data • Instructions: • simple: “Tell me what you are thinking about as you work.” • make it clear that it is the system, not the user, that is being tested, so that if they have trouble it's the system's problem, not theirs • Recording is important • Video - most expensive • Audio - not as revealing • Screen capture (e.g. Camtasia) • Note taking - fast & effective

  22. Role of the Test Monitor • Monitor the session impartially - BE OBJECTIVE • Let the feedback come from the product not you • Prompt only to keep comments coming • Help only when hopelessly stuck • If you make a mistake, record it and continue • Make sure the participants are really finished with a task before going to the next • Keep it relaxed and light. Laugh about interface problems. • SUPPORT the test user, protect their self-esteem • Probe when necessary.

  23. Probing • Don't show surprise • Focus on what the participants expected to happen • Don't ask leading or direct questions • e.g. “Why did you go all the way back to the home screen?” • Ask neutral questions • e.g. “What are you thinking right now?” • “You seem surprised/puzzled/frustrated.” • “What did you expect to happen?” • Don't problem solve, design, or get into long discussions.

  24. When to assist • When a participant is very lost or very confused. • When performing a required task makes a participant feel uncomfortable. • When a participant is exceptionally frustrated and may give up. • When the system is incomplete and you need to fill in. • When a bug occurs.

  25. How to assist • NEVER, EVER blame the participant for the problem • even indirectly. • Find out what the problem is. Don't automatically tell them an answer to what you think is their problem. • Don't reveal too much, just enough to get past the obstacle.

  26. Simple Single-Room Setup • Advantages • test monitor can see what is going on with the participant: verbal cues, facial expressions, mannerisms • allows interaction with the participant in early, exploratory tests • may be more natural to think aloud with someone in the room • Disadvantages • test monitor's behavior may affect the participant's behavior • there is limited space for observers

  27. Modified Single-Room Setup • Advantages • Test monitor can be less concerned about controlling body language, mannerisms, taking notes, etc. • Participant does not feel isolated since monitor is still in the room • Participant more likely to think aloud • Disadvantages • Monitor can’t see subtle facial expressions / mannerisms as well • Monitor location may make user feel self-conscious or uneasy

  28. Electronic Observation-Room Setup • Advantages • Same as single-room setup • Observers don’t interfere with or bias the users • Disadvantages • Monitor behavior can bias user • Requires the use of 2 rooms at a time

  29. Classic Testing Laboratory Setup • Advantages • Unobtrusive data collection (but the user still knows she is being videotaped) • Monitors and observers can talk to each other and discuss how to solve problems that come up • Setup can accommodate many observers • Disadvantages • Requires lots of money, resources, and commitment to testing

  30. Testing Environment Trade-Offs • Test monitor access to participant • Accommodations for the observers • location • number of observers allowed • Cost • equipment: video cameras, data-logging equipment, one-way mirrors, etc. • space: number and size of rooms occupied during testing

  31. Simple Observation Setup • You can use a screen-capture program such as Camtasia: http://www.techsmith.com/freetrials.asp

  32. Using the Results of Process Data (Think Aloud) • Summarize the data • make a list of all critical incidents (CI) • positive: something they liked or that worked well • negative: difficulties with the UI • include references back to the original data • try to judge why each difficulty occurred • What does the data tell you? • did the UI work the way you thought it would? • is your model consistent with the user's conceptual model? • a great way to better understand the user's conceptual model • something missing?

  33. Using the Results (Think Aloud) • Update the task analysis and rethink the design • rate severity and ease of fixing critical incidents • fix severe problems and make the easy fixes • Will thinking aloud give the right answers? • not always • if you ask a question, people will always give an answer, even when it has nothing to do with the facts
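
A small sketch of the "rate severity and ease of fixing, then fix severe problems and make the easy fixes" step above; the incidents, the 0-3 scales, and the sort order are illustrative assumptions, not part of the slides.

```python
# Sketch of prioritizing critical incidents by severity and ease of fixing
# (the scales and example incidents are made up for illustration).
incidents = [
    {"desc": "Could not find the Save control",  "positive": False, "severity": 3, "ease_of_fix": 2},
    {"desc": "Liked the undo feedback",           "positive": True,  "severity": 0, "ease_of_fix": 0},
    {"desc": "Confusing label on Export dialog",  "positive": False, "severity": 1, "ease_of_fix": 3},
]

# Fix severe problems first, then the easy fixes: sort negative incidents by
# severity (descending), breaking ties in favor of the easier fix.
to_fix = sorted(
    (i for i in incidents if not i["positive"]),
    key=lambda i: (-i["severity"], -i["ease_of_fix"]),
)
for i in to_fix:
    print(f'severity {i["severity"]}, ease {i["ease_of_fix"]}: {i["desc"]}')
```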

  34. Advantages of the “Thinking Aloud” Method • Preference and performance information is captured simultaneously so you don't have to remember to ask later. • Can help participants to focus and concentrate. • Can receive early clues about misconceptions and confusions which helps to identify the source of the problems.

  35. Disadvantages of “Thinking Aloud” • May be difficult for some “non-analytical” participants • Slows the thought process • which may actually prevent errors • not reflective of workplace behavior • Requires more participant effort and can be exhausting

  36. Measures to Collect • Two types of data • process data • observations of what users are doing and thinking • bottom-line data (i.e., performance measures) • counts of actions / behaviors that you see • time, errors, successes

  37. Measuring Bottom-Line Usability • Situations in which numbers are useful • time requirements for task completion • number of successful completions • number of errors made by users • compare 2 designs on speed or number of errors • Do not combine with the think-aloud protocol • talking can affect speed and accuracy (negatively and positively) • your project is an exception to this general rule • Time is easy to record • Error or successful completion is harder • define in advance what this means

  38. Some types of data • Time to complete a task. • Time to complete a task after a specified time away from the product. • Number and type of errors per task. • Number of errors per unit of time. • Number of navigations to online help or manuals. • Number of users making a particular error. • Number of users completing the task successfully.
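
As a sketch of how these counts get tallied, the snippet below turns a few invented per-participant results into a completion rate, mean time on task, and mean error count; the numbers and field names are illustrative only.

```python
# Sketch of turning raw per-participant results into bottom-line counts
# (completion rate, mean time, errors); the sample numbers are invented.
from statistics import mean

results = [
    {"participant": "P01", "completed": True,  "time_min": 12, "errors": 1},
    {"participant": "P02", "completed": True,  "time_min": 18, "errors": 3},
    {"participant": "P03", "completed": False, "time_min": 30, "errors": 5},
    {"participant": "P04", "completed": True,  "time_min": 9,  "errors": 0},
]

completion_rate = sum(r["completed"] for r in results) / len(results)
print(f"Completed successfully: {completion_rate:.0%} of participants")
print(f"Mean time on task: {mean(r['time_min'] for r in results):.1f} min")
print(f"Errors per participant: {mean(r['errors'] for r in results):.1f}")
```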

  39. Bottom-Line Data • Typical performance measures: time to finish a task; time spent navigating menus; time spent in the online help; time to find information in the manual; time spent recovering from errors; number of wrong menu choices; number of incorrect choices in dialog boxes; number of wrong icon choices; number of repeated errors (the same error more than once); number of calls to the “help desk” or for “aid”; number of screens of online help looked at; number of repeated looks at the same help screen; number of times turned to the manual; number of pages looked at in each visit to the manual; observations of frustration / confusion / satisfaction • Typical subjective user preference measures: ratings of ease of learning, ease of using the product, ease of doing a particular task, ease of installing the product, helpfulness of the online help, ease of finding information in the manual, ease of understanding the information, usefulness of the examples in the help • Preferences (and reasons): over a previous version, over a competitor's product, over the way they are doing their tasks now • Predictions: Would you buy this product? Would you pay extra for the manual? How much would you pay for this product? • Spontaneous comments: “I don't understand this message!”

  40. Statistical Analysis of Bottom-Line Data • Example: trying to get task time <= 30 min. • test gives: 20, 15, 40, 90, 10, 5 • sample mean = 30, median = 17.5; looks good! • wrong answer: we can't be certain of anything • Factors contributing to our uncertainty • small number of test users (n = 6) • results are very variable (standard deviation = 32) • rough rule: the 95% confidence interval for the true mean is the sample mean plus or minus about two standard errors (s/√n), a bit more for a sample this small • here the 95% confidence interval is roughly [-3 minutes, 63 minutes], far too wide to conclude the 30-minute goal is met
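
To make the arithmetic above reproducible, here is a short check of the slide's example in Python; the t critical value is taken from a standard table for 5 degrees of freedom.

```python
# Re-checking the slide's example: six task times with a 30-minute goal.
# The point stands either way: with n = 6 and high variability, the interval is
# far too wide to conclude that typical task time is under 30 minutes.
from statistics import mean, stdev
from math import sqrt

times = [20, 15, 40, 90, 10, 5]          # minutes, from the slide
n = len(times)
m = mean(times)                          # 30.0
s = stdev(times)                         # ~31.8 (sample standard deviation)
se = s / sqrt(n)                         # standard error of the mean, ~13
t_crit = 2.571                           # t critical value, 95%, 5 degrees of freedom
low, high = m - t_crit * se, m + t_crit * se
print(f"mean = {m:.1f} min, 95% CI roughly [{low:.0f}, {high:.0f}] min")
# -> mean = 30.0 min, 95% CI roughly [-3, 63] min
```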

  41. Measuring User Preferences • How much users like or dislike the system • Likert scale • Semantic differential scale • can ask users to rate on a scale of 1 to 10 • can have them choose among statements • “Best UI I’ve ever used”, “better than average”... • If you get many low ratings, you are in trouble • Can get some useful data by asking open-ended questions about: • what they liked, disliked, where they had trouble, best part, worst part, etc.
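
A sketch of summarizing Likert-style preference ratings as mentioned above; the statements, the 1-7 scale, and the scores are invented for illustration.

```python
# Sketch of summarizing Likert-scale preference ratings (1 = strongly disagree,
# 7 = strongly agree); statements and ratings are invented examples.
from statistics import mean, median

ratings = {"It was easy to find the information I needed": [6, 5, 2, 6, 4],
           "The error messages helped me recover":          [2, 3, 1, 2, 4]}

for statement, scores in ratings.items():
    print(f"{statement}: mean {mean(scores):.1f}, median {median(scores)}, "
          f"low ratings (<=3): {sum(s <= 3 for s in scores)} of {len(scores)}")
```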

  42. Example tasks • Task 1 – Install the software • (YOU HAVE UP TO 15 MINUTES FOR THIS TASK) • There is an envelope on the desk entitled DiaryMate. It contains a diskette and an instruction manual. • When you are ready, install the software. All the information you need is provided in the envelope. • LET US KNOW WHEN YOU ARE READY TO MOVE ON • Task 2 – Familiarization period • Spend as long as you need to familiarise yourself with the diary and address book functions. • (YOU HAVE UP TO 20 MINUTES) • LET US KNOW WHEN YOU ARE READY TO MOVE ON • Task 3 – Add a contact record • (YOU HAVE ABOUT 15 MINUTES FOR THIS TASK) • Use the software to add the following contact details. • NAME: Dr Gianfranco Zola • COMPANY: Chelsea Dreams Ltd • ADDRESS: 25 Main Street, Los Angeles, California 90024 • TEL: (work) 222 976 3987, (home) 222 923 2346 • LET US KNOW WHEN YOU ARE READY TO MOVE ON • Task 4 – Schedule a meeting • (YOU HAVE ABOUT 15 MINUTES FOR THIS TASK) • Use the software to schedule the following meeting. • DATE: 23 November 2001 • PLACE: The Blue Flag Inn, Cambridge • TIME: 12.00 AM to 1.30 PM • ATTENDEES: Yourself and Gianfranco Zola. • LET US KNOW WHEN YOU HAVE FINISHED

  43. Example of a finding and recommendations: • Participants were unwilling to read a dense page of text • Finding: 9 of 10 participants who successfully got to the page that had the information they were looking for in this scenario expressed dismay at how much text there was on the page. They said that it was too much to read. (Show a small picture of the page; also include a few actual quotes here.) • When asked what they would do to get the answer to the question in the scenario, five of nine said they would guess the answer; four of nine said they would try to find a person to call or would ask someone they knew. • Recommendations: Break up the information on the page into a series of short questions and answers. Even when using bulleted lists (as there are on the page in this scenario), put space between each bulleted item if the items are longer than a few words. Also have only a few bullets in each list (not 20 as in the list on the page in the scenario).

  44. Usability test report • An example usability test report • Problem • Some Ps thought the week and month views should show Saturday and Sunday. Omitting Saturday and Sunday means that their labels (e.g., "Week of August 9") sometimes conflict with the first date displayed. Also, some Ps wanted to be able to set and see appointments on weekends, e.g., company picnics, working weekends, etc. And advance notice for ToDos counts weekend days, even though they aren't shown. • Recommendation • Show weekends if possible. Perhaps provide a toggle at the left: "Show: [ ] weekends". • Severity • high • Status • Done. An option now allows showing weekends or not. Considering ways to make the setting more visible/accessible
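
Findings like this are easier to track if they are kept in a structured list; a sketch below, where the severity ranking and the second entry's recommendation are illustrative assumptions rather than content from the slide.

```python
# Sketch of keeping report findings in a structured form so they can be sorted
# and tracked; fields mirror the example above (problem, recommendation,
# severity, status). The ranking scheme and second recommendation are invented.
findings = [
    {"problem": "Week and month views omit Saturday and Sunday",
     "recommendation": "Show weekends, or provide a 'Show weekends' toggle",
     "severity": "high", "status": "done"},
    {"problem": "Advance notice for ToDos counts weekend days that are not shown",
     "recommendation": "Count only displayed days, or explain the behavior",
     "severity": "medium", "status": "open"},
]

rank = {"high": 0, "medium": 1, "low": 2}
open_items = sorted((f for f in findings if f["status"] == "open"),
                    key=lambda f: rank[f["severity"]])
for f in open_items:
    print(f'[{f["severity"]}] {f["problem"]} -> {f["recommendation"]}')
```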

  45. Apple's user observation guidelines • Describe the purpose of the observation • Tell the participant that it's OK to quit at any time • Talk about the equipment in the room • Explain how to “think aloud” • Explain that you will not provide help • Describe the tasks and introduce the product • Ask if there are any questions before you start, then begin • Conclude the observation • Use the results

  46. Several true things about testing • If you want a great site, you've got to test • You cannot see your site freshly. Testing reminds you that not everyone thinks like you • Testing one user is 100 percent better than testing none. • Testing one user early is better than testing 50 at the end of a project. • The importance of recruiting representative users is overrated • The point is not to prove but to inform.

  47. Top Ten List of Things NOT to say to Participants* 10. Saying “Remember, we are not testing you,” more than three times 9. Are you familiar with the term “outlier”? 8. No one's ever done THAT before. 7. HA! HA! HA! 6. That's impossible! I didn't know it could go in upside down! 5. Could we stop for a while? Watching you struggle like this is making me tired. 4. I didn't really mean you could press ANY button. 3. Yes, it is very natural for observers to cry during a test. 2. Don't feel bad, many people take 15 or 16 tries. 1. Are you SURE you've used computers before? * From Jeffrey Rubin, Handbook of Usability Testing.

  48. Link to book site with example test video
