
Evaluating Interface Designs




  1. Evaluating Interface Designs
  • How do you know your design is any good?
  • When will you know?

  2. Evaluating Interface Designs
  • Determinants of the evaluation plan
    • Design stage (early, middle, late)
    • Novelty of the project (well defined vs. exploratory)
    • Number of expected users
    • Criticality of the interface (e.g., life-critical medical systems vs. museum-exhibit support)
    • Cost of the product and finances allocated for testing (typically in the range of 5% to 20% of the total project budget)
    • Time available
    • Experience of the design and evaluation team
  • Failure to perform and document testing can result in
    • Failed contract proposals
    • Malpractice lawsuits

  3. Evaluating Interface Designs
  • Expert Reviews
    • Ask colleagues or customers for their feedback
    • Expert reviews can be conducted on short notice and with little time commitment
    • Can occur early or late in the design phase
    • The deliverable can be a formal report with problems identified and recommendations
    • The deliverable can also be an informal presentation to the development team and managers
    • Expert reviewers may require training on the task domain

  4. Evaluating Interface Designs
  • Expert Review Methods
    • Heuristic Evaluation http://www.youtube.com/watch?v=hWc0Fd2AS3s&feature=related
      • Critique of the interface for conformance to a short list of heuristics (a recording sketch follows below):
        • Consistency
        • Universal usability
        • Informative feedback
        • Closure
        • Prevent errors
        • Easy reversal of actions
        • Internal locus of control (user as initiator)
        • Reduce short-term memory load
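A minimal sketch of how findings from a heuristic evaluation might be recorded against this list. The Finding structure, severity scale, and example data are assumptions for illustration, not part of the method itself:

```python
from dataclasses import dataclass

# The eight heuristics listed on the slide
HEURISTICS = [
    "Consistency", "Universal usability", "Informative feedback",
    "Closure", "Prevent errors", "Easy reversal of actions",
    "Internal locus of control", "Reduce short-term memory load",
]

@dataclass
class Finding:
    screen: str      # where the problem was observed
    heuristic: str   # which heuristic it violates
    severity: int    # 1 (cosmetic) .. 4 (catastrophic); an assumed scale
    note: str        # reviewer's description

findings = [
    Finding("Checkout", "Prevent errors", 3,
            "No confirmation before deleting the cart"),
    Finding("Search", "Informative feedback", 2,
            "No indication that a query is running"),
]

# Group findings by heuristic to see where the design is weakest
for h in HEURISTICS:
    hits = [f for f in findings if f.heuristic == h]
    if hits:
        worst = max(f.severity for f in hits)
        print(f"{h}: {len(hits)} problem(s), worst severity {worst}")
```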

  5. Evaluating Interface Designs
  • Expert Review Methods
    • Guidelines Review
      • Based on organizational guidelines documents

  6. Evaluating Interface Designs
  • Expert Review Methods
    • Consistency Inspection
      • Terminology, fonts, colors, layout, input/output formats

  7. Evaluating Interface Designs
  • Expert Review Methods
    • Cognitive walkthrough
      • Experts simulate users walking through the interface to carry out a typical task
      • Start with high-frequency tasks
      • Critical tasks should definitely be evaluated

  8. Evaluating Interface Designs
  • Expert Review Methods
    • Bird’s-Eye View
      • Study a complete set of UI screens on the floor (or pinned to walls)
      • Provides an easy way to see fonts, colors, and terminology

  9. Evaluating Interface Designs
  • Expert Review Methods
    • Expert-Review Report
      • Can use the guidelines document to structure the report
      • Comment on novice, intermittent, and expert features
      • Rank recommendations by importance and effort level (the grid below; a triage sketch follows):

                                    Effort Level
                                    Low         High
        User Importance    Low      1, 3, 5     2, 4, 6
                           High     7, 9, 11    8, 10, 12
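As a minimal, hypothetical sketch of this triage, recommendations tagged with importance and effort can be sorted into a working order. The data and the ordering policy (high-importance, low-effort fixes first) are assumptions, not prescribed by the slide:

```python
# Each expert-review recommendation carries an importance and effort rating
recommendations = [
    {"text": "Rename 'Submit' to 'Place order'", "importance": "high", "effort": "low"},
    {"text": "Redesign the navigation hierarchy", "importance": "high", "effort": "high"},
    {"text": "Align form labels consistently",    "importance": "low",  "effort": "low"},
    {"text": "Rebuild the help system",           "importance": "low",  "effort": "high"},
]

# Assumed triage policy: high importance before low, low effort before high
ORDER = {("high", "low"): 0, ("high", "high"): 1,
         ("low", "low"): 2, ("low", "high"): 3}

ranked = sorted(recommendations,
                key=lambda r: ORDER[(r["importance"], r["effort"])])
for rank, rec in enumerate(ranked, start=1):
    print(rank, rec["text"])
```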

  10. Evaluating Interface Designs
  • Usability Testing and Laboratories
    • Controlled experiments
      • Generally have at least two treatments
      • Need to show statistically significant differences (a minimal sketch follows below)
      • Goal is validation or rejection of a hypothesis
    • Usability tests
      • Goal is to find flaws in the interface
      • Fewer participants
      • Outcome is a report
    • Both kinds of study use a carefully prepared set of tasks
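Below is a minimal sketch of the statistical comparison a controlled experiment requires: a two-sample t-test on task-completion times for two treatments. The data, the 0.05 significance level, and the scipy dependency are assumptions for illustration:

```python
from scipy import stats

# Hypothetical task-completion times (seconds) under two treatments
treatment_a = [52, 48, 61, 55, 47, 59, 50, 53]
treatment_b = [44, 41, 50, 39, 46, 42, 45, 43]

# Two-sample t-test: is the difference in mean completion time significant?
t, p = stats.ttest_ind(treatment_a, treatment_b)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject the null hypothesis: the treatments differ in mean time.")
else:
    print("No statistically significant difference at the 0.05 level.")
```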

  11. Evaluating Interface Designs
  • Usability Testing and Laboratories
    • Having a usability lab on site shows a commitment to customers, users, and employees
    • Generally contains two 10 × 10 foot rooms, divided by a half-silvered mirror
    • Staffed by one or more people, who ideally have been involved in early task analysis or design reviews
    • Example: display-based phones

  12. Evaluating Interface Designs
  • Usability Testing and Laboratories
    • Two to six weeks before the usability test
      • Develop the detailed test plan (list of tasks, subjective-satisfaction questions, debriefing questions); a sketch of one follows below
      • Identify the number, types, and source of the participants
        • Sources: customer sites, personnel agencies, advertisements
      • Conduct a pilot test one week ahead of testing
    • Participants
      • Notify them that it is the software being evaluated, not them
      • Inform them of the tasks they will be performing (e.g., ordering a product on a website)
      • Inform them of how long the session will last (normally 1 to 3 hours)
      • Obtain informed consent
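As a minimal sketch, the detailed test plan can be captured as structured data so it is easy to review and reuse. Every field and value below is hypothetical, merely mirroring the checklist above:

```python
# A hypothetical usability test plan, structured per the slide's checklist
test_plan = {
    "tasks": [
        "Order a product on the website",
        "Cancel the order and request a refund",
    ],
    "satisfaction_questions": [
        "How easy was it to find the product you wanted? (1-7)",
    ],
    "debriefing_questions": [
        "What was the most confusing part of the session?",
    ],
    "participants": {"count": 6, "source": "customer sites"},
    "session_length_hours": (1, 3),          # normally 1 to 3 hours
    "pilot_test": "one week before testing",
}

print(f"{len(test_plan['tasks'])} tasks, "
      f"{test_plan['participants']['count']} participants planned")
```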

  13. Evaluating Interface Designs
  • Usability Testing and Laboratories
    • Informed consent
      • I have freely volunteered to participate in this study
      • I have been informed in advance of the tasks and procedures
      • I have been given the opportunity to ask questions
      • I am aware that I have the right to withdraw consent and to discontinue participation at any time, without prejudice to my future treatment
      • My signature below may be taken as affirmation of all the above statements; it was given prior to my participation in this study
    • Post-task session
      • Participants can make general comments or suggestions, or respond to specific questions
    • Videotaping
      • Reviewing can be tedious
      • Log and annotate during the test
      • Look for critical incidents

  14. Evaluating Interface Designs
  • Usability Testing and Laboratories
    • Eye Tracking – Heat Maps

  15. Evaluating Interface Designs
  • Usability Testing and Laboratories
    • Paper mockups
      • Early in the design phase
      • Get user reactions to wording, layout, and sequencing

  16. Evaluating Interface Designs
  • Usability Testing and Laboratories
    • Discount usability testing
      • Three to six participants (allows prompt revision and repeated testing)
      • Formative evaluation – identifies problems that guide redesign
      • Summative evaluation – provides evidence for a product announcement
        • “99% of our 100 testers completed their tasks without assistance” (see the sketch below for the uncertainty behind such a claim)
    • Competitive usability testing
      • Compares the new interface to previous versions or similar products from competitors
      • Within-subjects designs are the most powerful
    • Think Aloud
      • http://www.youtube.com/watch?v=tbKnFaW69e0&feature=related
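A summative claim like "99% of 100 testers succeeded" is more honest with a confidence interval attached. A minimal sketch using the Wilson score interval; the 95% confidence level is an assumption:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion (completion rate)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# The slide's summative claim: 99 of 100 testers completed their tasks
lo, hi = wilson_interval(99, 100)
print(f"Completion rate 99/100, 95% CI: {lo:.1%} to {hi:.1%}")
# Roughly 94.6% to 99.8%: the true rate is plausibly well below 99%
```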

  17. Evaluating Interface Designs
  • Usability Testing and Laboratories
    • Field tests and portable labs
      • Put new interfaces to work in realistic environments for a fixed trial period
      • Need portable labs with videotaping and logging facilities
    • Remote usability testing
      • Web-based applications tested internationally, online
      • Can recruit testers via email
      • Less control over user behavior, and less chance to observe their reactions
      • Usage logs and phone interviews are useful supplements
      • UserWorks, Inc.
    • Can-you-break-this tests
      • Destructive-testing approach
      • Users attempt to find fatal flaws

  18. Evaluating Interface Designs
  • Usability Testing and Laboratories
    • Shortcomings
      • Limited coverage of interface features
      • Hard to predict success in long-term usage
      • The lab environment differs from the real work environment

  19. Evaluating Interface Designs
  • Survey Instruments
    • Often a companion to usability testing and expert reviews
    • Specify survey goals
    • Ask users for their subjective impressions of specific aspects of the interface, e.g., the representation of:
      • Task-domain objects and actions (e.g., appointments, PAT, treatment series)
      • Interface-domain metaphors (e.g., shopping cart)
      • Syntax of inputs and design of displays (e.g., copy, add)
    • User-specific information
      • Background (e.g., age, gender, education, income)
      • Experience with computers (e.g., software packages, length of time, depth of knowledge, TurboTax)

  20. Evaluating Interface Designs
  • Survey Instruments
    • User-specific information (continued)
      • Job responsibilities (e.g., trenches, manager)
      • Personality type (e.g., introvert/extrovert, risk taking, early adopter)
      • Reasons for not using an interface (e.g., too complex, too slow)
      • Familiarity with features (e.g., printing, shortcuts, tutorials)
      • Feelings about using the interface (e.g., confused vs. clear, frustrated vs. in control, bored vs. excited)
    • Coleman and Williges (1985) – bipolar semantically anchored items (a scoring sketch follows below):
      • Hostile       1 2 3 4 5 6 7   Friendly
      • Vague         1 2 3 4 5 6 7   Specific
      • Misleading    1 2 3 4 5 6 7   Beneficial
      • Discouraging  1 2 3 4 5 6 7   Encouraging
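Scoring such bipolar items is simple: average each item across participants to see which dimension of the interface needs work. A minimal sketch with hypothetical responses (1 = negative anchor, 7 = positive anchor):

```python
# Items from the Coleman and Williges example above
items = ["Hostile-Friendly", "Vague-Specific",
         "Misleading-Beneficial", "Discouraging-Encouraging"]

# Hypothetical responses: one row per participant, one column per item
responses = [
    [6, 5, 6, 7],   # participant 1
    [4, 3, 5, 4],   # participant 2
    [7, 6, 6, 6],   # participant 3
]

# Mean score per item highlights the weakest dimension
for i, item in enumerate(items):
    scores = [r[i] for r in responses]
    print(f"{item}: mean {sum(scores) / len(scores):.1f}")
```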

  21. Evaluating Interface Designs
  • Survey Instruments
    • Questionnaire for User Interaction Satisfaction (QUIS) – Shneiderman
      • Readability of characters
      • Layout of displays
      • Meaningfulness of icons
      • Interface actions (e.g., shortcuts)
      • Terminology
      • Screen sequencing

  22. Evaluating Interface Designs
  • Survey Instruments
    • QUIS: General Content
      • System experience (e.g., time spent on the application)
      • Past experience (e.g., operating systems, devices, software)
      • Overall reactions (e.g., terrible/wonderful; rigid/flexible)
      • Screen objects (e.g., characters, highlighting, layouts, sequence)
      • Terminology (e.g., error messages, amount of system feedback)
      • Learning (e.g., getting started, time to learn advanced features)
        • Exploration of features by trial and error
        • Remembering names and use of commands
        • Steps to complete a task are in a logical sequence
      • System capabilities (e.g., speed, reliability)
      • User manuals, online help, and tutorials
      • Multimedia (quality of picture and sound)
      • Teleconferencing (e.g., set-up, image quality, connector indicators)
      • Software installation

  23. Evaluating Interface Designs
  • Acceptance Tests
    • Commonly used for software acceptance today
    • Specific test cases, possibly with response-time requirements
    • Applied to usability acceptance:
      • Time to learn specific functions
      • Speed of task completion
      • Rate of errors
      • User retention of commands
      • Subjective user satisfaction
    • The goal is not to detect flaws, but to verify adherence to requirements (a checking sketch follows below)
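A usability acceptance test can be expressed as measurable thresholds checked against observed results. A minimal sketch; every criterion, threshold, and measurement below is hypothetical:

```python
# Hypothetical acceptance criteria: (threshold, whether it is a max or min)
requirements = {
    "time_to_learn_min":  (25.0, "max"),   # minutes to learn core functions
    "task_time_sec":      (120.0, "max"),  # mean time on the benchmark task
    "error_rate":         (0.05, "max"),   # errors per task
    "satisfaction_score": (5.5, "min"),    # mean rating on a 1-7 scale
}

# Hypothetical measured results from the acceptance test sessions
measured = {
    "time_to_learn_min": 22.4,
    "task_time_sec": 131.0,
    "error_rate": 0.03,
    "satisfaction_score": 5.9,
}

for name, (threshold, kind) in requirements.items():
    value = measured[name]
    ok = value <= threshold if kind == "max" else value >= threshold
    print(f"{name}: {value} ({'PASS' if ok else 'FAIL'} vs {kind} {threshold})")
```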

  24. Evaluating Interface Designs
  • Evaluation During Active Use
    • Major changes should be announced semi-annually or annually
    • Interviews and focus groups
      • One-on-one interviews yield comments that can then be discussed with a larger audience
    • Continuous user-performance data logging (a logging sketch follows below)
      • The software can support the collection of:
        • Patterns of usage (e.g., new vs. existing patient)
        • Speed of user performance
        • Rate of errors
        • Frequency of errors (a frequently mis-used feature is a candidate to receive specific attention)
        • Access to help or support on an issue
        • Frequently accessed features (simplify access to them)
        • Rarely accessed features (why are they being avoided?)
      • Watch for potential privacy issues
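Continuous performance logging amounts to appending structured usage records that can later be aggregated into the measures listed above. A minimal sketch; the record fields, feature names, and file name are all assumptions:

```python
import json
import time

def log_event(logfile, user_id: str, feature: str, ok: bool, duration_s: float):
    """Append one usage record: who used which feature, how fast, and whether
    it succeeded. One JSON object per line keeps aggregation simple."""
    record = {
        "ts": time.time(),
        "user": user_id,
        "feature": feature,
        "success": ok,
        "duration_s": duration_s,
    }
    logfile.write(json.dumps(record) + "\n")

with open("usage.log", "a") as f:
    log_event(f, "u042", "new_patient_form", ok=True, duration_s=41.7)
    log_event(f, "u042", "help_search", ok=False, duration_s=12.3)

# Aggregating these records over time reveals usage patterns, speed,
# error rates, trips to help, and rarely used features.
```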

  25. Evaluating Interface Designs
  • Evaluation During Active Use
    • Online or telephone consultants
      • Excellent source of information about problems users are having
      • Source of suggested improvements
    • Blogs to discuss user problems
    • Online suggestion box and email trouble reporting

  26. Evaluating Interface Designs
  • The goal: a usability index similar to miles-per-gallon or energy-efficiency ratings
    • Learning-time estimates
    • User-satisfaction index

  27. Evaluating Interface Designs
  • Simple Designs?
    • INFOBAR C01 – Japan’s newest Android phone

  28. Evaluating Interface Designs
  • Usability Testing Reviews
    • http://www.nngroup.com/articles/windows-8-disappointing-usability/

  29. Evaluating Interface Designs
  • Top Tech Fails of 2013
    • http://www.cnn.com/2013/12/27/tech/web/tech-fails-2013/

  30. Evaluating Interface Designs
  • Controlled Experiments
    • The scientific method and HCI
      • Deal with practical problems
      • State a testable hypothesis
      • Identify a small number of independent variables
      • Identify the key dependent variables
      • Judiciously select participants
      • Control for biasing factors (participants, tasks)
      • Apply appropriate statistical methods
      • Resolve practical problems
    • A fraction of users can be given improvements for a limited time and compared to a control group (a sketch follows below). Dependent measures may include:
      • Performance times
      • User satisfaction
      • Error rates
      • User retention over time
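For categorical dependent measures such as error rates, a chi-square test on a contingency table is one appropriate statistical method. A minimal sketch with hypothetical counts for an improved-interface group versus a control group:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: [tasks with errors, tasks without errors]
control  = [18, 82]   # users kept on the existing interface
improved = [ 7, 93]   # fraction of users given the improvement

chi2, p, dof, expected = chi2_contingency([control, improved])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p suggests the error-rate difference is unlikely to be chance
```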

  31. Evaluating Interface Designs
  • Strength of User Research Evidence – Nielsen
    • http://www.nngroup.com/articles/ux-evidence/
    • Usability findings derived from a broad base of diverse studies have higher credibility than those based on many users with a single stimulus
    • Planning your own research: increase the probability of deriving a higher-profit design
    • Reading about outside research: it’s important to know how much you can rely on other people’s findings
    • Sample size, usually known as N, and statistical significance, often known as p
      • A big N or a small p is a horrible indicator of validity when it comes to research findings (a demonstration follows below)
      • Crucially, it says nothing about whether the experiment was done right or has any predictive power for your design problem
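Nielsen's point is easy to demonstrate: with a huge N, even a practically negligible difference yields a tiny p. A minimal sketch with simulated task times; all numbers here are assumptions chosen to make the effect obvious:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated task times for two designs whose true means differ by only
# 0.5 seconds out of 100 (a practically negligible effect)
a = rng.normal(100.0, 15.0, 100_000)
b = rng.normal(100.5, 15.0, 100_000)

t, p = stats.ttest_ind(a, b)
d = (b.mean() - a.mean()) / 15.0   # Cohen's d, using the known SD
print(f"N = 100,000 per group: p = {p:.1e}, Cohen's d = {d:.3f}")
# The p-value is astronomically small, yet the effect is trivial;
# a big N or a small p alone says nothing about practical importance.
```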

  32. Evaluating Interface Designs
  • Simulators – Honda
    • “At any moment during daylight hours, 660,000 Americans are using cellphones or other electronic devices while driving” – National Highway Traffic Safety Administration

  33. Evaluating Interface Designs
  • Mobile UI Design Tips
    • http://www.superconnect.com/blog/multiplatform-layout-considerations-for-mobile-apps-part-1/?goback=%2Egde_79272_member_193437791
    • http://alistapart.com/article/responsive-web-design
    • http://uxdesign.smashingmagazine.com/2011/07/18/seven-guidelines-for-designing-high-performance-mobile-user-experiences/

  34. Evaluating Interface Designs
  • Usability Testing
    • http://www.uxforthemasses.com/usability-reviews/
  • Mobile Usability Testing
    • http://go.utest.com/mobile-usability-whitepaper.html?ls=Email&cc=Pd&mc=Email-FierceWireless-Nov2012-Msg1
  • UX Score
    • https://mail.google.com/mail/u/1/?shva=1#inbox/13e7f973238348c7

  35. Evaluating Interface Designs
  • Emotions
    • Enchant me.
    • Simplify my life.
    • Make me amazing.
    • https://www.youtube.com/watch?v=s0HIP8EdlnE&feature=youtube_gdata_player

  36. Evaluating Interface Designs
  • Chromecast vs. AirPlay: how do they compare?
    • http://www.theverge.com/2013/7/24/4554130/google-chromecast-vs-apple-airplay-how-do-they-compare
    • Google’s Chromecast: an HDMI stick that brings internet video to your living room, launched after the company stumbled with Google TV
    • Challenges Apple’s AirPlay – easy streaming from a mobile device to TVs
    • AirPlay: when you first power on an Apple TV, you’re greeted with a friendly iOS-powered user interface with easy-to-understand menus and navigation
