
User-centered design




  1. User-centered design • Theme - “know thy user,” “honor thy user” • Involves four major components: • Early focus on user and task • Empirical measurement • Iterative design • Participatory design

  2. Norman’s Theory of Action

  3. “Gulfs” of Evaluation and Execution

  4. Design to “bridge the gulfs”

  5. RESEARCH METHODS IN HUMAN FACTORS ENGINEERING • Basic vs Applied • Definitions: • Basic research seeks general knowledge and principles, without a specific application in mind • Applied research aims to solve a specific practical problem (e.g., improving a particular system or interface)

  6. Dimensions of Research Methodologies: • Tradeoff: Experimental Control vs Real-World Relevance • Degree of Control: • over conditions of observation • over "treatment" (independent) variables • Degree to which behavioral (dependent) variables are representative of some larger population • Will the results "scale up" to complex real-life situations?

  7. Research Methodology “In a Nutshell” • Define problem and hypotheses • What questions do you want to answer? • What cause/effect relationships do you expect to see? • Specify the experimental plan • Independent variables (Treatment): e.g., interface characteristics; new vs existing computer system; time on task; target users, locations, functions, etc. • Dependent variables (Effect): e.g., reaction time; accuracy (number or percent error); preferences; attitudes; verbal protocols • Experimental design: two-group, multiple group, factorial, between/within subjects, etc. • Specifics: equipment, subjects, location, method, etc.
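The slide above lists the pieces of an experimental plan; the following is a minimal sketch of how such a plan might be written down and how subjects could be randomly assigned to treatment groups. All names (conditions, subject IDs, group sizes) are hypothetical placeholders, not from the original lecture.

```python
import random

# Hypothetical two-group, between-subjects plan for an interface study.
plan = {
    "hypothesis": "The new interface reduces task completion time.",
    "independent_variable": ["existing_system", "new_system"],    # treatment
    "dependent_variables": ["reaction_time_ms", "percent_error"],  # effects
    "design": "two-group, between-subjects",
}

subjects = [f"S{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects
random.shuffle(subjects)                        # random assignment
groups = {
    "existing_system": subjects[:10],
    "new_system": subjects[10:],
}
print(groups)
```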

  8. Research Methodology “In a Nutshell” (cont.) • Conduct the study • follow the plan you defined • use a “pilot study” if you are unsure of your plan • Analyze the data • Quantitative/objective data (e.g., reaction times, error rates) - appropriate statistical analyses for the type of experiment (recall: EGR 250/252) • Subjective data (e.g., preferences, questionnaire results) - apply any necessary coding algorithm, statistical analysis • Protocol analyses
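As a concrete illustration of an "appropriate statistical analysis" for quantitative data, here is a sketch of an independent-samples t-test on reaction times from a two-group, between-subjects design. It assumes Python with scipy installed; the numbers are made-up illustrative values, not real data.

```python
from scipy import stats

# Made-up reaction times (ms) for two treatment groups.
existing_system = [512, 548, 530, 575, 601, 559, 588, 542, 566, 580]
new_system      = [471, 493, 505, 462, 488, 499, 510, 478, 466, 495]

# Independent-samples t-test: suitable for comparing the means of
# two independent groups on a quantitative dependent variable.
t, p = stats.ttest_ind(existing_system, new_system)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject H0 at alpha = 0.05: mean reaction times differ.")
```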

  9. Research Methodology “In a Nutshell” (cont.) • Draw conclusions • Were the hypotheses supported by the results of the experiment? • Statistically significant? (Usually α = 0.05 or 0.01 is used, but these thresholds are not carved in stone!) • Recall Type I and Type II errors … • If not significant, are there interesting trends in the data? • Interaction effects? (e.g., mouse vs keyboard confounded with type of input required when filling out online forms; see the ANOVA sketch below) • Can you say why? • This will affect the degree to which you can generalize your results. • Can you explain interaction effects and trends that are not statistically significant?
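One way to test for an interaction effect like the mouse-vs-keyboard example above is a two-way ANOVA. The sketch below uses pandas and statsmodels with made-up data for a hypothetical 2x2 design (input device crossed with form type); a significant device:form term would indicate that the effect of device depends on the kind of input required.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Made-up completion times (s) for a 2x2 factorial design:
# input device (mouse vs keyboard) crossed with form type (text vs menu).
df = pd.DataFrame({
    "device": ["mouse"] * 8 + ["keyboard"] * 8,
    "form":   (["text"] * 4 + ["menu"] * 4) * 2,
    "time":   [12.1, 11.8, 12.5, 11.9,  9.0,  9.4,  8.8,  9.2,
               10.2, 10.6, 10.1, 10.4, 13.5, 13.1, 13.8, 13.3],
})

# Model with both main effects and the device:form interaction.
model = smf.ols("time ~ C(device) * C(form)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-tests, including the interaction
```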

  10. An overview of experimental types: Laboratory Experiment • Example: Carter's 1979 study of the effects of various display characteristics on search time, using college students in a dark, quiet room. The results were intended to be relevant for air traffic control research. • High degree of control over everything: • Experimenter selects and randomly assigns people ("subjects") to different treatments defined by the experimenter. Subjects are supposed to be "representative" of some general population. • Experimenter controls the environment: task, physical surroundings, etc. • Typically, performance is measured quantitatively (e.g., reaction time). Thus, statistical analyses can be performed to identify significant effects of the treatment variables. • Relevance to real-world tasks: ?? (Usually the least.)

  11. An overview of experimental types: Field Experiment/Quasi-Experiment • Examples: (1) Comparing a new ATC interface with the existing system using real air traffic controllers in real-life settings (field experiment). (2) Comparing "New Math" to conventional instruction by assigning programs to different classrooms (quasi-experiment). • Experiment carried out in a real-world setting. • Experimenter can control treatment and measure performance • cannot control selection of subjects (field experiment) • may not even be able to randomly assign subjects to treatment conditions (quasi-experiment). • More real-world context, so greater confidence that results apply to the real world. • But less experimental control, so treatment effects may be confounded with individual differences between subjects. • Can still do statistical analyses, but need to be careful.

  12. An overview of experimental types: Simulation Study • Examples: Flight simulators, power plant control simulators. • Evaluate human performance in the context of a simulation of the system of interest. (Usually this is a computer simulation.) • Simulations vary in their level of fidelity. A high-fidelity simulation captures many salient aspects of the real world. • Often a reasonable compromise between a field experiment and a laboratory experiment. You can exert more control over selection and assignment of subjects, as well as experimental conditions. Depending on the fidelity of the simulation, you may be very confident that it captures most of the real-world task. But a simulation NEVER really attains all the complexities of the real world!

  13. An overview of experimental types: Field Study (Case Study) • Examples: (1) flight crew task analyses conducted in-flight to determine task allocation, workload, etc., prior to specification and design of automated systems; (2) an investigator's observation and modeling of satellite ground operations • Investigator observes and records real-life activities without any experimental manipulations. • Typically, such a study seeks to articulate how and why some behavioral phenomenon occurs. • Generally, this should be the first step in investigating human interaction with complex systems.

  14. Descriptive Methods • Survey/Questionnaire • Used to gauge attitudes, preferences, etc., often on a numerical scale (e.g., 1=strongly disagree, ..., 5=strongly agree). Amenable to statistical analyses. • Problems: how to avoid biased questions (i.e., how to word questions to avoid "leading" the respondent). Also, respondents may not be totally truthful: they may be afraid of giving socially unacceptable answers or may try to answer questions the way they think they should be answered.
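To show how Likert-style responses become data that is "amenable to statistical analyses," here is a small sketch of coding questionnaire responses, including reverse-coding a negatively worded item. The items and numbers are hypothetical.

```python
import pandas as pd

# Made-up 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) from five respondents; q3 is a hypothetical
# negatively worded item that must be reverse-coded before scoring.
responses = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2],
    "q2": [5, 4, 4, 5, 3],
    "q3": [2, 1, 3, 2, 4],
})

responses["q3"] = 6 - responses["q3"]  # reverse-code on a 1-5 scale
scores = responses.mean(axis=1)        # per-respondent attitude score
print(scores.describe())               # summary stats for further analysis
```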

  15. Descriptive Methods (cont.) • Interview • Also used to gauge attitudes, preferences, etc., and to delve into the reasons behind those attitudes. Tricky to analyze. • Problems: Same as above. Bias in the questions, or in how they are asked, can influence how the interviewee responds. • Can be used with a questionnaire to clarify or expand on responses.

  16. Descriptive Methods (cont.) • Concurrent Verbal Protocols • While the subject performs a task (in a laboratory, field, or simulation setting), he/she verbalizes thoughts, reasons for actions, etc. This is often very illuminating in terms of identifying subject strategies, difficulties, etc., but difficult to analyze and may affect task performance itself. • Retrospective Verbal Protocols • After the subject has performed a task (in a laboratory, field, or simulation setting), he/she verbalizes thoughts, reasons for actions, etc. This may also be quite interesting, but again is difficult to analyze. It also relies on the subject's memory, which is prone to error. • Also: critical incident technique, accident analysis.
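Verbal protocols are hard to analyze partly because the raw transcript must first be coded into categories. Here is a minimal sketch of the tallying step, once a human coder has assigned each utterance a category (the categories and segments are hypothetical):

```python
from collections import Counter

# Hypothetical coded segments from a concurrent verbal protocol.
coded_segments = [
    "goal", "strategy", "difficulty", "strategy", "goal",
    "difficulty", "difficulty", "evaluation", "strategy",
]

# Tally category frequencies to summarize strategies and difficulties.
print(Counter(coded_segments).most_common())
```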

  17. General Remarks: • Before you do any experiment, you should have one or more hypotheses in mind that you want to test or explore. • If possible, use multiple methods (e.g., field study and interviews and questionnaires). If you obtain converging evidence from a number of separate sources, you can be more confident that the hypothesis is correct. • Consider reliability and validity issues: • Reliability: Can this study be replicated with the same results? • Construct Validity: Does this study's measure of performance really get at the true underlying construct it is presumed to measure? (e.g., does the Stanford-Binet IQ test really measure intelligence?)
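One common quantitative check on reliability is a test-retest correlation: measure the same subjects twice and see whether the scores agree. A minimal sketch, assuming scipy and made-up scores:

```python
from scipy import stats

# Made-up scores for the same 8 subjects measured in two sessions.
session_1 = [12.0, 15.5, 9.8, 14.2, 11.1, 13.7, 10.5, 16.0]
session_2 = [12.4, 15.1, 10.2, 13.9, 11.5, 14.0, 10.1, 15.6]

# A high correlation between sessions suggests a reliable measure.
r, p = stats.pearsonr(session_1, session_2)
print(f"test-retest r = {r:.2f} (p = {p:.4f})")
```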

  18. General Remarks (cont.): • Reliability and validity issues (concl.): • Internal Validity (for causal or explanatory studies): Does the study really show a causal link between variables that cannot be accounted for by spurious relationships? (e.g., does smoking really cause cancer, and not some correlated factor such as high blood pressure, cholesterol, or weight?) • External Validity: Can the study's findings be generalized? • Ethical Issues: • Ethical treatment of human subjects (see pg. 37 of Wickens et al.) • protection from mental and physical harm • privacy • informed consent • Code of Engineering Ethics
