
Pop Quiz – Controlled Experiments in HCI








  1. Pop Quiz – Controlled Experiments in HCI Tuesday, March 17, 2015

  2. 1. If designing an interface for older users, who should you recruit? • Your friends • The general population • Older users • Whoever shows up first

  3. 2. A controlled experiment….. • Is best for an initial exploration of a research area • Tests a hypothesis • Both A and B • Neither A nor B

  4. 3. An independent variable is… • The variable that the researcher varies • The variable that the researcher varies ;-0 • Both A and B • Neither A nor B

  5. 4. The value of the dependent variable depends on the value of the independent variable • True • False • That is the hypothesis being tested

  6. 5. One of the challenges of experimental design is to minimize the chances of there being ….. • Confounding variables • Other variables that affect the value of the dependent variable • Stop trying to trick me by giving different forms of the same answer!

  7. 6. In a within subjects design…. • Each participant experiences both experimental conditions • Individual differences are reduced • You need fewer participants than in a between subjects design • All of the above

  8. 7. In a between subjects design…. • Each participant experiences only one of the experimental conditions • Protocols are simpler • You can’t get participants to give a ranking of their preferences • All of the above

  9. 8. Minimize the effects of confounds when evaluating interfaces by controlling: • The order the interfaces are presented • The tasks • The contexts in which the study is run • All of the above

  10. MP2 – Comparative Evaluation • Topic? • Compare two text input techniques • Computer keyboard, mobile keypad • Soft keyboard, mobile keypad • Compare two auto correct techniques • Auto / no auto • Auto 1 / auto 2 • Other?

  11. MP2 format/deliverables • Lectures/class exercises during tutorial time to introduce/experience controlled experimental designs • MP2 deliverable: given an extremely flawed experimental design, identify 4 flaws, justify why each is a flaw, and adapt the experimental design to mitigate the issue

  12. MP2 – Comparative Evaluation • Groups? • Form “groups” of 1-3 students (email Hasmeet with the groups)

  13. MP2: Experimental Design (HCI W2015) What is experimental design? What is an experimental hypothesis? How do I plan an experiment? Acknowledgement: Much of the material in this lecture is based on material prepared for similar courses by Saul Greenberg (University of Calgary) as adapted by Joanna McGrenere

  14. Experimental Planning Flowchart • Stage 1, Problem definition: idea, literature review, statement of problem, hypothesis development • Stage 2, Planning research: define variables, controls, apparatus, procedures, select subjects, experimental design, pilot testing (with feedback into planning) • Stage 3, Conduct research: data collection • Stage 4, Analysis: data reductions, statistics, hypothesis testing • Stage 5, Interpretation: interpretation, generalization, reporting (with feedback into problem definition)

  15. What’s the goal? • Overall research goals impact choice of study design • Exploratory research vs. hypothesis confirmation • Ecological validity vs tightly controlled • The stage in the design process impacts the choice of study design • Formative evaluation (to get iterative feedback on initial design and/or design choices) • Summative evaluation (to determine whether the design is better/stronger/faster than alternative approaches)

  16. What’s the research question? • Study research questions impact choice of: • Protocol, task • Experimental conditions (factors) • Constructs (effectiveness) • Measures (task completion, error rate) • Testable hypotheses impact • choice of statistical analysis (also impacted by nature of the data and experimental design)

  17. Experimental Planning Flowchart (Stages 1–5, repeated from slide 14) • Reality check: does the final design support the research questions?

  18. Experimental Design Document • Example experimental design doc in MP1 • Based on a controlled lab experiment • http://www.reganmandryk.com/pubs/HCII2005_Inkpen.pdf • Use it as a reference to think about the types of decisions you may need to make • Very helpful to have check lists

  19. Quantitative system evaluation • Quantitative: • precise measurement, numerical values • bounds on how correct our statements are • Methods • Controlled Experiments • Statistical Analysis • Measures • Objective: user performance (speed & accuracy) • Subjective: user satisfaction

  20. Quantitative methods (descriptive statistics) 1. User performance data collection • data is collected on system use • frequency of requests for on-line assistance • what did people ask for help with? • frequency of use of different parts of the system • why are parts of the system unused? • number of errors and where they occurred • why does an error occur repeatedly? • time it takes to complete some operation • what tasks take longer than expected? • collect heaps of data in the hope that something interesting shows up • often difficult to sift through data unless specific aspects are targeted (as in the list above)

  21. Quantitative methods ... 2. Controlled experiments The traditional scientific method • clear convincing result on specific issues • In HCI: • insights into cognitive process, human performance limitations, ... • allows comparison of systems, fine-tuning of details ... Strive for • lucid and testable hypothesis (usually a causal inference) • quantitative measurement • measure of confidence in results obtained (inferential statistics) • Ability to replicate the experiment • control of variables and conditions • removal of experimenter bias

  22. The experimental method [figure: example pull-down menus with File, Edit, View, and Insert items] a) Begin with a lucid, testable hypothesis • H0: there is no difference in user performance (time and error rate) when selecting a single item from a pop-up or a pull-down menu, regardless of the subject’s previous expertise in using a mouse or using the different menu types

  23. The experimental method b) Explicitly state the independent variables that are to be altered • Independent variables • the things you control (independent of how a subject behaves) • two different kinds: • treatment: manipulated (can establish cause/effect; a true experiment) • subject: individual differences (can never fully establish cause/effect) • in the menu experiment • menu type: pop-up or pull-down • menu length: 3, 6, 9, 12, 15 • expertise: expert or novice (a subject variable; the researcher cannot manipulate it)

  24. The experimental method c) Carefully choose the dependent variables that will be measured Dependent variables • variables dependent on the subject’s behaviour / reaction to the independent variable • Make sure that what you measure actually represents the higher level concept! in menu experiment • time to select an item • selection errors made • Higher level concept (user performance)

  25. The experimental method d) Judiciously select and assign subjects to groups [figure: expert vs. novice subjects] • Ways of controlling subject variability • recognize classes and make them an independent variable • minimize unaccounted anomalies in the subject group (e.g., superstars versus poor performers) • use a reasonable number of subjects and random assignment

  26. Factors/levels/conditions/groups • IV’s are often called factors • Each factor has 1+ level • In a single factor design, you have 1 condition/group per level of the factor • In a multi-factor design, you have 1 condition for each combination of the levels • A 2x2 between subjects design • 4 conditions, each participant does 1 of them • A 2x2 within subjects design • 4 conditions, each participant does all of them • A 2x2x2 within subjects design • 8 conditions, each participant does all of them
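The condition counts on slide 26 can be checked mechanically. A minimal Python sketch, where the factor names are illustrative (borrowed from the menu experiment):

```python
from itertools import product

def conditions(*factors):
    """Return every combination of factor levels (one condition each)."""
    return list(product(*factors))

# Hypothetical 2x2 design: menu type x expertise
menu = ["pop-up", "pull-down"]
expertise = ["novice", "expert"]

two_by_two = conditions(menu, expertise)
print(len(two_by_two))            # 4 conditions

# A 2x2x2 design adds a third two-level factor (menu length, say)
two_by_two_by_two = conditions(menu, expertise, ["short", "long"])
print(len(two_by_two_by_two))     # 8 conditions
```

In a between-subjects design each participant would be assigned one tuple from this list; in a within-subjects design each participant would work through all of them.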

  27. The experimental method... e) Control for biasing factors [cartoon: experimenter tells the subject “Now you get to do the pop-up menus. I think you will really like them... I designed them myself!”] • unbiased instructions + experimental protocols, prepared ahead of time • double-blind experiments, ... • Potential confounding variables • Order effects • Learning effects • Counterbalancing (http://www.yorku.ca/mack/RN-Counterbalancing.html)

  28. The experimental method f) Apply statistical methods to data analysis • Confidence limits: the confidence that your conclusion is correct • “The hypothesis that mouse experience makes no difference is rejected at the .05 level” (i.e., null hypothesis rejected) • means: • if there really were no difference, results this extreme would occur by chance only 5% of the time • loosely, a 5% chance of wrongly rejecting the null hypothesis g) Interpret your results • what you believe the results mean, and their implications • yes, there can be a subjective component to quantitative analysis
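A minimal sketch of step (f), computing a two-sample (Welch) t statistic with only the Python standard library. The timing data and the critical value are illustrative, not taken from the menu experiment:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Two-sample t statistic (unequal variances assumed)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical selection times (seconds) under two menu types
pop_up    = [10, 12, 11, 13, 12]
pull_down = [14, 15, 16, 15, 14]

t = welch_t(pop_up, pull_down)
# |t| is about 5.06, well beyond the roughly 2.4 critical value for
# these sample sizes at the .05 level, so the null hypothesis of
# "no difference" would be rejected here.
print(round(abs(t), 2))
```

In practice you would use a statistics package (e.g., a library t-test routine) rather than hand-rolling the formula; the point is only to make the rejection decision on slide 28 concrete.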

  29. Control • True experiment = complete control over the subject assignment to conditions and the presentation of conditions to subjects • Control over the who, what, when, where, how • Control of the who => random assignment to conditions • Only by chance can other variables be confounded with IV • Control of the what/when/where/how => control over the way the experiment is conducted
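Random assignment of the “who”, as slide 29 describes, can be sketched in a few lines of Python (participant and condition names are hypothetical):

```python
import random

def randomly_assign(participants, conditions, seed=None):
    """Shuffle participants, then deal them round-robin into conditions,
    so group sizes stay balanced and assignment is left to chance."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = randomly_assign([f"P{i}" for i in range(12)],
                         ["pop-up", "pull-down"], seed=42)
print({c: len(g) for c, g in groups.items()})  # 6 per condition
```

Because only chance decides who lands in which condition, subject variables can be confounded with the IV only by chance, which is exactly the property the slide appeals to.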

  30. Quasi-Experiment • When you can’t achieve complete control • Lack of complete control over conditions • Subjects for different conditions come from potentially non-random pre-existing groups • Experts vs novices • Early adopters vs technophobes?

  31. It’s a matter of control • True experiment: random assignment of subjects to conditions • manipulate the IV • control allows ruling out of alternative hypotheses • Quasi experiment: selection of subjects for the conditions • observe categories of subjects • if the subject variable is the IV, it’s a quasi experiment • don’t know whether differences are caused by the IV or by differences in the subjects

  32. Other features • In some instances cannot completely control the what, when, where, and how • Need to collect data at a certain time or not at all • Practical limitations to data collection, experimental protocol

  33. Validity • Internal validity is reduced due to the presence of uncontrolled/confounding variables • But not necessarily invalid • It’s important for the researcher to evaluate the likelihood that there are alternative hypotheses for observed differences • Need to convince self and audience of the validity

  34. External validity • If the experimental setting more closely replicates the setting of interest, external validity can be higher than a true experiment run in a controlled lab setting • Often comes down to what is most important for the research question • Control or ecological validity?

  35. Experimental designs • Between subjects: different participants; a separate group of participants is allocated randomly to each of the experimental conditions • Within subjects: same participants; all participants appear in all conditions • Matched participants: between subjects, but participants are matched in pairs, e.g., based on expertise, gender, etc. • Mixed: some factors are between, some factors are within
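The matched-participants design above can be sketched in Python: rank participants on the matching variable, pair neighbours, and randomly split each pair across the two conditions. The names and expertise scores here are hypothetical:

```python
import random

def matched_pairs_assignment(scores, seed=None):
    """scores: dict of participant -> matching score (e.g., expertise).
    Pair participants with adjacent scores, then randomly assign one
    member of each pair to condition A and the other to condition B."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)
    a, b = [], []
    for p1, p2 in zip(ranked[0::2], ranked[1::2]):
        pair = [p1, p2]
        rng.shuffle(pair)        # chance decides which side each gets
        a.append(pair[0])
        b.append(pair[1])
    return a, b

scores = {"P1": 3, "P2": 9, "P3": 5, "P4": 8, "P5": 2, "P6": 7}
group_a, group_b = matched_pairs_assignment(scores, seed=1)
print(len(group_a), len(group_b))  # 3 3
```

Matching trades some randomness for balance: the two groups are guaranteed to be similar on the matching variable, while the within-pair shuffle preserves random assignment.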

  36. Within-subjects • Solves the individual differences issue • Allows participants to make comparisons between conditions • But raises other problems: • Need to look at the impact of experiencing both conditions

  37. Order Effects • Changes in performance resulting from (ordinal) position in which a condition appears in an experiment (always first?) • Arises from warm-up, learning, learning what they will be asked to reflect upon, fatigue, etc. • Effect can be averaged and removed if all possible orders are presented in the experiment and there has been random assignment to orders
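Presenting all possible orders, as slide 37 suggests, amounts to full counterbalancing. A minimal Python sketch, with hypothetical condition and participant names:

```python
from itertools import permutations

def full_counterbalance(conditions, participants):
    """Assign each participant one of the n! possible condition orders,
    cycling through the orders so each is used (near-)equally often."""
    orders = list(permutations(conditions))
    return {p: orders[i % len(orders)] for i, p in enumerate(participants)}

schedule = full_counterbalance(["A", "B"], [f"P{i}" for i in range(6)])
# With two conditions there are only two orders, A-then-B and B-then-A,
# so three of the six participants get each order.
print(schedule["P0"], schedule["P1"])
```

With the orders balanced like this, an order effect adds the same amount to each condition on average, which is what lets it be averaged out in the analysis.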

  38. Sequence effects • Changes in performance resulting from interactions among conditions (e.g., if done first, condition 1 has an impact on performance in condition 2) • Effects viewed may not be main effects of the IV, but interaction effects • Can be controlled by arranging each condition to follow every other condition equally often

  39. Counterbalancing • Controlling order and sequence effects by arranging subjects to experience the various conditions (levels of the IV) in different orders • Self-directed learning: investigate the different counterbalancing methods • Randomization • Block randomization • Reverse counterbalancing • Latin squares and Graeco-Latin squares (when you can’t fully counterbalance) • http://www.experiment-resources.com/counterbalanced-measures-design.html
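One common construction for a balanced Latin square (it requires an even number of conditions), sketched in Python; this is only one of the several counterbalancing methods the slide lists:

```python
def balanced_latin_square(n):
    """Build an n x n balanced Latin square for even n: the first row is
    0, 1, n-1, 2, n-2, ..., and each later row adds 1 (mod n).  Every
    condition appears once per row and once per serial position, and
    each ordered pair of adjacent conditions occurs equally often."""
    first, low, high, take_low = [0], 1, n - 1, True
    while len(first) < n:
        first.append(low if take_low else high)
        low, high = (low + 1, high) if take_low else (low, high - 1)
        take_low = not take_low
    return [[(c + i) % n for c in first] for i in range(n)]

square = balanced_latin_square(4)
for row in square:
    print(row)
# [0, 1, 3, 2]
# [1, 2, 0, 3]
# [2, 3, 1, 0]
# [3, 0, 2, 1]
```

Each row is one participant's (or one group's) presentation order, so n participants suffice to balance both order and immediate sequence effects, versus n! for full counterbalancing.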

  40. Between, within, matched participant design

  41. Key points • Usability testing is done in controlled conditions. • Usability testing is an adapted form of experimentation. • Experiments aim to test hypotheses by manipulating certain variables while keeping others constant. • The experimenter controls the independent variable(s) but not the dependent variable(s).
