
Experiment Basics: Control

This project aims to examine the impact of variability in an independent variable on the variability of the dependent variable. We will explore different methods of experimental control, such as constancy/randomization and comparison, to minimize confounds and random variability.



  1. Experiment Basics: Control Psych 231: Research Methods in Psychology

  2. Due this week in labs - Group project: • Methods sections • IRB worksheet (including a consent form) • Recommended/required: • Questionnaires/examples of stimuli, etc. – things that you want to have ready for pilot week (week 10) • Group Project ratings sheet • Exam 2 two weeks from today Announcements

  3. Mythbusters examine: Yawning (4 mins) • What sort of sampling method? • Why the control group? • Should they have confirmed? • Probably not; if you do the stats, with this sample size the 4% difference isn’t big enough to reject the null hypothesis • What the stats do: quantify how much random variability (error) there is compared to observed variability and help you decide if the observed variability is likely due to the error or to the manipulated variability Experimental Control

  4. Our goal: • To test the possibility of a systematic relationship between the variability in our IV and how that affects the variability of our DV. • Control is used to: • Minimize excessive variability • To reduce the potential of confounds (systematic variability not part of the research design) Experimental Control

  5. T = NRexp + NRother + R • Nonrandom (NR) variability • NRexp: manipulated independent variables (IV) • Our hypothesis: the IV will result in changes in the DV • NRother: extraneous variables (EV) that covary with the IV • Confounds • Random (R) variability • Imprecision in measurement (DV) • Randomly varying extraneous variables (EV) Experimental Control
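The decomposition above can be illustrated with a small simulation (a minimal sketch: the baseline, effect sizes, noise level, and sample sizes are all made-up assumptions, not numbers from the lecture):

```python
import random

random.seed(0)

def simulate_score(treated, confound_present):
    """One participant's DV score under the T = NRexp + NRother + R model."""
    baseline = 50.0
    nr_exp = 5.0 if treated else 0.0             # effect of the manipulated IV
    nr_other = 3.0 if confound_present else 0.0  # confound covarying with the IV
    r = random.gauss(0, 2.0)                     # random error (measurement imprecision, etc.)
    return baseline + nr_exp + nr_other + r

# the confound is present only in the treatment group, i.e. it covaries with the IV
control = [simulate_score(False, False) for _ in range(1000)]
treatment = [simulate_score(True, True) for _ in range(1000)]

mean_diff = sum(treatment) / len(treatment) - sum(control) / len(control)
# the observed difference blends NRexp (5) and NRother (3), so it lands near 8;
# the experiment by itself cannot tell the two components apart
```

With large groups the random R component mostly averages out, but the NRother component does not, which is exactly why confounds are more dangerous than noise.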

  6. Variability in a simple experiment: T = NRexp + NRother + R • Control group: absence of the treatment (NRexp = 0) • “Perfect experiment”: no confounds (NRother = 0) [Weight analogy diagram: the control and treatment pans each hold NRother and R weights; the treatment pan also holds NRexp] Experimental Control: Weight analogy

  7. Our experiment is a “difference detector” • Variability in a simple experiment: T = NRexp + NRother + R [Weight analogy diagram: control group and treatment group on a balance scale labeled “Difference Detector”; the treatment side carries the extra NRexp weight] Experimental Control: Weight analogy

  8. If there is an effect of the treatment, then NRexp ≠ 0 • Our experiment can detect the effect of the treatment [Weight analogy diagram: the NRexp weight tips the “Difference Detector” toward the treatment group] Experimental Control: Weight analogy

  9. Potential Problems • Confounding • Excessive random variability [Weight analogy diagram: the “Difference Detector”] Things making detection difficult

  10. Confound • If an EV co-varies with the IV, then an NRother component will be present in the data, and may lead to misattribution of the effect to the IV [Diagram: the IV and an EV co-vary together, and either could be driving the DV] Potential Problems

  11. Confound • Hard to interpret the effect: it looks like it could be from NRexp but could be due to NRother • The experiment can detect an effect, but can’t tell where it is from [Weight analogy diagram: NRexp and NRother weights both sit on the treatment side of the “Difference Detector”] Confounding

  12. Confound • Hard to detect the effect of NRexp because the effect looks like it could be from NRexp but could be due to NRother • These two situations look the same to the “Difference Detector”: in one there is an effect of the IV (NRexp plus NRother), in the other there is not (NRother alone) [Weight analogy diagrams of the two cases] Confounding

  13. Excessive random variability • If experimental control procedures are not applied • Then the R component of the data will be excessively large, and may make NRexp undetectable Potential Problems

  14. If R is large relative to NRexp, then detecting a difference may be difficult • The experiment can’t detect the effect of the treatment [Weight analogy diagram: large R weights on both sides swamp the small NRexp weight on the “Difference Detector”] Excessive random variability

  15. But if we reduce the size of NRother and R relative to NRexp, then detection gets easier • So try to minimize R by using good measures of the DV, good manipulations of the IV, etc. • Our experiment can detect the effect of the treatment [Weight analogy diagram: smaller R weights let the NRexp weight tip the “Difference Detector”] Reduced random variability
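The point that a large R swamps the same NRexp can be sketched numerically (a hypothetical simulation; the effect size, noise levels, and group sizes are assumptions):

```python
import random
import statistics

random.seed(1)

def groups(effect, noise_sd, n=200):
    """Simulate control and treatment scores with a fixed NRexp and a chosen R."""
    control = [random.gauss(0, noise_sd) for _ in range(n)]
    treatment = [random.gauss(effect, noise_sd) for _ in range(n)]
    return control, treatment

def signal_to_noise(control, treatment):
    """Observed group difference relative to the overall spread of the scores."""
    diff = statistics.mean(treatment) - statistics.mean(control)
    return diff / statistics.pstdev(control + treatment)

# same NRexp (effect = 2.0) in both runs; only the size of R differs
noisy = signal_to_noise(*groups(effect=2.0, noise_sd=10.0))  # large R: hard to detect
quiet = signal_to_noise(*groups(effect=2.0, noise_sd=1.0))   # small R: easy to detect
```

The treatment effect is identical in both runs; only reducing R (better measures, tighter procedures) makes it stand out.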

  16. How do we introduce control? • Methods of Experimental Control • Constancy/Randomization • Comparison • Production Controlling Variability

  17. Constancy/Randomization • If there is a variable that may be related to the DV that you can’t (or don’t want to) manipulate • Control variable: hold it constant (so there isn’t any variability from that variable, no R weight from that variable) • Random variable: let it vary randomly across all of the experimental conditions (so the R weight from that variable is the same for all conditions) Methods of Controlling Variability

  18. Comparison • An experiment always makes a comparison, so it must have at least two groups (2 sides of our scale in the weight analogy) • Sometimes there are control groups • This is often the absence of the treatment: Training group vs. No training (Control) group • Without control groups it is harder to see what is really happening in the experiment • It is easier to be swayed by plausibility or inappropriate comparisons (see diet crystal example) • Useful for eliminating potential confounds (think about our list of threats to internal validity) Methods of Controlling Variability

  19. Comparison • An experiment always makes a comparison, so it must have at least two groups • Sometimes there are control groups • This is often the absence of the treatment • Sometimes there is a range of values of the IV: 1 week of Training group, 2 weeks of Training group, 3 weeks of Training group Methods of Controlling Variability

  20. Production • The experimenter selects the specific values of the Independent Variables • 1 week of Training group • 2 weeks of Training group • 3 weeks of Training group [Diagram: the selected values (1, 2, and 3 weeks) define the variability in the duration of the training program] Methods of Controlling Variability

  21. Production • The experimenter selects the specific values of the Independent Variables • 1 week of Training group • 2 weeks of Training group • 3 weeks of Training group • Need to do this carefully • Suppose that you don’t find a difference in the DV across your different groups • Is this because the IV and DV aren’t related? • Or is it because your levels of the IV weren’t different enough? Methods of Controlling Variability

  22. So far we’ve covered a lot of the general details of experiments • Now let’s consider some specific experimental designs. • Some bad (but not uncommon) designs (and potential fixes) • Some good designs • 1 Factor, two levels • 1 Factor, multi-levels • Factorial (more than 1 factor) • Between & within factors Experimental designs

  23. Bad design example 1: Does standing close to somebody cause them to move? • “Hmm… that’s an empirical question. Let’s see what happens if …” • So you stand close to people and see how long before they move • Problem: no control group to establish the comparison (this design is sometimes called a “one-shot case study design”) • Fix: introduce one or more comparison groups, e.g. Very Close (.2 m), Close (.5 m), Not Close (1.0 m) Poorly designed experiments

  24. Bad design example 2: • Does a relaxation program decrease the urge to smoke? • 2 groups • relaxation training group • no relaxation training group • The participants choose which group to be in Training group No training (Control) group Poorly designed experiments

  25. Bad design example 2: non-equivalent control groups • Self-assignment (Independent Variable): participants choose either the Training group or the No training (Control) group, then the Dependent Variable is measured for each • Problem: selection bias for the two groups • Fix: need to do random assignment to groups Poorly designed experiments
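The fix, random assignment, is easy to implement; a minimal sketch (the participant IDs and group sizes are hypothetical):

```python
import random

random.seed(42)

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
random.shuffle(participants)  # assignment no longer depends on participants' choices

half = len(participants) // 2
training_group = participants[:half]
control_group = participants[half:]
```

Because every participant is equally likely to land in either group, pre-existing differences (such as motivation to quit smoking) become random variability instead of a confound.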

  26. Bad design example 3: • Does a relaxation program decrease the urge to smoke? • Pre-test desire level • Give relaxation training program • Post-test desire to smoke Poorly designed experiments

  27. Bad design example 3: one-group pretest–posttest design • Pre-test (Dependent Variable) → Training (Independent Variable) → Post-test (Dependent Variable), comparing pre vs. post • Problems include: history, maturation, testing, and more • Fix: add another factor, e.g. a No Training group that is also pre-tested and post-tested Poorly designed experiments

  28. So far we’ve covered a lot of the general details of experiments • Now let’s consider some specific experimental designs. • Some bad (but not uncommon) designs • Some good designs • 1 Factor, two levels • 1 Factor, multi-levels • Factorial (more than 1 factor) • Between & within factors Experimental designs

  29. Good design example • How does anxiety level affect test performance? • Two groups take the same test • Grp1 (low anxiety group): 5 min lecture on how good grades don’t matter, just trying is good enough • Grp2 (moderate anxiety group): 5 min lecture on the importance of good grades for success • What are our IV and DV? • 1 Factor (Independent variable), two levels • Basically you want to compare two treatments (conditions) • The statistics are pretty easy, a t-test 1 factor - 2 levels

  30. Good design example • How does anxiety level affect test performance? [Design diagram: participants → random assignment → Anxiety (Low or Moderate) → Test (Dependent Variable)] 1 factor - 2 levels

  31. One factor (anxiety), two levels (low, moderate) • Use a t-test to see if the two condition means (e.g., 60 vs. 80) are statistically different • t-test = observed difference between conditions / difference expected by chance [Graph: test performance plotted for low and moderate anxiety] 1 factor - 2 levels
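The t-statistic on this slide (observed difference divided by the difference expected by chance) can be computed by hand; here is a sketch with made-up scores for the two anxiety groups:

```python
import statistics

# hypothetical test scores for the two conditions
low = [78, 82, 85, 80, 84, 79, 83, 81]
moderate = [60, 65, 58, 63, 61, 59, 64, 62]

def two_sample_t(a, b):
    """Pooled-variance two-sample t: observed difference / chance difference."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    standard_error = (pooled_var * (1 / na + 1 / nb)) ** 0.5  # difference expected by chance
    return (statistics.mean(a) - statistics.mean(b)) / standard_error

t = two_sample_t(low, moderate)  # a large t: the observed difference dwarfs chance
```

In practice you would compare t against a t-distribution (or use a stats package) to get a p-value; the point here is only the ratio's structure.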

  32. Advantages: • Simple, relatively easy to interpret the results • Is the independent variable worth studying? • If no effect, then usually don’t bother with a more complex design • Sometimes two levels is all you need • One theory predicts one pattern and another predicts a different pattern 1 factor - 2 levels

  33. Disadvantages • “True” shape of the function is hard to see • Interpolation and Extrapolation are not a good idea • Interpolation: what happens within the range that you test? [Graph: test performance at low and moderate anxiety, with the region between the two points unknown] 1 factor - 2 levels

  34. Disadvantages • “True” shape of the function is hard to see • Interpolation and Extrapolation are not a good idea • Extrapolation: what happens outside of the range that you test? [Graph: test performance at low and moderate anxiety, with high anxiety lying beyond the tested range] 1 factor - 2 levels

  35. So far we’ve covered a lot of the general details of experiments • Now let’s consider some specific experimental designs. • Some bad (but not uncommon) designs • Some good designs • 1 Factor, two levels • 1 Factor, multi-levels • Factorial (more than 1 factor) • Between & within factors Experimental designs

  36. For more complex theories you will typically need more complex designs (more than two levels of one IV) • 1 factor - more than two levels • Basically you want to compare more than two conditions • The statistics are a little more difficult, an ANOVA (Analysis of Variance) 1 Factor - multilevel experiments

  37. Good design example (similar to the earlier one) • How does anxiety level affect test performance? • Groups take the same test • Grp1 (low anxiety group): 5 min lecture on how good grades don’t matter, just trying is good enough • Grp2 (moderate anxiety group): 5 min lecture on the importance of good grades for success • Grp3 (high anxiety group): 5 min lecture on how the students must pass this test to pass the course 1 Factor - multilevel experiments

  38. [Design diagram: participants → random assignment → Anxiety (Low, Moderate, or High) → Test (Dependent Variable)] 1 factor - 3 levels

  39. [Graphs: test performance (roughly 60–80) plotted across low, moderate, and high anxiety, illustrating the patterns that three levels can reveal] 1 Factor - multilevel experiments

  40. Advantages • Gives a better picture of the relationship (functions other than just straight lines) • Generally, the more levels you have, the less you have to worry about your range of the independent variable [Graphs: 2-level vs. 3-level plots of test performance across anxiety] 1 Factor - multilevel experiments

  41. Disadvantages • Needs more resources (participants and/or stimuli) • Requires more complex statistical analysis (ANOVA [Analysis of Variance] & follow-up pair-wise comparisons) 1 Factor - multilevel experiments

  42. The ANOVA just tells you that not all of the groups are equal. • If this is your conclusion (you get a “significant ANOVA”) then you should do further tests to see where the differences are • High vs. Low • High vs. Moderate • Low vs. Moderate Pair-wise comparisons
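A one-way ANOVA's F-statistic and the follow-up pair list can be sketched in a few lines (the three groups' scores are hypothetical, and a real analysis would also compute p-values and correct for multiple comparisons):

```python
import statistics
from itertools import combinations

# hypothetical test scores for the three anxiety conditions
scores = {
    "low": [82, 85, 80, 84, 79],
    "moderate": [60, 65, 58, 63, 61],
    "high": [55, 50, 57, 52, 54],
}

def one_way_f(samples):
    """F = between-group mean square / within-group mean square."""
    all_values = [v for group in samples for v in group]
    grand_mean = statistics.mean(all_values)
    k, n = len(samples), len(all_values)
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in samples)
    ss_within = sum((v - statistics.mean(g)) ** 2 for g in samples for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f = one_way_f(list(scores.values()))

# a "significant ANOVA" only says the groups aren't all equal;
# follow up with pair-wise comparisons to locate the differences
pairs = list(combinations(scores, 2))
```

Here `pairs` enumerates exactly the three comparisons on the slide: high vs. low, high vs. moderate, and low vs. moderate.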

  43. So far we’ve covered a lot of the general details of experiments • Now let’s consider some specific experimental designs. • Some bad (but common) designs • Some good designs • 1 Factor, two levels • 1 Factor, multi-levels • Factorial (more than 1 factor) • Between & within factors Experimental designs

  44. Two or more factors • Some vocabulary • Factors - independent variables • Levels - the levels of your independent variables • 2 x 4 design means two independent variables, one with 2 levels and one with 4 levels • The number of “conditions” or “groups” is calculated by multiplying the levels, so a 2x4 design has 8 different conditions [Table: levels A1, A2 crossed with B1, B2, B3, B4] Factorial experiments
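Crossing the levels to enumerate the conditions of a factorial design is a one-liner; a sketch for the 2 x 4 example (the level labels are just placeholders):

```python
from itertools import product

factor_a = ["A1", "A2"]                 # 2 levels
factor_b = ["B1", "B2", "B3", "B4"]     # 4 levels

# every combination of one level of A with one level of B is a condition
conditions = list(product(factor_a, factor_b))
# a 2 x 4 design therefore has 2 * 4 = 8 conditions
```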

  45. Two or more factors • Main effects - the effects of your independent variables ignoring (collapsed across) the other independent variables • Interaction effects - how your independent variables affect each other • Example: 2x2 design, factors A and B • Interaction: at A1, B1 is bigger than B2; at A2, B1 and B2 don’t differ • Everyday interaction = “it depends on …” [Graph: Dependent Variable for B1 and B2 plotted across A1 and A2] Factorial experiments

  46. Rate how much you would want to see a new movie (1 no interest, 5 high interest) • Ask men and women – looking for an effect of gender Not much of a difference Interaction effects

  47. Maybe the gender effect depends on whether you know who is in the movie. So you add another factor: • Suppose that George Clooney might star. You rate the preference if he were to star and if he were not to star. Effect of gender depends on whether George stars in the movie or not This is an interaction Interaction effects

  48. The complexity & number of outcomes increases: • A = main effect of factor A • B = main effect of factor B • AB = interaction of A and B • With 2 factors there are 8 basic possible patterns of results: 1) No effects at all 2) A only 3) B only 4) AB only 5) A & B 6) A & AB 7) B & AB 8) A & B & AB Results of a 2x2 factorial design

  49. Condition means: A1B1, A2B1, A1B2, A2B2 • Marginal means: A1 mean and A2 mean give the main effect of A; B1 mean and B2 mean give the main effect of B • Interaction of AB: what’s the effect of A at B1? What’s the effect of A at B2? [Table: 2 x 2 layout with the four condition means in the cells and the marginal means along the edges] 2 x 2 factorial design

  50. Example outcome • Cell means: A1B1 = 30, A2B1 = 60, A1B2 = 30, A2B2 = 60 • Marginal means: A1 = 30, A2 = 60; B1 = 45, B2 = 45 • Main effect of A ✓ • Main effect of B ✗ • Interaction of A x B ✗ Examples of outcomes
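Checking this outcome is simple arithmetic over the four cell means; a sketch using the numbers on the slide:

```python
# cell means from the slide: columns are levels of A, rows are levels of B
cells = {
    ("A1", "B1"): 30, ("A2", "B1"): 60,
    ("A1", "B2"): 30, ("A2", "B2"): 60,
}

# marginal means: collapse across the other factor
a1_mean = (cells[("A1", "B1")] + cells[("A1", "B2")]) / 2
a2_mean = (cells[("A2", "B1")] + cells[("A2", "B2")]) / 2
b1_mean = (cells[("A1", "B1")] + cells[("A2", "B1")]) / 2
b2_mean = (cells[("A1", "B2")] + cells[("A2", "B2")]) / 2

main_effect_a = a2_mean - a1_mean  # 60 - 30 = 30: main effect of A present
main_effect_b = b2_mean - b1_mean  # 45 - 45 = 0: no main effect of B

# interaction: does the effect of A change across the levels of B?
a_effect_at_b1 = cells[("A2", "B1")] - cells[("A1", "B1")]
a_effect_at_b2 = cells[("A2", "B2")] - cells[("A1", "B2")]
interaction = a_effect_at_b2 - a_effect_at_b1  # 0: no interaction
```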
