
Experiment Basics: Variables


Presentation Transcript


  1. Experiment Basics: Variables Psych 231: Research Methods in Psychology

  2. Journal summary 1 due in labs this week • See link on syllabus Announcements

  3. Independent variables • Dependent variables • Measurement • Scales of measurement • Errors in measurement • Extraneous variables • Control variables • Random variables • Confound variables Variables

  4. Control variables • Holding things constant - controls for excessive random variability • Random variables – may freely vary, to spread variability equally across all experimental conditions • Randomization • A procedure that assures that each level of an extraneous variable has an equal chance of occurring in all conditions of observation. • Confound variables • Variables that haven’t been accounted for (manipulated, measured, randomized, controlled) that can impact changes in the dependent variable(s) • Co-varies with both the dependent AND an independent variable Extraneous Variables
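To make the randomization idea concrete, here is a minimal Python sketch (not part of the original slides): presentation order is treated as an extraneous variable and is shuffled independently for every condition, so each possible order has an equal chance of occurring in all conditions of observation. The word list and condition names are illustrative assumptions.

```python
import random

# Illustrative sketch: presentation order is an extraneous variable.
# Shuffling it independently for every condition (and every participant)
# gives each possible order an equal chance of occurring everywhere,
# so order cannot systematically co-vary with the IV.
words = ["Blue", "Green", "Red", "Purple", "Yellow"]  # hypothetical stimuli

def randomized_order(items):
    order = list(items)       # copy so the master list is untouched
    random.shuffle(order)     # every permutation is equally likely
    return order

for condition in ["women", "men"]:  # the two conditions from the demo
    print(condition, randomized_order(words))
```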

  5. Divide into two groups: • men • women • Instructions: Read aloud the COLOR that the words are presented in. When done raise your hand. • Women first. Men please close your eyes. • Okay ready? Colors and words

  6. Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green List 1

  7. Okay, now it is the men’s turn. • Remember the instructions: Read aloud the COLOR that the words are presented in. When done raise your hand. • Okay ready?

  8. Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green List 2

  9. So why the difference between the results for men versus women? • Is this support for a theory that proposes: • “Women are good color identifiers, men are not” • Why or why not? Let’s look at the two lists. Our results

  10. List 2 (Men) vs. List 1 (Women): both lists contain the same words – Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green – but in one list the ink colors are Matched to the words, and in the other they are Mis-Matched.

  11. What resulted in the performance difference? • Our manipulated independent variable (men vs. women)? • The other variable (match/mis-match)? • Because the two variables are perfectly correlated we can’t tell • This is the problem with confounds [diagram: the IV and the confound co-vary together with the DV; both word lists shown again]

  12. What DIDN’T result in the performance difference? • Extraneous variables • Control: the number of words on the list; the actual words that were printed • Random: the age of the men and women in the groups • These are not confounds, because they don’t co-vary with the IV [both word lists shown again]

  13. Our goal: • To test whether variability in our IV is systematically related to variability in our DV • Control is used to: • Minimize excessive variability • Reduce the potential for confounds (systematic variability that is not part of the research design) Experimental Control

  14. Total variability in the data: T = NRexp + NRother + R • Our goal: to test for a systematic relationship between variability in the IV and variability in the DV • Nonrandom (NR) variability: • NRexp – the manipulated independent variable (IV); our hypothesis is that the IV will result in changes in the DV • NRother – extraneous variables (EVs) which co-vary with the IV, i.e. confounds • Random (R) variability: • Imprecision in measurement (DV) • Randomly varying extraneous variables (EVs) Experimental Control
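A small simulation can make the T = NRexp + NRother + R decomposition concrete. The sketch below is a hypothetical illustration, not from the slides: it generates scores as a baseline plus a treatment effect (NRexp), a confound effect that rides along with the treatment (NRother), and random noise (R); the effect sizes and sample size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                  # participants per group (assumed)

nr_exp = 5.0     # systematic effect of the manipulated IV (treatment only)
nr_other = 3.0   # systematic effect of a confound that co-varies with the IV
group = np.repeat([0, 1], n)             # 0 = control, 1 = treatment
r = rng.normal(0, 10, size=2 * n)        # random variability (noise)

# Each score mixes the three sources of variability: NRexp + NRother + R
scores = 60 + nr_exp * group + nr_other * group + r

observed = scores[group == 1].mean() - scores[group == 0].mean()
print(f"observed group difference: {observed:.2f}")
# The observed difference blends NRexp and NRother; the experiment alone
# cannot separate them, which is exactly why confounds are a problem.
```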

  15. Variability in a simple experiment: T = NRexp + NRother + R • The control group is the absence of the treatment, so NRexp = 0 • In a “perfect experiment” there are no confounds, so NRother = 0 [weight-analogy diagram comparing the variability components of the control and treatment groups] Experimental Control: Weight analogy

  16. Variability in a simple experiment: T = NRexp + NRother + R • Our experiment is a “difference detector”: it weighs the variability in the treatment group against the variability in the control group [weight-analogy difference-detector diagram: control group vs. treatment group] Experimental Control: Weight analogy

  17. If there is an effect of the treatment, then NRexp ≠ 0 • Our experiment can detect the effect of the treatment [weight-analogy difference-detector diagram: control group vs. treatment group] Experimental Control: Weight analogy

  18. Potential problems: • Confounding • Excessive random variability [difference-detector diagram] Things making detection difficult

  19. Confound • If an EV co-varies with the IV, then an NRother component will be present in the data, and may lead to misattribution of the effect to the IV [diagram: the IV and the EV co-vary together, and both are linked to the DV] Potential Problems

  20. Confound • Hard to detect the effect of NRexp, because the observed effect looks like it could be from NRexp but could be due to NRother • The experiment can detect an effect, but can’t tell where it is from [weight-analogy difference-detector diagram with NRexp, NRother, and R components] Confounding

  21. Confound • Hard to detect the effect of NRexp, because the observed effect looks like it could be from NRexp but could be due to NRother • These two situations look the same to the difference detector: one where there is an effect of the IV, and one where there is not [two weight-analogy diagrams: NRexp + NRother vs. NRother alone] Confounding

  22. Excessive random variability • If experimental control procedures are not applied • Then R component of data will be excessively large, and may make NRexp undetectable Potential Problems

  23. If R is large relative to NRexp, then detecting a difference may be difficult • The experiment can’t detect the effect of the treatment [weight-analogy difference-detector diagram: NRexp swamped by R] Excessive random variability

  24. But if we reduce the size of NRother and R relative to NRexp, then detection gets easier • So try to minimize random variability by using good measures of the DV, good manipulations of the IV, etc. • Our experiment can detect the effect of the treatment [weight-analogy difference-detector diagram] Reduced random variability

  25. How do we introduce control? • Methods of Experimental Control • Constancy/Randomization • Comparison • Production Controlling Variability

  26. Constancy/Randomization • If there is a variable that may be related to the DV that you can’t (or don’t want to) manipulate • Control variable: hold it constant • Random variable: let it vary randomly across all of the experimental conditions Methods of Controlling Variability
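As a hedged illustration of the constancy vs. randomization choice (the variable, its values, and the condition names below are made up for the example): a control variable such as room temperature can be held at one fixed value for every session, or allowed to vary randomly so that its values cannot line up with the experimental conditions.

```python
import random

# Hypothetical extraneous variable: room temperature during a session.
def session_temperature(strategy):
    if strategy == "constancy":
        # Control variable: hold it constant across all conditions.
        return 21
    if strategy == "randomization":
        # Random variable: let it vary, but pick the value at random so it
        # is spread equally across all experimental conditions.
        return random.choice([19, 20, 21, 22, 23])
    raise ValueError(f"unknown strategy: {strategy}")

for condition in ["training", "no training"]:
    print(condition, session_temperature("randomization"))
```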

  27. Comparison • An experiment always makes a comparison, so it must have at least two groups • Sometimes there are control groups • This is often the absence of the treatment: Training group vs. No training (Control) group • Without control groups it is harder to see what is really happening in the experiment • It is easier to be swayed by plausibility or inappropriate comparisons • Control groups are useful for eliminating potential confounds Methods of Controlling Variability

  28. Comparison • An experiment always makes a comparison, so it must have at least two groups • Sometimes there are control groups • This is often the absence of the treatment • Sometimes there is a range of values of the IV: 1 week of training, 2 weeks of training, 3 weeks of training Methods of Controlling Variability

  29. Production • The experimenter selects the specific values of the independent variable: 1 week of training, 2 weeks of training, 3 weeks of training • Need to do this carefully • Suppose that you don’t find a difference in the DV across your different groups • Is this because the IV and DV aren’t related? • Or is it because your levels of the IV weren’t different enough? Methods of Controlling Variability

  30. So far we’ve covered a lot of the details about experiments generally • Now let’s consider some specific experimental designs • Some bad (but common) designs • Some good designs • 1 factor, two levels • 1 factor, multi-levels • Between & within factors • Factorial (more than 1 factor) Experimental designs

  31. Bad design example 1: Does standing close to somebody cause them to move? • “Hmm… that’s an empirical question. Let’s see what happens if …” • So you stand close to people and see how long it takes before they move • Problem: no control group to establish the comparison (this design is sometimes called a “one-shot case study design”) Poorly designed experiments

  32. Bad design example 2: • Testing the effectiveness of a stop smoking relaxation program • The participants choose which group (relaxation or no program) to be in Poorly designed experiments

  33. Bad design example 2: Non-equivalent control groups • Participants self-assign to either the Training group or the No training (Control) group, and then the DV is measured • Problem: selection bias for the two groups; need to do random assignment to groups instead [design diagram: self assignment vs. random assignment of participants to the two groups, each followed by measurement] Poorly designed experiments
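A minimal sketch of random assignment (the participant labels and group sizes are assumptions) shows the fix for self-assignment: shuffle the participant list first, so every person has an equal chance of ending up in either group.

```python
import random

# Hypothetical participant pool; in a real study these would be sign-ups.
participants = [f"participant_{i}" for i in range(20)]

random.shuffle(participants)          # assignment no longer depends on who chose what
half = len(participants) // 2
training_group = participants[:half]  # each person had an equal chance
control_group = participants[half:]   # of landing in either group

print("Training:", training_group)
print("Control: ", control_group)
```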

  34. Bad design example 3:Does a relaxation program decrease the urge to smoke? • Pretest desire level – give relaxation program – posttest desire to smoke Poorly designed experiments

  35. Bad design example 3: One-group pretest-posttest design • Participants: pre-test (DV) – training program (IV) – post-test (DV) • Problems include: history, maturation, testing, and more • One fix is to add another factor: a no-training group that also gets the pre-test and post-test [design diagram: pre-test, training vs. no training, post-test measures] Poorly designed experiments

  36. Good design example • How does anxiety level affect test performance? • Two groups take the same test • Grp1 (moderate anxiety group): 5 min lecture on the importance of good grades for success • Grp2 (low anxiety group): 5 min lecture on how good grades don’t matter, just trying is good enough • 1 Factor (Independent variable), two levels • Basically you want to compare two treatments (conditions) • The statistics are pretty easy, a t-test 1 factor - 2 levels

  37. Good design example • How does anxiety level affect test performance? [design diagram: participants are randomly assigned to the Low or Moderate anxiety condition (IV), then take the test (DV)] 1 factor - 2 levels

  38. Good design example • How does anxiety level affect test performance? • One factor (anxiety), two levels (low, moderate) • Use a t-test to see if the two condition means (e.g. 60 vs. 80) are statistically different: t = (observed difference between conditions) / (difference expected by chance) [plot: test performance vs. anxiety for the low and moderate conditions] 1 factor - 2 levels
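For a 1 factor, 2 level design, the comparison in the slide’s formula can be run as an independent-samples t-test. The sketch below uses simulated scores (the means of 60 and 80 echo the slide; the spread and sample sizes are assumptions).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated test scores for the two anxiety conditions (values assumed)
low_anxiety = rng.normal(60, 15, size=30)
moderate_anxiety = rng.normal(80, 15, size=30)

# Independent-samples t-test: observed difference between conditions
# relative to the difference expected by chance (the standard error)
t, p = stats.ttest_ind(moderate_anxiety, low_anxiety)
print(f"t = {t:.2f}, p = {p:.4f}")
```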

  39. Advantages: • Simple, relatively easy to interpret the results • Is the independent variable worth studying? • If no effect, then usually don’t bother with a more complex design • Sometimes two levels is all you need • One theory predicts one pattern and another predicts a different pattern 1 factor - 2 levels

  40. Disadvantages: • The “true” shape of the function is hard to see • Interpolation and extrapolation are not a good idea • Interpolation: what happens within the range that you test? [plot: test performance vs. anxiety, low and moderate conditions only] 1 factor - 2 levels

  41. Disadvantages: • The “true” shape of the function is hard to see • Interpolation and extrapolation are not a good idea • Extrapolation: what happens outside of the range that you test (e.g. high anxiety)? [plot: test performance vs. anxiety, with high anxiety beyond the tested range] 1 factor - 2 levels

  42. For more complex theories you will typically need more complex designs (more than two levels of one IV) • 1 factor - more than two levels • Basically you want to compare more than two conditions • The statistics are a little more difficult, an ANOVA (Analysis of Variance) 1 Factor - multilevel experiments

  43. Good design example (similar to the earlier example) • How does anxiety level affect test performance? • Three groups take the same test • Grp1 (moderate anxiety group): 5 min lecture on the importance of good grades for success • Grp2 (low anxiety group): 5 min lecture on how good grades don’t matter, just trying is good enough • Grp3 (high anxiety group): 5 min lecture on how the students must pass this test to pass the course 1 Factor - multilevel experiments

  44. [design diagram: participants are randomly assigned to the Low, Moderate, or High anxiety condition (IV), then take the test (DV)] 1 factor - 3 levels

  45. [plot: test performance vs. anxiety (low, moderate, high) for the three-level design] 1 Factor - multilevel experiments

  46. Advantages • Gives a better picture of the relationship (function) • Generally, the more levels you have, the less you have to worry about your range of the independent variable 1 Factor - multilevel experiments

  47. [plots: the relationship between anxiety and test performance as seen with 2 levels vs. 3 levels of the IV] Relationship between Anxiety and Performance

  48. Disadvantages • Needs more resources (participants and/or stimuli) • Requires more complex statistical analysis (analysis of variance and pair-wise comparisons) 1 Factor - multilevel experiments

  49. The ANOVA just tells you that not all of the groups are equal. • If this is your conclusion (you get a “significant ANOVA”) then you should do further tests to see where the differences are • High vs. Low • High vs. Moderate • Low vs. Moderate Pair-wise comparisons
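To illustrate this two-step analysis, here is a sketch with simulated data (the group means, spreads, and sample sizes are assumptions): a one-way ANOVA first asks whether the three group means are all equal, and pair-wise t-tests then locate the differences. A correction for multiple comparisons (e.g. Bonferroni) would normally be applied to the follow-up tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated test scores for the three anxiety groups (values assumed)
low = rng.normal(60, 15, size=30)
moderate = rng.normal(80, 15, size=30)
high = rng.normal(65, 15, size=30)

# One-way ANOVA: tests whether the group means are all equal
f, p = stats.f_oneway(low, moderate, high)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# Pair-wise follow-up comparisons to see where the differences are
pairs = {"high vs. low": (high, low),
         "high vs. moderate": (high, moderate),
         "low vs. moderate": (low, moderate)}
for name, (a, b) in pairs.items():
    t, p = stats.ttest_ind(a, b)
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")
```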
