Learn how to identify and address confounding variables in research design to ensure valid results in experiments. Differentiate between acceptable and unacceptable group differences to avoid systematic errors in data analysis.
New Complications • Adding a control group to our design deals with the confounds discussed so far • Unfortunately, however, it also creates new potential confounds
Selection • With a between-subjects variable, one has multiple groups of people • You want the groups of people to be as similar as possible to one another • This is known as having equivalent groups • Notice that I did not say identical groups • The groups will differ to some extent
Group Differences • What differences are acceptable or unacceptable? • Acceptable • The groups differ in terms of an uncontrolled variable that does NOT correlate with the DV • Unacceptable • The groups differ in terms of an uncontrolled variable that CORRELATES with the DV • These are known as systematic differences
Example • Does high vs. low waitperson attentiveness affect tipping? • Acceptable • Groups differ in terms of hair color • High attentiveness = mostly blondes • Low attentiveness = mostly brunettes • Unacceptable • Groups differ in terms of table waiting experience • High attentiveness = mostly waited tables • Low attentiveness = mostly not
Confound • If groups differ in terms of an uncontrolled variable that correlates with the DV, then the study MAY be confounded • To determine whether the study is confounded, one must consider whether the uncontrolled variable provides an alternative explanation for the study’s outcome
Example • Outcome • Tips were higher in the high attentiveness condition than the low attentiveness condition • Not confounded • High attentiveness = mostly not • Low attentiveness = mostly waited tables • Confounded • High attentiveness = mostly waited tables • Low attentiveness = mostly not
Example • Outcome • Tips were equal in the high and low attentiveness conditions • Not confounded • High attentiveness = mostly waited tables • Low attentiveness = mostly not • Confounded • High attentiveness = mostly not • Low attentiveness = mostly waited tables
Solutions • There are a number of ways to mitigate a Selection confound • Make the uncontrolled variable another IV • Create equivalent groups • By randomly placing participants into one’s conditions • This is better than matching when collecting a large sample • By matching participants in one’s conditions • This is better than randomization when collecting a small sample
Example: Make IV • Do waitperson attentiveness (high vs. low) and patron experience (waited tables vs. not) affect tipping?
Example: Randomize • Randomly assign participants to the high and low attentiveness conditions • One can do this purely randomly, wherein every participant has an equal chance of being assigned to either condition • One can do this quasi-randomly, with the constraint that, once a participant has been assigned to a condition, no further participants are assigned to that condition until every condition contains an equal number of participants • This is known as block randomization
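Below is a minimal Python sketch of block randomization; the participant IDs and condition names are placeholders, not part of the original studies.

```python
import random

def block_randomize(participants, conditions):
    """Assign participants to conditions in shuffled blocks so that the
    group sizes never differ by more than one (block randomization)."""
    assignments = {}
    block = []
    for person in participants:
        if not block:                      # start a new block with one slot per condition
            block = list(conditions)
            random.shuffle(block)
        assignments[person] = block.pop()  # use up the block before starting another
    return assignments

# Hypothetical usage with made-up participant IDs
people = [f"P{i}" for i in range(1, 11)]
print(block_randomize(people, ["high_attentiveness", "low_attentiveness"]))
```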
Example: Matching • Pair participants who have waited tables • Pair participants who have not waited tables • Randomly assign one member of each pair to the high and low attentiveness conditions • To be successful, one must have a good reason to match participants, as well as a good way to measure the matching variable
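A minimal sketch of matched random assignment, assuming pairs have already been matched on the measured variable (the pair IDs below are hypothetical):

```python
import random

def assign_matched_pairs(pairs):
    """For each matched pair, randomly send one member to the high-attentiveness
    condition and the other to the low-attentiveness condition."""
    groups = {"high": [], "low": []}
    for member_a, member_b in pairs:
        first, second = random.sample([member_a, member_b], 2)  # random order within the pair
        groups["high"].append(first)
        groups["low"].append(second)
    return groups

# Hypothetical pairs matched on table-waiting experience
experienced_pairs = [("P1", "P2"), ("P3", "P4")]
inexperienced_pairs = [("P5", "P6"), ("P7", "P8")]
print(assign_matched_pairs(experienced_pairs + inexperienced_pairs))
```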
Our studies • Experimental group • Pre-Test, Verbal Training w/ Feedback, Post-Test • Control group • Pre-Test, Verbal Training w/o Feedback, Post-Test • Participants were placed randomly into the Experimental (Feedback) and Control (No Feedback) groups
Attrition • Sometimes people in one group quit the study more often than people in the other group • This is known as an attrition problem
Attrition 1. Collect active Pre-Test data 2a. Experimental group: Provide training with feedback • Many people in this group do not complete the study 2b. Control group: Provide training without feedback • Few people in this group do not complete the study 3. Don’t change anything else 4. Collect active Post-Test data 5. Compare Pre and Post-Test data • Is the difference due to the training, or to who quit the study?
Solution • There is nothing methodological that one can do to prevent an attrition confound • One must be watchful for attrition and, if such a confound is suspected, adjust one’s conclusions accordingly
Order • Within-Subjects variables expose each participant to every level of the Independent Variable • Accordingly, in most circumstances, the order in which the levels are presented to the participant becomes an issue • This is known as an Order problem
Solution • Order problems can be mitigated by presenting the levels of the Independent Variable in different orders to different people • The orders must be created so that each level occurs (roughly) equally often at each phase of the testing session
Example • Consider a variation of our studies • In this variation, participants throw the beanbag to a target with their eyes closed and with them open • Throwing repeatedly might cause fatigue, so throwing in the second half of the study will always be worse than throwing in the first half • Solution • Half of the participants throw with their eyes closed first and then with their eyes open, while the other half throws in the opposite order
Counterbalancing • Complete counterbalancing • Uses each of the possible orders of treatments • The number of possible orders is X!, where X is the number of conditions and “!” denotes the factorial • As the number of conditions increases, the number of possible orders increases markedly (a quick sketch of this growth follows) • Requires at least as many participants as there are possible orders
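As a quick illustration of that factorial growth, the snippet below uses placeholder condition labels to list every possible order for four conditions and then prints how fast the count rises:

```python
import math
from itertools import permutations

conditions = ["A", "B", "C", "D"]          # placeholder condition labels
orders = list(permutations(conditions))    # every possible order of the treatments
print(len(orders))                         # 24, i.e. 4!

# The number of orders grows rapidly with the number of conditions
for x in range(2, 7):
    print(f"{x} conditions -> {math.factorial(x)} orders")
```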
Counterbalancing • Partial counterbalancing • Uses a subset of the possible orders • The subset may • Be chosen at random • Employ a Latin Square
Latin Square • An example of a Latin Square would be • A, B, C, D • B, C, D, A • C, D, A, B • D, A, B, C • In a standard Latin square, each level occurs equally often in every sequential position • One or more people would be exposed to each of these orders
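The square above is a cyclic one; here is a small sketch of how such a square can be generated by rotating the list of levels (the labels A–D are placeholders):

```python
def cyclic_latin_square(levels):
    """Build a Latin square by rotating the level list: each level appears
    exactly once in every row and in every sequential position."""
    n = len(levels)
    return [[levels[(row + col) % n] for col in range(n)] for row in range(n)]

for order in cyclic_latin_square(["A", "B", "C", "D"]):
    print(" ".join(order))
# A B C D
# B C D A
# C D A B
# D A B C
```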
Our Independent Variables • Between-Subjects • Feedback • Two conditions, i.e., with and without feedback • Within-Subjects • Session • Three levels, i.e., Pre-Test, Training, Post-Test
Our studies • In our studies, the levels of our Within-Subjects variable (Session) must run in a fixed order (Pre-Test, then Training, then Post-Test), so we cannot counterbalance them
Carry-Over Effects • When employing a within-subjects IV, what happens during one level sometimes affects what happens during subsequent levels • This is known as a carry-over effect
Example • Does high vs. low waitperson attentiveness affect tipping? • High attentiveness → Low attentiveness • In this order, Low attentiveness seems worse than it otherwise would because of prior exposure to High attentiveness • Low attentiveness → High attentiveness • In this order, High attentiveness seems better than it otherwise would because of prior exposure to Low attentiveness
Example • Note that each sequence has a unique effect on the data • High → Low = Low seems worse • Low → High = High seems better • Note also that these effects do NOT cancel • The worsening of the Low attentiveness level is not counteracted by anything • The enhancing of the High attentiveness level is not counteracted by anything
Counterbalancing • Given that each sequence has a unique effect on the data, and those unique effects do not cancel one another, counterbalancing does NOT eliminate carry-over effects
Example • High → Low = Low seems worse • Low → High = High seems better • When you combine the data across the two orders, you get • High averaged with enhanced High (High looks better) • Low averaged with worsened Low (Low looks worse) • A small numeric sketch follows
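A small numeric illustration of why the two biases do not cancel; the tip percentages and the size of the carry-over shift are made-up values, chosen only to show the direction of the bias:

```python
# Hypothetical "true" tip percentages and a made-up carry-over shift of 3 points
true_high, true_low, shift = 20.0, 15.0, 3.0

# Order 1 (High -> Low): Low is experienced after High and seems worse
order1_high, order1_low = true_high, true_low - shift
# Order 2 (Low -> High): High is experienced after Low and seems better
order2_low, order2_high = true_low, true_high + shift

# Counterbalanced averages: the biases stack rather than cancel
avg_high = (order1_high + order2_high) / 2   # 21.5, inflated above the true 20.0
avg_low = (order1_low + order2_low) / 2      # 13.5, deflated below the true 15.0
print(avg_high, avg_low)
```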
Solution • If one suspects that a carry-over effect may happen, then it is advisable to use a Between-Subjects Independent Variable • This has consequences, though, because within-subjects variables require fewer participants and have greater statistical power than between-subjects variables
Bias • Lastly, two forms of bias can confound one’s experiment • Experimenter bias • Experimenters influence the participants unintentionally in ways that make the study come out as expected • Participant bias • Participants may act differently because they are in an experiment, which is known as a Hawthorne effect • Participants may try to be too good, which is known as an Orne effect
Solutions • Experimenter bias • Minimize and standardize the interaction between the participant and the experimenter • When possible, design studies so that the participants, or both the experimenters and the participants, do not know which condition is being run • These are known as blind and double-blind studies, respectively
Solutions • Participant bias • Minimize and standardize the interaction between the participant and the experimenter • Provide the participants with just enough information so that they can make an educated judgment about informed consent, but no more • This is known as having naïve participants
Our studies • In our studies, we use a standard consent form and instructions, in order to minimize and standardize the experimenters’ interaction with the participants