
Lessons from Social Psychology for New Experimental Disciplines ( and vice versa )



  1. Lessons from Social Psychology for New Experimental Disciplines (and vice versa) Roger Giner-Sorolla University of Kent NCRM Festival 2008 University of Oxford

  2. Experiments in social psychology • First experiment: Triplett (1898)

  3. Complicated analyses!

  4. Famous social psychology experiments since then Zimbardo’s prison experiment Milgram’s obedience studies

  5. Don’t get the wrong impression … Experiments in social psychology are not usually so flamboyant. Giner-Sorolla (2001): effect of primed emotion words on dieters’ eating • ‘guilty’ et al., ‘happy’ et al.: ate less • ‘proud’ et al., ‘depressed’ et al.: ate more

  6. Experiments in other fields of study Behavioural economics: Apply psychological factors to modify classical economic theory Kahneman & Tversky, Nobel 2002 Dan Ariely, MIT … others … (but publish in psychology journals…)

  7. Experimental philosophy (for critical review, Kauppinen, 2007) Survey research to gauge, e.g., lay understanding of terms and assertions John is a psychopathic criminal. He is an adult of normal intelligence, but he has no emotional reaction to hurting other people. John has hurt and indeed killed other people when he has wanted to steal their money. He says that he knows that hurting others is wrong, but that he just doesn’t care if he does things that are wrong. Does John really understand that hurting others is morally wrong? (Nichols, 2002) 85% of participants say yes

  8. Behavioural experimental anthropology Economic experiments (e.g., ultimatum game; Tracer, 2003) in hard-to-reach societies Papua New Guinea highlanders: low offer rate, high rejection rate

  9. Behavioural experiments in biology • Curtis & Biran (2001): what is disgusting and why?

  10. Experiments in law e.g. Kahan, Hoffman, & Braman, in press Video of a car chase presented as support of a US Supreme Court decision that the police did not act unreasonably … did the general public agree?

  11. What is an experiment? • Setting up a situation • Observing results More precise usage in social psychology: • Manipulation of an independent variable (IV) • Minimizing confounds • Observing dependent variable (DV)

  12. IV causes DV if… 1. IV covaries with the DV 2. IV precedes the DV in time (not necessarily in measurement) 3. No combination of 3rd variables can fully account for the relationship (no full mediation) Example: “Expertise causes greater persuasion”

  13. In a correlational study : Simple correlation can establish #1 (covariance)… But common sense can’t always establish #2 (priority in time)… And we can establish #3 (independence from 3rd variables) only for one or more 3rd variables that are known and measured, using multivariate analysis. Difficult to test all possible 3rd variables!
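The third-variable problem on this slide can be sketched in a short simulation (a minimal illustration; `z`, `x`, and `y` are hypothetical stand-ins for a third variable, expertise, and persuasion). Two variables driven by a common cause correlate substantially, but the correlation all but disappears once the measured third variable is regressed out:

```python
import random
import statistics

def corr(a, b):
    """Population Pearson correlation."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (statistics.pstdev(a) * statistics.pstdev(b) * len(a))

def residualize(y, x):
    """Residuals of y after simple linear regression on x."""
    b = corr(x, y) * statistics.pstdev(y) / statistics.pstdev(x)
    a = statistics.mean(y) - b * statistics.mean(x)
    return [yi - (a + b * xi) for yi, xi in zip(y, x)]

random.seed(1)
n = 2000
z = [random.gauss(0, 1) for _ in range(n)]   # third variable (common cause)
x = [zi + random.gauss(0, 1) for zi in z]    # "expertise", driven by z
y = [zi + random.gauss(0, 1) for zi in z]    # "persuasion", also driven by z

print(round(corr(x, y), 2))                  # spuriously sizeable
print(round(corr(residualize(x, z), residualize(y, z)), 2))  # near zero
```

Of course, this adjustment only works for third variables that were known and measured — which is exactly the slide’s caveat.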

  14. Requirements to show causation in experiment • The IV must be manipulated as cleanly as possible – no confounding third variables to provide a plausible alternate explanation for the manipulation’s effects. Examples: “expert” and “nonexpert” give different speeches – confounded! “expert” and “nonexpert” are different people in other ways than expertise – confounded!

  15. Heavy metal confounds [images: Exhibit A, Exhibit B]

  16. Requirements to show causation • Participants must be randomly assigned to conditions, otherwise participant choice is a confound. Ideally, only the IV and random error contribute to differences among conditions.
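Random assignment can be sketched in a few lines (a minimal illustration with invented “prior opinion” scores): after shuffling, the two conditions differ in average prior opinion only by random error, not systematically.

```python
import random
import statistics

random.seed(0)
prior_opinion = [random.gauss(0, 1) for _ in range(1000)]  # pre-existing differences
random.shuffle(prior_opinion)                              # random assignment
expert, non_expert = prior_opinion[:500], prior_opinion[500:]

# Group means differ only by random error; with this many participants
# the difference is small relative to the spread of opinions
diff = abs(statistics.mean(expert) - statistics.mean(non_expert))
print(round(diff, 3))
```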

  17. Sources of variance besides IV Systematic: can lead to biasing effects (if in direction of your hypotheses) or weakening (if against direction) Ex.: More persuadable people sign up for the condition with the expert (biasing) More persuadable people sign up for the condition with the non-expert (weakening) Extremely important to control these.

  18. Sources of variance besides IV Random: always lead to weakening effects; not a credible alternate explanation for significant results Ex.: People have a variety of opinions before being randomly assigned; this will of course increase the variance of their final opinion, but has nothing necessarily to do with what condition they are in.

  19. Random error Desirable to control this – how? • Standardisation (only look at neutral people) • Matching (run pairs of people with the same initial opinion; one hears an expert, one, a non-expert) • Analysis (measure prior opinion and factor it out)
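The third option (“analysis”) can be sketched as a change-score adjustment (a toy simulation; the effect size and variances are invented): measuring prior opinion and subtracting it out shrinks the error variance within each condition, making the same effect easier to detect.

```python
import random
import statistics

random.seed(42)
effect = 0.5
prior = [random.gauss(0, 2) for _ in range(400)]  # noisy prior opinions
cond = [i % 2 for i in range(400)]                # alternating assignment (equal N)
final = [p + effect * c + random.gauss(0, 1) for p, c in zip(prior, cond)]

def group_sd(scores, cond, c):
    """Spread of scores within one condition."""
    return statistics.pstdev([s for s, k in zip(scores, cond) if k == c])

change = [f - p for f, p in zip(final, prior)]    # factor out prior opinion
print(round(group_sd(final, cond, 0), 1))         # large error variance
print(round(group_sd(change, cond, 0), 1))        # much smaller
```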

  20. An additional consideration • The manipulation should be effective in influencing the independent variable. Manipulation > Expertise > Persuasion Manipulation check can establish this. ‘How expert did Prof. X appear to be in the field of educational policy?’ (Most useful if experiment doesn’t work!)
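A manipulation check can be analysed like any other DV — compare check ratings across conditions. A minimal sketch with invented 1–7 ratings for the expertise question on the slide, using a hand-rolled Welch t statistic:

```python
import math
import statistics

def welch_t(a, b):
    """Welch t statistic for two independent group means."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical ratings: 'How expert did Prof. X appear to be?'
expert_cond = [6, 7, 5, 6, 6, 7, 5, 6, 7, 6]
nonexpert_cond = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]
print(round(welch_t(expert_cond, nonexpert_cond), 1))
```

A large t on the check shows the manipulation moved the intended variable — and, as the slide notes, a null result here is most informative when the main experiment fails.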

  21. Other terms • Operationalization: how you manipulate or measure a conceptual IV or DV. • Conceptual replication: same experiment, different operationalizations.

  22. Other terms • Quasi-experiment: experiment with non-randomly-assigned groups (ex., people in and out of treatment). • Control group: condition with no manipulation; useful for establishing baseline level of the DV.

  23. Other terms • Experimental realism: how much does the procedure create the desired psychological state within the participant? Ex.: ‘emergency’ via smoke coming from duct. • Ecological validity: how much does the procedure resemble the real-life version of the phenomenon you are studying? Ex.: person perception using phrases vs. video

  24. Limits of the experiment Experiments favour internal validity (exactitude) over external validity (generalization to real life) – reduce confounding factors and random variance. Manipulation favours studying single factors over multiple factors Many interesting variables can’t be manipulated ethically or practically

  25. Advantages of the experiment Best way to establish causal relationships decisively Can study ‘basic’ questions, minimising influence of context Conclusions can be confirmed by more applied studies

  26. True experiments across disciplines Gutierrez & Giner-Sorolla, 2007, Experiment 2

  27. True experiments across disciplines Manipulation (happy/sad music) → Mediator (self-report of mood) → Outcome (eating)

  28. True experiments across disciplines

  29. Why true experiments are great for external communication 1. Manipulation and results are simple to understand (unlike SEM)

  30. Why true experiments are great for external communication 2. Underlying experimental method justifies the causal inference people draw anyway

  31. Four-way tradeoffs • Investment • Internal Validity • Strength • External Validity

  32. Making results strong … while not sacrificing too much validity Starting strong: “no result” is more likely to mean there’s nothing out there. Manipulation, setting and measures are all important.

  33. Strong manipulations • Value strength over subtlety; grab attention • Example: happy mood • Are credible (possible trade-off with research investment: for example, written scenario vs. “news video”?) • Are clearly expressed (participant input & feedback in pilot run) • Within-participants measures? • Efficiency and strength … sometimes

  34. Strong settings • Focus the attention (lab vs. in the street … or web?) • Mood lighting! • Reduce error variance by eliminating or matching: • Experimenter characteristics • Using pre-measures of DV • Using covariates related to DV

  35. Strong measures • Are easily noticed (not subtle) • Are clearly written and administered: pre-test on participant population for clarity • Watch for floor or ceiling effects (example: charity study) • Don’t leave random assignment to chance – use a random method that ensures equal N • The 10-participant peek-in – basic check on item use and participant comments
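One common way to ensure equal N is shuffled-block (blocked) randomisation — a minimal sketch (the function name and seed are illustrative): each complete block contains every condition exactly once, in random order, so cell sizes stay balanced while assignment remains unpredictable.

```python
import random

def blocked_assignment(n_participants, conditions, seed=None):
    """Assign participants to conditions in shuffled blocks:
    every complete block contains each condition exactly once."""
    rng = random.Random(seed)
    order = []
    while len(order) < n_participants:
        block = list(conditions)
        rng.shuffle(block)
        order.extend(block)
    return order[:n_participants]

schedule = blocked_assignment(40, ["expert", "non-expert"], seed=7)
print(schedule.count("expert"), schedule.count("non-expert"))
```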

  36. Pilot testing for strength Often a good idea if the manipulation is new or being tried in a new context Include only the main DV and manipulation check, and enough participants to detect a strong effect on the check (12-20 per condition)
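The 12–20-per-condition rule of thumb can be checked with a rough power calculation (a normal-approximation sketch; d = 1.0 is assumed here as a “strong” standardised effect on the manipulation check, two-tailed α = .05):

```python
import math

def power_two_group(d, n_per_cell):
    """Normal-approximation power for a two-group comparison,
    standardised effect size d, two-tailed alpha = .05 (z_crit = 1.96)."""
    z = d * math.sqrt(n_per_cell / 2) - 1.96
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

for n in (12, 16, 20):
    print(n, round(power_two_group(1.0, n), 2))
```

With a strong effect, 12–20 participants per condition gives roughly 70–90% power on the check, consistent with the slide’s range.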

  37. Excluding participants On the basis of: • not “getting” the manipulation on a basic level • Expressing suspicion If more than 10%, need to rethink procedures in future experiments

  38. Internally valid manipulations Precision: the manipulation varies only the variable of interest; the measures tap only the variable they’re supposed to. After establishing a strong effect (without glaring validity problems!) tighten up validity. Some ways to get validity from day 1 …

  39. Validity without sacrifice I: Parallelism In procedure and materials Example: This could be better….

  40. Validity without sacrifice I: Parallelism Manipulations now vary only at the key word.

  41. Validity without sacrifice II: Pilot test for validity Look at experimental stimuli equivalence on possible confounds, not just strength Example: Attractiveness of “White male” facial stimuli

  42. Validity without sacrifice III: Include tests for confounds A confound is really only an unwanted mediator Example: Is your effect of goal priming “only” a mood effect? Include measure of mood to find out!

  43. Sacrifices for validity I: Add conditions Add control condition to see where the “action” is

  44. Sacrifices for validity I: Add conditions Sometimes the “right” control condition is not obvious; experiments with multiple controls need to be done. Example: What’s the right control condition for failure feedback: no feedback (lack of parallelism) or neutral performance feedback (gives people information)?

  45. Sacrifices for validity I: Add conditions Added experimental conditions can be used to establish a more precise causal story, too (Spencer, Zanna & Fong, 2005). Example: No-responsibility condition in Castano & Giner-Sorolla (2006) More infrahumanization when reading of a killing your group has done: is it just-world belief, or defence against guilt? Add “accidental killing” condition to find out.

  46. Sacrifices for validity II: Trade strength for subtlety Demand characteristics might explain results (but unsubtlety can also hinder results!) • From a within-participants manipulation, go between-participants • Go from an obvious manipulation to an incidental one (e.g. Gilovich, 1981: war and metaphor) • Go from obvious measures to implicit measures

  47. Sacrifices for validity III: Buy validity with resources • From “scenario” studies to more compelling manipulations (possibly with deception) • Include suspicion debriefing with “funnel” characteristics. A must when using deception or subtle manipulations!

  48. External validity • Generalizability In the later stages of an article or research programme: • add conditions to test boundary conditions and moderators. • Vary the procedures and materials to see how robust the underlying concept is (example: manipulating power).

  49. External validity: Generalizability Extending populations beyond the easily obtained … • General population • Children • Other cultures

  50. External validity: Ecological validity To what extent do your procedures resemble real life? Most objections to the “experiment” are actually objections to the lab. Can good experiments be done outside the lab in real world conditions?
