Experimental design
Why do psychologists perform experiments?

Correlational methods merely identify relationships: they cannot establish cause and effect.

A correlation between two variables is inherently ambiguous:

X might cause Y

Y might cause X

X and Y might both be caused by a third variable or set of variables.
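The third-variable case can be illustrated with a small simulation (a hypothetical sketch, not part of the original slides): a lurking variable Z drives both X and Y, so X and Y correlate strongly even though neither causes the other.

```python
import random
import math

random.seed(1)

# Hypothetical confound: Z (e.g. national wealth) drives both X and Y.
z = [random.gauss(0, 1) for _ in range(1000)]
x = [zi + random.gauss(0, 0.5) for zi in z]  # X is caused by Z
y = [zi + random.gauss(0, 0.5) for zi in z]  # Y is caused by Z, not by X

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return cov / (sa * sb)

r = pearson(x, y)
print(f"r(X, Y) = {r:.2f}")  # strong correlation, zero direct causation
```

The correlation alone cannot distinguish this situation from X causing Y, which is exactly the ambiguity the slides describe.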

Ambiguous correlations:

Clear data: strong correlation between amount of dietary fat and incidence of breast cancer.

Difficult interpretation:

1. Dietary fat causes breast cancer?

2. People in wealthy countries also eat more refined foods, sugar, salt, etc. - do these cause breast cancer?

3. There are more obese women in wealthier countries - does obesity cause breast cancer?

4. People in wealthy countries live longer and hence are more likely to develop cancer?

5. Genetic factors - are Caucasians more likely to develop breast cancer?

6. Reporting factors - better cancer diagnosis in wealthy countries?

Carroll, K.K. (1975). Experimental evidence of dietary factors and hormone-dependent cancers. Cancer Research, 35, 3374-3383.

The experimental method is the best way of identifying causal relationships.

X causes Y (PIN CAUSES MONEY) if:

X occurs before Y (PIN BEFORE MONEY);

Y happens in the presence of X (MONEY APPEARS WHEN PIN IS ENTERED);

Y does not happen in absence of X. (NO PIN, NO MONEY).


Experiments enable us to eliminate alternative explanations:

To establish causality, we use groups that differ systematically only on one variable (the independent variable) and measure the effects of this on an outcome variable (the dependent variable).
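Random allocation is what makes the groups "differ systematically only on one variable": any other participant characteristic is spread haphazardly across conditions. A minimal sketch (the participant IDs are hypothetical):

```python
import random

random.seed(42)

participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical participant IDs
random.shuffle(participants)  # randomisation breaks any systematic ordering

# Independent variable: which condition a participant is assigned to.
half = len(participants) // 2
groups = {"treatment": participants[:half], "control": participants[half:]}

print(len(groups["treatment"]), len(groups["control"]))  # 10 10
```

The dependent variable would then be whatever outcome is measured in each group; any systematic difference between the groups can be attributed to the condition itself.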

Good experimental designs maximise validity:

Internal validity:

Extent to which we can be sure that changes in the dependent variable are due to changes in the independent variable.

External validity (ecological validity):

Extent to which we can generalise from our participants to other groups (e.g. to real-life situations).

Threats to the internal validity of an experiment's results (e.g. Campbell and Stanley, 1969):

Time threats:

History

Maturation

Selection-maturation interaction

Repeated testing

Instrument change

Group threats:

Initial non-equivalence of groups

Regression to the mean

Differential mortality

Control group awareness of its status.

Participant reactivity threats:

Experimenter effects, reactivity, evaluation apprehension.

History:

Extraneous events between pre-test and post-test affect participants' post-test performance.

Ask participants how often they use condoms.

Administer advice on safe sexual practices.

Media publicises statistics showing STDs are on the increase.

Ask participants how often they use condoms.

Changes in reported sexual behaviour may reflect the advice, or participants' heightened awareness of the dangers of unsafe sex following the media coverage.


Solution: add a control group that is not given advice on safe sex.

Maturation:

Participants may change during the course of the study (e.g. get older, more experienced, fatigued, etc.).

Effects of an educational intervention on reading ability:

Children's reading ability tested at age 6.

Educational treatment administered.

Children's reading ability tested again, at age 9.

Changes in reading ability may be due to reading program and/or normal developmental changes with age.


Solution: add a control group who do not receive the reading program, and whose reading ability is also tested at ages 6 and 9.
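With the control group in place, normal maturation can be subtracted out: the program's effect is the treatment group's gain minus the control group's gain. A sketch with made-up scores (not real data):

```python
# Hypothetical mean reading scores at ages 6 and 9 (illustrative values only).
treatment_pre, treatment_post = 40.0, 65.0   # group that received the program
control_pre, control_post = 41.0, 55.0       # group that matured without it

maturation_gain = control_post - control_pre   # 14.0: improvement due to age alone
raw_gain = treatment_post - treatment_pre      # 25.0: program effect + maturation
program_effect = raw_gain - maturation_gain    # 11.0: attributable to the program

print(f"program effect = {program_effect}")
```

Without the control group, the full 25-point gain would wrongly be credited to the program.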

Selection-maturation interaction:

Different participant groups mature at different rates, and this affects how they respond to the experimenter's manipulations.

Effectiveness of sex education program.

10-year-old boys in experimental group.

8-year-old boys in control group.

Pre-test on knowledge about sex.

Administer sex education program.

Post-test a year later: experimental group know more about sex.

But - results may be due to maturational differences (puberty in older group) as well as exposure to program.


Solution: ensure groups differ only on the IV (e.g. in this case, match groups for age).

Time threats: repeated testing.

Taking a pre-test may alter the results of the post-test.

Effects of fatigue on emergency braking in a simulator:

Pre-test: measure driver's braking RT to an unexpected hazard.

Fatigue induction (30 minutes of simulator driving).

Post-test: measure driver's braking RT to an unexpected hazard.

The pre-test may alert drivers to the possibility of unexpected tests, and hence maintain arousal at a higher level than it would otherwise be.


Solution: in studies like this, avoid repeated testing, or add a control group who get only the post-test.

Instrument change:

e.g. experimenter tests all of one group before testing another, but becomes more practiced/bored/sloppy while running the study.

There are now two systematic differences between conditions: the experimental manipulation, and the change in the experimenter.

A problem for observational studies (changes in observer's sophistication affects scoring of behaviours).


Solution: use highly standardised procedures; randomly allocate participants to conditions; familiarise oneself with the behaviours before formal observations begin.

Selection (initial non-equivalence of groups):

Cohort effects - groups differ on many variables other than the one of interest (e.g. gender, age).

Study examines gender differences in attitudes to parking in disabled bays.

"Females" are also old ladies, "males" are also Stormtroopers. Cannot conclude observed attitude differences are due solely to gender.

Solution: often difficult to fix. Match on other IVs?

Regression to the mean:

Participants who give very low or very high scores on one occasion tend to give less extreme scores when tested again.

e.g. testing the effectiveness of a remedial reading program:

test children's reading ability;

select the worst children for the reading program;

re-test children - falsely assume that any improvement is due to the reading program.


Solutions:

Select children randomly, not on the basis of low scores.

Avoid floor and ceiling effects with scores.
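Regression to the mean can be demonstrated with no treatment at all: if scores are "true ability" plus measurement noise, the lowest scorers on a first test will, on average, score higher on a second test simply because their bad luck does not repeat. A sketch with made-up parameters:

```python
import random

random.seed(0)

n = 2000
true_ability = [random.gauss(100, 10) for _ in range(n)]
test1 = [a + random.gauss(0, 10) for a in true_ability]  # noisy measurement 1
test2 = [a + random.gauss(0, 10) for a in true_ability]  # noisy measurement 2

# Select the "worst" 10% on test 1, as the remedial program would.
worst = sorted(range(n), key=lambda i: test1[i])[: n // 10]

mean1 = sum(test1[i] for i in worst) / len(worst)
mean2 = sum(test2[i] for i in worst) / len(worst)
print(f"selected group, test 1: {mean1:.1f}; test 2: {mean2:.1f}")
```

The selected group improves on re-test even though nothing was done to them, which is exactly why a gain after a remedial program cannot be attributed to the program without a comparison group.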

Differential mortality:

Participants may drop out of the study, and at different rates in different groups. What factors account for the missing participants?

(e.g. in a phobia-treatment study, only the really motivated patients stay on and overcome their phobia - so apparent benefits are due to the treatment plus personality factors, not the treatment alone).

Solution: often difficult to fix!

Control group problems that stem from social interaction:

Compensatory rivalry:

If the control group is aware that it is not receiving the experimental treatment, it may show compensatory rivalry (the "John Henry effect") - or resentful demoralisation!

Treatment imitation or diffusion:

Control group imitates the experimental group's treatment, or benefits from information given to the treatment group and diffused to the control group.

Solution - compensatory equalisation of treatments:

Treatment administrators provide control group with some benefit to compensate them for lacking the experimental treatment (e.g. supply an alternative educational treatment).

Reactivity:

Hawthorne Effect:

Workers' productivity increased after manipulations of pay, light levels and rest breaks - regardless of nature of changes made. Workers may have been affected by their awareness of being studied, as much as by experimental manipulations.

Draper (2006): review. Productivity may have been affected by

(a) Material factors, as originally studied, e.g. illumination.

(b) Motivation, e.g. changes in rewards, piecework pay.

(c) Learning (practice).

(d) Feedback on performance.

(e) Attention and expectations of observers.

Implication: act of measurement can affect the very thing being measured.

Experimenter Effects (e.g. Rosenthal 1966, Rosenthal and Rosnow 1969):

Expectations of experimenters, teachers, doctors and managers may affect performance.

Pygmalion effect - teachers' expectations affected pupils' IQ.

Placebo effects - doctors' expectations affect drug effects, including side-effects.

Solution: "double-blind" procedures if possible.

(e.g. neither doctor nor patient knows whether the patient has been assigned to the drug or the placebo condition).
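One common way to implement double-blinding is to have a third party hold the randomisation key, while doctor and patient see only identical-looking capsules until the trial ends. A hypothetical sketch (the IDs and labels are invented for illustration):

```python
import random

random.seed(7)

patients = [f"patient_{i}" for i in range(8)]  # hypothetical patient IDs

# A third party, not the treating doctor, randomises patients to
# conditions (balanced groups) and keeps the key until the end.
shuffled = patients[:]
random.shuffle(shuffled)
key = {p: ("drug" if i < len(shuffled) // 2 else "placebo")
       for i, p in enumerate(shuffled)}

# What the doctor and patient see during the trial: identical capsules,
# so neither party's expectations can bias the outcome.
doctor_view = {p: "capsule" for p in patients}

# Unblinding happens only after all post-test data are collected.
print(sorted(key.values()))
```

Keeping the key with a third party is what makes the procedure double- rather than single-blind: even the person administering the treatment cannot behave differently towards the two groups.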


Types of experimental design:

1. Quasi-experimental designs:

No control over allocation of subjects to groups, or timing of manipulations of the independent variable.

(a) "One-group post-test" design:

Prone to time effects, and no baseline against which to measure effects - pretty useless!

(Slide diagram: HDTV sales measured before and after the treatment.)

(b) One group pre-test/post-test design:

Now have a baseline against which to measure effects of treatment.

Still prone to time effects.

(c) Interrupted time-series design:

A whole series of measurements is taken, both before and after the treatment.

Still prone to time effects.


(d) "Static group comparison" design:

Group A ('Eastenders' viewers): attitudes to TV violence measured.

Group B (control group): attitudes to TV violence measured.
Subjects are not allocated randomly to groups; therefore observed differences may be due to pre-existing group differences.

Conclusion:

Experiments are the best tool for establishing cause and effect.

A good experimental design ensures that the only variable that varies systematically is the independent variable chosen by the experimenter - the effects of alternative, confounding variables are eliminated (or at least rendered unsystematic by randomisation).