Experimental design and the struggle to control threats to validity


Slide 3

Research approaches on a continuum from LOW to HIGH constraint (increasingly constrained):

  • NATURALISTIC

  • CASE-STUDY

  • CORRELATIONAL

  • DIFFERENTIAL

  • EXPERIMENTAL


Slide 4

Experimental research allows us to test hypotheses and infer causality under controlled conditions designed to maximize internal validity.

The high control and internal validity often mean a reduction of external validity.

That is, the more precise, constrained, and artificial we become in the experiment, the less natural are the procedures and findings.

The result is that we sometimes have difficulty generalizing experimentation to the natural environment.





Experimental

  • The researcher controls the order of events.

  • Comparisons are made under different and controlled conditions.

  • Subjects are assigned to each type of condition in an unbiased manner, usually matched or random.

  • Although causality can often be inferred, results may not be applicable outside of the experimental setting.
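The unbiased-assignment step can be sketched in a few lines of Python. This is a hypothetical illustration: the subject IDs, condition names, and seed are invented, and the seed is fixed only to make the sketch reproducible.

```python
import random

def randomly_assign(subjects, conditions, seed=42):
    """Shuffle the subject pool, then deal subjects round-robin into conditions."""
    rng = random.Random(seed)  # fixed seed only so this sketch is reproducible
    pool = list(subjects)
    rng.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, subject in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(subject)
    return groups

groups = randomly_assign(range(12), ["CONTROL", "JOLT", "COKE", "COFFEE"])
for name, members in groups.items():
    print(name, members)
```

Round-robin dealing after the shuffle keeps group sizes equal, which matched designs also aim for.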


Slide 8

TIME 0: PRE-TEST. Collect baseline data.

CONTROL

TREATMENT #1

TREATMENT #2

TREATMENT #3



Slide 9

TIME 1: TREATMENT GIVEN.

CONTROL

JOLT

COKE

COFFEE



Slide 10

TIME 2: POST-TEST. Collect data on the effects of the treatment and compare to the pretest, and to each treatment.

CONTROL

JOLT

COKE

COFFEE



Hypotheses

  • NULL (Statistical): mean JOLT = mean COKE = mean COFFEE = mean CONTROL

  • RESEARCH (Causal): Caffeine causes people to grow tall and nervous.

  • CONFOUNDING (Rival): The differences are due to the amount of sugar or citric acid in the drinks.
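The statistical null hypothesis above can be checked directly. A minimal sketch, using invented nervousness scores and comparing only two of the four groups to keep it short, is a permutation test: if the null is true, shuffling the group labels should produce differences as large as the observed one fairly often.

```python
import random
from statistics import mean

# Invented nervousness scores for two of the groups (illustration only)
jolt    = [8, 9, 7, 9, 8, 10]
control = [4, 5, 6, 4, 5, 5]

observed = mean(jolt) - mean(control)

# Permutation test of the null: shuffle the pooled scores, and count how
# often a difference at least this large arises by chance alone.
rng = random.Random(0)
pooled = jolt + control
extreme = 0
N = 10_000
for _ in range(N):
    rng.shuffle(pooled)
    diff = mean(pooled[:len(jolt)]) - mean(pooled[len(jolt):])
    if abs(diff) >= abs(observed):
        extreme += 1
p_value = extreme / N
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
```

A small p-value rejects the statistical null, but it cannot, by itself, rule out the confounding (rival) hypothesis — that takes design, not statistics.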


A simple 2-group, posttest-only design

  • Outfitters given low-impact training

  • Outfitters given NO low-impact training

  • Measure impacts caused by their clients

  • Compare scores


Hypotheses

  • NULL (Statistical): mean Trained = mean Un-trained

  • RESEARCH (Causal)The clients of outfitters trained in low impact methods will cause fewer impacts than clients of outfitters who did not receive such training.

  • CONFOUNDING (Rival)Prior knowledge may have caused the observed differences in response.


Threats to Validity


Validity

  • Because of the confounding hypothesis, we are not 100% sure that our conclusions are valid.

  • Did we indeed measure the effects of new knowledge?

  • Independent variable -- Dependent variable?

  • Or other variable/s -- Dependent variable?


Road map for research success

  • Anticipate all threats to validity.

  • Make plans to eliminate them.

  • Statistical validity

  • Construct validity

  • External validity

  • Internal validity.


Statistical validity

  • The variations in the dependent variable are not due to variation in the subjects, but due to variations in the measuring instrument. The instrument is unreliable.

  • Some statistical assumptions have been violated (e.g., non-normal data analyzed with parametric statistics, or means taken of ordinal data).
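The ordinal-data problem can be seen in a tiny sketch. The 5-point Likert responses below are invented: taking their mean treats the rank codes as interval measurements, while the median respects only the ordering.

```python
from statistics import mean, median

# Hypothetical 5-point Likert responses (ordinal, skewed toward the low end)
responses = [1, 1, 1, 1, 2, 2, 3, 5, 5, 5]

print("mean  :", mean(responses))    # treats the codes as interval data
print("median:", median(responses))  # respects only the ordering
```

Here the mean (2.6) falls between categories and is pulled up by the skew, while the median (2) stays at an actual response category — one reason means of ordinal data can mislead.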


Construct validity

  • How well the observed data support the theory, and not a rival theory.


External validity

  • The degree to which the findings of the study are valid for subjects outside the present study. The degree to which they are generalizable.

  • Unbiased, complex sampling procedures; many studies, mid-constraint approaches help strengthen external validity.

  • Lower external validity often accompanies higher internal validity, and vice versa.


Internal Validity

  • Threats to causality (our ability to infer).

  • Did the independent variable cause the dependent to change (were they related), or did some confounding variable intervene?

  • Maturation, history, testing, instrumentation, regression to the mean, selection, attrition, diffusion and sequencing effects.


Maturation

  • If the time between pre- and posttest is great enough to allow the subjects to mature, they will!

  • Subjects may change over the course of the study or between repeated measures of the dependent variable due to the passage of time per se. Some of these changes are permanent (e.g., biological growth), while others are temporary (e.g., fatigue).


History

  • Outside events may influence subjects in the course of the experiment or between repeated measures of the dependent variable.

  • E.g., suppose a dependent variable is measured twice for a group of subjects, once at Time A and again at Time B, and that the independent variable (treatment) is introduced between the two measurements.

  • Suppose also that Event A occurs between Time A and Time B. If scores on the dependent measure differ at these two times, the discrepancy may be due to the independent variable or to Event A.


Testing

  • Subjects gain proficiency through repeated exposure to one kind of testing. Scores will naturally increase with repeated testing.

  • If you take the same test (identical or not) 2 times in a row, over a short period of time, you increase your chances of improving your score.


Instrumentation

  • Changes in the dependent variable are not due to changes in the independent variable, but to changes in the instrument (human or otherwise).

  • Measurement instruments and protocols must remain constant and be calibrated.

  • Human observers become better, mechanical instruments become worse!


Regression to the mean

  • If you select people based on extreme scores (high or low), on subsequent testing they will tend to have scores closer to the mean (they regress toward the center).
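This threat is easy to demonstrate by simulation. The sketch below uses invented parameters (true scores around 50, noisy measurements) and a fixed seed for reproducibility: selecting the top scorers on test 1 selects people with both high true scores and lucky noise, so their retest mean drops back toward the population mean.

```python
import random
from statistics import mean

rng = random.Random(1)  # fixed seed so the sketch is reproducible

true_scores = [rng.gauss(50, 10) for _ in range(500)]
test1 = [t + rng.gauss(0, 10) for t in true_scores]  # observed = true + noise
test2 = [t + rng.gauss(0, 10) for t in true_scores]  # retest, fresh noise

# Select the 50 highest scorers on test 1, then look at their retest scores
cutoff = sorted(test1)[-50]
selected = [i for i in range(500) if test1[i] >= cutoff]
m1 = mean(test1[i] for i in selected)
m2 = mean(test2[i] for i in selected)
print(f"selected extremes: test1 mean = {m1:.1f}, test2 mean = {m2:.1f}")
```

No treatment was applied between the two tests, yet the selected group's mean falls — a pre-post design that selects extreme scorers would misattribute that drop to the treatment.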


Selection

  • When random assignment or selection is not possible, the two groups may not be equivalent in terms of the independent variable/s.

  • For example, males=treatment; females=control.

  • Highest threats in naturalistic, case study and differential approaches.


Attrition

  • When subjects are lost from the study.

  • If the loss is random, it may be OK.

  • Attrition is confounding when the loss occurs in one group, or is caused by the effects of the independent variable. (Jolt killed off 2 people!)


Diffusion of treatment

  • When subjects communicate with each other (within and between groups) about the treatment, they diffuse the effects of the independent variable.


Sequencing effects

  • The effects caused by the order in which you apply the treatments.

  • A B C

  • A C B

  • B A C, etc.
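Orders like the ones listed can be generated exhaustively — full counterbalancing runs every possible treatment order so that sequence effects average out across groups. A minimal sketch using the standard library:

```python
from itertools import permutations

treatments = ["A", "B", "C"]
orders = list(permutations(treatments))  # full counterbalancing: all 3! = 6 orders
for order in orders:
    print(" ".join(order))
```

With more treatments, full counterbalancing grows factorially, which is why partial schemes such as the Latin square (shown later in the deck) are used instead.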


Subject effects

  • Subjects “know” what is expected of them, and behave accordingly (second guessing).

  • Social desirability effect.

  • Placebo effect. A placebo is a dummy treatment (a sham independent variable). Some people react to it.


Experimenter effects

  • Forcing the study to produce the desired outcome.

  • Expectations of an outcome by persons running an experiment may significantly influence that outcome.


Single- and double-blind procedures

  • Single-blind: subjects don't know which condition is the treatment and which is not.

  • Double-blind: the experimenter is also blind.


Designs

  • One shot:

    • G1 T------O2

  • One Group Pre-Post:

    • G1 O1------T------O2

  • Static Group:

    • G1 T------O2

    • G2 (C) ------O2


More designs

  • Random Group:

    • RG1 T-------O

    • RG2 (C) O

  • Pretest-Posttest, Randomized Group:

    • RG1 O1------T------O2

    • RG2 (C) O1------- ------O2
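One common way to analyze the pretest-posttest randomized-group design is with gain scores: compute O2 − O1 within each group, then compare the gains. The O1/O2 numbers below are invented for illustration.

```python
from statistics import mean

# Invented O1/O2 scores for the two randomized groups (illustration only)
treatment_pre, treatment_post = [10, 12, 11, 9],  [16, 18, 15, 14]
control_pre, control_post     = [10, 11, 12, 10], [11, 12, 13, 10]

gain_t = mean(treatment_post) - mean(treatment_pre)  # O2 - O1 with treatment
gain_c = mean(control_post) - mean(control_pre)      # O2 - O1 without it
print(f"treatment gain = {gain_t:.2f}, control gain = {gain_c:.2f}")
print(f"effect estimate (difference in gains) = {gain_t - gain_c:.2f}")
```

The control group's gain estimates what maturation, history, and testing alone would produce; subtracting it isolates the treatment's contribution.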


Yet another design

  • Solomon four-group:

    • RG1 O1------T------O2

    • RG2 O1-------------O2

    • RG3 T-------O2

    • RG4 O2


One last one!

  • Latin Square:

    • RG1 O T1 O T2 O T3 O

    • RG2 O T2 O T3 O T1 O

    • RG3 O T3 O T1 O T2 O
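A Latin square like the one above (for a prime or otherwise well-behaved number of treatments) can be built by cyclic rotation: row i is the treatment list rotated by i, so every treatment appears exactly once in each row and each column.

```python
def latin_square(treatments):
    """Row i is the treatment list rotated left by i, so every treatment
    appears exactly once in each row and each column."""
    n = len(treatments)
    return [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]

square = latin_square(["T1", "T2", "T3"])
for row in square:
    print(" ".join(row))
```

Each row is one group's treatment order; the column constraint guarantees every treatment occupies every serial position once, which is what lets the design estimate (and remove) sequencing effects.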