
Quasi-Experimental Designs


Presentation Transcript


  1. Quasi-Experimental Designs Slides Prepared by Alison L. O’Malley Passer Chapter 11

  2. Quasi-Experimentation • Quasi-experiments resemble experiments, but lack experimental control • Generally, lack of random assignment is the key point of distinction between quasi-experiments and “true” experiments (Shadish, Cook, & Campbell, 2002) • Quasi-experiments are thus more vulnerable to internal validity threats • If quasi-experiments lack experimental control, what good are they?

  3. Quasi-Experimentation: Designs without a control group • One-group posttest-only design • A treatment occurs and the DV is measured afterward • What threats to internal validity are present here?

  4. Quasi-Experimentation: Designs without a control group • One-group pretest-posttest design • DV measured before and after treatment

  5. Quasi-Experimentation: Designs without a control group • Simple interrupted time-series design • DV repeatedly measured before and after a treatment • History is the primary threat to internal validity
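The logic of the simple interrupted time-series design can be sketched numerically. The following is a minimal, hypothetical illustration; all scores are invented and the simple mean comparison is one illustrative way to look at a level change, not an analysis prescribed by the slides:

```python
# Hypothetical interrupted time-series: the DV is measured repeatedly
# before and after a treatment, and we look for a level change at the
# point of interruption (all numbers invented for illustration).

from statistics import mean

pre  = [20, 21, 19, 22, 20, 21]   # DV before the treatment begins
post = [27, 28, 26, 29, 27, 28]   # DV after the treatment begins

level_change = mean(post) - mean(pre)
print(f"Pre-treatment mean:  {mean(pre):.1f}")
print(f"Post-treatment mean: {mean(post):.1f}")
print(f"Level change at interruption: {level_change:.1f}")

# The slide's caveat still applies: history is the primary threat,
# because some other event at the interruption point could also
# explain the jump.
```

A real analysis would also model the pre- and post-treatment trends, since a shift in slope, not just level, can indicate a treatment effect.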

  6. Quasi-Experimentation: Designs with a nonequivalent control group • Selection emerges as a major threat to internal validity • Selection may interact with other threats (i.e., selection interactions) • In such cases, the threat is labeled by replacing the term “selection” with “differential” (e.g., differential attrition, differential testing)

  7. Quasi-Experimentation: Posttest only with nonequivalent control group • Participants in one condition are exposed to a treatment • Participants in the other, nonequivalent condition are not exposed to the treatment • Outcome measures are obtained from both groups • The lack of pretests makes the results difficult to interpret

  8. Quasi-Experimentation: Pretest-posttest with nonequivalent control group • Pretreatment and posttreatment scores are obtained for a treatment group and a nonequivalent control group • What benefits are added by this research approach?
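One benefit of adding pretests is that each group's change score can be compared, so threats that affect both groups equally (e.g., history, maturation) are partly ruled out. A toy sketch of that comparison, with all numbers invented (this difference-of-change-scores logic is a common way to reason about the design, though the slides do not name a specific analysis):

```python
# Hypothetical pretest and posttest means for a treatment group and a
# nonequivalent control group (all figures invented for illustration).

treatment = {"pre": 50.0, "post": 62.0}
control   = {"pre": 48.0, "post": 53.0}

# Change within each group from pretest to posttest
treat_change   = treatment["post"] - treatment["pre"]
control_change = control["post"] - control["pre"]

# Subtracting the control group's change adjusts for influences
# (history, maturation, testing) shared by both groups.
adjusted_effect = treat_change - control_change

print(f"Treatment-group change: {treat_change:.1f}")
print(f"Control-group change:   {control_change:.1f}")
print(f"Adjusted treatment effect: {adjusted_effect:.1f}")
```

Note that selection interactions remain a threat: if the nonequivalent groups would have changed at different rates anyway, the adjusted effect is still biased.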

  9. Quasi-Experimentation: Simple interrupted time-series with nonequivalent control group • A series of pre- and posttreatment scores are obtained for a treatment group and a nonequivalent control group

  10. Quasi-Experimentation: Simple interrupted time-series with nonequivalent control group • What information can be obtained from examining the pretreatment trend lines?

  11. Switching Replication Designs • One group receives a treatment while a nonequivalent group does not; the second group is then exposed to the treatment at a later point • Can be used with both pretest-posttest and time-series designs • In the switching replication with treatment removal, the initial treatment group no longer receives the treatment once the control group is switched to it

  12. Switching Replication with Treatment Removal How might you improve the design of this study?

  13. Program Evaluation • Assesses the need for a social intervention, as well as its design, implementation, and effectiveness • What is a recent social intervention undertaken in your community?

  14. Program Evaluation • Much talk surrounds “evidence-based” programs and public policies • How do you know whether a program or policy works?

  15. Program Evaluation: Needs Assessment • Needs assessment determines whether there is a need for a social program, and if so, what is required to meet the need • Must acquire data from a wide range of sources

  16. Program Evaluation: Program Theory and Design Assessment • Rationale for designing a program in a particular way: its theoretical and empirical justification

  17. Program Evaluation: Process Evaluation • Is program implemented as intended? • Also known as program monitoring

  18. Program Evaluation: Outcome Evaluation • Likely more comfortable terrain, as this deals with assessing program (treatment) effectiveness • If randomized controlled trials aren’t possible, turn to alternative designs

  19. Program Evaluation: Outcome Evaluation • Watch out for contamination, which occurs when knowledge, services, or other experiences intended for one group are unintentionally received by another group

  20. Program Evaluation: Efficiency Assessment • Cost-benefit analysis of program effectiveness • Is the program financially beneficial?
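The cost-benefit analysis on this slide can be reduced to two summary numbers, a net benefit and a benefit-cost ratio. A toy, hypothetical calculation follows; every figure and category name is invented for illustration:

```python
# Toy cost-benefit (efficiency) assessment for a hypothetical program.
# All dollar figures and categories are invented for illustration.

program_costs = {
    "staff": 120_000.0,
    "materials": 15_000.0,
    "facilities": 25_000.0,
}

# Benefits must first be monetized, e.g., reduced use of other
# services or increased participant earnings.
program_benefits = {
    "reduced_service_use": 90_000.0,
    "increased_earnings": 110_000.0,
}

total_cost = sum(program_costs.values())
total_benefit = sum(program_benefits.values())

net_benefit = total_benefit - total_cost
benefit_cost_ratio = total_benefit / total_cost

print(f"Net benefit: ${net_benefit:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
# A ratio above 1.0 suggests the program is financially beneficial.
```

The hard part in practice is not the arithmetic but deciding which benefits can credibly be monetized, and over what time horizon.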

  21. Program Evaluation: Program Diffusion • Implementing and maintaining effective programs in other settings or with other groups • Stages: dissemination, adoption, implementation, sustainability
