Quasi-Experimental Designs Slides Prepared by Alison L. O’Malley Passer Chapter 11
Quasi-Experimentation • Quasi-experiments resemble experiments, but lack experimental control • Generally, lack of random assignment is the key point of distinction between quasi-experiments and “true” experiments (Shadish, Cook, & Campbell, 2002) • Quasi-experiments are thus more vulnerable to internal validity threats. If quasi-experiments lack experimental control, what good are they?
Quasi-Experimentation: Designs without a control group • One-group posttest-only design • A treatment occurs and the DV is measured afterward. What threats to internal validity are present here?
Quasi-Experimentation: Designs without a control group • One-group pretest-posttest design • DV measured before and after treatment
Quasi-Experimentation: Designs without a control group • Simple interrupted time-series design • DV repeatedly measured before and after a treatment • History is the primary threat to internal validity
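The logic of the interrupted time-series design can be sketched numerically: fit the pre-treatment trend, project it past the interruption, and ask whether the post-treatment observations depart from that projection. The sketch below is illustrative only; the function names and the simulated scores are invented, not from Passer.

```python
def fit_trend(xs, ys):
    """Ordinary least-squares slope and intercept for a simple linear trend."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def level_change(series, interruption):
    """Estimate the treatment's level change: the observed post-treatment
    mean minus the mean projected from the pre-treatment trend."""
    slope, intercept = fit_trend(list(range(interruption)), series[:interruption])
    post_x = range(interruption, len(series))
    post_y = series[interruption:]
    projected = [intercept + slope * x for x in post_x]
    return sum(post_y) / len(post_y) - sum(projected) / len(projected)

# Invented data: a flat baseline of 10, then a jump to 14 after treatment
series = [10, 10, 10, 10, 14, 14, 14, 14]
print(level_change(series, interruption=4))  # -> 4.0
```

Note the design's weakness mentioned above: a history effect (some other event at the interruption) would produce exactly the same jump, which is why the projection alone cannot rule it out.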
Quasi-Experimentation: Designs with a nonequivalent control group • Selection emerges as a major threat to internal validity • Selection may interact with other threats (i.e., selection interactions) • In such cases, the threat is labeled by replacing the term “selection” with “differential” (e.g., differential attrition, differential testing)
Quasi-Experimentation: Posttest only with nonequivalent control group • Participants in one condition are exposed to a treatment • Participants in the other, nonequivalent condition are not exposed to the treatment • Outcome measures obtained from both groups • Lack of pretests poses difficulties in interpreting results
Quasi-Experimentation: Pretest-posttest with nonequivalent control group • Pretreatment and posttreatment scores are obtained for a treatment group and a nonequivalent control group • What benefits are added by this research approach?
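One benefit of adding the pretest is that the analysis can compare *changes* rather than raw posttest scores, which partially offsets the selection threat. A common way to express this is a difference-in-differences estimate; the group means below are invented for illustration.

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Treatment-group change minus control-group change. The control
    group's change stands in for what would have happened without the
    treatment (history, maturation, testing effects), so the remainder
    is attributed to the treatment itself."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Invented means: both groups improve, but the treatment group gains 5 more
print(diff_in_diff(50, 60, 48, 53))  # -> 5
```

The estimate is only as good as the assumption that the two nonequivalent groups would have changed at the same rate absent the treatment, which selection interactions (e.g., differential maturation) can violate.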
Quasi-Experimentation: Simple interrupted time-series with nonequivalent control group • A series of pre- and posttreatment scores is obtained for a treatment group and a nonequivalent control group
Quasi-Experimentation: Simple interrupted time-series with nonequivalent control group • What information can be obtained from examining the pretreatment trend lines?
Switching Replication Designs • One group receives a treatment while a nonequivalent group does not; the control group then receives the treatment at a later point • Can be used with both pretest-posttest and time-series designs • In the switching replication with treatment removal, the initial treatment group no longer receives the treatment once the control group is switched
Switching Replication with Treatment Removal • How might you improve the design of this study?
Program Evaluation • Assesses the need for as well as the design, implementation, and effectiveness of a social intervention • What is a recent social intervention undertaken in your community?
Program Evaluation • Much talk surrounds “evidence-based” programs and public policies • How do you know whether a program or policy works?
Program Evaluation: Needs Assessment • Needs assessment determines whether there is a need for a social program, and if so, what is required to meet the need • Must acquire data from a wide range of sources
Program Evaluation: Program Theory and Design Assessment • Rationale for designing a program in a particular way – theoretical and empirical justification
Program Evaluation: Process Evaluation • Is program implemented as intended? • Also known as program monitoring
Program Evaluation: Outcome Evaluation • Likely more comfortable terrain, as this deals with assessing program (treatment) effectiveness • If randomized controlled trials aren’t possible, turn to alternative designs
Program Evaluation: Outcome Evaluation • Watch out for contamination, which occurs when knowledge, services, or other experiences intended for one group are unintentionally received by another group
Program Evaluation: Efficiency Assessment • Cost-benefit analysis of program effectiveness • Is the program financially beneficial?
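The arithmetic behind a basic cost-benefit comparison can be made explicit. This is a minimal sketch with invented figures; a real efficiency assessment must also monetize program outcomes and discount benefits that accrue in future years.

```python
def benefit_cost_ratio(total_benefits, total_costs):
    """A ratio greater than 1 means the program returns more than it
    costs; less than 1 means it is not financially beneficial."""
    return total_benefits / total_costs

# Invented figures: $150,000 in monetized benefits against $100,000 in costs
print(benefit_cost_ratio(150_000, 100_000))  # -> 1.5
```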
Program Evaluation: Program Diffusion • Implementing and maintaining effective programs in other settings or with other groups • Stages: Dissemination → Adoption → Implementation → Sustainability