Experimental Design and Other Evaluation Methods


Presentation Transcript


  1. Experimental Design and Other Evaluation Methods Lana Muraskin lmuraskin@yahoo.com

  2. Clearing the Air • Experimental design has a poor reputation in the TRIO community • Over-recruitment seen as difficult, unfair • Outcomes of evaluation have been disappointing, counterintuitive • That reputation is unfortunate • Experimental design offers opportunities for understanding program and component effectiveness

  3. Why employ experimental design? • Good way to understand the impact of an intervention because it allows us to • Eliminate selection bias (serious problem, not fully addressed in quasi-experimental evaluations—such as national evaluation of SSS) • Be sure we are comparing groups on all significant characteristics (can’t always be sure with quasi-experimental designs)
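To make the selection-bias point concrete, here is a minimal sketch that is not part of the original presentation. It uses made-up applicant data and a single illustrative background variable (high school GPA) to show why a coin-flip assignment tends to balance the groups on every characteristic, which is what self-selected or matched groups cannot guarantee.

```python
# A minimal sketch (hypothetical data) of random assignment removing selection
# bias: applicants are assigned to treatment or control by chance, so on
# average the two groups match on every characteristic, measured or not.
import random
import statistics

random.seed(42)

# Hypothetical applicant pool with one background characteristic (high school
# GPA). Under self-selection, stronger students might enroll more often.
applicants = [{"id": i, "hs_gpa": random.gauss(2.8, 0.4)} for i in range(400)]

# Random assignment: each applicant has the same chance of receiving services.
random.shuffle(applicants)
treatment = applicants[: len(applicants) // 2]
control = applicants[len(applicants) // 2 :]

# Balance check: with randomization, the group means should be close by design.
print("treatment mean GPA:", round(statistics.mean(a["hs_gpa"] for a in treatment), 3))
print("control mean GPA:  ", round(statistics.mean(a["hs_gpa"] for a in control), 3))
```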

  4. Experimental design can overcome these obstacles, but… • Random assignment of people to services doesn’t always ensure that a treatment/no treatment design will occur • Behavior in a project setting is hard to “control.” Project staff may behave differently when over-recruiting than they would under other circumstances • Random assignment may be impossible or extremely costly

  5. Quasi-experimental design • Matched comparison groups • Comparison groups sometimes drawn from same cohorts, sometimes from other cohorts • Probably more suited to individual TRIO projects • Already in effect in many projects
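The following sketch, again not from the presentation, illustrates one common way to build a matched comparison group: pair each participant with the non-participant who is closest on background characteristics. The variables (high school GPA, family income), the distance weights, and the records are all hypothetical.

```python
# A minimal sketch of greedy nearest-neighbor matching for a quasi-experimental
# design: each participant is paired with the closest available non-participant.
def distance(a, b):
    """Simple standardized distance on two illustrative matching variables."""
    return abs(a["hs_gpa"] - b["hs_gpa"]) / 0.4 + abs(a["income"] - b["income"]) / 15000

participants = [
    {"id": 1, "hs_gpa": 2.6, "income": 28000},
    {"id": 2, "hs_gpa": 3.1, "income": 35000},
]
non_participants = [
    {"id": 101, "hs_gpa": 2.5, "income": 30000},
    {"id": 102, "hs_gpa": 3.2, "income": 33000},
    {"id": 103, "hs_gpa": 2.9, "income": 52000},
]

# Match without replacement: once a comparison student is used, remove them.
matched_pairs = []
available = list(non_participants)
for p in participants:
    best = min(available, key=lambda c: distance(p, c))
    matched_pairs.append((p["id"], best["id"]))
    available.remove(best)

print(matched_pairs)  # [(1, 101), (2, 102)]
```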

  6. What is/is not learned through treatment/no treatment designs—experimental or quasi-experimental? • Can learn whether project participation “works” in a global sense, and for different participant subgroups • Both approaches often treat the services projects provide as a “black box” • Even when services are counted, rarely learn what project features account for project success or lack of success (can’t randomly assign to different services)

  7. Are there other alternatives for project evaluations? • Service variation or service mix designs • Can be “experimental” under some circumstances, quasi-experimental more often • Can enable projects to learn about their performance and make changes as needed • Hard to implement but worth the effort—best done with groups of projects

  8. Possible experimental or quasi experimental designs within a project • Vary services over time (compare participants in a baseline year and participants in subsequent year(s) as services or mix of services differ) • Randomly assign participants to different services or mix of services • Create artificial comparison groups and track participants and comparisons over time
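As a rough illustration of the second design on this slide, the sketch below randomly assigns participants to different service mixes within a single project, so the contrast is between versions of service rather than service versus no service. The service names and group sizes are hypothetical.

```python
# A minimal sketch (hypothetical services) of random assignment to service mixes.
import random

random.seed(7)

service_mixes = [
    ("tutoring",),
    ("tutoring", "counseling"),
    ("tutoring", "counseling", "cultural_events"),
]

participants = [f"student_{i:03d}" for i in range(30)]
random.shuffle(participants)

# Round-robin assignment after shuffling keeps the arms equal in size.
assignment = {p: service_mixes[i % len(service_mixes)] for i, p in enumerate(participants)}

for mix in service_mixes:
    arm = [p for p, m in assignment.items() if m == mix]
    print(len(arm), "participants ->", " + ".join(mix))
```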

  9. Another alternative—type and intensity of implementation evaluation • Can be done by all projects (if only so staff can sleep easier) • Track mix and extent (intensity) of service each participant receives—what each receives, how much, from whom • Decide what you consider “high fidelity” service (observe service, create measures) and high/medium/low participation • See whether more and “better” services lead to better participant outcomes—if not, which services seem to account for better or worse outcomes
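The sketch below shows the bookkeeping this slide describes: log what each participant receives and how much, bucket participation as high/medium/low, and compare outcomes across buckets. The records, the intensity cutoffs, and the retention outcome are illustrative placeholders, not program standards.

```python
# A minimal sketch of an implementation (intensity/fidelity) evaluation.
service_log = {
    "student_001": {"tutoring_hours": 22, "counseling_sessions": 5, "retained": True},
    "student_002": {"tutoring_hours": 4,  "counseling_sessions": 1, "retained": False},
    "student_003": {"tutoring_hours": 12, "counseling_sessions": 3, "retained": True},
}

def participation_level(rec):
    """Classify intensity; the cutoffs here are purely illustrative."""
    contacts = rec["tutoring_hours"] + 2 * rec["counseling_sessions"]
    if contacts >= 20:
        return "high"
    if contacts >= 10:
        return "medium"
    return "low"

# Outcome (retention) by participation level: the pattern across levels,
# not the exact numbers, is what the slide suggests looking for.
by_level = {}
for rec in service_log.values():
    by_level.setdefault(participation_level(rec), []).append(rec["retained"])

for level, outcomes in sorted(by_level.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"{level}: {len(outcomes)} students, retention {rate:.0%}")
```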

  10. Some caveats… • Some services are aimed at students with greatest difficulty (esp. SSS counseling services), so more may not be associated with better outcomes • This design won’t answer the question of whether service is better than no service (but should lead to improved service over time). • This approach won’t work if all participants get exactly the same services in the same amount (rare?)

  11. On the plus side… • If “high fidelity” service and solid participation lead to better outcomes, it is pretty likely that project participation is worth the effort. • If there is no relationship between a service and outcomes, it’s time to take a hard look at reforms—but the evaluation is still useful to the project

  12. A word about federalism • The push to project-level experimental design evaluations seems to confuse federal and local roles • Projects do not have the resources to conduct such evaluations—to over-recruit, to conduct the evaluations, to track participants over time • Incentives for experimental design evaluations at the local level encourage projects to shift resources from service to evaluation with little likelihood that the results will be worth the resources expended.

  13. The Executive Branch should implement sophisticated evaluations that study program effects writ large. It has the responsibility to Congress and the public to ensure that all projects have positive outcomes.
