Experimental Design and Other Evaluation Methods

Lana Muraskin

lmuraskin@yahoo.com

Clearing the Air
  • Experimental design has a poor reputation in the TRIO community
    • Over-recruitment is seen as difficult and unfair
    • Evaluation outcomes have been disappointing or counterintuitive
  • That reputation is unfortunate
    • Experimental design offers real opportunities for understanding program and component effectiveness
Why employ experimental design?
  • A good way to understand the impact of an intervention, because it allows us to
    • Eliminate selection bias (a serious problem, not fully addressed in quasi-experimental evaluations, such as the national evaluation of SSS)
    • Be sure we are comparing groups on all significant characteristics (we can't always be sure with quasi-experimental designs)
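The selection-bias point can be illustrated with a small simulation. This is a hypothetical sketch (the applicant pool, the "motivation" score, and all numbers are invented): random assignment balances even unobserved characteristics across groups in expectation, which matching on observables cannot guarantee.

```python
import random

random.seed(0)

# Hypothetical applicant pool. "Motivation" stands in for an
# unobserved trait that would bias a self-selected comparison group.
applicants = [{"id": i, "motivation": random.gauss(50, 10)} for i in range(1000)]

# Random assignment: chance, not self-selection, splits the pool,
# so motivation is balanced across the two groups in expectation.
random.shuffle(applicants)
treatment, control = applicants[:500], applicants[500:]

def mean_motivation(group):
    return sum(a["motivation"] for a in group) / len(group)

gap = mean_motivation(treatment) - mean_motivation(control)
print(f"Motivation gap between groups: {gap:+.2f}")
```

The gap comes out small relative to the ten-point spread of the trait; a comparison group of volunteers, by contrast, could differ substantially on exactly the traits we cannot measure.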
Experimental design can overcome these obstacles, but…

  • Random assignment of people to services doesn't always ensure that a treatment/no-treatment design will occur
  • Behavior in a project setting is hard to "control"
  • Project staff may behave differently when over-recruiting than they would under other circumstances
  • Random assignment may be impossible or extremely costly

Quasi-experimental design
  • Matched comparison groups
  • Comparison groups sometimes drawn from same cohorts, sometimes from other cohorts
  • Probably more suited to individual TRIO projects
  • Already in effect in many projects
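A minimal sketch of how a matched comparison group might be built, assuming the project can observe a couple of characteristics. The records, field names, and scaling below are invented for illustration: each participant is paired with the closest non-participant, without replacement.

```python
# Hypothetical rosters: participants and a pool of non-participants,
# each described by two observed characteristics.
participants = [
    {"id": 1, "gpa": 2.8, "income": 24_000},
    {"id": 2, "gpa": 3.1, "income": 31_000},
]
non_participants = [
    {"id": 10, "gpa": 2.7, "income": 25_000},
    {"id": 11, "gpa": 3.4, "income": 52_000},
    {"id": 12, "gpa": 3.0, "income": 30_000},
]

def distance(a, b):
    # Scale each characteristic so neither dominates the match.
    return abs(a["gpa"] - b["gpa"]) / 4.0 + abs(a["income"] - b["income"]) / 100_000

matches = {}
pool = list(non_participants)
for p in participants:
    best = min(pool, key=lambda c: distance(p, c))
    matches[p["id"]] = best["id"]
    pool.remove(best)  # match without replacement

print(matches)  # → {1: 10, 2: 12}
```

Note what this cannot do: the match is only as good as the observed characteristics, which is exactly the selection-bias limitation noted above.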
What is/is not learned through treatment/no treatment designs—experimental or quasi-experimental?
  • Can tell us whether project participation "works" in a global sense, and for different participant subgroups
  • Both approaches often treat the services projects provide as a "black box"
  • Even when services are counted, we rarely learn which project features account for a project's success or lack of it (participants can't be randomly assigned to different services)
Are there other alternatives for project evaluations?
  • Service variation or service mix designs
  • Can be “experimental” under some circumstances, quasi-experimental more often
  • Can enable projects to learn about their performance and make changes as needed
  • Hard to implement but worth the effort—best done with groups of projects
Possible experimental or quasi-experimental designs within a project
  • Vary services over time (compare participants in a baseline year and participants in subsequent year(s) as services or mix of services differ)
  • Randomly assign participants to different services or mix of services
  • Create artificial comparison groups and track participants and comparisons over time
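The second design above, random assignment to different services, can be sketched in a few lines. The roster and the two service mixes are hypothetical:

```python
import random

random.seed(1)

# Hypothetical roster and service mixes.
participants = [f"student_{i}" for i in range(10)]
service_mixes = ["tutoring_only", "tutoring_plus_counseling"]

# Shuffle, then alternate: every participant is served, but chance
# decides which mix each one receives, so the mixes can be compared.
random.shuffle(participants)
assignment = {p: service_mixes[i % 2] for i, p in enumerate(participants)}

counts = {mix: list(assignment.values()).count(mix) for mix in service_mixes}
print(counts)  # → {'tutoring_only': 5, 'tutoring_plus_counseling': 5}
```

Because no one goes unserved, this avoids the over-recruitment objection raised earlier, while still supporting a randomized comparison between service mixes.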
Another alternative—type and intensity of implementation evaluation
  • Can be done by all projects (if only so staff can sleep easier)
  • Track mix and extent (intensity) of service each participant receives—what each receives, how much, from whom
  • Decide what you consider “high fidelity” service (observe service, create measures) and high/medium/low participation
  • See whether more and "better" service leads to better participant outcomes, and if not, which services seem to account for better or worse outcomes
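As a sketch of that last step, assuming each participant's record carries a participation level and an outcome flag (the records and field names below are invented), the comparison is a simple tabulation:

```python
# Hypothetical service records: participation level and whether
# the student persisted to the next year.
records = [
    {"student": "A", "level": "high",   "persisted": True},
    {"student": "B", "level": "high",   "persisted": True},
    {"student": "C", "level": "medium", "persisted": True},
    {"student": "D", "level": "medium", "persisted": False},
    {"student": "E", "level": "low",    "persisted": False},
    {"student": "F", "level": "low",    "persisted": False},
]

def persistence_rate(level):
    group = [r for r in records if r["level"] == level]
    return sum(r["persisted"] for r in group) / len(group)

for level in ("high", "medium", "low"):
    print(f"{level:6s} participation: {persistence_rate(level):.0%} persisted")
# high 100%, medium 50%, low 0%
```

In practice the same tabulation would be run for each service separately, which is what points to the services that account for better or worse outcomes.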
Some caveats…
  • Some services are aimed at the students with the greatest difficulty (especially SSS counseling), so more service may not be associated with better outcomes
  • This design won't answer whether service is better than no service (but it should lead to improved service over time)
  • This approach won't work if all participants get exactly the same services in the same amounts (rare?)
On the plus side…
  • If "high fidelity" service and solid participation lead to better outcomes, it is pretty likely that project participation is worth the effort
  • If there is no relationship between a service and outcomes, it's time to take a hard look at reforms, but the evaluation is still useful to the project
A word about federalism
  • The push toward project-level experimental-design evaluations seems to confuse federal and local roles
  • Projects do not have the resources to conduct such evaluations: to over-recruit, to run the evaluations, to track participants over time
  • Incentives for experimental-design evaluations at the local level encourage projects to shift resources from service to evaluation, with little likelihood that the results will be worth the resources expended
The Executive Branch should implement sophisticated evaluations that study program effects writ large. It has the responsibility to Congress and the public to ensure that all projects have positive outcomes.