
  1. Third-Party Evaluation Studies as a Basis for Determining Program Effectiveness and Improvement Needs Center for Research and Reform in Education Johns Hopkins University Steven M. Ross, Ph.D. Professor and Evaluation Director

  2. The School of Education and CRRE We want the SOE to be the place where education companies and others go for research, development, and instruction.

  3. The Center for Research and Reform in Education: Evaluation Services • Independent studies of program implementation, products, and outcomes • Literature reviews and research papers on selected topics • Best Evidence Encyclopedia (BEE)

  4. Recent and Ongoing CRRE Evaluations • Parent Engagement and Partnership Program (EIA) • Middlebury Interactive Program (EIA) • JUMP Math in NYC • Middle School Matters • National Institute of School Leaders • Three principal preparation programs • Women’s Initiative Fellowship Program • High school reform in Minnesota • Social-emotional learning in Northern Ireland • The Leader in Me Program in two schools • Pre-K and K Early Literacy • English Language Learners in Texas

  5. Growing Demand for Evidence • Publishers: Evidence-Based Acquisition Standards • State/Local Education Agencies: Contract RFP Evaluation Criteria • Federal Government: Rising Standard of Efficacy • Companies: Product Differentiation • Investors: Culling Criteria

  6. A Hierarchy of Program Evaluations • Level I: Design Study • What is the quality of the program design with regard to instructional theory, logic model, pedagogy, etc.? • Focus: Systematic program review relative to rubrics and standards.

  7. A Hierarchy of Program Evaluations • Level II: Development Study • What is the quality of program implementation, user satisfaction, and refinement needs? Is the program ready for broader implementation? • Focus: Case study in one or a few selected application contexts (e.g., classrooms or schools)

  8. A Hierarchy of Program Evaluations • Level III: Efficacy Study • What is the potential of the program to produce educational benefits in selected target contexts? • Focus: Treatment-control comparison in a small number of selected application contexts (e.g., 3 program schools vs. 3 control schools)

  9. A Hierarchy of Program Evaluations • Level IV: Effectiveness Study • Does the program produce educational benefits across a broad range of target contexts? • Focus: Highly rigorous treatment-control comparison in a large number of application contexts (e.g., 20 program schools vs. 20 control schools)

  10. Types of Evaluation Studies • Simplest and Least Costly • Case Study Example: Examining a middle school’s use of a new computer program for supplementing math instruction

  11. Types of Evaluation Studies • Simplest and Least Costly • Survey/Interview Study Example: How 325 principals who participated in online leadership training reacted to the program and applied the skills taught

  12. Types of Evaluation Studies • Simplest and Least Costly • Achievement Profile Study Example: Descriptive analysis of posted state assessment scores for 25 schools before and after using a new after-school program in E/LA
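A minimal sketch of what the quantitative side of such a profile study might look like, assuming a hypothetical file (scores.csv) with one row per school per period; the file and column names (school, period coded "pre"/"post", ela_score) are invented for illustration:

```python
import pandas as pd

# Hypothetical file: one row per school per period, with columns
# school, period ("pre" or "post"), and ela_score (mean scale score)
scores = pd.read_csv("scores.csv")

# Average E/LA score per school before and after program adoption
profile = scores.pivot_table(index="school", columns="period",
                             values="ela_score", aggfunc="mean")

# Purely descriptive change score: with no control group, it cannot
# separate program effects from other district or state trends
profile["change"] = profile["post"] - profile["pre"]
print(profile.sort_values("change", ascending=False))
```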

  13. Types of Evaluation Studies • Medium Rigor and Cost • Mixed-Methods Control Group Study Example: Program Schools A and B are compared on district science assessments to Control Schools C and D
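The quantitative strand of a mixed-methods design like this often reduces to a simple two-group comparison. A hedged sketch with invented scores (a real analysis would also account for students being clustered within only four schools):

```python
from scipy import stats

# Illustrative student science scores; all values are invented
program = [72, 68, 75, 81, 70, 77]   # Program Schools A and B
control = [65, 70, 66, 73, 69, 64]   # Control Schools C and D

# Unadjusted two-sample t-test on the pooled scores
t, p = stats.ttest_ind(program, control)
print(f"t = {t:.2f}, p = {p:.3f}")
```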

  14. Types of Evaluation Studies • Medium Rigor and Cost • Quantitative Control Group Study Example: Using statistical controls, comparisons are made on school-level AP scores in chemistry between 26 program schools and 50 control schools
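One common way to apply such statistical controls is covariate-adjusted regression. A sketch assuming a hypothetical school-level file (ap_schools.csv) with invented column names (ap_chem_score, program coded 1/0, enrollment, pct_frl):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical school-level file: 26 program + 50 control rows
schools = pd.read_csv("ap_schools.csv")

# OLS with covariate adjustment: the coefficient on `program` is the
# program-control difference holding the listed covariates constant
model = smf.ols("ap_chem_score ~ program + enrollment + pct_frl",
                data=schools).fit()
print(model.summary())
```

The resulting estimate is only as credible as the covariates chosen; omitted school characteristics remain a threat, which is why this design sits at medium rather than high rigor.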

  15. Types of Evaluation Studies • Medium Rigor and Cost • Qualitative Control Group Study Example: Through observations, interviews, and surveys, teaching methods and student engagement are compared at two schools receiving professional development in project-based learning and two control schools

  16. Types of Evaluation Studies • Most Rigorous and Costly (Often funded by federal grants) • Mixed-Methods Matched Comparison Study Example: 10 schools that elected to use a new program are compared on student-level test scores and qualitative measures to 10 matched schools serving as control sites
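The matching step can be sketched as a nearest-neighbor search over standardized school characteristics. Everything here is illustrative (file name, covariates, outcome column); federally funded studies typically use propensity scores and student-level models instead:

```python
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical school-level file with invented columns: program (1/0),
# matching covariates, and a posttest outcome
schools = pd.read_csv("schools.csv")
covs = ["enrollment", "pct_frl", "prior_score"]

treated = schools[schools["program"] == 1]
pool = schools[schools["program"] == 0]

# Standardize covariates so no single one dominates the distance metric
scaler = StandardScaler().fit(schools[covs])
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(pool[covs]))

# For each program school, pick the closest control school
# (matching with replacement: a control school can be reused)
_, idx = nn.kneighbors(scaler.transform(treated[covs]))
matches = pool.iloc[idx.ravel()]

# Raw difference in mean outcomes between matched groups
print(treated["posttest"].mean() - matches["posttest"].mean())
```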

  17. What Determines Rigor? • Multiple measures (triangulation) • Standardized measures (unbiased/objective) • Treatment-control group comparisons • Equivalent comparison groups
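The last point, equivalent comparison groups, is often checked by computing the baseline standardized mean difference between groups. A minimal sketch with invented pretest values (the What Works Clearinghouse, for instance, treats baseline gaps above 0.25 SD as non-equivalent):

```python
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled SD."""
    nt, nc = len(treatment), len(control)
    pooled_sd = np.sqrt(((nt - 1) * np.var(treatment, ddof=1) +
                         (nc - 1) * np.var(control, ddof=1)) /
                        (nt + nc - 2))
    return (np.mean(treatment) - np.mean(control)) / pooled_sd

# Invented school-mean pretest scores for the two groups
d = cohens_d([652, 648, 660, 645], [650, 655, 642, 649])
print(f"Baseline difference: {d:+.2f} SD")
```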

  18. What Determines Cost? • Accessibility of data • Cooperativeness of participants • Travel

  19. Major Considerations • What questions do you want to answer? • How quickly do you need the answers? • What resources are available to fund the study? • How accessible are participants and data?

  20. Steven M. Ross Evaluation Director, CRRE sross19@jhu.edu
