
Laying the Foundation for Scaling Up During Development




Presentation Transcript


  1. Laying the Foundation for Scaling Up During Development

  2. Definition
  Fidelity of implementation is:
  • the extent to which a program (including its content and process) is implemented as designed;
  • how it is implemented (by the teacher);
  • how it is received (by the students);
  • how long it takes to implement (duration); and
  • what it looks like when it is implemented (quality).

  3. Motivation: What problems exist?
  • Teachers have difficulty teaching with fidelity when creativity, variability, and local adaptations are encouraged.
  • Developers often fail to identify the critical components of an intervention.
  • Researchers often fail to measure whether components are delivered as intended.

  4. What questions about fidelity of implementation are asked during Development studies?
  • What are the critical components of the program? If the teacher skips part of the program, why does that happen, and what effect will it have on outcomes?
  • Is the program feasible (practical) for a teacher to use? Is it usable (are the program goals clear)? If not, what changes should be made to the program? What programmatic support must be added?
  • What ancillary components (e.g., professional development) are part of the program and must be scaled up with it?

  5. What questions about fidelity of implementation are asked during Efficacy studies?
  • How well is the program being implemented in comparison with the original program design?
  • To what extent does the delivery of the intervention adhere to the program model as originally developed?

  6. What questions about fidelity of implementation are asked during Scale-up studies?
  • How can effective programs be scaled up across many sites (i.e., if implementation is a moving target, the generalizability of research may be imperiled)?
  • How can we gain confidence that the observed student outcomes can be attributed to the program?
  • Can we gauge the wide range of fidelity with which an intervention might be implemented?
  Source: Lynch, O’Donnell, Ruiz-Primo, Lee, & Songer, 2004.

  7. Definition: Efficacy Study
  • Efficacy is the first stage of program research following development. Efficacy is defined as “the ability of an intervention to produce the desired beneficial effect in expert hands and under ideal circumstances” (RCTs) (Dorland’s Illustrated Medical Dictionary, 1994, p. 531).
  • Failure to achieve desired outcomes in an efficacy study “give[s] evidence of theory failure, not implementation failure” (Raudenbush, 2003, p. 4).

  8. Definition: Efficacy Studies
  • Internal validity: an efficacy study establishes that the program will result in successful achievement of the instructional objectives, provided the program is “delivered effectively as designed” (Gagne et al., 2005, p. 354).
  • Efficacy research entails continuously monitoring and improving implementation to ensure the program is implemented with fidelity (Resnick et al., 2005).
  • Fidelity data explain why innovations succeed or fail (Dusenbury et al., 2003).
  • Fidelity data help determine which features of the program are essential and require high fidelity, and which may be adapted or deleted (Mowbray et al., 2003).

  9. Definition: Scale-up Study
  • Interventions with demonstrated benefit in efficacy studies are then transferred into effectiveness studies.
  • An effectiveness study is not simply a replication of an efficacy study with more subjects and more diverse outcome measures conducted in a naturalistic setting (Hohmann & Shear, 2002).
  • Effectiveness is defined as “the ability of an intervention to produce the desired beneficial effect in actual use” under routine conditions (Dorland, 1994, p. 531), where mediating and moderating factors can be identified (Aron et al., 1997; Mihalic, 2002; Raudenbush, 2003; Summerfelt & Meltzer, 1998).

  10. Definition: Scale-up Studies
  • External validity: fidelity in effectiveness studies helps to generalize results and provides “adequate documentation and guidelines for replication projects adopting a given model” (Mowbray et al., 2003; Bybee, 2003; Raudenbush, 2003).
  • The role of the developer and researcher is minimized.
  • The focus is not on monitoring and controlling levels of fidelity; instead, variations in fidelity are measured in a natural setting and accounted for in outcomes.

  11. O’Donnell (2008): Steps in Measuring Fidelity
  • Start with a curriculum profile or analysis; review program materials and consult with the developer. Determine the intervention’s program theory: what does it mean to teach it with fidelity?
  • Using input from the developer and past implementers, outline the critical components of the intervention, divided into structure (adherence, duration) and process (quality of delivery, program differentiation, participant responsiveness), and outline the range of variation for acceptable use (see the sketch after this slide).
  Source: O’Donnell, C. L. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K–12 curriculum intervention research. Review of Educational Research, 78, 33–84.
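  To make the structure/process split concrete, here is a minimal sketch of how such a curriculum profile might be encoded, assuming Python as the analysis language. The component names, the 0-4 rubric, and all thresholds are illustrative assumptions, not values from O’Donnell (2008).

```python
# A minimal, hypothetical encoding of a curriculum profile. Component names,
# the 0-4 rubric, and the thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    dimension: str          # "structure" (adherence, duration) or "process"
    min_score: float        # lowest acceptable rating on a 0-4 rubric
    max_score: float = 4.0  # ceiling of the rubric

# Illustrative profile for an imagined curriculum intervention.
PROFILE = [
    Component("lesson_sequence_followed", "structure", min_score=3.0),
    Component("minutes_per_lesson",       "structure", min_score=2.0),
    Component("quality_of_questioning",   "process",   min_score=2.5),
    Component("student_engagement",       "process",   min_score=2.0),
]

def within_acceptable_range(ratings: dict) -> bool:
    """True if every profiled component's rating falls inside its range."""
    return all(c.min_score <= ratings[c.name] <= c.max_score for c in PROFILE)

# Example: one classroom's ratings, keyed by component name.
print(within_acceptable_range({
    "lesson_sequence_followed": 3.5,
    "minutes_per_lesson": 2.5,
    "quality_of_questioning": 2.0,   # below its 2.5 floor
    "student_engagement": 3.0,
}))  # -> False
```

  Recording an explicit acceptable range per component makes the later “adjust outcomes” step mechanical: a classroom’s ratings either fall inside every range or they do not.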

  12. O’Donnell (2008): Steps in Measuring Fidelity
  • Develop checklists and other instruments to measure implementation of the components (in most cases the unit of analysis is the classroom).
  • Collect multi-dimensional data in both treatment and comparison conditions: questionnaires, classroom observations, self-reports, student artifacts, interviews. Note that self-report data typically yield higher levels of fidelity than what is observed in the field.
  • Adjust outcomes if fidelity falls outside the acceptable range (see the sketch after this slide).
  Source: O’Donnell (2008), as cited above.
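  Continuing the sketch above, the following hypothetical example aggregates ratings from multiple data sources into a single classroom-level fidelity index (a plain mean is only one plausible aggregation choice) and flags classrooms that fall outside an assumed acceptable range. The records and the threshold are invented for illustration.

```python
# Hypothetical aggregation of multi-source fidelity ratings into one
# classroom-level index; all data and thresholds are invented.
from statistics import mean

# Each record: (classroom_id, data_source, component, rating on 0-4 rubric).
RATINGS = [
    ("c01", "observation", "lesson_sequence_followed", 3.5),
    ("c01", "self_report", "lesson_sequence_followed", 4.0),  # self-report tends to run high
    ("c01", "observation", "quality_of_questioning",   2.0),
    ("c02", "observation", "lesson_sequence_followed", 1.5),
    ("c02", "observation", "quality_of_questioning",   2.0),
]

ACCEPTABLE_MIN = 2.5  # illustrative floor taken from the curriculum profile

def fidelity_index(classroom: str) -> float:
    """Mean rating across all sources and components for one classroom."""
    return mean(r for cid, _, _, r in RATINGS if cid == classroom)

for cid in sorted({cid for cid, *_ in RATINGS}):
    idx = fidelity_index(cid)
    flag = "" if idx >= ACCEPTABLE_MIN else "  <- outside acceptable range"
    print(f"{cid}: fidelity index = {idx:.2f}{flag}")
```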

  13. Know when & how to use fidelity data
  • Development: use fidelity results to inform revisions. Decide now what components are required to deliver the intervention as intended when implemented at scale.
  • Efficacy: monitor fidelity and relate it to outcomes to gain confidence that outcomes are due to the program (internal validity); a minimal sketch follows this slide.
  • Replication: determine whether the levels of fidelity and program results obtained under a specific organizational structure replicate under other structures.
  • Scale-up: understand the implementation conditions, tools, and processes needed to reproduce positive effects under routine practice on a large scale (external validity). Are methods for establishing high fidelity financially feasible?
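  One simple way to “relate fidelity to outcomes” at the efficacy stage is a classroom-level correlation, sketched below with invented data; real analyses would use richer models (e.g., fidelity as a covariate in a hierarchical regression).

```python
# Hypothetical illustration of relating classroom-level fidelity indices
# to classroom-level outcomes during an efficacy study. Data are invented.
from statistics import correlation  # Pearson's r; requires Python 3.10+

fidelity_index = [3.6, 2.1, 3.9, 2.8, 3.3]       # one index per classroom
mean_posttest  = [78.0, 61.5, 82.3, 70.1, 74.8]  # mean outcome per classroom

r = correlation(fidelity_index, mean_posttest)
print(f"Fidelity-outcome correlation: r = {r:.2f}")
```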

  14. Bottom Line: If the intervention can be implemented with adequate fidelity under conditions of routine practice and yield positive results, scale it up. Source: O’Donnell, 2008
