Conceptualizing Intervention Fidelity: Implications for Measurement, Design, and Analysis

Conceptualizing Intervention Fidelity: Implications for Measurement, Design, and Analysis

Chris S. Hulleman, Ph.D.

Implementation: What to Consider At Different Stages in the Research Process

Panel presentation for the Institute of Education Sciences Annual Grantee Meeting

September 7, 2011



Implementation vs. Implementation Fidelity

Fidelity: How faithful was the implemented intervention (tTx) to the intended intervention (TTx)?

Infidelity: TTx – tTx

Implementation Assessment Continuum

  • Descriptive: What happened as the intervention was implemented?

  • A priori model: How much, and with what quality, were the core intervention components implemented?

  • Most assessments include both



Linking Fidelity to Causal Models

Rubin’s Causal Model:

True causal effect of X is (YiTx – YiC)

RCT is best approximation

Tx – C = average causal effect

Fidelity Assessment

Examines the difference between the implemented causal components in the Tx and C conditions

This difference is the achieved relative strength (ARS) of the intervention

Theoretical relative strength = TTx – TC

Achieved relative strength = tTx – tC

Index of fidelity
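As an index of fidelity, achieved relative strength can be sketched as a standardized mean difference in implemented strength between conditions. A minimal Python illustration; the function name, the sample data, and the pooled-SD standardization are illustrative assumptions, not specifics from the talk:

```python
import statistics

def achieved_relative_strength(tx_fidelity, c_fidelity):
    """ARS = (mean fidelity in Tx - mean fidelity in C) / pooled SD."""
    m_tx = statistics.mean(tx_fidelity)
    m_c = statistics.mean(c_fidelity)
    n_tx, n_c = len(tx_fidelity), len(c_fidelity)
    s2_tx = statistics.variance(tx_fidelity)  # sample variances
    s2_c = statistics.variance(c_fidelity)
    pooled_sd = (((n_tx - 1) * s2_tx + (n_c - 1) * s2_c)
                 / (n_tx + n_c - 2)) ** 0.5
    return (m_tx - m_c) / pooled_sd

# Hypothetical fidelity scores (0-100) for classrooms in each condition
tx_scores = [85, 80, 90, 75, 88]
c_scores = [70, 65, 72, 68, 71]
print(achieved_relative_strength(tx_scores, c_scores))
```

Expressing ARS in standard-deviation units is what makes it comparable across studies that use treatment-specific fidelity measures.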



Implementation assessment typically captures…

(1) Essential or core components (activities, processes, structures)

(2) Necessary, but not unique, activities, processes, and structures (supporting the essential components of Tx)

(3) Best practices

(4) Ordinary features of the setting (shared with the control group)

[Diagram: the scope of the fidelity assessment in relation to the intervention components above.]



Why is this Important?

Construct Validity

  • Which is the cause: (TTx – TC) or (tTx – tC)?

  • Degradation due to poor implementation, contamination, or similarity between Tx and C

External Validity

  • Generalization is about tTx – tC

  • Implications for future specification of Tx

  • Program failure vs. implementation failure

Statistical Conclusion Validity

  • Variability in implementation increases error and reduces effect size and power
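The statistical-conclusion-validity point can be illustrated with a small simulation: holding the intended treatment effect fixed while letting implementation vary across units inflates outcome variance and shrinks the standardized effect size. A sketch under illustrative assumptions (all names and numbers are hypothetical, not from the talk):

```python
import random
import statistics

def effect_size(implementation_sd, n=4000, true_effect=0.5):
    """Cohen's d when each treated unit receives the intended effect
    plus noise from unit-to-unit variation in implementation."""
    control = [random.gauss(0, 1) for _ in range(n)]
    # Each treated unit's realized treatment strength varies around the
    # intended effect with SD = implementation_sd
    treated = [random.gauss(random.gauss(true_effect, implementation_sd), 1)
               for _ in range(n)]
    pooled_sd = ((statistics.pvariance(control)
                  + statistics.pvariance(treated)) / 2) ** 0.5
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

random.seed(1)
print(effect_size(0.0))   # uniform implementation
print(effect_size(2.0))   # highly variable implementation: smaller d
```

The mean difference is unchanged, but the extra implementation variance ends up in the denominator of the effect size, which in turn reduces power for a fixed sample size.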



Why is this important? Reading First implementation results

Effect size of Reading First's impact on reading outcomes = .05

Adapted from Gamse et al. (2008) and Moss et al. (2008)



5-Step Process (Cordray, 2007)

  1. Specify the intervention model

  2. Develop fidelity indices

  3. Determine reliability and validity

  4. Combine indices

  5. Link fidelity to outcomes

These steps span the conceptual, measurement, and analytical phases.
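One common way to handle the index-combination step is to standardize each component score across units and then take a weighted average (the weighting question resurfaces under the challenges noted in this talk). A hedged sketch; the function name, the equal-weight default, and the data are illustrative assumptions:

```python
import statistics

def combine_fidelity_indices(component_scores, weights=None):
    """Standardize each fidelity component across units, then average
    (optionally with weights) into one fidelity index per unit."""
    names = list(component_scores)
    weights = weights or {name: 1.0 for name in names}  # equal weights by default
    z_scores = {}
    for name in names:
        scores = component_scores[name]
        m, s = statistics.mean(scores), statistics.stdev(scores)
        z_scores[name] = [(x - m) / s for x in scores]
    n_units = len(next(iter(component_scores.values())))
    total_weight = sum(weights[name] for name in names)
    return [sum(weights[name] * z_scores[name][i] for name in names) / total_weight
            for i in range(n_units)]

# Hypothetical dosage and quality scores for four classrooms
fidelity_index = combine_fidelity_indices({
    "dosage": [10, 12, 8, 14],
    "quality": [3.0, 4.5, 2.5, 4.0],
})
print(fidelity_index)
```

Standardizing first keeps components measured on different scales (e.g., sessions delivered vs. quality ratings) from dominating the composite simply because of their units.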



Some Challenges

  • Analyses

    • Weighting of components

    • Psychometric properties?

    • Functional form?

    • Analytic frameworks: Descriptive vs. Causal (e.g., ITT) vs. Explanatory (e.g., LATE) (see Howard's talk next!)

  • Intervention Models

    • Unclear interventions

    • Scripted vs. unscripted

    • Intervention components vs. best practices

  • Measurement

    • Novel constructs: standardize methods and reporting (i.e., ARS) but not measures (Tx-specific)

    • Measure in both Tx and C

    • Aggregation (or not) within and across levels

  • Future Implementation

    • Zone of tolerable adaptation

    • Systematically test the impact of fidelity to core components

    • Tx strength (e.g., ARS): how big is big enough?



Treatment Strength (ARS): How Big is Big Enough?

[Table omitted; effect sizes averaged over 1st, 2nd, and 3rd grades (Gamse et al., 2008).]



Thank You!

And Special Thanks to My Collaborators:

David S. Cordray

Michael Nelson

Evan Sommer

Anne Garrison

Charles Munter

Catherine Darrow, Ph.D.

Amy Cassata-Widera, Ph.D.


Conceptualizing Intervention Fidelity: Implications for Measurement, Design, and Analysis

Chris Hulleman is an assistant professor at James Madison University with joint appointments in Graduate Psychology and the Center for Assessment and Research Studies. Chris also co-directs the Motivation Research Institute at James Madison. He received his PhD in social/personality psychology from the University of Wisconsin-Madison in 2007, and then spent two years as an Institute of Education Sciences Research Fellow in Vanderbilt University's Peabody College of Education. In 2009, he won the Pintrich Outstanding Dissertation Award from Division 15 (Educational Psychology) of the American Psychological Association. He teaches courses in graduate statistics and research methods, and serves as the assessment liaison for the Division of Student Affairs. His research focuses on motivation in academic, sport, work, and family settings. His methodological interests include developing guidelines for translating laboratory research into the field, and developing indices of intervention fidelity. As a Research Affiliate for the National Center on Performance Incentives, Chris is involved in several randomized field experiments of teacher pay-for-performance programs in K-12 settings. His scholarship has been published in journals such as Science, Psychological Bulletin, Journal of Research on Educational Effectiveness, Journal of Educational Psychology, and Phi Delta Kappan.

Department of Graduate Psychology

James Madison University

[email protected]


Conceptualizing Intervention Fidelity: Implications for Measurement, Design, and Analysis

[Figure: outcome effect size (y-axis, .00 to .45) plotted against treatment strength (x-axis, 50 to 100). "Infidelity" is the gap between the intended treatment (TTx) and the implemented treatment (tTx), and likewise between TC and tC. Achieved relative strength = tTx – tC = (85) – (70) = 15, i.e., an effect size of 0.15. Expected relative strength = TTx – TC = (0.40 – 0.15) = 0.25.]

