
DAY 1: Applications of Appraisal Guidelines in Single-Case Design Research (Tom Kratochwill)

This presentation explores the use of appraisal guidelines in single-case design research, including examples of literature review guidelines and applications of What Works Clearinghouse standards. It also discusses the importance of negative results studies and replication in single-case design research.



  1. DAY 1: Applications of Appraisal Guidelines in Single-Case Design Research (Tom Kratochwill)

  2. Goals of the Presentation • Provide a rationale for single-case design appraisal guidelines; • Present some examples of literature review appraisal guidelines that have been developed in education and psychology; • Provide some example applications of the What Works Clearinghouse (WWC) Pilot Standards in literature reviews; • Review the importance of negative results studies and publication bias in single-case design research literature reviews; • Review the importance of replication studies in single-case design research.

  3. Standards for Research Review • Most standards or appraisal guidelines were developed for the purpose of reviewing an available literature and drawing conclusions from the existing database. • Traditional guidelines for this review process typically focused on large N or randomized controlled trial research. • As the literature on single-case designs (SCDs) developed, interest in guidelines for this database became more prominent.

  4. Standards for Single-Case Design Intervention Research Are Now Used for Multiple Purposes • Professional agreement on the criteria for design and analysis of single-case research: • Publication criteria for peer-reviewed journals; • Design, analysis, and interpretation of research findings; • Grant review criteria (e.g., IES, NSF, NIMH/NIH); • RFP stipulations, grant reviewer criteria;

  5. Standards (Continued): • Conduct of literature reviews: • Review existing studies to draw conclusions about intervention research; • Draw conclusions about shortcomings of studies on methodological and statistical grounds and offer recommendations for improved research; • Make recommendations about what type of research needs to be conducted in a particular area;

  6. Standards (Continued): • Design studies that meet various appraisal guidelines: • Address the gold standard of methodology as recommended in the appraisal guideline; • Address the gold standard of data analysis as recommended in the appraisal guideline; • Address limitations of prior research methodology; • Plan for practical and logistical features of conducting the research (e.g., how many replications, participants, settings);

  7. Standards (Continued): • Better standards (materials) for training in single-case methods: • Visual Analysis; Statistical Analysis; • Development of effect size and meta-analysis technology: • Meta-analysis procedures that will allow single-case research findings to reach broader audiences; • Consensus on what is required to identify “evidence-based practices”: • Professional agreement on what works and what does not work.

  8. Example Organizations That Have Developed Guidelines for Single-Case Research Review and Publication • National Reading Panel • American Psychological Association (APA) Division 12/53 (Clinical/Clinical Child) • American Psychological Association (APA) Division 16 (School) • Exceptional Children 2005 Guidelines • What Works Clearinghouse (WWC) • Consolidated Standards of Reporting Trials (CONSORT) Guidelines for N-of-1 Trials (the CONSORT Extension for N-of-1 Trials [CENT]) • Single-Case Reporting Guideline in Behavioral Interventions (SCRIBE) • American Psychological Association (APA) Journal Reporting Standards (2018) (The APA Publications and Communications Board Task Force Report)

  9. American Psychological Association (APA) Journal Reporting Standards (2018) • Reporting standards are presented for “N-of-1” studies; • Tabled guidelines are presented for all journal reporting; • N-of-1 standards are listed in addition to the reporting guidelines for all studies: • Design • Type of design, including procedural changes, replication, and randomization • Analysis, including sequence completed, outcomes, and estimation Appelbaum et al. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board Task Force report. American Psychologist, 73, 3-25. http://dx.doi.org/10.1037/amp0000191

  10. Review and Critique of Appraisal Guidelines • Wendt and Miller (2012) identified seven “quality appraisal tools” and compared these standards to the single-case research criteria presented in Exceptional Children (Horner et al., 2005). Wendt, O., & Miller, B. (2012). Quality appraisal of single-subject experimental designs: An overview and comparison of different appraisal tools. Education and Treatment of Children, 35, 235–268. Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179.

  11. Review and Critique of Appraisal Guidelines • Smith (2012) reviewed research design and various methodological characteristics of SCDs in peer-reviewed journals, primarily from the psychological literature (over the years 2000-2010). Based on his review, six standards for appraisal of the literature were identified (some of which overlap with the Wendt and Miller review). Smith, J. D. (2012). Single-case experimental designs: A systematic review of published research and current standards. Psychological Methods, 17, 510-550.

  12. Review and Critique of Appraisal Guidelines • Maggin, Briesch, Chafouleas, Ferguson, and Clark (2014) reviewed “rubrics” for identifying “empirically supported practices” with single-case research, including the WWC Pilot Standards.* Maggin, D. M., Briesch, A. M., Chafouleas, S. M., Ferguson, T. D., & Clark, C. (2014). A comparison of rubrics for identifying empirically supported practices with single-case research. Journal of Behavioral Education, 23, 287-311. *(Note: see a response to the Maggin et al. (2014) review by Hitchcock, Kratochwill, and Chezan (2015) in the Journal of Behavioral Education.)

  13. Single-Case Research Review Applications and the WWC Pilot Standards • What Works Clearinghouse Pilot Standards • Design Standards • Evidence Criteria • Social Validity

  14. Toward a Professional Consensus on Using Single-Case Research to Identify Evidence-Based Practices: Some Initial Options from the Pilot Standards • Five studies documenting experimental control (i.e., Meets Design Standards or Meets Design Standards With Reservations); • Conducted by at least three research teams with no overlapping authorship at three different institutions; • The combined number of cases totals at least 20; • Each study demonstrates an effect size of ___. These criteria are said to be the “threshold” for meeting an evidence-based practice standard.
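For reviewers applying these options, the threshold reduces to a simple decision rule. Below is a minimal Python sketch of that check; the study records, field names (design_rating, research_team, n_cases), and rating codes are hypothetical conveniences rather than part of the WWC standards, and comparing team labels is a simplification of the “no overlapping authorship at three different institutions” requirement.

```python
# Illustrative check of the example "5 studies / 3 teams / 20 cases" threshold.
# All field names and rating codes below are hypothetical.

def meets_threshold(studies):
    """True if the body of studies meets the example evidence threshold."""
    # Keep studies documenting experimental control: Meets Design Standards
    # (MDS) or Meets Design Standards With Reservations (MDSWR).
    qualifying = [s for s in studies if s["design_rating"] in ("MDS", "MDSWR")]
    teams = {s["research_team"] for s in qualifying}  # proxy for non-overlapping authorship
    total_cases = sum(s["n_cases"] for s in qualifying)
    return len(qualifying) >= 5 and len(teams) >= 3 and total_cases >= 20

studies = [
    {"design_rating": "MDS",   "research_team": "Team A", "n_cases": 4},
    {"design_rating": "MDSWR", "research_team": "Team B", "n_cases": 5},
    {"design_rating": "MDS",   "research_team": "Team C", "n_cases": 3},
    {"design_rating": "MDS",   "research_team": "Team A", "n_cases": 6},
    {"design_rating": "DNMS",  "research_team": "Team D", "n_cases": 3},  # excluded
    {"design_rating": "MDSWR", "research_team": "Team C", "n_cases": 4},
]
print(meets_threshold(studies))  # True: 5 qualifying studies, 3 teams, 22 cases
```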

  15. Literature Reviews • Kiuhara et al. (2017) found over 70 literature review articles; • The number of research reviews that adopt the WWC Pilot Standards has increased in recent years; • The number of research reviews that assess and combine single-case and large N group research has increased; • Options are now available to combine the effect sizes from single-case studies and group designs (see http://ies.ed.gov/ncser/pubs/2015002/; authors: William Shadish, Larry Hedges, Robert Horner, and Samuel Odom). Kiuhara, S. A., Kratochwill, T. R., & Pullen, P. C. (2017). Designing robust single-case design experimental studies. In J. M. Kauffman, D. P. Hallahan, & P. C. Pullen (Eds.), Handbook of special education (2nd ed., pp. 116-136). New York: Routledge.

  16. Examples of using Single-Case Research to Document Evidence-Based Practice A systematic evaluation of token economies as a classroom management tool for students with challenging behavior (Maggin, Chafouleas, Goddard, & Johnson, 2011) • Studies documenting experimental control [n = 7/3 (MDS, student/classroom); 4/0 (MDSWR, student/classroom)] • At least three settings/scholars (yes) • At least 20 participants (no) EVIDENCE CRITERIA: Strong evidence (n = 1 at the student level and n = 3 at the classroom level); Moderate evidence (n = 8 at the student level and n = 0 at the classroom level); No evidence (n = 2 at the student level and n = 0 at the classroom level) Maggin, D. M., Chafouleas, S. M., Goddard, K. M., & Johnson, A. H. (2011). A systematic evaluation of token economies as a classroom management tool for students with challenging behavior. Journal of School Psychology, 49, 529-554.

  17. Examples of using Single-Case Research to Document Evidence-Based Practice An application of the What Works Clearinghouse Standards for evaluating single-subject research: Synthesis of the self-management literature base (Maggin, Briesch, & Chafouleas, 2013). • Studies documenting experimental control [n = 37 (MDS) / n = 31 (MDSWR)] • At least three settings/scholars (yes) • At least 20 participants (yes) EVIDENCE CRITERIA: Strong evidence (n = 25); Moderate evidence (n = 30); No evidence (n = 13) Maggin, D. M., Briesch, A. M., & Chafouleas, S. M. (2013). An application of the What Works Clearinghouse Standards for evaluating single-subject research: Synthesis of the self-management literature base. Remedial and Special Education, 34, 44-58.

  18. Reviews from the WWC • Pivotal Response Training (PRT) (WWC Intervention Report, 2016): • Identified 2 group design studies • Both studies met design standards • The evidence was small for one outcome domain: communication/language competencies • Identified 37 single-case design studies • 3 met the WWC pilot standards without reservations, and • 1 met with reservations • The single-case design studies did not reach the threshold for inclusion in the effectiveness ratings • Overall, PRT was found to have no discernible effects on communication/language competencies for children and students with autism spectrum disorder.

  19. Reviews from the WWC • Functional Behavioral Assessment-based Interventions (FBA) (WWC Intervention Report, 2016): • No studies met WWC group design standards • 17 studies met WWC pilot single-case design standards • 7 met standards without reservations • 10 met standards with reservations • Studies met the threshold (i.e., replication across different studies, research teams, and cases) • FBA interventions were found to have potentially positive effects on school engagement and on problem behaviors.

  20. Further Examples of using Single-Case Research to Document Evidence-Based Practice: Topical Areas • Repeated Reading (WWC) • Peer Management Interventions • Writing Interventions for High School Students with Disabilities • Sensory-Based Treatments for Children with Disabilities • The Good Behavior Game • Evidence-Based Practices in Education and Treatment of Learners with Autism Spectrum Disorders • Academic Interventions for Incarcerated Adolescents

  21. Things that Could be Added to the Standards When Used in Literature Reviews: • Development of Standards for Complex Single-Case Designs, including Randomized Designs • Clarification on Ratings for Complex Single-Case Designs • Clarification of Ratings for Integrity of Interventions • Addition of Validity Issues for Single-Case Designs that Involve Clusters • Expansion of Social Validity Criteria

  22. Things that Could be Added to the Standards (Continued): • Addition of Meta-Analysis Criteria for SCD (effect size measures) • Additional Criteria for Visual Analysis Including Training in Visual Analysis • Criteria for Various Methods of Statistical Analysis of SCD Data
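As one concrete illustration of the statistical-analysis side of these additions, the sketch below implements a start-point randomization test for a simple AB data series, in the spirit of Edgington-style randomization tests sometimes discussed for SCD data. It is a generic sketch, not a procedure from the WWC standards: the data, the minimum phase length, and the one-tailed mean-shift statistic are invented, and the test is only interpretable when the intervention start point was actually selected at random in advance.

```python
# Start-point randomization test for an AB series (generic illustration).
from statistics import mean

def start_point_randomization_test(series, actual_start, min_phase=3):
    """One-tailed p-value for the observed A-to-B mean shift, compared with
    the shifts obtainable at every admissible intervention start point."""
    def shift(start):
        return mean(series[start:]) - mean(series[:start])
    observed = shift(actual_start)
    candidates = range(min_phase, len(series) - min_phase + 1)
    as_extreme = sum(1 for s in candidates if shift(s) >= observed)
    return as_extreme / len(candidates)

series = [2, 3, 2, 3, 6, 7, 6, 8, 7]  # hypothetical AB data; B begins at index 4
print(start_point_randomization_test(series, actual_start=4))  # 0.25
```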

  23. Negative Results, Publication Bias, and Appraisal of Single-Case Design Research

  24. Negative Results Definition The term negative results traditionally has meant that there are either: (a) no statistically significant differences between groups that receive different intervention conditions in randomized controlled trials; or (b) no documented differences (visually and/or statistically) between baseline and intervention conditions in experimental single-case designs. Seftor, N. (2016). What does it mean when a study finds no effects? Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.

  25. Negative Results in Single-Case Design In the domain of SCD research, negative results reflect findings of (a) no difference between baseline (A) and intervention (B) phases (As = Bs), (b) a difference between baseline and intervention phases but in the opposite direction to what was predicted (As > Bs, where B was predicted to be superior to A), (c) no difference between two alternative interventions, B and C (Bs = Cs), or (d) a difference between two alternative interventions, but in the direction opposite to what was predicted (Bs > Cs, where C was predicted to be superior to B). Kratochwill, T. R., Levin, J. R., & Horner, R. H. (2018). Negative results: Conceptual and methodological dimensions in single-case intervention research. Remedial and Special Education, 39, 67-76. DOI: 10.1177/0741932517741721
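As a rough operational sketch of these four patterns, the Python function below compares two phases, using phase means with a tolerance band as a crude stand-in for the visual and/or statistical judgment a reviewer would actually apply. The function name, tolerance value, and data are illustrative assumptions, not part of the Kratochwill et al. (2018) framework.

```python
# Illustrative classifier for the negative-results patterns (phase means only).
from statistics import mean

def classify_result(first_phase, second_phase, tolerance=0.5):
    """Compare two phases (A vs. B, or B vs. C) where the SECOND phase is
    predicted to be superior and higher scores are better."""
    diff = mean(second_phase) - mean(first_phase)
    if abs(diff) <= tolerance:
        return "negative: no difference between phases"       # patterns (a)/(c)
    if diff < 0:
        return "negative: difference opposite to prediction"   # patterns (b)/(d)
    return "positive: difference in the predicted direction"

baseline = [2, 3, 2, 3, 2]      # A phase
intervention = [3, 2, 3, 2, 3]  # B phase, predicted superior
print(classify_result(baseline, intervention))  # negative: no difference
```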

  26. Negative Effects Negative results/findings in SCD intervention research should be distinguished from negative effects in intervention research (i.e., iatrogenic effects). Some interventions may actually produce negative effects on participants (i.e., participants get worse or show negative side effects from an intervention); see, for example, Barlow (2010) for a discussion in the psychotherapy literature. Barlow, D. H. (2010). Negative effects from psychological treatments: A perspective. American Psychologist, 65, 13-20. DOI: 10.1037/a0015643

  27. Selective Results Selective results refer to the withholding of any findings in a single study or in a replication series (i.e., a series of single-case studies in which the intervention is replicated several times in independent experiments) and can be considered as a part of the domain of negative results.

  28. Erroneous Results Erroneous results have been considered in traditional “group” research in situations where various statistical tests are incorrectly conducted or interpreted to yield findings that are reported as statistically significant but are found not to be when the correct test or interpretation is applied (e.g., Levin, 1985). Also included in the erroneous results category are “spurious” findings that are produced in various research investigations. Levin, J. R. (1985). Some methodological and statistical “bugs” in research on children’s learning. In M. Pressley & C. J. Brainerd (Eds.), Cognitive learning and memory in children (pp. 205-244). New York: Springer-Verlag.

  29. Failure to Publish Negative Results May Produce Publication Bias • Publication bias results when studies with positive outcomes or more favorable results are more likely to be published than are studies with null or negative findings (e.g., Cook & Therrien, 2017). • Negative results are less likely to be published, a tendency often known as publication bias (Rosenthal, 1979). • Publication bias occurs in single-case design research, but less is known about it in this methodology (e.g., Tincani & Travers, 2017). • If literature summaries such as meta-analyses fail to include negative results, they may overestimate the size of an effect. Cook, B. G., & Therrien, W. J. (2017). Null effects and publication bias in special education research. Behavioral Disorders, 42, 149-158. Rosenthal, R. (1979). The “file drawer problem” and tolerance for null results. Psychological Bulletin, 86, 638-641. Tincani, M., & Travers, J. C. (2017). Publishing single-case research design studies that do not demonstrate experimental control. Remedial and Special Education, 1-11. DOI: 10.1177/0741932517697447
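Rosenthal’s (1979) “file drawer” argument comes with a simple calculation, the fail-safe N: the number of unpublished zero-effect studies that would have to exist to pull a combined (Stouffer) result below one-tailed significance. A minimal sketch, assuming a one-tailed alpha of .05 and invented per-study Z values:

```python
# Rosenthal's (1979) fail-safe N for the "file drawer problem".

def fail_safe_n(z_scores, z_alpha=1.645):
    """Number of hidden zero-effect studies needed to make the Stouffer
    combined test non-significant at the one-tailed z_alpha criterion."""
    k = len(z_scores)
    sum_z = sum(z_scores)
    return (sum_z ** 2) / (z_alpha ** 2) - k

zs = [2.1, 1.8, 2.5, 1.3, 2.0]  # hypothetical per-study Z values
print(round(fail_safe_n(zs)))   # about 30 hidden null studies
```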

  30. Publication Bias: Some Recent Research Sham and Smith (2014) examined publication bias by comparing effect sizes in single-case research in published studies (n = 21) and non-published dissertation studies (n = 10) in the area of pivotal response treatment (PRT). Effect sizes were assessed with the percentage of nonoverlapping data (PND). They found that the mean PND for published studies was 22% higher than for unpublished studies. Nevertheless, PRT was found to be effective overall in both published and unpublished studies. Sham, E., & Smith, T. (2014). Publication bias in studies of an applied behavior-analytic intervention: An initial analysis. Journal of Applied Behavior Analysis, 47, 663-678.
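For reference, PND is simple to compute: for a behavior expected to increase, it is the percentage of intervention-phase points exceeding the highest baseline point. A minimal sketch with invented data:

```python
# Percentage of nonoverlapping data (PND) for a behavior expected to increase.

def pnd(baseline, intervention):
    ceiling = max(baseline)  # most extreme (highest) baseline point
    nonoverlapping = sum(1 for x in intervention if x > ceiling)
    return 100.0 * nonoverlapping / len(intervention)

baseline = [2, 4, 3, 3]
intervention = [5, 6, 4, 7, 6]      # 4 of 5 points exceed max(baseline) = 4
print(pnd(baseline, intervention))  # 80.0
```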

  31. Publication Bias: Some Survey Research Shadish, Zelinsky, Vevea, and Kratochwill (2016) surveyed SCD researchers about their publication practices across 34 journals. Results suggested that researchers expressed a preference for submitting manuscripts for review that show large effects, and there was a reported preference for large effects in making publication recommendations on manuscripts that researchers were reviewing. A minority of researchers reported dropping 1-2 cases if the effect size was small. We suggest that there is likely a preference for positive-results studies and very likely a publication bias in the SCD intervention literature. Shadish, W. R., Zelinsky, N. A. M., Vevea, J. L., & Kratochwill, T. R. (2016). Survey of publication preferences of single-case design researchers when treatments have small or large effects. Journal of Applied Behavior Analysis, 49, 656-673. DOI: 10.1002/jaba.308

  32. Negative Results: Some Survey Research We reviewed 29 journals from educational psychology and education to assess policies surrounding the submission of research and the reporting of negative results. The most recent issue of each journal was reviewed to determine whether articles reporting negative results were published, and a survey of the editors of each journal was conducted to assess the conditions under which articles that did not demonstrate experimental effects warranted publication. Results indicated that only one of the 29 journals provided formal guidance to authors submitting papers related to negative results. Eleven of the 129 articles published by the 29 journals in their last issue of 2016 included descriptions of negative results, and of the 60% of recruited editors who responded, 96% indicated there were conditions under which publication of negative results was appropriate. Kittelman, A., Gion, C., Horner, R. H., Levin, J. R., & Kratochwill, T. R. (2018). Establishing journalistic standards for the publication of negative results. Remedial and Special Education, 39, 1-6. DOI: 10.1177/0741932517745491

  33. Example Negative Results Research

  34. Weighted Vest Interventions for Individuals with Developmental Disabilities • Sensory-based interventions have been popular and are a commonly requested intervention for children with developmental disabilities (e.g., autism spectrum disorders); • Sensory interventions are designed to improve sensory processing and increase adaptive functioning; • The interventions are based on sensory integration theory.

  35. Cox, Gast, Luscre, and Ayres (2009) studied the effects of a weighted vest on in-seat behaviors of elementary-age students with autism and severe to profound intellectual disabilities. Cox, A. L., Gast, D. L., Luscre, D., & Ayres, K. M. (2009). The effects of weighted vests on appropriate in-seat behaviors of elementary-age students with autism and severe to profound intellectual disabilities. Focus on Autism and Other Developmental Disabilities, 24, 17-26. http://dx.doi.org/10.1177/1088357608330753

  36. Methods – Study 1 • Participants: • May: 5.5 years; probable autism on GARS • Sam: 6.4 years; severe autism on CARS • Ted: 9 years; probable autism on GARS • Conditions: • No vest • Vest without weights • Weighted vest • Sequence was randomly determined (see the sketch below) • Examined in-seat behaviors: • Child facing forward, looking at teacher, and body in seat • Design: Alternating treatments design
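Since the condition sequence in this study was randomly determined, here is one generic way such a schedule can be generated: block randomization, in which each of the three conditions appears once, in random order, within every block of three sessions. This is an illustrative sketch only and is not claimed to be Cox et al.’s (2009) actual randomization procedure.

```python
# Block-randomized session schedule for an alternating-treatments comparison.
import random

CONDITIONS = ["no vest", "vest without weights", "weighted vest"]

def randomized_sequence(n_blocks, seed=None):
    """Each condition appears once, in random order, per block of sessions."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = CONDITIONS[:]  # copy so shuffling never mutates the template
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

print(randomized_sequence(n_blocks=4, seed=1))  # 12-session schedule
```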

  37. Results of Study 1

  38. A Systematic Review of Sensory-Based Treatments Sensory-based therapies are designed to address sensory processing difficulties by helping to organize and control the regulation of environmental sensory inputs. These treatments are increasingly popular, particularly with children with behavioral and developmental disabilities. However, empirical support for sensory-based treatments is limited. The purpose of this review was to conduct a comprehensive and methodologically sound evaluation of the efficacy of sensory-based treatments for children with disabilities. Methods for this review were registered with PROSPERO (CRD42012003243). Barton, E. E., Reichow, B., Schnitz, A., Smith, I. C., & Sherlock, D. (2015). A systematic review of sensory-based treatments for children with disabilities. Research in Developmental Disabilities, 37, 64-80.

  39. The Review Methods Thirty studies involving 856 participants met inclusion criteria and were included in the review. Considerable heterogeneity was noted across studies in implementation, measurement, and study rigor. The research on sensory-based treatments is limited due to insubstantial treatment outcomes, weak experimental designs, or high risk of bias. Although many people use and advocate for the use of sensory-based treatments and there is a substantial empirical literature on sensory-based treatments for children with disabilities, insufficient evidence exists to support their use.

  40. Replication Studies in Intervention Research

  41. Need for Replication Studies in Single-Case Intervention Research • There has been an increase in efforts to promote replication research generally (e.g., Hedges, 2017; Therrien et al., 2016; Travers et al., 2016); • Journals are now more likely to publish replication research studies; • Replication studies can establish the reliability of findings; • Replication studies can establish the generalizability of findings; • Replication studies can help determine what works and what does not work in intervention; • Replication studies can ultimately determine the credibility of “one-time” findings. Hedges, L. V. (2017). Challenges in building usable knowledge in education. Journal of Research on Educational Effectiveness, 11(1), 1-21. DOI: 10.1080/19345747.2017.1375583 Therrien, W. J., Mathews, H. M., Hirsch, S. E., & Solis, M. (2016). Progeny review: An alternative approach for examining the replication of intervention studies in special education. Remedial and Special Education, 42, 1-9. DOI: 10.1177/0741932516646081 Travers, J. C., et al. (2016). Replication research and special education. Remedial and Special Education, 42, 1-10. DOI: 10.1177/0741932516648462

  42. Three Types of Replication Research in Single-Case Design (Barlow, Nock, & Hersen, 2009): • Direct Replication: Replication of the experiment by the same researcher (sometimes called interparticipant replication). • Systematic Replication: An attempt by the researcher to replicate findings from a direct replication series while varying settings, therapists, behavior problems/disorders, or any combination of the above (can involve intraexperiment and/or interexperiment replication) (see also guidelines for conceptual replication in Coyne, Cook, & Therrien, 2016). • Clinical Replication: Administration by the same investigator of an intervention package containing two or more distinct procedures. Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single-case experimental designs: Strategies for studying behavior change (3rd ed.). Boston: Allyn and Bacon. Coyne, M. D., Cook, B. G., & Therrien, W. J. (2016). Recommendations for replication research in special education: A framework of systematic conceptual replications. Remedial and Special Education, 41, 1-10. DOI: 10.1177/0741932516648463

  43. Guidelines for Replication Research • Guidelines are available for each type of replication research (see Barlow et al., 2009, and Gast & Ledford, 2014). • General standards have been proposed for replication research (Appelbaum et al., 2018): • External replication: the study is a repetition of one or more previously published or archived studies. • Internal replication: the study involves cross-validation of analyses within the same sample or resampling/randomization procedures. Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single-case experimental designs: Strategies for studying behavior change (3rd ed.). Boston: Allyn and Bacon. Gast, D. L., & Ledford, J. R. (2014). Single case research methodology. New York: Routledge.

  44. But, Curb Your Enthusiasm: Evidence May Not Guide Practice • Good evidence may not be available to guide practice; • Evidence to guide practice may not have been disseminated; • Negative scientific evidence may be ignored; • Fad interventions may “replace” evidence-based practices.

  45. An Example of the Persistence of Fad Interventions Lilienfeld, Marshall, Todd, and Shane (2014) provide an example of “fad interventions” that persist even though negative scientific evidence is available. The example comes from the use of facilitated communication for autism…and more recently, from sensory integration interventions. Lilienfeld, S., Marshall, J., Todd, J., & Shane, H. (2014). The persistence of fad interventions in the face of negative scientific evidence: Facilitated communication for autism as a case example. Evidence-Based Communication Assessment and Intervention, 8(2), 62-101. DOI: 10.1080/17489539.2014.976332

  46. Disseminating Evidence-Based Practices • An Evidence-Based Intervention Is Not Sufficient: • In addition to the features of the practice: define what outcomes, when/where used, by whom, with what target populations, and at what fidelity; • The innovative practice needs to be not only evidence-based, but dramatically easier and better than what is already being used; • The practice should be defined conceptually as well as procedurally, to allow guidance for adaptation; • Innovative methods of dissemination may be the key to adoption and use. Kazdin, A. E. (2018). Innovation in psychosocial interventions and their delivery: Leveraging cutting-edge science to improve the world’s mental health. New York: Oxford.

  47. Questions and Discussion
