
What Works Regarding Social Skills Interventions Using Single Subject Design


Presentation Transcript


  1. What Works Regarding Social Skills Interventions Using Single Subject Design Jeffrey Chenier, M.A., Aaron J. Fischer, Katherine Hunter, Emily Patty, Lisa Libster, M.A., Kristen O’Leary, Haley York, Natalie Robichaux and Frank Gresham, Ph.D.

  2. Introduction • Scientifically Based Research • Section 9101(37) NCLB: Research that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs (No Child Left Behind Act, 2001)

  3. Introduction • What Works Clearinghouse (2006): Group Design • Meet Evidence Standards: • “well designed and implemented randomized controlled trials” • Meet Evidence Standards with reservations: • “quasi-experiments with equating and no severe design or implementation problems or randomized clinical trials with severe design or implementation problems”

  4. Introduction • What Works Clearinghouse (2010): Single Case (SC) • Meet Evidence Standards • The IV must be systematically manipulated, with the researcher determining when and how the IV conditions change • Each outcome variable must be measured systematically over time by more than one assessor, IOA must be collected on at least 20% of the sessions in each condition, and the IOA percentage must meet minimum thresholds • 0.80 IOA or 0.60 Cohen’s Kappa • The study must include at least three attempts to demonstrate an intervention effect at three different points in time or with three different phase repetitions • Each phase must have a minimum of three data points • Effect size estimation follows if a study has either Strong Evidence or Moderate Evidence
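These criteria lend themselves to a simple screening checklist. Below is a minimal sketch of how a coded study record could be checked against the bullets above; the field names and the function itself are hypothetical illustrations, not part of the WWC pilot standards.

```python
# Illustrative screen of a coded single-case study against the WWC (2010)
# pilot design standards summarized above. All field names are hypothetical.
def meets_wwc_design_standards(study: dict) -> bool:
    iv_manipulated = study["iv_systematically_manipulated"]        # researcher controls condition changes
    ioa_coverage_ok = study["ioa_coverage_per_condition"] >= 0.20  # IOA collected on >= 20% of sessions per condition
    ioa_level_ok = (study["ioa_percent_agreement"] >= 0.80
                    or study["cohens_kappa"] >= 0.60)              # minimum agreement thresholds
    replications_ok = study["effect_demonstrations"] >= 3          # three demonstrations at three points in time
    phase_length_ok = study["min_phase_data_points"] >= 3          # at least three data points per phase
    return all([iv_manipulated, ioa_coverage_ok, ioa_level_ok,
                replications_ok, phase_length_ok])
```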

  5. Introduction • Meta-Analysis • Strube & Hartmann (1982) • Objective method for summarizing a body of empirical findings • Emphasizes the direction and magnitude of effects across studies for a particular intervention

  6. What Works Clearinghouse (2010) • No agreed upon method or gold standard to calculate effect sizes from single-case design research • Problems • How to quantify the effect? • How accurate is the effect? • How comparable are the effects across other SC designs? • How comparable are the effects compared to group design effect sizes?

  7. Current Effect Size Estimators (WWC, 2010) • Nonparametric Methods • Percentage of Nonoverlapping Data (PND), Percentage of All Nonoverlapping Data (PAND), Percent Exceeding the Median (PEM) • “Distributional properties of these measures are unknown, so standard errors and statistical tests are not formally justified.” • Additionally, trend is not addressed • Because of the lack of statistical justification, use these only when an approximate size of the effect is sufficient • Wolery et al. (2010) compared four overlap methods against visual inspection of effect; each method had its own host of problems, so many that the authors called for their abandonment • The overlap methods agreed with visual analysis on whether the treatment was effective in only 121 of 160 comparisons
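For concreteness, the simplest of these overlap indices, PND, is just the percentage of treatment-phase points that fall outside the baseline range. A minimal sketch follows, assuming a behavior that is expected to increase; it is a generic illustration, not the coding procedure used in this study.

```python
def percentage_of_nonoverlapping_data(baseline, treatment, increase_expected=True):
    """PND: percent of treatment points that do not overlap the baseline range."""
    if increase_expected:
        cutoff = max(baseline)
        nonoverlapping = [x for x in treatment if x > cutoff]  # points above the highest baseline point
    else:
        cutoff = min(baseline)
        nonoverlapping = [x for x in treatment if x < cutoff]  # points below the lowest baseline point
    return 100.0 * len(nonoverlapping) / len(treatment)

# Example: 4 of 5 treatment points exceed the baseline maximum, so PND = 80%.
print(percentage_of_nonoverlapping_data([2, 3, 4], [5, 6, 4, 7, 8]))
```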

  8. Current Effect Size Estimators (WWC, 2010) • Parametric Methods • Regression Estimates • Advantages • Familiarity • Ability to model trends • Ability to attain an Effect Size from a single case • Disadvantages • Inability to deal with complex structures present in single case design

  9. Current Effect Size Estimators (WWC, 2010) • Parametric Methods • Multilevel Modeling • Advantages • Ability to account for complexity of design • Disadvantages • Unfamiliarity • Technically challenging and time consuming • Different metric from group design Effect Sizes, therefore the estimate is not comparable

  10. Current Effect Size Estimators (WWC, 2010) • Quantitative Methods • Differing methods to calculate a Standardized Mean Difference statistic (current study) • Advantages • Encourages inclusion of SC designs in evaluating effects of interventions • Potentially gives another method by which to rank-order interventions • Disadvantages • Not completely comparable to group design research • Pooled within-group variance is not comparable to pooled within-phase variance • Small n leads to imprecise estimates • Trend is not assessed

  11. Summary of Effect Size Estimators for SC Design (WWC, 2010) • Simply put, science is not there yet • Nonparametric estimators should be reported with a parametric estimator (regression) • Multilevel methods are not ready • Quantitative methods are not as statistically sound as they should be, but the base from which to build is present

  12. Social Skills • Learned behaviors that enable positive interactions and allow for escape/avoidance of negative interactions • Academic Enablers (DiPerna & Elliott, 2002) • Third-grade social skills are a better predictor of eighth-grade academic achievement than third-grade academic achievement (Caprara et al., 2000) • A myriad of problems co-occur with social skills deficits • Both externalizing and internalizing

  13. Introduction • Does social skills training work? • Gresham, Cook, Crews, and Kern, 2004

  14. Introduction • Does social skills training work? • Godbold et al., 2010 • 34 group design studies • Random assignment with equivalent starting groups • g = 0.67, p < .05; BESD (treatment) = 82% • Significantly higher than quasi-experimental designs

  15. Introduction • Godbold et al., 2010 • Contrast Analyses

  16. Introduction • Research question • Evidence exists for primary programs; is there comparable evidence for secondary programs?

  17. Other Meta-Analyses Since 2000

  18. Summary • 7 total studies • 6 with participants on the autism spectrum, 1 with typically developing participants • Multiple interventions available • Effectiveness • 3 very effective • 3 moderately effective • 1 less effective (but still statistically significant)

  19. Method • Literature Search, 2000-2009 • Keyword Search in PsycINFO • 5940 Articles Total

  20. Method • Coding 1 Primary Inclusionary Criteria • No books, reviews, meta-analyses, group designs, or dissertations • Coding 1 Secondary Inclusionary Criteria • “Is the study a social skills intervention, or does it target a social skill?” (YES) • Does the study focus on ages 3-21, or through high school? (YES) • Does the study target drugs, alcohol, or sexual offenders? (NO) • 296 studies remained; Coding 1 IOA = 100% (calculated on approximately 22% of articles)

  21. Method • Coding 2 Primary Inclusionary Criteria • Is the full article in English? • Does the article include single subject graphs? • No AB designs • Coding 2 Secondary Inclusionary Criteria • Does the study fit our social skills definition? • Gresham, Van, and Cook, 2006: • Facilitates initiating and maintaining positive social relationships • Contributes to peer acceptance and friendship development • Results in satisfactory school adjustment • Allows individuals to cope with and adapt to the demands of the social environment • Is the intervention not part of a larger treatment package? • Coded 190 unique studies • 64 studies moved on to Coding 3 (IOA = 92% on 38% of studies)

  22. Method • Coding 3 Primary Inclusion Criteria • If only one participant, more than one replication across settings or behaviors • Presence of variability in baseline and treatment conditions across at least 2 participants, settings, or behaviors • Graphs that UnGraph was able to score • 40 studies eligible for analysis (IOA = 100% on 30% of studies)

  23. Method • Coding 3 • Design Type and Subtype • Research Question • Main Unit of Comparison • Participant Info • Phase Info • Dataset Info • Measurement Strategy • DV information • IOA • Treatment Efficacy • Study Quality • Three replications across or within? • Treatment Integrity? • IV Operationally Defined? • DV Operationally Defined? • IOA

  24. Method • Data Extraction • UnGraph (Biosoft, 2004) • Extracts numerical data from graphs and puts it into Microsoft Excel • High reliability and validity in collecting data from single subject graphs (Shadish et al., 2009)
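Once UnGraph has written the digitized points to a spreadsheet, they can be pulled into any analysis environment. A minimal sketch follows, assuming a hypothetical workbook layout (one sheet per graph, with phase and value columns); the actual export format used in the study is not described here.

```python
import pandas as pd

# Hypothetical layout: one row per digitized data point, with columns for the
# phase ("baseline" or "treatment") and the extracted value.
points = pd.read_excel("ungraph_extractions.xlsx", sheet_name="study_012")

baseline = points.loc[points["phase"] == "baseline", "value"]
treatment = points.loc[points["phase"] == "treatment", "value"]
print(baseline.mean(), treatment.mean())
```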

  25. Method • Effect Size Calculation (Shadish, 2007) • g = (Mt − Mb) / sp, where Mt is the treatment-phase mean, Mb is the baseline mean, and sp is the pooled standard deviation • Yields a standardized mean difference statistic • Currently the best quantitative method available, but not absolutely accurate

  26. Method 1) Calculate the mean of the baseline and Tx phases in Panels One, Two, and Three 2) Calculate the BL and Tx mean of means across the three panels 3) Calculate the standard deviation of BL and Tx from the mean of means 4) Calculate the effect size for positive social interactions
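Read in order, steps 1-4 amount to averaging each phase within every panel, pooling the panel means, and standardizing the baseline-treatment difference. The sketch below assumes a three-panel multiple-baseline graph and a simple pooling of the two phase standard deviations; the exact pooling used by Shadish (2007) and in the study may differ.

```python
from statistics import mean, stdev

def panel_mean_of_means_g(panels):
    """panels: list of (baseline_points, treatment_points) tuples, one per panel."""
    # Steps 1-2: mean of each phase within each panel, then the mean of means.
    bl_means = [mean(bl) for bl, _ in panels]
    tx_means = [mean(tx) for _, tx in panels]

    # Step 3: standard deviation of the panel means around each mean of means,
    # then a simple pooled SD across the two phases (an assumption here).
    pooled_sd = ((stdev(bl_means) ** 2 + stdev(tx_means) ** 2) / 2) ** 0.5

    # Step 4: standardized mean difference, g = (Mt - Mb) / sp.
    return (mean(tx_means) - mean(bl_means)) / pooled_sd

# Hypothetical positive-social-interaction data for three panels.
panels = [([1, 2, 2], [6, 7, 8]),
          ([0, 1, 1], [5, 6, 7]),
          ([2, 2, 3], [7, 8, 9])]
print(round(panel_mean_of_means_g(panels), 2))
```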

  27. Results

  28. Journals

  29. Results • Are social skills interventions evaluated by single subject methods effective? • Yes • g=3.06

  30. Results • Independent Variable (*p < .05)

  31. Results • Independent Variable • Largest effects occurred with interventions that target self-awareness / self-monitoring of behavior as the primary independent variable

  32. Results • Dependent Variable (*p < .05)

  33. Results • Dependent Variable • Larger effects are seen when targeting prosocial behavior over problem behavior

  34. Results • Recipient and Director (both *p < .05)

  35. Results • Recipient and Director • Largest effects are seen when the student is the target of the intervention and when either an independent experimenter/interventionist or peer is the director of the intervention

  36. Results • Disability (*p < .05)

  37. Results • Disability* • Interventions are most effective with children who have disabilities along the Autism spectrum

  38. Results • Setting

  39. Results • Setting • The largest effects were seen when interventions were implemented in schools

  40. Results • Number of Components

  41. Results • Components** • Increasing the number of components in an intervention did not increase the magnitude of effect • Size may not matter, quality matters

  42. Results • Treatment Integrity

  43. Results • Treatment Integrity* • Studies that reported a percentage of treatment integrity had the largest effects

  44. Results • Reinforcement

  45. Results • Reinforcement** • No difference among these studies with regard to reinforcement • This could be a definitional issue: articles were coded as using reinforcement only when they explicitly specified it, not when they specified a lack of reinforcement • Reinforcement should be provided if it is necessary to acquire behavior change • Quality of intervention may be more important for some

  46. Results • Study Quality

  47. Results • Study Quality* • The largest effects were found in studies with the highest quality ratings, although the difference was not statistically significant • Studies/interventions should be implemented with the highest quality possible

  48. Summary of Results • Mixture of IVs, all having self-awareness qualities • Matches the effect of the group design study • Opposite of the effect of the group design study • Not significantly higher, but higher, and similar to our group design finding

  49. Limitations • The effect size estimator is not entirely accurate • No correction for small sample size • The stringent selection criteria / data analytic method were responsible for the exclusion of nearly 40% of the single subject studies • Both parametric and nonparametric methods would have allowed a larger n
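For comparison, the small-sample correction that is standard in group-design meta-analysis (Hedges, 1981) multiplies an uncorrected g by a factor just under 1. The sketch below shows that group-design correction only to illustrate the size of the issue; it is not a validated adjustment for single-case effect sizes, and the df value is hypothetical.

```python
def hedges_small_sample_correction(g: float, df: int) -> float:
    """Approximate Hedges (1981) correction: J = 1 - 3 / (4*df - 1)."""
    return g * (1 - 3 / (4 * df - 1))

# With the very small n typical of single-case phases (e.g., df = 4),
# the correction shrinks an uncorrected estimate by about 20%.
print(hedges_small_sample_correction(3.06, 4))
```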

  50. Discussion and Future Directions • All interventions were effective • If assessment identifies a social skills deficit, in almost any kind of student, there are interventions that work • Single subject meta-analyses are not yet as informative/definitive as they could be, but current best practice is still to aggregate the magnitude and direction of effects across studies • Calculate results using other available methods and compare effects (NASP in Philadelphia, 2012?)
