
Chapter 3 The Nature and Value of Empirically Validated Interventions Crighton Newsom and Christine A. Hovanitz



  1. Chapter 3 The Nature and Value of Empirically Validated Interventions Crighton Newsom and Christine A. Hovanitz In J. W. Jacobson, R. M. Foxx, & J. A. Mulick. (Eds.), Controversial therapies for developmental disabilities: Fad, fashion, and science in professional practice (pp. 31-44). Mahwah, NJ: Lawrence Erlbaum Associates, Inc. Diane Canavan Caldwell College

  2. Overview • Ethical standards • Societal standards • Criteria for assessment of a scientific statement • Types of participant designs • Expert consensus guidelines • Standards for evidence • Concerns about scientific research • Ways to defend it • Sources of evidence • Activity

  3. Ethical Standards • “All professions involved in developmental disabilities have ethical standards that include a principle requiring that the individual professional provide competent treatment” (p. 31). • However, there is a difference between competent and effective or empirically validated treatment. • Competence is defined in the ethical guidelines of both the American Psychological Association and the American Medical Association, which require that it be based on scientific knowledge. • Conversely, it is left undefined in those of the American Physical Therapy Association and the Council for Exceptional Children.

  4. Ethical Standards • “[P]rofessional organizations typically avoid an explicit requirement that practitioners use only scientifically valid interventions. As a result, most of the professions involved in developmental disabilities tolerate clinicians who provide dubious therapies and managers who operate questionable residential, vocational, and community services” (p. 31).

  5. Societal Standards • Based on health-care and legal systems • Cost-effectiveness • A treatment that serves the greatest number of people for the least amount of money • Standard of common practice • A treatment that is frequently used • Doctrine of the respectable minority • A treatment that is implemented by a significant minority group of practitioners • Also based on a specified theory and service delivery model

  6. Societal Standards • NONE OF THE SOCIETAL STANDARDS STATE THAT THE TREATMENT MUST BE EFFECTIVE!

  7. ELEMENTS OF A SCIENTIFIC APPROACH • The purpose of science is to determine the relationships between phenomena (determinism). • To assess a cause-effect relationship, it must be: • Objective • Testable • Replicable • The occurrence must also satisfy two additional related criteria that contribute to the ease of assessment: • Observability • Measurability

  8. Criteria for Assessment of a Scientific Statement • Objective • Terms common to the community should be used. • Observable events must be operationally defined. • Operational definition: breaks down an event into its component parts, so that independent observers would agree whether or not it occurred (interobserver agreement) • Task analyses • Testable • The statement may be subjected to experimentation in order to confirm or reject it.
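The interobserver agreement check described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the chapter; the function name and the interval records are hypothetical, and it computes simple point-by-point percent agreement between two observers' records of an operationally defined behavior.

```python
# Point-by-point interobserver agreement: two observers independently record
# whether the operationally defined behavior occurred in each interval.
def interobserver_agreement(obs_a, obs_b):
    """Percent of intervals on which the two observers' records agree."""
    if len(obs_a) != len(obs_b):
        raise ValueError("records must cover the same intervals")
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return 100.0 * agreements / len(obs_a)

# Hypothetical interval records (1 = behavior occurred, 0 = it did not)
observer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
observer_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(interobserver_agreement(observer_1, observer_2))  # 90.0
```

High agreement suggests the operational definition is precise enough that independent observers can reliably judge whether the event occurred.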

  9. Criteria for Assessment of a Scientific Statement • Replicable • The results are replicated in different settings, using different participants, with the procedure(s) implemented by different experimenters. • Generalization • Theory • If the results are replicated many times over, the statement may be referred to as a theory. • First, the statement is used to describe a phenomenon. • Later, the theory may be used to predict behavior, and control it under a specific set of circumstances.

  10. How do you know? (Agnew & Pyke, 1969) • Observe it! • In the natural environment • Measure it! • In the natural environment • Test it! • Measure again! • Replicate it! • In different environments

  11. When conducting research . . . • Manipulate only one variable at a time • If manipulating more than one, it is unclear whether one or both cause the resulting change in behavior • Eliminate as many possible confounding variables as you can • Variables other than the treatment that can cause a change in behavior • E.g., changes in medication or diet

  12. Single-Case Designs • Baseline • Absence of the treatment (independent variable) – control condition • Measure responding during this phase in order to compare it to that during treatment • Treatment/Intervention • Implementation of the independent variable • Measure responding again to determine if there is a difference compared to baseline

  13. Single-Case Designs • “If repeated introductions and withdrawals of the variable produce corresponding changes in the behavior, we attribute causal status to the variable” (p. 33).
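The reversal logic quoted above can be illustrated with a small sketch, using hypothetical data for an ABAB (introduce-withdraw-reintroduce) design; the phase labels and session values are invented for illustration only.

```python
# Hypothetical ABAB (reversal) data: problem-behavior responses per session.
# Responding drops each time the treatment is introduced and recovers when
# it is withdrawn, supporting a causal attribution to the variable.
phases = {
    "A1 (baseline)":   [12, 14, 13, 15],
    "B1 (treatment)":  [6, 5, 4, 4],
    "A2 (withdrawal)": [11, 13, 12, 14],
    "B2 (treatment)":  [5, 4, 3, 4],
}

def phase_mean(sessions):
    """Mean responses per session within one phase."""
    return sum(sessions) / len(sessions)

for phase, sessions in phases.items():
    print(f"{phase}: mean = {phase_mean(sessions):.2f}")
```

In practice single-case data are evaluated by visual inspection of level, trend, and variability across phases, not only by phase means; the means here simply make the repeated change visible.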

  14. The Purpose of Single-Case Designs • We use single-case designs because we’re interested in how the intervention affects each individual learner. • We individualize our teaching procedures because every person with autism is unique, with a different set of strengths, areas of deficits, and preferences.

  15. Group Designs • Although we typically use single-subject designs in applied behavior analysis, group designs are also useful. • Participants are randomly divided into at least two groups: experimental (those who receive the treatment) and control (those who do not receive the treatment). • Randomized to control for as many confounds as possible • The groups are compared in the same way that an individual’s performance is compared across baseline and treatment.
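The random-assignment step can be sketched as follows; the participant IDs, group sizes, and outcome scores are hypothetical, and the fixed seed is only there so the sketch is reproducible.

```python
import random

# Hypothetical participant IDs; random assignment controls for confounds
# by distributing participant characteristics evenly (in expectation).
participants = list(range(1, 21))
random.seed(0)  # fixed seed so the sketch is reproducible
random.shuffle(participants)

experimental = participants[:10]  # receives the treatment
control = participants[10:]       # does not receive the treatment

# After the study, group outcomes are compared much as an individual's
# responding is compared across baseline and treatment phases.
outcomes = {
    "experimental": [8, 9, 7, 8, 9, 7, 8, 9, 8, 7],
    "control":      [4, 5, 3, 4, 5, 4, 3, 5, 4, 4],
}
for group, scores in outcomes.items():
    print(group, sum(scores) / len(scores))
```

A real group study would then test whether the between-group difference is statistically reliable; the sketch stops at the group means.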

  16. Evaluating Scientific Evidence • “Expert consensus guidelines” • Some organizations have established standards to evaluate research and/or interventions • These may be seen as interchangeable • Research either supports or refutes the use of a procedure, which (hopefully) leads to the acceptance or rejection of the treatment

  17. Expert Consensus Guidelines • American Association on Mental Retardation • Assessed psychosocial and pharmacological interventions for behavior problems and psychiatric disorders • Ranged from 1 (“extremely inappropriate: a treatment you would never use”) to 9 (“extremely appropriate: this is your treatment of choice”) • New York State Department of Health (1999) • Professionals, service providers and parents reviewed the literature. • Selected studies included: adequate information about the method, participants whose ages fit the specified range, high degrees of experimental control and sound experimental designs, and functional outcomes • 18 interventions were reviewed (scientific to alternative)

  18. Past “Standards” of the Medical Community • Observations/past clinical experience • Basic understanding of diseases and pathologies • Textbooks • Other “expert” colleagues • “The ‘Introduction’ and ‘Discussion’ sections of a journal article would be considered an adequate way of gaining the relevant information from it” (p. 36).

  19. Current Standards • Agency for Healthcare Research and Quality (AHRQ) (U. S. Department of Health and Human Services) • Supports a number of “Evidence-Based Practice Centers” associated with North American universities • Report on evidence and assess technology • 19 grading systems to rate the quality of studies, 7 to rate the overall strength of evidence (from all studies)

  20. Strength of Evidence • Quality • Refers to the rigor of the studies, including the extent to which the design eliminated confounds • Quantity • Refers to the extent of the actual treatment effect, the number of studies that show an effect, and the number of participants across studies • Consistency • Refers to replication: other studies show similar results, using varied experimental designs

  21. Scheme for Grading Strength of Evidence in Medical Research

  22. “Manifesto for a Science of Clinical Psychology” • McFall (1991) • “presidential address to the Section for the Development of Clinical Psychology as an Experimental/Behavioral Science” (p. 37) • Four criteria: • The procedure must be clearly described. • The benefits must be explicitly stated. • The benefits must be scientifically validated. • The benefits must outweigh any possible negative side effects.

  23. Criteria for Empirically Supported Psychological Interventions

  24. Concerns When Conducting Research • Research environment is contrived • There exists the belief that the results will not generalize to applied settings • There is little evidence for a lack of generalization; findings have shown only that the effects in natural settings may not be as great • Participants tend to have only one diagnosis • Therapists in applied settings have less specialized training

  25. Concerns When Using Research to Justify Funding • Some practitioners are concerned that, in the future, funding agencies will rely solely on scientific evidence; therefore, the only way for psychosocial interventions to remain covered (e.g., by insurance) is to show that they are effective according to specified standards. • Advocate for legislation that will place the burden of cost of services on insurance companies.

  26. How practitioners, including those at Caldwell College, address the concerns • More researchers are conducting studies in the participants’ natural environments, and incorporating ones with multiple diagnoses. • Therapists may be specialized in more areas, and more training may be provided to lower level staff members (e.g., instructional aides). • We maximize generalization by training across multiple exemplars, in various settings and using different trainers. • We increase public awareness of ABA.

  27. BEHAVING SCIENTIFICALLY • As we’ve stated, evaluating evidence means being able to recognize sound proof for an intervention, as well as conclude that one is ineffective due to a lack of sound evidence. • Fallibility • “the recognition that one’s current beliefs, despite all the attractions they hold, may still be wrong” (p. 40)

  28. Potentially False Statements • Descriptive statements • Possibly of another’s mental or internal state • Causal statements • Conduct an FBA to determine the function of a response • Ontological statements • Belief that things exist when they really don’t • Relational claims • E.g., that one procedure is more effective than another • Predictions • Professional ethical claims • Impose restrictions without informed consent • Allow the client to make decisions beyond his/her capacity

  29. Modern Sources of Evidence • Scientific evidence from controlled experiments • Case studies • Correlational studies • Textbooks • Colleagues • Anecdotes • Testimonials • Celebrities

  30. O’Donohue (1997) Considerations • Accept the possibility that you may be wrong. • Research to determine if your beliefs are in line with results reported in the scientific literature. • Seek criticism from peers and colleagues, particularly those with greater expertise in the area (or in science, in general). Ask for their arguments and listen to what they have to say. • Include opportunities to receive feedback about practice and management. • Give criticism to those whose practices are not sound.

  31. First do no harm . . . • Engaging in an ineffective intervention is not just a waste of time. • The individual with autism falls even farther behind his/her age-matched peers in terms of skill levels. • Dr. Reeve’s analogy to running a race • This is irresponsible and negligent at best; unethical, illegal, and harmful at worst. • In medicine, the first oath is to do no harm. While engaging in an ineffective treatment is not technically harming the individual, it is wasting time, causing him/her to suffer when an effective one could be implemented – this IS doing harm.

  32. Postmodernist Thinking • What, exactly, does “postmodern” mean? • Challenge scientific principles • Ironically, while these individuals advocate for the “liberation” of people with disabilities, they tend not to be knowledgeable about, or concerned with, their symptoms, referring to them as “social constructs” while condemning those who actually are trying to abolish ineffective and harmful practices (p. 40).

  33. Conclusions • How do we successfully confront advocates of ineffective treatments? • Be familiar with the treatment, and the evidence both for and against it • Respectfully listen to the individual’s argument • Have evidence for a more effective treatment (e.g., ABA) • The articles assigned for today are good resources! • “Only to the extent that judgment is informed by the accumulated systematic, objective knowledge of the larger field does professional training improve quality of intervention” (p. 39).

  34. Activity • “Speculating that the problem originated in Mario’s ability to process and integrate tactile inputs within the central nervous system, we conclude that a treatment program designed to include enriched tactile experiences derived from participation in activities that Mario enjoys and finds meaningful will increase the likelihood that he will take in, process, and integrate tactile inputs as a basis for planning motor actions. While we cannot directly observe if this occurs, we can observe if Mario’s motor behavior improves” (Fisher & Murray, 1991, p. 6).

  35. Activity • “Speculating that the problem originated in Mario’s ability to process and integrate tactile inputs within the central nervous system, we conclude that a treatment program designed to include enriched tactile experiences derived from participation in activities that Mario enjoys and finds meaningful will increase the likelihood that he will take in, process, and integrate tactile inputs as a basis for planning motor actions. While we cannot directly observe if this occurs, we can observe if Mario’s motor behavior improves” (Fisher & Murray, 1991, p. 6).

  36. Questions?

  37. Thank you!
