What Every Advocate Should Know about Psychological Evaluations

Presentation Transcript


  1. What Every Advocate Should Know about Psychological Evaluations June 19, 2007 Natalie Rathvon, Ph.D.

  2. Questions for consideration • What kinds of assessors conduct psychological and psychoeducational evaluations? • What kinds of classification systems do evaluators use in making diagnoses and determinations? • What tests and measures are most frequently administered? • What questions should advocates consider when reviewing evaluations? • What remedies are available if test results and/or conclusions appear inaccurate or misleading?

  3. Types of psychological assessments and assessors • Psychological vs. psychoeducational assessments • Level of training and supervisory issues • Externs, interns, post-doctoral fellows, & master’s-level assessors (all must be supervised by licensed psychologists) • Certified school psychologists • Ph.D. or Psy.D. level clinical psychologists

  4. Classification systems • American Psychiatric Association • Diagnostic and Statistical Manual of Mental Disorders, 4th ed., Text Revision (DSM-IV-TR) • IDEA 2004 • Specific disability categories • American Association on Intellectual and Developmental Disabilities • Mental Retardation: Definition, Classification, and Systems of Supports, 10th ed.

  5. AAMR definition of mental retardation • American Association on Mental Retardation (AAMR) is now the American Association on Intellectual and Developmental Disabilities (AAIDD). • 2002 AAMR definition of mental retardation – • Mental retardation is a disability characterized by significant limitations both in intellectual functioning and in adaptive behavior as expressed in conceptual, social, and practical adaptive skills.

  6. Diagnosis vs. determination • Eligibility determinations under IDEA are made in the context of a multi-disciplinary team (MDT). • Research and practice indicate that the psychologist’s opinion generally has the most weight. • Some medical diagnoses are closely aligned with IDEA categories, while others are not.

  7. The DSM-IV multiaxial format • Axis I: Clinical Disorders; Other Conditions That May Be a Focus of Clinical Attention • Axis II: Personality Disorders; Mental Retardation • Axis III: General Medical Conditions • Axis IV: Psychosocial & Environmental Problems • Axis V: Global Assessment of Functioning (GAF), a 1–100 scale (scores of 41–50 indicate serious symptoms)

  8. LD as an example: Category vs. diagnosis • Learning disabilities = a collective term representing multiple disorders in specific areas (oral expression, listening comprehension, written expression, basic reading skill, reading comprehension, reading fluency skills, mathematics calculation, mathematics problem solving) • Specific learning disability vs. global cognitive deficits • Category (collective term) vs. diagnosis (specific disorder)

  9. Examples of DSM-IV diagnoses vs. IDEA categories • DSM-IV, TR Reading Disorder vs. IDEA specific learning disability (in one of eight areas) • DSM-IV, TR Dysthymic Disorder, Generalized Anxiety Disorder, Psychotic Disorder NOS, etc. vs. IDEA serious emotional disturbance

  10. LD diagnosis: The ability-achievement discrepancy model • Exclusionary diagnosis: IQ was measured to rule out the possibility that learning problems resulted from low intelligence. • No research support for validity of LD diagnosis based on IQ-achievement discrepancies • Virtually impossible to get a discrepancy before Grade 3 on typical tests
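
A minimal sketch of the arithmetic behind the discrepancy model the slide critiques may help. It assumes standard scores with mean 100 and SD 15; the 1.5 SD cutoff and the IQ–achievement correlation are illustrative values only (actual criteria varied by state and instrument), and the function names are hypothetical:

```python
# Illustrative sketch of ability-achievement discrepancy logic.
# Assumes standard scores (mean 100, SD 15); the cutoff and the
# correlation are hypothetical illustration values, not official criteria.

MEAN, SD = 100.0, 15.0

def simple_discrepancy(iq: float, achievement: float,
                       cutoff_sd: float = 1.5) -> bool:
    """Flag a 'severe' discrepancy when achievement falls cutoff_sd
    standard deviations or more below measured IQ."""
    return (iq - achievement) >= cutoff_sd * SD

def regression_discrepancy(iq: float, achievement: float,
                           r: float = 0.6, cutoff_sd: float = 1.5) -> bool:
    """Regression-based variant: compare achievement with the value
    predicted from IQ, correcting for regression to the mean."""
    predicted = MEAN + r * (iq - MEAN)        # predicted achievement
    se_est = SD * (1.0 - r ** 2) ** 0.5       # standard error of estimate
    return (predicted - achievement) >= cutoff_sd * se_est

# Average IQ but very low reading achievement:
print(simple_discrepancy(100, 75))      # True: 25-point gap >= 22.5
print(regression_discrepancy(100, 75))  # True under these assumptions
```

One reason such discrepancies are hard to obtain before Grade 3 is visible in the arithmetic: as the floor-effects discussion on slide 23 suggests, early achievement tests may not offer scores low enough for a young struggling reader to open a 22-point gap below measured IQ.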

  11. Changes in LD determination • No longer required to find a “severe” discrepancy between ability and achievement to determine LD • Can use response to intervention (RTI) – failure to respond to scientific, research-based intervention – or “some other alternative research-based procedures” • Additional procedures are now required for identifying children with SLDs (34 CFR Part 300, Subpart D) • Examples: Documentation of adequate instruction and repeated achievement assessments

  12. Frequently administered tests • The “standard battery” (one size fits all) • Same set of tests, regardless of the referral question • Major test categories • Cognitive ability/achievement batteries • Social-emotional measures • Adaptive behavior scales • Visual-motor tests (not reviewed here)

  13. Cognitive ability/achievement batteries: WISC-IV/WIAT-II • Wechsler Intelligence Scale for Children – 4th Edition (WISC-IV) • Ages 6:0 – 16:11 • 15 subtests (10 core, 5 supplementary) • Combine to yield 4 index scores and a full-scale IQ (no more Verbal IQ and Performance IQ) • Compared with the WISC-III, examinees show an average FSIQ decrease of 2.5 points on the WISC-IV.

  14. WISC-IV/WIAT-II, cont. • Co-normed with the Wechsler Individual Achievement Test, 2nd ed. (WIAT-II) • Conorming: same norm group; permits more reliable and valid comparisons • Ages 4:0 – 85+ • Covers the seven areas of learning disabilities specified in IDEA 1997 • Includes a measure of reading fluency, although it is an inadequate one

  15. Profile analysis: Does variability equal disability? • Common but unvalidated practice that involves analyzing score differences for diagnostic purposes • Lack of evidence of reliability and predictive validity • With multiple comparisons, increased likelihood of differences due to chance and overinterpretation • Prevalence rates of various profiles in the standardization sample are not provided. • Score differences CAN be evaluated for statistical significance (probability of difference occurring by chance) and clinical significance (prevalence rate in norm group).
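
Since the slide notes that score differences CAN be tested for statistical significance, a brief sketch of the standard psychometric calculation may be useful. The reliability values below are hypothetical placeholders; real values come from the test manual:

```python
# Minimal sketch: testing whether two index scores differ reliably.
# SD of 15 matches Wechsler-style index scores; the reliabilities
# used below are hypothetical and would be taken from the manual.

import math

SD = 15.0

def critical_difference(r_xx: float, r_yy: float, z: float = 1.96) -> float:
    """Smallest score difference significant at the level implied by z
    (1.96 corresponds to the two-tailed .05 level). Uses the standard
    error of the difference: SD * sqrt(2 - r_xx - r_yy)."""
    return z * SD * math.sqrt(2.0 - r_xx - r_yy)

# Two indexes with hypothetical reliabilities of .94 and .88:
print(f"{critical_difference(0.94, 0.88):.1f}")  # ~12.5 points at .05
```

Statistical significance only says the difference is unlikely to be measurement error; clinical significance (how common a difference of that size is in the norm group) must be read from the manual's base-rate tables.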

  16. Cognitive ability/achievement batteries for young children • Wechsler Preschool and Primary Scale of Intelligence, 3rd ed. (WPPSI-III) • Ages 2:6 to 7:3 • Linked to WIAT-II, but many of the WIAT-II subtests are not appropriate for young and low-performing children

  17. Woodcock-Johnson tests • Woodcock-Johnson Tests of Cognitive Abilities (WJ COG) • Ages 2:0 – 90+ • Standard and Extended Batteries (10 tests each) • Co-normed with Woodcock-Johnson Tests of Achievement (WJ ACH) • Standard Battery (12 tests) and Extended Battery (9 tests) • Watch out for comparisons between WISC-IV scores (apples) and WJ ACH scores (oranges) – the two batteries are not co-normed

  18. K-ABC/KTEA • Kaufman Assessment Battery for Children, 2nd edition (KABC-II) • Ages 3 – 18 • Intended to be “culturally fair” • Minimizes verbal instructions and responses • Conormed with Kaufman Test of Educational Achievement–II (KTEA-II)

  19. Adaptive behavior measures • Must be administered if mental retardation is suspected • Multi-informant scales (teacher, parent/caregiver; sometimes includes examinee self-report) • Examples: • Adaptive Behavior Assessment System–II (ABAS-II) • Vineland Adaptive Behavior Scales–II • Scales of Independent Behavior–Revised (SIB-R)

  20. Measures of social/emotional functioning • Behavior rating scales • Observational procedures • Self-report measures • Interviews • Projective methods

  21. Behavior rating scales • Behavior Assessment System for Children, Second Edition (BASC-2) • Clinical Assessment of Behavior • Child Behavior Checklist • Conners Rating Scales • Scale for Assessing Emotional Disturbance

  22. Projective measures • Much higher level of inference compared with behavioral measures • Very limited evidence of reliability and validity for most measures • Often administered but then reported with minimal detail or interpretive discussion • Examples • Draw-a-Person • Rorschach • Apperceptive personality tests (Thematic Apperception Test, Children’s Apperception Test, TEMAS)

  23. Additional considerations for special testing populations • Preschoolers and early primary grade children • Hard to document academic deficits with certain tests • “Floor” effects – not enough easy items to help identify very low performing examinees • English language learners • How to differentiate lack of English language proficiency or lack of instructional opportunities from cognitive deficits or learning disabilities • Students from high-poverty backgrounds • How to differentiate limited vocabulary and background knowledge and/or lack of adequate instruction from cognitive deficits or learning disabilities

  24. What about nonverbal IQ tests? • Nonverbal intelligence tests (CTONI, TONI, UNIT) are believed to reduce the effects of language and culture on the assessment of cognitive ability. • Use pointing formats, often pantomime directions • Effects cannot be completely eliminated. • Poorer predictors - tasks on nonverbal IQ tests don’t match school demands as closely as tasks on verbal IQ tests

  25. General questions to consider in reviewing evaluations • Is the evaluator qualified? • Does the assessment adequately sample the problem domains? • Does the assessment take into account contextual as well as child-specific factors (inadequate instruction, classroom variables, family stressors, etc.)? • Are the tests administered psychometrically sound (adequate reliability, validity, etc.)? • Are they appropriate for examinees of this age (adequate test floors for young examinees, etc.)?

  26. More general questions • Have the most valid scores been reported and used in the analysis (standard scores, percentiles, relative proficiency indices – not age or grade equivalents)? • Is there an overreliance on computer-generated test interpretation programs? • Do the assessment results match the criteria for the diagnoses and/or determinations made? • Is there diagnostic uncertainty (rule out, diagnosis deferred, unspecified disorder, NOS)?
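
On the first question above, a short sketch shows why standard scores and percentiles are the preferred metrics: both map directly onto the normal distribution, whereas age and grade equivalents have no such interpretation. This assumes scores with mean 100 and SD 15, and the function name is hypothetical:

```python
# Converting a standard score to a percentile rank via the normal CDF.
# Assumes a mean of 100 and SD of 15; uses only the standard library.

import math

def percentile_rank(score: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Percentile rank from the normal CDF: Phi(z) = (1 + erf(z/sqrt(2))) / 2."""
    z = (score - mean) / sd
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(percentile_rank(100)))  # 50th percentile
print(round(percentile_rank(85)))   # ~16th percentile (1 SD below)
print(round(percentile_rank(70)))   # ~2nd percentile (2 SD below)
```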

  27. Still more general questions • Does the evaluation address prognosis with and without intervention? • Does the evaluation include recommendations for evidence-based treatments to address the identified problems – or does it rely on a “placement-as-treatment” approach?

  28. Questions to ask when academic deficits are an issue • Have the relevant achievement domains been adequately measured? • Were comprehensive ability and achievement measures administered (not brief or screening versions)? • If SLD has been excluded because no discrepancy has been identified, has an RTI approach been considered? • Are comparisons between ability and achievement based on co-normed instruments? • When score differences are described, are they evaluated for statistical significance (.05 or .01 level) and clinical significance (prevalence rate in the norm group)?

  29. Questions to ask when behavior/adjustment is an issue • Does the evaluation include rating scales, interviews, and observational procedures?

  30. Questions to ask when mental retardation is an issue • Is there documentation of low cognitive ability AND significant limitations in adaptive functioning?
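
A minimal sketch of the two-prong check described above follows. The cutoff of roughly two standard deviations below the mean (a standard score of about 70) is a common interpretation of “significant limitations”; exact cutoffs, confidence intervals, and clinical judgment vary, and the function name is hypothetical:

```python
# Two-prong check for a mental retardation determination: documented
# low cognitive ability AND significant adaptive limitations. The
# cutoff of 70 (about 2 SD below the mean of 100, SD 15) is a common
# convention, not a universal rule.

IQ_CUTOFF = 70
ADAPTIVE_CUTOFF = 70

def meets_both_prongs(full_scale_iq: float, adaptive_composite: float) -> bool:
    """Both prongs are required; low IQ alone is not sufficient."""
    return full_scale_iq <= IQ_CUTOFF and adaptive_composite <= ADAPTIVE_CUTOFF

print(meets_both_prongs(65, 68))  # True: both criteria documented
print(meets_both_prongs(65, 85))  # False: adaptive functioning intact
```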

  31. Questions to ask when English learners are evaluated • Is the student’s level of English language proficiency documented? • It can take 3 to 5 years to develop speaking proficiency and 4 to 7 years to develop academic vocabulary. • Were nonverbal intelligence tests and/or receptive-format (pointing) tests included? • Was the child tested in his/her native language and also in English to permit skill comparisons across languages? • Was the examiner bilingual? Was an interpreter available during the assessment? • Has the student had adequate instructional opportunities? • Has an RTI approach been implemented?

  32. Possible remedies if test results appear inaccurate or misleading • Review the evaluator’s qualifications. • Review the amount and quality of the evidence for the diagnostic conclusions and recommendations. • Request additional domain-specific testing that uses “best practices” assessment strategies and measures. • Curriculum-based assessments • Reading inventories and direct reading sampling • RTI approaches • Observational assessments • Validated measures of social/emotional functioning • Measures of contextual variables (e.g., teacher & parent interviews and rating scales; language proficiency measures)

  33. Case Examples!
