
Measuring and Assessing Severity of Involvement for Children with SSD

  1. Measuring and Assessing Severity of Involvement for Children with SSD Peter Flipsen Jr., PhD, S-LP(C), CCC-SLP Professor of Speech-Language Pathology Idaho State University flippete@isu.edu (208) 373-1727

  2. Outline • 1. What is severity? • What factors affect severity? • Defining severity categories? • Age differences? • 2. Assessing Severity

  3. Severity of Involvement • How Bad is the Problem? • Is it mild? • Is it moderate? • Is it severe? • Depends somewhat on the disorder (we will focus on children with SSD).

  4. Why is severity important? • Sometimes it isn’t. It may be enough to simply say there is a disorder. • But … • 1. It may affect access to service. • Some payers will limit what they will pay for depending on severity.

  5. Why is severity important? • 2. It may affect caseload management. • Clinicians may group clients by severity. • OR • We may see severe clients more often than mild ones.

  6. Why is severity important? • 3. It may influence our treatment choices. • For example: • conventional minimal pair therapy MAY be better for milder cases • cycles or multiple oppositions approaches MAY be better for more severe cases.

  7. What factors might affect severity? • How do we decide on severity? • In general we might consider: • 1. Specific skills the speaker may be lacking (disability). • Generally the easiest for us to measure.

  8. What factors might affect severity? • 2. Effect of skill reduction on the speaker’s daily functioning (handicap). • Difficult to measure. • Including a measure of “intelligibility” is probably as much as we normally do.

  9. Gold Standard? • Ideally we would have some ultimate standard or reference to compare against. • Might allow us to identify the relevant factors, but such a standard doesn’t exist. • The judgment of experienced clinicians is usually seen as the next best thing. • Dollaghan (2003) referred to such interim standards as a “tin standard”.

  10. What do experienced clinicians use? • Flipsen, Hammer, and Yost (2005) • Based on ratings from 6 very experienced clinicians (>10 years in the field) • Concluded that they consider: • Number of errors • Types of errors • Consistency of errors • Intelligibility • Accuracy at the sound and whole word level

  11. Defining Severity Categories • How many categories should we have? • Is mild, moderate, and severe enough? • Should we include profound? • Should we have intermediate categories? • No definitive answers. • May be defined for us by payers, administrators, or test developers. • May be left up to us to decide.

  12. Defining Severity Categories • How do we know what is mild vs. moderate vs. severe? • Where do we draw the line between the categories?

  13. Defining Severity Categories • Some norm-referenced speech sound tests offer severity categories with defined boundaries: • Hodson Assessment of Phonological Patterns-3 • Major Deviations / Category • 1-50 Mild • 51-100 Moderate • 101-150 Severe • >150 Profound

  14. Defining Severity Categories • Problems with boundaries set by test developers: • 1. They are usually arbitrary. • 2. Not clear how they would relate to boundaries used by a different test developer. • Hard to compare for transfer cases where clinicians use different tests.

  15. Age Considerations • Age is an important issue. • Clearly if a 7 year old and a 3 year old show similar speech performance, the older child will be of greater concern. • Norm-referenced tests give us standard scores that account for age • BUT norm-referenced tests rely solely on number of errors and don't consider other relevant factors. • They also rely solely on single word productions, which don't always represent typical performance.

  16. Measuring Severity • Still lots of unanswered questions. • So what do we do? • Currently we don’t have any ideal measures available. • But we do have options.

  17. 1. Perceptual Rating Scales • Common practice. • Make a judgment based on listening and observing the child and assign them to a category. • A common 5 point scale might include: Normal, Mild, Moderate, Severe, Profound. • May include anywhere from 3-9 points. • Clinician uses whatever they feel is appropriate to make the judgment.

  18. Concerns with Rating Scales • 1. Different clinicians may consider different factors. • Ratings can vary considerably across clinicians. • E.g., Rafaat, Rvachew, and Russell (1995) had 15 clinicians (5+ years of experience) rate 45 children on a 5 point scale. • Only 61% exact agreement. • Even very experienced clinicians don’t agree very well. • Flipsen et al. (2005) found an intra-class correlation of 0.60 for the 6 clinicians on 17 samples.
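
One way such an exact-agreement figure can be computed (not necessarily the exact procedure Rafaat et al. used) is sketched below: for each child, take the proportion of rater pairs assigning the identical category, then average across children. The ratings in the example are invented for illustration.

```python
from itertools import combinations

# Sketch: percent exact agreement across raters on a 5-point severity scale.
# For each child, the share of rater pairs giving identical ratings is computed,
# then averaged over children. This is one common definition and may not match
# the exact procedure in Rafaat et al. (1995); the ratings below are invented.

ratings_by_child = {
    "child_1": [2, 2, 3, 2],   # one 5-point rating per rater
    "child_2": [4, 4, 4, 5],
    "child_3": [1, 2, 2, 2],
}

def exact_agreement(ratings):
    per_child = []
    for scores in ratings.values():
        pairs = list(combinations(scores, 2))
        per_child.append(sum(a == b for a, b in pairs) / len(pairs))
    return 100.0 * sum(per_child) / len(per_child)

print(f"Exact agreement = {exact_agreement(ratings_by_child):.0f}%")  # 50%
```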

  19. Concerns with Rating Scales • 2. Lack of reference standards. • Even if clinicians all considered the same factors, where do we draw the line between categories? • Different clinicians may draw the lines at different places. • Probably not the best approach.

  20. 2. PCC in conversation • One measure that has undergone some validation (and is often used in research) is Percentage Consonants Correct (PCC) from conversational speech samples. • Narrow phonetic transcription • Look at each attempt at a consonant and score as correct or incorrect. • Any change (including distortions) = error. • Calculate % correct over the entire sample.
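
As a concrete illustration of the calculation just described (not part of the original slides), the sketch below assumes each consonant attempt in the transcribed sample has already been judged correct or incorrect; the data format and function name are illustrative.

```python
# Minimal sketch of a PCC calculation from a transcribed conversational sample.
# Assumes every consonant attempt has been scored correct (True) or incorrect (False);
# any change from the target, including distortions, counts as an error.

def percentage_consonants_correct(attempts):
    """attempts: one boolean per consonant attempt in the sample."""
    if not attempts:
        raise ValueError("No consonant attempts to score.")
    return 100.0 * sum(attempts) / len(attempts)

# Example: 74 of 92 consonant attempts judged correct.
sample = [True] * 74 + [False] * 18
print(f"PCC = {percentage_consonants_correct(sample):.1f}%")  # PCC = 80.4%
```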

  21. 2. PCC in conversation • Shriberg and Kwiatkowski (1982) had a large group (52) of clinicians rate severity on conversational speech samples. • Found that ratings matched well onto the following categories: • PCC range / Rating • 85+ Mild • 65-85 Mild-moderate • 50-65 Moderate-severe • <50 Severe
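
Applied programmatically, the bands above amount to a simple lookup. The sketch below is one way to do it; how values falling exactly on a shared boundary are handled is an assumption, since the published bands adjoin.

```python
# Sketch: map a conversational PCC onto the Shriberg & Kwiatkowski (1982) bands.
# How a value falling exactly on a shared boundary is resolved is an assumption.

def pcc_severity_band(pcc):
    if pcc > 85:
        return "Mild"
    elif pcc >= 65:
        return "Mild-moderate"
    elif pcc >= 50:
        return "Moderate-severe"
    return "Severe"

print(pcc_severity_band(80.4))  # Mild-moderate
```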

  22. Concerns with PCC • Doesn’t account for age. • Only looks at consonants. • Doesn’t consider other potentially important factors. • Based on conversational speech which is time consuming to evoke and transcribe.

  23. PCC and Age • More recently Austin and Shriberg (1997) published some reference data (not really norms) for PCC from conversational speech samples. • Provides means and standard deviations for males and females at different ages. • Allows for calculation of z-scores (# of standard deviations from the mean).
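
To make the z-score step concrete: once the mean and standard deviation for the child's age and sex are looked up in the reference tables, the calculation is a one-liner. The reference values in the sketch below are placeholders, not Austin and Shriberg's actual figures.

```python
# Sketch: convert a conversational PCC into a z-score against age- and
# sex-specific reference data (Austin & Shriberg, 1997). The mean and SD
# below are made-up placeholders; real values come from the reference tables.

def pcc_z_score(observed_pcc, reference_mean, reference_sd):
    """Number of standard deviations the observed PCC falls from the reference mean."""
    return (observed_pcc - reference_mean) / reference_sd

z = pcc_z_score(observed_pcc=72.0, reference_mean=90.0, reference_sd=6.0)
print(f"z = {z:.1f}")  # z = -3.0 (three SDs below the placeholder mean)
```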

  24. 3. PCC in Imitated Sentences • To accommodate concerns with transcribing conversational speech, Johnson, Weston, and Bain (2004) developed a sentence imitation task. • Can score as child imitates each sentence (cross out any phonemes in error). • Simple calculation.
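
The "simple calculation" can be illustrated as follows: each stimulus sentence contributes a known number of target consonants, errors are tallied as the child imitates, and PCC is the percentage remaining correct. The sentence counts below are invented and are not Johnson et al.'s actual stimuli.

```python
# Sketch of scoring an imitated-sentence task: tally errors per sentence against
# a known number of target consonants, then compute PCC over all sentences.
# The counts below are invented for illustration, not Johnson et al.'s stimuli.

sentences = [
    {"target_consonants": 12, "errors": 3},
    {"target_consonants": 10, "errors": 1},
    {"target_consonants": 14, "errors": 4},
]

total_targets = sum(s["target_consonants"] for s in sentences)
total_errors = sum(s["errors"] for s in sentences)
pcc = 100.0 * (total_targets - total_errors) / total_targets
print(f"Sentence-imitation PCC = {pcc:.1f}%")  # 77.8%
```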

  25. 3. PCC in Imitated Sentences • Johnson et al. showed that PCC in conversation was not significantly different from PCC on this task. • Useful alternative? • However, no age reference data are available.

  26. 4. Alternative Severity Measures • Several other measures might be used. For example: • Overall intelligibility (% words understood in conversation). • Shriberg et al. (1997) proposed several variations on PCC • E.g., PVC, PPC, PCC-R • Ingram and Ingram (2001) proposed several measures that consider the whole word: • Phonological Mean Length of Utterance • Proportion of Whole Word Proximity • Proportion of Whole Word Variability
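
As a rough illustration of the whole-word measures (a simplified sketch only; Ingram and Ingram's published procedure includes additional scoring rules, and the example word is invented): phonological mean length of utterance (pMLU) is commonly summarized as one point per segment produced plus one per correct consonant, and proportion of whole-word proximity (PWP) as the child's pMLU divided by the target's.

```python
# Simplified sketch of two Ingram & Ingram (2001) whole-word measures.
# Common summary: pMLU = segments produced + correct consonants;
# PWP (proportion of whole-word proximity) = child pMLU / target pMLU.
# The published scoring rules are more detailed; this word is an invented example.

def pmlu(segments, correct_consonants):
    return segments + correct_consonants

# Hypothetical target "basket": 6 segments, 4 consonants (all correct by definition).
target_pmlu = pmlu(segments=6, correct_consonants=4)   # 10
# Hypothetical child production "backy": 4 segments, 2 correct consonants.
child_pmlu = pmlu(segments=4, correct_consonants=2)    # 6

pwp = child_pmlu / target_pmlu
print(f"child pMLU = {child_pmlu}, PWP = {pwp:.2f}")   # PWP = 0.60
```

In practice these values would be averaged over a sample of words rather than computed for a single item.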

  27. 4. Alternative Severity Measures • Flipsen et al. (2005) compared many of these alternative measures to PCC. • Looked at how they correlated with ratings from very experienced clinicians. • Several were just as good but none of the alternatives appeared to be any better than PCC. • That included intelligibility. • Most involved more complicated calculations.

  28. Conclusions • Severity estimates are often necessary. • To date we still don't fully understand the best way to estimate severity. • We have several options available. • Perceptual rating scales should probably be avoided. • To date few of the available measures have been validated. • None so far seems any better than the oldest objective measure, PCC.

  29. References • Austin, D., & Shriberg, L. D. (1997). Lifespan reference data for ten measures of articulation competence using the Speech Disorders Classification System (SDCS) (Tech. Rep. No. 3). Phonology Project, Waisman Center, University of Wisconsin-Madison. • Dollaghan, C. A. (2003). One thing or another? Witches, POEMS, and childhood apraxia of speech. In L. D. Shriberg & T. F. Campbell (Eds.), Proceedings of the 2002 Childhood Apraxia of Speech Research Symposium (pp. 231-237). Carlsbad, CA: The Hendrix Foundation. • Flipsen, P., Jr., Hammer, J. B., & Yost, K. M. (2005). Measuring severity of involvement in speech delay: Segmental and whole-word measures. American Journal of Speech-Language Pathology, 14(4), 298-312. • Ingram, D., & Ingram, K. D. (2001). A whole-word approach to phonological analysis and intervention. Language, Speech, and Hearing Services in Schools, 32, 271-283. • Johnson, C. A., Weston, A. D., & Bain, B. A. (2004). An objective and time-efficient method for determining severity of childhood speech delay. American Journal of Speech-Language Pathology, 13, 55-65. • Rafaat, S. K., Rvachew, S., & Russell, R. S. C. (1995). Reliability of clinician judgments of severity of phonological impairment. American Journal of Speech-Language Pathology, 4(3), 39-46. • Shriberg, L. D., Austin, D., Lewis, B. A., McSweeny, J. L., & Wilson, D. L. (1997). The percentage of consonants correct (PCC) metric: Extensions and reliability data. Journal of Speech, Language, and Hearing Research, 40, 708-722. • Shriberg, L. D., & Kwiatkowski, J. (1982). Phonological disorders III: A procedure for assessing severity of involvement. Journal of Speech and Hearing Disorders, 47, 256-270.
