
Presentation Transcript


  1. A Case Approach to Rating Events and Difficulties in the National Comorbidity Survey 2 (aka “Down and Dirty with the Data”) Elaine Wethington Cornell University & Joyce Serido University of Arizona May 20, 2005

  2. Acknowledgments
  • Ronald C. Kessler (Harvard)
  • George W. Brown (London)
  • William Eaton (Johns Hopkins)
  • Students: Catherine J. Taylor, Lauren Beckles, Karina Chapman, Sarah Howe, Ninfa Leal, Dhurgha Reddy, Jessica Richards

  3. Aims of the Presentation
  • Introduce a case-review approach to coding and rating conventional survey measures of life events and difficulties
  • Apply the method to prediction of onsets of disorder
  • Why we did it:
    • Useful to the life course approach
    • Make the most of conventional survey methods and a pre-existing dataset
    • Reduce the cost of producing detailed data on stressors

  4. Sample
  • Re-interview in 2000-2003 of respondents from the National Comorbidity Survey (NCS-1, 1990-1992)
  • N = 5,006
  • 85% retention rate from wave 1
  • Interview questions are available at the National Comorbidity Survey web site: http://www.hcp.med.harvard.edu/ncs/

  5. Measures in the National Comorbidity Survey 2
  • Approximately 200 questions about life events and difficulties, based on NCS-1 (Kessler et al., 1994), the Detroit Area Survey 1985 (Kessler et al., 1984), and the Structured Life Event Interview (Wethington et al., 1995)
  • Onsets of depression, anxiety disorders, IED, PTSD, and substance abuse (lifetime and 12-month); self-reported disability associated with disorders
  • Social support, personality, mood, childhood conditions, demographics

  6. Methods
  • Constructed a one-year case history of life events and difficulties (follow-up year only)
  • SAS programming step: generated event and difficulty records from closed-ended questions and dates (see the sketch below)
  • Case review steps:
    • Coded open-ended responses using narrative text and limited demographic characteristics
    • Scanned the entire case record and eliminated duplicate mentions of events/difficulties
    • Rated events and difficulties on key dimensions
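The SAS code used to generate these records is not reproduced in the presentation. The sketch below is a minimal Python analogue of the kind of record-generation step described above, assuming hypothetical field names (`job_loss`, `money_problems`, etc.); it illustrates the idea, not the study's actual program.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StressorRecord:
    """One provisional event/difficulty record, later checked at case review."""
    respondent_id: str
    source_item: str            # which closed-ended question produced the record
    kind: str                   # "event" or "difficulty" (may be revised by reviewers)
    start_month: Optional[int]  # month within the one-year follow-up window
    end_month: Optional[int]

def generate_records(response: dict) -> list[StressorRecord]:
    """Turn closed-ended answers plus their reported dates into stressor records.

    `response` is a hypothetical dictionary of survey answers; the actual
    study performed this step in SAS against the NCS-2 data file.
    """
    records = []
    if response.get("job_loss") == "yes":
        # A dated occurrence is provisionally treated as an event.
        records.append(StressorRecord(
            respondent_id=response["id"],
            source_item="employment",
            kind="event",
            start_month=response.get("job_loss_month"),
            end_month=response.get("job_loss_month"),
        ))
    if response.get("money_problems") == "yes":
        # An ongoing problem with a start and end date is provisionally a difficulty.
        records.append(StressorRecord(
            respondent_id=response["id"],
            source_item="finances",
            kind="difficulty",
            start_month=response.get("money_problems_start"),
            end_month=response.get("money_problems_end"),
        ))
    return records
```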

  7. Dimensions Rated for First Analyses
  • Event vs. difficulty
  • Severity, defined as long-term threat (estimated threat 10-14 days after the event occurrence)
    • The measure presented today conflates estimated level of severity and certainty of the rating
    • NOT “contextual threat” ratings
  • Content
  • Focus (who the event happened to: subject, other, joint) and relationship
  • Loss (Lazarus; Brown & Harris)
  • Danger (Brown & Harris; Dohrenwend)
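To make the coding scheme concrete, the sketch below types these dimensions as a small Python record. The category labels come from this and later slides (the four severity levels appear on slide 14); the structure itself is an assumption for illustration, not the project's actual codebook.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    SEVERE = "severe"
    PROBABLY_SEVERE = "probably severe"
    POSSIBLY_SEVERE = "possibly severe"
    NOT_SEVERE = "not severe"

class Focus(Enum):
    SELF = "self"    # happened to the respondent
    OTHER = "other"  # happened to someone else
    JOINT = "joint"  # happened jointly to the respondent and another person

@dataclass
class Rating:
    is_difficulty: bool  # event vs. ongoing difficulty
    severity: Severity   # long-term threat, estimated 10-14 days after occurrence
    focus: Focus
    relationship: str    # e.g. "spouse/partner", "child", "other"
    loss: bool           # loss dimension (Lazarus; Brown & Harris)
    danger: bool         # danger dimension (Brown & Harris; Dohrenwend)

# Example: a severe joint event involving the spouse, coded as a loss
example = Rating(is_difficulty=False, severity=Severity.SEVERE,
                 focus=Focus.JOINT, relationship="spouse/partner",
                 loss=True, danger=False)
```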

  8. Process
  • Strategy: put together information from 3 pieces of paper
  • SAS program step to generate lists for each respondent
  • Coding and rating done by 5 students
  • Multiple records of the same event/difficulty reduced to one, when appropriate
  • Each case checked by the investigators
  • Data checked and re-checked extensively
  • Total process: about 9 months

  9. Unlike some case review and stressor rating methods…
  • Social context (e.g., availability of support from others) NOT used in rating severity
  • “Objective” details only
    • (However, humiliation and entrapment ratings were not possible, since they depend on knowing the social context)
  • ALL information preserved for future use
  • Other coding methods remain possible, e.g.:
    • Short-term threat
    • More detailed information about focus and content

  10. Comparison of SAS-Generated and Case-Reviewed Events and Difficulties

      Source of Question      SAS      Reviewed
      9/11                    1066        300
      Traumas (12 mo.)        1203        483
      Health Screening           0         28
      R Illness               1044       1030
      Employment              2569       2322
      Finances                1751       1188
      Spouse/Partner Rel.     2050       1988
      Children                1428       1118
      Social Networks         5099       4976
      Other Life Events       1837       1316
      Total                 18,047     14,749

  11. Key Characteristics of Reviewed and Rated Events and Difficulties

      Events                        10,057   68.2%
      Difficulties                   4,692   31.8%

      Events
      • SAS coding sufficient        8,688   86.2%
      • Intervention necessary       1,389   13.8%

      Difficulties
      • SAS coding sufficient        3,473   74.0%
      • Intervention necessary       1,219   26.0%

  12. Focus of Rated Events/Difficulties and Relationship to Subject

                                    Relationship
      Focus    Count    %      Spouse/Partner  Children  Other
      Self     5769   39.1%
      Joint    3042   20.6%        65.6%         15.0%    19.4%
      Other    5938   40.3%         9.4%         12.0%    78.6%

  13. Number of Events/Difficulties Reported: Number of Cases Reporting 0 to 12 or More

                        SAS              After Review
      Number       Count     %          Count     %
      0              574    11.5          652    13.0
      1              825    16.5          970    18.5
      2              776    15.5          924    15.6
      3              708    14.1          782    11.9
      4              600    12.0          595     7.3
      5              444     8.9          365     5.6
      6              324     6.5          280     3.6
      7              239     4.8          181     2.0
      8              159     3.2          102     2.3
      9              105     2.1           57     1.1
      10              73     1.5           37     0.7
      11              46     0.9           33     0.7
      12 or more     133     2.6           38     0.5

  14. Severity Ratings, by Method

                              SAS ª                          After Review
                       Events        Difficulties      Events        Difficulties
      Severity        Count    %     Count    %        Count    %    Count    %
      Severe            450   4.2      445  11.4         573   5.7     507  10.8
      Probably Severe  2682  25.0     2406  61.5        2126  21.1    2509  53.4
      Possibly Severe  7592  70.8     1062  27.1        7179  71.4    1617  34.5
      Not Severe           0   0.0        0   0.0         179   1.8      59   1.3

      ª Excludes 3,409 entries generated from open-ended questions that SAS could not classify as either events or difficulties.

  15. “Severe” and “Probably Severe” Reports: Percent by Sex (Case Review: Weighted)

                               Events                      Difficulties
                       Severe   Probably Severe     Severe   Probably Severe
      Male (48.6%)*     43.2        48.0              37.9        46.8
      Female (51.4%)*   56.8        52.0              62.1        53.2

      *Proportion in sample

  16. “Severe” and “Probably Severe” Reports: Percent by Age Group (Case Review: Weighted)

                               Events                      Difficulties
                       Severe   Probably Severe     Severe   Probably Severe
      25-34 (22.6%)*    19.8        25.9              22.0        21.0
      35-44 (28.9%)*    31.8        33.3              36.8        29.8
      45-54 (29.2%)*    29.2        28.7              26.0        30.1
      Over 54 (19.3%)*  19.2        12.1              15.2        19.1

      *Proportion in sample

  17. “Severe” and “Probably Severe” Reports: Percent by Level of Education (Case Review: Weighted)

                                     Events                      Difficulties
                             Severe   Probably Severe     Severe   Probably Severe
      Less than HS (12.7%)*    22.7        13.3             14.9        17.6
      HS degree (30.5%)*       26.5        29.7             32.4        31.3
      Some college (28.0%)*    30.8        30.4             27.2        29.0
      College degree (28.8%)*  20.0        26.6             25.5        22.1

      *Proportion in sample

  18. Reliability and Validity
  • Inter-rater reliability
  • Fall-off over 12 months
  • Predictive validity (relationship to onsets)

  19. Inter-rater Reliability

                             Kappa   Alpha
      Event vs. Difficulty    .95     .93
      Loss (yes/no)           .95     .97
      Danger (yes/no)         .95     .97
      Severity                .89     .90
      Focus                   .89     .89
      Classification code     ----    .82
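For readers unfamiliar with the agreement statistics in this table, the sketch below shows one standard way to compute Cohen's kappa for two raters on a single code (e.g., loss yes/no). The presentation does not say which software or kappa variant the project used, so this is purely illustrative.

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the agreement expected by chance from each rater's marginal frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a, "raters must code the same cases"
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    if expected == 1.0:
        return 1.0  # both raters used a single identical category throughout
    return (observed - expected) / (1 - expected)

# Toy example: two raters coding "loss" (yes/no) on six cases
print(cohens_kappa(["yes", "yes", "no", "no", "yes", "no"],
                   ["yes", "yes", "no", "yes", "yes", "no"]))  # ~0.67
```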

  20. Preliminary Analyses of Predictive Validity
  • The case review method cleanly distinguishes events from difficulties
  • Both the SAS-generated and the case-reviewed measures show that severe “occurrences” in the month of onset are related to onset of depression
  • Preliminary findings indicate that severe events are related to onset of depression within 30-60 days (after that, the effect decays)

  21. Limitations
  • Men reported less detail in open-ended questions (which affects rating)
  • Stigmatized behavior was under-reported (e.g., jail time had to be inferred)
  • More complicated contextual rating schemes using trained interviewers are much better at:
    • Dating onsets and offsets of difficulties
    • Matching related events and difficulties to each other

  22. Findings
  • Fall-off in reporting appears to be reduced (perhaps artifactually?)
  • The approach reduces the cost of using case-review methods:
    • Trained coders, but conventionally trained interviewers
    • Takes less time to code more interviews
  • The method can be used in very large datasets:
    • 5,006 cases rated and entered in 2 months, plus 4 months of additional checking
    • A previous study using more complicated methods took 9 months to interview, code, and rate 100 interviews

  23. SAS generation techniques could be applied to many pre-existing datasets
  • But you have to live with ambiguity…
  • Preserves more information about events and difficulties than other case rating methods
  • Test hypotheses about different rating schemes?
