
Maximizing the Validity of Interviewer-Collected Self-Report Data: A Quality Assurance Model in Action with the GAIN. Janet C. Titus, Ph.D., Michelle K. White, M.A., Michael L. Dennis, Ph.D. Lighthouse Institute, Chestnut Health Systems.


Presentation Transcript


  1. Maximizing the Validity of Interviewer–Collected Self-Report Data: A Quality Assurance Model in Action with the GAIN Janet C. Titus, Ph.D. Michelle K. White, M.A. Michael L. Dennis, Ph.D. Lighthouse Institute Chestnut Health Systems

  2. Abstract Conclusions drawn from scientific studies are only as solid as the quality of the data on which they are based. In interviewer-administered assessments, one source of variation that affects the quality, and thus the validity, of the data is the quality of the assessment administration. Interviewers’ misunderstandings of item meanings, inaccuracies in recording, and failure to clarify ambiguous responses are only a few of the difficulties that erode validity. This is especially problematic in multi-site studies, where site differences in interviewer training and supervision compound negative influences on the quality of the data. To address these problems in our studies, we have developed a quality assurance model organized around four core areas of an assessment administration: Documentation, Instructions, Items, and Engagement.

  3. Abstract, continued Each core area contains a set of guidelines against which the quality of the administration is evaluated, and certification in assessment administration is earned when the interviewer demonstrates mastery in all four areas. Although it was developed for use with the family of instruments we use in our treatment studies -- the Global Appraisal of Individual Needs -- the model can easily be adapted to fit virtually any semi-structured, interviewer-administered data gathering instrument. Our quality assurance model has been successfully implemented in over 100 research and clinical sites across the U.S. Several hundred staff have been trained in the model and close to 200 have been certified in assessment administration. (Supported by CSAT contract 270-2003-00006)

  4. Quality Assurance in Assessment Administration Quality assurance (“QA” for short) is a cyclical process consisting of: • the monitoring of an interviewer’s skills at administering an assessment protocol, and • the provision of evaluative feedback.

  5. • Monitoring can be done live or via audiotape. • Feedback can be given in person or in writing. • Once the interviewer’s skills reach a predetermined level of competence, the interviewer is “certified” in the assessment administration. • QA can continue post-certification to monitor ongoing adherence to the protocol.

  6. Four Core Areas of Assessment QA • Documentation: accuracy and completeness of recording responses and administrative information • Instructions: accuracy and clarity of explanations, directions, and transitional statements • Items: delivery and clarification of the items • Engagement: quality of the interaction between the interviewer and the client

  7. Criteria for Assessing the Quality of an Administration • Most of the following criteria under each core area are generic and can be applied to any assessment. • Some criteria will be tailored to your specific assessment to account for sections not found in most instruments. • Definitions of the criteria for evaluating QA of the GAIN are in the handout for this poster and in Chapter 4 of the GAIN manual (www.chestnut.org/LI/gain/Manuals.pdf).

  8. ~ Documentation ~(* - GAIN-specific) • Cover page (front & back)* • Check for Cognitive Impairment* • General Directions, Literacy and Initial Administration Questions* • Time to complete* • Urgency & Denial-Misrepresentation* • Administration Ratings* • Documentation of participant answers

  9. ~ Instructions ~(* - GAIN-specific) • Introduction to assessment • Check for Cognitive Impairment* • Timeline* • Additional Instructions for oral/self administration* • Introduction of scales/Use of transitional statements • Use of the cards & defining of response choices • Repeating response choices when necessary • Handling of participant questions about instructions

  10. ~ Items ~(* - GAIN-specific) • Item order & skips • Word order • Use of stems & time frames • Use of parenthetical statements* • Clarification of client’s responses for coding • Appropriate handling of client-initiated questions • Responsiveness to apparent misunderstandings, inattentiveness, & inconsistencies

  11. ~ Engagement ~ • Flow of the interview • Appropriate voice articulation and inflection • Use of encouraging or motivational statements • Sensitivity to client’s needs • Rapport

  12. Rating the Quality of an Assessment Administration • Performance in each of the four core areas is assessed on a 4-point scale: Excellent / Sufficient / Minor Problems / Problems • Definitions of each scale value are in the QA chapter of the GAIN manual (www.chestnut.org/LI/gain/Manuals.pdf).

  13. Guidelines for Preparing Feedback • Feedback should be balanced -- it contains both things done well and things to improve. • Feedback should be specific and behavioral.

  14. Certification in GAIN Administration • A QA reviewer evaluates the administration using the hardcopy assessment and an audiotape of the session (live monitoring and/or oral feedback may also be used). • To be certified in GAIN administration, each core area must receive a rating of “Sufficient” or better. • This usually happens within four monitored assessments.
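The certification rule above is a simple threshold check over the four core areas. As a minimal sketch (not part of the GAIN materials; the names, scale encoding, and sample ratings are illustrative):

```python
# Hypothetical sketch of the certification rule: an interviewer is
# certified when every core area is rated "Sufficient" or better on
# the 4-point scale. Encodings below are assumptions for illustration.

SCALE = {"Problems": 1, "Minor Problems": 2, "Sufficient": 3, "Excellent": 4}
CORE_AREAS = ("Documentation", "Instructions", "Items", "Engagement")

def is_certifiable(ratings: dict) -> bool:
    """Return True if every core area is rated 'Sufficient' or better."""
    return all(SCALE[ratings[area]] >= SCALE["Sufficient"] for area in CORE_AREAS)

# Example review: one area below threshold, so no certification yet.
review = {"Documentation": "Excellent", "Instructions": "Sufficient",
          "Items": "Minor Problems", "Engagement": "Sufficient"}
print(is_certifiable(review))  # False: Items needs another monitored assessment
```

The all-areas-must-pass rule means a single weak area (here, Items) blocks certification regardless of strength elsewhere, which matches the poster's description.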

  15. We Use a Two-Tiered QA Model • “Train the Trainer” – A certified QA reviewer oversees the certification process of a research or clinical site trainer who is in charge of assessment training. • “Trainer trains research staff” - Once certified in administration and the provision of QA feedback, the trainer oversees the certification process and ongoing quality assurance monitoring of the staff.

  16. Current Status • Over 400 users have been trained to administer the GAIN. • Close to 200 research and clinical staff are certified in GAIN administration. • Close to 50 research and clinical staff are certified to train their own staff and provide QA feedback. • About 15 QA reviewers currently review tapes. • In the first quarter of 2004, our certification program reviewed an average of 70 GAIN tapes per month.

  17. Next Steps • Efforts are currently underway to analyze the effects of the QA protocol on the quality of data. • We hypothesize the protocol will positively impact validity by producing: fewer inconsistencies across items, greater internal consistency on scales, less missing data, and shorter interview durations.
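One of the hypothesized outcomes, greater internal consistency on scales, is conventionally quantified with Cronbach's alpha. A minimal sketch with made-up item responses (this is not the authors' analysis code, and the data are purely illustrative):

```python
# Cronbach's alpha: alpha = (k / (k - 1)) * (1 - sum(item variances) / total variance),
# where k is the number of items on the scale. Higher alpha means the
# items hang together more consistently.

def cronbach_alpha(items):
    """items: list of equal-length lists, one list of responses per scale item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Toy data: 3 items, 5 respondents, responses on a 1-5 scale.
items = [[1, 2, 3, 4, 5],
         [2, 2, 3, 4, 5],
         [1, 3, 3, 4, 4]]
print(round(cronbach_alpha(items), 2))  # high alpha (~0.96): items are consistent
```

Comparing alpha on scales administered before and after QA certification would be one way to test the hypothesis above.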

  18. Further Information & Acknowledgement • For further information contact: Ms. Michelle White, Chestnut Health Systems, 720 W. Chestnut St., Bloomington, IL 61701 (mwhite@chestnut.org). This poster is at www.chestnut.org/LI/Posters. • The development of the GAIN QA model was supported by the Center for Substance Abuse Treatment (CSAT) through the Cannabis Youth Treatment study [5 UR4 TI11321].
