
The Dual Tasks of Interviewers

This study explores the roles and qualities of interviewers in the survey process, from recruiting respondents to administering interviews. It examines the impact of interviewer performance on survey error and the relationship between interviewer experience and data quality.


Presentation Transcript


  1. The Dual Tasks of Interviewers Ting Yan, Colm O'Muircheartaigh, Jenny Kelly, Pat Cagney, Rebecca Jessoe (NORC at the University of Chicago); Kenneth Rasinski (University of Chicago); Gary Euler (Centers for Disease Control and Prevention)

  2. What do interviewers do? • Recruiting potential respondents • Introducing survey to potential respondents • Gaining cooperation • Screening for eligible respondents • Administering interviews • Reading questions • Recording answers • Probing • Providing definitions

  3. Desired qualities of interviewers • When recruiting respondents • Adaptive and flexible (Converse & Schuman, 1974) • Tailoring (Groves & McGonagle, 2001; Houtkoop-Steenstra & van den Bergh, 2002; Maynard & Schaeffer, 2002) • Maintaining interaction (Groves & McGonagle, 2001) • Interviewers who developed their own approach had lower refusal and higher cooperation rates than those who followed a standard script • When administering interviews • Technician-like (Converse & Schuman, 1974) • Standardized interviewing (Fowler & Mangione, 1990) • Conflicting?

  4. How do interviewers affect survey error? • Recruiting respondents • Nonresponse error • If interviewers consistently attract respondents with certain characteristics • Administering interviews • Measurement error • Interviewer bias • Interviewer variance • If interviewers consistently influence responses in a certain way

  5. Research questions • Is there a relationship between interviewers’ performance at recruiting respondents and administering interviews? • Are interviewers who are good at recruiting respondents also good at collecting data of good quality? • How does interviewer experience mediate this relationship, if the relationship exists?

  6. Data • National Immunization Survey (NIS) • Nationwide, list-assisted random-digit-dialing (RDD) survey conducted by NORC for the Centers for Disease Control and Prevention • Monitors the vaccination rates of children between the ages of 19 and 35 months • 2007 Q3 data • 712 interviewers worked • 499,490 telephone numbers dialed • 4,438 interviews obtained

  7. Which interviewers were included in the analysis? • Interviewers who had completed interview(s) on first contact • 295 interviewers • 3,114 completes

  8. Measures of recruitment task • (First contact) Refusal rate = # refusals / # first-contact cases • (First contact) Completion rate = # completes / # first-contact cases • (First contact) Eligibility rate = # eligibles / # first-contact cases • Denominator: first-contact cases • Virgin (fresh) cases, or cases that were dialed by autodialers only • They haven't been touched by a human interviewer before being sent to the current interviewer • Refusal conversion rate = # converted refusals / # refusals
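The four rates above are simple ratios over an interviewer's first-contact caseload. A minimal sketch of how they could be computed, assuming a hypothetical per-interviewer tally of outcomes (the field names are illustrative, not from the NIS data files):

```python
# Hypothetical per-interviewer tally of first-contact case outcomes;
# field names are placeholders, not the actual NIS variable names.
from dataclasses import dataclass

@dataclass
class InterviewerTally:
    first_contact_cases: int   # virgin/autodialer-only cases sent to this interviewer
    refusals: int              # refusals among first-contact cases
    completes: int             # completed interviews among first-contact cases
    eligibles: int             # eligible respondents among first-contact cases
    converted_refusals: int    # refusals later converted to completes

def recruitment_rates(t: InterviewerTally) -> dict:
    """Compute the four first-contact rates defined on the slide."""
    fc = t.first_contact_cases
    return {
        "refusal_rate": t.refusals / fc,
        "completion_rate": t.completes / fc,
        "eligibility_rate": t.eligibles / fc,
        "refusal_conversion_rate": t.converted_refusals / t.refusals if t.refusals else 0.0,
    }

# Example with made-up numbers:
print(recruitment_rates(InterviewerTally(200, 40, 25, 60, 8)))
```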

  9. Measures of administration task • Interviewer effect (ρint) • Adherence to standardized interviewing (monitoring data) • Item nonresponse • Interview time (cost)

  10. Good openers vs. Bad openers • Good openers: interviewers with 3 out of 4 recruitment rates above the across-interviewer medians
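One way to implement this median split, assuming per-interviewer rates in the shape produced by the sketch above. The slide does not spell out the coding of each rate, so reverse-coding the refusal rate (lower is favorable) is an assumption here, consistent with the conclusions slide:

```python
# Hedged sketch of the "good opener" median-split classification:
# an interviewer is a good opener when at least 3 of the 4 recruitment
# rates fall on the favorable side of the across-interviewer median.
import statistics

def classify_openers(rates_by_interviewer: dict[str, dict]) -> dict[str, str]:
    keys = ["refusal_rate", "completion_rate", "eligibility_rate",
            "refusal_conversion_rate"]
    medians = {k: statistics.median(r[k] for r in rates_by_interviewer.values())
               for k in keys}
    labels = {}
    for ivr, r in rates_by_interviewer.items():
        favorable = sum(
            # assumption: lower refusal rate is favorable; higher is favorable
            # for the other three rates
            (r[k] < medians[k]) if k == "refusal_rate" else (r[k] > medians[k])
            for k in keys
        )
        labels[ivr] = "good opener" if favorable >= 3 else "bad opener"
    return labels
```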

  11. Good openers vs. Bad Openers (II) • When experience is introduced • Median split on # of days worked at NORC

  12. Good openers vs. Bad Openers (III) • When experience is introduced • Median split on # of days worked at NORC

  13. ρint • ρint: intra-interviewer correlation • Deff_int = 1 + ρint × (m − 1) • Hierarchical linear models • Respondent data as level-1 data • Interviewer data as level-2 data • Unconditional model with no explanatory variables at either level • ρint = between-interviewer variance / total variance
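The slides don't show the estimation code; below is a hedged sketch of this unconditional two-level model using statsmodels' mixed-effects API, with respondents nested in interviewers. The column names `response` and `interviewer` are placeholders, not variables from the NIS files:

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_rho_int(df: pd.DataFrame) -> float:
    # Intercept-only model with a random intercept per interviewer
    # (the "unconditional model" on the slide).
    model = smf.mixedlm("response ~ 1", data=df, groups=df["interviewer"])
    fit = model.fit()
    between = float(fit.cov_re.iloc[0, 0])  # between-interviewer variance
    within = float(fit.scale)               # residual (within-interviewer) variance
    return between / (between + within)     # rho_int = between / total

def design_effect(rho_int: float, m: float) -> float:
    # Deff_int = 1 + rho_int * (m - 1), m = average workload per interviewer
    return 1 + rho_int * (m - 1)
```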

  14. ρint (II)

  15. ρint (III)

  16. ρint (IV)

  17. Monitoring scores • Monitoring items • Read questionnaire verbatim • Probe without biasing or leading/Probing for Don’t Knows • Reads scales as directed etc. • Scores • 1=Error • 2=No Error • 3=Outstanding • Item-level monitor score for each interviewer • Overall summary score for each interviewer
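A minimal sketch of how the item-level and overall summary scores could be aggregated from monitoring sessions, assuming hypothetical records of (interviewer, item, score) with the 1/2/3 coding above:

```python
from collections import defaultdict

def monitoring_scores(records: list[tuple[str, str, int]]):
    """Aggregate (interviewer, item, score) records, scores coded
    1 = Error, 2 = No Error, 3 = Outstanding."""
    by_item = defaultdict(list)   # (interviewer, item) -> scores
    overall = defaultdict(list)   # interviewer -> all scores
    for interviewer, item, score in records:
        by_item[(interviewer, item)].append(score)
        overall[interviewer].append(score)
    # Mean score per interviewer per item, and overall summary per interviewer
    item_level = {k: sum(v) / len(v) for k, v in by_item.items()}
    summary = {k: sum(v) / len(v) for k, v in overall.items()}
    return item_level, summary
```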

  18. Monitoring scores (II) • Good openers on average have higher mean scores than bad openers, but the difference is significant for only one monitoring item (Read Questionnaire Verbatim) • Monitored items: • Read questionnaire verbatim • Verifies dates and confirms spelling • Properly obtains all provider information • Use job aids as needed • Reads scales as directed • Records open-end response verbatim • Probes without biasing or leading/Probes Don't Knows

  19. Summary scores across monitoring items

  20. Item Nonresponse • A set of 24 questions everyone had to answer • Item nonresponse rate = # of times the respondent didn't provide an answer / 24
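As a concrete illustration of the formula, assuming missing answers are recorded as None:

```python
def item_nonresponse_rate(answers: list) -> float:
    """Item nonresponse rate for one respondent over the fixed set of
    24 questions everyone had to answer; None marks a missing answer."""
    assert len(answers) == 24
    missing = sum(1 for a in answers if a is None)
    return missing / 24
```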

  21. Average Interview Duration (cost) • Time spent on completing an interview • The longer the interview, the more costly it is

  22. Provider consent rate (chart comparing the two opener groups: 79.8% vs. 74.9%)

  23. Conclusions and Discussion • Good-opener interviewers • More completes • Higher refusal conversion, completion, and eligibility rates • Lower refusal rate • Good-opener interviewers • Higher intra-interviewer correlation • But greater adherence to standardized interviewing (higher monitoring scores) • More missing data • Are good openers also good at collecting data of good quality? • No one clear answer • Depends on which measure of the interviewing tasks is used • Experience didn't matter much

  24. Limitations and Next Steps • Only used various rates to measure interviewers' performance at the recruitment stage • Demographic composition by interviewer status • Nonresponse error by interviewer status • Only used proxy measures of data quality • Direct measures of measurement bias • Interviewer and respondent characteristics not considered • Bringing interviewer and respondent characteristics into the picture • Examining the effect of matched interviewer and respondent characteristics

  25. Thank You! Yan-ting@norc.org
