
Interviewer Effects in Face-to-face Surveys: A Function of Measurement Error or Nonresponse?

This study examines the potential causes of interviewer effects in face-to-face surveys, including measurement error and nonresponse. The analysis is based on data from the PASS survey conducted by the IAB in Germany. The results suggest that both measurement error and nonresponse contribute to interviewer effects.


Presentation Transcript


  1. Interviewer Effects in Face-to-face Surveys: A Function of Measurement Error or Nonresponse? Brady T. West, Michigan Program in Survey Methodology (MPSM); Frauke Kreuter, Joint Program in Survey Methodology (JPSM), Institute for Employment Research (IAB), Ludwig Maximilian University of Munich. ITSEW 2011

  2. Acknowledgements • Mark Trappmann, Director of the PASS survey at the IAB, for allowing us to access and analyze the PASS data • Ursula Jaenichen and Gerrit Müller from the IAB, for their tremendous help in accessing and navigating the various data sets necessary to conduct this research

  3. Interviewer Variance: The Problem • An undesirable product of the data collection process, given interpenetrated sample designs • Responses collected by the same interviewer are more similar than responses collected by different interviewers (ρint = ICC) • Estimates vary across interviewers, despite random assignment of subsamples • Leads to inflation of variance in survey estimates (like a design effect) and reduction of power
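The intra-interviewer correlation named on the slide above is the usual variance-ratio ICC, where σ²_b is the between-interviewer variance and σ²_w the within-interviewer (residual) variance:

```latex
\rho_{\text{int}} = \frac{\sigma^2_b}{\sigma^2_b + \sigma^2_w}
```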

  4. Interviewer Variance: The Problem • ρint generally ranges between 0.01 and 0.10 in practice, with 80% of values less than 0.02 (Groves and Magilavy, 1986) • ρint can be larger than within-cluster correlation in area probability samples (Schnell and Kreuter, 2005; Davis and Scott, 1995) • Example: 30 cases per interviewer and ρint = 0.01 → an expected increase of 13.6% in the standard error of an estimated mean
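The 13.6% figure on the slide follows from the standard interviewer design effect, deff = 1 + (m − 1)ρint; a minimal check in Python (the function name is ours, not part of the original slides):

```python
import math

def se_inflation(m, rho_int):
    """Percent increase in the standard error of a mean when each
    interviewer completes m interviews with intra-interviewer
    correlation rho_int (design effect deff = 1 + (m - 1) * rho_int)."""
    deff = 1 + (m - 1) * rho_int
    return (math.sqrt(deff) - 1) * 100

# 30 cases per interviewer, rho_int = 0.01 -> about a 13.6% larger SE
print(round(se_inflation(30, 0.01), 1))  # 13.6
```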

  5. Interviewer Variance: The Literature • ρint may arise from correlated deviations of responses from true values within the same interviewer (see Groves, 2004; Biemer and Trewin, 1997; Hansen et al., 1960) • Possible sources of non-zero ρint values: • Complex questions (Collins and Butcher, 1982) • Complex interviewer / respondent interactions (Mangione et al., 1992) • Geographic effects in non-interpenetrated designs (O’Muircheartaigh and Campanelli, 1998)

  6. Interviewer Variance: The Literature • Published studies of interviewer variance frequently report surprisingly high values of ρint (0.03 – 0.12) for factual (simple) survey items: • Age (Kish, 1962, face-to-face) • Ethnicity (Fellegi, 1964, face-to-face) • Receipt of a daily newspaper (Freeman and Butler, 1976, face-to-face) • Present employment (Groves and Kahn, 1979, telephone) • Type of school last attended (Collins and Butcher, 1982, face-to-face) • Many other examples…

  7. Research Question • The literature also provides substantial evidence of interviewer variance in response rates • ρint is typically estimated from respondent data only, ignoring the possible contribution of nonresponse error variance across interviewers to intra-interviewer correlations in responses • West and Olson (2010) suggest that total interviewer variance for some variables may be driven by nonresponse error variance, based on a limited analysis of telephone survey data • Do we see the same phenomenon in a face-to-face survey, with no case switching?

  8. The PASS Survey • Conducted by the IAB in Nuremberg, Germany • Panel survey using both CAPI and CATI • Collects labor market, household income, and unemployment benefit receipt data from a nationally representative sample of Germans • Covers 12,000+ households annually • Two annual samples: recipients of unemployment benefits (UB) and the general population • Samples refreshed annually; 4 waves completed

  9. PASS Data • Pooled data set of cases attempted using CAPI from the first two waves of PASS • CAPI interviewers work one PSU only • Focus on UB sample cases only (due to presence of administrative records) • All cases from Wave 1; only refreshment cases from Wave 2 • Focus on one-person households only (Kreuter et al., 2010)

  10. PASS Data • CAPI cases: HHs without a listed phone number (15%), HHs not willing to participate by phone, and HHs switched from CATI to CAPI due to non-contact • Variables: welfare benefit receipt status, age, gender, foreign status • n = 2,574 households assigned to 158 interviewers (16.3 per interviewer; only 4.7 respondents per interviewer → power issues) • NOTE: responses can only be linked to administrative records given consent, which prevented calculation of individual response deviations in this study

  11. Analytic Approach • Replicate three-step analysis of West and Olson (2010) • Estimate (in order): • Variance in true values among interviewers (sampling) • Variance in true values for respondents (sampling + NRE) • Variance in reports for respondents (sampling + NRE + ME) • Critical assumption: sampling errors, nonresponse errors, and measurement errors are independent (valid?) • Variance components estimated by REML or pseudo-ML using xtmixed and xtmelogit in Stata • Variance components tested against zero using asymptotic likelihood ratio tests, with test statistics referred to mixtures of chi-square distributions
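The slides estimate these variance components with Stata's xtmixed and xtmelogit. As a rough, self-contained illustration of the interviewer-ICC idea in Python, the sketch below applies the one-way ANOVA estimator to simulated balanced data; the interviewer counts (158 interviewers, 16 cases each) mirror the PASS numbers, but the variance values are invented for illustration and the estimator is not the REML procedure used in the study:

```python
import random

def anova_icc(groups):
    """One-way ANOVA estimator of the intraclass correlation for a
    balanced design: rho = (MSB - MSW) / (MSB + (m - 1) * MSW)."""
    k = len(groups)             # number of interviewers
    m = len(groups[0])          # cases per interviewer
    grand = sum(sum(g) for g in groups) / (k * m)
    means = [sum(g) / m for g in groups]
    msb = m * sum((mu - grand) ** 2 for mu in means) / (k - 1)
    msw = sum((y - mu) ** 2
              for g, mu in zip(groups, means) for y in g) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

random.seed(2011)
# 158 interviewers, 16 cases each; interviewer-effect SD 0.3, residual SD 1
# -> true rho = 0.09 / 1.09, roughly 0.083 (values invented for illustration)
data = []
for _ in range(158):
    b = random.gauss(0, 0.3)
    data.append([b + random.gauss(0, 1) for _ in range(16)])
print(round(anova_icc(data), 3))
```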

  12. Results: Response Rates Estimated ICC of response indicators:

  13. Estimated Variance Components

  14. Interpretation of Results • Age: Evidence of initial sampling variance, increase in interviewer variance due to nonresponse error variance, slight increase due to measurement error variance • Gender: No evidence of interviewer variance • Foreign Status: Substantial sampling variance (expected), differential nonresponse error attenuates variance (interviewers German speakers?), slight increase due to ME variance • Benefit Receipt: No sampling variance, no added variance from nonresponse error, substantial measurement error variance

  15. Conclusions • First study to consider contributions of nonresponse error variance and measurement error variance to total interviewer variance in a face-to-face setting • Relative contributions are item-specific: evidence of nonresponse error variance in age is consistent with findings of West and Olson (2010), but response error variance still matters • Sampling variance due to within-PSU clustering is not the only source of interviewer variance in FTF settings (Schnell and Kreuter, 2005)

  16. ITSEW Discussion Points • Assumptions about independence of error sources need further study (consent issue in PASS prevented this) • Replications in other large FTF surveys, with record data available, are needed to fully understand the problem • Interviewer training and monitoring implications • Current estimators of ρint are inadequate; estimators that recognize nonresponse error variance (in contacts and refusals) need development • Multilevel modeling idea: 1) impute MEs for NR; 2) examine interviewer variance in intercepts and response indicator effects when predicting reported values for both R and NR; 3) estimate covariance of random intercepts (IWER response errors) and response indicator effects (IWER nonresponse errors) [Point 1 above]

  17. Thank You! • Please direct additional questions, comments, or paper requests to: Brady (bwest@umich.edu) and/or Frauke (fkreuter@survey.umd.edu)
