
Challenges in the Future of Scientific Surveys


Presentation Transcript


  1. Challenges in the Future of Scientific Surveys Robert M. Groves University of Michigan Joint Program in Survey Methodology

  2. Overview • Measurement of nonresponse error • CAI application design and testing • Budgeting under uncertainty • Cost calibration of quality increases

  3. Measurement of Nonresponse Errors • Nonresponse rates in household surveys are increasing • In “high quality” surveys the cost of a refusal is higher than the cost of an interview • The scientific survey world can’t tell clients whether the money spent to reduce refusals produces higher quality
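
Why this is hard to tell clients can be seen in the standard deterministic expression for nonresponse bias: the bias of the respondent mean equals the nonresponse rate times the difference between respondent and nonrespondent means, and that second factor is rarely observed. A minimal sketch in Python, with all sample sizes and means hypothetical:

```python
# Textbook deterministic nonresponse-bias identity:
#   bias(ybar_r) = (m / n) * (ybar_r - ybar_m)
# n = full sample size, m = number of nonrespondents,
# ybar_r = respondent mean, ybar_m = (unobserved) nonrespondent mean.
# All numbers below are hypothetical illustrations.

def nonresponse_bias(n: int, m: int, ybar_r: float, ybar_m: float) -> float:
    """Bias of the respondent mean relative to the full-sample mean."""
    return (m / n) * (ybar_r - ybar_m)

# Case 1: low response rate, but nonrespondents resemble respondents.
print(nonresponse_bias(n=1000, m=640, ybar_r=0.52, ybar_m=0.50))  # 0.0128

# Case 2: much higher response rate, but nonrespondents differ sharply.
print(nonresponse_bias(n=1000, m=300, ybar_r=0.52, ybar_m=0.30))  # 0.066
```

Money that raises the response rate shrinks the first factor, but if the converted cases resemble the existing respondents, the realized bias barely moves. The studies summarized next illustrate both sides of that identity.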

  4. Nonresponse Does Not Produce Error in Surveys

  5. Summary of Findings: Nonresponse Does Not Affect Survey Statistics -1 • Pew Center nonresponse study • Rigorous vs. 5-day identical surveys • Rigorous (61%), 5-day (36%) • Across 91 statistics measuring population percentages, the average deviation is 2 percentage points. Keeter et al. (2000), Public Opinion Quarterly, 64:125-168.

  6. Summary of Findings: Nonresponse Does Not Affect Survey Statistics -2 • Survey of Consumer Sentiment, 1979-96 • Response rate relatively constant (68-71%) • Mean number of calls to interview doubled (3.9 to 7.9) • What happens if one omits the interviews obtained through extra effort? Almost nothing. Curtin et al. (2000), Public Opinion Quarterly, 64:413-428.

  7. Summary of Findings: Nonresponse Does Not Affect Survey Statistics -3 • Voter News Service exit polls, 1992, 1996 • Sample precinct response rates vary greatly, mostly between 45-75% • Compare respondent vote to reported precinct vote • No relationship between response rate and deviation. Merkle and Edelman (2002), Survey Nonresponse, Wiley, 243-257.

  8. Nonresponse Does Produce Error in Surveys

  9. Summary of Findings: Nonresponse Does Affect Survey Statistics -1 • Mail questionnaire incentive experiment • Small effects on response rates for those with high community involvement; large effects for those with low community involvement • Effects on the survey statistic: • percentage with community involvement, 70%, in the survey using incentives • percentage with community involvement, 80%, in the survey without incentives

  10. Summary of Findings: Nonresponse Does Affect Survey Statistics -2 • Survey on household composition • Statistic of interest: percentage of single-person households • Compare a one-call survey to a five-call survey • Estimated percentage of single-person households: • one-call survey, 6% • five-call survey, 27%
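
The mechanism behind this contrast is easy to reproduce in a toy simulation: if single-person households are simply harder to find at home, a one-call design underrepresents them badly, and additional calls recover part of the gap. The population share and per-call contact probabilities below are invented for illustration; they reproduce the direction of the slide's numbers, not the exact figures:

```python
import random

random.seed(1)

# Hypothetical population: 30% single-person households.
# Single-person households are home less often, so their per-call
# contact probability is lower -- the mechanism behind the contrast.
P_SINGLE = 0.30
P_CONTACT = {True: 0.10, False: 0.55}  # per-call contact probability
MAX_CALLS = 5
N = 100_000

def first_contact_call(single: bool) -> int | None:
    """Call number on which the household is first contacted, if any."""
    for call in range(1, MAX_CALLS + 1):
        if random.random() < P_CONTACT[single]:
            return call
    return None  # never contacted within MAX_CALLS

sample = [random.random() < P_SINGLE for _ in range(N)]
contacted = [(s, first_contact_call(s)) for s in sample]

# Estimate the single-person share, cutting off effort at 1 or 5 calls.
for cutoff in (1, 5):
    resp = [s for s, c in contacted if c is not None and c <= cutoff]
    print(f"{cutoff}-call survey: {100 * sum(resp) / len(resp):.0f}% single-person")
```

The one-call estimate sits far below the truth because the statistic of interest is itself a cause of contactability, the condition formalized on slide 38 below.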

  11. Summary of Findings: Nonresponse Does Affect Survey Statistics -3 • Experiment measuring effects of topic interest on cooperation • Higher cooperation among groups interested in the stated topic of the survey • Example: survey on “education and schools” • teacher response rate, 56% • RDD response rate, 32%

  12. Needed Developments • Demonstrated linkage between design features and error • features can be fixed (incentives, questionnaire burden) • features can be variable (effort to contact, extent of refusal conversion) • Likely culprits for “bad nonresponse” • topic interest, sponsorship

  13. Overview • Measurement of nonresponse error • CAI application design and testing • Budgeting under uncertainty • Cost calibration of quality increases

  14. CAI Application Design and Testing • With modern CAI we have lost all constraints on the complexity of measurements we can introduce • However, we have not developed indicators of the cost/risk/benefits of complexity • Also, our testing procedures were designed for a simpler world

  15. [Figure: control-flow graph of a relatively low-risk application design, nodes 0-15, with few branch points; adapted from McCabe and Watson, 1994]

  16. [Figure: control-flow graph of a high-risk complex application, nodes 0-19, with many more branch points and crossing edges]

  17. Effects of Complexity • Specification writer risks errors • Programmers risk misinterpretation of specifications • Testing all paths through the application soon exceeds time available • in many real survey applications there is an infinite number of paths

  18. Stealing Tools from Software Developers • Complexity measures practically used to inform authors of impact on costs/risks • Asking questionnaire authors to estimate relative frequency of paths to direct testing attention (model-based testing) • Automated testing
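
One concrete tool to steal is McCabe's cyclomatic complexity, V(G) = E - N + 2P, which counts the linearly independent paths through a flow graph; the McCabe and Watson (1994) work cited earlier applies it to testing. A CAI instrument's item-routing graph can be scored the same way. A minimal sketch, using a made-up routing graph rather than either graph from the figures above:

```python
# McCabe's cyclomatic complexity for a directed flow graph:
#   V(G) = E - N + 2P
# E = number of edges, N = number of nodes,
# P = connected components (1 for a single instrument).

def cyclomatic_complexity(edges: list[tuple[int, int]], components: int = 1) -> int:
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * components

# Hypothetical routing graph: item 0 branches to 1/2/3, all of which
# converge on item 4 (e.g., a skip pattern around sensitive items).
simple = [(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4)]
print(cyclomatic_complexity(simple))   # 6 - 5 + 2 = 3

# Two added branch points raise the score, and with it the testing burden.
complex_ = simple + [(1, 5), (5, 4), (2, 6), (6, 4)]
print(cyclomatic_complexity(complex_))  # 10 - 7 + 2 = 5
```

Reporting this number back to the questionnaire author as each section is specified is one practical way to make the cost and risk of added complexity visible before programming begins.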

  19. [Figure: item-routing graph annotated with author-estimated branch probabilities. The first branch point splits .70 / .20 / .10; the .20 branch splits again into .60 / .30 / .10; all remaining edges carry probability 1.0]

  20. [Figure: the .70 branch highlighted; path probability P = .70]

  21. [Figure: the .20 × .60 path highlighted; P = .12]

  22. [Figure: the .10 branch highlighted; P = .10]

  23. [Figure: the .20 × .30 path highlighted; P = .06]

  24. [Figure: the .20 × .10 path highlighted; P = .02]

  25. Five Different Paths • One has a .70 probability of occurring • The rarest has a .02 probability of occurring • Clearly, we should give more testing priority to the first
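
The prioritization in slides 19 through 25 can be automated: enumerate every path through the routing graph, multiply the author-estimated branch probabilities along each, and spend testing effort in descending order of path probability. The sketch below uses the branch probabilities from the figures; the graph structure and node names are my reconstruction:

```python
# Branch probabilities as estimated by the questionnaire authors
# (model-based testing). Structure reconstructed from the figures:
# START splits .70/.20/.10; the .20 branch splits again .60/.30/.10.
GRAPH = {
    "START": [("A", 0.70), ("B", 0.20), ("C", 0.10)],
    "B": [("B1", 0.60), ("B2", 0.30), ("B3", 0.10)],
    # A, C, B1, B2, B3 are terminal: they reach END with probability 1.0.
}

def enumerate_paths(node="START", prob=1.0, path=()):
    """Yield (path, probability) for every route to a terminal node."""
    path = path + (node,)
    if node not in GRAPH:
        yield path, prob
        return
    for child, p in GRAPH[node]:
        yield from enumerate_paths(child, prob * p, path)

# Test the most probable paths first.
for path, p in sorted(enumerate_paths(), key=lambda t: -t[1]):
    print(f"P={p:.2f}  " + " -> ".join(path))
```

Running this prints the five paths in priority order (P = .70, .12, .10, .06, .02), reproducing the ordering argued for on the slide.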

  26. Overview • Measurement of nonresponse error • CAI application design and testing • Budgeting under uncertainty • Cost calibration of quality increases

  27. Budgeting Under Uncertainty • Traditional survey design fixes a variety of features prior to the initiation of data collection • sampling frame, sample design, questionnaire, mode, interviewer behavior • When the essential survey conditions can be predicted with some certainty, this suffices

  28. Budgeting Under Uncertainty • Modern survey researchers increasingly tackle measurement problems with large uncertainties • eligibility in a household population of infants 9-35 months old • tracking rate and costs for respondents from high school survey 30 years earlier • CAI applications with unknown frequencies for alternative paths

  29. We’ve Done This for Years • Examples: • centralized phone facilities with screening studies (size of sample to release to obtain targets) • training replacements for interviewer attrition in the middle of surveys • But we too often do it in crisis mode

  30. Responsive Design - Definition • Frameworks of optional frame, sample, measurement, mode, or interviewer features pre-specified before initiation of the data collection • Selection of final design features based on quantitative measures of cost-error tradeoffs of options

  31. Missing Ingredients • Increasingly, cost/effort is becoming a design uncertainty • how many calls required to reach a sample case? • what level of cooperation is likely with a new questionnaire? • how productive will new interviewers be?

  32. Ingredients of the Solution • Process data with real time access • real time outcome of calls • real time access to interview data • real time access to interviewer-level effort data (hours, nonsalary costs) • Cost models reflecting different costs for different effort • Models for proxy indicators of error

  33. Ingredients of the Solution • Sequential release for measurement of sample replicates • Design decisions made prior to each release, gradually fixing all design features
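
Taken together, slides 32 and 33 describe a control loop: release one replicate, examine real-time process, cost, and proxy-error data, then fix the next design decision before the next release. A schematic sketch; the data fields, thresholds, and design levers below are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ReplicateResult:
    """Hypothetical real-time process data for one sample replicate."""
    response_rate: float
    cost_per_interview: float
    proxy_bias_indicator: float  # e.g., a balance measure on frame variables

def next_design(result: ReplicateResult) -> dict:
    """Toy decision rule applied before releasing the next replicate.
    All thresholds are illustrative, not recommendations."""
    design = {"incentive": 0, "max_calls": 6}
    if result.response_rate < 0.50:
        design["incentive"] = 20          # add a respondent incentive
    if result.proxy_bias_indicator > 0.10:
        design["max_calls"] = 10          # chase hard-to-reach cases harder
    if result.cost_per_interview > 150:
        design["max_calls"] = min(design["max_calls"], 4)  # cap effort
    return design

# One pass through the loop with made-up replicate data.
r1 = ReplicateResult(response_rate=0.44, cost_per_interview=120,
                     proxy_bias_indicator=0.14)
print(next_design(r1))  # {'incentive': 20, 'max_calls': 10}
```

The substance lies in the cost models and proxy-error indicators feeding the rule; the loop itself is simple once the process data arrive in real time.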

  34. Overview • Measurement of nonresponse error • CAI application design and testing • Budgeting under uncertainty • Cost calibration of quality increases

  35. Quality Often Costs Money • Reduction of nonresponse error often requires increasing response rates • Reduction of CAI errors requires more testing time • Interviewer training costs money

  36. How Much Money Should We Spend? • Is every dollar spent equally productive at reducing error? • What is the relationship between costs and error?
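
One way to pose the question quantitatively is marginal error reduction per dollar: if total error falls with spending but at a decreasing rate, the first dollars buy far more quality than the last, so equal dollars are not equally productive. A toy illustration with an invented error curve (the functional form and parameters are assumptions, not estimates):

```python
import math

# Invented cost-error curve: RMSE falls with spending, but with
# diminishing returns toward an irreducible floor.
FLOOR, SCALE, DECAY = 1.0, 4.0, 1 / 50_000  # illustrative parameters

def rmse(dollars: float) -> float:
    return FLOOR + SCALE * math.exp(-DECAY * dollars)

for budget in (0, 50_000, 100_000, 150_000):
    marginal = rmse(budget) - rmse(budget + 10_000)
    print(f"${budget:>7,}: RMSE={rmse(budget):.2f}, "
          f"next $10k buys {marginal:.3f} RMSE reduction")
```

On this curve the same $10,000 buys a 0.73 reduction at the start of the budget and under 0.04 at the end, which is exactly the kind of calibration the refusal-conversion data on the next slide would support empirically.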

  37. [Figure: Proportion of initial refusals converted, by number of refusal conversion calls]

  38. Key Conditions for Noncontact Error • Limited set of statistics that are functions of causes of contactability • These form the set subject to noncontact error • [Diagram: Cause 1, Cause 2, ..., Cause k jointly determine Contactability]

  39. [Figure: Estimated percent employed, living alone, and in units with access impediments among respondents, cumulating the sample by call number of first contact]

  40. Needed Developments • Specification of effort/cost related to quality property • e.g., interviewer effort on refusal conversion • Real time process data access to control data collection decisions

  41. Summary • All these challenges share an attribute • Cost models must become part of our design process • Designs responsive to intermediate results must be mounted • This demands new skills of data collection staffs
