Does Interviewing Method Matter? Comparing Consumer Satisfaction Results across Internet and RDD Telephone Samples

Presentation Transcript

1. Does Interviewing Method Matter? Comparing Consumer Satisfaction Results across Internet and RDD Telephone Samples
Forrest V. Morgeson III, Ph.D., Director of Research, American Customer Satisfaction Index
Barbara Everitt Bryant, Ph.D., Research Scientist-Emerita, University of Michigan
Reg Baker, President, Market Strategies International
Presented at the 66th Annual American Association for Public Opinion Research Conference

2. Discussion Agenda
• Overview: Research Questions and Findings
• The American Customer Satisfaction Index (ACSI)
• Extant Research on Interviewing Method Differences
• Data and Analysis Methods
• Results and Findings
• Conclusions and Implications

3. Research Questions and Findings
• Research Questions: Does interviewing method matter? Do the results of a multi-industry consumer satisfaction study differ significantly between a sample collected through RDD/probability sampling with telephone interviewing and one collected through online-panel/nonprobability sampling with Internet interviewing?
• Research Design: We use a multi-method sample of consumer satisfaction data, structural equation modeling, and two tests of difference to assess the significance of differences in survey responses across samples drawn and interviewed under the two methods
• Findings: While some differences are observed, interviewing method only marginally affects the means of the survey responses and the parameter estimates from the structural models. Overall, the findings suggest that mixed-method interviewing is feasible and reliable for consumer-oriented survey research projects

4. Discussion Agenda
• Overview: Research Questions and Findings
• The American Customer Satisfaction Index (ACSI)
• Extant Research on Interviewing Method Differences
• Data and Analysis Methods
• Results and Findings
• Conclusions and Implications

5. Overview of the ACSI
• Established in 1994, the ACSI is the only standardized measure of customer satisfaction in the U.S. economy, covering approximately 225 companies in 45 industries and 10 economic sectors; the companies measured account for roughly one-third of U.S. GDP
• More than 100 departments and agencies of the U.S. federal government are also measured annually, along with local and state government measures
• Results from all surveys are published monthly in various media and on the ACSI website, www.theacsi.org

6. Structure of the ACSI
[Diagram: the national ACSI is built up from economic sectors — Utilities; Information; Accommodation & Food Services; E-Business; Public Administration/Government; Finance & Insurance; Retail Trade; E-Commerce; Transportation & Warehousing; Health Care & Social Assistance; Manufacturing/Durable Goods; Manufacturing/Nondurable Goods — each comprising measured industries (e.g., Energy Utilities, Newspapers, Broadcasting, Software, Fixed Line and Wireless Telephone Service, Hotels, Full- and Limited-Service Restaurants, Social Networking, Banks, Health Insurance, Airlines, U.S. Postal Service, Hospitals, Personal Computers, Automobiles & Light Vehicles, Food Manufacturing, Apparel, Supermarkets, Gasoline Stations, Department & Discount Stores, Specialty Retail Stores, Retail Brokerage, and others).]

7. The ACSI Model and Methodology
• In the ACSI methodology, customer satisfaction is embedded in a system of relationships and analyzed as part of a structural equation model. The model produces two critical pieces of data useful to researchers and firms/agencies:
• The model provides mean scores (on a 0-100 scale) for each measured composite or latent variable
• The model provides parameter estimates (path coefficients) indicating what most strongly influences satisfaction, and in turn how satisfaction influences future consumer behaviors
[Diagram: the ACSI structural model links Customer Expectations, Perceived Quality, and Perceived Value to Customer Satisfaction, which in turn drives Customer Complaints and Customer Loyalty; each latent variable is measured by survey items such as overall quality, customization, reliability, price given quality, quality given price, comparison with ideal, confirmation/disconfirmation of expectations, complaint behavior, repurchase likelihood, and price tolerance (reservation price).]
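The 0-100 latent-variable scores described above can be sketched as a rescaling of raw survey items. The sketch below is illustrative only: it assumes items measured on 1-10 scales and an unweighted average across items, whereas the actual ACSI scoring uses model-derived weights not shown in this presentation.

```python
# Hypothetical sketch of mapping item-level ratings onto a 0-100 score.
# Assumptions (not from the slides): 1-10 item scales, unweighted averaging.

def rescale_to_100(item_mean, low=1.0, high=10.0):
    """Map a mean rating on a [low, high] scale onto 0-100."""
    return (item_mean - low) / (high - low) * 100.0

def latent_score(item_means, low=1.0, high=10.0):
    """Unweighted average of rescaled items, as a stand-in for a latent score."""
    rescaled = [rescale_to_100(m, low, high) for m in item_means]
    return sum(rescaled) / len(rescaled)

# e.g., three satisfaction items with means 8.2, 7.9, and 8.5 on 1-10 scales:
score = latent_score([8.2, 7.9, 8.5])  # -> 80.0 on the 0-100 scale
```

In the real model the item weights come from the structural equation estimation, so two variables with identical item means can receive slightly different composite scores.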

8. ACSI Data Collection
• Each year, across all private sector, public sector, and custom research projects, the ACSI collects approximately 125,000 consumer interviews
• From 1994 through 2009, nearly all of these data (with a few exceptions for e-commerce companies) were collected over the telephone using random-digit-dial (RDD) probability sampling and CATI
• Beginning in 2010, following pilot testing that produced promising results, the ACSI moved to a multi-method interviewing approach, with roughly half the data for any measured company or government agency collected using RDD probability sampling and CATI, and the other half collected from a nonprobability panel of double opt-in respondents interviewed online

9. Discussion Agenda
• Overview: Research Questions and Findings
• The American Customer Satisfaction Index (ACSI)
• Extant Research on Interviewing Method Differences
• Data and Analysis Methods
• Results and Findings
• Conclusions and Implications

10. Extant Research
• While a handful of studies comparing results for samples interviewed online with samples interviewed over the telephone exist,* these studies have focused almost exclusively on political opinions, voter preference, and the like
• There remains very little research into what differences (if any) are likely to be observed across these two interviewing methods for consumer-oriented data, where a significant portion of data collection and survey research is focused

*Chang, L., and J.A. Krosnick (2009). "National Surveys via RDD Telephone Interviewing Versus the Internet: Comparing Sample Representativeness and Response Quality," Public Opinion Quarterly, 73(4), 641-678.
Fricker, S., M. Galesic, R. Tourangeau, and T. Yan (2005). "An Experimental Comparison of Web and Telephone Surveys," Public Opinion Quarterly, 69(3), 370-392.
Vannieuwenhuyze, J., G. Loosveldt, and G. Molenberghs (2010). "A Method for Evaluating Mode Effects in Mixed-Mode Surveys," Public Opinion Quarterly, 74(5), 1027-1045.

11. Findings from the AAPOR Online Task Force
• Findings from the AAPOR Online Task Force* suggest that there is no theoretical basis for assuming that samples drawn from nonprobability online panels are representative of a larger population, and that results may therefore differ from those of an RDD probability sample interviewed over the telephone
• However, this research also concludes there may be instances in which online panels are useful and reliable, and we conduct a series of empirical tests to see whether customer satisfaction data (ACSI) is such a case

*Baker, R., et al. (2010). "Research Synthesis: AAPOR Report on Online Panels," Public Opinion Quarterly, 74(4), 711-781.

12. Discussion Agenda
• Overview: Research Questions and Findings
• The American Customer Satisfaction Index (ACSI)
• Extant Research on Interviewing Method Differences
• Data and Analysis Methods
• Results and Findings
• Conclusions and Implications

13. Research Questions
• From the perspective of the ACSI project and its methodology, two questions regarding multi-method interviewing are most relevant:
• Do mean scores differ significantly between a sample interviewed online and a sample interviewed using RDD/CATI?
• Do model parameter estimates differ significantly between a sample interviewed online and a sample interviewed using RDD/CATI?

14. Data
• To answer our research questions, we use a sample of approximately 9,000 interviews
• Roughly half of these cases were collected via Internet interviewing, from a sample drawn from a large online panel (the Research Now panel) and balanced to Census demographics; the other half were collected using RDD and CATI, allowing us to test the similarities and differences produced by the two interviewing methods
• The ACSI model (shown earlier) was estimated independently for each industry and each interviewing method, producing the distinct mean scores and parameter estimates (path coefficients) used in these comparisons

15. Data
• The data represent consumer responses to questions measuring satisfaction (and the other modeled variables) with companies and industries in six NAICS sectors (for more information on the companies included in the sample, see Appendix A):
• Apparel manufacturing (Manufacturing/nondurable goods)
• Personal computers (Manufacturing/durable goods)
• Fast food restaurants (Food services)
• Insurance (Finance and insurance)
• Supermarkets (Retail)
• Wireless phone service (Information)

16. Tests of Difference
• To test for significant differences in mean scores across the two interviewing methods for each ACSI variable in each industry in the sample, independent-sample t-tests were used
• To test for significant differences in the structural model's parameter estimates for each industry in the sample, chi-square difference tests were used: parameters were constrained to equality across the two groups, with a significant chi-square statistic indicating significant differences in parameter estimates
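The two tests described above can be sketched as follows. This is an illustrative sketch with made-up data, not the authors' code: it uses SciPy's independent-sample t-test for the mean comparisons, and computes a chi-square difference test from hypothetical fit statistics for the constrained (parameters equal across modes) and unconstrained models.

```python
# Illustrative sketch of the two tests of difference (data are fabricated).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
phone = rng.normal(75, 15, size=500)     # hypothetical telephone-sample scores
internet = rng.normal(77, 15, size=500)  # hypothetical Internet-sample scores

# 1) Independent-sample t-test on mean scores across the two modes
#    (Welch's variant, not assuming equal variances):
t_stat, p_val = stats.ttest_ind(phone, internet, equal_var=False)

# 2) Chi-square difference test on nested SEM fits: constraining parameters
#    to equality across groups adds degrees of freedom; a significant rise
#    in chi-square indicates the parameters differ across modes.
def chi_square_difference(chi2_constrained, df_constrained, chi2_free, df_free):
    delta_chi2 = chi2_constrained - chi2_free
    delta_df = df_constrained - df_free
    return delta_chi2, delta_df, stats.chi2.sf(delta_chi2, delta_df)

# Hypothetical fit statistics for the constrained vs. unconstrained models:
d_chi2, d_df, p = chi_square_difference(152.4, 60, 140.1, 55)
```

With these made-up fit statistics the chi-square difference is 12.3 on 5 degrees of freedom, which falls in the p < .05 region, i.e. the equality constraint would be rejected.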

17. Discussion Agenda
• Overview: Research Questions and Findings
• The American Customer Satisfaction Index (ACSI)
• Extant Research on Interviewing Method Differences
• Data and Analysis Methods
• Results and Findings
• Conclusions and Implications

18. Results and Findings
• Across all of the tests (comparisons of 36 sets of mean scores across the two interviewing methods, and 54 sets of model parameter estimates), some significant differences were observed
• In total, 36% of the mean scores (13 of 36) compared across the two modes exhibited significant differences. Scores skewed higher on the Internet, with 9 of the 13 significant differences reflecting "better" ratings among Internet respondents (i.e., higher ratings, fewer complaints)
• In addition, 39% of the model parameter estimates (21 of 54) from the structural models compared across the two methods exhibited significant differences
• (Two industry examples follow; all test results are provided in Appendix A)
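The percentages above follow directly from the reported counts and can be verified with a line of arithmetic:

```python
# Check of the proportions reported on this slide (counts from the slide):
sig_means, total_means = 13, 36
sig_params, total_params = 21, 54

pct_means = round(100 * sig_means / total_means)    # 13/36 -> 36%
pct_params = round(100 * sig_params / total_params) # 21/54 -> 39%
```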

19. Example 1: Supermarket Industry Results
• For this industry, one of the six variable mean scores tested was significantly different across the two samples, and two of the nine parameter estimates were significantly different

*All variables scaled 0-100, worse to better rating; the "Sig. Diff." column reports significant differences between the telephone and Internet interview samples; * = p<.05; ** = p<.01; *** = p<.001.

20. Example 2: Wireless Industry Results
• For this industry, four of the variable mean scores exhibited significant differences, with scores skewing higher (and the complaint rate lower) among Internet respondents, and two of the parameter estimates exhibited significant differences

21. Results and Findings
• The above are "hard tests" of multi-method interviewing. Because many projects (including the ACSI) have not traded telephone-only for Internet-only interviewing, a "fairer" test is to compare the telephone-only results to the mixed-method, mixed-sample results
• For these tests, the results are more promising: of the 36 sets of mean scores compared, only 11% (4 of 36) exhibited significant differences
• (Two industry examples follow; full results for these tests are included in Appendix A)

22. Example 3: Mixed-Sample vs. Telephone-Only

23. Discussion Agenda
• Overview: Research Questions and Findings
• The American Customer Satisfaction Index (ACSI)
• Extant Research on Interviewing Method Differences
• Data and Analysis Methods
• Results and Findings
• Conclusions and Implications

24. Conclusions
• While some differences in both mean scores and model parameter estimates appear when comparing telephone-only to Internet-only interviewing, significant differences occur in only a minority of comparisons in both cases
• The results are even more promising when comparing mean scores for telephone-only and mixed-method interviewing; only a small fraction of those comparisons are significantly different

25. Implications and Future Research
• These tests provide evidence for the feasibility and reliability of mixed-method sampling for consumer-oriented survey research projects
• For projects working with this kind of data, both mean scores and model estimates appear to be relatively stable across interviewing methods
• However, because we examine only consumer-oriented data, those working with other types of data should perform similar tests to examine the reliability of mixed-method interviewing, as results may vary
• Research expanding the types of data tested should help market researchers determine the feasibility of multi-method interviewing for particular client engagements

  26. Appendix A: Supplemental Results and Information

  27. Interview Data by Industry/Company

28. Apparel and PC Industries Results

29. Fast Food and Insurance Industries Results

30. Mixed-Method vs. Telephone-Only Means Tests (1)

31. Mixed-Method vs. Telephone-Only Means Tests (2)