
New Approaches to Surveying Organizations AEA January 5 th 2010

Presentation Transcript


  1. New Approaches to Surveying Organizations AEA January 5th 2010 Nick Bloom (Stanford) & John Van Reenen (LSE)

  2. Why survey organizations? • Large variation across firms & countries in productivity, which differences in organizations and management may help to explain • But little data on management and organizations • So we have been developing survey techniques to collect more data

  3. Outline • Reducing Respondent (e.g. manager) bias • Reducing Interviewer bias • Obtaining interviews • Evaluating survey responses

  4. Blind surveys - where the responder does not know the purpose of the survey - can reduce bias • Psychology literature finds respondents like to give answers they believe interviewers want to hear • For example, Schwartz (1999) asked subjects their views on newspapers stories on mass murderers • One group was given paper headed “Institute of personality research”. Other group was given paper entitled “Institute of social research” • First group focused on personality and the second group on social factors behind mass murders

  5. Open questions & objective scales reduce bias Open/Closed questions: • Open questions have no fixed set of responses: “Tell me how you monitor production” • Closed questions have a fixed set of responses: “Do you monitor production daily?” [Yes/No] – potentially leading Objective/Subjective scales: • Objective scales are absolute: “How frequently is production monitored?” [Daily, Weekly, Monthly, Yearly] • Subjective scales are relative: “How frequently is production monitored?” [Very frequently, frequently, average, rarely, very rarely]
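The objective-scale idea above can be sketched in code: an objective grid anchors each response to an absolute category rather than a relative judgement. This is a minimal illustrative sketch; the category names and scores below are hypothetical, not the actual Bloom–Van Reenen scoring grid.

```python
# Hypothetical objective scoring grid: absolute monitoring-frequency
# categories mapped to scores. The values are illustrative only.
OBJECTIVE_GRID = {
    "continuously": 5,
    "daily": 4,
    "weekly": 3,
    "monthly": 2,
    "yearly or never": 1,
}

def score_monitoring(frequency: str) -> int:
    """Return the objective score for an absolute frequency category.

    Because the categories are absolute (daily, weekly, ...), two
    calibrated interviewers hearing the same answer should map it to
    the same score, unlike a subjective "very frequently" scale.
    """
    return OBJECTIVE_GRID[frequency.lower()]
```

A subjective scale would instead ask the interviewer to judge "very frequently" vs. "frequently", which different interviewers can interpret differently.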

  6. Asking for examples also helps reduce bias • In some areas theory may differ from practice, and asking for examples can help to elicit the truth • One example is practices around firing underperformers: • Asking “If your firm had a persistently poor performer what would you do?” may not induce an accurate response • We found following up with a request for an example was informative: “Can you give me a recent example”

  7. Example of a blind survey, with open questions, objective scoring and requests for examples In Bloom and Van Reenen (2007) we used MBA students to survey managers in manufacturing firms about management practices. Blind: MBAs told managers they were students wanting to interview them about manufacturing practices. No mention of scoring. We also did not tell our MBA interviewers in advance about the firm’s performance, to further reduce bias (so double blind)

  8. Open questions: • Questions to score production monitoring (dimension 5/18): • What kind of KPIs would you use for performance tracking? • Who gets to see these and how frequently? • If I were to walk through your factory what KPIs could I see? Objective scoring: [scoring grid shown on slide]

  9. Finally, it is impossible to eliminate all bias, but you can collect interview characteristics to control for it • Different seniorities and tenures may respond in different ways (we find senior managers with low tenure are most positive) • → So collect detailed information on respondents • Kahneman et al. (2004) report that happiness varies systematically over the day (we find responses are more positive earlier in the day and later in the week) • → So collect information on interview time and length • Different interviewers may score tougher or softer (we find significant interviewer fixed effects) • → So collect information on interviewers

  10. Outline • Reducing Respondent Bias • Reducing Interviewer Bias • Obtaining interviews • Evaluating survey responses

  11. Calibrate early, calibrate often • The downside of open questions with objective scales is the scoring grid requires more interpretation • So hiring a good team (e.g. MBA types) and calibrating the whole team to the same scoring system is essential • To do this we: • Run all interviews from a common location (video) • Have initial one week training and calibration • Have ongoing weekly re-calibration sessions • Rotate interviewers across countries

  12. Outline • Reducing Respondent Bias • Reducing Interviewer Bias • Obtaining interviews • Evaluating survey responses

  13. Team organization and incentives matter • Initially we paid interviewers flat-rate salaries, with direct encouragement to run more interviews • But scheduling and running interviews is hard work, so we moved to piece-rate pay with (flat-rate paid) supervisors • We discovered (much like Lazear 2000) that piece-rate pay led to massive improvements in interview productivity • We also find that team bonuses are helpful in addition to piece-rate pay – they generate good internal monitoring by peers

  14. Correct interview positioning is essential • Most switchboards in the US refuse to connect surveys or market research • So when approaching firms we simply asked the switchboard “Can I speak to the plant manager please?”, and if pushed said, “This is Nick Bloom, I am calling from the LSE in London” • If pushed further, we stated we were doing a “piece of work on manufacturing…” – never using the word “survey” or “research”

  15. Different countries require different approaches • After being connected to managers in Europe and Asia we would use our endorsement letters, which took time to obtain but have been very valuable • In the US managers don’t care about Government endorsement but are generally willing to talk anyway; getting through to them is the hard part • So our MBAs discovered an effective trick… • In all cases during the interview we stated (and upheld): • Confidential – no information released • No financials – only management questions

  16. Outline • Reducing Respondent Bias • Reducing Interviewer Bias • Obtaining interviews • Evaluating survey responses

  17. Important to have a data validation procedure • Using double-blind interviews, with open questions, objective scoring, calibration and bias controls is good practice • But we found some people were still sceptical • So we used two types of survey validation procedure

  18. Internal validation: re-rater evaluation • Re-interviewed 222 firms with different interviewers & managers • Firm-level correlation of 0.627 between 1st- and 2nd-interview firm average scores (over the 18 questions) [Scatter plot: 1st vs. 2nd interview firm average scores]

  19. Internal validation: different survey instruments • We have also been using different management survey instruments on the same firms as another validation test • Grous (2009) and Bloom et al. (2009) ran field visits and found strong consistency of field and double-blind telephone data • EBRD (2009) surveyed 418 of our Bloom and Van Reenen (2007) firms using another survey instrument, with a good match

  20. External validation: compare to other data, for example firm performance data • Regression specification: Performance measure = α × (organization or management data) + β₁ ln(labor) + β₂ ln(capital) + β₃ ln(materials) + other controls • Note – not a causal estimation, but only an association to check survey data is correlated with external data
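The external-validation regression above can be sketched with a small ordinary least squares estimator. This is a stdlib-only illustration of the specification, not the authors' actual estimation (which would use an econometrics package with clustered standard errors); the data layout and column order are assumptions.

```python
# Minimal OLS sketch for the validation regression: regress a performance
# measure on the management score plus log inputs and controls.
# Solves the normal equations (X'X) b = X'y by Gaussian elimination.

def ols(y, X):
    """Return OLS coefficients for outcomes y and design matrix X
    (a list of rows; the caller includes a constant column)."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Forward elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution.
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (xty[r] - sum(xtx[r][c] * b[c]
                             for c in range(r + 1, k))) / xtx[r][r]
    return b
```

Each row of X would be [1, management score, ln(labor), ln(capital), ln(materials), controls...] with y = ln(performance measure); as the slide stresses, a significant management coefficient here is an association check on the survey data, not a causal estimate.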

  21. External validation: compare to other data, for example firm performance data [Regression table shown on slide] Includes controls for country, with results robust to controls for industry, year, firm size, firm age, skills etc. Significance levels: *** 1%, ** 5%, * 10%.

  22. My favourite quotes: The traditional British Chat-Up [Male manager speaking to an Australian female interviewer] Production Manager: “Your accent is really cute and I love the way you talk. Do you fancy meeting up near the factory?” Interviewer “Sorry, but I’m washing my hair every night for the next month….”

  23. MY FAVOURITE QUOTES: The traditional Indian Chat-Up Production Manager: “Are you a Brahmin?” Interviewer: “Yes, why do you ask?” Production manager: “And are you married?” Interviewer: “No?” Production manager: “Excellent, excellent, my son is looking for a bride and I think you could be perfect. I must contact your parents to discuss this”

  24. MY FAVOURITE QUOTES: The difficulties of defining ownership in Europe Production Manager: “We’re owned by the Mafia” Interviewer: “I think that’s the “Other” category……..although I guess I could put you down as an “Italian multinational” ?” Americans on geography Interviewer: “How many production sites do you have abroad? Manager in Indiana, US: “Well…we have one in Texas…”

  25. MY FAVOURITE QUOTES: The bizarre Interviewer: “[long silence]……hello, hello….are you still there….hello” Production Manager: “…….I’m sorry, I just got distracted by a submarine surfacing in front of my window” The unbelievable [Male manager speaking to a female interviewer] Production Manager: “I would like you to call me “Daddy” when we talk” [End of interview…]

  26. Conclusion • Using blind surveys, with open questions, objective scoring grids and calibrated interviews, can reduce survey bias • Given the lack of data on firm organization and management, the returns to collecting this type of data are very high • Future research: • We are currently working with Lucia Foster and Ron Jarmin (Census Bureau) and Erik Brynjolfsson (MIT) to add an organizations and management module to the ASM • Aim to create a high-quality, large-sample, public panel database on organizational and management practices

  27. FINALLY, OUR LARGE SAMPLE ILLUSTRATES THE RISKS OF RELYING ONLY ON CASE STUDIES Case studies provide invaluable firm-level detail. But the self-selection of these firms, combined with variation in management practices, means they can be misleading (e.g. Enron was a case-study favourite). So case studies and large data sets combined are best [Scatter plot: log of sales/employee ($’000) vs. management score]
