
Empirical steps towards a research design in multi-attribute non-market valuation








  1. Empirical steps towards a research design in multi-attribute non-market valuation Camp Resource XVII June 24-25, 2010

  2. Plan of the talk • Focus on SP data and survey development • Experimental designs evolved from orthogonality • Choice of elicitation method (incentive compatibility) • Gumbel heteroskedasticity vs utility heteroskedasticity • Generalised logit • Decision heuristics as systematic components of heterogeneity (attribute attendance) • Maybe also: order effects in repeated choices, context effects, subjective scenario conjectures

  3. Basic generic question What should I think about when starting a new SP survey/study for multi-attribute non-market valuation? What do recent research results suggest? The 1st issue is whether the new SP data are needed to enrich existing RP data, or whether they are to serve stand-alone. If data enrichment, the typical concern is to supplement the existing RP data and break away from multicollinearity (this requires special experimental designs “pivoted” on existing data).
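The “pivot” idea can be sketched in a few lines: SP attribute levels are generated as shifts around each respondent's observed RP baseline. The percentage shifts and baseline value below are hypothetical, purely for illustration.

```python
# Sketch: "pivoting" SP attribute levels around an observed RP baseline.
# The shift percentages are hypothetical; real studies choose them per attribute.

def pivot_levels(rp_base, deltas=(-0.25, 0.0, 0.25)):
    """Return SP attribute levels as percentage shifts around the RP value."""
    return [round(rp_base * (1 + d), 2) for d in deltas]

# e.g. a respondent's current (RP) travel cost of 10.0 yields three SP levels
levels = pivot_levels(10.0)
print(levels)  # [7.5, 10.0, 12.5]
```

Because each respondent gets levels anchored on their own RP value, the SP data vary where the RP data are collinear, which is the point of the enrichment design.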

  4. Assume the study is only SP: typical steps? • Define the research question • Draft the survey, deciding on: • provision rules, • policy deliverables, • preference elicitation mode, • incentive compatibility, • administration mode (face-to-face, CAPI, web-based, phone-supported, paper and pencil, etc.), • info to deliver, • feedback on the respondent’s understanding of this info, • info on attribute processing, scenario conjectures, etc. • Run focus groups • Get starting designs (orthogonal on the differences)
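A quick sanity check on a starting design is that its attribute columns are mutually orthogonal. A minimal sketch, using a 2^3 full factorial in effects coding (the design itself is a textbook example, not one from the talk):

```python
# Sketch: checking orthogonality of a candidate starting design.
# A 2^3 full factorial in effects coding (-1/+1) has mutually
# orthogonal attribute columns (zero dot product between any two columns).
from itertools import product

design = list(product([-1, 1], repeat=3))  # 8 runs, 3 two-level attributes

def column_dot(design, i, j):
    """Dot product between attribute columns i and j of the design."""
    return sum(row[i] * row[j] for row in design)

# all off-diagonal dot products are 0 -> columns are orthogonal
print([column_dot(design, i, j) for i in range(3) for j in range(3) if i < j])
# [0, 0, 0]
```

The same check applied to the differences between alternatives is the idea behind “orthogonal on the differences” designs.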

  5. Typical steps (cont’d) • Simulate data, estimate the specifications of interest and welfare estimates for scenarios of interest (here you find out whether the data you collect give you back the specification you need, e.g. the animal welfare study) • Run pilot(s) • Amend the draft survey • Obtain priors to optimize sample-size use • Priors on parameter estimates (beta hats) • Priors on the specific functional form (choice probabilities)
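The simulate-and-recover step can be sketched as follows: generate pairwise choices from a known logit taste parameter, then re-estimate and check that the true value comes back. The true beta, sample size, and one-attribute specification are arbitrary illustrations.

```python
# Sketch: simulate pairwise SP choices from a known taste parameter,
# then recover it by maximum likelihood (Newton-Raphson on the logit).
import math
import random

random.seed(42)
beta_true = 1.5   # hypothetical "true" taste parameter
n = 5000          # hypothetical sample of choice tasks

# attribute difference between alternatives A and B in each task
xd = [random.gauss(0, 1) for _ in range(n)]
# choose A with logit probability 1 / (1 + exp(-beta*xd))
y = [1.0 if random.random() < 1 / (1 + math.exp(-beta_true * x)) else 0.0
     for x in xd]

# Newton-Raphson for the one-parameter logit log-likelihood
b = 0.0
for _ in range(25):
    grad = hess = 0.0
    for x, yi in zip(xd, y):
        p = 1 / (1 + math.exp(-b * x))
        grad += (yi - p) * x
        hess -= p * (1 - p) * x * x
    b -= grad / hess

print(round(b, 2))  # close to beta_true = 1.5
```

If the estimated beta lands near the value you simulated with, the design and specification can recover each other; if not, the design needs amending before fieldwork.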

  6. ExpDes: one-shot vs sequential • One-shot • Use priors and select a design criterion (or a combination of design criteria and their respective weights) • D-efficiency, S-efficiency, C-efficiency, minimum entropy, minimum complexity, etc. • Sequential • Determine the size of the sampling waves • Decide which Bayesian rule to adopt to embed sequential learning • Each previous phase “informs” the design of all following stages • The same sample size can give about 1/3 more accuracy
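A minimal sketch of the D-efficiency criterion for one-shot designs, assuming a simple two-attribute MNL; the prior betas and attribute levels are hypothetical. Lower D-error (the determinant of the information matrix raised to -1/K) means a more informative design.

```python
# Sketch: D-error of a two-attribute MNL design under prior betas.
# Hypothetical priors and levels; lower D-error = more efficient design.
import math

def d_error(tasks, beta):
    """D-error det(I)^(-1/K) for K=2.
    tasks: list of choice tasks, each a list of [x0, x1] attribute rows."""
    info = [[0.0, 0.0], [0.0, 0.0]]
    for X in tasks:
        v = [beta[0] * r[0] + beta[1] * r[1] for r in X]
        m = max(v)
        ev = [math.exp(vi - m) for vi in v]
        s = sum(ev)
        p = [e / s for e in ev]
        xbar = [sum(pj * r[k] for pj, r in zip(p, X)) for k in range(2)]
        # MNL information: sum_j p_j x_j x_j' - xbar xbar' per task
        for a in range(2):
            for c in range(2):
                info[a][c] += (sum(pj * r[a] * r[c] for pj, r in zip(p, X))
                               - xbar[a] * xbar[c])
    det = info[0][0] * info[1][1] - info[0][1] * info[1][0]
    return det ** (-1.0 / 2.0)

beta = [0.5, -0.2]  # hypothetical priors
good = [[[1.0, 2.0], [-1.0, 4.0]], [[1.0, 4.0], [-1.0, 2.0]]]  # trade-offs
bad  = [[[1.0, 2.0], [0.9, 2.1]], [[1.0, 2.0], [0.9, 2.2]]]    # near-identical
print(d_error(good, beta) < d_error(bad, beta))  # True
```

The same function evaluated over draws from a prior distribution of beta gives the Bayesian D-error used for sequential updating.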

  7. Elicitation methods • Pair-wise • Pair-wise + status quo • Full ranking • Rating • Best-worst
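For the full-ranking method, a standard likelihood is the rank-ordered (“exploded”) logit, which treats a ranking as a sequence of MNL choices from shrinking choice sets. A sketch with hypothetical utilities:

```python
# Sketch: rank-ordered ("exploded") logit log-probability of a full ranking.
import math

def ranking_logprob(v, ranking):
    """Log-probability of a full ranking: a sequence of MNL choices
    from shrinking sets. v: utilities; ranking: alt indices, best first."""
    remaining = list(range(len(v)))
    logp = 0.0
    for alt in ranking:
        denom = sum(math.exp(v[j]) for j in remaining)
        logp += v[alt] - math.log(denom)
        remaining.remove(alt)
    return logp

# hypothetical utilities 1.0 > 0.5 > 0.0: the "natural" ranking [0, 1, 2]
# is more likely than its reverse
print(ranking_logprob([1.0, 0.5, 0.0], [0, 1, 2]) >
      ranking_logprob([1.0, 0.5, 0.0], [2, 1, 0]))  # True
```

Summing exp(logprob) over all six permutations gives 1, which is a useful unit test when coding the likelihood.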

  8. Inter-agency Joint decision-making and group interactions: dyadic (e.g. couples, Beharry et al.) or triadic (couples + child, Marcucci et al.). Consensus-seeking with interaction (e.g. connected business solutions, location decisions, etc.)

  9. Types of heteroskedastic effects • Gumbel error heteroskedasticity • Common form: sigma = exp(z’theta), so that sigma > 0 • z = vector of choice-task-related effects (e.g. measures of choice complexity; Swait and Adamowicz 2001, DeShazo and Fermo 2002) • Utility heteroskedasticity • Var(U_1,2) ≠ Var(U_sq) (Scarpa et al. 2005, Hess and Rose 2009) • Common form: additional error component • Both forms are likely to co-exist, and SQ choice-tasks often induce the latter
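The Gumbel-heteroskedasticity form sigma = exp(z'theta) can be illustrated directly: the scale multiplies utilities before the logit transform, so choice tasks with lower scale (e.g. more complex ones) produce noisier, less deterministic choices. The utilities and z'theta values below are hypothetical.

```python
# Sketch: MNL choice probabilities with task-varying Gumbel scale,
# scale = exp(z'theta), which is positive by construction.
import math

def scaled_logit_prob(v, z_theta):
    """Choice probabilities when the scale depends on task covariates z."""
    scale = math.exp(z_theta)      # > 0 for any real z'theta
    ev = [math.exp(scale * vi) for vi in v]
    s = sum(ev)
    return [e / s for e in ev]

simple   = scaled_logit_prob([1.0, 0.0],  1.0)  # low complexity: high scale
complex_ = scaled_logit_prob([1.0, 0.0], -1.0)  # high complexity: low scale
# higher scale (less error variance) makes the better option more likely
print(simple[0] > complex_[0])  # True
```

Utility heteroskedasticity, by contrast, enters as an additional error component on specific alternatives (typically the status quo) rather than as a rescaling of the whole utility difference.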

  10. Scale and utility effects in logit • Taste heterogeneity: MXL • Scale heterogeneity: S-MNL • Generalised logit: G-MNL

  11. G-MNL to WTP-space (“utility space” vs “WTP space”): set gamma = 0 and phi = 1
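The utility-space vs WTP-space equivalence can be checked numerically: with V = -alpha*cost + beta*x, the WTP-space form V = alpha*(wtp*x - cost) with wtp = beta/alpha implies identical choice probabilities. The parameter values and attribute levels below are hypothetical.

```python
# Sketch: utility-space and WTP-space parameterisations give the same
# logit choice probabilities; only the parameter of interest changes.
import math

def logit_p(v1, v2):
    """Probability of choosing alternative 1 over 2 under binary logit."""
    return 1.0 / (1.0 + math.exp(-(v1 - v2)))

alpha, beta = 0.10, 0.50       # hypothetical cost and attribute coefficients
wtp = beta / alpha             # implied marginal WTP = 5.0 per unit of x

def v_util(cost, x):           # utility-space: V = -alpha*cost + beta*x
    return -alpha * cost + beta * x

def v_wtp(cost, x):            # WTP-space: V = alpha*(wtp*x - cost)
    return alpha * (wtp * x - cost)

p1 = logit_p(v_util(10, 3), v_util(8, 2))
p2 = logit_p(v_wtp(10, 3), v_wtp(8, 2))
print(abs(p1 - p2) < 1e-12)  # True
```

The practical advantage of WTP-space is that heterogeneity is specified directly on wtp, avoiding the heavy-tailed ratio of two random coefficients that arises in utility space.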

  12. (Plots: WTP distributions derived from –beta_n/phi_n versus directly estimated WTP_n)

  13. Attribute processing: non-attendance • Either ask people which attributes they attended to • yes/no attendance to each attribute • or a Likert scale • Or infer it from the observed sequence of choices • zero-constrained latent classes • variable selection model (spike model in CV) • Recent evidence: attendance may not be the same across the whole sequence of choices (choice-task non-attendance)
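The zero-constrained latent-class treatment of non-attendance can be sketched as a two-class mixture in which one class has an attribute's coefficient fixed at zero. The class share, attribute levels, and betas below are hypothetical.

```python
# Sketch: choice probability under a zero-constrained latent-class model
# for attribute non-attendance (hypothetical two-attribute example).
import math

def mnl_prob(v, chosen):
    """MNL probability of the chosen alternative given utilities v."""
    ev = [math.exp(x) for x in v]
    return ev[chosen] / sum(ev)

def nonattendance_prob(X, chosen, beta, share):
    """Class 1 attends to all attributes; class 2 ignores attribute 0
    (its coefficient is constrained to zero). share = P(class 1)."""
    v_full = [beta[0] * x0 + beta[1] * x1 for x0, x1 in X]
    v_zero = [0.0 * x0 + beta[1] * x1 for x0, x1 in X]  # attr 0 ignored
    return (share * mnl_prob(v_full, chosen)
            + (1 - share) * mnl_prob(v_zero, chosen))

# hypothetical task: alt 0 is better on attribute 0, alt 1 on attribute 1
X = [(2.0, 0.0), (0.0, 1.0)]
beta = (0.8, 1.0)
p_attend = nonattendance_prob(X, 0, beta, share=1.0)  # everyone attends
p_mixed  = nonattendance_prob(X, 0, beta, share=0.5)  # half ignore attr 0
print(p_attend > p_mixed)  # True
```

Estimating the class share alongside the betas lets the data reveal how much apparent “taste” for an attribute is actually non-attendance, which is how the large WTP revisions on the next slide arise.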

  14. From a WTP estimate of Euro 790/year down to Euro 20/year for the preservation of mountain landscape!!!

  15. Variable selection (spike model equivalent)

  16. Conclusions Multi-attribute research design is becoming increasingly complex. Many issues need to be addressed simultaneously before one can retrieve “unconfounded” utility structures. Respondent interaction and feedback are increasingly seen as validating and informative.

  17. Order effects WTP estimates depend on where in the sequence of choices you estimate them. Learning effects? Strategic response effects? Heterogeneity?

  18. (Plots: order effects in marginal WTPs for odor and color, with confidence intervals)

  19. Context effects in choice-tasks with 3 alternatives From Rooderkerk, van Heerde and Bijmolt, 2009

  20. Scenario adjustments Proposed scenarios may be misconstrued or subjectively adjusted (e.g. risk latency in micro-risks, Cameron and DeShazo). Subjective perceptions of status-quo attribute levels versus objectively measured ones (Marsh et al.)

  21. Conclusions Multi-attribute research design is becoming increasingly complex. Many issues need to be addressed simultaneously before one can retrieve “unconfounded” utility structures. Respondent interaction and feedback are increasingly seen as validating and informative.
