
Modelling Cardinal Utilities from Ordinal Utility data: An exploratory analysis

Presentation Transcript


  1. Modelling Cardinal Utilities from Ordinal Utility data: An exploratory analysis Peter Gilks, Chris McCabe, John Brazier, Aki Tsuchiya, Josh Solomon

  2. Background • Limitations of conventional methods of utility elicitation • Early work suggesting ordinal data can predict cardinal preferences • The SF-6D and HUI2 surveys used ranking exercises as a warm-up prior to the SG valuation tasks • Opportunity to test and develop the methods proposed by Solomon

  3. SF-6D valuation data set • Ranked seven SF-6D states (including pits and full health) and death • SG valuations of five states against full health and pits, then chained using the valuation of pits against full health and death (respondents were asked to confirm the ranking of pits against death) • 611 respondents sampled from the general population • 249 mean SG health state values ranging from 0.21 to 0.99; an average of 14 valuations per state

  4. HUI2 valuation data set • Ranked nine HUI2 states (including pits and full health) and death • SG valuations of eight states against full health and death (respondents were asked to confirm the ranking of each state against death) • 198 respondents sampled from the general population • 249 mean SG health state values ranging from -0.064 to 0.77; an average of 24 valuations per state

  5. Methods Aim: to model the predicted health state valuations using the ordinal preference data. 1) Statistical model: conditional logistic regression (the McFadden choice model) based on random utility theory (previous attempts used Thurstone's Comparative Judgement Model). 2) Value function: relating the health state descriptive system to the utility value.

  6. The Statistical Model • Respondent i has a latent utility value for state j, Uij • The respondent will choose state j as best from a group of states k = 1, …, n if Uij > Uik for all k ≠ j • Utility function: Uij = μj + εij, where μj represents the underlying tastes of the population and εij represents the peculiar choice of the individual • The odds of choosing state j over state k are exp{μj – μk} • So we want to model the dependent variable μ against the dimensions of the descriptive systems, SF-6D and HUI2
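
To make the choice model concrete, here is a minimal Python sketch of the conditional logit choice probability and the pairwise odds; the state names and μ values are invented for illustration and are not estimates from the study.

```python
import numpy as np

# Hypothetical latent mean utilities mu_j for three states (illustrative only;
# these numbers are not estimates from the study).
mu = {"full_health": 1.2, "state_B": 0.4, "pits": -0.6}

def choice_probabilities(mu_dict):
    """McFadden conditional logit: P(j chosen as best from set K) = exp(mu_j) / sum_k exp(mu_k)."""
    exp_mu = {k: np.exp(v) for k, v in mu_dict.items()}
    total = sum(exp_mu.values())
    return {k: v / total for k, v in exp_mu.items()}

print(choice_probabilities(mu))

# The odds of choosing one state over another depend only on the difference in mu,
# i.e. exp{mu_B - mu_pits}:
print(np.exp(mu["state_B"] - mu["pits"]))
```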

  7. Assumption: independence of irrelevant alternatives • The model is based on the assumption that the ranking exercise is equivalent to the respondent making a series of individual choices from smaller and smaller groups of states. For example, to rank 10 health states: select the first preference from all 10 (rank 1); select the best from the remaining 9 (rank 2); select the best from the remaining 8 (rank 3); and so on • NB: this assumes that the choice over a given pair does not depend on the other alternatives available
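
A short sketch (again with hypothetical state names) of how a single ranking is exploded into this assumed series of choices:

```python
def explode_ranking(ranked_states):
    """Decompose one respondent's ranking (best first) into the series of implied
    choices assumed by the model: at each step the top remaining state is chosen
    from the set of states not yet ranked."""
    steps = []
    remaining = list(ranked_states)
    while len(remaining) > 1:
        steps.append({"chosen": remaining[0], "choice_set": list(remaining)})
        remaining = remaining[1:]
    return steps

# Hypothetical ranking of four alternatives, best to worst.
for step in explode_ranking(["full health", "mild state", "pits", "death"]):
    print(step)
```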

  8. Value function The expected value of each unobserved utility was assumed to be a linear function of the categorical ratings on the domains of each data set. The specifications are:
For HUI2: μ = β1S2 + β2S3 + β3S4 + β4M2 + β5M3 + β6M4 + β7M5 + β8E2 + β9E3 + β10E4 + β11E5 + β12C2 + β13C3 + β14C4 + β15SC2 + β16SC3 + β17SC4 + β18P2 + β19P3 + β20P4 + β21P5 + βdDeath
For SF6D: μ = β1PF2 + β2PF3 + β3PF4 + β4PF5 + β5PF6 + β6RL2 + β7RL3 + β8RL4 + β9SF2 + β10SF3 + β11SF4 + β12SF5 + β13P2 + β14P3 + β15P4 + β16P5 + β17P6 + β18MH2 + β19MH3 + β20MH4 + β21MH5 + β22V2 + β23V3 + β24V4 + β25V5 + βdDeath
• Note: there is no constant term, and there is a coefficient for death. This facilitates re-scaling the results onto the full health (1) to death (0) scale.
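
As an illustration of how such a specification can be laid out, the sketch below builds a dummy-variable design matrix for a cut-down, two-dimension version of the SF-6D model; the dimension labels, levels and example states are simplified placeholders rather than the full 25-term specification above.

```python
import pandas as pd

# Cut-down sketch with two SF-6D-style dimensions (PF with 6 levels, P with 6 levels)
# rather than the full specification. Level 1 is the reference category, so it gets
# no dummy; death is handled with its own indicator, as in the models on the slide.
states = pd.DataFrame({
    "state": ["full_health", "moderate", "pits", "death"],
    "PF":    [1, 3, 6, 1],     # levels on the death row are irrelevant; see the death flag
    "P":     [1, 4, 6, 1],
    "death": [0, 0, 0, 1],
})

def level_dummies(df, dim, n_levels):
    """One column per level above the reference, e.g. PF2..PF6."""
    return pd.DataFrame(
        {f"{dim}{lvl}": (df[dim] == lvl).astype(int) for lvl in range(2, n_levels + 1)}
    )

X = pd.concat([level_dummies(states, "PF", 6),
               level_dummies(states, "P", 6),
               states[["death"]]], axis=1)
print(X)  # no constant column: mu = X @ beta, so full health has mu = 0 by construction
```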

  9. Rescaling The scale of the latent variable μ is arbitrarily defined by the identifying assumptions in the model. 1) Normalise to the observed SG scale (originally proposed by Josh Solomon): multiply the coefficients by a ratio, βri = βi * (min. observed SG value) / (predicted PITS value). 2) Normalise to death: βri = βi / |βd|. This anchors death at zero and perfect health at 1. NB: states can still be valued as worse than death.
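
A small worked example of the second rescaling (normalising to death), using invented coefficient values:

```python
# Hypothetical coefficients on the latent scale (invented numbers, SF-6D-style labels).
beta = {"PF3": -0.10, "P4": -0.25, "MH5": -0.30, "death": -1.25}

# Normalise to death: divide every coefficient by |beta_d|, anchoring death at 0
# and perfect health at 1 on the rescaled value scale.
beta_rescaled = {k: v / abs(beta["death"]) for k, v in beta.items()}

# Predicted value of a hypothetical state at levels PF3, P4, MH5 (level 1 elsewhere):
# start from full health = 1 and add the rescaled decrements.
mu_state = beta_rescaled["PF3"] + beta_rescaled["P4"] + beta_rescaled["MH5"]
print(1 + mu_state)   # 1 - (0.10 + 0.25 + 0.30) / 1.25 = 0.48; values below zero remain possible
```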

  10. Model Assessment Methods • Main aim is to compare the predictive performance of the rank model and the original standard gamble model: • Check coefficients for sign and consistency. • Plot predictions against observed for rank model and SG model for both datasets. • Statistical tests of predictive performance. • Look for systematic patterns in the errors.
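
A sketch of the kind of comparison described here, computing the mean absolute error and the signed errors (predicted minus mean) for both models; the column names and numbers are hypothetical, not the study data.

```python
import numpy as np
import pandas as pd

# Hypothetical mean observed SG values and predictions from the two models;
# the column names and numbers are invented for illustration.
df = pd.DataFrame({
    "observed_mean_sg": [0.35, 0.52, 0.67, 0.81, 0.95],
    "pred_rank_model":  [0.41, 0.50, 0.63, 0.78, 0.90],
    "pred_sg_model":    [0.38, 0.54, 0.66, 0.80, 0.93],
})

for model in ["pred_rank_model", "pred_sg_model"]:
    error = df[model] - df["observed_mean_sg"]   # predicted minus mean, as in the plots
    mae = np.abs(error).mean()                   # mean absolute error
    print(f"{model}: MAE = {mae:.3f}, mean error = {error.mean():.3f}")
```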

  11. HUI2

  12. Mean values, predicted values and error (predicted minus mean) for the rank model including death and the SG model (OLS), HUI2. Plots shown for the SG model and the rank model: the smooth line is the mean health state values ranked by severity, the top line is the predictions, and the bottom line is the error.

  13. SF6D

  14. Mean values, predicted values and error (predicted minus mean) for the rank model including death and the SG model (6), SF6D. Plots shown for the rank model and the SG model: the smooth line is the means, the top line is the predictions, and the bottom line is the error. Both models under-predict large means and over-predict low means.

  15. Summary of Findings • Rank models are able to predict actual mean SG health state values nearly as well as the SG models, with only a modest increase in MAE • Evidence that the rank model produces less systematic error in the SF-6D data set, and improvements in consistency

  16. Issues – taking results at face value • Is the ranked model good enough? Could we start using it? • Given that ranking is a warm-up, results could be better if more care were taken over this part of the exercise • Ranked methods are probably cheaper • What evidence is there that ranking exercises impose a lower cognitive burden? Completion levels appear to be higher.

  17. Issues – harder questions • Is the selection process of the ranking task assumed by the model correct? • Why should the relationship between the latent utility value and the cardinal (in this case SG) values be linear? • What other functional forms might theory suggest? • Is the latent utility value similar to Dyer and Sarin’s ‘value function’, or something else? • Does rank data elicit preferences or simply how good or bad a health state is, and does it matter?

  18. Issues – the death question • Not a major problem here, because all mean health state values are above zero • The MVH EQ-5D data has been analysed in a similar way by Josh Solomon, but the ranking of death was very different from the implied ranking from the TTO: only state 33333 is ranked worse than death, compared with 16/43 states by TTO. A rank model normalised to death and full health does not predict TTO values worse than death very well

  19. Further work – more suggestions welcome • See how well SG data predicts ranking at the individual level • Consider interactions • Model different functional relationships between the latent variable and SG • Examine completion rates and the extent to which ranking will extend the vote to more vulnerable populations
