
Economic evaluation of health programmes


Presentation Transcript


  1. Economic evaluation of health programmes Department of Epidemiology, Biostatistics and Occupational Health Class no. 17: Economic Evaluation using Decision Analytic Modelling III Nov 5, 2008

  2. Plan of class • Patient-level simulations: an example • Assessment of uncertainty in decision-analytic models • Assessment of uncertainty due to sampling variation in individual studies

  3. 3rd alternative: patient-level simulation • Each individual encounters events with probabilities that can be made path-dependent • Virtually infinite flexibility • But how to “populate” all model parameters?
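The idea can be sketched in a few lines of Python. Everything here is illustrative: the event probabilities, costs, and case-fatality figure are made up. The point is only that each simulated patient's risk depends on his or her own history (path dependence), which a cohort model could capture only by expanding its state space:

```python
import random

def simulate_patient(n_cycles=20, seed=None):
    """Simulate one patient through yearly cycles where the risk of an
    adverse event depends on the patient's own history (path dependence)."""
    rng = random.Random(seed)
    prior_events = 0
    costs, lys = 0.0, 0.0
    alive = True
    for _ in range(n_cycles):
        if not alive:
            break
        # Illustrative path-dependent risk: each prior event raises this
        # cycle's event probability (parameter values are made up).
        p_event = min(0.05 + 0.03 * prior_events, 0.9)
        if rng.random() < p_event:
            prior_events += 1
            costs += 5000.0          # cost of managing the event
            if rng.random() < 0.10:  # case fatality after an event
                alive = False
                continue
        costs += 500.0               # routine annual cost
        lys += 1.0                   # one life-year accrued
    return costs, lys

# Average over many simulated patients to estimate expected cost and effect
results = [simulate_patient(seed=i) for i in range(10_000)]
mean_cost = sum(c for c, _ in results) / len(results)
mean_lys = sum(e for _, e in results) / len(results)
```

The "populate the parameters" problem raised above is visible even in this toy: the path-dependence coefficient (0.03 per prior event) is exactly the kind of detail for which valid data are hard to find.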

  4. Example of a study using patient-level simulation: Stahl JE, Rattner D, et al., Reorganizing the system of care surrounding laparoscopic surgery: A cost-effectiveness analysis using discrete simulation, Medical Decision Making, Sep–Oct 2004, 461–471.

  5. Study background

  6. Base case process of care

  7. Arriving or exiting process

  8. Sensitivity analysis results

  9. Average cost of patients cared for per day

  10. Conclusions • The new system yields a lower cost per patient treated • The cost advantage shrinks slightly when patient volume is lower • Reason: the higher cost per minute is more than offset by the higher throughput of patients • Sensitivity analyses, varying assumptions one at a time, point to the robustness of this conclusion

  11. Significance • Use of a simulation model allows representation of a complex process that neither decision tree nor Markov model could represent • Obtaining valid data may be an issue – process represented in greater detail, but on what basis are those details defined?

  12. Dealing with uncertainty in decision-analytic models

  13. Types of uncertainty in DAMs and how to handle them (From Box 3.3 of Drummond et al. 2005)

  14. Limitations of one-way sensitivity analyses • Stahl et al. 2004 varied key parameters one at a time • Limitations of this approach: • Conscious or unconscious bias in the selection of parameters to vary • Subjective interpretation – when do we conclude that results are too sensitive to variation in a parameter or other feature of the model? • Varying one parameter at a time ignores potential interactions and covariation • In many DAMs there are too many parameters to present the results of such analyses meaningfully

  15. Probabilistic sensitivity analysis • Represent uncertainty in parameters by means of a distribution • Example, beta distribution for parameter between 0 and 1 • Use joint distribution for parameters that are correlated • Propagate uncertainty through the model • Monte Carlo simulation • E.g., 10,000 replications using each time a different set of randomly-selected parameter values
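A minimal sketch of this procedure using only the Python standard library. The beta and gamma parameters and the toy decision model are assumptions chosen for illustration, not values from any study:

```python
import random

def psa(n_reps=10_000, seed=1):
    """Probabilistic sensitivity analysis: draw each uncertain parameter
    from its distribution, run the model, and collect the outputs."""
    rng = random.Random(seed)
    inc_costs, inc_effects = [], []
    for _ in range(n_reps):
        # Probabilities bounded on [0, 1] get beta distributions
        # (illustrative alpha/beta values, e.g. from event counts).
        p_event_old = rng.betavariate(30, 70)   # mean ~0.30
        p_event_new = rng.betavariate(20, 80)   # mean ~0.20
        # Costs are often given gamma (or log-normal) distributions.
        cost_event = rng.gammavariate(100, 50)  # mean ~5,000
        cost_new_tx = 1200.0                    # treated as a known price
        # A very simple decision-tree model, re-run per drawn parameter set:
        d_cost = cost_new_tx + (p_event_new - p_event_old) * cost_event
        d_effect = (p_event_old - p_event_new) * 2.0  # assumed LYs per event averted
        inc_costs.append(d_cost)
        inc_effects.append(d_effect)
    return inc_costs, inc_effects
```

Each of the 10,000 replications uses a freshly drawn parameter set, so the spread of `(d_cost, d_effect)` pairs reflects the propagated parameter uncertainty; correlated parameters would instead be drawn from a joint distribution.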

  16. The beta distribution

  17. Probabilistic sensitivity analysis • Present the implications of parameter uncertainty • Confidence intervals around the ICER, or around incremental net benefit • Cost-effectiveness acceptability curve • May also show a scatter plot on the cost-effectiveness plane

  18. Cost-effectiveness acceptability curves

  19. Incremental cost-effectiveness ratio

ICER = (C_E − C_C) / (E_E − E_C)

where C_E and C_C are the average costs per person in the experimental (E) and control (C) groups, and E_E and E_C are the average values of the effectiveness measure in the two groups.
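A worked example with made-up trial averages:

```python
# Hypothetical trial averages (made-up numbers)
cost_exp, cost_con = 12_000.0, 9_000.0  # average cost per person, $
eff_exp, eff_con = 4.5, 4.0             # average life-years per person

# Incremental cost divided by incremental effectiveness
icer = (cost_exp - cost_con) / (eff_exp - eff_con)
# (12000 - 9000) / (4.5 - 4.0) = 6000 $/LY gained
```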

  20. Representing uncertainty of the ICER • Ratio nature complicates things • Analytic methods exist but tend to oversimplify reality • Bootstrapping methods are now widely used instead

  21. Using the bootstrap to obtain a measure of the sampling variability of the ICER • Suppose we have nEXP and nCON observations in the experimental and control groups, respectively. One way to estimate the uncertainty around an ICER is to: 1. Sample nCON cost-effect pairs from the control group, with replacement 2. Sample nEXP cost-effect pairs from the experimental group, with replacement 3. Calculate the ICER from those two new sets of cost-effect pairs 4. Repeat steps 1 to 3 many times, e.g., 1,000 times 5. Plot the resulting 1,000 ICER values on the cost-effectiveness plane See Drummond & McGuire, Eds., Economic evaluation in health care, Oxford, 2001, p. 189
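The steps above can be sketched in Python; the (cost, effect) pairs below are made up, and `random.choice` performs the with-replacement sampling:

```python
import random

def bootstrap_icers(control, experimental, n_reps=1000, seed=42):
    """Bootstrap the ICER: resample (cost, effect) pairs with replacement
    within each arm, recompute the ICER, and repeat."""
    rng = random.Random(seed)
    icers = []
    for _ in range(n_reps):
        con = [rng.choice(control) for _ in control]            # step 1
        exp = [rng.choice(experimental) for _ in experimental]  # step 2
        d_cost = (sum(c for c, _ in exp) / len(exp)
                  - sum(c for c, _ in con) / len(con))
        d_eff = (sum(e for _, e in exp) / len(exp)
                 - sum(e for _, e in con) / len(con))
        if d_eff != 0:                                          # step 3
            icers.append(d_cost / d_eff)
    return icers                                                # steps 4-5

# Made-up per-patient (cost, effect) pairs for each arm
control = [(9000 + 100 * i, 4.0 + 0.05 * i) for i in range(50)]
experimental = [(12000 + 100 * i, 4.5 + 0.05 * i) for i in range(50)]
reps = bootstrap_icers(control, experimental)
```

Plotting each `(d_cost, d_eff)` pair rather than the ratio itself gives the scatter on the cost-effectiveness plane shown on the next slides.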

  22. An illustration of step 1 (Note: These are made-up data)

  23. Going over the next steps again… • Do exactly the same steps for data from the experimental group, independently. • Calculate the ICER from the 2 bootstrapped samples • Store this ICER in memory • Repeat the steps all over again • Of course, this is done by computer. Stata is one program that can be used to do this fairly readily.

  24. Bootstrapped replications of an ICER with 95% confidence interval Note: ellipses here are derived using Van Hout’s method and are too big; the bootstrap gives better results Source: Drummond & McGuire 2001, p. 189

  25. 2 common problems with bootstrapped confidence intervals • The magnitude of negative ICERs conveys no useful information: • A: (1 LY, -$2,000): ICER = -2,000 $/LY • B: (2 LY, -$2,000): ICER = -1,000 $/LY • C: (2 LY, -$1,000): ICER = -500 $/LY • B is preferred, yet its ICER is intermediate in value • Positive ICERs from the NE and SW quadrants have opposite interpretations • In the NE quadrant, fewer $ per LY gained favors the new treatment; in the SW quadrant, fewer $ saved per LY lost favors the old one • As a result, if enough bootstrapped replications fall in quadrants other than the NE, the 95% confidence interval becomes uninterpretable.

  26. Bootstrapped replications that fall in all 4 quadrants Source: Drummond & McGuire 2001, p. 193

  27. A solution: the cost-effectiveness acceptability curve • Strategy: We recognize that the decision-maker may in fact have a ceiling ratio, or shadow price, RC – a maximum amount of $ per unit of benefit he or she is willing to pay • So we estimate, from the bootstrapped replications, the probability that the ICER is less than or equal to the ceiling ratio, as a function of the ceiling ratio • If the ceiling ratio is $0, the probability that the ICER is less than or equal to 0 is the p-value from testing the null hypothesis that the costs of the 2 groups are the same • Recall that the p-value is the probability of observing, by chance, a difference at least as extreme as the one seen in the data if the true difference is in fact 0.

  28. Cost-effectiveness acceptability curve (CEAC) Source: Drummond & McGuire 2001, p. 195
