

  1. A Two Level Monte Carlo Approach To Calculating Expected Value of Sample Information: How to Value a Research Design. Alan Brennan, Jim Chilcott, Samer Kharroubi, Tony O'Hagan. University of Sheffield, IHEA, June 2003

  2. What are EVPI and EVSI?

  3. Typical Process

  4. θ = the uncertain model parameters; d = a decision from the set of possible decisions; NB(d, θ) = net benefit (λ·QALYs − Cost) of decision d given parameters θ; θᵢ = the parameters of interest (candidates for further data collection); θ₋ᵢ = the other parameters (not of interest; the remaining uncertainty).
(1) Baseline decision under current information: $\max_d \, E_{\theta}[NB(d,\theta)]$
(2) Perfect information on θᵢ (two nested expectations): $E_{\theta_i}\big[\max_d \, E_{\theta_{-i}|\theta_i}[NB(d,\theta)]\big]$; Partial EVPI = (2) − (1)
(3) Sample information Xᵢ on θᵢ: $E_{X_i}\big[\max_d \, E_{\theta|X_i}[NB(d,\theta)]\big]$; Partial EVSI = (3) − (1)

  5. Expected Value of Sample Information (two-level algorithm; see the sketch after this list)
• 0) Build the decision model, set the threshold λ, and place priors on the uncertain parameters
• 1) Simulate data collection: sample the parameter(s) of interest once from the prior; decide on a sample size nᵢ (1st level); sample the simulated data given that parameter value
• 2) Combine prior + simulated data → simulated posterior
• 3) Now simulate, say, 1,000 times: parameters of interest ~ simulated posterior; other parameters ~ prior uncertainty (2nd level)
• 4) Calculate the best strategy = highest mean net benefit
• 5) Loop steps 1 to 4, say, 1,000 times and average the resulting net benefits (e.g. 1,000 × 1,000 simulations)
• 6) EVSI for the parameter set = (5) − (mean net benefit | current information)
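As a concrete reading of steps 0)–6), here is a minimal Python sketch. The two-decision model, the prior values and the sample size are hypothetical stand-ins invented for this illustration (not the authors' model), and the conjugate normal update used in step 2 anticipates the next slide.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-decision model (NOT the authors' illustrative model):
# decision 1 has a net benefit driven by the parameter of interest theta_i
# and a nuisance parameter theta_ni; decision 0 is the comparator.
def nb(d, theta_i, theta_ni):
    return 1000.0 * theta_i - 500.0 * theta_ni if d == 1 else np.zeros_like(theta_i)

mu0, sd0 = 0.5, 0.2     # illustrative prior for theta_i
sigma_pop = 1.0         # patient-level sd (needed for the normal update)
mu_n, sd_n = 0.3, 0.1   # illustrative prior for the nuisance parameter

def best_nb_current_info(m=10_000):
    ti, tn = rng.normal(mu0, sd0, m), rng.normal(mu_n, sd_n, m)
    return max(nb(0, ti, tn).mean(), nb(1, ti, tn).mean())

def evsi(n, outer=1_000, inner=1_000):
    best = np.empty(outer)
    for k in range(outer):                 # 1st (outer) level
        theta = rng.normal(mu0, sd0)       # 1) theta_i sampled once ~ prior
        xbar = rng.normal(theta, sigma_pop / np.sqrt(n))  # simulated trial mean
        # 2) conjugate normal update: prior + simulated data -> posterior
        prec = 1 / sd0**2 + n / sigma_pop**2
        mu1, sd1 = (mu0 / sd0**2 + n * xbar / sigma_pop**2) / prec, np.sqrt(1 / prec)
        # 3) 2nd (inner) level: theta_i ~ posterior, nuisance ~ prior
        ti, tn = rng.normal(mu1, sd1, inner), rng.normal(mu_n, sd_n, inner)
        # 4) best strategy = highest mean net benefit given these data
        best[k] = max(nb(0, ti, tn).mean(), nb(1, ti, tn).mean())
    # 6) EVSI = average over simulated datasets - best under current information
    return best.mean() - best_nb_current_info()

print(f"EVSI(n=50) ~ {evsi(50):.1f}")
```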

  6. Bayesian Updating: Normal Prior
Prior: $\theta \sim N(\mu_0, \sigma_0^2)$, where $\mu_0$ = prior mean and $\sigma_0$ = uncertainty in that mean (its standard deviation), so $1/\sigma_0^2$ = precision of the prior mean; $\sigma^2_{pop}$ = patient-level variance (needed for the update formula).
Simulated data (further data collection, e.g. a clinical trial): $n$ observations with sample mean $\bar{x}$ and sample variance $s^2$; $n/\sigma^2_{pop}$ = precision of the sample mean.
Simulated posterior: $\theta \sim N(\mu_1, \sigma_1^2)$, with $\mu_1 = \dfrac{\mu_0/\sigma_0^2 + n\bar{x}/\sigma^2_{pop}}{1/\sigma_0^2 + n/\sigma^2_{pop}}$ and $\sigma_1^2 = \left(1/\sigma_0^2 + n/\sigma^2_{pop}\right)^{-1}$.
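A one-function version of this update (the function name and argument names are mine, not the slides'):

```python
import numpy as np

def normal_update(mu0, sd0, sigma_pop, xbar, n):
    """Conjugate normal update for a N(mu0, sd0^2) prior on the mean,
    known patient-level sd sigma_pop, and n observations with mean xbar."""
    prec1 = 1 / sd0**2 + n / sigma_pop**2          # posterior precision
    mu1 = (mu0 / sd0**2 + n * xbar / sigma_pop**2) / prec1
    return mu1, np.sqrt(1 / prec1)                 # posterior mean and sd
```

For example, `normal_update(0.5, 0.2, 1.0, xbar=0.7, n=50)` shows the posterior sd shrinking towards zero as n grows, which is the point of the next slide.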

  7. Normal Posterior Variance – Implications
• $\sigma_1 < \sigma_0$ always: collecting data never increases uncertainty in the mean
• If n is very small, $\sigma_1 \approx \sigma_0$: almost nothing is learned
• If n is very large, $\sigma_1 \approx 0$: the mean becomes almost certain

  8. Bayesian Updating: Beta / Binomial (e.g. % responders to a treatment)
• Prior: % responders ~ Beta(a, b)
• Simulated data: n cases, y successful responders
• Simulated posterior: % responders ~ Beta(a + y, b + n − y)
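A hedged sketch of one outer-loop draw for this conjugate pair; the prior parameters and trial size are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

a, b, n = 4, 16, 100              # illustrative prior (mean response 20%), trial size
p_true = rng.beta(a, b)           # 1st level: sample the response rate ~ prior
y = rng.binomial(n, p_true)       # simulated data: y responders out of n cases
a1, b1 = a + y, b + n - y         # conjugate posterior: Beta(a + y, b + n - y)
p_post = rng.beta(a1, b1, 1_000)  # 2nd level: posterior draws for the inner loop
```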

  9. Bayesian Updating: Gamma / Poisson (e.g. the number of side effects a patient experiences in a year)
• Prior: side effects per person ~ Gamma(a, b)
• Simulated data: n samples (y₁, y₂, …, yₙ) from a Poisson distribution
• Simulated posterior: mean side effects per person ~ Gamma(a + Σyᵢ, b + n)
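The same pattern for the Gamma/Poisson pair (illustrative numbers; note that NumPy parameterises the gamma by scale = 1/b):

```python
import numpy as np

rng = np.random.default_rng(3)

a, b, n = 3, 2, 50                       # illustrative prior: mean rate a/b = 1.5/year
lam_true = rng.gamma(a, 1 / b)           # 1st level: sample the rate ~ prior
y = rng.poisson(lam_true, n)             # simulated data: n yearly counts
a1, b1 = a + y.sum(), b + n              # conjugate posterior: Gamma(a + sum(y), b + n)
lam_post = rng.gamma(a1, 1 / b1, 1_000)  # 2nd level: posterior draws
```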

  10. Bayesian Updating without a Formula
• The Normal, Beta/Binomial and Gamma/Poisson pairs above are conjugate distributions; when no conjugate formula exists, use WinBUGS:
• Put in the prior distribution
• Put in the data
• MCMC gives the posterior ('000s of iterations)
• Use the posterior in the model
• Loop to the next data sample
• Other approximation methods are also available
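WinBUGS takes a BUGS model file rather than Python; as a stand-in showing the same prior + data → MCMC posterior workflow, here is a hedged sketch using PyMC (my substitution, not the authors' tool), with a deliberately non-conjugate prior and made-up data:

```python
import pymc as pm

# Non-conjugate example: a logit-normal prior on a response rate.
with pm.Model():
    logit_p = pm.Normal("logit_p", mu=-1.0, sigma=0.5)     # illustrative prior
    p = pm.Deterministic("p", pm.math.sigmoid(logit_p))
    pm.Binomial("y", n=100, p=p, observed=23)              # simulated data
    idata = pm.sample(2000, tune=1000, progressbar=False)  # MCMC posterior

p_post = idata.posterior["p"].values.ravel()  # use these draws in the model,
                                              # then loop to the next data sample
```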

  11. EVSI Results for Illustrative Model

  12. Common Properties of the EVSI Curve
• Fixed at zero if no sample is collected: EVSI(0) = 0
• Bounded above by EVPI; monotonic; diminishing returns
• A conjectured functional form: $EVSI(n) = EVPI \cdot \left[1 - e^{-a\sqrt{n}}\right]$ ?
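The conjectured shape is easy to eyeball; the EVPI ceiling and the rate constant a below are arbitrary illustrative values:

```python
import numpy as np

evpi, a = 1500.0, 0.15                        # illustrative ceiling and rate
n = np.array([0, 10, 50, 100, 500, 1000])
evsi = evpi * (1 - np.exp(-a * np.sqrt(n)))   # zero at n = 0, monotone, -> EVPI
print(np.round(evsi, 1))
```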

  13. Correct 2-Level EVPI Algorithm (see the sketch after this list)
• 0) Build the decision model, set the threshold λ, and place priors on the uncertain parameters
• 1) Sample the parameter(s) of interest once from the prior (1st level)
• 2) No data are simulated and no posterior is formed: perfect information treats the sampled value as known
• 3) Now simulate, say, 1,000 times: parameters of interest fixed at the sampled value; other parameters ~ prior uncertainty (2nd level)
• 4) Calculate the best strategy = highest mean net benefit
• 5) Loop steps 1 to 4, say, 1,000 times and average the resulting net benefits (e.g. 1,000 × 1,000 simulations)
• 6) EVPI for the parameter set = (5) − (mean net benefit | current information)
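Under the same hypothetical model as the EVSI sketch above, the two-level partial EVPI loop looks like this; the only change is that the data-collection and updating steps collapse to holding the sampled value fixed:

```python
import numpy as np

rng = np.random.default_rng(4)

def nb(d, theta_i, theta_ni):  # hypothetical two-decision model, as before
    return 1000.0 * theta_i - 500.0 * theta_ni if d == 1 else np.zeros_like(theta_ni)

mu0, sd0, mu_n, sd_n = 0.5, 0.2, 0.3, 0.1   # illustrative priors

def partial_evpi(outer=1_000, inner=1_000):
    best = np.empty(outer)
    for k in range(outer):
        ti = rng.normal(mu0, sd0)            # 1st level: theta_i ~ prior, then
        tn = rng.normal(mu_n, sd_n, inner)   # held fixed; 2nd level: others ~ prior
        best[k] = max(nb(0, ti, tn).mean(), nb(1, ti, tn).mean())
    ti0, tn0 = rng.normal(mu0, sd0, 10_000), rng.normal(mu_n, sd_n, 10_000)
    current = max(nb(0, ti0, tn0).mean(), nb(1, ti0, tn0).mean())
    return best.mean() - current

print(f"partial EVPI ~ {partial_evpi():.1f}")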

  14. Shortcut 1-Level EVPI Algorithm (see the sketch after this list)
• 1) Sample the parameter(s) of interest once from the prior
• 2) Fix the remaining unknown parameters constant at their prior mean values (no inner loop)
• 3) Evaluate the net benefit of each decision once for that draw
• 4) Best strategy = highest net benefit for that draw
• 5) Loop steps 1 to 4, say, 1,000 times and average the best net benefits
• 6) EVPI for the parameter set = (5) − (mean net benefit | current information)
Accurate if (a) the net benefit functions are linear in θ₋ᵢ for all d and θᵢ, and (b) θᵢ and θ₋ᵢ are independent.
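The shortcut removes the inner loop entirely: the nuisance parameters sit at their prior means, so each outer draw costs one model evaluation per decision (a fair approximation only under conditions (a) and (b) above).

```python
import numpy as np

rng = np.random.default_rng(5)

def nb(d, theta_i, theta_ni):  # same hypothetical (and linear) model as above
    return 1000.0 * theta_i - 500.0 * theta_ni if d == 1 else 0.0

mu0, sd0, mu_n = 0.5, 0.2, 0.3   # illustrative priors; nuisance fixed at its mean

def shortcut_evpi(outer=10_000):
    ti = rng.normal(mu0, sd0, outer)   # sample theta_i ~ prior
    # one evaluation per decision per draw, nuisance parameter at its prior mean
    per_draw_best = np.maximum(nb(0, ti, mu_n), nb(1, ti, mu_n))
    current = max(np.mean(nb(0, ti, mu_n)), np.mean(nb(1, ti, mu_n)))
    return per_draw_best.mean() - current

print(f"shortcut partial EVPI ~ {shortcut_evpi():.1f}")
```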

  15. How Many Samples for Convergence?
• Outer level: over 500 samples converged to within 1%
• Inner level: 10,000 samples were required to converge to within 2%

  16. How Wrong Is the 1-Level EVPI Approach for a Non-Linear Model?
• Test case: every model parameter squared (a deliberately non-linear model)
• Adjusted R² for a simple linear regression = 0.86
[Scatter plots comparing the linear model and the non-linear model]

  17. Maximum of Monte Carlo Expectations: Upward Bias
• Simple Monte Carlo estimates are unbiased, but bias occurs when we take the maximum of Monte Carlo estimates: E{max(X,Y)} > max{E(X), E(Y)}
• e.g. X ~ N(1,1), Y ~ N(1,1) independent: max{E(X), E(Y)} = 1.0, while E{max(X,Y)} ≈ 1.56
• If the variances of X and Y fall to 0.5, E{max(X,Y)} ≈ 1.40
• E{max(X,Y)} continues to fall as the variance reduces, but stays > 1.0
• Illustrative model: 1,000 × 1,000 simulations were not enough to eliminate the bias
• Increasing the number of samples reduces the bias, because it reduces the variances of the Monte Carlo estimates
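A quick simulation makes the bias and its decay visible; the sample sizes and replication count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two decisions with identical true expected net benefit of 1.0: any gap
# between the mean of max(estimates) and 1.0 is pure upward bias.
for n in (10, 100, 10_000):                      # Monte Carlo samples per estimate
    est = [max(rng.normal(1.0, 1.0, n).mean(),   # estimate of E[NB(d=0)]
               rng.normal(1.0, 1.0, n).mean())   # estimate of E[NB(d=1)]
           for _ in range(2_000)]
    print(n, round(np.mean(est) - 1.0, 3))       # bias shrinks roughly as 1/sqrt(n)
```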

  18. Computation Issues: Emulators
• Gaussian processes (Jeremy Oakley, CHEBS)
• Treat net benefit as an unknown function of the parameters, f(θ) ≈ NB(d, θ), and estimate it by Bayesian non-linear regression (mathematically involved)
• Assumes only a smooth functional form for NB(d, θ)
Benefits:
• Can emulate complex, time-consuming models with a formula, i.e. speed up each sample
• Can produce a quick approximation to the inner expectation for partial EVPI
• A similar quick approximation exists for partial EVSI, but only for one parameter
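For flavour only, here is a sketch of the emulation idea using scikit-learn's GaussianProcessRegressor; this library and the toy model below are my stand-ins, not the CHEBS machinery the slide refers to:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(7)

def slow_model(theta):  # stand-in for an expensive net-benefit model
    return 1000 * np.sin(theta[:, 0]) - 500 * theta[:, 1] ** 2

theta_train = rng.uniform(0, 1, size=(60, 2))    # a few expensive model runs
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
gp.fit(theta_train, slow_model(theta_train))     # fit the smooth emulator

theta_big = rng.uniform(0, 1, size=(100_000, 2))
nb_fast = gp.predict(theta_big)                  # cheap surrogate evaluations
```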

  19. Summary
• The 2-level algorithm is the correct approach for both EVPI and EVSI
• Bayesian updating is straightforward for Normal, Beta and Gamma priors; otherwise use WinBUGS or approximations
• There are issues of computation
• The shortcut 1-level EVPI algorithm is accurate only if the net benefit functions are linear and the parameters are independent
• Emulators (e.g. Gaussian processes) can be helpful
