
The Information Needs of Healthcare Reform -- Comparative Effectiveness Research


Presentation Transcript


  1. The Information Needs of Healthcare Reform -- Comparative Effectiveness Research John M. Brooks, Ph.D. Professor Program in Pharmaceutical Socioeconomics College of Pharmacy & College of Public Health University of Iowa Health Effectiveness Research Center

  2. Outline • Economists can like healthcare reform too. • Need for better/more information in the present version of healthcare reform. • Healthcare reform information-seeking initiatives. • Scary things about the ability to satisfy this information need (with examples).

  3. Why is Healthcare Different? • Asymmetry of Information? • Moral Hazard? • Principal/Agent Problem – Demand Inducement? • Positive Externalities? Ding, Ding, Ding. We all gain when others consume healthcare.

  4. So even the economist “grim reapers” among us can agree to a goal of expanded access… The question then is: “How best to do this?”

  5. Existing US Healthcare System: • Generous benefits for those covered resulting in… • Unsustainable cost escalation related to both price and utilization. • Expensive healthcare services for those not covered. • Hodge-podge of efforts to control costs. → Diagnosis-Related Groups – DRGs → Resource-Based Relative Value Scale – RBRVS → Managed Care (and its associated “bean counters”)

  6. PPACA: • More generous benefits for those covered which will more likely result in … • Unsustainable cost escalation related to both price and utilization (Sisko et al., Health Affairs, Sept 2010). • Expensive healthcare services for those not covered (fewer of these). • Hopefully improved hodge-podge of efforts to control costs.

  7. Wall Street Journal Editorial (09/11/09): It’s like a variation of the old Marx Brothers routine: “The Soup is Terrible and the Portions are Too Small.”

  8. Where does the Cost-Cutting Optimism Come From? • Significant observed variation in healthcare costs (Fisher et al. Annals Int Med. 2003a). • Little relationship with outcomes (Fisher et al. Annals Int Med. 2003b). • Suggests resources are wasted in high cost areas. • Let’s find out how care is provided in low cost areas and spread the word…

  9. Geographic Variation in Medicare Healthcare Spending Source: http://www.dartmouthatlas.org/downloads/reports/Spending_Brief_022709.pdf

  10. An Interesting Cross-Current… • Other research suggests that folks are not getting enough care (Jencks et al. JAMA, 2003). • It could be that in “low cost areas” folks are getting “correct care”, thereby lowering costs.

  11. Also… • Positive correlations between provider supply, specialty mix and costs have been found. • But no correlations between provider supply, specialty mix, and outcomes. (e.g., Baicker & Chandra 2004a, Baicker & Chandra 2004b, Leonard, Stordeur, & Roberfroid, 2009) • Which has led to calls for modifying physician training mix. (Wennberg et al. 2008)

  12. • So something is “right” out there, if only we can find it and tell everyone about it. • But often the relevant question is not whether a treatment should be used at all, but whether a treatment is over- or underused in practice. Or … Which Rate is Right? Wennberg, New England Journal of Medicine, 1986

  13. PPACA-related Efforts to Gain this Information • The American Recovery and Reinvestment Act (ARRA) contains $1.1 billion for comparative effectiveness research (CER) to compare treatments and strategies to improve health. “Comparative effectiveness research is the conduct and synthesis of research comparing the benefits and harms of different interventions and strategies to prevent, diagnose, treat and monitor health conditions in ‘real world’ settings.” (Agency for Healthcare Research and Quality – AHRQ)

  14. PPACA-related Efforts to Gain this Information • Patient-Centered Outcomes Research Institute (PCORI) – (sec 6301) • Center for Medicare and Medicaid Innovation within CMS – (sec 3021) • Numerous idiosyncratic demonstration projects throughout.

  15. Example: Acute Myocardial Infarction (AMI) Secondary Prevention Quandary • 4 drugs (beta blockers, renin-angiotensin system antagonists, statins, and antiplatelets) recommended, based on singleton trials. (van der Elst et al. Heart, 2007) • Older patients were found to receive fewer of these drugs – the “treatment-risk paradox”. (Lee, JMCP, 2007)

  16. Data Like These: • Reveal the substantial treatment variation that many suggest needs to be remedied. → The remaining 97% should receive all 4 drugs, right? • Provide substantial “treatment variation” from which to potentially answer this question. → But also present numerous pitfalls for inference: why were some patients treated and others not?

  17. Where the notion of “evidence-based medicine” comes from… • Perhaps “evidence” is confusing or improperly presented to many providers. • Perform literature searches and “meta-analysis” of existing research. • Synthesize results in a manner easily digestible to providers. → Note AHRQ Effective Healthcare Program goals http://www.effectivehealthcare.ahrq.gov/index.cfm/what-is-the-effective-health-care-program1/

  18. Note in Section 1302… • Health care plans must provide “essential health benefits”. • And how do we find these? → Surveying current health plans → Periodic review. Seems easy enough.

  19. Steinberg and Luce in 2005 Health Affairs article point out (p 85 paraphrased): • The same researchers that plead to make all treatment decisions “evidence-based” … • in their next breath reject treatment effectiveness estimates from observational studies because surely the treated and untreated patients differ.

  20. From Friedrich Hayek’s The Fatal Conceit (p. 76): “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” What if, in contrast to the “naïve” story, U occurs because of:

  21. James Duesenberry (1960): Economics is all about how people make choices; Sociology is all about how they don’t have any choices to make. Duesenberry J. 1960. Comment on ‘An Economic Analysis of Fertility’. In Demographic and Economic Change in Developed Countries, ed. NBER. Princeton: Princeton University Press.

  22. Loaded Graphic • Treatment allocation involves “choice”. • What are all the consequences of a “treatment” choice? → What factors are inside “benefits”? → What factors are inside “costs”? • Do patients place the same value on each of these distinct treatment consequences? • How many of these consequences are usually compared in a randomized controlled trial?

  23. Loaded Graphic cont’d • Treated and untreated patients are different. • Provider extrapolation of evidence. • Because evidence beyond the RCT patients is not clear, the shape of the blue line beyond this group must be assumed. → Internal vs. External Validity. Is U “right” above? If provider expectations are “right” across patients, then “yes”, but truth may be different…

  24. Loaded Graphic cont’d • Existing “evidence” is insufficient to describe the shape of the blue line beyond the RCT group. → Internal vs. External Validity. • The shape of the blue line outside the RCT population is fodder for research. → Answer is not a single number.

  25. Evidence for ? • More RCTs? → expensive → ethics → who wants to be randomized anyway? → compliance with randomization • Plenty of treatment variation out there: U% treated, (1 − U)% not treated. Can we exploit this for research?

  26. Research Challenges Using Observational Data • Treated and untreated patients are “different”. • Not all information affecting treatment choice and outcomes is available in databases. • What information about the blue line can we learn from available estimation methods or “estimators”?

  27. Risk Adjustment Estimators • Methods to control for measured differences between treated and untreated patients. → multiple variable regression → matching methods (propensity scores). • Assumes that after controlling for measured differences, unmeasured differences across patients are “ignorable”. • At BEST, provides an estimate of the Average Treatment Effect for the Treated (info on the height of the blue line from 0 to U).
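To make the idea concrete, here is a minimal sketch of a propensity-score matching estimate of the ATT, assuming a pandas DataFrame df with a binary 'treated' column, an 'outcome' column, and measured covariates (all names are illustrative, not from the presentation):

```python
# Illustrative sketch (not the authors' code): propensity-score matching
# to estimate the Average Treatment Effect on the Treated (ATT).
# Assumes a pandas DataFrame `df` with columns 'treated' (0/1), 'outcome',
# and measured covariates listed in `covariates`.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_propensity_matching(df: pd.DataFrame, covariates: list) -> float:
    # 1. Model the probability of treatment given measured covariates.
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df['treated'])
    df = df.assign(pscore=ps_model.predict_proba(df[covariates])[:, 1])

    treated = df[df['treated'] == 1]
    control = df[df['treated'] == 0]

    # 2. Match each treated patient to the nearest untreated patient on the
    #    propensity score (1:1, with replacement).
    nn = NearestNeighbors(n_neighbors=1).fit(control[['pscore']])
    _, idx = nn.kneighbors(treated[['pscore']])
    matched_outcomes = control['outcome'].to_numpy()[idx.ravel()]

    # 3. ATT = mean outcome gap between treated patients and their matches --
    #    valid only if unmeasured differences are "ignorable" after
    #    conditioning on the measured covariates.
    return float((treated['outcome'].to_numpy() - matched_outcomes).mean())
```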

  28. Natural Experiment-Based (instrumental variable) Estimators • Estimates average treatment effectiveness for the subset of patients whose treatment choices were affected by an outside factor (instrument). • Assumes an instrument is available that affects the “outcome” only through its effect on treatment choice. • Provides an estimate of a Local Average Treatment Effect (LATE) (at its BEST provides info on the height of the blue line around U).
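As a companion to the previous sketch, here is a minimal two-stage least squares (2SLS) illustration of the IV idea, assuming arrays outcome, treated (0/1), covariates X, and a single instrument z; the setup and names are illustrative, not the authors' method:

```python
# Illustrative sketch (not the authors' code): manual 2SLS to estimate a
# Local Average Treatment Effect (LATE) using an instrument `z`.
import numpy as np
import statsmodels.api as sm

def late_2sls(outcome, treated, X, z):
    # Stage 1: predict treatment from the instrument plus measured covariates.
    stage1_X = sm.add_constant(np.column_stack([z, X]))
    treated_hat = sm.OLS(treated, stage1_X).fit().predict(stage1_X)

    # Stage 2: regress the outcome on predicted treatment plus covariates.
    stage2_X = sm.add_constant(np.column_stack([treated_hat, X]))
    stage2 = sm.OLS(outcome, stage2_X).fit()

    # The coefficient on predicted treatment (column 1, after the constant)
    # approximates the LATE for patients whose treatment choice was shifted
    # by the instrument. Note: standard errors from this naive two-step
    # regression are wrong; a dedicated IV routine should be used in practice.
    return stage2.params[1]
```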

  29. Where do instruments come from? • Theory on what motivated choices, not theory on how choices can be motivated. → Observed differences in: -- guideline implementation (timing/interpretation) -- product approval rules across payers -- reimbursement differences across payers/geography -- area provider “treatment signatures” -- geographic access to relevant providers -- provider market structure/competition • Generally, “Natural Experiments” (Angrist and Krueger, 2001)
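One concrete instrument of the area “treatment signature” type is the leave-one-out local-area treatment rate; a minimal sketch, assuming a pandas DataFrame df with illustrative 'area' and 'treated' columns:

```python
# Illustrative sketch (not the authors' code): leave-one-out area
# treatment rate as a practice-style instrument.
import pandas as pd

def area_practice_style(df: pd.DataFrame) -> pd.Series:
    area_sum = df.groupby('area')['treated'].transform('sum')
    area_n = df.groupby('area')['treated'].transform('count')
    # Exclude each patient's own treatment so the instrument reflects local
    # practice style rather than the patient's own choice. Areas with a
    # single patient yield NaN and would need to be pooled or dropped.
    return (area_sum - df['treated']) / (area_n - 1)
```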

  30. Estimators are Not Panaceas • All estimates are conditional on assumptions and assumption failure results in biased estimates. • Without further assumptions they cannot provide information on: → the average treatment effect across all patients. → the average treatment effect for the untreated. → the average treatment effect for individual patients. • Analysis of patient subsets can move estimates toward those needed for patient-centered care.
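A brief aside on why these quantities differ (a standard potential-outcomes identity, with U denoting the treated share as in the slides):

```latex
% ATE is a weighted average of the effect for the treated (ATT) and the
% effect for the untreated (ATU), weighted by the treated share U:
\[
\mathrm{ATE} = U \cdot \mathrm{ATT} + (1 - U) \cdot \mathrm{ATU}
\]
% Risk adjustment targets (at best) ATT and IV targets a LATE near U, so
% neither pins down ATE or ATU without further assumptions.
```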

  31. Empirical Examples • McClellan, McNeil, & Newhouse (1994) provide the seminal Instrumental Variable (IV) analysis in healthcare. → Goal: estimate the mortality reduction from surgery for patients with acute myocardial infarction (AMI) using Medicare claims. → They noted AMI patients receiving surgery were less severe and less sick in measured ways; perhaps they were also less severe/sick in unmeasured ways. → Authors expected that direct risk-adjustment estimates of mortality reduction associated with surgery would be biased high (more mortality reduction).

  32. → They then theorized that, since IV estimates would be “unbiased”, they would be closer to zero than the risk-adjustment estimates. Behold....

  33. They attribute the differences between estimates to IV’s ability to remove the bias problem... Yet note that their IV estimate implies that surgery has no effect on mortality. Quick to qualify… “When valid, IV methods estimate a somewhat different effect than controlled trials; the average effect of treatment over the marginal range of probabilities of use ... not the average effect of treatment over an entire population” Thinking treatment effects are homogeneous across patients will get you in trouble here!!!!!

  34. Follow-up: • We showed that AMI patients with more generous insurance had higher treatment rates, but lower estimates of LATE (Brooks, McClellan, Wong 2000).

  35. So What Can be Tested? • With “sorting on gain” researchers can only identify ATT and LATE. In context these are useful for policy makers. → H0: the current system allocates treatments correctly. HA: treatments are misallocated in the current system. Yongming Zhao showed (unpublished):

  36. Current Research Example Survival Implications Associated with Variation in Mastectomy Rates for Early-Staged Breast Cancer (ESBC) Co-investigators: Nancy Keating Mary Beth Landrum Elizabeth Chrischilles Kara Wright Gang Fang Eric Winer Rita Volya Preliminary Results: Do not quote without permission

  37. Background • RCT evidence suggesting survival equivalence between mastectomy and breast-conserving surgery plus radiation (BCSXRT) led to a guideline recommending BCSXRT. (NIH Consensus Conference, 1991) • Despite the guideline, mastectomy remains widely used, with recent rate increases. (Keating et al. 2010; Katipamula et al. 2009) Preliminary Results: Do not quote without permission

  38. Background • Patients with more severe disease are more likely to receive mastectomy (Keating et al. 2010; Schonberg et al. 2010), suggesting beliefs that: → surgery choice effects are heterogeneous across patients; and → mastectomy has advantages for patients with greater disease severity. • Substantial geographic variation exists. (Keating et al. 2010) Preliminary Results: Do not quote without permission

  39. Background • Estimates using “risk-adjustment” estimators with observational data found: → mastectomy has a survival disadvantage relative to BCSXRT; and → this disadvantage increases with severity. (Keating et al. 2010; Schonberg et al. 2010) • Results suggest either: → unmeasured confounders favoring BCSXRT; or → BCSXRT benefit for more severe patients that has not been previously revealed. Preliminary Results: Do not quote without permission

  40. Research Objective • Exploit the geographical variation in the use of mastectomy to assess its effect on patient survival. • Interpret estimates in light of the properties of the estimators and existing evidence (pieces of a puzzle). Preliminary Results: Do not quote without permission

  41. Study Characteristics • 32,733 ESBC patients from SEER-Medicare, 1992-1999. • Assessed 6-year survival. • Used measure of local-area mastectomy practice style as “instrument” in instrumental variable analysis (Gang, Brooks, Chrischilles, 2010). Instruments must be: → related to treatment choice; and → theoretically, have no direct relationship with outcome or unmeasured confounders. Preliminary Results: Do not quote without permission

  42. [Results figure omitted] Preliminary Results: Do not quote without permission

  43. [Results figure omitted] Preliminary Results: Do not quote without permission

  44. I am certain that there is too much certainty in the world. Michael Crichton in State of Fear Or in the words of Kansas (Carry on Wayward Son): “And if I claim to be a wise man, it surely means that I don't know….”
