Design decisions and lessons learned in the Improving Chronic Illness Care Evaluation

Presentation Transcript



Design decisions and lessons learned in the Improving Chronic Illness Care Evaluation

Emmett Keeler

September 2005

[email protected]



Evaluation questions

  • Did the Collaboratives induce positive changes?

  • Did implementing the Chronic Care Model (CCM) improve processes of care and patient health?

  • What did participation and implementation cost?

  • What factors were associated with success?

  • www.improvingchroniccare.org is a great website for information about the Chronic Care Model.



    ICICE results in 1 slide

    • Collaborative sites made more than 30 systemic changes over the year, on average

    • These changes let us test if moving towards the CCM is good for patients

    • Process, self-management, and some outcomes improved more for intervention patients than for patients at control sites

      • improving most in emphasized areas such as patient goal-setting

  • http://www.rand.org/health/ICICE presents findings from the 15 accepted papers and other information about the study.



    Outline

    • Design considerations and what we did

    • Improving science base for QI evaluations

      • Dealing with challenges to validity

      • Reducing per subject costs



    Drug RCT paradigm

    Aims at internal validity of treatment estimates

    • Reduce unwanted variation by

      • Many subjects

      • Tight criteria for enrollment

      • Tight protocols for treatments

    • Eliminate potential bias by

      • Randomization

      • Blinding

  • How much of this paradigm is possible in evaluating systemic change in many organizations?



    Is Randomization Feasible?

    • Organizations want to improve care, not do research

      • Are afraid patients won’t like being subjects

    • Many changes are at site level: new appointment system; staff training; new information system.

      • Can’t be applied to half the patients at the site

      • Sites might be randomized in system QI trials, but not patients

    • So patient RCTs are not feasible in these evaluations, but many reviewers for medical journals believe non-RCTs have little value.



    Components of alternate strong studies

    • Before and after with a matched control group

    • Multiple sources of data and an evaluation logic model

    • Planning for and testing potential biases



    Evaluation Design

    [Design diagram: the pilot group moves from baseline measures through experimenting with care and implementing improved care to post measures, giving changes in pilot measures; the control group moves from baseline measures through secular trends to post measures, giving changes in control measures. The difference in changes estimates the intervention effect, formalized below.]

    “An Evaluation of Collaborative Interventions to Improve Chronic Illness Care: Framework and Study Design,” Cretin S et al., Evaluation Review, 2004
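A minimal formalization of the difference-in-changes estimate shown in the diagram (the notation is ours, not the slide's), with group-mean outcomes measured before and after:

```latex
% Difference in changes (difference-in-differences):
% the control group's change absorbs secular trends.
\begin{aligned}
\Delta_{\text{pilot}}   &= \bar{Y}^{\,\text{post}}_{\text{pilot}}   - \bar{Y}^{\,\text{pre}}_{\text{pilot}} \\
\Delta_{\text{control}} &= \bar{Y}^{\,\text{post}}_{\text{control}} - \bar{Y}^{\,\text{pre}}_{\text{control}} \\
\widehat{\text{effect}} &= \Delta_{\text{pilot}} - \Delta_{\text{control}}
\end{aligned}
```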



    Picking a control group

    • External control sites

      • No contamination from intervention

      • But a different set of shocks

      • Less likely to cooperate, more expensive to include

    • We tried to pick internal sites in the same organization

      • Asked for a similar site that was not likely to get the CCM this year.



    Patient Sampling

    • Use sampling frame from site registry of patients with disease of interest

      • Registries are needed for the CCM interventions

      • A few sites had to develop them, with our help

    • We usually took everyone in the registry at our sites.

      • Not a “tight” design, but better for external validity

    • We later discarded patients who said they did not have the disease, or did not get care at the sites.



    Evaluation Data Sources


    • Patient Telephone Surveys

    • Medical Record Abstraction

    • Clinical & Administrative Staff Surveys

    • Monthly Progress Reports

    • Final Calls with Leaders



    Telephone Survey

    • The patient is the best source for what the care provider does:

      • Education, Knowledge, Adherence,

      • Communication, Satisfaction,

      • General and disease specific health, limitations

      • Utilization, Demographics and insurance.

    • Can be used to check whether improvements in charts are real or just better documentation.

    • Cost ~$100 each for the ~4000 patients phoned.



    Advantages of charts

    • IRB and consent difficulties delayed recruitment

      • Phone surveys came only at the end of the collaborative, but

      • Charts still provide a true before and after

        • can add baseline variables from charts to analyses of measures from the phone surveys.

      • Cost ~$300 each, but they give both before and after data.



    Monthly progress reports


    • The collaborative asked the team at each intervention site to fill out a brief report each month for their leaders and for IHI, covering:

      • what they had done that month,

      • their most important tracking statistics.

    • Very helpful in finding out what sites did.

      • We needed to develop a method to code change activities

      • Used changes as both a dependent and an independent variable (see the sketch below).

    • Lack of standardization reduced the value of their statistics to us.

      • e.g., for depression, sites used 3 different time periods for “in treatment”, 3 for time to a follow-up, and 2 for big improvement at N months.
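As an illustration of using coded changes as an independent variable, here is a minimal sketch (hypothetical data, column names, and model; not the study's actual analysis) relating site-level improvement to the number of coded change activities:

```python
# Sketch: relate a site's improvement on a process-of-care measure
# to the number of CCM change activities coded from its monthly
# progress reports. All values below are made up for illustration.
import pandas as pd
import statsmodels.api as sm

sites = pd.DataFrame({
    "n_changes":  [34, 28, 41, 19, 36, 25],             # coded changes per site
    "pre_score":  [0.42, 0.51, 0.38, 0.47, 0.40, 0.45],
    "post_score": [0.61, 0.58, 0.66, 0.50, 0.63, 0.52],
})
sites["improvement"] = sites["post_score"] - sites["pre_score"]

# Changes as the independent variable; they can equally be modeled
# as a dependent variable (e.g., predicted from site characteristics).
X = sm.add_constant(sites["n_changes"])
print(sm.OLS(sites["improvement"], X).fit().summary())
```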



    Outline

    • Design considerations and what we did

    • Improving science base for QI evaluations

    • Dealing with challenges to validity

    • Reducing per subject costs



    Longer Run Effects?

    • We mainly looked at care before and during the collaborative to shorten the study, saving time-to-results and resources.

    • How can one get longer run effects?

      • Use risk factors as a proxy for future health?

        • Cholesterol, HbA1c, blood pressure in people with diabetes.

      • Check in a year later with key staff for:

        • Reflections on successes and barriers

        • What happened next (maintenance, spread of quality improvements).



    Evaluators need to know what control sites are doing

    • In the first collaborative, control sites made big improvements, leaving intervention sites only insignificantly better. What was up?

    • Bleed of collaborative quality improvement to controls:

      • We measured how “close” each control site was to intervention sites, using a scale that included geography and overlap of staff and organizations (only 6 control sites were truly external).

      • This scale was a mild predictor of control sites doing better (illustrated below).

    • Other quality improvement activities: QI activities unique to a control site reduce the estimates of intervention success.

      • Asking about QI was a big part of the control-site exit phone call.
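A minimal sketch of how such a contamination check might look (the closeness scores and improvement values below are hypothetical; the study's actual scale and data are not reproduced here):

```python
# Sketch: test whether control sites that are "closer" to intervention
# sites improved more, which would suggest bleed of the intervention.
from scipy.stats import pearsonr

closeness   = [3, 1, 4, 2, 5, 1, 3, 2]                          # higher = closer
improvement = [0.10, 0.02, 0.12, 0.05, 0.15, 0.01, 0.08, 0.04]  # per control site

r, p = pearsonr(closeness, improvement)
print(f"r = {r:.2f}, p = {p:.3f}")  # positive r hints at contamination
```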



    Bias from volunteer organizations and teams?

    • External validity: Organizations volunteered to improve their care

      • We do not know how effective the intervention would be on organizations that do not care to improve quality.

    • Internal validity: Site staff in the organization often volunteered to go first.

      • We compared staff attitudes of the chronic care delivery teams in intervention and control sites.

        • Similar attitudes towards quality improvement



    Selection Bias from organizations agreeing to be evaluated?

    • Compare organizations in the collaborative that participated in the evaluation with those that did not.

    • IHI faculty gave 1-5 ratings of success at the end.

      • Participating organizations averaged 4.1, non-participating 3.9; the difference was non-significant (see the sketch below)

      • None of the 7 organizations that dropped out of the collaboratives had signed up to participate.
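To illustrate the kind of comparison behind that non-significance claim, a sketch with hypothetical rating vectors (only the approximate group means, 4.1 and 3.9, come from the talk):

```python
# Sketch: compare IHI faculty success ratings (1-5) between
# organizations that did and did not join the evaluation.
from scipy.stats import ttest_ind

participating     = [4, 5, 4, 4, 3, 5, 4, 4]   # mean ~4.1
non_participating = [4, 3, 4, 5, 3, 4, 4, 4]   # mean ~3.9

t, p = ttest_ind(participating, non_participating)
print(f"t = {t:.2f}, p = {p:.3f}")  # non-significant in the actual study
```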



    Bias from Funder’s pressure

    • A real and perceived problem that increases with less rigid designs.

    • Separate the evaluation team from the intervention team

      • But we need their help to enhance cooperation and help us understand the intervention process

    • Try to ensure independence up-front in contract

      • Funders can review but not censor results

      • Evaluators share “unpleasant” results ASAP and

        • work with funder to understand and present them

    • But researchers and funders are in a long-term relationship



    Lowering cost of evaluations

    • Reducing multi-site IRB and consent costs:

      • Study QI in organizations

        • with many sites

        • with prior patient consent for quality improvement and QI research activities.

    • Reducing data collection costs

      • Electronic medical record

      • Clever use of existing data, like claims

      • Web-based and other unconventional surveys

