
Research Utilization: Moving Research to Practice

Laura Cohen, PhD, Mobility RERC, Shepherd Center; Stephen Sprigle, PhD, Mobility RERC, CATEA, GA Tech. ISS 2007.


Presentation Transcript


  1. Research Utilization: Moving Research to Practice Laura Cohen, PhD, Mobility RERC, Shepherd Center; Stephen Sprigle, PhD, Mobility RERC, CATEA, GA Tech. ISS 2007. Funding for this program was provided by NIDRR through the RERC for Wheeled Mobility (H133E030035) and the Research Utilization Support and Help (RUSH) Project (H133A031402).

  2. Agenda • Overview of issues related to Knowledge Translation (KT) • Models of Training Evaluation • Models of Research Utilization • Mobility RERC KT project • Measures and Constructs Used • Our Project & Results • Future Steps - Discussion

  3. Research to Practice Why is it important? • Accountability • Results • Innovation • Results shape development, practice, & policy Who Cares? • Patients and families • Payors • Policy Makers

  4. What is Knowledge Translation (KT)? “KT is both a process and a strategy that can lead to utilization of research findings and improved outcomes for consumers” Canadian Institutes for Health Research (2004)

  5. Dissemination Challenges • Multiple challenges to disseminating innovations across clinical practice • “Between the health care we have and the care we could have lies not just a gap but a chasm” (IOM, 2001) • “In health care, invention is hard, but dissemination is harder” (Berwick, 2003)

  6. Three things that influence adoption of innovation • How the innovation is perceived • Characteristics of people who do/don’t adopt the innovation (early & late adopters) • Contextual factors such as: • leadership • management • incentives • communication

  7. Questions • Why is there a chasm between new knowledge and health care practice? • Why don’t clinicians readily incorporate the findings of clinical research into their daily practice? • Is there a knowledge gap, or is there something more fundamental and complex involved? • What can professions do to speed up the dissemination of innovations into clinical practice?

  8. [Diagram] Clinical Practice: what is done. Research: what is known.

  9. What has already been done? Research on the effectiveness of education • Review of 600 articles published in medical education research journals • Only 4 studies measured patient outcomes • The remainder were split between measuring acquisition of knowledge and measuring satisfaction • Same issue for rehab education

  10. Looking at professional education • How do we demonstrate that professional education is producing clinicians who deliver high-quality care? • What is the effect of professional education on improving patient care? • What is the potential for research using patient-centered clinical outcomes to measure the performance of professional education?

  11. Current Model of Training • Confounders: • funding • workplace culture • access to technology • healthcare system

  12. Kirkpatrick’s Hierarchy: How to evaluate the effectiveness of training. Donald Kirkpatrick (1959)
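
As a quick reference, here is a minimal, purely illustrative Python sketch of the hierarchy as a lookup table; the one-line questions paraphrase the level descriptions on the slides that follow (Reaction, Learning, Behavior, Results) rather than quoting Kirkpatrick verbatim.

```python
# Kirkpatrick's four evaluation levels as a simple lookup table.
# The question wording paraphrases the slides that follow; it is
# illustrative, not an official formulation.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "Did trainees find the training satisfying and relevant?"),
    2: ("Learning", "Did trainees acquire the intended knowledge or skills?"),
    3: ("Behavior", "Did trainees change how they practice on the job?"),
    4: ("Results", "Did the organization see improved outcomes?"),
}

for level, (name, question) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): {question}")
```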

  13. Kirkpatrick’s Hierarchy • Acclaims: simple; pragmatic model for thinking about training • Criticisms: implies a hierarchy of value related to the levels; assumes the levels are associated; implies a causal relationship; fails to account for confounders

  14. Issues Affecting Evaluation

  15. Reaction Level • Little correlation between • learner reactions & measures of learning OR • learner reactions & measures of changed behavior • “Satisfaction” is not necessarily related to good learning, and sometimes “discomfort” is essential • Mixed results may indicate that reaction-level measures say more about the perceived value of the training than about its effectiveness

  16. Learning • Literature encourages use of pre/post questionnaires to gauge learning • Trainee might be able to repeat what they have learned but NOT be able to apply it • Performance during training may not be a predictor of post-training performance • Testing may not be appropriate for measuring attainment of skills

  17. Behavioral Change • Organizational factors • Work Culture, Administrative Support (top down) • Other factors • Perceived difficulty, Perceived usefulness, Job commitment • Individual Factors • Flexibility to change, Motivation to learn – curiosity, Learning curve • Evaluation of behavior change needs to account for these factors

  18. Organizational Results • Most difficult level of evaluation • Implies training must be evaluated using hard outcome data • Inherent difficulties • Linking soft skills training to hard results • Time delays in measurement • Hard measures miss much that is of value

  19. What is happening now • Training evaluation is becoming more common • The predominant level of analysis is Level 1 • Few attempts at Levels 3 or 4 • Few companies with comprehensive training evaluation attempt to justify the ROI of training

  20. Conclusion • Kirkpatrick’s model remains useful • frames where evaluation might be made • Remember to: • Consider intervening factors affecting the strength of the links between levels • Provide supports to help practitioners undertake meaningful evaluation of use to the organization • Know what not to evaluate • Keep it simple

  21. How do you do training? Models of Research Utilization • Best-Practice Knowledge Transfer • Collaborative Support • Knowledge Synthesis • Technology Transfer • Other models • Each includes evaluation to determine impact

  22. Best Practice Knowledge Transfer • Specific to setting • Generalizes research findings to real life • Takes conditions from the research protocol and transfers them to the clinical setting • Effective when the transfer of skills and behaviors among service providers is the intended outcome

  23. Collaborative Support • Credible source of information • User’s perceptions of information • Probability of using information • Networking: careful selection of partners • Increases credibility with intended audiences and systems • Resources & means to disseminate • User-friendly format • Effective for awareness, attitudes, and behaviors

  24. Knowledge Synthesis • Knowledge is not something that can simply be “sent” and “received” • Requires understanding between • developers • users • Affects short-term outcomes in awareness, learning, and behaviors

  25. Technology Transfer: Idea → Prototype → Useful Product or Technology • Activities/events that support the movement of a prototype to adoption • push/pull forces • Usable and beneficial to the target group • Affects outcomes in the areas of awareness, motivations, and decisions

  26. Other Models • Blend of models • Select a theory-based approach for the target audience, system, or outcomes • Strategies vary depending on • characteristics of the research results • the target user and/or system

  27. Effect of an educational research dissemination program on practice patterns for professionals recommending manual wheelchairs (MWCs) Objective • To measure the utilization of rehabilitation research training by measuring short- and mid-term impacts on the knowledge, attitudes, and behaviors of clinicians.

  28. Training Program Intervention • Program Design • Needs assessment • current research related to seating & mobility (SM) • how to compare equipment from one manufacturer to another • how to justify equipment • Targeted clinicians responsible for, but with low exposure to, MWC evaluations • SADMERC identified cities in need of education/training • 15 contact hours (1.5 CEUs) • 5.5 hrs equipment & labs • 3 hrs case studies and group discussion • Six two-day training programs

  29. Study Enrollment • 160 enrolled, 139 completed • 23 withdrew or changed groups (16 lost to follow-up) • 2 changed groups (utilization to conference-only) • 1 changed from utilization to control • Reason: lack of post-conference work product reviews (WPRs) • Utilization group: 48 (n=38) • 291 pre WPRs, 209 post WPRs • Conference-only group: 84 • 57 clinicians (n=52) • 27 suppliers (n=23) • Control group: 28 (n=26)

  30. Pre/Post Measures • Reaction (Kirkpatrick’s Level 1) • Conference evaluation form • Knowledge (Kirkpatrick’s Level 2) • Knowledge Questionnaire: 15 multiple-choice items • Attitudes (Kirkpatrick’s Level 3) • Manual Wheelchair (MWC) Questionnaire • Behaviors (Kirkpatrick’s Level 3) • Work Product Reviews (WPRs) • Feature tracking (Utilization Practices) • Rubric scoring (Rationale)

  31. MWC Questionnaire (Kirkpatrick’s Level 3)

  32. Work Product Review • Reviewed letters of medical necessity (LOMNs) and order forms • Feature Match • Surveyed the range of features specified • Rubric • Appraised clinical rationale using a rubric • Domains • Problem Identification • Feature Match • Solution Selection • Overall Impression • Reliability Testing • Intrarater reliability (n=1 rater, 10 random files, 1 month apart) • Coefficient alpha .93 (rubric) • Coefficient alpha .95 (feature match)
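
The coefficient alpha values above are Cronbach's alpha reliability estimates. A minimal sketch of the computation, assuming a (files × administrations) score matrix; the `cronbach_alpha` helper and the toy scores below are hypothetical illustrations, not the study's data.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (cases x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    ratings = np.asarray(ratings, dtype=float)
    n_items = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each item/administration
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of per-case total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 10 files scored twice by the same rater
# (mirroring the n=1 rater, 10 random files, 1 month apart design).
scores = np.array([
    [12, 13], [9, 9], [15, 14], [11, 12], [8, 8],
    [14, 15], [10, 10], [13, 12], [7, 8], [16, 16],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```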

  33. [Demographics] Utilization cohort & conference-only groups: comparable degree, profession, years of practice, and years in seating & mobility (SM). Suppliers: more SM hours and professional development hours.

  34. Rubric Analysis • 38 subjects • Different numbers of pre (291) and post (209) WPRs • Weighted totals used for analysis • Paired-sample correlations for pre/post administrations were significant (from r=.655 to r=.842) • Paired-sample t-tests, Bonferroni-corrected for multiple testing, revealed no significant change for any section between pre/post administrations
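
A minimal sketch of the analysis pattern described above (paired-sample correlations plus Bonferroni-corrected paired t-tests across the four rubric domains); the domain names come from the work product review rubric, but all scores here are randomly generated placeholders, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sections = ["Problem Identification", "Feature Match",
            "Solution Selection", "Overall Impression"]
threshold = 0.05 / len(sections)  # Bonferroni-corrected significance level

for name in sections:
    # Simulated pre/post weighted section scores for 38 subjects.
    pre = rng.normal(3.0, 0.5, size=38)
    post = pre + rng.normal(0.05, 0.3, size=38)  # correlated with pre
    r, _ = stats.pearsonr(pre, post)             # paired-sample correlation
    t, p = stats.ttest_rel(pre, post)            # paired-sample t-test
    verdict = "significant" if p < threshold else "not significant"
    print(f"{name}: r={r:.2f}, t={t:.2f}, p={p:.3f} -> {verdict}")
```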

  35. Discussion • Pretest rubric scores were most predictive of posttest scores • Positive relationship between posttest scores and experience • Psychometric properties of the rubric • Good intrarater reliability • May not be sensitive to change associated with training • Possibly thwarted by the number of cases and facility documentation systems • Further psychometric development • Reliability (interrater reliability) • Validity (content)

  36. Feature Utilization

  37. Discussion • Feature match appears to be a psychometrically sound tool • Good test-retest reliability • Good internal consistency • Weighted feature match scores did show a significant difference in the features recommended, as expected • More features are not necessarily better

  38. Conclusion • Positive changes in knowledge scores following training • Attitudes and behaviors were not significantly influenced • Utilization practices showed improvement in the number of features specified, yet the quality of LOMNs did not change

  39. Further Measure Development • Psychometric development • MWC questionnaire • WPR measures (rubric, feature match) • Promising internal consistency • Test-retest reliability • Still need to determine responsiveness, validity, and reliability • Determine whether results were due to the sensitivity of the measures or the impact of the training

  40. Plan • New RUA project funded • Create a web-based distance education program • Use the project’s evidence-based training program • Examine differences in effectiveness between in-person and distance training

  41. Discussion • Future opportunities • Knowledge dissemination & training • Link clinical training to patient outcomes

  42. Our Project Partners

  43. Our Sponsors
