
Evaluation's Contribution to the Measurement of Research, Technology and Development: From Prescription to Menu to Guidelines. Presentation to the Research Technology and Development Technical Interest Group, American Evaluation Association, Orlando, Florida, November 2009.


Presentation Transcript


1. Evaluation's Contribution to the Measurement of Research, Technology and Development: From Prescription to Menu to Guidelines. Presentation to the Research Technology and Development Technical Interest Group, American Evaluation Association, Orlando, Florida, November 2009. steve.montague@pmn.net / rvalentim@cancer.ca

2. The Situation
• Evaluation has influenced the performance measurement of RT&D
• Traditionally, a prescription has been offered:
  - Inputs
  - Outputs
  - Publication productivity
  - Citations
  - Some 'impacts'
• Problem: the mission is sometimes neglected
www.pmn.net

3. Some Background
• A short 'history' of RT&D indicators:
• A paper by Fred Gault, Statistics Canada (SPRU 2006) provides an excellent review of Canadian and international R&D and innovation statistics
• Accepted input indicators include GERD and R&D personnel, assumed to result in innovation (input/output model)
• Output indicators link to IP (publications, patents); new highly qualified personnel (HQP) are also assumed to link directly to innovation and economic growth
• Outcomes relate to increased innovation and need to consider a wide range of factors outside the direct influence of R&D
• There is recognition that the linear innovation model is simplistic and doesn't reflect sectoral differences, the influence of other policies, or the international economic situation
• More recently we have seen these formalized as scorecards and 'payback' categories
www.pmn.net

  4. Evaluating the Valuation Approaches www.pmn.net

5. Recent Developments
• Results logic to 'inform' indicators
• Sort the levels
• Suggest the connections
www.pmn.net

6. [Figure-only slide; no recoverable text] www.pmn.net

7. A Generic Version of the Diffusion Model Appropriate to Knowledge Entities
Adopter groups: innovators; early adopters; early and late majority. Diffusion channels: broadcast and contagion (others observe changes and learn through peer interaction). The main stages flow as follows (a structural sketch follows the list):
• Access information and are aware of: knowledge gaps; information resources (curriculum, tech reports, etc.); program opportunities (program offerings/labels, etc.); EERE potential/opportunities
• Persuasion / information search: more knowledge sought in this area; new research ideas developed; obtained and tried knowledge tools (e.g., software, curriculum); demonstrated/saw demonstration; hands-on experience
• Product filter (knowledge/tool has value):
  - Relative advantage: more credible information; information confers economic, efficiency, production and quality benefits
  - Compatible: with other knowledge/tools, production requirements, organizational goals/systems, culture
  - Complexity: easy to understand; simplifies existing knowledge
  - Can be tried by the target audience; benefits of use can be observed
• Design/Plan: changed knowledge-seeking behavior (e.g., research approach, how savings are calculated, use of different procedures, consideration of labeling criteria); considered limitations, benefits/opportunities (e.g., in setting standards)
• Implement: changed attitude; perception of value used in negotiations, textbooks, technical specifications; used tool to measure, choose options, shift loads
• Implemented (behavior/practices changed): changes at the same site; changes at another site
• Maintain: keep databases current, correct, consistent over time (e.g., climate data, wind resources data)
• Confirm value: knowledge valued in the community; knowledge cited by others; knowledge used by others
• Capital: larger, stronger knowledge community / critical mass, core competencies; invest in more research in this area
• Advocacy (spreads word to others): awareness of confirmed value spreads through advocacy groups, peer networks, one-to-one interactions
• Replication: emulate changes within organization/household/facility/firm; replication by those not directly involved with the program; widespread acceptance of the product among the target group; tip the market; market penetration increases
• Sustained knowledge: the R&D community, policy makers, households and firms use and share the knowledge routinely in textbooks, technical manuals, regulations, etc.
www.pmn.net
Source: A Generic Theory-Based Logic Model for Creating Scientifically Based Program Logic Models. John Reed, AEA 2006
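As a minimal structural sketch of the flow above: the code below assumes one simply wants to track where a knowledge product sits in the diffusion sequence. The stage labels paraphrase the diagram, and the `KnowledgeEntity` class and example entity are illustrative assumptions, not part of Reed's model.

```python
from dataclasses import dataclass

# Ordered stages paraphrasing the diffusion diagram above (illustrative labels).
STAGES = [
    "aware",            # access information; aware of gaps and opportunities
    "persuaded",        # information search; knowledge tools obtained and tried
    "designed",         # changed knowledge-seeking behaviour; plans made
    "implemented",      # behaviour/practices changed at a site
    "maintained",       # data kept current, correct, consistent over time
    "value_confirmed",  # knowledge valued, cited, and used by others
    "advocated",        # word spread via peers, networks, advocacy groups
    "replicated",       # emulated by those not directly involved
    "sustained",        # used routinely in texts, manuals, regulations
]

@dataclass
class KnowledgeEntity:
    """Hypothetical record of where a knowledge product sits in the flow."""
    name: str
    stage: str = STAGES[0]

    def advance(self) -> None:
        """Move to the next diffusion stage, if one remains."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]

entity = KnowledgeEntity("building-efficiency dataset")
entity.advance()
print(entity.name, "->", entity.stage)  # -> persuaded
```

A flat ordered list keeps the sketch short; a fuller tracker would also record which adopter group (innovators, early adopters, majority) has been reached at each stage.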

8. Logic Chart for a Research and Technology Development and Deployment Program Source: Logic Models: A Tool for Telling Your Program's Performance Story. John A. McLaughlin and Gretchen B. Jordan, July 1998 www.pmn.net

9. CAHS Framework Logic Model of Health Research Progression to Impacts Source: Making an Impact, Canadian Academy of Health Sciences, Assessment Report, January 2009. www.pmn.net

10. The Challenge: Can We Go Farther?
• Can we use results logic, or a simplified results hierarchy, to sort RT&D indicators?
• Can an innovation diffusion model help? (Rogers, Bennett)
• Can we increase the emphasis on learning, as compared to 'justification'?
www.pmn.net

11. CAHS ROI Indicators
Over 60 indicators across the following categories:
• Advancing knowledge: measures of research quality, activity, outreach and structure. Some aspirational indicators of knowledge impacts are identified, using data that are highly desirable but currently difficult to collect and/or analyze, such as an expanded relative-citation impact that covers a greater range of publications, including book-to-book citations and relative download rates per publication compared to a discipline benchmark (a possible formulation is sketched after this list).
• Research capacity building: indicators and metrics fall into subgroups representing personnel (including aspirational indicators for improving receptor and absorptive capacity), additional research-activity funding, and infrastructure.
• Informing decision-making: indicators and metrics represent the pathways from research to its outcomes in health, wealth and well-being. They fall into health-related decision-making (where health is broadly defined to include health care, public health, social care, and other health-related decisions such as environmental health); research decision-making (how future health research is directed); health-products industry decision-making; and general public decision-making. Two aspirational indicators are provided for this category: media citation analysis and citation in public policy documents.
• Health impact: indicators and metrics on health status, determinants of health and health-system changes, including quality of life as an important component of improved health. Determinants-of-health indicators can be further classified into three major subcategories: modifiable risk factors, environmental determinants, and modifiable social determinants.
• Broad economic and social impacts: classified into activity, commercialization, health benefit (specific costs of implementing research findings in the broad health system), well-being, and social-benefit (socio-economic benefit) indicators.
www.pmn.net
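The expanded relative-citation impact flagged above as aspirational is, at its core, a field-normalized ratio. As an illustration (our formulation, not the CAHS definition):

\[ \mathrm{RCI} = \frac{C_{\text{unit}} / P_{\text{unit}}}{C_{\text{field}} / P_{\text{field}}} \]

where \(C\) counts citations, \(P\) counts publications, and the denominator is the discipline benchmark; \(\mathrm{RCI} > 1\) indicates above-benchmark citation performance. The expanded version would compute the same ratio over a broader publication base (including book-to-book citations) and over per-publication download rates.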

  12. The RAND Payback Categories www.pmn.net Source: The Returns from Arthritis Research, RAND Europe 2004

13. Theory of Action: Levels of Evidence and the Logic Model
The program chain of events (theory of action, the program design hierarchy) with its matching levels of evidence (the hierarchy of evaluation criteria); a small sorting sketch follows the table:
7. End results: measures of impact on the overall problem, ultimate goals, side effects, social and economic consequences
6. Practice and behavior change: measures of adoption of new practices and behavior over time
5. Knowledge, attitude, and skill changes: measures of individual and group changes in knowledge, attitudes, and skills
4. Reactions: what participants and clients say about the program; satisfaction, interest, strengths, weaknesses
3. Participation: the characteristics of program participants and clients; numbers, nature of involvement, background
2. Activities: implementation data on what the program actually offers or does
1. Inputs: resources expended; number and types of staff involved; time expended
Source: Adapted from Claude Bennett 1979. Taken from Michael Quinn Patton, Utilization-Focused Evaluation: The New Century Text, Thousand Oaks, California, 1997, p. 235.
www.pmn.net
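To connect this hierarchy back to slide 10's question (can a results hierarchy sort RT&D indicators?), here is a minimal sketch in Python. The `LEVELS` encoding follows Bennett's seven levels above, while the indicator names are hypothetical examples, not drawn from Bennett or Patton.

```python
# Bennett's seven levels, as listed above (1 = inputs ... 7 = end results).
LEVELS = {
    1: "Inputs",
    2: "Activities",
    3: "Participation",
    4: "Reactions",
    5: "Knowledge, attitude, and skill changes",
    6: "Practice and behavior change",
    7: "End results",
}

# Hypothetical RT&D indicators, each tagged with the level it evidences.
indicators = [
    ("Reduced disease burden", 7),
    ("Adoption of a new protocol in practice", 6),
    ("Clinicians' awareness of the protocol", 5),
    ("User satisfaction with research outputs", 4),
    ("Trainees and HQP engaged", 3),
    ("Peer-reviewed publications produced", 2),
    ("GERD / grant dollars expended", 1),
]

# Sorting by level turns a flat indicator list into a results hierarchy.
for name, level in sorted(indicators, key=lambda pair: pair[1]):
    print(f"{level}. {LEVELS[level]}: {name}")
```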

14. Results Chains vs. RAND Case Categories
Results chain levels mapped to RAND 'payback' scoring areas (a compact encoding follows the table):
7. End results: final outcomes*; wider economic sector benefits**; health sector benefits**
6. Practice and behavior change: adoption by practitioners and public*; informing policy and product development**
5. Knowledge, attitude, skill and/or aspiration changes: knowledge production / capacity building**
4. Reactions: (no payback area listed)
3. Engagement / involvement: research targeting (participation of groups)**
2. Activities and outputs: issue identification, project selection, research process*
1. Inputs: inputs to research*
*RAND, The Returns from Arthritis Research (2004), Section 2.1; **ibid., Section 3.3.2
Note: These results categories can be tracked over time to tell a performance story.
www.pmn.net
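The crosswalk lends itself to a compact encoding. The sketch below follows the table above, though the dictionary structure and the lookup helper are our own illustration.

```python
# Results-chain levels -> RAND 'payback' scoring areas, per the table above.
CHAIN_TO_PAYBACK = {
    "End results": [
        "Final outcomes",
        "Wider economic sector benefits",
        "Health sector benefits",
    ],
    "Practice and behavior change": [
        "Adoption by practitioners and public",
        "Informing policy and product development",
    ],
    "Knowledge, attitude, skill and/or aspiration changes": [
        "Knowledge production / capacity building",
    ],
    "Reactions": [],
    "Engagement / involvement": [
        "Research targeting (participation of groups)",
    ],
    "Activities and outputs": [
        "Issue identification, project selection, research process",
    ],
    "Inputs": ["Inputs to research"],
}

def payback_areas(chain_level: str) -> list[str]:
    """Return the RAND payback areas that evidence a given chain level."""
    return CHAIN_TO_PAYBACK.get(chain_level, [])

print(payback_areas("Practice and behavior change"))
```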

  15. A Results Chain Typical Indicator and Measurement “Menu” for CCSRI National Programs and Networks www.pmn.net

16. The Vision
• Refer to a hierarchy to sort key indicators
• Focus on mission as the key end outcome
• Tell a performance story over time
• Emphasize behaviour change
• Employ 'realist evaluation' questions, not simply 'payback' questions
www.pmn.net

17.-22. Mock-up Results Chain Applied Over Decades to a Research Initiative
(Slides 17-21 build the chart period by period; slide 22 shows the complete version. A condensed sketch of the story follows this block.)
• T1 (years 0-5): 5-year operating grant for CCSRI (Res)
• T2 (6-10): fellowship/student award plus a further 5-year CCSRI operating grant (Res); first publications
• T3 (11-15): funding for CCSRI clinical trials, Phase 1 (CTG); research used by others; clinical trials (CTs) run; further publications
• T4 (16-20): further clinical trial funding, CCSRI Phase 2 (CTG), and communications activities; published CT findings and learning; community support; approval of use of X by 'accrediting bodies' A, B and C; commitment to use and 'practice' learning (use of X)
• T5 (21-25) and T6: expanded research and implementation CCSRI funding, with further communications activities; engagement of health organizations and health practitioners; use of X; reduced 'burden' (incidence/mortality/morbidity/effects, etc.) by Y
In slide 22 each result is starred with its linkage to the RAND payback categories: 1. Knowledge creation (publications, published CT findings and learning); 2. Research targeting, capacity building (research used by others, clinical trials); 3. Informing policy or product development (accrediting-body approvals, commitments to use, use of X); 4. Health benefits (reduced burden); 5. Broader economic benefits.
Source: S. Montague, ACOR presentation, May 27, 2009
www.pmn.net
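As a rough illustration of how such a chain supports a performance story over time, here is a sketch: the periods and the 1-5 payback tags follow the mock-up above, but the data structure and the condensed result descriptions are our own assumptions.

```python
# Condensed performance story drawn from the mock-up above:
# (period, years, result summary, RAND payback category tag 1-5).
PAYBACK = {
    1: "Knowledge creation",
    2: "Research targeting, capacity building",
    3: "Informing policy or product development",
    4: "Health benefits",
    5: "Broader economic benefits",
}

story = [
    ("T1", "0-5",   "5-year operating grant; first publications", 1),
    ("T2", "6-10",  "Fellowship award; research used by others", 2),
    ("T3", "11-15", "Phase 1 clinical trials run", 2),
    ("T4", "16-20", "Accrediting bodies A, B, C approve use of X", 3),
    ("T5", "21-25", "Reduced 'burden' by Y", 4),
]

# Walk the chain in time order, showing each result's payback linkage.
for period, years, result, tag in story:
    print(f"{period} ({years}): {result} [{PAYBACK[tag]}]")
```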

23. Realist Evaluation Questions: An Explanatory Compendium for Complex RDT Initiatives (adapted from Evidence-Based Policy, Ray Pawson)
• Program theories: how is the RDT initiative supposed to work?
• Reasoning and reactions of stakeholders: are there differences in the understanding of the RDT initiative theory?
• Integrity of the implementation chain: is the RDT initiative theory applied consistently and cumulatively?
• Negotiation and feedback in implementation: does the RDT initiative theory tend to bend in actual usage?
• Contextual influences: does the RDT initiative theory fare better with particular individuals, interpersonal relations, institutions and infrastructures?
• History of the RDT initiative and relationships with other policies: does the policy apparatus surrounding the theory advance or impede it?
• Multiple, unintended, long-term effects: is the theory self-affirming, self-defeating, or self-neutralizing?
www.pmn.net

24. Challenges
• Research culture vs. management culture: investigator-driven vs. mission-driven
• The criticism that this might drive out explorative innovation
• 'Getting' the point that the real evaluation story is all about human behaviours
www.pmn.net

25. Conclusions
• A hierarchy of results can help to sort RT&D indicators
• A storyline is key to providing indicator context; evaluation logic models offer this, but they need to be adapted for RT&D. Stay tuned.
• Realist evaluation questions may prove more promising as a support to management than simple 'payback' enquiries, at least if we are serious about learning as well as 'justification'
• Principle-based guidelines work better than prescriptions
www.pmn.net
