
Evaluating Technology Programs

Presentation Transcript


  1. Evaluating Technology Programs. Evvy Award: The US MEP Program - A System Model? Workshop on “Research Assessment: What’s Next,” Arlie House, May 18, 2001. Philip Shapira, School of Public Policy, Georgia Institute of Technology, Atlanta, USA. Email: ps25@prism.gatech.edu

  2. Overview

  3. Evaluation case: The US Manufacturing Extension Partnership (MEP)
  • MEP program aims:
  • “Improve the technological capability, productivity, and competitiveness of small manufacturers.”
  • “Transform a larger percentage of the Nation’s small manufacturers into high performance enterprises.”
  • Policy structure: federal-state collaboration
  • Management: decentralized partnership - 70 MEP centers
  • Services: 25,000 firms assisted/year (assessments 18%; projects 60%; training 22%)
  • Revenues: 1999-00 ~ $280m - federal $98m (35%); state $101m (36%); private $81m (29%) (see the arithmetic check below)
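As a quick check of the funding split on slide 3, here is a minimal Python sketch; the dollar figures are taken from the slide and the percentage shares are simply recomputed from them (the state share rounds to 36% of the $280m total).

```python
# Recompute the 1999-00 MEP funding shares from the revenue figures on slide 3.
# Dollar amounts ($m) are from the slide; the percentages are derived, not quoted.
revenues_m = {"Federal": 98, "State": 101, "Private": 81}

total = sum(revenues_m.values())  # $280m, matching the slide's total
for source, amount in revenues_m.items():
    print(f"{source}: ${amount}m ({amount / total:.0%} of ${total}m)")
# Federal: $98m (35% of $280m)
# State: $101m (36% of $280m)
# Private: $81m (29% of $280m)
```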

  4. MEP Program Model [diagram: Centers, Projects, Companies; Development Outcomes, Intermediate Actions, Business Outcomes]

  5. MEP Evaluation System [diagram: NIST - telephone survey of project customers based on center activity data reports, panel reviews of centers and staff oversight, National Advisory Board review of program, special studies; federal oversight (e.g. GAO); 3rd-party sponsors; independent researchers & consultants; state evaluations; MEP program]

  6. Complex Management Context for Evaluation [diagram: Centers, Projects, Companies; Development Outcomes, Intermediate Actions, Business Outcomes; center benchmarking, center plans, center reviews, NIST plans, GPRA goals, National Advisory Board, activity reporting, center boards, customer surveys, special evaluation studies, needs assessments]

  7. 30 MEP Evaluation Studies, 1994-99: Multiple Methods

  8. 30 MEP Evaluation Studies, 1994-99: Varied Performers. MEP revenues 1994-99: ~$1.0B-$1.2B. Evaluation expenditures: ~$3m-$6m (uncertain estimate), i.e. roughly 0.25%-0.50% of program revenues.
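The percentage range on slide 8 is a back-of-the-envelope ratio. A hedged sketch of that arithmetic, pairing the extreme ends of both ranges (the slide flags the evaluation-spend figure as uncertain; its quoted ~0.25%-0.50% sits inside the band computed here):

```python
# Evaluation spending as a rough share of MEP revenues, 1994-99.
# Both ranges are the approximate figures from slide 8, not measured data.
revenues_usd = (1.0e9, 1.2e9)     # ~$1.0B - $1.2B program revenues
eval_spend_usd = (3.0e6, 6.0e6)   # ~$3m - $6m evaluation expenditures (uncertain)

low = eval_spend_usd[0] / revenues_usd[1] * 100    # lowest spend over highest revenue
high = eval_spend_usd[1] / revenues_usd[0] * 100   # highest spend over lowest revenue
print(f"Evaluation spend ~= {low:.2f}% to {high:.2f}% of program revenues")
# Evaluation spend ~= 0.25% to 0.60% of program revenues
```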

  9. Utility of Evaluative Methods (with schematic ranking, based on GaMEP experience 1994-00). Note: ranking scale (schematic): 5 = extremely important; 3 = somewhat important; 1 = not important. Weights are based on experience, not measurement.

  10. 30 MEP Evaluation Studies, 1994-99: Summary of Key Findings
  • More than 2/3 of customers act on program recommendations.
  • Enterprise staff time committed exceeds staff time provided (leverage).
  • More firms report impacts on knowledge and skills than are able to report hard dollar impacts.
  • Networked companies using multiple public and private resources have higher value-added than more isolated firms (raises issues of attribution).
  • Robust studies show skewed results - important impacts for a few customers, moderate impacts for most.
  • Service mix and duration matter in generating impacts.
  • Case studies show that management and strategic change in companies is often a factor in high-impact projects.
  • In comparative studies, there is evidence of improvements in productivity, but these improvements are modest; impact on jobs is mixed.
  • Cost-benefit analyses show moderate to strongly positive paybacks.

  11. 30 MEP Evaluation Studies, 1994-99: Assessment
  • Advantages: multiple methods and perspectives; encourages methodological innovation; discursive - findings promote exchange and learning; can signpost improved practice.
  • Challenges: fragmented - many results, some contradictory; program justification still prime; variations in quality - reliable measurement often a problem; dissemination; different evaluation methods are received and valued differently by particular stakeholders; agency interest in sponsorship may be waning - fear of “non-standard” results.

  12. Insights from the MEP case (1)
  • Technology program evaluation should not focus exclusively on narrow economic impacts; it should also assess knowledge transfer and strategic change, and stimulate learning and improvement.
  • Multiple evaluation methods and performers are key to achieving this goal.
  • Strong internal dynamic to promote assessment, benchmarking, and discursive evaluation.
  • Illustrates a “networked evaluation partnership”:
  • Balancing of federal and state perspectives, with the federal role adding resources and consistency to the evaluation system.
  • Local experimentation is possible and can be assessed.
  • Emergence of an evaluation cadre and culture - development of methodologies.
  • Highly discursive: signposts improved practice.
  • Evaluation becomes a forum to negotiate program direction.

  13. Insights from the MEP case (2)
  • Also illustrates threats:
  • Variations in robustness, effectiveness, and awareness of the multiple evaluation studies.
  • Oversight “demand” for a complex evaluation system is weakly expressed - GPRA is a “low” hurdle to satisfy.
  • Agency push for “results” and performance measurement (rather than evaluation) - fear of non-standard results.
  • Vulnerability to fluctuations in agency will to support independent outside evaluators.
  • Translating evaluation findings into implementable program change is a challenge, especially as the program matures.
  • Threats to the learning mode? Maturation; bureaucratization; standard-result expectations; political support.
