
Discussion From Republic of Science to Audit Society, Irwin Feller



Presentation Transcript


  1. Discussion From Republic of Science to Audit Society, Irwin Feller S. Charlot ASIRPA, Paris, France; June 13, 2012

  2. Outline • New Questions/Issues, What’s at Stake & How They’re Answered • Validity of Performance Metrics and Methodological Choice(s) → Econometrics • Use, Non-Use and Misuse of Research Assessments

  3. Pre-New Public Management Assessment Paradigm • Republic of Science (M. Polanyi) • Peer (Expert) Review • Social Contract

  4. New Public Management Paradigm • Accountability • Deregulation • Competition (among different uses of public funds) • Performance Measurement (for evaluating research uses)

  5. Promises of Research Performance Assessment • Objectives provide useful baseline for assessing performance. • Performance measurement focuses attention on the end objectives of public policy, on what’s happened or happening outside rather than inside the black box. • Well defined objectives and documentation of results facilitate communication with funders, performers, users, and others.

  6. Limitations of Research Performance Measurement • Returns/impacts of research are uncertain, long-term, and circuitous • Specious precision in the selection of measures • Impacts typically depend on complementary actions by agents outside of Federal agency control • Limited (public) evidence of contributions to improved decision making • Benefits from “failure” are underestimated • Distortion of incentives: opportunistic behavior (young researchers seeking employment and senior researchers chasing future funding) First comment/issue: add the role of creativity and very innovative ideas in scientific progress (i.e., “scientific revolutions”)

  7. Overview of Evaluation Methodologies

  8. Second comment/question: complementarities between methodologies • Econometric modeling needs analytical/conceptual modeling of the underlying theory to be pertinent • Econometric analysis also needs to take the policy design, context… into account to be pertinent • Surveys, case studies… there is no econometric identification of impacts without these components in the evaluation model

  9. Complementarity, second example: Benefit-Cost Analysis can be carried out with an econometric model. [Flow diagram, RTI 2010: Conduct Technical Analysis → Identify Next Best Alternative → Estimate Program Costs → Estimate Economic Benefits → Determine Agency Attribution → Estimate Benefits of Economic Return] (a minimal sketch of the arithmetic follows)
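
To make the arithmetic behind such a benefit-cost exercise concrete, here is a minimal sketch in Python: discount yearly costs and benefits, apply an attribution share, and report net present value and a benefit-cost ratio. All figures (costs, benefits, attribution share, discount rate) are invented for illustration and are not taken from the presentation or from RTI 2010.

```python
# Minimal benefit-cost sketch with hypothetical figures.

def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

costs = [10.0, 4.0, 4.0, 1.0]        # program costs per year (assumed)
benefits = [0.0, 2.0, 8.0, 15.0]     # gross economic benefits per year (assumed)
attribution = 0.6                    # share of benefits attributed to the agency (assumed)
rate = 0.07                          # real discount rate (assumed)

pv_costs = npv(costs, rate)
pv_benefits = attribution * npv(benefits, rate)

print(f"PV costs:    {pv_costs:.2f}")
print(f"PV benefits: {pv_benefits:.2f}")
print(f"NPV:         {pv_benefits - pv_costs:.2f}")
print(f"B/C ratio:   {pv_benefits / pv_costs:.2f}")
```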

  10. Microeconometrics of policy evaluation [timeline: outcomes observed before (τ) and after (τ + 1) the intervention] Issue: a Before/After design shows changes “related” to the policy intervention, but does not adjust for “intervening” factors (threats to internal validity). Reframe the analysis: did the policy “cause” change(s) in the treatment group different from those observable in a comparison/control group? (see the sketch below)
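
A minimal sketch of this reframing, on simulated data: a difference-in-differences comparison nets out the common before/after trend that confounds a simple pre/post comparison on the treated group alone. All numbers and variable names below are invented for illustration; this is not the estimator used in the talk.

```python
# Difference-in-differences sketch on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = unit exposed to the policy (assumed)
    "post": rng.integers(0, 2, n),      # 1 = observed at tau + 1, after the policy
})
# Simulated outcome: common time trend + group difference + true policy effect of 2.0
df["outcome"] = (1.0 * df["post"] + 0.5 * df["treated"]
                 + 2.0 * df["treated"] * df["post"] + rng.normal(0, 1, n))

# Naive before/after estimate on treated units only (confounded by the time trend)
treated = df[df["treated"] == 1]
naive = (treated.loc[treated["post"] == 1, "outcome"].mean()
         - treated.loc[treated["post"] == 0, "outcome"].mean())

# DiD estimate: coefficient on the treated-by-post interaction
did = smf.ols("outcome ~ treated * post", data=df).fit()
print(f"Before/after only: {naive:.2f}")
print(f"DiD estimate:      {did.params['treated:post']:.2f}")
```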

  11. Third comment/question Econometric enhancements: • Non-parametric analysis → no a priori constraint on the relationship between the outcome (whatever outcome is chosen) and R&D spending or funding • No knowledge production function a priori • Taking into account the effect of non-observable or time-varying characteristics on outcomes → context, context, context (a sketch follows)
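
As an illustration of the non-parametric point, a short sketch that estimates the outcome-funding relationship without imposing a functional form, using a Nadaraya-Watson kernel estimator. The data and estimator are purely illustrative assumptions, not the methodology discussed in the talk; in practice one would also address unobserved and time-varying characteristics (e.g., with panel fixed effects).

```python
# Non-parametric regression of an outcome on R&D funding (simulated data).
import numpy as np

rng = np.random.default_rng(1)
funding = rng.uniform(0, 10, 300)                          # R&D funding per unit (assumed)
outcome = 3 * np.log1p(funding) + rng.normal(0, 0.5, 300)  # "unknown" true relationship

def kernel_regression(x_grid, x, y, bandwidth=0.8):
    """Nadaraya-Watson estimate of E[y | x] on a grid, Gaussian kernel."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(0.5, 9.5, 10)
fitted = kernel_regression(grid, funding, outcome)
for g, f in zip(grid, fitted):
    print(f"funding={g:4.1f}  E[outcome | funding] ~ {f:5.2f}")
```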

  12. Fourth comment/question: the “dominant” U.S., but also European Union, methodology is Expert Panels • Problem of network effects • Same issue as peer evaluation and bibliometrics → is this only an issue for “low impacts” (publications…) but not for high impacts???

  13. Is Anyone Listening? My small experience (one evaluation report): no one is listening. As a researcher, I agree that “Doing good may not make you happy, but doing wrong will certainly make you unhappy”. But for a novice at evaluating policy, what are the arguments not to stop this kind of intellectual exercise (except publishing or funding research)? What type of advice? For ASIRPA?

  14. Thank you
