
Evaluating the effectiveness of innovation policies

Lessons from the evaluation of Latin American Technology Development Funds

Micheline Goedhuys

[email protected]

Structure of presentation
  • 1. Introduction to the policy evaluation studies:
    • policy background
    • features of TDFs
    • evaluation setup: outcomes to be evaluated, data sources
  • 2. Evaluation methodologies:
    • the evaluation problem
    • addressing selection bias
  • 3. Results from the Latin American TDF evaluations:
    • example results
    • summary of results
    • concluding remarks

1.A. Introduction: Policy background

Constraints to performance in Latin America

  • S&T falling behind in relative terms: a small and declining share of world R&D investment, an increasing gap with developed countries, and a loss of ground to other emerging economies
  • Low participation by the productive sector in R&D investment: lack of a technically skilled workforce, macroeconomic volatility, financial constraints, weak IPR, low quality of research institutes, lack of mobilised government resources, and a rentier mentality

1.A. Introduction: Policy background

Policy response: shift in policy

From a focus on the promotion of scientific research activities in public research institutes, universities and state-owned enterprises (SOEs)

To (from 1990 onward) the needs of the productive sector, with instruments that foster the demand for knowledge by end users and support the transfer of know-how to firms

TDFs emerged as an instrument of S&T policy

1.A. Introduction: Policy background
  • The Inter-American Development Bank (IDB) evaluated the impact of a sample of its S&T programmes, covering two frequently used instruments:
  • Technology Development Funds (TDFs): to stimulate innovation activities in the productive sector through R&D subsidies
  • Competitive research grants (CRGs)
  • The IDB's Office of Evaluation and Oversight (OVE) coordinated the work and compiled the results of the TDF evaluations in Argentina, Brazil, Chile and Panama (Colombia)

1.B. Introduction: Selected TDFs

1.B. Introduction: features of TDFs
  • Demand driven
  • Subsidy
  • Co-financing
  • Competitive allocation of resources
  • Execution by a specialised agency

1.C. Introduction: evaluation setup
  • Evaluation of TDFs at the recipient (firm) level
  • Impact on:
    • R&D input additionality
    • behavioural additionality
    • innovative output
    • performance: productivity and employment, and growth thereof

2.A. The evaluation problem (in words)
  • To measure the impact of a program, the evaluator is interested in the counterfactual question:

what would have happened to the beneficiaries if they had not had access to the program?

  • This, however, is not observed; it is unknown.
  • We can only observe the performance of non-beneficiaries and compare it to the performance of beneficiaries.

2.A. The evaluation problem (in words)
  • This comparison, however, is not sufficient to tell us the impact of the program: it captures correlation, not causality
  • Why not?
  • Because a range of characteristics may affect both the likelihood of accessing the program AND performance on the outcome indicators (e.g. R&D intensity, productivity…)
  • E.g. firm size, age, exporting status…

2.A. The evaluation problem (in words)
  • This means that ‘being in the treatment group or not’ is not the result of a random draw; firms select into a specific group along both observable and non-observable characteristics
  • The effect of selection has to be taken into account if one wants to measure the impact of the program on firm performance!
  • More formally….

2.A. The evaluation problem

Define:

Y^T = the average innovation expenses of a firm in a specific year if the firm participates in the TDF, and

Y^C = the average expenses of the same firm if it does not participate in the program.

  • Measuring the program impact requires measuring the difference (Y^T − Y^C), which is the effect of having participated in the program for firm i.

2.A. The evaluation problem
  • Computing (Y^T − Y^C) requires knowledge of the counterfactual outcome, which is not empirically observable, since a firm cannot be observed simultaneously as a participant and as a non-participant.

2.A. The evaluation problem
  • By comparing data on participating and non-participating firms, we can evaluate an average effect of program participation: E[Y^T | D=1] − E[Y^C | D=0]
  • Subtracting and adding E[Y^C | D=1] decomposes this into:

E[Y^T | D=1] − E[Y^C | D=0] = E[Y^T − Y^C | D=1] + ( E[Y^C | D=1] − E[Y^C | D=0] )

i.e. the average effect of treatment on the treated plus a selection bias term.

2.A. The evaluation problem
  • Only if there is no selection bias will the average effect of program participation give an unbiased estimate of the program impact
  • There is no selection bias if participating and non-participating firms are similar with respect to the dimensions that are likely to affect both the level of innovation expenditures and TDF participation

E.g. size, age, exporting, solvency… affect both R&D expenditures and the decision to apply for a grant

2.B. The evaluation problem avoided
  • Incorporating randomized evaluation in programme design
  • Random assignment of treatment (participation in the program) implies that there are no systematic pre-existing differences between treated and non-treated firms, so the selection bias is zero; see the sketch below
  • Hard to implement for certain types of policy instruments
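A minimal simulation of this point, using entirely hypothetical numbers: under random assignment the treated and untreated groups have essentially the same baseline characteristics, while self-selection opens a gap before any treatment effect occurs.

```python
import numpy as np

rng = np.random.default_rng(1)

# 10,000 hypothetical firms whose baseline R&D intensity rises with size.
size = rng.lognormal(mean=4.0, sigma=1.0, size=10_000)
baseline_rd = 0.5 + 0.003 * size

# Random assignment: treatment is independent of size, so the two groups
# have (almost) identical baselines -- selection bias is zero in expectation.
d_random = rng.integers(0, 2, size=10_000)
print(baseline_rd[d_random == 1].mean() - baseline_rd[d_random == 0].mean())

# Self-selection: suppose larger firms are the ones that apply. Baselines
# now differ before any treatment at all -- this gap is the selection bias.
d_self = (size > np.median(size)).astype(int)
print(baseline_rd[d_self == 1].mean() - baseline_rd[d_self == 0].mean())
```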

2.B. Controlling for selection bias

Controlling for observable differences

  • Develop a statistically robust control group of non-beneficiaries
  • identify comparable participating and non-participating firms, conditional on a set of observable variables X
  • in other words: control for the pre-existing observable differences
  • using econometric techniques:

e.g. propensity score matching

2.B. Propensity score matching (PSM)
  • If there were only one dimension (e.g. size) affecting both treatment (participation in the TDF) and outcome (R&D intensity), it would be relatively simple to find pairs of matching firms.
  • When treatment and outcome are determined by a multidimensional vector of characteristics (size, age, industry, location...), this becomes problematic.
  • Solution: find pairs of firms that have an equal or similar probability of being treated (receiving TDF support)

2.B. PSM
  • Using probit or logit analysis on the whole sample of beneficiaries and non-beneficiaries, we calculate the probability (P), or propensity, that a firm participates in the program:
  • P(D=1) = F(X)

X = vector of observable characteristics

  • Purpose: to find for each participant (D=1) at least one non-participant with an equal or very similar chance of being a participant, which is then selected into the control group.
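A minimal sketch of this step in Python (statsmodels) on synthetic data; the variable names (`size`, `age`, `exporter`, `tdf`) and all coefficients are hypothetical, not taken from the actual evaluations.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical observables that plausibly drive both TDF participation
# and innovation spending (the vector X).
firms = pd.DataFrame({
    "size":     rng.lognormal(4.0, 1.0, n),   # employees
    "age":      rng.integers(1, 40, n),       # years since founding
    "exporter": rng.integers(0, 2, n),
})

# Participation is self-selected: it depends on the observables.
latent = -2 + 0.01 * firms["size"] + 0.05 * firms["age"] + 0.8 * firms["exporter"]
firms["tdf"] = (latent + rng.logistic(size=n) > 0).astype(int)

# P(D=1) = F(X): logit regression of participation on the observables;
# the fitted probabilities are the propensity scores p(x).
X = sm.add_constant(firms[["size", "age", "exporter"]])
firms["pscore"] = sm.Logit(firms["tdf"], X).fit(disp=0).predict(X)
```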

2.B. PSM
  • PSM reduces the multidimensional problem of several matching criteria to a single measure of distance
  • There are several measures of proximity:

e.g. nearest neighbour, predefined range (caliper), kernel-based matching ...

2.B. PSM
  • Estimating the impact (Average effect of Treatment on the Treated, ATT):

ATT = E[ E(Y1 | D = 1, p(x)) − E(Y0 | D = 0, p(x)) | D = 1 ]

Y1, Y0 = the impact variable with and without treatment

D = {0,1} is a dummy variable for participation in the program,

x is a vector of pre-treatment characteristics

p(x) is the propensity score.
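Continuing the sketch above (reusing `firms`, `rng` and `n`), the ATT can be approximated with the simplest of the proximity measures listed earlier, nearest-neighbour matching on the estimated score. The outcome variable `rd_int` and its treatment effect of 0.6 are invented for illustration.

```python
# Hypothetical outcome Y: R&D intensity (R&D expenditure / sales, in %),
# driven by size (a confounder) plus an invented treatment effect of 0.6.
firms["rd_int"] = (0.5 + 0.003 * firms["size"] + 0.6 * firms["tdf"]
                   + rng.normal(0, 0.3, n))

treated = firms[firms["tdf"] == 1]
controls = firms[firms["tdf"] == 0]

# Nearest neighbour: for each beneficiary, take the non-beneficiary
# whose propensity score p(x) is closest.
dist = np.abs(treated["pscore"].values[:, None]
              - controls["pscore"].values[None, :])
matched = controls.iloc[dist.argmin(axis=1)]

# ATT = mean difference in the impact variable over the matched pairs.
att = (treated["rd_int"].values - matched["rd_int"].values).mean()
print(f"Estimated ATT: {att:.2f}")
```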

2.B. Difference in difference (DID)

The treated and control groups of firms may also differ in non-observable characteristics, e.g. management skills.

  • If panel data are available (data for pre-treatment and post-treatment periods), the impact of unobservable differences and common time shocks can be neutralised by taking the difference-in-differences of the impact variable.
  • Important assumption: the unobservables do not change over time
  • In the case of DID, the impact variable is a growth rate.
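A minimal illustration of the double difference, plugging in the FONTAR-ANR figures reported later in the presentation:

```python
def did(treat_pre: float, treat_post: float,
        ctrl_pre: float, ctrl_post: float) -> float:
    """Change for the treated minus change for the controls; common time
    shocks and time-invariant unobservables cancel out of the difference."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# FONTAR-ANR example: beneficiaries 0.08 -> 0.20, controls 0.22 -> 0.15.
print(f"{did(0.08, 0.20, 0.22, 0.15):.2f}")  # 0.19
```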

3. Example of results

Impact of ADTEN (Brazil) on (private) R&D intensity

Single difference in 2000, after PSM:

(R&D/sales in 2000, beneficiaries) − (R&D/sales in 2000, control group)

92 observations in each group

  • beneficiaries: 1.18%
  • control group: 0.52%
  • difference: 0.66%
  • a positive and significant impact, net of the subsidy

3. Example of results

Impact of FONTAR-ANR (Argentina)

on (public+private) R&D intensity (= R&D expenditures / sales)

Difference-in-differences with PSM

37 observations in each group

(R&D intensity after ANR, beneficiaries − R&D intensity before ANR, beneficiaries)
− (R&D intensity after ANR, control − R&D intensity before ANR, control)

  • beneficiaries: (0.20 − 0.08) = 0.12
  • control group: (0.15 − 0.22) = −0.07
  • DID: 0.12 − (−0.07) = 0.19

a positive and significant impact, GROSS of the subsidy

3. Results: summary

The impact of the programs on firm behaviour and outcomes becomes progressively weaker the further one moves from the immediate target of the policy instrument:

  • There is clear evidence of a positive impact on R&D,
  • weaker evidence of some behavioural effects,
  • and almost no evidence of an immediate positive impact on new product sales or patents.
  • This may be expected, given the relatively short time span over which the impacts were measured.

3. Results
  • No clear evidence that TDFs can significantly affect firms’ productivity and competitiveness within a five-year period, although there is a suggestion of positive impacts.
  • However, these outcomes, which are often the general objective of the programs, are more likely related to the longer-run impact of policy.
  • The evaluation does not take into account potential positive externalities that may result from the TDFs.

3. Results

The evaluation design should clearly specify:

  • the rationale;
  • the short-, medium- and long-run expected outcomes;
  • periodic collection of primary data on the programs’ beneficiaries and on a group of comparable non-beneficiaries;
  • repetition of the evaluation on the same sample, so that long-run impacts can be clearly identified;
  • periodic repetition of the impact evaluation on new samples, to identify potential needs for re-targeting of the policy tools.

3. Concluding remarks
  • The data needs of this type of evaluation are evident
  • The involvement and commitment of statistical offices are needed to merge the survey data that allow these analyses
  • The merging and accessibility of several data sources create unprecedented opportunities for the evaluation and monitoring of policy instruments

Thank you!
