
Evaluating Job Training Programs: What have we learned?


Presentation Transcript


  1. Evaluating Job Training Programs: What have we learned?
Haeil Jung and Maureen Pirog
School of Public and Environmental Affairs, Indiana University Bloomington
November 7, 2009

  2. What Policy Question to Ask and How to Answer

  3. Policy Questions and Parameters of Interest I
• Average Treatment Effect: Universal Programs
  α_ATE = E(Δ|X) = E(Y_1 − Y_0 | X)
• Treatment on the Treated: Programs with Self-Selection
  α_TT = E(Δ|X, D=1) = E(Y_1 − Y_0 | X, D=1)
• Local Average Treatment Effect: Program Expansion by Providing Inducements
  α_LATE = E(Y_1 − Y_0 | X, D(z)=1, D(z′)=0)
• Intent To Treat: The Effect of Program Availability
  α_ITT = E(Y|R=1) − E(Y|R=0), where R indicates randomized assignment to the program offer; this is the relevant parameter when the treatment group has no-shows or attrition
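
To make the four parameters concrete, here is a minimal simulation sketch (not from the slides; the data-generating process, inducement, and participation thresholds are all illustrative assumptions, and the conditioning covariate X is suppressed for brevity). Because the simulation generates both potential outcomes for every individual, each parameter can be computed directly:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500_000

v = rng.normal(0, 1, n)            # unobserved component of the gain
u = rng.normal(0, 1, n)            # idiosyncratic participation shock
y0 = rng.normal(0, 1, n)           # untreated potential outcome
y1 = y0 + 1.0 + v                  # treated outcome: individual gain = 1 + v

z = rng.integers(0, 2, n)          # randomized inducement (program offer)
d0 = v + u > 0.5                   # would participate without the inducement
d1 = v + u > 0.0                   # would participate with it (lower hurdle)
d = np.where(z == 1, d1, d0)       # observed participation
y = np.where(d, y1, y0)            # observed outcome

gain = y1 - y0
print(f"ATE  = {gain.mean():.2f}")                 # ~1.0 by construction
print(f"TT   = {gain[d].mean():.2f}")              # > ATE: gainers select in
print(f"LATE = {gain[d1 & ~d0].mean():.2f}")       # compliers with z only
print(f"ITT  = {y[z == 1].mean() - y[z == 0].mean():.2f}")  # effect of offer
```

In this design TT exceeds ATE because participants have above-average gains, and ITT equals LATE scaled by the share of compliers, which is why the four parameters answer genuinely different policy questions.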

  4. Policy Questions and Parameters of Interest II
• ATE for all who are eligible
• LATE for those who are eligible and are induced to participate by new incentives
• TT for those who are eligible and participate
• ITT for those who are eligible and are offered a place, whether or not they actually participate

  5. Policy Questions and Parameters of Interest III
• Treatment on the Treated (TT) = α_TT
• It answers how the program changes the outcomes of participants compared to what they would have experienced if they had not participated
• This parameter is informative about the gross gain accruing to the economy from running the program at its current level, compared to the alternative of shutting it down
• It therefore gives policy makers clear advice about whether they should keep the program

  6. Decomposition of Conventional Selection Bias
• Y_i = β_0 + α_TT·D_i + X_i·β + υ_i
• Estimator = α_TT + B, where B is the conventional selection bias:
  E(Y_1|D=1) − E(Y_0|D=0) = E(Y_1 − Y_0|D=1) + [E(Y_0|D=1) − E(Y_0|D=0)]
• The conventional selection bias B can be decomposed into three sources (Heckman et al., 1997):
• B1 = bias that arises from comparing non-comparable people, e.g., a discrepancy in the age ranges of the treatment and comparison groups
• B2 = bias that arises from weighting comparable people incomparably, e.g., different age distributions over the same age range in the two groups
• B3 = the "true" selection bias (selection on unobservables), e.g., participation based on individual returns to the program that are unknown to researchers
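
A minimal numerical sketch of this decomposition (an entirely assumed data-generating process, not the authors' data): because the simulation generates Y_0 for treated and comparison units alike, B and its three components can be computed directly from coarse bins of a single covariate X:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Assumed DGP: covariate distributions differ across groups (creates B1, B2)
# and treated units have lower untreated outcomes at every X (creates B3).
x1 = rng.normal(1.0, 1.0, n)              # treated covariates
x0 = rng.normal(0.0, 1.0, n)              # comparison covariates
y0_1 = x1 - 0.5 + rng.normal(0, 1, n)     # E(Y_0 | X, D=1) = X - 0.5
y0_0 = x0 + rng.normal(0, 1, n)           # E(Y_0 | X, D=0) = X

edges = np.linspace(-5.0, 6.0, 45)
i1, i0 = np.digitize(x1, edges), np.digitize(x0, edges)
nb = len(edges) + 1                       # digitize yields bins 0..len(edges)

def bin_stats(idx, y, nb):
    """Share of observations and mean of y in each bin."""
    cnt = np.bincount(idx, minlength=nb)
    tot = np.bincount(idx, weights=y, minlength=nb)
    mean = np.divide(tot, cnt, out=np.zeros(nb), where=cnt > 0)
    return cnt / len(idx), mean

f1, m1 = bin_stats(i1, y0_1, nb)
f0, m0 = bin_stats(i0, y0_0, nb)
S = (f1 > 0) & (f0 > 0)                   # common support

B = y0_1.mean() - y0_0.mean()             # conventional selection bias
B1 = (m1 * f1)[~S].sum() - (m0 * f0)[~S].sum()  # non-comparable people
B2 = (m0 * (f1 - f0))[S].sum()            # incomparable weighting on support
B3 = ((m1 - m0) * f1)[S].sum()            # "true" selection bias (~ -0.5 here)
print(f"B = {B:.3f},  B1 + B2 + B3 = {B1 + B2 + B3:.3f},  B3 = {B3:.3f}")
```

The three terms sum to B exactly by construction, which mirrors the point of the decomposition: a naive comparison can carry a modest total bias even when its individual components are large.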

  7. Conventional Selection Bias Across Different Comparison Groups
• No-shows in the treatment group
  • Bias 3 only
• The eligible but non-participating group (ENP): same local labor market and same questionnaire
  • Biases 1 and 2 exist
  • Same amount of Bias 3 as no-shows
• A comparison group drawn from the Survey of Income and Program Participation (SIPP)
  • Biases 1 and 2 exist
  • Larger Bias 3 than ENP and no-shows

  8. If Random Experimentation Is Not Possible, What Lessons Have We Learned about Designing a Quasi-Experiment?
• Use the same questionnaire to obtain individual labor-market outcomes and demographic information for both groups
• Match individuals in the treatment and comparison groups within the same local labor markets
• Find a comparison group whose observed characteristics largely overlap those of program participants (a quick overlap check is sketched below)
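
The last point can be checked mechanically before any estimation. A minimal sketch with synthetic data (the covariate and both distributions are hypothetical; in practice one would inspect propensity scores built from many covariates, not a single X):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical covariate (e.g. age) for treated and candidate comparison group.
x_treat = rng.normal(1.0, 1.0, 5_000)
x_comp = rng.normal(0.0, 1.5, 20_000)

edges = np.histogram_bin_edges(np.concatenate([x_treat, x_comp]), bins=30)
h_t, _ = np.histogram(x_treat, edges, density=True)
h_c, _ = np.histogram(x_comp, edges, density=True)

# Share of treated observations falling in bins the comparison group occupies.
occupied = np.unique(np.digitize(x_comp, edges))
on_support = np.isin(np.digitize(x_treat, edges), occupied)
print(f"treated on common support: {on_support.mean():.1%}")

# Histogram overlap coefficient: 1.0 would mean identical distributions.
overlap = (np.minimum(h_t, h_c) * np.diff(edges)).sum()
print(f"overlap coefficient: {overlap:.2f}")
```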

  9. Sources of "True" Self-Selection Bias in Employment and Training Programs
• Y_it = β_0 + α_i·D_i + X_it·β + θ_i + U_it, where U_it = ρU_i,t-1 + ε_it
• Three sources arise from individuals' self-selection behavior (selection on unobservables):
• α_i = α_0 + ν_i: individuals select into the program because they know they will earn higher returns from it (overestimation of the true α_TT)
• θ_i: individuals select into the program because their latent or forgone earnings are low at the time of program entry (underestimation of the true α_TT)
• U_it = ρU_i,t-1 + ε_it: individuals' earnings depend on previous earnings, which are low at the time of program entry (underestimation of the true α_TT)
Heckman et al. (1999)
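
A minimal two-period sketch of this earnings model (all parameter values are illustrative assumptions), with the three self-selection channels switched on at once:

```python
import numpy as np

rng = np.random.default_rng(7)
n, rho, alpha0 = 100_000, 0.7, 1.0

theta = rng.normal(0, 1, n)                 # time-constant heterogeneity
nu = rng.normal(0, 1, n)                    # idiosyncratic program gain
u_pre = rng.normal(0, 1, n)                 # transitory shock before entry
u_post = rho * u_pre + rng.normal(0, 1, n)  # AR(1) transitory shock after

# Selection: high expected gains AND low earnings at the time of entry.
d = (nu - theta - u_pre + rng.normal(0, 1, n) > 0).astype(int)

alpha_i = alpha0 + nu                       # heterogeneous program effect
y_pre = theta + u_pre                       # earnings before the program
y_post = theta + u_post + alpha_i * d       # earnings after the program

print(f"true TT   = {alpha_i[d == 1].mean():.3f} (> alpha0: gain selection)")
print(f"theta gap = {theta[d == 1].mean() - theta[d == 0].mean():.3f} (< 0)")
print(f"u_pre gap = {u_pre[d == 1].mean() - u_pre[d == 0].mean():.3f} (< 0)")
```

The printed gaps show the two depressive channels (low θ_i and a transitory pre-program dip) alongside the inflationary one (selection on ν_i), which is exactly the mix of offsetting biases the next slide discusses.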

  10. Estimated Selection Bias by Different Empirical Methods in Simulations
• Cross-sectional, difference-in-differences, and AR(1) regression estimators seem to work better when all three sources of selection bias are present
  • The three sources of bias partly offset each other
  • Estimates with small net bias can therefore be misleading
• The instrumental variables estimator and the Heckman selection-correction model seem to work best under θ_i and U_it = ρU_i,t-1 + ε_it
  • Selection driven by α_i = α_0 + ν_i works heavily against these two models
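
Continuing the same assumed data-generating process as the sketch above, a quick comparison of two estimators against the true TT illustrates why a small net bias can be misleading: the two estimators err in different directions, so their apparent accuracy depends on how the underlying biases happen to offset.

```python
import numpy as np

# Re-create the same assumed DGP as the sketch above (seed 7).
rng = np.random.default_rng(7)
n, rho, alpha0 = 100_000, 0.7, 1.0
theta = rng.normal(0, 1, n)
nu = rng.normal(0, 1, n)
u_pre = rng.normal(0, 1, n)
u_post = rho * u_pre + rng.normal(0, 1, n)
d = (nu - theta - u_pre + rng.normal(0, 1, n) > 0).astype(int)
y_pre = theta + u_pre
y_post = theta + u_post + (alpha0 + nu) * d

tt = (alpha0 + nu)[d == 1].mean()           # true effect on the treated

# Cross-sectional contrast: pulled down by low theta and the transitory dip.
xs = y_post[d == 1].mean() - y_post[d == 0].mean()

# Difference-in-differences: removes theta, but mean reversion of the
# AR(1) dip pushes the estimate up.
dy = y_post - y_pre
did = dy[d == 1].mean() - dy[d == 0].mean()

print(f"true TT {tt:.3f} | cross-section bias {xs - tt:+.3f} "
      f"| DiD bias {did - tt:+.3f}")
```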

  11. "How do non-experimental estimators work? It depends."
• Random assignment still works best
• Propensity Score Matching:
  • Requires a large sample
  • Handles selection on observables only
  • Sensitive to the choice of matching method
  • No clear guidance on which matching procedure is superior
• Difference-in-Differences:
  • Requires panel data
  • Handles selection on unobservables (time-constant heterogeneity only)
  • Sensitive to the choice of periods before and after the program
• Regression Discontinuity (a sketch follows this slide):
  • Requires a clear-cut participation rule
  • Requires a large sample around the threshold
  • Works like random assignment under the above two conditions
  • Effects are identified only for individuals near the participation threshold
Pirog et al. (2009), Battistin and Rettore (2007), and Heckman et al. (1999)
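
Of the three designs, regression discontinuity is the simplest to sketch numerically. A minimal sharp-RD example with synthetic data (the threshold, bandwidth, and functional form are all assumptions; real applications would vary the bandwidth and check robustness):

```python
import numpy as np

rng = np.random.default_rng(3)
n, c, h, true_effect = 50_000, 0.0, 0.5, 2.0

score = rng.uniform(-2, 2, n)               # assignment score
d = (score >= c).astype(int)                # clear-cut participation rule
y = 1.0 + 0.8 * score + true_effect * d + rng.normal(0, 1, n)

# Local linear fit on each side of the threshold, within bandwidth h.
left = (score >= c - h) & (score < c)
right = (score >= c) & (score <= c + h)
fit_left = np.polyfit(score[left], y[left], 1)
fit_right = np.polyfit(score[right], y[right], 1)

# RD estimate: the jump between the two fitted lines at the threshold.
tau = np.polyval(fit_right, c) - np.polyval(fit_left, c)
print(f"RD estimate {tau:.3f} (true effect {true_effect}, local to score~c)")
```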

  12. It Is All about Data, Methods, and Self-Selection Behavior
• Data availability: cross-sectional, repeated cross-sectional, or panel data
• Size of data: large sample or small sample
• Quality of data: same location and same survey instruments for the treatment and control groups
• Self-selection on observables
• Self-selection on unobservables:
  • Heterogeneous responses to the program
  • Time-constant individual heterogeneity
  • Autocorrelation between earnings in different time periods
Pirog et al. (2009), Heckman and Vytlacil (2007), and Heckman et al. (1999)

  13. Discussion and Conclusion
• Random assignment still works best for estimating Treatment on the Treated
• There is no "silver bullet" for program evaluation
• How to reduce conventional bias:
  • Use the same questionnaire to obtain individual labor-market outcomes and demographic information for the treatment and comparison groups
  • Match individuals in the treatment and comparison groups in the same local labor markets
  • Find a comparison group whose observed characteristics largely overlap those of program participants
• Be aware of interactions among biases from different sources

  14. Thank You
