
What Would It Take to Change an Inference? Using Rubin’s Causal Model to Interpret the Robustness of Causal Inferences


Presentation Transcript


  1. What Would It Take to Change an Inference? Using Rubin’s Causal Model to Interpret the Robustness of Causal Inferences Abstract We contribute to the debate about causal inferences in educational research in two ways. First, we quantify how much bias there must be in an estimate to invalidate an inference. Second, we utilize Rubin’s causal model (RCM) to interpret the bias necessary to invalidate an inference in terms of sample replacement. We apply our analysis to an inference of a positive effect of the Open Court Curriculum on reading achievement from a randomized experiment, and an inference of a negative effect of kindergarten retention on reading achievement from an observational study. We consider details of our framework, and then discuss how our approach informs judgment of inference relative to study design. We conclude with implications for scientific discourse. Keywords: causal inference; Rubin’s causal model; sensitivity analysis; observational studies. Frank, K. A., Maroulis, S., Duong, M., and Kelcey, B. (2013). What Would It Take to Change an Inference? Using Rubin’s Causal Model to Interpret the Robustness of Causal Inferences. Educational Evaluation and Policy Analysis, 35, 437-460. http://epa.sagepub.com/content/early/recent

  2. % bias necessary to invalidate the inference

  3. Quantifying the Discourse: Formalizing Bias Necessary to Invalidate an Inference Let δ = a population effect, δ̂ = the estimated effect, and δ# = the threshold for making an inference. An inference is invalid if: δ̂ > δ# > δ. (1) That is, an inference is invalid if the estimate is greater than the threshold while the population value is less than the threshold. Defining bias as δ̂ - δ, (1) implies the inference is invalid if and only if: bias > δ̂ - δ#. Expressed as a proportion of the estimate, the inference is invalid if: bias/δ̂ > 1 - δ#/δ̂.
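The proportion in the last expression is easy to compute directly. A minimal sketch in Python (the function name pct_bias_to_invalidate is ours, not the authors'; the numbers are the toy example used on later slides):

```python
def pct_bias_to_invalidate(estimate, threshold):
    """Proportion of the estimate that must be due to bias for the
    inference to become invalid: 1 - (threshold / estimate)."""
    return 1 - threshold / estimate

# Toy example used in the slides that follow: estimate of 6, threshold of 4.
print(pct_bias_to_invalidate(6, 4))  # 0.333... -> about 1/3 of the estimate
```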

  4. % bias necessary to invalidate the inference [Figure: the estimate compared to the threshold δ#]

  5. Interpretation of % Bias to Invalidate an Inference • % bias is intuitive • It relates to how we think about statistical significance • It says more than “highly significant” or “barely significant” • But we need a framework for interpreting it

  6. Framework for Interpreting % Bias to Invalidate an Inference: Rubin’s Causal Model and the Counterfactual • I have a headache • I take an aspirin (treatment) • My headache goes away (outcome) Q) Is it because I took the aspirin? • We’ll never know – it is counterfactual – for the individual This is the Fundamental Problem of Causal Inference

  7. Definition of Replacement Cases as Counterfactual: Potential Outcomes Definition of the treatment effect for individual i: δi = Yti - Yci, where Yti is individual i's potential outcome under treatment and Yci is the potential outcome under control. The fundamental problem of causal inference is that we cannot simultaneously observe Yti and Yci for the same individual.

  8. Fundamental Problem of Inference and Approximating the Counterfactual with Observed Data (Internal Validity) But how well does the observed data approximate the counterfactual? [Figure: treated cases observed at 9, 10, 11 and control cases observed at 3, 4, 5, with question marks (6?) over the unobserved counterfactual comparisons]

  9. Symbolic: Fundamental Problem of Inference and Approximating the Counterfactual with Observed Data (Internal Validity) But how well does the observed data approximate the counterfactual? [Figure: observed potential outcomes Yt|X=t and Yc|X=c versus unobserved counterfactual outcomes Yc|X=t and Yt|X=c, with the per-case effect (6?) in question]

  10. Approximating the Counterfactual with Observed Data But how well does the observed data approximate the counterfactual? [Figure: treated cases observed at 9, 10, 11; their counterfactual control values of 8, 9, 10 imply a true treatment effect of 1 per case; using the observed control cases 3, 4, 5, with mean 4, the effect of 1 is overestimated as 6]
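The overestimation in this toy example can be verified with a few lines of Python (variable names are ours; the numbers come from the figure described above):

```python
# Toy data from the figure: treated cases and the two possible comparison groups.
treated_observed       = [9, 10, 11]  # observed outcomes under treatment (mean 10)
counterfactual_control = [8, 9, 10]   # what the treated would have scored untreated (mean 9)
observed_control       = [3, 4, 5]    # observed control cases (mean 4)

def mean(xs):
    return sum(xs) / len(xs)

true_effect      = mean(treated_observed) - mean(counterfactual_control)  # 1.0
estimated_effect = mean(treated_observed) - mean(observed_control)        # 6.0
print(true_effect, estimated_effect)  # the true effect of 1 is overestimated as 6
```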

  11. Using the Counterfactual to Interpret % Bias to Invalidate the Inference How many cases would you have to replace with zero-effect counterfactuals to change the inference? Assume the threshold is 4 (δ# = 4): 1 - δ#/δ̂ = 1 - 4/6 = .33 = 1/3. [Figure: original cases (treated 9, 10, 11; control 3, 4, 5; estimated effect 6.00) alongside zero-effect replacement counterfactual cases] The inference would be invalid if you replaced 33% (or 1 case) with counterfactuals for which there was no treatment effect. New estimate = (1 - % replaced) x δ̂ + % replaced x (no effect) = (1 - % replaced) x δ̂ = (1 - .33) x 6 ≈ 4.
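The replacement arithmetic on this slide can be expressed the same way; a short sketch under the slide's assumptions (threshold of 4, estimate of 6, zero-effect replacements):

```python
estimate, threshold = 6, 4

# Proportion of cases to replace with zero-effect counterfactuals.
p = 1 - threshold / estimate                 # 0.333... (1 of the 3 cases)

# New estimate after replacement: weighted mix of original and zero-effect cases.
new_estimate = (1 - p) * estimate + p * 0.0  # lands exactly on the threshold
print(round(p, 2), round(new_estimate, 2))   # 0.33  4.0
```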

  12. % bias necessary to invalidate the inference [Figure: estimate relative to the threshold δ#] To invalidate the inference, replace 33% of cases with counterfactual data with zero effect

  13. Fundamental Problem of Inference to an Unsampled Population (External Validity) But how well does the observed data represent both populations? [Figure: observed sample cases (treated 9, 10, 11; control 3, 4, 5; estimated effect 6) alongside counterfactual cases from an unsampled population]

  14. Fundamental Problem of Inference and Approximating the Unsampled Population with Observed Data (External Validity) How many cases would you have to replace with cases with zero effect to change the inference? Assume the threshold is δ# = 4: 1 - δ#/δ̂ = 1 - 4/6 = .33 = 1/3. [Figure: observed cases from the sampled population (Yt|Z=p: 9, 10, 11; Yc|Z=p: 3, 4, 5) alongside zero-effect replacement cases from the unsampled population (Yt|Z=p´, Yc|Z=p´); replacing one third of the cases brings the estimate from 6 down to 4]

  15. % bias necessary to invalidate the inference [Figure: estimate relative to the threshold δ#] To invalidate the inference, replace 33% of cases with cases from an unsampled population with zero effect

  16. Review & Reflection Review of Framework Pragmatism  thresholds How much does an estimate exceed the threshold  % bias to invalidate the inference Interpretation: Rubin’s causal model • internal validity: % bias to invalidate number of cases that must be replaced with counterfactual cases (for which there is no effect) • external validity: % bias to invalidate  number of cases that must be replaced with unobserved population (for which there is no effect) Reflect Which part is most confusing to you? Is there more than one interpretation? Discuss with a partner or two

  17. Example of Internal Validity from an Observational Study: The Effect of Kindergarten Retention on Reading and Math Achievement (Hong and Raudenbush 2005) 1. What is the average effect of kindergarten retention policy? (Example used here) Should we expect to see a change in children’s average learning outcomes if a school changes its retention policy? Propensity-based questions (not explored here): 2. What is the average impact of a school’s retention policy on children who would be promoted if the policy were adopted? Use principal stratification. Hong, G. and Raudenbush, S. (2005). Effects of Kindergarten Retention Policy on Children’s Cognitive Growth in Reading and Mathematics. Educational Evaluation and Policy Analysis, Vol. 27, No. 3, pp. 205–224.

  18. Data • Early Childhood Longitudinal Study Kindergarten cohort (ECLSK) • US National Center for Education Statistics (NCES). • Nationally representative • Kindergarten and 1st grade • observed Fall 1998, Spring 1998, Spring 1999 • Student • background and educational experiences • Math and reading achievement (dependent variable) • experience in class • Parenting information and style • Teacher assessment of student • School conditions • Analytic sample (1,080 schools that do retain some children) • 471 kindergarten retainees • 10,255 promoted students

  19. Effect of Retention on Reading Scores (Hong and Raudenbush)

  20. Possible Confounding Variables (note they controlled for these) • Gender • Two-Parent Household • Poverty • Mother’s level of Education (especially relevant for reading achievement) • Extensive pretests • measured in the Spring of 1999 (at the beginning of the second year of school) • standardized measures of reading ability, math ability, and general knowledge; • indirect assessments of literature, math and general knowledge that include aspects of a child’s process as well as product; • teacher’s rating of the child’s skills in language, math, and science

  21. Calculating the % Bias to Invalidate the Inference: Obtain the spreadsheet from https://www.msu.edu/~kenfrank/research.htm#causal • Choose the spreadsheet for calculating indices • Access the spreadsheet

  22. Calculating % Bias to Invalidate an Inference Choose % bias to invalidate

  23. Obtain t critical, estimated effect, and standard error Estimated effect (δ̂) = -9.01 Standard error = .68 n = 7168 + 471 = 7639; df > 500, t critical = -1.96 From: Hong, G. and Raudenbush, S. (2005). Effects of Kindergarten Retention Policy on Children’s Cognitive Growth in Reading and Mathematics. Educational Evaluation and Policy Analysis, Vol. 27, No. 3, pp. 205–224.

  24. Calculating the % Bias to Invalidate the Inference: Entering Values and Calculating δ̂ = the estimated effect = -9.01; standard error = .68; t critical = -1.96. δ# = the threshold for making an inference = se x t critical (df > 230) = .68 x -1.96 = -1.33 [user can specify an alternative threshold]. % Bias necessary to invalidate the inference = 1 - δ#/δ̂ = 1 - (-1.33)/(-9.01) = 85%. 85% of the estimate must be due to bias to invalidate the inference.
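The same calculation can be reproduced outside the spreadsheet; a sketch in Python using the numbers reported on this slide:

```python
# Kindergarten retention example (estimate and standard error from the slide).
estimate, se, t_critical = -9.01, 0.68, -1.96

threshold = se * t_critical            # default threshold; a user could supply another
pct_bias  = 1 - threshold / estimate   # 1 - (-1.33)/(-9.01)
print(round(threshold, 2), round(pct_bias, 2))  # -1.33  0.85 -> 85%
```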

  25. Using the Counterfactual to Interpret % Bias to Invalidate the Inference How many cases would you have to replace with zero-effect counterfactuals to change the inference? Assume the threshold is 4 (δ# = 4): 1 - δ#/δ̂ = 1 - 4/6 = .33 = 1/3. [Figure: original cases (treated 9, 10, 11; control 3, 4, 5; estimated effect 6.00) alongside zero-effect replacement counterfactual cases] The inference would be invalid if you replaced 33% (or 1 case) with counterfactuals for which there was no treatment effect. New estimate = (1 - % replaced) x δ̂ + % replaced x (no effect) = (1 - % replaced) x δ̂ = (1 - .33) x 6 ≈ 4.

  26. Example Replacement of Cases with Counterfactual Data to Invalidate Inference of an Effect of Kindergarten Retention [Figure: distributions for retained and promoted students, comparing the observed data with the counterfactual of promoted students if they had been retained; the original distribution is shown alongside replacement counterfactual cases with zero effect] To invalidate, 85% of promoted students would have to have had most (7.2 points) of their advantage (conditional on pretests, motivation, SES, etc.) even if all had been retained.

  27. Example Replacement of Cases with Counterfactual Data to Invalidate Inference of an Effect of Kindergarten Retention [Figure: distributions for retained and promoted students, comparing the observed data with the counterfactual of promoted students if they had been retained; original cases that were not replaced are shown alongside replacement counterfactual cases with zero effect and the original distribution] To invalidate, 85% of promoted students would have to have had most (7.2 points) of their advantage (conditional on pretests, motivation, SES, etc.) even if all had been retained.

  28. Interpretation 1) Consider test scores of a set of children who were retained that are considerably lower (9 points) than others who were candidates for retention but who were in fact promoted. No doubt some of the difference is due to advantages the comparable others had before being promoted. But now to believe that retention did not have an effect one must believe that 85% of those comparable others would have enjoyed most (7.2) of their advantages whether or not they had been retained. This is even after controlling for differences on pretests, mother’s education, etc. 2) The replacement cases would come from the counterfactual condition for the observed outcomes. That is, 85% of the observed potential outcomes must be unexchangeable with the unobserved counterfactual potential outcomes such that it is necessary to replace those 85% with the counterfactual potential outcomes to make an inference in this sample. Note that this replacement must occur even after observed cases have been conditioned on background characteristics, school membership, and pretests used to define comparable groups.

  29. Evaluation of % Bias Necessary to Invalidate Inference • Compare the bias necessary to invalidate the inference with the bias accounted for by background characteristics: 1% of the estimated effect is accounted for by background characteristics (including mother’s education) once controlling for pretests, so more than 85 times more unmeasured bias would be necessary to invalidate the inference • Compare with the % bias necessary to invalidate inferences in other studies: use the correlation metric, which adjusts for differences in scale

  30. % Bias Necessary to Invalidate Inference Based on Correlation, to Compare across Studies t taken from HLM: t = δ̂/se = -9.01/.68 = -13.25. Convert to a correlation: r = t/√(t² + df), where df is determined by n (the sample size) and q (the number of parameters estimated). The threshold correlation uses the critical t for df > 200: r# = t critical/√(t critical² + df). % bias to invalidate the inference = 1 - r#/r = 1 - .022/.150 = 85%. This metric accounts for changes in both the regression coefficient and the standard error, because t(r) = t(β).
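A sketch of the correlation metric in Python; the df value below is our assumption, chosen only to be consistent with the r ≈ .150 and r# ≈ .022 reported on the slide, not a value taken from the paper:

```python
import math

t  = -9.01 / 0.68   # t from the HLM estimate, about -13.25
df = 7627           # assumed degrees of freedom (depends on n and q for the analytic sample)
t_critical = 1.96   # critical value for df > 200

r    = abs(t) / math.sqrt(t**2 + df)               # about .150
r_th = t_critical / math.sqrt(t_critical**2 + df)  # threshold correlation, about .022
print(round(r, 3), round(r_th, 3), round(1 - r_th / r, 2))  # 0.15 0.022 0.85
```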

  31. Calculating % Bias to Invalidate in terms of Correlations to Compare Across Studies Choose impact and replacement

  32. Compare with Bias in Other Observational Studies

  33. % Bias to Invalidate Inference for observational studies published on-line in EEPA, July 24–Nov 15 2012 [Figure: % bias to invalidate across studies, with the kindergarten retention effect marked]

  34. Exercise 1 : % Bias necessary to Invalidate an Inference • Take an example from an observational study in your own data or an article • Calculate the % bias necessary to invalidate the inference • Interpret the % bias in terms of sample replacement • What are the possible sources of bias? • Would they all work in the same direction? • Debate your inference with a partner

  35. Application to Randomized Experiment: Effect of Open Court Curriculum on Reading Achievement • Open Court “scripted” curriculum versus business as usual • 917 elementary students in 49 classrooms • Comparisons within grade and school • Outcome Measure: Terra Nova comprehensive reading score Borman, G. D., Dowling, N. M., and Schneck, C. (2008). A multi-site cluster randomized field trial of Open Court Reading. Educational Evaluation and Policy Analysis, 30(4), 389-407.

  36. Value of Randomization • Few differences between groups • But randomization was done at the classroom level • Teachers might talk to each other • Randomizing at the school level is expensive (Slavin, 2009)

  37. n = 27 + 22 = 49 classrooms

  38. Obtaining the # of parameters estimated, t critical, estimated effect, and standard error 3 parameters estimated; df = number of classrooms - number of parameters estimated = 49 - 3 = 46. t critical = t.05, df=46 = 2.013. Estimated effect (δ̂) = 7.95. Standard error = 1.83.
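The critical value and the resulting threshold can be reproduced with scipy; a sketch using the numbers reported above:

```python
from scipy import stats

n_classrooms, n_params = 49, 3
df = n_classrooms - n_params                 # 46
t_critical = stats.t.ppf(1 - 0.05 / 2, df)   # two-tailed .05 critical value, about 2.013

estimate, se = 7.95, 1.83
threshold = se * t_critical                  # about 3.68
print(round(t_critical, 3), round(threshold, 2))  # 2.013  3.68
```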

  39. Differences between Open Court and Business as Usual • Difference across grades: about 10 units; 7.95 using the statistical model • “Statistically significant”: unlikely (probability < 5%) to have occurred by chance alone if there were really no differences in the population • But is the inference about Open Court valid in other contexts?

  40. Quantifying the Discourse for Borman et al.: What would it take to change the inference? δ = a population effect, δ̂ = the estimated effect = 7.95, and δ# = the threshold for making an inference = se x t critical (df = 46) = 1.83 x 2.013 = 3.68. % Bias necessary to invalidate the inference = 1 - δ#/δ̂ = 1 - 3.68/7.95 = 54%. 54% of the estimate must be due to bias to invalidate the inference.

  41. Calculating the % Bias to Invalidate the Inference: Entering Values and Calculating δ̂ = the estimated effect = 7.95; standard error = 1.83; t critical = 2.013. δ# = the threshold for making an inference = se x t critical (df = 46) = 1.83 x 2.013 = 3.68 [user can override to specify a threshold]. % Bias necessary to invalidate the inference = 1 - δ#/δ̂ = 1 - 3.68/7.95 = 54%. 54% of the estimate must be due to bias to invalidate the inference.
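The same spreadsheet entries, reproduced as a short sketch in Python:

```python
# Open Court example (numbers from the slide).
estimate, se, t_critical = 7.95, 1.83, 2.013

threshold = se * t_critical            # about 3.68
pct_bias  = 1 - threshold / estimate   # about 0.54
print(round(threshold, 2), round(pct_bias, 2))  # 3.68  0.54 -> 54%
```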

  42. % Exceeding Threshold for Open Court Estimated Effect [Figure: estimated effect of 7.95 with the portion above the threshold δ# = 3.68 bracketed] % above threshold = 1 - 3.68/7.95 = .54. 54% of the estimate must be due to bias to invalidate the inference.

  43. Fundamental Problem of Inference to an Unsampled Population (External Validity) But how well does the observed data represent both populations? [Figure: observed sample cases (treated 9, 10, 11; control 3, 4, 5; estimated effect 6) alongside cases from an unsampled population]

  44. Fundamental Problem of Inference and Approximating the Counterfactual with Observed Data (External Validity) How many cases would you have to replace with cases with zero effect to change the inference? Assume the threshold is δ# = 4: 1 - δ#/δ̂ = 1 - 4/6 = .33 = 1/3. [Figure: observed cases (treated 9, 10, 11; control 3, 4, 5) alongside zero-effect replacement cases from the unsampled population; replacing one third of the cases brings the estimate from 6 down to 4]

  45. Interpretation of Amount of Bias Necessary to Invalidate the Inference: Sample Representativeness To invalidate the inference: • 54% of the estimate must be due to sampling bias to invalidate Borman et al.’s inference • You would have to replace 54% of Borman’s cases (about 30 classes) with cases in which Open Court had no effect to invalidate the inference • Are 54% of Borman et al.’s cases irrelevant for non-volunteer schools? • We have quantified the discourse about the concern of validity

  46. Example Replacement of Cases from Non-Volunteer Schools to Invalidate Inference of an Effect of the Open Court Curriculum [Figure: distributions for Open Court and business-as-usual classrooms; original volunteer cases that were not replaced are shown alongside replacement cases from non-volunteer schools with no treatment effect and the original distribution for all volunteer cases]

  47. Example Replacement of Cases from Non-Volunteer Schools to Invalidate Inference of an Effect of the Open Court Curriculum [Figure: distributions for Open Court and business-as-usual classrooms; original volunteer cases that were not replaced are shown alongside replacement cases from non-volunteer schools with no treatment effect and the original distribution for all volunteer cases]
