Sensitivity Analysis: Quantifying the Discourse About Causal Inference
Kenneth A. Frank, with help from Yun-jia Lo and Mike Seltzer

Presentation Transcript

  1. Sensitivity Analysis: Quantifying the Discourse About Causal Inference. Kenneth A. Frank, with help from Yun-jia Lo and Mike Seltzer. AERA workshop, April 4, 2014. Topics Covered: Participants will learn how to quantify concerns about causal inferences due to unobserved variables or populations. Participants will learn how to calculate the correlations associated with an unobserved confounding variable, or the amount of one's sample that would have to be replaced to invalidate an inference. I will also present a general framework for characterizing the robustness of inferences from randomized experiments or observational studies. Calculations for bivariate and multivariate analysis will be presented in SPSS, SAS, and Stata, with an Excel spreadsheet for other applications. Additional topics include a typology of thresholds for making inferences, null hypotheses of non-zero effects, evaluating thresholds relative to characteristics of observed variables or populations, and extensions to non-linear models. Format: The format will be a mixture of presentation, individual exploration, and group work. Participants may include graduate students and professors, although all must be comfortable with basic regression and multiple regression. Participants should bring their own laptop, or be willing to work with another student who has a laptop. Participants may choose to bring to the course an example of an inference from a published study or their own work, as well as data analyses they are currently conducting.

  2. Overview
  • Replacement Cases Framework (40 minutes to reflection): thresholds for inference and % bias to invalidate an inference; the counterfactual paradigm; application to concerns about non-random assignment to treatments; application to concerns about non-random samples
  • Reflection (10 minutes)
  • Examples of the replacement framework: internal validity example, the effect of kindergarten retention on achievement (40 minutes to break); external validity example, the effect of the Open Court curriculum on achievement
  • Review and Reflection
  • Extensions of the framework
  • Exercise and break (20 minutes; Yun-jia Lo and Mike Seltzer support)
  • Correlational Framework (25 minutes to exercise): how regression works; impact of a confounding variable; internal validity, the impact necessary to invalidate an inference; example, the effect of kindergarten retention on achievement
  • Exercise (25 minutes; Mike Seltzer supports on-line)
  • External validity (30 minutes): combining estimates from different populations; example, the effect of the Open Court curriculum on achievement
  • Conclusion (10 minutes)

  3. Materials for course https://www.msu.edu/~kenfrank/research.htm#causal

  4. Quick Survey Can you make a causal inference from an observational study?

  5. Answer: Quantifying the Discourse Can you make a causal inference from an observational study? Of course you can. You just might be wrong. It’s causal inference, not determinism. But what would it take for the inference to be wrong?

  6. I: Replacement of Cases Framework How much bias must there be to invalidate an inference? Concerns about Internal Validity • What percentage of cases would you have to replace with counterfactual cases (with zero effect) to invalidate the inference? Concerns about External Validity • What percentage of cases would you have to replace with cases from an unsampled population (with zero effect) to invalidate the inference?

  7. What Would It Take to Change an Inference? Using Rubin's Causal Model to Interpret the Robustness of Causal Inferences. Abstract: We contribute to debate about causal inferences in educational research in two ways. First, we quantify how much bias there must be in an estimate to invalidate an inference. Second, we utilize Rubin's causal model (RCM) to interpret the bias necessary to invalidate an inference in terms of sample replacement. We apply our analysis to an inference of a positive effect of the Open Court curriculum on reading achievement from a randomized experiment, and an inference of a negative effect of kindergarten retention on reading achievement from an observational study. We consider details of our framework, and then discuss how our approach informs judgment of inference relative to study design. We conclude with implications for scientific discourse. Keywords: causal inference; Rubin's causal model; sensitivity analysis; observational studies. Frank, K. A., Maroulis, S., Duong, M., and Kelcey, B. (2013). What would it take to change an inference? Using Rubin's causal model to interpret the robustness of causal inferences. Educational Evaluation and Policy Analysis, 35, 437-460. http://epa.sagepub.com/content/early/recent

  8. % bias necessary to invalidate the inference (figure)

  9. Quantifying the Discourse: Formalizing Bias Necessary to Invalidate an Inference. Let δ = a population effect, δ̂ = the estimated effect, and δ# = the threshold for making an inference. An inference is invalid if: δ̂ > δ# > δ. (1) That is, an inference is invalid if the estimate is greater than the threshold while the population value is less than the threshold. Defining bias as δ̂ - δ, (1) implies the inference is invalid if and only if: bias = δ̂ - δ > δ̂ - δ#. Expressed as a proportion of the estimate, the inference is invalid if: bias/δ̂ > 1 - δ#/δ̂.
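The proportion form of this rule is what the spreadsheet introduced later in the deck computes. A minimal sketch in Python (the function name is mine, not from the workshop materials):

```python
def pct_bias_to_invalidate(estimate, threshold):
    """Proportion of the estimate that must be due to bias to
    invalidate the inference: 1 - threshold/estimate."""
    return 1 - threshold / estimate

# Toy example used on the slides that follow: estimate 6, threshold 4
print(pct_bias_to_invalidate(6, 4))  # about 0.33: a third of the estimate
```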

  10. % bias necessary to invalidate the inference (figure, with threshold δ# marked)

  11. Interpretation of % Bias to Invalidate an Inference. % bias is intuitive; it relates to how we think about statistical significance, and it is better than "highly significant" or "barely significant." But we need a framework for interpreting it.

  12. Framework for Interpreting % Bias to Invalidate an Inference: Rubin’s Causal Model and the Counterfactual • I have a headache • I take an aspirin (treatment) • My headache goes away (outcome) Q) Is it because I took the aspirin? • We’ll never know – it is counterfactual – for the individual This is the Fundamental Problem of Causal Inference

  13. Definition of Replacement Cases as Counterfactual: Potential Outcomes. Definition of the treatment effect for individual i: δi = Yi(t) - Yi(c), where Yi(t) is i's potential outcome under treatment and Yi(c) is i's potential outcome under control. The fundamental problem of causal inference is that we cannot simultaneously observe Yi(t) and Yi(c).

  14. Fundamental Problem of Inference and Approximating the Counterfactual with Observed Data (Internal Validity). But how well do the observed data approximate the counterfactual? [figure: treatment group observed outcomes 9, 10, 11; control group observed outcomes 3, 4, 5; the unobserved counterfactual values are unknown, shown as 6?]

  15. Symbolic: Fundamental Problem of Inference and Approximating the Counterfactual with Observed Data (Internal Validity). But how well do the observed data approximate the counterfactual? [figure: observed cells Yt|X=t and Yc|X=c; unobserved counterfactual cells Yc|X=t and Yt|X=c]

  16. Approximating the Counterfactual with Observed Data. But how well do the observed data approximate the counterfactual? [figure: treatment group observed values 9, 10, 11; their counterfactual (untreated) values 8, 9, 10, implying a true treatment effect of 1 per case; observed control cases 3, 4, 5, with mean 4] The difference between the counterfactual values and the observed values for the control implies that a treatment effect of 1 is overestimated as 6 when using the observed control cases (mean of 4).
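The toy numbers on this slide can be checked directly. A sketch (variable names are mine):

```python
treated_observed = [9, 10, 11]       # Y_t for the treatment group
treated_counterfactual = [8, 9, 10]  # what they would have scored untreated
control_observed = [3, 4, 5]         # observed control cases

def mean(xs):
    return sum(xs) / len(xs)

# True effect uses each treated case's own counterfactual: 10 - 9 = 1
true_effect = mean(treated_observed) - mean(treated_counterfactual)
# Estimated effect substitutes the observed controls: 10 - 4 = 6
estimated_effect = mean(treated_observed) - mean(control_observed)
print(true_effect, estimated_effect)  # 1.0 6.0
```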

  17. Using the Counterfactual to Interpret % Bias to Invalidate the Inference. How many cases would you have to replace with zero-effect counterfactuals to change the inference? Assume the threshold is 4 (δ# = 4): 1 - δ#/δ̂ = 1 - 4/6 = .33 (1/3). [figure: treated cases 9, 10, 11 and control cases 3, 4, 5 give δ̂ = 10 - 4 = 6.00; replacement counterfactual cases have value 6 with zero effect] The inference would be invalid if you replaced 33% of cases (here, one case) with counterfactuals for which there was no treatment effect. New estimate = (1 - % replaced)δ̂ + (% replaced)(no effect) = (1 - % replaced)δ̂ = (1 - .33)(6) ≈ 4.
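The replacement arithmetic on this slide can be sketched as follows (assuming, as the slide states, that replacement cases contribute zero effect; the function name is mine):

```python
def estimate_after_replacement(estimate, pct_replaced):
    """New estimate after replacing a fraction of cases with
    zero-effect cases: (1 - pct_replaced)*estimate + pct_replaced*0."""
    return (1 - pct_replaced) * estimate

estimate, threshold = 6, 4
pct = 1 - threshold / estimate            # 1/3 of cases must be replaced
new_est = estimate_after_replacement(estimate, pct)
print(pct, new_est)  # replacing a third drives the estimate down to the threshold
```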

  18. % bias necessary to invalidate the inference (figure, with threshold δ# marked). To invalidate the inference, replace 33% of cases with counterfactual data with zero effect.

  19. Fundamental Problem of Inference to an Unsampled Population (External Validity). But how well do the observed data represent both populations? [figure: sampled population with treated cases 9, 10, 11 and control cases 3, 4, 5, means 10 and 4, estimated effect 6; counterfactual values from the unsampled population shown as 8, 8, 8 and 6, 6, 6]

  20. Fundamental Problem of Inference and Approximating the Unsampled Population with Observed Data (External Validity). How many cases would you have to replace with zero-effect cases to change the inference? Assume the threshold is δ# = 4: 1 - δ#/δ̂ = 1 - 4/6 = .33 (1/3). [figure: sampled population treated cases Yt|Z=p = 9, 10, 11 and control cases Yc|Z=p = 3, 4, 5; replacement cases from the unsampled population, Yt|Z=p′ and Yc|Z=p′, have value 6 with zero effect]

  21. Fundamental Problem of Inference and Approximating the Unsampled Population with Observed Data (External Validity). How many cases would you have to replace with zero-effect cases to change the inference? Assume the threshold is δ# = 4: 1 - δ#/δ̂ = 1 - 4/6 = .33 (1/3). [figure: as on the previous slide, with a treated case replaced by an unsampled-population case Yt|Z=p′ = 6 with zero effect]

  22. % bias necessary to invalidate the inference (figure, with threshold δ# marked). To invalidate the inference, replace 33% of cases with cases from an unsampled population with zero effect.

  23. Review & Reflection. Review of Framework: Pragmatism → thresholds. How much does an estimate exceed the threshold? → % bias to invalidate the inference. Interpretation: Rubin's causal model. • Internal validity: % bias to invalidate → number of cases that must be replaced with counterfactual cases (for which there is no effect). • External validity: % bias to invalidate → number of cases that must be replaced with cases from an unobserved population (for which there is no effect). Reflect: Which part is most confusing to you? Is there more than one interpretation? Discuss with a partner or two.

  24. Example of Internal Validity from an Observational Study: The Effect of Kindergarten Retention on Reading and Math Achievement (Hong and Raudenbush, 2005). 1. What is the average effect of kindergarten retention policy? (Example used here.) Should we expect to see a change in children's average learning outcomes if a school changes its retention policy? Propensity-based questions (not explored here): 2. What is the average impact of a school's retention policy on children who would be promoted if the policy were adopted? Use principal stratification. Hong, G. and Raudenbush, S. (2005). Effects of Kindergarten Retention Policy on Children's Cognitive Growth in Reading and Mathematics. Educational Evaluation and Policy Analysis, Vol. 27, No. 3, pp. 205-224.

  25. Data • Early Childhood Longitudinal Study, Kindergarten cohort (ECLS-K) • US National Center for Education Statistics (NCES) • Nationally representative • Kindergarten and 1st grade, observed Fall 1998, Spring 1999, and Spring 2000 • Student background and educational experiences • Math and reading achievement (dependent variable) • Experience in class • Parenting information and style • Teacher assessment of student • School conditions • Analytic sample (1,080 schools that retain some children): 471 kindergarten retainees and 10,255 promoted students

  26. Effect of Retention on Reading Scores (Hong and Raudenbush)

  27. Possible Confounding Variables (note: they controlled for these) • Gender • Two-parent household • Poverty • Mother's level of education (especially relevant for reading achievement) • Extensive pretests, measured in the Spring of 1999 (at the beginning of the second year of school): standardized measures of reading ability, math ability, and general knowledge; indirect assessments of literature, math, and general knowledge that include aspects of a child's process as well as product; teacher's ratings of the child's skills in language, math, and science

  28. Calculating the % Bias to Invalidate the Inference: Obtain the spreadsheet from https://www.msu.edu/~kenfrank/research.htm#causal. Choose the spreadsheet for calculating indices and access it.

  29. Calculating % Bias to Invalidate an Inference Choose % bias to invalidate

  30. Obtain t critical, estimated effect, and standard error. Estimated effect (δ̂) = -9.01. Standard error = .68. n = 7,168 + 471 = 7,639; df > 500, t critical = -1.96. From: Hong, G. and Raudenbush, S. (2005). Effects of Kindergarten Retention Policy on Children's Cognitive Growth in Reading and Mathematics. Educational Evaluation and Policy Analysis, Vol. 27, No. 3, pp. 205-224.

  31. Calculating the % Bias to Invalidate the Inference: Entering Values and Calculating. δ̂ = the estimated effect = -9.01; standard error = .68; t critical = -1.96. δ# = the threshold for making an inference = se x t critical (df > 230) = .68 x -1.96 = -1.33 [the user can specify an alternative threshold]. % bias necessary to invalidate the inference = 1 - δ#/δ̂ = 1 - (-1.33)/(-9.01) = 85%. 85% of the estimate must be due to bias to invalidate the inference.
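The slide's values can be reproduced outside the spreadsheet. A sketch of the same arithmetic in Python:

```python
estimate = -9.01      # estimated retention effect (Hong & Raudenbush, 2005)
se = 0.68             # standard error
t_critical = -1.96    # critical t for df > 500

threshold = se * t_critical           # about -1.33
pct_bias = 1 - threshold / estimate   # about 0.85
print(round(threshold, 2), round(pct_bias, 2))  # -1.33 0.85
```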

  32. Using the Counterfactual to Interpret % Bias to Invalidate the Inference How many cases would you have to replace with zero effect counterfactuals to change the inference? Assume threshold is 4 (δ# =4): 1- δ#/ =1-4/6=.33 =(1/3) 6 6 6 0 0 0 9 10 11 3 4 5 6.00 4 6 The inference would be invalid if you replaced 33% (or 1 case) with counterfactuals for which there was no treatment effect. New estimate=(1-% replaced) +%replaced(no effect)= (1-%replaced) =(1-.33)6=.66(6)=4

  33. Example Replacement of Cases with Counterfactual Data to Invalidate Inference of an Effect of Kindergarten Retention. [figure: distributions for retained and promoted students, comparison in observed data; counterfactual: promoted students, if they had been retained; original distribution with replacement counterfactual cases with zero effect] To invalidate, 85% of promoted students would have to have had most (7.2 points) of their advantage (conditional on pretests, motivation, SES, etc.) if all had been retained.

  34. Example Replacement of Cases with Counterfactual Data to Invalidate Inference of an Effect of Kindergarten Retention. [figure: as on the previous slide, additionally distinguishing the original cases that were not replaced from the replacement counterfactual cases with zero effect] To invalidate, 85% of promoted students would have to have had most (7.2 points) of their advantage (conditional on pretests, motivation, SES, etc.) if all had been retained.

  35. Interpretation. 1) Consider the test scores of a set of children who were retained, which are considerably lower (9 points) than those of others who were candidates for retention but who were in fact promoted. No doubt some of the difference is due to advantages the comparable others had before being promoted. But to believe that retention did not have an effect, one must believe that 85% of those comparable others would have enjoyed most (7.2 points) of their advantage whether or not they had been retained. This is even after controlling for differences on pretests, mother's education, etc. 2) The replacement cases would come from the counterfactual condition for the observed outcomes. That is, 85% of the observed potential outcomes must be unexchangeable with the unobserved counterfactual potential outcomes, such that it is necessary to replace those 85% with the counterfactual potential outcomes to make an inference in this sample. Note that this replacement must occur even after observed cases have been conditioned on background characteristics, school membership, and the pretests used to define comparable groups.

  36. Evaluation of % Bias Necessary to Invalidate Inference. Compare the bias necessary to invalidate the inference with the bias accounted for by background characteristics: 1% of the estimated effect is accounted for by background characteristics (including mother's education), once controlling for pretests. More than 85 times that much unmeasured bias would be necessary to invalidate the inference. Also compare with the % bias necessary to invalidate inferences in other studies, using the correlation metric, which adjusts for differences in scale.

  37. % Bias Necessary to Invalidate Inference Based on Correlation, to Compare Across Studies. t taken from HLM: t = δ̂/se = -9.01/.68 = -13.25. Convert to a correlation: r = t/√(t² + n - q), where n is the sample size and q is the number of parameters estimated. The threshold correlation r# uses the critical t value (df > 200) in the same formula. % bias to invalidate the inference = 1 - r#/r = 1 - .022/.150 = 85%. This accounts for changes in both the regression coefficient and the standard error, because t(r) = t(β).
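The correlation-metric version can be sketched as follows. The numbers are from the retention example; the exact q is not given on the slide, so the value here is illustrative (it barely affects the result at this sample size):

```python
import math

def t_to_r(t, df):
    """Convert a t statistic to a correlation: r = t / sqrt(t^2 + df)."""
    return t / math.sqrt(t * t + df)

n, q = 7639, 5                      # sample size; q is illustrative
df = n - q
r = t_to_r(-9.01 / 0.68, df)        # observed correlation, about -0.150
r_threshold = t_to_r(-1.96, df)     # threshold correlation, about -0.022
pct_bias = 1 - r_threshold / r      # about 0.85, matching the slide
print(round(abs(r), 3), round(abs(r_threshold), 3), round(pct_bias, 2))
```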

  38. Calculating % Bias to Invalidate in terms of Correlations to Compare Across Studies Choose impact and replacement

  39. Compare with Bias in Other Observational Studies

  40. % Bias to Invalidate Inference for observational studies published on-line in EEPA, July 24 to Nov 15, 2012 (kindergarten retention effect shown for comparison)

  41. Exercise 1 : % Bias necessary to Invalidate an Inference • Take an example from an observational study in your own data or an article • Calculate the % bias necessary to invalidate the inference • Interpret the % bias in terms of sample replacement • What are the possible sources of bias? • Would they all work in the same direction? • Debate your inference with a partner

  42. Application to Randomized Experiment: Effect of Open Court Curriculum on Reading Achievement • Open Court “scripted” curriculum versus business as usual • 917 elementary students in 49 classrooms • Comparisons within grade and school • Outcome Measure: Terra Nova comprehensive reading score Borman, G. D., Dowling, N. M., and Schneck, C. (2008). A multi-site cluster randomized field trial of Open Court Reading. Educational Evaluation and Policy Analysis, 30(4), 389-407.

  43. Value of Randomization. Few differences between groups. But randomization was done at the classroom level: teachers might talk to each other, and randomizing at the school level is expensive (Slavin, 2009).

  44. n = 27 + 22 = 49 classrooms

  45. Obtaining # parameters estimated, t critical, estimated effect, and standard error. 3 parameters estimated. df = number of classrooms - number of parameters estimated = 49 - 3 = 46. t critical = t(.05, df = 46) = 2.013. Estimated effect (δ̂) = 7.95. Standard error = 1.83.

  46. Differences between Open Court and Business as Usual. Difference across grades: about 10 units; 7.95 using the statistical model. "Statistically significant": unlikely (probability < 5%) to have occurred by chance alone if there were really no differences in the population. But is the inference about Open Court valid in other contexts?

  47. Quantifying the Discourse for Borman et al:What would it take to change the inference? δ =a population effect, =the estimated effect = 7.95, and δ # =the threshold for making an inference = se x tcritical, df=46 =1.83 x 2.013=3.68 % Bias necessary to invalidate inference = 1- δ #/ =1-3.68/7.95=54% 54% of the estimate must be due to bias to invalidate the inference