
Rules for Causal Graphs



Presentation Transcript


  1. Rules for Causal Graphs References: Causality: Models, Reasoning and Inference by Judea Pearl - chapter 3 (2000) Causal diagrams for empirical research by Judea Pearl, Biometrika 82:669-710 (1995)

  2. Example of Causal Graph • CD4 is a confounder of the effect of medical risk. • Observational studies ⇒ confounding concern. [Diagram omitted; nodes: Boiled/bottled, Diarrhea, CD4, Med. risk]

  3. Prediction vs. Explanation • Prediction – create a model that the clinician will use to help predict risk of a disease for the patient. • Explanation – trying to investigate the causal association of a treatment or risk factor and a disease outcome. • Confounding is really an issue in explanation.

  4. The problem with observational studies: lack of randomization • If one has a treatment, or risk factor, with two levels (A and B), there is no guarantee that the study populations (those getting A and B) will be roughly equivalent (in risk of the disease of interest). • In a perfect world one could give everyone in the study level A, record the outcome, reset the clock, and then give level B. • Randomization means one can interpret estimates as if this is precisely what was done.
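The contrast can be illustrated with a small simulation (all numbers hypothetical): an unmeasured risk factor U drives both treatment choice and outcome, so the naive observational contrast is biased, while randomized assignment recovers the true effect.

```python
import random

random.seed(0)

def simulate(randomized, n=100_000):
    """Simulate binary treatment A and outcome Y with an unmeasured
    risk factor U that raises both treatment probability and disease risk."""
    sums = [0, 0]    # outcome totals by treatment arm
    counts = [0, 0]
    for _ in range(n):
        u = random.random() < 0.5
        if randomized:
            a = random.random() < 0.5                      # coin-flip assignment
        else:
            a = random.random() < (0.8 if u else 0.2)      # sicker patients treated more
        p_y = 0.1 + 0.1 * a + 0.3 * u                      # true effect of A is +0.1
        y = random.random() < p_y
        sums[a] += y
        counts[a] += 1
    return sums[1] / counts[1] - sums[0] / counts[0]

naive = simulate(randomized=False)  # biased upward by the imbalance in U
rct = simulate(randomized=True)     # close to the true effect, 0.1
```

With non-random assignment the treated arm is enriched for U, so the naive difference overstates the effect; randomization balances U across arms by design.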

  5. Accounting for lack of randomization in the design phase • Assuming you have your risk factor or treatment of interest and have chosen the relevant disease outcome, the first step should be constructing a causal or path diagram. • The diagram will reflect the state of knowledge (or the best guess) regarding the disease process. • Can be used to decide the minimal number of variables one should include in a statistical procedure (e.g., regression).

  6. Using the Causal Graph - Pearl • Explicate the assumptions underlying the model (i.e., make the graph) • Decide whether the assumptions are sufficient for obtaining a consistent estimate of the target quantity (employ the rules) • If the answer to 2 is yes, provide an expression for the target quantity in terms of distributions of observed variables • If the answer to 2 is no, suggest a set of observations and experiments which, if performed, would render a consistent estimate.

  7. Graph Terminology • Nodes – vertices on a graph • Edge – line or arrow connecting two nodes • Adjacent – two variables connected by an edge • Path – sequence of edges • Directed Path – arrows at the end of every edge • Acyclic – No loops • DAG – directed acyclic graph • Parents, children, descendants, etc. A Couple of Real DAGs

  8. Causal Graphs Simplify the Joint Distribution • Let the data be a sample of size n: X1, X2, …, Xn. • Without the graph, the joint distribution factors as: P(x1, …, xn) = ∏i P(xi | x1, …, xi−1) • With the graph, given all the parents, PAi, of node xi, xi is independent of all of its other predecessors on the graph, so: P(x1, …, xn) = ∏i P(xi | PAi)
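A minimal sketch of the factorization, with a hypothetical three-node chain A → B → C and made-up conditional probability tables:

```python
# Hypothetical graph A -> B -> C with made-up conditional tables.
parents = {"A": (), "B": ("A",), "C": ("B",)}
cpt = {
    "A": {(): {0: 0.6, 1: 0.4}},
    "B": {(0,): {0: 0.9, 1: 0.1}, (1,): {0: 0.3, 1: 0.7}},
    "C": {(0,): {0: 0.8, 1: 0.2}, (1,): {0: 0.5, 1: 0.5}},
}

def joint(assignment):
    """P(x1,...,xn) = prod_i P(xi | PAi), read directly off the graph."""
    prob = 1.0
    for node, pa in parents.items():
        key = tuple(assignment[q] for q in pa)   # parent values for this node
        prob *= cpt[node][key][assignment[node]]
    return prob

p = joint({"A": 1, "B": 1, "C": 0})  # 0.4 * 0.7 * 0.5 = 0.14
```

Each node contributes one factor conditioned only on its parents, rather than on all of its predecessors.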

  9. Example • Joint distribution of graph (b) is:

  10. Formal Definition of Causal Graph • A DAG G is a causal graph if, for each node Xi with parents PAi, we have Xi = fi(PAi, ei), with the ei independent random variables and fi a deterministic function.

  11. Causal Graph Rules • Pearl and Robins have developed a set of graphical rules that one can apply to a graph to decide which variables might confound the effect of interest. • The “back-door” criterion (Pearl, Biometrika, 1995) implies that no confounding exists in a causal diagram if every path from X (the risk factor) to Y (the outcome) that begins with an arrow pointing into X also contains a pair of arrows pointing head-to-head.

  12. d-separation • Def: a path, p, is said to be d-separated (or blocked) by a set of nodes Z if and only if: • p contains a chain i → m → j or a fork i ← m → j such that the middle node m is in Z, or • p contains an inverted fork (or collider) i → m ← j such that the middle node m is not in Z and such that no descendant of m is in Z. • A set Z is said to d-separate X from Y if and only if Z blocks every path from a node in X to a node in Y.

  13. Example of d-separated and d-connected • X = {X2} and Y = {X3} are d-separated by Z = {X1}. • The path X2 – X4 – X3 is blocked by the collider X4 (and all of its descendants are outside Z). • However, X and Y are not d-separated by Z′ = {X1, X5}, since X5 is a descendant of the collider X4. So conditioning on X5 renders X2 and X3 dependent.
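The d-separation rules above lend themselves to direct implementation. Below is a minimal Python sketch, assuming (a guess, since the figure is not reproduced) that the example graph is X1 → X2, X1 → X3, X2 → X4, X3 → X4, X4 → X5:

```python
def descendants(dag, node):
    """All descendants of node in dag (dict: node -> list of children)."""
    seen, stack = set(), [node]
    while stack:
        for c in dag.get(stack.pop(), []):
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def all_paths(dag, x, y):
    """Enumerate undirected paths from x to y as lists of (node, dir, node)."""
    edges = {}
    for u, children in dag.items():
        for v in children:
            edges.setdefault(u, []).append((v, "->"))
            edges.setdefault(v, []).append((u, "<-"))
    def walk(node, visited, path):
        if node == y:
            yield list(path)
            return
        for nxt, d in edges.get(node, []):
            if nxt not in visited:
                yield from walk(nxt, visited | {nxt}, path + [(node, d, nxt)])
    yield from walk(x, {x}, [])

def path_blocked(dag, path, Z):
    """Check each middle node m against the chain/fork/collider rules."""
    for i in range(len(path) - 1):
        (_, d1, m), (_, d2, _) = path[i], path[i + 1]
        if d1 == "->" and d2 == "<-":            # collider i -> m <- j
            if m not in Z and not (descendants(dag, m) & Z):
                return True                      # blocked: m, descendants outside Z
        elif m in Z:                             # chain or fork with m in Z
            return True
    return False

def d_separated(dag, x, y, Z):
    return all(path_blocked(dag, p, Z) for p in all_paths(dag, x, y))

dag = {"X1": ["X2", "X3"], "X2": ["X4"], "X3": ["X4"], "X4": ["X5"]}
sep = d_separated(dag, "X2", "X3", {"X1"})  # True: both paths are blocked
```

Adding X5 to the conditioning set reopens the collider path, so `d_separated(dag, "X2", "X3", {"X1", "X5"})` comes out False, matching the slide.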

  14. Causal Graphs and Berkson’s Paradox • It may seem unintuitive that conditioning on a node that does not even lie on a blocked path may unblock that path. • However, the general rule is that observations on a common consequence of two independent causes tend to render those causes dependent. • This phenomenon is often called selection bias or Berkson’s paradox. • Example: admission to college based on either high test scores or special musical talent.
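The college-admissions example can be checked by simulation (admission rule and probabilities hypothetical): score and talent are independent overall, but become negatively correlated once we condition on admission, their common consequence.

```python
import random

random.seed(1)

N = 200_000
score = [random.random() < 0.3 for _ in range(N)]  # high test score (cause 1)
music = [random.random() < 0.3 for _ in range(N)]  # music talent (cause 2), independent

def corr(xs, ys):
    """Pearson correlation of two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

overall = corr(score, music)  # near 0: the two causes are independent

# Condition on the common consequence: admitted if high score OR music talent.
admitted = [(s, m) for s, m in zip(score, music) if s or m]
selected = corr([s for s, _ in admitted], [m for _, m in admitted])  # negative
```

Among the admitted, knowing a student lacks musical talent makes a high test score more likely, which is exactly the induced dependence the slide describes.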

  15. Example 1

  16. In this case, the back-door criterion implies that the effect of X on Y is not confounded by A, Z or E. • The only arrow into X lies on the path (X, E, Z, A, Y), and this path contains two arrows pointing head-to-head at Z. • Z is referred to as a barren proxy.

  17. Consequences of Adjusting for Z • The standard statistical definition of a confounder implies that Z is a confounder in this case (this is wrong). • Statistically adjusting for Z when estimating the effect of X on Y will give a biased effect estimate. • Thus, one should not necessarily “control” for every variable that is related to both the disease and the treatment/exposure of interest.
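This consequence can be demonstrated numerically. Below is a hypothetical linear version of the diagram: E and A are unmeasured, Z is the collider (the barren proxy), and the true X → Y coefficient is 1.0; the crude estimate is unbiased, while the Z-adjusted one is not.

```python
import random

random.seed(2)

def draw(n=200_000):
    """Hypothetical linear model: E -> X, E -> Z, A -> Z, A -> Y, X -> Y."""
    rows = []
    for _ in range(n):
        e = random.gauss(0, 1)                  # E: unmeasured cause of X and Z
        a = random.gauss(0, 1)                  # A: unmeasured cause of Z and Y
        z = e + a + random.gauss(0, 1)          # Z: head-to-head arrows meet here
        x = e + random.gauss(0, 1)
        y = x + a + random.gauss(0, 1)          # true causal coefficient on x: 1.0
        rows.append((x, y, z))
    return rows

def slope(rows):
    """Crude OLS slope of y on x."""
    n = len(rows)
    mx = sum(r[0] for r in rows) / n
    my = sum(r[1] for r in rows) / n
    sxy = sum((r[0] - mx) * (r[1] - my) for r in rows)
    sxx = sum((r[0] - mx) ** 2 for r in rows)
    return sxy / sxx

def slope_adj(rows):
    """OLS coefficient on x from regressing y on x and z (normal equations)."""
    n = len(rows)
    mx = sum(r[0] for r in rows) / n
    my = sum(r[1] for r in rows) / n
    mz = sum(r[2] for r in rows) / n
    sxx = sxz = szz = sxy = szy = 0.0
    for x, y, z in rows:
        x, y, z = x - mx, y - my, z - mz
        sxx += x * x; sxz += x * z; szz += z * z
        sxy += x * y; szy += z * y
    det = sxx * szz - sxz * sxz
    return (szz * sxy - sxz * szy) / det

rows = draw()
crude = slope(rows)         # about 1.0: X-Y is unconfounded as drawn
adjusted = slope_adj(rows)  # pulled away from 1.0: conditioning on Z opens E-A
```

Conditioning on the collider Z creates a spurious E–A association, which leaks into the X coefficient.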

  18. Example 2 - Using Diagrams to select confounders. • Goal: get list of variables one will need to adjust for to get unconfounded effect of Xi (risk variable) on Xj (outcome).

  19. In this case, one could adjust for {X3,X4} or {X4,X5} but not just for {X4}.

  20. Path Diagrams and Residual Confounding Path diagrams can also be used to determine whether there will be residual confounding from unmeasured variables.

  21. Path Diagrams and Residual Confounding In this case, one can consistently estimate the effect of X on Y from the observed variables X, Z1, Z2, Z3 and Y.

  22. More on Sufficient Set of Adjustment • Let A be the explanatory variable of interest and Y the outcome of interest on a causal graph. • Let L be a measured set of non-descendants (of A). • Let U be the remaining non-descendants. • Let b(y|a) be the G-computation formula (Robins): b(y|a) = Σl,u P(y | a, l, u) P(l, u) (1) • If A is independent of Ya, given (L, U), for each a, then: b(y|a) = P(Ya = y)
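The G-computation formula can be evaluated directly on a toy discrete model (binary L, U, A, Y; all distributions made up for illustration):

```python
from itertools import product

# Hypothetical discrete model: L and U independent, each fair.
p_lu = {(l, u): 0.25 for l, u in product((0, 1), repeat=2)}

def p_y_given(y, a, l, u):
    """P(Y=y | A=a, L=l, U=u) for a made-up outcome model."""
    p1 = 0.1 + 0.2 * a + 0.2 * l + 0.3 * u
    return p1 if y == 1 else 1 - p1

def b(y, a):
    """G-computation formula: b(y|a) = sum_{l,u} P(y | a, l, u) P(l, u)."""
    return sum(p_y_given(y, a, l, u) * p_lu[(l, u)]
               for l, u in product((0, 1), repeat=2))

effect = b(1, 1) - b(1, 0)  # standardized risk difference: 0.2 by construction
```

Note that the formula standardizes over the marginal distribution of (L, U), not the distribution conditional on A; no model for the treatment mechanism is needed.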

  23. What if U is unobserved? • Now the G-computation formula b(y|a) is not identifiable. • However, there are conditions under which L alone is a sufficient set of adjustment, that is: P(Ya = y) = Σl P(y | a, l) P(l)

  24. Back door path condition • There is no back door path from A to Y if Y is d-separated from A in GA, where GA is the graph obtained from G by deleting all outgoing arrows from A: (Y ⊥ A) in GA • There is no back door path from A to Y, controlling for L, if Y and A are d-separated by L in GA: (Y ⊥ A | L) in GA • Theorem: if there is no back door path from A to Y controlling for L, then L is sufficient for adjustment: P(Ya = y) = Σl P(y | a, l) P(l)

  25. Interventions in Causal Graphs • The causal effect of a variable (node) Xi can be defined as how the outcome, Y, changes when this variable is set to some value, thereby breaking the influence of its predecessors. • This basic insight translates into the G-estimation algorithm of Robins (1986). • After intervening in the graph, by setting Xi = xi′, the joint distribution of the data becomes: P(x1, …, xn | set(Xi = xi′)) = ∏j≠i P(xj | PAj) when xi = xi′, and 0 otherwise (the factor P(xi | PAi) is simply removed).
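The truncated factorization can be computed on a toy graph. Here is a hypothetical binary model U → A, U → Y, A → Y (all numbers made up), intervening on A:

```python
# Hypothetical binary DAG U -> A, U -> Y, A -> Y.
def p_u(u):
    return 0.5

def p_a(a, u):
    """P(A=a | U=u): sicker (u=1) subjects are treated more often."""
    p1 = 0.8 if u else 0.2
    return p1 if a == 1 else 1 - p1

def p_y(y, a, u):
    p1 = 0.1 + 0.2 * a + 0.4 * u
    return p1 if y == 1 else 1 - p1

def post_intervention(y, a_set):
    """Truncated factorization: drop the factor P(a|u), clamp A = a_set."""
    return sum(p_u(u) * p_y(y, a_set, u) for u in (0, 1))

def observational(y, a_obs):
    """Ordinary conditioning: P(Y=y | A=a_obs)."""
    num = sum(p_u(u) * p_a(a_obs, u) * p_y(y, a_obs, u) for u in (0, 1))
    den = sum(p_u(u) * p_a(a_obs, u) for u in (0, 1))
    return num / den

causal = post_intervention(1, 1) - post_intervention(1, 0)  # 0.2: true effect
naive = observational(1, 1) - observational(1, 0)           # larger: confounded
```

Setting A severs the U → A arrow, so U keeps its marginal distribution in both arms; ordinary conditioning instead tilts U toward the treated, inflating the contrast.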

  26. Front-Door Criterion • The orthodoxy is that one should not adjust for a variable on the causal pathway when estimating the causal effect. • However, there are exceptions: consider the following diagram.

  27. Front-Door Criterion, cont. • The joint distribution suggested by the graph (X ← U → Y, X → Z → Y, with U unobserved) is: P(x, y, z, u) = P(u) P(x | u) P(z | x) P(y | z, u) • The intervention set x = x′ removes the factor P(x | u), inducing the post-intervention distribution: P(y, z, u | set(x′)) = P(u) P(z | x′) P(y | z, u) • Summing over z and u gives: P(y | set(x′)) = Σz P(z | x′) Σu P(y | z, u) P(u)

  28. Front-Door Criterion, cont. • Using the conditional independence assumptions encoded in the graph, one can show: P(y | set(x′)) = Σz P(z | x′) Σx P(y | z, x) P(x) • So, the causal effect is identifiable from the observed variables. • Front-Door: a set of variables Z is said to satisfy the front-door criterion relative to an ordered pair of variables (X, Y) iff: • Z intercepts all directed paths from X to Y; • there is no back-door path from X to Z; and • all back-door paths from Z to Y are blocked by X.
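On a hypothetical binary model of the front-door graph (X ← U → Y, X → Z → Y; all numbers made up), the front-door formula, which uses only the observables X, Z, Y, can be checked against the ground-truth intervention computed from the full model including U:

```python
# Hypothetical binary model for the front-door graph; U is unobserved.
def p_u(u):
    return 0.5

def p_x(x, u):
    p1 = 0.7 if u else 0.3
    return p1 if x == 1 else 1 - p1

def p_z(z, x):
    p1 = 0.9 if x else 0.1
    return p1 if z == 1 else 1 - p1

def p_y(y, z, u):
    p1 = 0.2 + 0.3 * z + 0.3 * u
    return p1 if y == 1 else 1 - p1

def p_joint(u, x, z, y):
    return p_u(u) * p_x(x, u) * p_z(z, x) * p_y(y, z, u)

def truth(y, x_set):
    """Ground truth: intervene on X in the full model (uses the hidden U)."""
    return sum(p_u(u) * p_z(z, x_set) * p_y(y, z, u)
               for u in (0, 1) for z in (0, 1))

def front_door(y, x_set):
    """P(y|set(x')) = sum_z P(z|x') sum_x P(y|z,x) P(x) -- observables only."""
    px = {x: sum(p_joint(u, x, z, yy) for u in (0, 1) for z in (0, 1)
                 for yy in (0, 1)) for x in (0, 1)}
    def p_y_zx(y, z, x):
        num = sum(p_joint(u, x, z, y) for u in (0, 1))
        den = sum(p_joint(u, x, z, yy) for u in (0, 1) for yy in (0, 1))
        return num / den
    return sum(p_z(z, x_set) * sum(p_y_zx(y, z, x) * px[x] for x in (0, 1))
               for z in (0, 1))

fd = front_door(1, 1) - front_door(1, 0)  # from observed X, Z, Y only
gt = truth(1, 1) - truth(1, 0)            # the two agree exactly
```

Even though X and Y are confounded by the unobserved U, routing the effect through the mediator Z identifies it.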

  29. Homework • Write down the distribution of the data implied by the following graphs A B C

  30. Example of Causal Diagram from HIV Study

  31. Marginal Model Effect of AZT on viral load • An example of a marginal model might be: E(Ya) = β0 + β1 cum(a), where cum( ) is the cumulative dose of AZT until end of follow-up. • Given time-dependent confounders, the traditional approach would be to form an adjusted model such as: E(Y | A, L) = β0 + β1 cum(A) + β2 cum(L), where cum(L) might be the cumulative CD4 count.

  32. Traditional Approach Incorrect • CD4 counts at the monitoring times are probably related to viral load at end of follow-up and are related to AZT treatment as well. However, earlier AZT treatment, for instance at time 0, also affects future CD4 counts. • Thus, CD4 is both a confounder (usual rule: control for it) and on the causal pathway (usual rule: don’t control for it). • Simple adjustment can yield very biased results.
