
Strategies for Determining Actual Cause


Presentation Transcript


  1. Strategies for Determining Actual Cause Mark Hopkins UCLA Cognitive Systems Lab Tuesday, June 19, 2001

  2. Overview • Review: what is actual cause? • Determining actual cause (under the structural model-based definition proposed by Pearl and Halpern) is NP-hard. • “Theorem-proving” approach

  3. Actual Cause – Motivation “Suzy and Billy both pick up rocks and throw them at a bottle. Suzy’s rock gets there first, shattering the bottle. Since both throws are perfectly accurate, Billy’s would have shattered the bottle if Suzy’s had not occurred.” (Hall 1998) Question: Did Suzy throwing her rock cause the bottle to shatter? Problem: A purely counterfactual approach does not give us the answer we expect!

  4. Assumptions • For the purposes of this discussion, we will be concerned with causation between events under a specific context. Hence we will largely concern ourselves not with a causal model M, but rather with a causal world C = <M, u>, where u is a specific realization of the background variables U of M. • We will also assume that the underlying causal model is acyclic; hence if we represent M as a graph G (with an arrow from X to Y if X is part of the functional mechanism that determines Y), then G is a DAG.

  5. Back to Suzy and Billy. The actual world: ST=1, SH=1, BT=1, BH=0, BS=1. (ST = Suzy Throws, BT = Billy Throws, SH = Suzy Hits, BH = Billy Hits, BS = Bottle Shattered)

  6. Back to Suzy and Billy. The world with Suzy's hit removed (SH=0): ST=1, SH=0, BT=1, BH=1, BS=1; Billy's rock hits and the bottle still shatters. (Same legend as above.)
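For concreteness, here is a minimal sketch (mine, not the presentation's code) of the Suzy/Billy causal world, assuming the structural equations usually given for this example: SH = ST, BH = BT ∧ ¬SH, BS = SH ∨ BH. The two printed worlds correspond to slides 5 and 6.

```python
# A minimal sketch (not the presentation's code) of the Suzy/Billy causal
# world, assuming the structural equations usually given for this example:
#   SH = ST,  BH = BT and not SH,  BS = SH or BH.
def evaluate(do=None, u=None):
    """Compute every variable under context u and interventions `do`."""
    u = u or {"ST": 1, "BT": 1}                 # context: both throw
    do = do or {}

    def val(name, mechanism):
        return do[name] if name in do else mechanism()

    ST = val("ST", lambda: u["ST"])
    BT = val("BT", lambda: u["BT"])
    SH = val("SH", lambda: ST)                  # Suzy hits iff she throws
    BH = val("BH", lambda: int(BT and not SH))  # Billy hits only if Suzy misses
    BS = val("BS", lambda: int(SH or BH))       # bottle shatters if either hits
    return {"ST": ST, "BT": BT, "SH": SH, "BH": BH, "BS": BS}

print(evaluate())            # actual world (slide 5): SH=1, BH=0, BS=1
print(evaluate({"SH": 0}))   # with Suzy's hit removed (slide 6): BH=1, BS=1
```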

  7. Definition – Actual Cause [Halpern and Pearl (2000)] X=x is an actual cause of Y=y in a causal world C if the following three conditions hold: AC1. In C, X=x and Y=y. AC2. There exists a setting (X=x’, W=w’), where X and W are disjoint subsets of V, such that: (a) [X=x’, W=w’] Y ≠ y; (b) [X=x, W=w’] Y = y; (c) [X=x, W=w’, Z=z] Y = y, for all Z ⊆ V – W (where Z=z in C). AC3. X is minimal; no proper subset of X satisfies conditions AC1 and AC2.

  8. Did Suzy’s throw cause the bottle to shatter? Is ST=1 an actual cause of BS=1? Checking AC2(a): [X=x’, W=w’] Y ≠ y. With W = {BH} held at 0 and ST set to 0: ST=0, SH=0, BT=1, BH=0, BS=0, so the bottle does not shatter.

  9. Did Suzy’s throw cause the bottle to shatter? Is ST=1 an actual cause of BS=1? Checking AC2(b): [X=x, W=w’] Y = y, and AC2(c): [X=x, W=w’, Z=z] Y = y for all Z ⊆ V – W (where Z=z in C). With W = {BH} held at 0 and ST restored to 1: ST=1, SH=1, BT=1, BH=0, BS=1, so the bottle shatters and the conditions hold.
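As a concrete illustration of the test just applied, here is a hedged brute-force sketch of conditions AC1, AC2(a), and AC2(b) for a single-variable candidate cause. The `evaluate`/`domains` interface is my own assumption (it matches the toy model sketched earlier and is not from the presentation), and AC2(c) is deliberately not checked, so the function only identifies "potential causes": without AC2(c) it would also accept BT=1, which the full definition rules out.

```python
# A brute-force sketch of AC1 + AC2(a,b) for a single-variable cause.
# `evaluate(do)` returns a dict of variable values under interventions `do`;
# `domains` maps each variable name to its list of possible values.
from itertools import combinations, product

def is_potential_cause(evaluate, domains, X, x, Y, y):
    actual = evaluate({})
    if actual[X] != x or actual[Y] != y:                     # AC1
        return False
    others = [V for V in domains if V not in (X, Y)]
    for r in range(len(others) + 1):                         # every W ...
        for W in combinations(others, r):
            for w in product(*(domains[V] for V in W)):      # ... and every w'
                setting = dict(zip(W, w))
                # AC2(b): with W held at w' and X at its actual value, Y stays y
                if evaluate({**setting, X: x})[Y] != y:
                    continue
                # AC2(a): some alternative value x' makes Y deviate from y
                if any(evaluate({**setting, X: xp})[Y] != y
                       for xp in domains[X] if xp != x):
                    return True          # AC2(c) and AC3 are not checked here
    return False

# Reuses the `evaluate` function from the Suzy/Billy sketch above.
domains = {v: [0, 1] for v in ["ST", "BT", "SH", "BH", "BS"]}
print(is_potential_cause(evaluate, domains, "ST", 1, "BS", 1))   # True
```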

  10. The Difficulty with Determining Actual Cause • It turns out that determining actual cause is NP-hard (Hopkins 2001). • Unfortunately, it also seems that, under the full strength of the definition, determining actual cause is not even in NP: what kind of certificate could we give someone that would let them verify in polynomial time that X=x causes Y=y?

  11. Simplifying Actual Cause X=x is an actual cause of Y=y in a causal world C if the following three conditions hold: AC1. In C, X=x and Y=y. AC2. There exists a setting (X=x’, W=w’), where X and W are disjoint subsets of V, such that: (a) [X=x’, W=w’] Y ≠ y; (b) [X=x, W=w’] Y = y; (c) [X=x, W=w’, Z=z] Y = y, for all Z ⊆ V – W (where Z=z in C). AC3. X is minimal; no proper subset of X satisfies conditions AC1 and AC2.

  12. Further simplifications • We will also be treating causation between single events, rather than conjunctions of events (as in the definition). • This is partly motivated by a theorem that states that there is no such thing as a conjunctive cause under the definition, i.e. any conjunctive cause violates the minimality requirement of AC3 (Hopkins 2001).

  13. Causal Network Pruning • We can limit the variables that we consider for inclusion in W (the variables that we set by external intervention). • In fact, to determine whether X=x causes Y=y, we only need to consider the set of variables V on a path from X to Y, together with the parents of V ∪ {Y}, for inclusion in W (see the sketch below).
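The following sketch, mine rather than the presentation's, illustrates one way to compute this candidate set, assuming the rule means: variables on a directed path from X to Y, plus the parents of those variables and of Y.

```python
# A sketch (mine, not the paper's code) of the pruning rule as I read it:
# candidates for W are the variables on a directed path from X to Y,
# together with the parents of those variables and of Y.
def prune_candidates(parents, X, Y):
    """`parents` maps each variable to the list of its parents in the DAG."""
    children = {}
    for v, ps in parents.items():
        for p in ps:
            children.setdefault(p, []).append(v)

    def reachable(start, adj):
        seen, stack = set(), [start]
        while stack:
            for w in adj.get(stack.pop(), []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    # nodes strictly between X and Y, plus the endpoints themselves
    on_path = (reachable(X, children) & reachable(Y, parents)) | {X, Y}
    candidates = set(on_path)
    for v in on_path:
        candidates |= set(parents.get(v, []))
    return candidates - {X, Y}

# Suzy/Billy network: BT is pulled in as a parent of BH even though it is
# not itself on a path from ST to BS.
graph = {"ST": [], "BT": [], "SH": ["ST"], "BH": ["BT", "SH"], "BS": ["SH", "BH"]}
print(prune_candidates(graph, "ST", "BS"))      # e.g. {'SH', 'BH', 'BT'}
```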

  14. Causal Network Pruning – Example [Diagram: an example causal network containing X and Y, shown before and after pruning to the variables relevant to the query.]

  15. One Approach to Determining Actual Cause [Search-tree diagram for the bottle example: starting from the goal BS = 0, the variables BS, BH, and SH are eliminated in turn, each elimination replacing a goal with settings of its parents; the leaves are interventions such as {ST=0, BT=0}, {SH=0, BT=0}, {ST=0, BH=0}, and {SH=0, BH=0} that force BS ≠ 1.]

  16. “Theorem-Proving Algorithm” • We refer to this algorithm as TP (for Theorem-Proving). Theorem: Any intervention found by algorithm TP gives rise to Y ≠ y in the causal world. Theorem: TP finds every intervention [W=w] of the variables in the causal world C such that [W=w] Y ≠ y, subject to the following conditions: (a) W contains a node on every path from a root node to Y in the causal network associated with C; (b) if W – {V} contains a node on every path from variable V to Y, then V ∉ W. Theorem: TP returns that X=x is an actual cause of Y=y in causal world C if and only if X=x is an actual cause of Y=y in C.
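The sketch below is my own reconstruction of the backward ("theorem-proving") search suggested by slides 15 and 16, not the paper's implementation: starting from the goal Y ≠ y, each goal assignment is either kept as part of the intervention W = w or "eliminated" by regressing through its mechanism to the parent settings that produce it. The pruning conditions (a) and (b) of the theorem are not enforced, so this version over-generates interventions relative to TP; the data-structure names are assumptions.

```python
# A hedged reconstruction (mine, not the paper's code) of the backward search
# idea behind TP.  `parents[v]` lists v's parents, `mechanism[v](pa)` evaluates
# v's structural function on a dict of parent values, `domains[v]` lists v's
# possible values.  TP's pruning conditions (a)/(b) are NOT enforced, so this
# over-generates interventions (and may repeat some).
from itertools import product

def eliminations(var, want, parents, mechanism, domains):
    """Parent assignments under which var's mechanism outputs `want`."""
    ps = parents[var]
    for vals in product(*(domains[p] for p in ps)):
        if mechanism[var](dict(zip(ps, vals))) == want:
            yield dict(zip(ps, vals))

def tp_search(goals, parents, mechanism, domains, intervention=None, derived=None):
    """Yield interventions under which every assignment in `goals` holds."""
    intervention = dict(intervention or {})
    derived = dict(derived or {})        # values we rely on mechanisms to produce
    if not goals:
        yield intervention
        return
    var, want = next(iter(goals.items()))
    rest = {k: v for k, v in goals.items() if k != var}
    if var in intervention or var in derived:          # value already committed
        if intervention.get(var, derived.get(var)) == want:
            yield from tp_search(rest, parents, mechanism, domains,
                                 intervention, derived)
        return                                         # conflict: dead branch
    # Option 1: make this assignment part of the intervention W = w.
    yield from tp_search(rest, parents, mechanism, domains,
                         {**intervention, var: want}, derived)
    # Option 2: "eliminate" var by regressing through its mechanism (non-roots only).
    for pa in (eliminations(var, want, parents, mechanism, domains)
               if parents.get(var) else []):
        if all(rest.get(k, v) == v for k, v in pa.items()):   # consistent subgoals
            yield from tp_search({**rest, **pa}, parents, mechanism, domains,
                                 intervention, {**derived, var: want})

# Bottle example: enumerate interventions that force BS = 0 (i.e. BS != 1).
parents = {"ST": [], "BT": [], "SH": ["ST"], "BH": ["BT", "SH"], "BS": ["SH", "BH"]}
mechanism = {"SH": lambda p: p["ST"],
             "BH": lambda p: int(p["BT"] and not p["SH"]),
             "BS": lambda p: int(p["SH"] or p["BH"])}
domains = {v: [0, 1] for v in parents}
for w in tp_search({"BS": 0}, parents, mechanism, domains):
    print(w)     # includes e.g. {'SH': 0, 'BT': 0} and {'SH': 0, 'BH': 0}
```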

  17. Time Complexity Analysis • In fact, since the TP search tree contains no duplicate nodes, we can get a better complexity bound by revisiting this theorem: TP finds every intervention [W=w] of the variables in the causal world C such that [W=w] Y ≠ y, subject to the following conditions: (a) W contains a node on every path from a root node to Y in the causal network associated with C; (b) if W – {V} contains a node on every path from variable V to Y, then V ∉ W. • The TP search tree contains |S| nodes. • To check whether a node is a goal node requires us to check AC2(b) – a maximum of N calculations. • To generate the successor states of a node requires us to iterate through the truth table of a variable – in the worst case, O(c^k) time. • Thus, we can assert that TP runs in time O((c^k + N) * |S|), where S is the subset of interventions that satisfy the conditions of the theorem, N is the number of variables in the causal world, c is the maximum cardinality of the variables, and k is the maximum number of parents per node.

  18. Experimental conditions • To test the algorithms, we generated random causal worlds through the following process (see the sketch below): • We generate a random DAG over N variables by adding an edge from variable k to variable l, k < l, with probability P_E. We can also limit the number of parents allowed per node to L. • We quantify the table for variable V by randomly choosing the value of each table entry from a uniform distribution over the domain (of size D) of V. • The query under consideration was whether V_1 = v_1 is an actual cause of V_n = v_n (where V_1 is a root variable and V_n is a leaf variable in the causal network associated with the generated causal world).
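A sketch of this generation process under the parameters just described; the parameter names (n_vars, p_edge, max_parents, domain_size) and the exact way the parent limit interacts with edge sampling are my assumptions, not the paper's code.

```python
# A sketch (assumptions mine) of the random causal-world generator described
# above: a random DAG over N variables with edge probability P_E, at most L
# parents per node, and uniformly random table entries over domains of size D.
import random
from itertools import product

def random_causal_world(n_vars, p_edge, max_parents, domain_size, seed=None):
    rng = random.Random(seed)
    domain = range(domain_size)
    parents = {v: [] for v in range(n_vars)}
    for k in range(n_vars):
        for l in range(k + 1, n_vars):
            # edge k -> l with probability p_edge, respecting the parent limit
            if len(parents[l]) < max_parents and rng.random() < p_edge:
                parents[l].append(k)
    # quantify each variable's table by drawing every entry uniformly at random
    tables = {v: {pa: rng.choice(domain)
                  for pa in product(domain, repeat=len(parents[v]))}
              for v in range(n_vars)}
    return parents, tables

def evaluate_world(parents, tables, do=None):
    """Compute all variable values in index (topological) order."""
    do = do or {}
    vals = {}
    for v in sorted(parents):
        vals[v] = do[v] if v in do else tables[v][tuple(vals[p] for p in parents[v])]
    return vals

# Example matching the settings reported on slide 20: N=25, P_E=0.15, L=3, D=2.
parents, tables = random_causal_world(25, 0.15, 3, 2, seed=0)
print(evaluate_world(parents, tables))
```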

  19. Results – Network Pruning

  20. Results – Algorithm Comparison Note: P_E = 0.15, L = 3, D = 2, N = 25; 5000 models.

  21. Conclusions and Future Work • TP achieves much better experimental results than were achieved through a pruned brute-force search tree. • Questions: • Once we have identified potential causes under the simplified definition, how do we check AC2(c) efficiently? • Can we obtain a polynomial-time algorithm that will predict, given the topology and quantification of a causal model, how fast TP will run? • Is there any pruning we can do on the TP search tree?
