
Solving trust issues using Z3


Presentation Transcript


  1. Solving trust issues using Z3. Moritz Y. Becker, Nik Sultana (Microsoft Research, Cambridge), Alessandra Russo (Imperial College London), Masoud Koleini (University of Birmingham). Z3 SIG, November 2011

  2. The attacker probes a service and observes the answers. The service's policy A0 is written in an authorization language, e.g. SecPAL, DKAL, Binder, RT, ... What can be detected about policy A0? What can be inferred?

  3. A simple probing attack [Gurevich et al., CSF 2008]
  Svc's policy A0 contains the secret fact Svc says secretAg(Bob)!
  Probe 1: Alice submits A = {Alice says foo if secretAg(Bob)} with query q = access? Answer No: A0 ∪ A ⊬ q.
  Probe 2: Alice submits A = {Alice says foo if secretAg(Bob), Alice says Svc cansay secretAg(Bob)} with query q = access? Answer Yes: A0 ∪ A ⊢ q.
  The added delegation credential turns "Svc says secretAg(Bob)" into "Alice says secretAg(Bob)", so the flip from No to Yes lets Alice detect "Svc says secretAg(Bob)"!
  (There's also an attack on DKAL2, to appear in: "Information Flow in Trust Management Systems", Journal of Computer Security.)
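  The flip can be replayed with a toy evaluator. Below is a minimal Python sketch, assuming propositional Datalog; the entails function, the atom strings, and the access rule placed in A0 are illustrative assumptions, not the paper's implementation:

    # Toy bottom-up Datalog evaluation over propositional atoms.
    # A policy is a set of (head, body) pairs; a fact has an empty body.
    def entails(policy, query):
        derived = set()
        changed = True
        while changed:
            changed = False
            for head, body in policy:
                if head not in derived and all(b in derived for b in body):
                    derived.add(head)
                    changed = True
        return query in derived

    # Svc's hidden policy A0: the secret fact, plus an assumed rule
    # granting access whenever Alice says foo.
    A0 = {("Svc says secretAg(Bob)", ()),
          ("access", ("Alice says foo",))}

    # Probe 1: foo depends on Alice herself believing secretAg(Bob).
    A1 = {("Alice says foo", ("Alice says secretAg(Bob)",))}
    print(entails(A0 | A1, "access"))   # False: answer is No

    # Probe 2: the 'cansay' credential is modelled as the rule
    # Alice says secretAg(Bob) <- Svc says secretAg(Bob).
    A2 = A1 | {("Alice says secretAg(Bob)", ("Svc says secretAg(Bob)",))}
    print(entails(A0 | A2, "access"))   # True: answer flips to Yes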

  4. Challenges
  • What does "attack", "detect", etc. mean?*
  • What can the attacker (not) detect?
  • How do we automate?
  * Based on "Information Flow in Credential Systems", Moritz Y. Becker, CSF 2010

  5. probe [diagram omitted in transcript]

  6. Available probes: pairs (A1, q1), (A2, q2), ... of a credential set submitted by the attacker and a query [probe formulas omitted in transcript]

  7. Available probes
  The attacker can't distinguish A and A′ if the probes (A1, q1), ..., (An, qn) get the same answers (Yes, No, Yes, Yes, ...) against both policies.
  • Policies A and A′ are observationally equivalent (A ≡ A′) iff
  • for all probes (Ai, qi): A ∪ Ai ⊢ qi ⟺ A′ ∪ Ai ⊢ qi

  8. The attacker probes A with (A1, q1), ..., (An, qn) and observes the answers Yes, No, Yes, Yes, ... If every policy consistent with those answers entails p, then p is leaked: p!
  • A query p is detectable in A iff
  • A′ ⊢ p for all A′ ≡ A.

  9. Same probes, same answers Yes, No, Yes, Yes, ... But now some consistent policies entail p and others do not, so the attacker cannot tell: p??
  • A query p is opaque in A iff
  • A′ ⊬ p for some A′ ≡ A.

  10. Available probes
  ({A says foo if secrAg(B)}, acc) → No!
  ({A says Svc cansay secAg(B), A says foo if secrAg(B)}, acc) → Yes!
  Every policy that answers these two probes this way must entail Svc says secretAg(B).
  • Svc says secretAg(B) is detectable in A0!

  11. Challenges
  • What does "attack", "detect", etc. mean?
  • What can the attacker (not) detect?*
  • How do we automate?
  * Based on "Opacity Analysis in Trust Management Systems", Moritz Y. Becker and Masoud Koleini (U Birmingham), ISC 2011

  12. Is φ opaque in A0?
  • Policy language: Datalog clauses
  • Input: (A0, φ)
  • Output: "φ is opaque in A0" or "φ is detectable in A0"
  • Sound, complete, terminating
  • A query φ is opaque in A0 iff
  • A′ ⊬ φ for some A′ ≡ A0.
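  For intuition only, opacity can be decided by brute force in a finite propositional setting. The Python toy below enumerates candidate policies over a fixed rule universe and checks equivalence against a fixed probe set; it is an illustrative sketch, not the sound-and-complete procedure of the ISC 2011 paper:

    from itertools import combinations

    def entails(policy, query):
        # Naive bottom-up Datalog evaluation (facts have empty bodies).
        derived, changed = set(), True
        while changed:
            changed = False
            for head, body in policy:
                if head not in derived and all(b in derived for b in body):
                    derived.add(head)
                    changed = True
        return query in derived

    def equivalent(a, b, probes):
        # Observational equivalence restricted to the given finite probe set.
        return all(entails(a | creds, q) == entails(b | creds, q)
                   for creds, q in probes)

    def opaque(a0, phi, probes, rule_universe):
        # phi is opaque in a0 iff some probe-equivalent policy fails to entail it.
        for k in range(len(rule_universe) + 1):
            for subset in combinations(rule_universe, k):
                a_prime = set(subset)
                if equivalent(a0, a_prime, probes) and not entails(a_prime, phi):
                    return True
        return False   # every equivalent candidate entails phi: detectable

    # Tiny demo: with a probe that never touches p, the fact p is opaque in {p}.
    a0 = {("p", ())}
    probes = [(frozenset(), "q")]
    universe = [("p", ()), ("q", ())]
    print(opaque(a0, "p", probes, universe))   # True: {} answers the probe identically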

  13. Example 1. What do we learn about ... and ... in ...? An observationally equivalent policy must satisfy one of these: [constraint sets omitted in transcript]

  14. Example 2. What do we learn about e.g. ... and ... in ...? An observationally equivalent policy must satisfy one of these: [constraint sets omitted in transcript]

  15. Challenges
  • What does "attack", "detect", etc. mean?
  • What can the attacker (not) detect?
  • How do we automate?

  16. How do we automate?
  • Previous approach: build a policy in which the sought fact is opaque.
  • Approach described here: search for a proof that a property is detectable.

  17. Reasoning framework
  • Policies/credentials and their properties are mathematical objects
  • Better still, they are terms in a logic (object-level)
  • Probes are just a subset of the theorems in the logic
  • Semantic constraints: Datalog entailment, hypothetical reasoning

  18. Policies
  • Empty policy
  • Fact
  • Rule
  • Policy union
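  Read as a grammar, these four constructors could be rendered as the hypothetical Python datatype below; the names and the rendering are illustrative assumptions, not the tool's actual term representation:

    # Hypothetical rendering of the policy term algebra:
    #   A ::= 0 | p | p <- p1,...,pn | A1 u A2
    from dataclasses import dataclass
    from typing import Tuple, Union

    @dataclass(frozen=True)
    class Empty:                  # the empty policy
        pass

    @dataclass(frozen=True)
    class Fact:                   # an unconditional assertion p
        head: str

    @dataclass(frozen=True)
    class Rule:                   # p <- p1, ..., pn
        head: str
        body: Tuple[str, ...]

    @dataclass(frozen=True)
    class PolicyUnion:            # composition of two policies
        left: object
        right: object

    Policy = Union[Empty, Fact, Rule, PolicyUnion]

    # Example: the policy { p <- q ; q } as a term
    example = PolicyUnion(Rule("p", ("q",)), Fact("q"))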

  19. Properties: "φ holds if γ" [formal syntax omitted in transcript]

  20. Example 1 [formulas omitted in transcript]

  21. Example 2 [formulas omitted in transcript]

  22. Calculus + PL + ML + Hy

  23. Reduced calculus (modulo normalisation)

  24. Axioms C1 and C2

  25. Props 8 and 9

  26. Normal form

  27. Naïve propositionalisation
  • Normalise the formula
  • Apply Prop 9 (until fixpoint)
  • Instantiate C1, C2 and Prop 8 for each box-formula
  • Abstract the boxes (see the Z3 sketch after this list)
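  Once the boxes are abstracted to fresh Booleans, what remains is plain SAT. A minimal z3py sketch of that last step, with invented placeholder formulas standing in for the instantiated axioms and the abstracted claim (not the tool's actual encoding):

    from z3 import Bool, Implies, And, Not, Solver, unsat

    # Fresh Booleans standing for abstracted box-formulas (hypothetical names).
    b1, b2, b3 = Bool("box1"), Bool("box2"), Bool("box3")

    # Placeholder for the instantiated C1/C2/Prop 8 axioms.
    axioms = And(Implies(b1, b2), Implies(b2, b3))

    # Placeholder for the abstracted detectability claim.
    claim = Implies(b1, b3)

    # The claim is valid under the axioms iff axioms and Not(claim) is unsat.
    s = Solver()
    s.add(axioms, Not(claim))
    print("detectable" if s.check() == unsat else "opaque")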

  28. Improvements
  • Prop 9 is very productive; in many cases its application can be avoided, so it can be delayed.
  • Axiom C1 can be used as a filter.

  29. Summary
  • What does "attack", "detect", etc. mean? Observational equivalence, opacity and detectability.
  • What can the attacker (not) infer? An algorithm for deciding opacity in Datalog policies; a tool with optimizations.
  • How do we automate? Encode as a SAT problem.
