Main search strategy review

This review compares the advantages and disadvantages of interpretation-based and proof-system-based search strategies for theorem proving and the connections between the two domains. It also discusses the use of proofs for feedback and for human interaction, and then turns to Proof Carrying Code.

Presentation Transcript


  1. Main search strategy review • Proof-system search (⊢): natural deduction, sequents, resolution (more human friendly, less automatable) • Interpretation search (⊨): DPLL, backtracking, incremental SAT (less human friendly, more automatable)

  2. Comparison between the two domains

  3. Comparison between the two domains • Advantages of the interpretation domain • Don’t have to deal with inference rules directly • Less syntactic overhead; can build specialized data structures • Can more easily take advantage of recent advances in SAT solvers • A failed search produces a counter-example, namely the interpretation that failed to make the formula true or false, depending on whether the goal is to show validity or unsatisfiability (see the sketch below)
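
A minimal sketch of that last point (my own, not from the slides; representing a formula as a Python function over an assignment dictionary is just a convenience): when an exhaustive interpretation-domain search for validity fails, the failing interpretation itself is the counter-example.

    from itertools import product

    def check_valid(variables, formula):
        # Enumerate every interpretation; if one falsifies the formula, the failed
        # search hands it back as a counter-example.
        for values in product([False, True], repeat=len(variables)):
            interp = dict(zip(variables, values))
            if not formula(interp):
                return False, interp
        return True, None

    # (A and (not A or B)) -> B is valid; A -> B on its own is not.
    valid, cex = check_valid(["A", "B"],
                             lambda i: not (i["A"] and (not i["A"] or i["B"])) or i["B"])
    print(valid, cex)                                   # True None
    valid, cex = check_valid(["A", "B"], lambda i: (not i["A"]) or i["B"])
    print(valid, cex)                                   # False {'A': True, 'B': False}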

  4. Comparison between the two domains • Disadvantages of the interpretation domain • Search does not directly provide a proof • There are ways to get proofs, but they require more effort • Proofs are useful • Proof checkers are more efficient than proof finders (PCC) • Provide feedback to the human user of the theorem prover • Find false proofs caused by inconsistent axioms • See the path taken, which may point to ways of formulating the problem to improve efficiency • Provide feedback to other tools • Proofs from a decision procedure can communicate useful information to the heuristic theorem prover

  5. Comparison between the two domains • Disadvantages of the interpretation domain (contd) • A lot harder to make the theorem prover interactive • Fairly simple to add user interaction when searching in the proof domain, but this is not the case in the interpretation domain • For example, when the Simplify theorem prover finds a false counter-example, it is in the middle of an exhaustive search. Not only is it hard to expose this state to the user, but it’s also not even clear how the user is going to provide guidance

  6. Connection between the two domains • Are there connections between the techniques in the two domains? • There is at least one strong connection, let’s see what it is.

  7. Let’s go back to the interpretation domain • Show that the following is UNSAT (also, label each leaf with one of the original clauses that the leaf falsifies): • A ∧ (¬A ∨ B) ∧ ¬B

  8. Let’s go back to the interpretation domain • Show that the following is UNSAT (also, label each leaf with one of the original clauses that the leaf falsifies): • A ∧ (¬A ∨ B) ∧ ¬B

  9. Parallel between DPLL and Resolution • A successful DPLL refutation search tree is isomorphic to a refutation-based resolution proof • From the DPLL search tree, one can build the resolution proof (see the sketch below): • Label each leaf with one of the original clauses that the leaf falsifies • Perform resolution based on the variables that DPLL performed a case split on • One can therefore think of DPLL as a special case of resolution
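
A small sketch of this construction (my own, not from the slides), run on the formula from slides 7 and 8: the exhaustive case-split search labels each failing leaf with an original clause it falsifies, and the split variables drive the resolution steps up to the empty clause. The encoding (positive/negative integers for literals, frozensets for clauses) is just a convenient assumption.

    A, B = 1, 2                                        # variable ids
    names = {A: "A", B: "B"}
    clauses = [frozenset({A}), frozenset({-A, B}), frozenset({-B})]   # A ∧ (¬A ∨ B) ∧ ¬B

    def show(c):
        return "⊥" if not c else " ∨ ".join(("¬" if l < 0 else "") + names[abs(l)]
                                            for l in sorted(c, key=abs))

    def lit_false(lit, asg):
        val = asg.get(abs(lit))
        return val is not None and val == (lit < 0)    # literal is falsified by asg

    def refute(asg, vars_left):
        # Return a clause falsified by asg, derived by resolution from the leaf labels.
        for c in clauses:                              # leaf: an original clause is falsified
            if all(lit_false(l, asg) for l in c):
                print("leaf", {names[k]: v for k, v in asg.items()}, "falsifies", show(c))
                return c
        v, rest = vars_left[0], vars_left[1:]          # case split, as DPLL would
        c_false = refute({**asg, v: False}, rest)
        c_true = refute({**asg, v: True}, rest)
        if v in c_false and -v in c_true:              # both sides used the split: resolve on v
            r = (c_false - {v}) | (c_true - {-v})
            print("resolve", show(c_false), "and", show(c_true), "giving", show(r))
            return r
        return c_false if v not in c_false else c_true # one side ignored v: reuse its clause

    print("derived", show(refute({}, [A, B])))         # ends with the empty clause ⊥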

  10. Connection between the two domains • Are there any other connections between interpretation searches and proof system searches? • Such connections could point out new search strategies (e.g., what is the analog in the proof-system domain of Simplify’s search strategy?) • Such connections could allow the state of the theorem prover to be switched back and forth between the interpretation domain and the proof-system domain, leading to a theorem prover that combines the benefits of the two search strategies

  11. Proof Carrying Code

  12. Security Automata (flattened state diagram): states start, has_read, and bad; read(f) takes start to has_read and loops on has_read; send loops on start but takes has_read to bad. In other words, the policy forbids a send after a read(f). (An encoding of this automaton is sketched below.)
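
A small encoding of this automaton (my own sketch; the dictionary layout is just one convenient choice), with transitions matching the table given later on slide 27.

    # Security automaton: one entry per (state, action) pair.
    TRANS = {
        ("start", "send"): "start",
        ("start", "read"): "has_read",
        ("has_read", "read"): "has_read",
        ("has_read", "send"): "bad",        # a send after a read violates the policy
    }

    def step(state, action):
        return TRANS[(state, action)]

    # Running the automaton over the trace send, read, send ends in "bad".
    state = "start"
    for action in ["send", "read", "send"]:
        state = step(state, action)
    print(state)                            # bad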

  13. Example (flattened diagram): Provider, Consumer, and the Policy; the original program send(); if (…) read(f); send(); is instrumented (Instr) so that each security-relevant call is guarded by an automaton transition <trans_FSM> and an if (…) halt(); check, and the instrumented code is then run (Run).

  14. Example (flattened diagram, continued): as before, but with the Policy shown on both sides and an added optimization stage, so the pipeline is Instr, Opt, Run; the fully instrumented code is optimized down to send(); if (…) read(f); if (…) halt(); <trans_FSM> send();, keeping only the guard before the final send().

  15. Optimize: how? • Use a dataflow analysis • Determine at each program point which states the security automaton may be in • Based on this information, can remove checks that are known to succeed (a sketch follows below)
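
A simplified sketch of this analysis (my own illustration, not from the slides): it propagates the set of automaton states that may be current at each point, reusing the transition table from the sketch after slide 12; a check before an action is removable when no possible current state can step to "bad".

    TRANS = {("start", "send"): "start", ("start", "read"): "has_read",
             ("has_read", "read"): "has_read", ("has_read", "send"): "bad"}

    def analyze(program, start="start"):
        # program: list of (action, guarded) pairs; guarded=True means the action sits
        # under an if and may be skipped.  For each action, print the set of automaton
        # states that may be current and whether its halt-check is needed.
        states = {start}
        for action, guarded in program:
            stepped = {TRANS[(s, action)] for s in states}
            needed = "bad" in stepped
            print(f"{action!s:5} possible states {sorted(states)} -> check "
                  f"{'needed' if needed else 'removable'}")
            stepped.discard("bad")            # after a surviving check, "bad" is impossible
            states = (states | stepped) if guarded else stepped

    analyze([("send", False), ("read", True), ("send", False)])
    # send  possible states ['start'] -> check removable
    # read  possible states ['start'] -> check removable
    # send  possible states ['has_read', 'start'] -> check needed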

  16. Example (flattened diagram, same as slide 14): original, instrumented, and optimized versions of the code, with the pipeline Instr, Opt, Run and the Policy on both the provider and consumer sides.

  17. Example (flattened diagram, with proofs): the code is instrumented and a proof is generated (Instr & generate proof), then the code is optimized and the proof updated (optimize & update proof); the consumer receives the optimized code together with the Proof, asks “Proof valid?”, and either runs it (Yes → Run) or rejects it (No → Reject).

  18. Proofs: how? • Generate a verification condition • Include a proof of the verification condition in the binary • (Diagram: the Program and the Policy feed into VCGen, which produces the Verification Condition)

  19. Example (flattened diagram, same as slide 17): instrument & generate proof, optimize & update proof; the consumer checks “Proof valid?” and runs the code only if the check succeeds, rejecting it otherwise.

  20. Example (consumer side, flattened diagram): the consumer receives the optimized code send(); if (…) read(f); if (…) halt(); <trans_FSM> send(); together with the Proof, asks “Proof valid?”, and either runs it (Yes → Run) or rejects it (No → Reject).

  21. Example (consumer side, continued): to decide “Proof valid?” • Run VCGen on the code to generate “VC” • Check that “Proof” is a valid proof of “VC”; if it is, the code is run, otherwise it is rejected.

  22. VCGen • Where have we seen this idea before?

  23. VCGen • Where have we seen this idea before? • ESC/Java • For a certain class of policies, we can use a similar approach to ESC/Java • Say we want to guarantee that there are no NULL pointer dereferences • Add assert(p != NULL) before every dereference of p • The verification condition is then the weakest precondition of the program with respect to TRUE

  24. Simple Example (flattened control-flow diagram): a := &b, then a branch with guards (x >= 0) and (x < 0); on the (x < 0) branch, a := NULL; the branches rejoin at assert(a != NULL) followed by c := *a

  25. Simple Example (flattened control-flow diagram, as on slide 24): a := &b, then a branch with guards (x >= 0) and (x < 0); on the (x < 0) branch, a := NULL; the branches rejoin at assert(a != NULL) followed by c := *a (a weakest-precondition sketch follows below)
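
A small sketch (my own, not from the slides) of computing the verification condition as the weakest precondition with respect to TRUE, applied to one possible reading of the flattened diagram above (a := NULL happens on the x < 0 branch). Predicates are plain strings and substitution is naive, which is enough for this tiny example.

    # Statements: ("assign", var, expr), ("assert", pred), ("if", cond, then_list, else_list).
    def wp(stmts, post):
        # Weakest precondition of a statement list with respect to post, walked backwards.
        for s in reversed(stmts):
            if s[0] == "assign":
                _, var, expr = s
                post = post.replace(var, expr)        # naive substitution, fine here
            elif s[0] == "assert":
                _, pred = s
                post = f"({pred}) && ({post})"
            elif s[0] == "if":
                _, cond, then_s, else_s = s
                post = (f"(({cond}) ==> {wp(then_s, post)}) && "
                        f"((!({cond})) ==> {wp(else_s, post)})")
        return post

    program = [
        ("assign", "a", "&b"),
        ("if", "x < 0", [("assign", "a", "NULL")], []),   # one reading of the branch
        ("assert", "a != NULL"),
        ("assign", "c", "*a"),
    ]
    print(wp(program, "TRUE"))
    # ((x < 0) ==> (NULL != NULL) && (TRUE)) && ((!(x < 0)) ==> (&b != NULL) && (TRUE))
    # Under this reading, the (x < 0) conjunct is the obligation a prover could not discharge.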

  26. For the security automata example • We will do this example differently • Instead of providing a proof of WP for the whole program, the provider annotates the code with predicates, and provides proofs that the annotations are locally consistent • The consumer checks the proofs of local consistency • We use the type system of David Walker, POPL 2000, for expressing security automata compliance

  27. For the security automata example • Original code: send(); if (…) read(f); send(); • Actual instrumented code (a Python transliteration follows below):
      curr = “start”;
      send();
      if (…) {
        next = trans_read(curr);
        read(f);
        curr = next;
      }
      next = trans_send(curr);
      if (next == “bad”) halt();
      send();
      • Transition function: trans_send(“start”) == “start”; trans_send(“has_read”) == “bad”; trans_read(“start”) == “has_read”; trans_read(“has_read”) == “has_read”
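
For concreteness, a direct Python transliteration of the instrumented code above (my own sketch: send, read_f, and halt are stand-in stubs, and the dictionaries encode the transition table on this slide).

    TRANS_SEND = {"start": "start", "has_read": "bad"}
    TRANS_READ = {"start": "has_read", "has_read": "has_read"}

    def send(): print("send")
    def read_f(): print("read(f)")
    def halt(): raise SystemExit("policy violation: halting")

    def instrumented(do_read):
        curr = "start"
        send()
        if do_read:                       # the "if (…)" from the slide
            next_ = TRANS_READ[curr]
            read_f()
            curr = next_
        next_ = TRANS_SEND[curr]
        if next_ == "bad":
            halt()                        # a send after a read would reach "bad"
        send()

    instrumented(do_read=False)           # send, send: allowed
    instrumented(do_read=True)            # send, read(f), then halt before the second send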

  28. Secure functions • Each security-relevant operation requires pre-conditions and guarantees post-conditions • For any “alphabet” function “func”: • P1: in(current_state) • P2: next_state == trans_func(current_state) • P3: next_state != “bad” • Pre: {P1; P2; P3}, execute func, Post: {in(next_state)}

  29. Secure functions • Example for function send(): {in(curr); next == trans_send(curr); next != “bad”} send(); {in(next)} • Normal WP rules apply for other statements, for example: {in(curr)} next = trans_send(curr); {in(curr); next == trans_send(curr)}

  30. Example
      curr = “start”;
      send();
      if (…) {
        next = trans_read(curr);
        read(f);
        curr = next;
      }
      next = trans_send(curr);
      if (next == bad) halt();
      send();

  31. Example
      curr = “start”;
      send();
      if (…) {
        next = trans_read(curr);
        read(f);
        curr = next;
      }
      next = trans_send(curr);
      if (next == bad) halt();
      send();

  32. The annotated fragment … next = trans_send(curr); if (next == bad) halt(); send(); …:
      {in(curr)}
      next = trans_send(curr);
      {in(curr); next == trans_send(curr)}
      if (next == bad) {
        {in(curr); next == trans_send(curr); next == “bad”}
        halt();
        {}
      }
      {in(curr); next == trans_send(curr); next != “bad”}
      send();

  33. The full annotated example (recall: trans_send(“start”) == “start”, trans_read(“start”) == “has_read”):
      {in(“start”)}
      curr = “start”;
      {curr == “start”; in(curr); curr == trans_send(curr); curr != “bad”}
      send();
      {in(curr); curr == “start”}
      if (…) {
        {in(curr); curr == “start”}
        next = trans_read(curr);
        {curr == “start”; next == “has_read”; in(curr); next == trans_read(curr); next != “bad”}
        read(f);
        {in(next); next == “has_read”}
        curr = next;
        {in(curr); curr == “has_read”}
      }
      {in(curr)}
      next = trans_send(curr);
      if (next == bad) halt();
      send();

  34. What to do with the annotations? • The code provider: • Sends the annotations with the program • For each annotated statement {P} S {Q}: sends a proof of P ⇒ wp(S, Q) • The code consumer: • For each annotated statement {P} S {Q}: checks that the provided proof of P ⇒ wp(S, Q) is correct (a sketch of this checking loop follows below)
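
A schematic sketch of the consumer's per-statement check (my own illustration; wp and check_proof are hypothetical placeholders standing in for a real VC generator and proof checker).

    def consumer_check(annotated_program, proofs, wp, check_proof):
        # annotated_program: list of (P, stmt, Q) Hoare triples, one per statement.
        # proofs: one proof object per triple, claimed to show P ==> wp(stmt, Q).
        for (pre, stmt, post), proof in zip(annotated_program, proofs):
            goal = ("implies", pre, wp(stmt, post))
            if not check_proof(proof, goal):
                return False                  # some local obligation is unproved: reject
        return True                           # every local obligation is proved: accept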

  35. PCC issues: Generating the proof • Cannot always generate the proof automatically • Techniques to make it easier to generate the proof • Can have the programmer provide hints • Can automatically modify the code to make the proof go through • Can use type information to generate a proof at the source level, and propagate the proof through compilation (TIL: Typed Intermediate Language, TAL: Typed Assembly Language)

  36. PCC issues: Representing the proof • Can use the LF framework • Proofs can be large • Techniques for proof compression • Can remove steps from the proof, and let the checker infer them • Tradeoff between size of proof and speed of checker

  37. PCC issues: Trusted Computing Base • What’s trusted?

  38. PCC issues: Trusted Computing Base • What’s trusted? • Proof checker • VCGen (encodes the semantics of the language) • Background axioms

  39. Foundational PCC • Try to reduce the trusted computing base as much as possible • Express the semantics of machine instructions and safety properties in a foundational logic • This logic should be expressive enough to serve as a foundation for mathematics • Few axioms, making the proof checker very simple • No VCGen; instead, just provide a proof in the foundational proof system that the safety property holds • The trusted computing base is an order of magnitude smaller than in regular PCC

  40. The big questions (which you should ask yourself when you review a paper for this class) • What problem is this solving? • How well does it solve the problem? • What other problems does it add? • What are the costs (technical, economic, social, other)? • Is it worth it? • May this eventually be useful?
