
Interpolation and Widening



Presentation Transcript


  1. Interpolation and Widening Ken McMillan, Microsoft Research

  2. Interpolation and Widening • Widening/Narrowing and Craig Interpolation are two approaches to computing inductive invariants of transition systems. • Both are essentially methods of generalizing from proofs about bounded executions to proofs about unbounded executions. • In this talk, we'll consider the relationship between these two approaches, from both theoretical and practical points of view. • We consider only property-proving applications, since interpolation applies only when there is a property to prove.

  3. Intuitive comparison [Diagram: two iteration sequences drawn on a scale from stronger to weaker assertions, with the least fixed point (lfp) marked on each; one shows the iterations of widening/narrowing reaching an inductive assertion weaker than the lfp, the other shows the iterations of interpolation.]

  4. Abstractions as proof systems • We will view both widening/narrowing and interpolation as proof systems • In particular, local proof systems • A proof system (or abstraction) consists of: • A logical language L (abstract domain) • A set of sound deduction rules • A choice of proof system constitutes a bias, or domain knowledge • Rich proof system = weak bias • Impoverished proof system = strong bias By restricting the logical language and deduction rules, the analysis designer expresses a space of possible proofs in which the analysis tool should search.

  5. Fundamental problems • Relevance • We must avoid a combinatorial explosion of deductions • Thus, deduction must be restricted to facts relevant to the property • Convergence • Eventually the proofs for bounded executions must generalize to a proof of unbounded executions.

  6. Different approaches We will see that the two methods have many aspects in common, but take different approaches to these fundamental problems. • Widening/narrowing relies on a restricted proof system • Relevance is enforced by strong bias • Convergence is also enforced in this way, but proof of a property is not guaranteed • Interpolation uses a rich proof system • Relevance is determined by Occam's razor • relevant deductions occur in simple property proofs • Convergence is not guaranteed, but • approached heuristically again using Occam's razor In the interpolation approach, we rely on well-developed theorem proving approaches to search large spaces for simple proofs.

  7. Proofs • A proof is a series of deductions, from premises to conclusions • Each deduction is an instance of an inference rule • Usually, we represent a proof as a tree, with the premises at the leaves and the conclusion at the root. [Diagram: a proof tree with premises P1–P5 at the leaves and conclusion C at the root.]

  8. Inference rules • The inference rules depend on the theory we are reasoning in • Linear arithmetic, sum rule: from x1 ≤ y1 and x2 ≤ y2, infer x1 + x2 ≤ y1 + y2 • Boolean logic, resolution rule: from p ∨ C and ¬p ∨ D, infer C ∨ D
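
To make these rules concrete, here is a minimal Python sketch (not from the talk): it represents linear inequalities and clauses with simple data structures and applies each rule once. The names and representations are illustrative assumptions.

```python
def sum_rule(ineq1, ineq2):
    """From x1 <= y1 and x2 <= y2, derive x1 + x2 <= y1 + y2.
    An inequality is a pair (lhs, rhs) of linear terms; a term is a dict
    mapping variable names to coefficients (the key "1" is the constant)."""
    def add(t1, t2):
        out = dict(t1)
        for v, c in t2.items():
            out[v] = out.get(v, 0) + c
        return out
    return (add(ineq1[0], ineq2[0]), add(ineq1[1], ineq2[1]))

def resolution_rule(clause1, clause2, p):
    """From (p or C) and (not p or D), derive (C or D).
    A clause is a set of literals; a literal is (name, polarity)."""
    assert (p, True) in clause1 and (p, False) in clause2
    return (clause1 - {(p, True)}) | (clause2 - {(p, False)})

# x1 <= y0 and 1 <= 1 give x1 + 1 <= y0 + 1
print(sum_rule(({"x1": 1}, {"y0": 1}), ({"1": 1}, {"1": 1})))
# {p, q} and {~p, r} resolve on p to {q, r}
print(resolution_rule({("p", True), ("q", True)}, {("p", False), ("r", True)}, "p"))
```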

  9. Invariants from unwindings • A simple way to generalize from bounded to unbounded proofs: • Consider just one program execution path, as a straight-line program • Construct a proof for this straight-line program • See if this proof contains an inductive invariant proving the property • Example program: x = y = 0; while(*) { x++; y++; } while(x != 0) { x--; y--; } assert(y == 0); • Loop invariant: {x == y}
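
As a small sanity check on this example, the sketch below (not part of the talk) verifies by exhaustive enumeration over a small test range that {x == y} is established by the initialization, preserved by both loop bodies, and implies the final assertion; a real tool would of course reason symbolically, and the test range is an arbitrary choice.

```python
R = range(-5, 6)                 # small test domain standing in for the integers

def inv(x, y):                   # candidate invariant {x == y}
    return x == y

assert inv(0, 0)                 # initiation: holds after x = y = 0

for x in R:
    for y in R:
        if inv(x, y):
            assert inv(x + 1, y + 1)          # loop 1 body x++; y++ preserves it
            if x != 0:
                assert inv(x - 1, y - 1)      # loop 2 body x--; y-- preserves it
            if x == 0:
                assert y == 0                 # at loop 2 exit, the assertion holds

print("x == y is inductive and proves the assertion on the test domain")
```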

  10. Unwind the loops • Unwinding both loops twice gives the straight-line program x = y = 0; x++; y++; x++; y++; [x!=0]; x--; y--; [x!=0]; x--; y--; [x == 0]; [y != 0] • One proof annotates it with {True}, {y = 0}, {y = 1}, {y = 2}, ..., {y = 1}, {y = 0}, {False}: these assertions track the exact value of y and may diverge as we unwind • Another proof annotates it with {True}, {x = 0 ∧ y = 0}, {x = y}, {x = y}, ..., {x = 0 ⇒ y = 0}, {False}: this proof of the inline program contains invariants for both loops • A practical method must somehow prevent this kind of divergence! How can we find relevant proofs of program paths?

  11. Interpolation Lemma [Craig, 57] • Let A and B be first-order formulas, using • some non-logical symbols (predicates, functions, constants) • the logical symbols ∧, ∨, ¬, ∃, ∀, (), ... • If A ∧ B = false, there exists an interpolant A' for (A, B) such that: • A ⇒ A' • A' ∧ B = false • A' uses only the common vocabulary of A and B • Example: A = p ∧ q, B = ¬q ∧ r, interpolant A' = q
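
The propositional example can be checked directly. This sketch (not part of the talk) enumerates all truth assignments and confirms the three interpolant conditions for A = p ∧ q, B = ¬q ∧ r, and A' = q.

```python
from itertools import product

A      = lambda p, q, r: p and q
B      = lambda p, q, r: (not q) and r
Aprime = lambda p, q, r: q          # uses only q, the common symbol of A and B

for p, q, r in product([False, True], repeat=3):
    assert not (A(p, q, r) and B(p, q, r))        # A & B is unsatisfiable
    assert (not A(p, q, r)) or Aprime(p, q, r)    # A implies A'
    assert not (Aprime(p, q, r) and B(p, q, r))   # A' & B is unsatisfiable

print("q is a Craig interpolant for (p & q, ~q & r)")
```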

  12. Interpolants as Floyd-Hoare proofs • Proving the in-line program x = y; y++; [x == y]: its SSA sequence is x1 = y0; y1 = y0 + 1; x1 = y1 • From the prover's refutation we extract the interpolant sequence True, x1 = y0, y1 > x1, False, which corresponds to the Hoare proof {True} x = y {x = y} y++ {y > x} [x == y] {False} • 1. Each formula implies the next • 2. Each is over the common symbols of prefix and suffix • 3. Begins with true, ends with false

  13. Local proofs and interpolants • Consider the SSA sequence x1 = y0; y1 = y0 + 1; [y1 ≤ x1] • From x1 = y0 we derive x1 ≤ y0 and y0 ≤ x1; from y1 = y0 + 1 we derive y0 + 1 ≤ y1 and y1 ≤ y0 + 1; combining these gives x1 + 1 ≤ y1 and y1 ≤ x1 + 1; with the guard y1 ≤ x1 this yields 1 ≤ 0, hence FALSE • Reading across the proof gives the sequence TRUE, x1 ≤ y0, x1 + 1 ≤ y1, FALSE • This is an example of a local proof...

  14. Definition of local proof • Local proof: every deduction is written in the vocabulary of some frame • vocabulary of a frame = set of variables "in scope" there • scope of a variable = range of frames it occurs in • In the example, the frame of x1 = y0 has vocabulary {x1, y0}; the frame of y1 = y0 + 1 has vocabulary {x1, y0, y1}, so the deduction of x1 + 1 ≤ y1 from x1 ≤ y0 and y0 + 1 ≤ y1 is "in scope" there; the final frame has vocabulary {x1, y1}

  15. Forward local proof • Forward local proof: each deduction can be assigned a frame such that all the deduction arrows go forward • In the example, x1 ≤ y0 is deduced in the frame with vocabulary {x1, y0}, x1 + 1 ≤ y1 in the frame with vocabulary {x1, y0, y1}, and 1 ≤ 0 (FALSE) in the frame with vocabulary {x1, y1} • For a forward local proof, the (conjunction of) assertions crossing each frame boundary is an interpolant: here TRUE, x1 ≤ y0, x1 + 1 ≤ y1, FALSE
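
The sketch below (not part of the talk) encodes the running example's forward local proof as (frame, conclusion, premises) triples and reads off each interpolant as the conjunction of derived facts that cross the frame boundary. The frame numbering and the string representation of facts are illustrative assumptions.

```python
# Forward local proof for x = y; y++; [y <= x], keeping only the needed deductions.
proof = [
    (0, "x1 <= y0",     ["x1 = y0"]),                    # from the assignment x = y
    (1, "y0 + 1 <= y1", ["y1 = y0 + 1"]),                # from the increment y++
    (1, "x1 + 1 <= y1", ["x1 <= y0", "y0 + 1 <= y1"]),
    (2, "false",        ["x1 + 1 <= y1", "y1 <= x1"]),   # contradiction with the guard
]

producer = {concl: frame for frame, concl, _ in proof}   # which frame derived each fact

def interpolant(k):
    """Conjunction of facts derived in frames <= k and used in frames > k."""
    crossing = {prem
                for frame, _, prems in proof if frame > k
                for prem in prems
                if prem in producer and producer[prem] <= k}
    return " & ".join(sorted(crossing)) or "true"

for k in (-1, 0, 1):
    print(f"interpolant at boundary {k}|{k + 1}: {interpolant(k)}")
# prints: true, x1 <= y0, x1 + 1 <= y1; once false is derived,
# the final interpolant is false itself
```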

  16. Proofs and relevance • By dropping unneeded inferences, we weaken the interpolant and eliminate irrelevant predicates • Example: from the constraints x1 = y0 + 1; z1 = x1 + 1; [x1 ≤ y0 ∧ y0 ≤ z1], the full proof also derives z1 = y0 + 2 (and the useless fact 0 ≤ 2), giving the interpolant x1 = y0 + 1 ∧ z1 = y0 + 2; the contradiction 1 ≤ 0 needs only x1 = y0 + 1, so dropping the unneeded inference yields the weaker interpolant x1 = y0 + 1 and eliminates the irrelevant variable z1 • Interpolants are neither weakest preconditions nor strongest postconditions

  17. Applying Occam's Razor • Simple proofs are more likely to generalize • Define a (local) proof system • Can contain whatever proof rules you want, e.g., deriving FALSE from an unsatisfiable set of premises, or rewriting with the substitution [e/x] given an equality x = e; allowing simple arithmetic rewriting in even this trivial proof system gives useful flexibility • Define a cost metric for proofs • For example, the number of distinct predicates after dropping subscripts • Exhaustive search for the lowest-cost proof • May restrict to forward or reverse proofs
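
Here is a minimal sketch (not part of the talk) of one possible cost metric in this spirit: it counts distinct formulas after dropping SSA subscripts, a rough stand-in for the predicate count mentioned above; the regex normalization is an illustrative choice.

```python
import re

def drop_subscripts(formula):
    # x0, y1, x2, ... become x, y, x, ...; a purely illustrative normalization
    return re.sub(r"([a-zA-Z])\d+", r"\1", formula)

def cost(interpolants):
    return len({drop_subscripts(f) for f in interpolants if f not in ("true", "false")})

diverging = ["x0 = 0 & y0 = 0", "x1 = 1 & y1 = 1", "x2 = 2 & y2 = 2"]
simple    = ["x0 = y0", "x1 = y1", "x2 = y2"]

print(cost(diverging))   # grows with the unwinding depth (3 distinct formulas here)
print(cost(simple))      # stays 1: every formula collapses to "x = y"
```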

  18. Loop example • Unwinding the loop gives the constraints x0 = 0; y0 = 0; x1 = x0 + 1; y1 = y0 + 1; x2 = x1 + 1; y2 = y1 + 1; ... • One interpolant sequence is TRUE, x0 = 0 ∧ y0 = 0, x1 = 1 ∧ y1 = 1, x2 = 2 ∧ y2 = 2, ...: it tracks exact values, has cost 2N, and diverges as we unwind • The lowest-cost sequence is TRUE, x0 = y0, x1 = y1, x2 = y2, ..., with cost 2 • The lowest-cost proof is simpler and avoids divergence.

  19. Interpolation • Generalize from bounded proofs to unbounded proofs • Weak bias • Rich proof system (large space of proofs) • Apply Occam's razor (simple proofs more likely to generalize) • Occam's razor is applied to • Avoid combinatorial explosion of deductions (relevance) • Eventually generalize to inductive proofs (convergence) • Apply theorem proving technology to search large space of possible proofs for simple proofs • DPLL, SMT solvers, etc.

  20. Widening operators • A widening operator ∇ is a function over a partially ordered set (L, ⊑), satisfying the following properties: • Soundness: y ⊑ x ∇ y (the result over-approximates the new value) • Expansion: x ⊑ x ∇ y (the result over-approximates the old value, so iterates only move upward) • Stability: for every ascending chain x0 ⊑ x1 ⊑ ..., the chain y0 = x0, y(i+1) = y(i) ∇ x(i+1) eventually stabilizes.
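
As an illustration, this sketch (not part of the talk) implements the classic widening on intervals, which keeps a bound only if it is stable and otherwise jumps to infinity; the helper names are ad hoc, and None stands in for bottom.

```python
INF = float("inf")

def leq(a, b):
    """Partial order: interval inclusion, with None as bottom (empty)."""
    if a is None:
        return True
    if b is None:
        return False
    return b[0] <= a[0] and a[1] <= b[1]

def widen(a, b):
    """Interval widening: keep a bound of a only if b does not move past it.
    Sound and expanding (a and b are both below a widen b), and stabilizing,
    since each bound can jump to +/- infinity at most once."""
    if a is None:
        return b
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

a, b = (0, 1), (0, 2)
w = widen(a, b)
print(w)                          # (0, inf): the unstable upper bound is dropped
assert leq(a, w) and leq(b, w)    # soundness and expansion on this example
```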

  21. Upward iteration sequence • Suppose we have a monotone transformer F representing (or approximating) our concrete semantics. • We apply the widening operator ∇ to successive iterations of F to obtain an upward iteration sequence y0 = ⊥, y(i+1) = y(i) ∇ F(y(i)), which is eventually stable. • The widening properties guarantee • over-approximation (the stable value over-approximates the least fixed point of F) • stabilization • Narrowing is similar but contracting.
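
The sketch below (not part of the talk) runs such an upward iteration with interval widening for the loop "x = 0; while (*) x++;". The transformer F and the interval operations are simplified stand-ins for an abstract interpreter.

```python
INF = float("inf")

def join(a, b):
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    if a is None: return b
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def F(s):
    """Abstract transformer for the loop head: reached either from the
    initialization x = 0 or from the body, which increments x by one."""
    body = None if s is None else (s[0] + 1, s[1] + 1)
    return join((0, 0), body)

y = None                         # y0 = bottom
while True:
    nxt = widen(y, F(y))         # y(i+1) = y(i) widen F(y(i))
    if nxt == y:                 # eventually stable by the widening properties
        break
    y = nxt

print(y)   # (0, inf): an over-approximation of the least fixed point {0, 1, 2, ...}
```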

  22. Widening as local deduction • Since widening loses information, we can think of it as a deduction rule • In fact, we may have several deduction rules at our disposal: from the assertion at the preceding point we may deduce an abstract post, a join, or a widening • The abstract post rule is sound if its conclusion over-approximates the exact post-image; the join and widen rules are sound if the conclusion over-approximates the operands and the operands were themselves soundly derived • Note we don't need the expansion and stability properties of ∇ to have a sound deduction rule.

  23. Widening with octagons • Annotating the unwound program {True} x = y = 0; x++; y++; x++; y++; [x!=0]; x--; y--; [x!=0]; x--; y--; [x == 0]; [y != 0] {False} with octagons yields assertions of the form {x = y, together with bounds on x and y} at every point • But note the irrelevant fact! The bounds on x and y play no role in the proof, and our proof rules are too coarse to eliminate this fact • Because we proved the property, we have computed an interpolant

  24. Over-widening (with intervals) • Unwound program: {True} x = y = 0; x = 1 - x; y++; x = 1 - x; y++; [x == 2] • With interval annotations, widening at every step loses the bounds on x, so the guard [x == 2] cannot be refuted and we fail to reach {False} • Note: if we had waited one step to widen, we would have a proof.
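
The following sketch (not part of the talk) reproduces this effect with a toy interval analysis of x over the loop body x = 1 - x: widening at every step loses the bounds, while delaying the widening by one step yields the invariant 0 ≤ x ≤ 1, which refutes [x == 2]. The delay parameter and helper names are illustrative.

```python
INF = float("inf")

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def body(s):
    """Abstract post of x = 1 - x on an interval for x."""
    return (1 - s[1], 1 - s[0])

def analyze(delay):
    """Iterate at the loop head, joining for `delay` steps before widening."""
    s = (0, 0)                           # x = 0 before the loop
    for i in range(20):
        nxt = join(s, body(s))           # one abstract step around the loop
        nxt = nxt if i < delay else widen(s, nxt)
        if nxt == s:
            break
        s = nxt
    return s

print(analyze(delay=0))   # (-inf, inf): eager widening loses the bounds,
                          # so [x == 2] cannot be refuted (over-widening)
print(analyze(delay=1))   # (0, 1): waiting one step gives an invariant that
                          # excludes x == 2, so the error state is unreachable
```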

  25. Safe widening • Let us define a safe widening sequence as one that ends in a safe state. • Suppose we apply a sequence of rules and fail: we may postpone a widening to achieve a safety proof • This is a proof search problem! • If we have k rules and n steps, there are on the order of k^n possible proofs • Safe widening sequences may not stabilize • they may not contain a long enough sequence of widenings • Safe widening sequences may not exist • That is, our proof system may be incomplete

  26. Incompleteness Widening/narrowing • Incomplete proof system on purpose • We restrict the proof system (strong bias) to enforce • relevance focus • convergence • These properties are obtained at the risk of over-widening Interpolation • Incompleteness derives only from incompleteness of underlying logic • For example, in Presburger arithmetic we have completeness • Relevance focus and convergence rely on general heuristics • Occam's razor (simple proofs tend to generalize) • Rely on theorem proving techniques • Choice of logic and axioms also represents a weak bias

  27. Consequences of strong bias • Widening requires domain knowledge, which entails a careful choice of the logical language L. • Octagons: easy • Unions of octagons: harder • Presburger arithmetic formulas: ??? • This entails incompleteness, as a restricted language implies loss of information. • This also means we can tailor the representation for efficiency. • Octagons: use half-space representation, not convex hull of vertices • Polyhedra: mixed representation

  28. Advantages of weak bias Weak bias can be used in cases where domain knowledge is lacking. • Boolean logic (e.g., hardware verification) • Language L is Boolean circuits over system state variables • There is no obvious a priori widening for this language • Interpolation techniques are the most effective known for this problem • McMillan CAV 2003 (feasible interpolation using SAT solvers) • Bradley VMCAI 2011 (interpolation by local proof) • Note: rapid convergence is very important here • Infinite-state cases requiring disjunctions • Hard to formulate a widening a priori • Weak bias can be used to avoid combinatorial explosion of disjuncts • Example: IMPACT • Scaling to large numbers of variables • Weak bias can allow focus on just the relevant variables

  29. Simple example for(i = 0; i < N; i++) a[i] = i; for(j = 0; j < N; j++) assert(a[j] == j); • Invariant of the first loop: {∀x. 0 ≤ x ∧ x < i ⇒ a[x] = x}

  30. Partial Axiomatization • Axioms of the theory of arrays (with select and update): • ∀A, I, V. select(update(A, I, V), I) = V • ∀A, I, J, V. I ≠ J → select(update(A, I, V), J) = select(A, J) • Axioms for arithmetic: • ∀X. X ≤ X • ∀X, Y, Z. X ≤ Y ∧ Y ≤ Z → X ≤ Z • ∀X, Y. Y ≤ X ∨ succ(X) ≤ Y [integer axiom] • etc. • We use a (local) first-order superposition prover to generate interpolants, with a simple metric for proof complexity.
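
To illustrate what the array axioms say, this sketch (not part of the talk) models select and update as operations on immutable maps and checks both axioms exhaustively on a small finite domain; the domain and the default cell value are arbitrary choices.

```python
from itertools import product

def update(a, i, v):
    """Functional array update: arrays are modeled as dicts from index to value."""
    b = dict(a)
    b[i] = v
    return b

def select(a, i):
    return a.get(i, 0)            # default value for unwritten cells

indices, values = range(3), range(3)
base = {0: 1, 1: 2}               # an arbitrary starting array

for i, j, v in product(indices, indices, values):
    assert select(update(base, i, v), i) == v                    # read-over-write, same index
    if i != j:
        assert select(update(base, i, v), j) == select(base, j)  # read-over-write, other index

print("the array axioms hold for this select/update model on the test domain")
```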

  31. Unwinding simple example • Unwind the loops twice: i = 0; [i < N]; a[i] = i; i++; [i < N]; a[i] = i; i++; [i >= N]; j = 0; [j < N]; j++; [j < N]; a[j] != j; • SSA constraints: i0 = 0; i0 < N; a1 = update(a0, i0, i0); i1 = i0 + 1; i1 < N; a2 = update(a1, i1, i1); i2 = i1 + 1; i2 ≥ N ∧ j0 = 0; j0 < N ∧ j1 = j0 + 1; j1 < N; select(a2, j1) ≠ j1 • Among the interpolants are {i0 = 0}, the first-loop invariants {0 ≤ U ∧ U < i1 ⇒ select(a1, U) = U} and {0 ≤ U ∧ U < i2 ⇒ select(a2, U) = U}, and the second-loop invariant {j ≤ U ∧ U < N ⇒ select(a2, U) = U} • The weak bias prevents constants diverging as 0, succ(0), succ(succ(0)), ...

  32. With strong bias • For the same unwound program i = 0; [i < N]; a[i] = i; i++; [i < N]; a[i] = i; i++; [i >= N]; j = 0; [j < N]; j++; [j < N]; a[j] != j; a strong-bias analysis would use something like the array segmentation functor of Cousot, Cousot, and Logozzo, whose abstract states split the array into segments (e.g., {0} ... {i}? ...) with constraints relating i to 0 and N • Note: it so happened here that our first try at a widening was safe, but this may not always be so.

  33. Comparison Widening/narrowing • The language L and the operators ∇ and Δ are carefully chosen to throw away information at just the right places • This represents strong domain knowledge • A carefully crafted representation yields high performance Interpolation • Axioms and proof bias are generic • Little domain knowledge is represented • Uses a generic theorem prover to generate local proofs • No domain-specific tuning • Not as scalable as the strong-bias approach

  34. List deletion example • Program: a = create_list(); while(a) { tmp = a->next; free(a); a = tmp; } • Add a few axioms about reachability • Invariant synthesized with 3 unwindings (after some simplification): {rea(next, a, nil) ∧ ∀x. (rea(next, a, x) → x = nil ∨ alloc(x))} • No need to craft a new specialized domain for linked lists. • Weak bias can be used in cases where domain knowledge is lacking.

  35. Are interpolants widenings? • A safe widening sequence is an interpolant. • An interpolant is not necessarily a widening sequence, however. • It need not satisfy the expansion property • It need not satisfy the eventual stability property as we increase the sequence length • A consequence of giving up stabilization is that inductive invariants (post-fixed points) are typically found in the middle of the sequence, not at an eventual stabilization point. • Early formulas tend to be too strong (influenced by the initial condition) • Late formulas tend to be too weak (influenced by the final condition)

  36. Typical interpolant sequence • For the unwound program {True} x = y = 0; x++; y++; x++; y++; [x!=0]; x--; y--; [x!=0]; x--; y--; [x == 0]; [y != 0] {False}, the early assertions are too strong, the middle ones are weakened but not expansive, the sequence does not stabilize at an invariant, and the late assertions are too weak • No matter how far we unwind, we may not get stabilization

  37. Conclusion • Widening/narrowing and interpolation are methods of generalizing from bounded to unbounded proofs • Formally, widening/narrowing satisfies stronger conditions: widening/narrowing guarantees soundness, expansion/contraction, and stabilization, while interpolation guarantees soundness only • Stabilization is not obtained when proving properties, however

  38. Conclusion, cont. • Heuristically, the difference is weak vs. strong bias • Strong bias: restricted proof system, incompleteness, smaller search space, domain knowledge, efficient representations • Weak bias: rich proof system, completeness, large search space, Occam's razor, generic representations • Can we combine strong and weak heuristics? • Fall back on weak heuristics when strong fails • Use weak heuristics to handle combinatorial complexity • Build known widenings into theory solvers in SMT?
