Under-approximating to Over-approximate: Invisible Invariants and Abstract Interpretation
Ken McMillan, Microsoft Research
Lenore Zuck, University of Illinois Chicago
Overview
• For some abstract domains, computing the best abstract transformer can be very costly:
  • (Indexed) Predicate Abstraction
  • Canonical shape graphs
• In these cases we may under-approximate the best transformer using finite-state methods, by restricting to a representative finite subset of the state space.
• In practice, this can be a close under-approximation, or even yield the exact abstract least fixed point.
• In some cases, finite-state under-approximations can yield orders-of-magnitude run-time reductions by reducing the number of evaluations of the true abstract transformer.
In this talk, we'll consider some generic strategies of this type, suggested by Pnueli and Zuck's Invisible Invariants method (viewed as Abstract Interpretation).
Parameterized Systems
• Suppose we have a parallel composition P1 || P2 || ... || PN of N (finite-state) processes, where N is unknown.
• Proofs require auxiliary constructs, parameterized on N:
  • For safety, an inductive invariant
  • For liveness, say, a ranking
• Pnueli et al., 2001: derive these constructs for general N by abstracting from the mechanical proof of a particular N.
• Surprising practical result: under-approximations can yield over-approximations at the fixed point.
Recipe for an invariant
1. Compute the reachable states RN for a fixed N (say, N = 5).
2. Project onto a small subset of the processes (say 2):
   Ψ = { (s1, s2) | ∃(s1, s2, ...) ∈ RN }
3. Generalize from 2 to N, to get GN:
   GN = ∧_{i ≠ j ∈ [1..N]} Ψ(si, sj)
4. Test whether GN is an inductive invariant for all N:
   ∀N. GN ⇒ X GN   (GN is preserved by every transition)
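The recipe above is concrete enough to run. Below is a minimal sketch of steps 1 to 3 on a toy N-process mutual-exclusion protocol; the protocol, the pair projection, and all function names are my own illustration, not code from the talk.

```python
from itertools import permutations

def successors(state):
    """All one-step successors of a global state (a tuple of local states)."""
    for i, loc in enumerate(state):
        if loc == 'N':                              # start trying
            yield state[:i] + ('T',) + state[i + 1:]
        elif loc == 'T' and 'C' not in state:       # enter critical section if nobody is in it
            yield state[:i] + ('C',) + state[i + 1:]
        elif loc == 'C':                            # leave the critical section
            yield state[:i] + ('N',) + state[i + 1:]

def reachable(n):
    """Step 1: reachable states R_N for a fixed instance size n (BFS)."""
    init = ('N',) * n
    seen, frontier = {init}, [init]
    while frontier:
        frontier = [t for s in frontier for t in successors(s) if t not in seen]
        seen.update(frontier)
    return seen

def project_pairs(states):
    """Step 2: project onto every ordered pair of distinct processes."""
    return {(s[i], s[j]) for s in states
            for i, j in permutations(range(len(s)), 2)}

def generalized_invariant(psi):
    """Step 3: G_N(s) holds iff (s[i], s[j]) is in psi for all i != j."""
    def G(state):
        return all((state[i], state[j]) in psi
                   for i, j in permutations(range(len(state)), 2))
    return G

psi = project_pairs(reachable(5))   # projection of the N = 5 instance
print(sorted(psi))                  # ('C', 'C') is absent: mutual exclusion
```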
Checking inductiveness
• Inductiveness is equivalent to validity of this formula:
   GN ∧ T ⇒ G′N   (T is the transition relation)
• Small model theorem:
  • If there is a countermodel with N > M, there is a countermodel with N = M.
  • It suffices to check inductiveness for N ≤ M.
In this case both invariant generation and invariant checking amount to finite-state model checking. If no small model result is available, however, we can rely on a theorem prover to check inductiveness.
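Continuing the toy sketch, step 4 can be discharged by brute-force finite-state checking once a small-model bound is assumed. The bound M = 4 here is arbitrary and purely illustrative; the snippet reuses successors, reachable, project_pairs, and generalized_invariant from the previous sketch.

```python
from itertools import product

def inductive_up_to(psi, M):
    """Check initiation and consecution of G_N for every instance size up to M."""
    G = generalized_invariant(psi)
    for n in range(2, M + 1):
        # initiation: the initial state satisfies G_N
        if not G(('N',) * n):
            return False
        # consecution: every successor of a G_N-state again satisfies G_N
        for state in product('NTC', repeat=n):
            if G(state) and not all(G(t) for t in successors(state)):
                return False
    return True

psi = project_pairs(reachable(5))
print(inductive_up_to(psi, M=4))   # True: the generalized guess is inductive
```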
Abstraction setting
• Concrete state space, with a concrete transformer.
• Abstract language L preserves conjunctions; abstraction of a concrete element:
   α(s) = ∧ { φ ∈ L | s ⊆ γ(φ) }
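As a concrete (if trivial) illustration of this setting, here is a toy conjunction-closed domain over integer states; the predicate names and the example set are my own, not from the talk.

```python
# alpha(S): the conjunction of all predicates in L that hold on every state of S.
# gamma(phis): the set of states on which every chosen predicate holds.

STATES = range(10)
L = {
    'nonneg': lambda s: s >= 0,
    'even':   lambda s: s % 2 == 0,
    'lt5':    lambda s: s < 5,
}

def alpha(concrete_states):
    """Best abstraction: all predicates true of every given concrete state."""
    return {name for name, p in L.items() if all(p(s) for s in concrete_states)}

def gamma(predicate_names):
    """Concretization: all states satisfying every chosen predicate."""
    return {s for s in STATES if all(L[name](s) for name in predicate_names)}

S = {0, 2}
print(alpha(S))          # {'nonneg', 'even', 'lt5'}
print(gamma(alpha(S)))   # {0, 2, 4}: the best over-approximation of S expressible in L
```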
Parameterized systems
• Concrete state space is the set of valuations of the variables
  • Special variable N : Nat represents the system parameter
  • Ranges of the other variables depend on N
  • For fixed N, all ranges are finite
• Concrete transition system defined in FOL
• Abstract domain is Indexed Predicate Abstraction
  • Fixed set of index variables (say i, j)
  • Fixed set of atomic predicates
  • A matrix is a Boolean combination over the atomic predicates (in some normal form)
  • L is the set of formulas ∀ index variables . matrix
  • Example: ∀i, j: i ≠ j ⇒ ¬(q[i] ∧ q[j])
• Small model results:
  • M depends mainly on the quantifier structure of GN and T
  • Example: if T has one universal and GN has two, then M = 2b+3
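A small sketch of what the indexed abstraction of a single concrete state looks like, under my own toy choice of index variables and atomic predicates (not the talk's): the strongest universally quantified fact is the disjunction of the predicate valuations realized by some index pair.

```python
from itertools import permutations

# Atomic predicates over (state, i, j); the names are hypothetical.
PREDICATES = {
    'q_i':    lambda st, i, j: st['q'][i],
    'q_j':    lambda st, i, j: st['q'][j],
    'i_lt_j': lambda st, i, j: i < j,
}

def alpha_indexed(state, n):
    """Collect the set of predicate valuations realized by some index pair (i, j)."""
    realized = set()
    for i, j in permutations(range(n), 2):
        realized.add(frozenset(name for name, p in PREDICATES.items()
                               if p(state, i, j)))
    return realized   # the matrix is the disjunction of these minterms

# q[i] holds for exactly one process, so no pair realizes q_i and q_j together:
# the abstraction entails  forall i != j . not(q[i] and q[j]).
state = {'q': [False, True, False, False]}
matrix = alpha_indexed(state, n=4)
print(all(not ({'q_i', 'q_j'} <= m) for m in matrix))   # True
```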
Invariant by AI
• Compute the strongest inductive invariant in L: it is lfp t#, the least fixed point of the best abstract transformer t#.
• t# is difficult to compute (exponentially many theorem prover calls).
• For our abstraction, this computation can be quite expensive!
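For intuition, here is a toy Kleene iteration of a best abstract transformer over the conjunction-closed domain sketched earlier; the transition relation and predicates are my own stand-ins for the real indexed domain.

```python
# lfp of t#(X) = alpha(init ∪ post(gamma(X))), computed by iteration to a fixed point.

STATES = range(10)
INIT = {0}

def post(S):                       # concrete transformer: x -> (x + 2) mod 10
    return {(s + 2) % 10 for s in S}

L = {'nonneg': lambda s: s >= 0,
     'even':   lambda s: s % 2 == 0,
     'lt5':    lambda s: s < 5}

def alpha(S):
    return frozenset(n for n, p in L.items() if all(p(s) for s in S))

def gamma(phis):
    return {s for s in STATES if all(L[n](s) for n in phis)}

def abstract_lfp():
    X = alpha(INIT)
    while True:
        X_next = alpha(INIT | post(gamma(X)))   # one application of t#
        if X_next == X:
            return X                            # fixed point reached
        X = X_next

print(abstract_lfp())   # invariant: nonneg and even (lt5 is not inductive)
```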
Restricted abstraction
• Concrete state space is a union of finite spaces, one for each value of N.
• Abstract language L as before.
• Restricted concretization γN "projects" onto the size-N subspace; the abstraction "generalizes" back to L.
• This restricted Galois connection is computable!
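A short sketch of why the restricted concretization is computable: for a fixed instance size it is just a finite enumeration. The indexed formula and the size are my own example.

```python
from itertools import product, permutations

def holds(phi, state):
    """phi is an indexed formula: forall i != j . matrix(state, i, j)."""
    n = len(state)
    return all(phi(state, i, j) for i, j in permutations(range(n), 2))

def mutex(q, i, j):                 # matrix: not (q[i] and q[j])
    return not (q[i] and q[j])

def gamma_N(phi, n):
    """Restricted concretization: the size-n models of phi."""
    return [q for q in product([False, True], repeat=n) if holds(phi, q)]

print(gamma_N(mutex, 3))   # all length-3 vectors with at most one True entry
```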
Invisible invariant construction
• We construct the invariant guess by reachability and abstraction: GN = α(RN), where RN is the least fixed point of the concrete post operator for the N-process instance.
• Testing the invariant guess GN: check inductiveness with SMT; if the instance size satisfies N >= M for a small model bound M, the finite-state check already suffices.
Under-approximation
• The idea of generalizing from finite instances suggests we can under-approximate the best abstract transformer t#.
• t#N is an under-approximation of t# that we can compute with finite-state methods.
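A sketch of such an under-approximate transformer t#N on the toy protocol from the recipe sketch (reusing its successors and project_pairs): iterating it is plain finite-state computation and reproduces the pair-wise invariant without any theorem prover calls. Again, the example system and names are mine.

```python
from itertools import product

def gamma_n(psi, n):
    """Size-n states all of whose process pairs lie in psi."""
    return [s for s in product('NTC', repeat=n)
            if all(p in psi for p in project_pairs([s]))]

def t_sharp_n(psi, n):
    """Under-approximate transformer: abstract only the size-n init and post-image."""
    init = ('N',) * n
    image = {t for s in gamma_n(psi, n) for t in successors(s)}
    return psi | project_pairs(image | {init})

def lfp_t_sharp_n(n):
    psi = project_pairs([('N',) * n])      # start from the abstract initial state
    while True:
        nxt = t_sharp_n(psi, n)
        if nxt == psi:
            return psi
        psi = nxt

print(('C', 'C') in lfp_t_sharp_n(4))      # False: mutual exclusion still provable
```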
Three methods
• Strategies A, B, and C each compute an abstract least fixed point; they differ in how much of the work uses the exact transformer t# versus the finite-instance transformer t#N, and the slide's diagram asks whether the resulting fixed points coincide.
Experiments
• Evaluate three approaches:
  • Strategy A: compute lfp t# using UCLID predicate abstraction (eager reduction to ALL-SAT)
  • Strategy B: compute lfp t#N by TLV with BDDs, and check the result using UCLID
  • Strategy C: compute RN and its abstraction by TLV, and check the result using UCLID
• Strategy C wins in two cases
  • Fewer computations of αN and γN
• Strategy B wins in one case
  • More abstraction reduces BDD size and iterations
• In all cases, only one theorem prover call is needed.
Related Work
• Yorsh, Ball, Sagiv 2006
  • Combines testing and abstract interpretation
  • Does not compute abstract fixed points for finite sub-spaces as here
  • Here we apply model checking aggressively to reduce computation
• Bingham 2008
  • Essentially applies Strategy B with a small model theorem to verify the FLASH cache coherence protocol
  • Compare to an interactive proof in PVS with 111 lemmas (776 lines) and an extensive proof script!
Static analysis with finite domains can replace very substantial manual proof efforts.
Conclusion
• Invisible invariants suggest a general approach to minimizing computation of the best transformer, based on two ideas:
  • Under-approximations can yield over-approximations at the fixed point
    • This is a bit mysterious, but observationally true
  • Computing the fixed point with under-approximations can use more lightweight methods
    • For example, BDD-based model checking instead of a theorem prover
• Using under-approximations can reduce the number of theorem prover calls to just one in the best case.
• We can apply this idea whenever we can define finite sub-spaces that are representative of the whole space.
  • Parametricity and symmetry are not required
  • For example, it could be applied to heap-manipulating programs by bounding the heap size.
Example: Peterson ME
• N-process version from Pnueli et al. 2001
• Per-process variables: pc (program location, L0..L6), in, last; initially in = ...
• Program skeleton for each process (guards and assignments elided):
  L0: <non-critical>; goto L1
  L1: ...; goto L2
  L2: if ... goto ... else goto ...
  L3: if ... then goto ... else goto ...
  L4: ... goto ...
  L5: <Critical>; goto L6
  L6: ...; goto L0
Peterson invariant
• Hand-made invariant for N-process Peterson:
  (m.ZERO < m.in(i) & m.in(i) < m.N => m.in(m.last(m.in(i))) = m.in(i)) &
  (m.in(i) = m.in(j) & m.ZERO < l & l < m.in(i) => m.in(m.last(l)) = l) &
  (m.pc(i) = L4 => (m.last(m.in(i)) != i | m.in(j) < m.in(i))) &
  ((m.pc(i) = L5 | m.pc(i) = L6) => m.in(i) = m.N) &
  ((m.pc(i) = L0 | m.pc(i) = L1) => m.in(i) = m.ZERO) &
  (m.pc(i) = L2 => m.in(i) > m.ZERO) &
  ((m.pc(i) = L3 | m.pc(i) = L4) => m.in(i) < m.N & m.in(i) > m.ZERO) &
  (~(m.in(i) = m.N & m.in(j) = m.N))
• Required a few hours of trial and error with a theorem prover
Peterson Invariant (cont.)
• Machine generated by TLV in 6.8 seconds:
  X18 := ~levlty1 & y1ltN & ~y1eqN & ~y2eqN & ~y1gtz & y1eqz & (~ysy1eqy1 => ~sy1eq1);
  X15 := ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysy1eqy1;
  X5 := (~levlty1 => y1ltN & X15);
  X1 := ysy1eqy1 & ~sy1eq1;
  X0 := ysy1eqy1 & sy1eq1;
  X16 := y1eqN & y2eqN & y1gtz & ~y1eqz & ysleveqlev & X0;
  X14 := y1eqN & y2eqN & y1gtz & ~y1eqz & X0;
  X13 := ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & (ysleveqlev => ysy1eqy1) & (~ysleveqlev => X0);
  X7 := (levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysleveqlev & ysy1eqy1) & X5;
  X6 := ~y1eqy2 & X7;
  X4 := (levlty1 => y1ltN & X13) & X5;
  X3 := (levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysleveqlev & X1) & (~levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & X1);
  X2 := ~y1eqy2 & X3;
  X17 := (levlty1 => (y1ltN => X13) & (~y1ltN => X14)) & (~levlty1 => (y1ltN => X15) & (~y1ltN => X16));
  X12 := (y1eqy2 => X7);
  X11 := (y1lty2 => X6);
  X10 := y1lty2 & X6;
  X9 := ~y1lty2 & ~y1eqy2 & X4;
  X8 := (~y1eqy2 => X4);
  matrix := ((loc1 = L5 | loc1 = L6) => (loc2 = L0 | loc2 = L1 | loc2 = L2 | loc2 = L3 | loc2 = L4) & ~y1lty2 & ~y1eqy2 & (levlty1 => ~y1ltN & X14) & (~levlty1 => ~y1ltN & X16)) &
    (loc1 = L4 => ((loc2 = L5 | loc2 = L6) => y1lty2 & X2) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => (y1lty2 => X2) & (~y1lty2 => (y1eqy2 => X3) & X8)) & ((loc2 = L0 | loc2 = L1) => X9)) &
    (loc1 = L3 => ((loc2 = L5 | loc2 = L6) => X10) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => X11 & (~y1lty2 => X12 & X8)) & ((loc2 = L0 | loc2 = L1) => X9)) &
    (loc1 = L2 => ((loc2 = L5 | loc2 = L6) => X10) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => X11 & (~y1lty2 => X12 & (~y1eqy2 => X17))) & ((loc2 = L0 | loc2 = L1) => ~y1lty2 & ~y1eqy2 & X17)) &
    ((loc1 = L0 | loc1 = L1) => (~(loc2 = L1 | loc2 = L0) => y1lty2 & ~y1eqy2 & X18) & ((loc2 = L0 | loc2 = L1) => ~y1lty2 & y1eqy2 & X18));