Learning Assumptions for Compositional Verification
J. M. Cobleigh, D. Giannakopoulou and C. S. Pasareanu
Presented by: Sharon Shoham
Main Limitation of Model Checking: The state explosion problem
• The number of states in the system model grows exponentially with
  • the number of variables
  • the number of components in the system
• One solution to the state explosion problem: Abstraction
• Another: Compositional Verification
Compositional Verification
• Inputs:
  • composite system M1 ║ M2
  • property P
• Goal: check if M1 ║ M2 ⊨ P
• First attempt: "divide and conquer" approach
• Problem: it is usually impossible to verify each component separately:
  a component is typically designed to satisfy its requirements in specific environments (contexts)
Compositional Verification
• Assume-Guarantee (AG) paradigm: introduces assumptions representing a component's environment.
• Instead of: Does the component satisfy the property?
  Ask: Under assumption A on its environment, does the component guarantee the property?
• <A> M <P>: whenever M is part of a system satisfying the assumption A, the system also guarantees P
Useful AG Rule for Safety Properties

  <A> M1 <P>
  <true> M2 <A>
  ------------------
  <true> M1 ║ M2 <P>

• Check that component M1 guarantees P when it is part of a system satisfying assumption A.
• Discharge the assumption: show that the remaining component M2 (the environment) satisfies A.
Assume-Guarantee
• Crucial element: the assumption A.
• A has to be strong enough to eliminate violations of P, but also general enough to reflect the environment M2 appropriately.
• Defining such assumptions requires non-trivial human input.
How can assumptions be constructed automatically?
Outline • Motivation • Setting • Automatic Generation of Assumptions for the AG Rule • Learning algorithm • Assume-Guarantee with Learning • Example
Labeled Transition Systems (LTS)
• Act – set of observable actions; LTSs communicate using observable actions
• τ – local / internal action
LTS M = (Q, q0, αM, δ)
• Q : finite non-empty set of states
• q0 ∈ Q : initial state
• αM ⊆ Act : observable actions (the alphabet of M)
• δ ⊆ Q × (αM ∪ {τ}) × Q : transition relation
Labeled Transition Systems (LTS)
[Figure: an LTS with states 0, 1, 2 and transitions 0 -in-> 1, 1 -send-> 2, 2 -ack-> 0; example traces: <in>, <in, send>, …]
• A trace of an LTS M is a finite sequence of observable actions that M can perform starting at the initial state.
• L(M), the language of M, is the set of all traces of M.
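To make the definitions concrete, here is a minimal Python sketch (not from the paper; the tuple encoding of an LTS and the helper name traces_up_to are assumptions made purely for illustration):

TAU = "tau"  # marker for the internal action

def traces_up_to(lts, k):
    # Collect all observable traces reachable within k transition steps.
    states, init, alphabet, delta = lts
    result = {()}
    frontier = {((), init)}
    for _ in range(k):
        nxt = set()
        for trace, q in frontier:
            for (s, a, t) in delta:
                if s == q:
                    # tau moves do not contribute to the observable trace
                    new_trace = trace if a == TAU else trace + (a,)
                    nxt.add((new_trace, t))
                    result.add(new_trace)
        frontier = nxt
    return result

# The Input LTS from the figure: states 0, 1, 2 and actions in, send, ack.
Input = ({0, 1, 2}, 0, {"in", "send", "ack"},
         {(0, "in", 1), (1, "send", 2), (2, "ack", 0)})
print(sorted(traces_up_to(Input, 2)))  # [(), ('in',), ('in', 'send')]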
Parallel Composition M1 ║ M2
• Components synchronize on common observable actions (communication).
• The remaining actions are interleaved.
M1 = (Q1, q01, αM1, δ1), M2 = (Q2, q02, αM2, δ2)
M1 ║ M2 = (Q, q0, αM, δ)
• Q = Q1 × Q2
• q0 = (q01, q02)
• αM = αM1 ∪ αM2
Transition Relation δ
Synchronization on a ∈ αM1 ∩ αM2 :
• if (q1, a, q1') ∈ δ1 and (q2, a, q2') ∈ δ2, then ((q1,q2), a, (q1',q2')) ∈ δ
Interleaving on a ∈ (αM \ (αM1 ∩ αM2)) ∪ {τ} :
• if (q1, a, q1') ∈ δ1 and a ∉ αM2 : ((q1,q2), a, (q1',q2)) ∈ δ for every q2 ∈ Q2
• if (q2, a, q2') ∈ δ2 and a ∉ αM1 : ((q1,q2), a, (q1,q2')) ∈ δ for every q1 ∈ Q1
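The two rules translate almost literally into code. A sketch under the same hypothetical encoding as before; τ always falls into the interleaving case, since it belongs to neither alphabet:

def compose(m1, m2):
    # Parallel composition: synchronize on shared actions, interleave the rest.
    q1s, i1, a1, d1 = m1
    q2s, i2, a2, d2 = m2
    shared = a1 & a2
    delta = set()
    for (p, a, p2) in d1:
        if a in shared:
            # synchronization: both components move together
            for (q, b, q2) in d2:
                if b == a:
                    delta.add(((p, q), a, (p2, q2)))
        else:
            # interleaving: m1 moves alone (covers tau as well)
            for q in q2s:
                delta.add(((p, q), a, (p2, q)))
    for (q, a, q2) in d2:
        if a not in shared:
            # interleaving: m2 moves alone
            for p in q1s:
                delta.add(((p, q), a, (p, q2)))
    states = {(p, q) for p in q1s for q in q2s}
    return (states, (i1, i2), a1 | a2, delta)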
Example
[Figure: the Input LTS (states 0, 1, 2; transitions 0 -in-> 1, 1 -send-> 2, 2 -ack-> 0) composed with the Output LTS (states a, b, c; transitions a -send-> b, b -out-> c, c -ack-> a). The components synchronize on send and ack; in and out are interleaved. Input ║ Output has states (0,a), …, (2,c); its reachable part cycles through (0,a) -in-> (1,a) -send-> (2,b) -out-> (2,c) -ack-> (0,a).]
Safety Properties
Also expressed as LTSs, but of a special kind, a safety LTS:
• deterministic:
  • it contains no τ-transitions, and
  • every state has at most one outgoing transition for each action:
    (q, a, q'), (q, a, q'') ∈ δ ⟹ q' = q''
Safety Properties
For a safety LTS P:
• L(P) describes the set of legal (acceptable) behaviors over αP.
• For Σ ⊆ Act, σ↾Σ is the trace obtained from σ by removing all occurrences of actions a ∉ Σ.
  Example: <in, send, ack>↾{in, ack} = <in, ack>
• M ⊨ P iff ∀σ ∈ L(M) : (σ↾αP) ∈ L(P)
  Equivalently, language containment: L(M)↾αP ⊆ L(P)
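A sketch of the projection and of trace-checking against a deterministic safety LTS, continuing the same illustrative encoding (the helper names are invented):

def project(trace, sigma):
    # sigma-projection: drop every action outside sigma
    return tuple(a for a in trace if a in sigma)

def accepts(p, trace):
    # Run a trace on a deterministic, tau-free safety LTS;
    # False means the trace is not a legal behavior over alphaP.
    states, q, alphabet, delta = p
    for a in trace:
        succ = [t for (s, b, t) in delta if s == q and b == a]
        if not succ:
            return False
        q = succ[0]  # determinism: at most one successor
    return True

print(project(("in", "send", "ack"), {"in", "ack"}))  # ('in', 'ack')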
Example
Does Input ║ Output ⊨ Order, where Order is the safety LTS over {in, out} in which in and out alternate?
[Figure: L(Input ║ Output) = Pref(<in, send, out, ack>*); projected onto αOrder = {in, out} this gives Pref(<in, out>*) = L(Order).]
Model Checking M ⊨ P
From the safety LTS P, build an error LTS Perr that "traps" violations with a special error state π:
P = (Q, q0, αP, δ)
Perr = (Q ∪ {π}, q0, αP, δ'), where
• δ' = δ ∪ {(q, a, π) | a ∈ αP and ∄q' ∈ Q : (q, a, q') ∈ δ}
• the error LTS is complete
• π is a dead-end state: it has no outgoing transitions
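The completion step is mechanical. A sketch, with ERR standing in for π (again an assumed encoding, not the paper's):

ERR = "pi"  # the error state

def to_error_lts(p):
    # Complete a safety LTS: route every missing (state, action) pair to pi.
    states, init, alphabet, delta = p
    d = set(delta)
    for q in states:
        for a in alphabet:
            if not any(s == q and b == a for (s, b, t) in delta):
                d.add((q, a, ERR))
    return (states | {ERR}, init, alphabet, d)  # pi gets no outgoing moves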
Example
[Figure: Order (states i, ii; transitions i -in-> ii, ii -out-> i) and Order_err, which adds the error transitions i -out-> π and ii -in-> π.]
Model Checking M ⊨ P
Theorem: M ⊨ P iff π is unreachable in M ║ Perr.
(In automata-theoretic terms: M ⊨ φ iff L(A_M ∩ A_¬φ) = ∅.)
Remark: the composition M ║ Perr is defined as before, with a small exception: if the target state of a transition in Perr is π, then the target state of the corresponding transition in the composed system M ║ Perr is also π.
Transition Relation δ of M ║ Perr (δ1 : M, δ2 : Perr)
Synchronization on a ∈ αM1 ∩ αM2 :
• if (q1, a, q1') ∈ δ1 and (q2, a, q2') ∈ δ2, then ((q1,q2), a, (q1',q2')) ∈ δ; if q2' = π, the composed target is π
Interleaving on a ∈ (αM \ (αM1 ∩ αM2)) ∪ {τ} :
• if (q1, a, q1') ∈ δ1 and a ∉ αM2 : ((q1,q2), a, (q1',q2)) ∈ δ for every q2 ∈ Q2
• if (q2, a, q2') ∈ δ2 and a ∉ αM1 : ((q1,q2), a, (q1,q2')) ∈ δ for every q1 ∈ Q1
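Putting the pieces together, model checking M ⊨ P reduces to a reachability search for π. A sketch reusing compose() and to_error_lts() from the earlier examples; a composed pair (q, π) is simply detected by its second component, which is equivalent to collapsing it into π:

from collections import deque

def satisfies(m, p):
    # M |= P iff pi is unreachable in M || Perr.
    comp = compose(m, to_error_lts(p))
    states, init, alphabet, delta = comp
    seen, work = {init}, deque([init])
    while work:
        q = work.popleft()
        if q[1] == ERR:  # a pair (q1, pi) stands for the error state pi
            return False
        for (s, a, t) in delta:
            if s == q and t not in seen:
                seen.add(t)
                work.append(t)
    return True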
Example
[Figure: Input ║ Output composed with Order_err. The reachable states of Input ║ Output ║ Order_err are of the form (0,a,i), (1,a,i), (2,b,i), (2,c,ii), …; π is unreachable, so Input ║ Output ⊨ Order.]
[Figure: a second variant of the check in which π is reachable; the path leading to π is a counterexample, so the property is violated.]
Assume-Guarantee Reasoning

  <A> M1 <P>
  <true> M2 <A>
  ------------------
  <true> M1 ║ M2 <P>

• M1, M2 : LTSs; P, A : safety LTSs
• Assumptions are also expressed as safety LTSs.
• <A> M <P> is true iff A ║ M ⊨ P, i.e. π is unreachable in A ║ M ║ Perr
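With the helpers above, checking an AG triple is a one-liner (illustrative only, under the same assumed encoding):

def ag_triple(a, m, p):
    # <A> M <P> holds iff pi is unreachable in A || M || Perr
    return satisfies(compose(a, m), p)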
Outline • Motivation • Setting • Automatic Generation of Assumptions for the AG Rule • Learning algorithm • Assume-Guarantee with Learning • Example
Observation 1
• The AG rule returns conclusive results with the weakest assumption Aw under which M1 satisfies P:
  • Under assumption Aw on M1's environment, M1 satisfies P, i.e. Aw ║ M1 ⊨ P
  • Weakest: for all environments M'2 : M1 ║ M'2 ⊨ P IFF M'2 ⊨ Aw
    (a sufficient and necessary assumption on M1's environment)
• Given Aw, to check whether M1 ║ M2 ⊨ P, check whether M2 meets the assumption Aw.
How to Obtain Aw ?
• Given module M1, property P, and M1's interface with its environment Σ = (αM1 ∪ αP) ∩ αM2 :
  Aw describes exactly all traces σ over Σ such that, in the context of σ, M1 satisfies P
• Aw is expressible as a safety LTS
• Aw can be computed algorithmically
Drawback of using Aw : if the computation runs out of memory, no assumption is obtained.
Observation 2
• There is no need to use the weakest environment assumption Aw: the AG rule might be applicable with a stronger (less general) assumption.
• Instead of computing Aw:
  • use a learning algorithm to learn Aw
  • use the candidates Ai produced by the learning algorithm as candidate assumptions: try to apply the AG rule with each Ai
Given a Candidate Assumption Ai
• If Ai ║ M1 ⊨ P does not hold: assumption Ai is not tight enough and needs to be strengthened.
Given a Candidate Assumption Ai
Suppose Ai ║ M1 ⊨ P holds:
• If M2 ⊨ Ai also holds, M1 ║ M2 ⊨ P is verified!
• Otherwise there is a trace σ ∈ L(M2) with cex = (σ↾Σ) ∉ L(Ai).
  Is P violated by M1 in the context of cex, i.e. does "cex ║ M1" ⊭ P ?
  • Yes: a real violation! M1 ║ M2 ⊨ P is falsified!
  • No: a spurious violation; a better approximation of Aw is needed.
"cex ║ M1" ⊨ P ?  (equivalently: cex ∈ L(Aw) ?)
• Compose M1 with an LTS Acex over αΣ that performs exactly cex = a1,…,ak :
  Acex : Q = {q0,…,qk}, initial state q0, δ = {(qi, ai+1, qi+1) | 0 ≤ i < k}
  [Figure: q0 -a1-> q1 -a2-> … -ak-> qk]
• Model check Acex ║ M1 ⊨ P : is π reachable in Acex ║ M1 ║ Perr ?
  • yes: a real violation
  • no: a spurious counterexample
• Alternatively, simulate cex on M1 ║ Perr :
  cex ║ M1 ⊨ P iff, when cex is simulated on M1 ║ Perr, it cannot lead to the (error) state π.
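A sketch of the counterexample analysis under the same encoding (trace_lts and cex_is_real are invented names):

def trace_lts(cex, sigma):
    # Straight-line LTS that performs exactly the trace cex = a1...ak.
    k = len(cex)
    delta = {(i, cex[i], i + 1) for i in range(k)}
    return (set(range(k + 1)), 0, set(sigma), delta)

def cex_is_real(cex, sigma, m1, p):
    # Real violation iff Acex || M1 does NOT satisfy P.
    return not satisfies(compose(trace_lts(cex, sigma), m1), p)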
Assume-Guarantee Framework
[Figure: the loop between the Learning and Model Checking components.]
• Learning produces a candidate assumption Ai.
• Step 1: model check Ai ║ M1 ⊨ P.
  • false: return the counterexample to the learner to strengthen the assumption; repeat step 1 with the new Ai.
  • true: proceed to step 2.
• Step 2: model check M2 ⊨ Ai.
  • true: P holds in M1 ║ M2.
  • false: a counterexample cex with cex↾Σ ∉ L(Ai); check whether it is a real error: cex ║ M1 ⊭ P ?
    • Y: P is violated in M1 ║ M2.
    • N: spurious; return the counterexample to the learner to weaken the assumption.
• For Aw the checks give conclusive results, so termination is guaranteed!
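The whole loop can be sketched as follows. Here check() is a hypothetical model checker that returns None or a counterexample trace, and lstar_candidates() is a hypothetical L* driver realized as a Python generator that yields conjectures and receives counterexamples via send(); neither is the paper's actual interface:

def ag_verify(m1, m2, p, sigma):
    learner = lstar_candidates(sigma)          # hypothetical L* generator
    ai = next(learner)
    while True:
        cex1 = check(compose(ai, m1), p)       # premise 1: <Ai> M1 <P>
        if cex1 is not None:
            ai = learner.send(project(cex1, sigma))  # strengthen Ai
            continue
        cex2 = check(m2, ai)                   # premise 2: <true> M2 <Ai>
        if cex2 is None:
            return True                        # P holds in M1 || M2
        c = project(cex2, sigma)
        if cex_is_real(c, sigma, m1, p):
            return False                       # real violation of P
        ai = learner.send(c)                   # spurious: weaken Ai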
Outline • Motivation • Setting • Automatic Generation of Assumptions for the AG Rule • Learning algorithm • Assume-Guarantee with Learning • Example
Learning Algorithm for DFA – L*
• L*: by Angluin, improved by Rivest & Schapire
• learns an unknown regular language U
• produces a Deterministic Finite-state Automaton (DFA) C such that L(C) = U
DFA M = (Q, q0, αM, δ, F) :
• Q, q0, αM, δ : as in a deterministic LTS
• F ⊆ Q : accepting states
• L(M) = {σ | δ(q0, σ) ∈ F}  (δ extended to strings in the usual way)
Learning Algorithm for DFA – L*
• L* interacts with a Teacher that answers two types of questions:
  • Membership queries: is the string σ in U ?
  • Conjectures (equivalence queries): for a candidate DFA Ci, is L(Ci) = U ?
• Answers are true, or false together with a counterexample.
• The conjectures C1, C2, … converge to C.
Learning Algorithm for DFA – L*
Myhill-Nerode Theorem: for every regular set U ⊆ Σ* there exists a unique minimal deterministic automaton whose states are isomorphic to the set of equivalence classes of the following relation:
  w ≈ w' iff ∀u ∈ Σ* : wu ∈ U ⇔ w'u ∈ U
Basic idea: learn the equivalence classes.
• Two prefixes are in different classes iff there is a distinguishing suffix u.
General Method
• L* maintains a table that records whether certain strings belong to U
  • it makes membership queries to update the table
• Once all currently distinguishable strings are in different classes (the table is closed), L* uses the table to build a candidate Ci and makes a conjecture:
  • if the Teacher replies true, done!
  • if the Teacher replies false, L* uses the counterexample to update the table
Learning Algorithm for DFA – L*
Observation Table (S, E, T):
• S ⊆ Σ* – prefixes (representatives of equivalence classes / states)
• E ⊆ Σ* – suffixes (distinguishing)
• T : (S ∪ S·Σ) · E → {true, false}
  (true: the string is in U; false: it is not)
L* Algorithm
T : (S ∪ S·Σ) · E → {true, false}
(S, E, T) is closed if ∀s ∈ S ∀a ∈ Σ ∃s' ∈ S ∀e ∈ E : T(sae) = T(s'e)
i.e. every row sa of S·Σ has a matching row s' in S:
• sa is indistinguishable from s' by any of the suffixes
• s' represents the next state reached from s after seeing a
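Under a simple encoding of the table (strings as tuples, T as a dict), the closedness test is a few lines; a sketch with invented helper names:

def row(T, E, s):
    # The row of string s: its T-values under every suffix in E.
    return tuple(T[s + e] for e in E)

def is_closed(S, E, T, alphabet):
    rows_of_S = {row(T, E, s) for s in S}
    return all(row(T, E, s + (a,)) in rows_of_S
               for s in S for a in alphabet)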
Learning Algorithm for DFA – L*
Initially: S = {λ}, E = {λ}
Loop {
  update T using membership queries
  while (S, E, T) is not closed {
    add sa to S, where sa has no representative prefix s' (no matching row s' in S)
    update T using membership queries
  }
  from the closed (S, E, T), construct a candidate DFA C
  present an equivalence query: L(C) = U ?
}
Learning Algorithm for DFA – L*
Candidate DFA from a closed (S, E, T):
• States: S
• Initial state: λ
• Accepting states: s ∈ S such that T(s) [= T(s·λ)] = true
• Transition relation: δ(s, a) = s', where ∀e ∈ E : T(sae) = T(s'e)
  [well-defined since the table is closed: ∀s ∈ S ∀a ∈ Σ ∃s' ∈ S ∀e ∈ E : T(sae) = T(s'e)]
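A sketch of this construction over the same table encoding (λ is the empty tuple; the function name is invented):

def make_candidate(S, E, T, alphabet):
    reps = {}  # one representative prefix per distinct row (state)
    for s in sorted(S, key=len):       # shortest first, so lambda represents itself
        reps.setdefault(row(T, E, s), s)
    states = set(reps.values())
    init = ()                                   # lambda
    accepting = {s for s in states if T[s]}     # T(s . lambda) = true
    delta = {(s, a): reps[row(T, E, s + (a,))]  # exists because the table is closed
             for s in states for a in alphabet}
    return (states, init, accepting, delta)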
Learning Algorithm for DFA – L*
Initially: S = {λ}, E = {λ}
Loop {
  update T using membership queries
  while (S, E, T) is not closed {
    add sa to S, where sa has no representative prefix s'
    update T using membership queries
  }
  from the closed (S, E, T), construct a candidate DFA C
  present an equivalence query: L(C) = U ?
  if C is correct, return C
  else add an e ∈ Σ* that witnesses the counterexample to E
}
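For completeness, a compact and deliberately unoptimized L* driver over these pieces; member() and equivalent() stand in for the Teacher. This sketch adds all suffixes of a counterexample to E (the Maler-Pnueli variant) rather than the single distinguishing suffix of Rivest-Schapire:

def lstar(alphabet, member, equivalent):
    S, E = {()}, [()]
    T = {}

    def fill():
        # Ask membership queries for every missing table entry.
        for s in set(S) | {s + (a,) for s in S for a in alphabet}:
            for e in E:
                if s + e not in T:
                    T[s + e] = member(s + e)

    while True:
        fill()
        while not is_closed(S, E, T, alphabet):
            # Promote an unmatched row sa of S.Sigma into S.
            for s in list(S):
                for a in alphabet:
                    if row(T, E, s + (a,)) not in {row(T, E, t) for t in S}:
                        S.add(s + (a,))
            fill()
        dfa = make_candidate(S, E, T, alphabet)
        cex = equivalent(dfa)
        if cex is None:
            return dfa
        for i in range(len(cex)):   # add the counterexample's suffixes to E
            if cex[i:] not in E:
                E.append(cex[i:])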
Choosing e
• e must be such that adding it to E eliminates the current counterexample in the next conjecture.
• e is a suffix of cex that witnesses a difference between L(Ci) and U.
• Adding e to E causes the addition of a prefix (state) to S: it splits states.
Characteristics of L*
• Terminates with the minimal automaton C for U
• Each candidate Ci is smallest:
  • any DFA consistent with the table has at least as many states as Ci
  • |C1| < |C2| < … < |C|
• Produces at most n candidates, where n = |C|
• # membership queries: O(kn² + n log m), where
  • m : size of the largest counterexample
  • k : size of Σ
Outline • Motivation • Setting • Automatic Generation of Assumptions for the AG Rule • Learning algorithm • Assume-Guarantee with Learning • Example
Assume-Guarantee with Learning
Reminder:
• use a learning algorithm to learn Aw
• use the candidates produced by learning as candidate assumptions Ai for the AG rule
In order to use L* to produce assumptions Ai, we must:
• show that L(Aw) is regular
• translate DFAs to safety LTSs (assumptions)
• implement the Teacher
L(Aw) is Regular
• Aw is expressible as a safety LTS.
• Translation of a safety LTS A into a DFA C (all states accepting):
  A = (Q, q0, αM, δ)  →  C = (Q, q0, αM, δ, Q)
⇒ There exists a DFA C s.t. L(C) = L(Aw), so L* can be used to learn a DFA that accepts L(Aw).
DFA to Safety LTS
• The DFAs Ci returned by L* are complete and minimal, and in this setting also prefix-closed (the weakest assumption is prefix-closed):
  • if σ ∈ L(Ci), then every prefix of σ is also in L(Ci)
⇒ Ci contains a single non-accepting state qnf, and no accepting state is reachable from qnf.
To get a safety LTS, simply remove the non-accepting state and all its incoming transitions:
  Ci = (Q ∪ {qnf}, q0, αM, δ, Q)  →  Ai = (Q, q0, αM, δ ∩ (Q × αM × Q))
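A sketch of this last conversion, applied to the candidate representation built earlier (prefix-closedness guarantees the rejected strings all lead to a single non-accepting sink):

def dfa_to_safety_lts(dfa):
    # Drop the rejecting sink and every transition touching it.
    states, init, accepting, delta = dfa
    lts_delta = {(s, a, t) for ((s, a), t) in delta.items()
                 if s in accepting and t in accepting}
    alphabet = {a for (s, a) in delta}
    return (accepting, init, alphabet, lts_delta)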