
Compositional Verification part II

Compositional Verification part II. Dimitra Giannakopoulou and Corina Păsăreanu, CMU / NASA Ames Research Center. Recap from part I: compositional verification, assume-guarantee reasoning, the weakest assumption, and a learning framework for reasoning about two components.



Presentation Transcript


  1. Compositional Verification part II Dimitra Giannakopoulou and Corina Păsăreanu, CMU / NASA Ames Research Center

  2. recap from part I • Compositional Verification • Assume-guarantee reasoning • Weakest assumption • Learning framework for reasoning about 2 components

  3. compositional verification • Does a system made up of M1 and M2 satisfy property P? • Checking P on the entire system: too many states! • Instead, use the natural decomposition of the system into its components to break up the verification task • Check components in isolation: does M1 satisfy P? • Typically a component is designed to satisfy its requirements in specific contexts / environments • Assume-guarantee reasoning: introduces an assumption A representing M1’s “context” (here, M2)

  4. assume-guarantee reasoning • Reason about triples ⟨A⟩ M ⟨P⟩: the formula is true if, whenever M is part of a system that satisfies A, then the system must also guarantee P • Simplest assume-guarantee rule (ASYM): 1. ⟨A⟩ M1 ⟨P⟩ 2. ⟨true⟩ M2 ⟨A⟩ (“discharges” the assumption) ⟹ ⟨true⟩ M1 || M2 ⟨P⟩ • How do we come up with the assumption A? (usually a difficult manual process) • Solution: synthesize A automatically
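The triples and rule above can be exercised on tiny finite-state models. Below is a minimal Python sketch (the LTS encoding, the req/grant toy models, and all names are invented for illustration; they are not from the slides): a triple ⟨A⟩ M ⟨P⟩ holds iff no error state of the property automaton is reachable in A || M || Perr.

```python
from collections import deque

def compose(a1, t1, a2, t2, init):
    """Parallel composition of two deterministic LTSs.
    Each LTS is (alphabet, dict (state, action) -> state); the components
    synchronize on shared actions and interleave the rest.
    Returns (alphabet, reachable transitions) of the product."""
    alpha = a1 | a2
    trans, seen, frontier = {}, {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        s1, s2 = s
        for act in alpha:
            n1 = t1.get((s1, act)) if act in a1 else s1
            n2 = t2.get((s2, act)) if act in a2 else s2
            if n1 is None or n2 is None:   # one side blocks this action
                continue
            t = (n1, n2)
            trans[(s, act)] = t
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return alpha, trans

def triple_holds(A, M, Perr):
    """<A> M <P>: compose A || M || Perr and report whether the property's
    'err' state is unreachable.  All automata start in state 0."""
    aAM, tAM = compose(A[0], A[1], M[0], M[1], (0, 0))
    _, trans = compose(aAM, tAM, Perr[0], Perr[1], ((0, 0), 0))
    reachable = {((0, 0), 0)} | set(trans.values())
    return all(s[1] != 'err' for s in reachable)

# Toy instance: the property forbids 'grant' before 'req';
# TRUE is the unconstrained ("true") assumption.
TRUE  = (set(), {})
M_OK  = ({'req', 'grant'}, {(0, 'req'): 1, (1, 'grant'): 0})
M_BAD = ({'req', 'grant'}, {(0, 'req'): 1, (1, 'grant'): 0, (0, 'grant'): 0})
P_ERR = ({'req', 'grant'},
         {(0, 'req'): 1, (0, 'grant'): 'err', (1, 'grant'): 0, (1, 'req'): 1})
```

Premise 2 of ASYM, ⟨true⟩ M2 ⟨A⟩, can be checked the same way, with an error-completed automaton for A playing the role of the property.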

  5. the weakest assumption • Given component M, property P, and the interface of M with its environment, generate the weakest environment assumption WA such that ⟨WA⟩ M ⟨P⟩ holds • Weakest means that for all environments E: ⟨true⟩ M || E ⟨P⟩ IFF ⟨true⟩ E ⟨WA⟩

  6. assumption generation [ASE’02] • STEP 1: composition, hiding, minimization: if the property holds here, it holds in all environments (property true!) • STEP 2: backward propagation of the error state along τ transitions: if the property fails here, it fails in all environments (property false!) • STEP 3: property extraction (subset construction & completion) yields the assumption

  7. learning for assume-guarantee reasoning • Use an off-the-shelf learning algorithm to build an appropriate assumption for rule ASYM: ⟨A⟩ M1 ⟨P⟩, ⟨true⟩ M2 ⟨A⟩ ⟹ ⟨true⟩ M1 || M2 ⟨P⟩ • The process is iterative • Assumptions are generated by querying the system, and are gradually refined • Queries are answered by model checking • Refinement is based on counterexamples obtained by model checking • Termination is guaranteed

  8. learning assumptions true s M1 P query: string s false • A M1 P • true M2 A • true M1 || M2 P Model Checking • Use L* to generate candidate assumptions • A = (M1P) M2 L* string c A false + cex c • conjecture:Ai Ai M1 P true true true M2 Ai P holds in M1 || M2 false + cex c false cA M1 P P violated in M1 || M2 string c A true • Guaranteed to terminate • Reaches weakest assumption or terminates earlier

  9. part II • compositional verification • assume-guarantee reasoning • weakest assumption • learning framework for reasoning about 2 components extensions: • reasoning about n > 2 components • symmetric and circular assume-guarantee rules • alphabet refinement • reasoning about code

  10. extension to n components • Rule ASYM generalizes: ⟨A⟩ M1 ⟨P⟩, ⟨true⟩ M2 || … || Mn ⟨A⟩ ⟹ ⟨true⟩ M1 || M2 || … || Mn ⟨P⟩ • To check whether M1 || M2 || … || Mn satisfies P: decompose it into M1 and M’2 = M2 || … || Mn, then apply the learning framework recursively for the 2nd premise of the rule, where A plays the role of the property • At each recursive invocation for Mj and M’j = Mj+1 || … || Mn, use learning to compute Aj such that ⟨Aj⟩ Mj ⟨Aj-1⟩ is true and ⟨true⟩ Mj+1 || … || Mn ⟨Aj⟩ is true
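The recursion above can be sketched as follows; `learn_pair` and `check_triple` are stand-ins invented for this sketch (the slides do not give an implementation), abstracting the 2-component learning framework and the model checker respectively.

```python
def check_n(components, prop, learn_pair, check_triple):
    """Recursive decomposition for rule ASYM over n components.
    learn_pair(m, p) returns an assumption A such that <A> m <p> holds,
    or None if no such assumption exists; check_triple(m, p) decides
    <true> m <p> for the final component."""
    head, rest = components[0], components[1:]
    if not rest:                       # base case: single component left
        return check_triple(head, prop)
    a = learn_pair(head, prop)         # premise 1: <A> head <prop>
    if a is None:
        return False
    # premise 2, <true> rest <A>, becomes a recursive call
    # with the learned assumption A playing the role of the property
    return check_n(rest, a, learn_pair, check_triple)
```

With stub callbacks, checking ["U1", "U2", "ARB"] against "P" first learns an assumption for (U1, P), then recurses on (U2, ARB) with that assumption as the property, mirroring the arbiter example.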

  11. example: Resource Arbiter • [Figure: arbiter ARB connected to user threads U1–U5 via Request, Cancel and Grant, Deny, Rescind messages] • Model derived from the Mars Exploration Rover (MER) Resource Arbiter • Local management of resource contention between resource consumers (e.g. science instruments, communication systems) • Consists of k user threads and one server thread (arbiter) • Checked mutual exclusion between resources • E.g. driving while capturing a camera image are mutually incompatible • Compositional verification scaled to >5 users, whereas monolithic verification ran out of memory [SPIN’06]

  12. recursive invocation • Compute A1 … A5 s.t. ⟨A1⟩ U1 ⟨P⟩ and ⟨true⟩ U2 || U3 || U4 || U5 || ARB ⟨A1⟩; ⟨A2⟩ U2 ⟨A1⟩ and ⟨true⟩ U3 || U4 || U5 || ARB ⟨A2⟩; ⟨A3⟩ U3 ⟨A2⟩ and ⟨true⟩ U4 || U5 || ARB ⟨A3⟩; ⟨A4⟩ U4 ⟨A3⟩ and ⟨true⟩ U5 || ARB ⟨A4⟩; ⟨A5⟩ U5 ⟨A4⟩ and ⟨true⟩ ARB ⟨A5⟩ • Result: ⟨true⟩ U1 || … || U5 || ARB ⟨P⟩

  13. symmetric rules: motivation • Example: components Input and Output over the actions in, send, ack, out; property Order (with error automaton Ordererr) • [Figures: LTSs for Input, Output and Ordererr, and the assumptions A1, A2 obtained for the two decompositions M1 = Input, M2 = Output and M1 = Output, M2 = Input]

  14. symmetric rules • A1 M1 P • A2 M2 P • L(coA1 || coA2)  L(P) • true M1 || M2 P • Assumptions for both components at the same time • Early termination; smaller assumptions • Example symmetric rule – SYM • coAi = complement of Ai, for i=1,2 • Requirements for alphabets: • PM1M2; Ai (M1M2)  P, for i =1,2 • The rule is sound and complete • Completeness neededto guarantee termination • Straightforward extension to n components Ensure that any common trace ruled out by both assumptions satisfies P.

  15. learning framework for rule SYM • Two instances of L* run in parallel, producing candidate assumptions A1 and A2 • Check ⟨A1⟩ M1 ⟨P⟩ and ⟨A2⟩ M2 ⟨P⟩; counterexamples add or remove traces in the corresponding L* instance • When both premises hold, check L(coA1 || coA2) ⊆ L(P): if true, P holds in M1 || M2; if false, counterexample analysis either concludes that P is violated in M1 || M2 or feeds the counterexample back to L*

  16. circular rule • A1 M1 P • A2 M2  A1 • true M1  A2  • true M1 || M2 P • Rule CIRC – from [Grumberg&Long – Concur’91] • Similar to rule ASYM applied recursively to 3 components • First and last component coincide • Hence learning framework similar • Straightforward extension to n components

  17. assumption alphabet refinement • In rule ASYM, the assumption alphabet was fixed during learning: αA = (αM1 ∪ αP) ∩ αM2 • [SPIN’06]: a subset of this alphabet may be sufficient to prove the desired property, and may lead to a smaller assumption • How do we compute a good subset of the assumption alphabet? • Solution: iterative alphabet refinement: start with a small alphabet and add actions as necessary, discovered by analysis of the counterexamples obtained from model checking

  18. learning with alphabet refinement 1. Initialize Σ to a subset of the full alphabet αA = (αM1 ∪ αP) ∩ αM2 2. If learning with Σ returns true, return true (END) 3. If learning returns false with counterexample c, perform extended counterexample analysis on c: if c is real, return false (END); if c is spurious, add more actions from αA to Σ and go to 2
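The steps above can be sketched as a loop; `learn` and `analyze` are assumed callbacks invented for this sketch (not real LTSA APIs), standing in for the inner learning framework and the extended counterexample analysis.

```python
def learn_with_refinement(full_alpha, sigma0, learn, analyze):
    """Alphabet-refinement outer loop.
    learn(sigma)        -> ('true', None) or ('false', counterexample)
    analyze(cex, sigma) -> ('real', None) or ('spurious', actions_to_add)"""
    sigma = set(sigma0)
    while True:
        verdict, cex = learn(sigma)                 # step 2
        if verdict == 'true':
            return True
        kind, new_actions = analyze(cex, sigma)     # step 3
        if kind == 'real':
            return False
        # spurious: refinement must add at least one fresh interface action,
        # which (the interface being finite) is what guarantees termination
        assert new_actions and new_actions <= full_alpha - sigma
        sigma |= new_actions
```

With stubs mimicking the later send/out example (Σ starts as {out} and the analysis adds send), the loop converges after one refinement.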

  19. extended counterexample analysis • αA = (αM1 ∪ αP) ∩ αM2 is the full interface alphabet; Σ ⊆ αA is the current alphabet • Learning proceeds as before over Σ: membership queries ⟨s⟩ M1 ⟨P⟩, conjectures Ai checked via ⟨Ai⟩ M1 ⟨P⟩ and ⟨true⟩ M2 ⟨Ai⟩ • When ⟨true⟩ M2 ⟨Ai⟩ fails with counterexample c: if ⟨c↓Σ⟩ M1 ⟨P⟩ is true, return c to L*; if it is false with counterexample t, check ⟨c↓αA⟩ M1 ⟨P⟩: if that is also false, P is violated; if it is true, c is spurious, so the refiner compares c↓αA and t↓αA, adds actions to Σ, and restarts learning
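The projection c↓Σ used throughout this analysis is just trace filtering; a one-line sketch (the function name is invented):

```python
def project(trace, sigma):
    """c|sigma: keep, in order, only the actions of `trace` that belong
    to the alphabet `sigma` (a set of action names)."""
    return tuple(act for act in trace if act in sigma)
```

For example, projecting ⟨send, out⟩ onto Σ = {out} yields ⟨out⟩, while projecting it onto the full interface alphabet leaves it unchanged.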

  20. alphabet refinement: example • Components Input and Output, property Ordererr; αA = {send, out, ack}, current Σ = {out} • ⟨true⟩ Output ⟨Ai⟩ is false with counterexample c = ⟨send, out⟩, so c↓Σ = ⟨out⟩ • ⟨c↓Σ⟩ Input ⟨P⟩ is false, with counterexample t = ⟨out⟩ • ⟨c↓αA⟩ Input ⟨P⟩ is true, so c is spurious • Refiner: compare t↓αA = ⟨out⟩ with c↓αA = ⟨send, out⟩; add “send” to Σ

  21. characteristics • Initialization of Σ: the empty set, or αP ∩ αA • Refiner compares t↓αA and c↓αA • Heuristics: AllDiff adds all actions in the symmetric difference of the trace alphabets; Forward scans the traces in parallel, forward, adding the first action that differs; Backward is symmetric to the previous, scanning backward • Termination: each refinement produces at least one new action and the interface is finite • Generalization to n components through recursive invocation • See also learning with optimal alphabet refinement, developed independently by Chaki & Strichman [2007]
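The heuristics above can be sketched as follows. This is one plausible reading of the slide (the published definitions of Forward and Backward differ in details), with invented helper names:

```python
def all_diff(c, t):
    """AllDiff: all actions in the symmetric difference of the two
    trace alphabets."""
    return set(c) ^ set(t)

def forward(c, t):
    """Forward (one plausible reading): scan the traces from the front and
    return the first action that appears in only one of the two alphabets."""
    diff = all_diff(c, t)
    for act in tuple(c) + tuple(t):
        if act in diff:
            return {act}
    return set()

def backward(c, t):
    """Backward: symmetric to Forward, scanning from the end."""
    return forward(tuple(reversed(c)), tuple(reversed(t)))
```

On the traces from the previous example, c↓αA = ⟨send, out⟩ and t↓αA = ⟨out⟩, all three heuristics add the single action send.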

  22. implementation & experiments • Implementation in the LTSA tool • Learning using rules ASYM, SYM and CIRC • Supports reasoning about two and n components • Alphabet refinement for all the rules • Experiments • Compare effectiveness of different rules • Measure effect of alphabet refinement • Measure scalability as compared to non-compositional verification • Extensions for • SPIN • JavaPathFinder http://javapathfinder.sourceforge.net

  23. case studies • K9 Rover: model of the Ames K9 Rover Executive • Executes flexible plans for autonomy • Consists of a main Executive thread and an ExecCondChecker thread for monitoring state conditions • Property checked for a specific shared variable: if the Executive reads its value, the ExecCondChecker should not read it before the Executive clears it • MER Rover: model of the JPL MER Resource Arbiter • Local management of resource contention between resource consumers (e.g. science instruments, communication systems) • Consists of k user threads and one server thread (arbiter) • Checked mutual exclusion between resources

  24. results • Rule ASYM is more effective than rules SYM and CIRC • The recursive version of ASYM is the most effective when reasoning about more than two components • Alphabet refinement improves learning-based assume-guarantee verification significantly • Backward refinement is slightly better than the other refinement heuristics • Learning-based assume-guarantee reasoning can incur significant time penalties and is not always better than non-compositional (monolithic) verification, but is sometimes significantly better in terms of memory

  25. analysis data • [Table comparing rule ASYM, ASYM with alphabet refinement, and monolithic verification] • |A| = assumption size; Mem = memory (MB); Time in seconds; “--” = reached the time limit (30 min) or the memory limit (1 GB)

  26. design/code level analysis • Design level: does M1 || M2 satisfy P? Model check; build assumption A • Code level: does C1 || C2 satisfy P? Model check using assumption A [ICSE’2004]: good results, but may not scale • Alternative: use the assumption to TEST the code

  27. compositional verification for C • Check the composition of C software components: C1 || C2 |= P • Predicate abstraction extracts models M1 and M2 from C1 and C2 • The learning framework runs on M1 || M2: if it returns true, then C1 || C2 |= P • If it returns false, counterexample analysis decides whether the counterexample is spurious: if so, refine the corresponding abstraction; otherwise C1 || C2 does not satisfy P

  28. end of part II please ask LOTS of questions!
