
Learning Based Assume-Guarantee Reasoning

Corina Păsăreanu

Perot Systems Government Services,

NASA Ames Research Center

Joint work with:

Dimitra Giannakopoulou (RIACS/NASA Ames)

Howard Barringer (U. of Manchester)

Jamie Cobleigh (U. of Massachusetts Amherst/MathWorks)

Mihaela Gheorghiu (U. of Toronto)


Thanks

  • Eric Madelaine

  • Monique Simonetti

  • INRIA


Context

[Diagram: design models M1, M2 of components C1, C2, refined into implementations; the cost of detecting/fixing defects increases through the lifecycle, so integration issues should be handled early.]

Objective:

  • An integrated environment that supports software development and verification/validation throughout the lifecycle; detect integration problems early, prior to coding

    Approach:

  • Compositional (“divide and conquer”) verification, for increased scalability, at design level

  • Use design level artifacts to improve/aid coding and testing

[Diagram: lifecycle phases Requirements → Design → Coding → Testing → Deployment; compositional verification is applied to models at the design level and carried down to implementations.]


Compositional Verification

Does a system made up of M1 and M2 satisfy property P?

  • Check P on entire system: too many states!

  • Use the natural decomposition of the system into its components to break up the verification task

  • Check components in isolation:

    Does M1 satisfy P?

    • Typically a component is designed to satisfy its requirements in specific contexts / environments

  • Assume-guarantee reasoning:

    • Introduces assumption A representing M1’s “context”

[Diagram: M1 checked against P in the context of assumption A, which abstracts M2.]


Assume-Guarantee Rules

  • Reason about triples: ⟨A⟩ M ⟨P⟩

    The triple is true if, whenever M is part of a system that satisfies A, then the system must also guarantee P

  • Simplest assume-guarantee rule – ASYM:

    1. ⟨A⟩ M1 ⟨P⟩
    2. ⟨true⟩ M2 ⟨A⟩   ("discharges" the assumption)
    ⟨true⟩ M1 || M2 ⟨P⟩

How do we come up with the assumption A? (usually a difficult manual process)

Solution: use a learning algorithm.


Outline

  • Framework for learning based assume-guarantee reasoning [TACAS’03]

    • Automates rule ASYM

  • Extension with symmetric [SAVCBS’03] and circular rules

  • Extension with alphabet refinement [TACAS’07]

  • Implementation and experiments

  • Other extensions

  • Related work

  • Conclusions


Formalisms

  • Components modeled as finite state machines (FSM)

    • FSMs assembled with parallel composition operator “||”

      • Synchronizes shared actions, interleaves remaining actions

  • A safety property P is an FSM

    • P describes all legal behaviors

    • Perr – complement of P

      • determinize & complete P with an “error” state;

      • bad behaviors lead to error

    • Component M satisfies P iff error state unreachable in (M || Perr)

  • Assume-guarantee reasoning

    • Assumptions and guarantees are FSMs

    • A M P holds iff error state unreachable in (A || M || Perr)


Example

[Diagram: FSMs for components Input (actions in, send, ack) and Output (actions send, out, ack), composed as Input || Output and checked against the property Ordererr over actions in, out.]
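As a usage example of the sketch above, here is one plausible encoding of this system; the transition structure is reconstructed from the action labels and may differ in detail from the original slide:

    # Input receives "in", "send"s data to Output, and waits for "ack";
    # Output "send"-synchronizes, produces "out", then acknowledges.
    Input = FSM(0, {"in", "send", "ack"},
                {(0, "in"): 1, (1, "send"): 2, (2, "ack"): 0})
    Output = FSM(0, {"send", "out", "ack"},
                 {(0, "send"): 1, (1, "out"): 2, (2, "ack"): 0})
    # Order_err: "in" and "out" must alternate, starting with "in";
    # any other order leads to the error state.
    Order_err = FSM(0, {"in", "out"},
                    {(0, "in"): 1, (1, "out"): 0,
                     (0, "out"): "err", (1, "in"): "err"})

    system, _ = compose(Input, Output)
    print(satisfies(system, Order_err))   # True: in/out alternate in Input || Output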


Learning for Assume-Guarantee Reasoning

Rule ASYM:
  1. ⟨A⟩ M1 ⟨P⟩
  2. ⟨true⟩ M2 ⟨A⟩
  ⟨true⟩ M1 || M2 ⟨P⟩

  • Use an off-the-shelf learning algorithm to build appropriate assumption for rule ASYM

  • Process is iterative

  • Assumptions are generated by querying the system, and are gradually refined

  • Queries are answered by model checking

  • Refinement is based on counterexamples obtained by model checking

  • Termination is guaranteed


Learning with L*

  • L* algorithm by Angluin, improved by Rivest & Schapire

  • Learns an unknown regular language U (over alphabet Σ) and produces a DFA A such that L(A) = U

  • Uses a teacher to answer two types of questions

[Diagram: L* interacting with the teacher for the unknown regular language U.
  • query: "is string s in U?" is answered true or false.
  • conjecture: "is L(Ai) = U?" is either answered true, in which case L* outputs a DFA A such that L(A) = U, or answered false with a counterexample string t, which L* adds to or removes from its next conjecture.]
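For intuition, here is a compact Python sketch of Angluin-style L* over single-character actions (the observation-table version; practical implementations add the Rivest & Schapire counterexample handling mentioned above). The functions member and equiv stand for the teacher and are assumptions of this sketch:

    def lstar(alphabet, member, equiv):
        """Learn a DFA for U from membership and equivalence queries."""
        S, E = [""], [""]     # access strings (rows) and distinguishing suffixes (columns)
        T = {}                # observation table: T[s + e] = member(s + e)

        def row(s):
            return tuple(T[s + e] for e in E)

        while True:
            # fill the table for S and its one-step extensions
            for s in S + [s + a for s in S for a in alphabet]:
                for e in E:
                    if s + e not in T:
                        T[s + e] = member(s + e)
            # closed: every extension's row must already appear as an S-row
            ext = next((s + a for s in S for a in alphabet
                        if all(row(s + a) != row(t) for t in S)), None)
            if ext is not None:
                S.append(ext)
                continue
            # consistent: equal S-rows must agree on all one-step extensions
            suf = next((a + e for s1 in S for s2 in S
                        if s1 != s2 and row(s1) == row(s2)
                        for a in alphabet for e in E
                        if T[s1 + a + e] != T[s2 + a + e]), None)
            if suf is not None:
                E.append(suf)
                continue
            # conjecture DFA: states are the distinct rows
            states = {row(s) for s in S}
            init, accept = row(""), {r for r in states if r[0]}   # E[0] == ""
            delta = {(row(s), a): row(s + a) for s in S for a in alphabet}
            ok, cex = equiv(states, init, accept, delta)
            if ok:
                return states, init, accept, delta
            for i in range(1, len(cex) + 1):   # add all prefixes of the counterexample
                if cex[:i] not in S:
                    S.append(cex[:i])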


Learning Assumptions

Rule ASYM:
  1. ⟨A⟩ M1 ⟨P⟩
  2. ⟨true⟩ M2 ⟨A⟩
  ⟨true⟩ M1 || M2 ⟨P⟩

  • Use L* to generate candidate assumptions

  • Assumption alphabet: αA = (αM1 ∪ αP) ∩ αM2

[Diagram: the teacher is implemented using a model checker.
  • Membership query for string s: model check ⟨s⟩ M1 ⟨P⟩; the answer (true/false) is returned to L*.
  • Conjecture Ai: first model check ⟨Ai⟩ M1 ⟨P⟩. If it fails with counterexample t, return t↾αA (t projected onto αA) to L* so that behavior is removed. If it holds, model check ⟨true⟩ M2 ⟨Ai⟩:
    • if true, both premises hold, so P holds in M1 || M2;
    • if false with counterexample t, analyze t by checking ⟨t↾αA⟩ M1 ⟨P⟩: if this fails, P is violated in M1 || M2; if it holds, return t↾αA to L* so that behavior is added, and iterate.]
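A sketch of this loop in Python, assuming hypothetical helpers on top of the earlier sketches: model_check(A, M, P_err) returns (holds, counterexample trace) and accepts A = None for ⟨true⟩; word_fsm(t, α) builds an FSM accepting exactly trace t; project(t, α) restricts a trace to alphabet α; error_fsm(A) completes A with an error state so it can play the role of a property; and LStar wraps the learner with conjecture()/refine():

    def learn_assumption(m1, m2, p_err, alpha_a):
        """Automate rule ASYM: L* proposes candidates, model checking answers."""
        learner = LStar(alpha_a,
                        member=lambda s: model_check(word_fsm(s, alpha_a), m1, p_err)[0])
        while True:
            ai = learner.conjecture()
            holds, t = model_check(ai, m1, p_err)            # premise 1: ⟨Ai⟩ M1 ⟨P⟩
            if not holds:
                learner.refine(project(t, alpha_a))          # remove t|αA from Ai
                continue
            holds, t = model_check(None, m2, error_fsm(ai))  # premise 2: ⟨true⟩ M2 ⟨Ai⟩
            if holds:
                return "P holds in M1 || M2", ai
            t_a = project(t, alpha_a)
            # counterexample analysis: does t|αA really lead M1 to violate P?
            if not model_check(word_fsm(t_a, alpha_a), m1, p_err)[0]:
                return "P violated in M1 || M2", t_a         # real error
            learner.refine(t_a)                              # add t|αA to Ai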


Characteristics

  • Terminates with minimal automaton A for U

  • Generates DFA candidates Ai: |A1| < |A2| < … < |A|

  • Produces at most n candidates, where n = |A|

  • # queries: O(kn² + n·log m),

    • m is size of largest counterexample, k is size of alphabet


Example

[Diagram: Input, Output, and Ordererr as before, together with the computed assumption A2, a small FSM over actions send, out, ack.]


Extension to n components

Rule:
  1. ⟨A⟩ M1 ⟨P⟩
  2. ⟨true⟩ M2 || … || Mn ⟨A⟩
  ⟨true⟩ M1 || M2 || … || Mn ⟨P⟩

  • To check if M1 || M2 || … || Mn satisfies P

    • decompose it into M1 and M’2 = M2 || … || Mn

    • apply learning framework recursively for 2nd premise of rule

    • A plays the role of the property

  • At each recursive invocation for Mj and M’j = Mj+1 || … || Mn

    • use learning to compute Aj such that

      • ⟨Aj⟩ Mj ⟨Aj−1⟩ is true

      • ⟨true⟩ Mj+1 || … || Mn ⟨Aj⟩ is true


Symmetric Rules

  • Assumptions for both components at the same time

    • Early termination; smaller assumptions

  • Example symmetric rule – SYM:

    1. ⟨A1⟩ M1 ⟨P⟩
    2. ⟨A2⟩ M2 ⟨P⟩
    3. L(coA1 || coA2) ⊆ L(P)
    ⟨true⟩ M1 || M2 ⟨P⟩

  • coAi = complement of Ai, for i = 1, 2

  • Requirements for alphabets: αP ⊆ αM1 ∪ αM2; αAi ⊆ (αM1 ∩ αM2) ∪ αP, for i = 1, 2

  • The rule is sound and complete

  • Completeness needed to guarantee termination

  • Straightforward extension to n components
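Premise 3 reduces to the same reachability check as before; a minimal sketch, assuming a complement() helper that builds coAi plus the compose/satisfies functions from the earlier sketch:

    def premise3(a1, a2, p_err):
        """L(coA1 || coA2) ⊆ L(P): behaviors outside both assumptions still satisfy P."""
        co, _ = compose(complement(a1), complement(a2))
        return satisfies(co, p_err)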


Learning Framework for Rule SYM

[Diagram: two L* instances run in parallel, learning A1 and A2. Each conjecture Ai is checked against premise ⟨Ai⟩ Mi ⟨P⟩; a false result returns a counterexample that removes behaviors from Ai. When both premises hold, premise 3, L(coA1 || coA2) ⊆ L(P), is checked: if true, P holds in M1 || M2; otherwise, counterexample analysis either shows that P is violated in M1 || M2 or yields counterexamples that add behaviors to A1 and A2, and the loop continues.]


Circular Rule

  • Rule CIRC – from [Grumberg & Long – CONCUR’91]:

    1. ⟨A1⟩ M1 ⟨P⟩
    2. ⟨A2⟩ M2 ⟨A1⟩
    3. ⟨true⟩ M1 ⟨A2⟩
    ⟨true⟩ M1 || M2 ⟨P⟩

  • Similar to rule ASYM applied recursively to 3 components

    • First and last component coincide

    • Hence learning framework similar

  • Straightforward extension to n components


Outline

  • Framework for assume-guarantee reasoning [TACAS’03]

    • Uses learning algorithm to compute assumptions

    • Automates rule ASYM

  • Extension with symmetric [SAVCBS’03] and circular rules

  • Extension with alphabet refinement [TACAS’07]

  • Implementation and experiments

  • Other extensions

  • Related work

  • Conclusions


Assumption Alphabet Refinement

  • Assumption alphabet was fixed during learning

    • αA = (αM1 ∪ αP) ∩ αM2

  • [SPIN’06]: a subset of this alphabet

    • May be sufficient to prove the desired property

    • May lead to smaller assumption

  • How do we compute a good subset of the assumption alphabet?

  • Solution – iterative alphabet refinement

    • Start with small (empty) alphabet

    • Add actions as necessary

    • Discovered by analysis of counterexamples obtained from model checking
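A sketch of the refinement loop, under the assumption that learn_assumption (from the earlier sketch) may report spurious violations when run on a subset alphabet, and that two hypothetical helpers exist: real_violation replays the counterexample trace on the full interface alphabet, and new_actions extracts the actions on which the projections of the trace differ:

    def learn_with_refinement(m1, m2, p_err, alpha_full):
        """Iterative alphabet refinement: grow αA only as counterexamples demand."""
        alpha = set()                                   # start with the empty alphabet
        while True:
            verdict, t = learn_assumption(m1, m2, p_err, alpha)
            if verdict == "P holds in M1 || M2":
                return verdict
            if real_violation(t, m1, p_err):            # confirmed on full alphabet
                return "P violated in M1 || M2"
            alpha |= new_actions(t, alpha, alpha_full)  # refine and start over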


Implementation & Experiments

  • Implementation in the LTSA tool

    • Learning using rules ASYM, SYM and CIRC

    • Supports reasoning about two and n components

    • Alphabet refinement for all the rules

  • Experiments

    • Compare effectiveness of different rules

    • Measure effect of alphabet refinement

    • Measure scalability as compared to non-compositional verification


Case Studies

[Photos: K9 Rover, MER Rover]

  • Model of Ames K9 Rover Executive

    • Executes flexible plans for autonomy

    • Consists of main Executive thread and ExecCondChecker thread for monitoring state conditions

    • Checked for specific shared variable: if the Executive reads its value, the ExecCondChecker should not read it before the Executive clears it

  • Model of JPL MER Resource Arbiter

    • Local management of resource contention between resource consumers (e.g. science instruments, communication systems)

    • Consists of k user threads and one server thread (arbiter)

    • Checked mutual exclusion between resources


Results

  • Rule ASYM more effective than rules SYM and CIRC

  • Recursive version of ASYM the most effective

    • When reasoning about more than two components

  • Alphabet refinement improves learning-based assume-guarantee verification significantly

  • Backward refinement slightly better than other refinement heuristics

  • Learning-based assume-guarantee reasoning

    • Can incur significant time penalties

    • Not always better than non-compositional (monolithic) verification

    • Sometimes, significantly better in terms of memory


Analysis Results

[Table: comparison of ASYM, ASYM + alphabet refinement, and monolithic verification. |A| = assumption size; Mem = memory (MB); Time in seconds; "--" = reached the time limit (30 min) or the memory limit (1 GB).]


Other Extensions

  • Design-level assumptions used to check implementations in an assume-guarantee way [ICSE’04]

    • Allows for detection of integration problems during unit verification/testing

  • Extension of SPIN model checker to perform learning based assume-guarantee reasoning [SPIN’06]

    • Our approach can use any model checker

  • Similar extension for Ames Java PathFinder tool – ongoing work

    • Support compositional reasoning about Java code/UML statecharts

    • Support for interface synthesis: compute an assumption for M1 that works for any environment M2

  • Compositional verification of C code

    • Collaboration with CMU

    • Uses predicate abstraction to extract FSMs from C components

  • More info on my webpage

    • http://ase.arc.nasa.gov/people/pcorina/


Applications

  • Support for compositional verification

    • Property decomposition

    • Assumptions for assume-guarantee reasoning

  • Assumptions may be used for component documentation

  • Software patches

    • Assumption used as a “patch” that corrects a component’s errors

  • Runtime monitoring of environment

    • Assumption monitors actual environment during deployment

    • May trigger recovery actions

  • Interface synthesis

  • Component retrieval, component adaptation, sub-module construction, incremental re-verification, etc.


Related Work

  • Assume-guarantee frameworks

    • Jones 83; Pnueli 84; Clarke, Long & McMillan 89; Grumberg & Long 91; …

    • Tool support: MOCHA; Calvin (static checking of Java); …

  • We were the first to propose learning-based assume-guarantee reasoning; since then, other frameworks have been developed:

    • Alur et al. 05, 06 – Symbolic BDD implementation for NuSMV (extended with hyper-graph partitioning for model decomposition)

    • Sharygina et al. 05 – Checks component compatibility after component updates

    • Chaki et al. 05 – Checking of simulation conformance (rather than trace inclusion)

    • Sinha & Clarke 07 – SAT based compositional verification using lazy learning

  • Interface synthesis using learning: Alur et al. 05

  • Learning with optimal alphabet refinement

    • Developed independently by Chaki & Strichman 07

  • CEGAR – counterexample guided abstraction refinement

    • Our alphabet refinement is similar in spirit

    • Important differences:

      • Alphabet refinement works on actions, rather than predicates

      • Applied compositionally in an assume-guarantee style

      • Computes under-approximations (of assumptions) rather than behavioral over-approximations

  • Permissive interfaces – Henzinger et al. 05

    • Uses CEGAR to compute interfaces


Conclusion and Future Work

Learning-based assume-guarantee reasoning

  • Uses L* for automatic derivation of assumptions

  • Applies to FSMs and safety properties

  • Asymmetric, symmetric, and circular rules

    • Can accommodate other rules

  • Alphabet refinement to compute small assumption alphabets that are sufficient for verification

  • Experiments

    • Significant memory gains

    • Can incur serious time overhead

  • Should be viewed as a heuristic

    • To be used in conjunction with other techniques, e.g. abstraction

      Future work

  • Look beyond safety (learning for infinitary regular sets)

  • Optimizations to overcome time overhead

    • Re-use learning results across refinement stages

  • CEGAR to compute assumptions as abstractions of environments

  • More experiments

