Monte Carlo Model Checking Radu Grosu SUNY at Stony Brook

Presentation Transcript


  1. Monte Carlo Model Checking. Radu Grosu, SUNY at Stony Brook. Joint work with Scott A. Smolka.

  2. Model Checking: Is system S a model of formula φ?

  3. Model Checking
  • S is a nondeterministic/concurrent system.
  • φ is a temporal logic formula,
  • in our case Linear Temporal Logic (LTL).

  4. LTL Model Checking
  • Every LTL formula φ can be translated into a Büchi automaton B_φ such that L(φ) = L(B_φ).
  • Automata-theoretic approach:
  • S ⊨ φ iff L(B_S) ⊆ L(B_φ) iff L(B_S × B_¬φ) = ∅.
  • Checking non-emptiness is equivalent to finding a reachable accepting cycle (lasso).

  5. Checking Non-Emptiness
  (figure: computation tree (CT) of lassos; depth bounded by the recurrence diameter)
  • Explore all lassos in the CT.
  • DDFS, SCC: time efficient. DFS: memory efficient.

  6. Randomized Algorithms
  • Huge impact on CS: (distributed) algorithms, complexity theory, cryptography, etc.
  • The next step the algorithm takes may depend on a random choice (coin flip).
  • Benefits of randomization include simplicity, efficiency, and symmetry breaking.

  7. Randomized Algorithms
  • Monte Carlo: may produce an incorrect result, but with bounded error probability.
  • Example: predicting an election's result from a sample.
  • Las Vegas: always gives the correct result, but the running time is a random variable.
  • Example: Randomized Quick Sort.

  8. Monte Carlo Approach
  (figure: computation tree (CT) of lassos; depth bounded by the recurrence diameter; at each state, flip a k-sided coin)
  • Explore N(ε, δ) independent lassos in the CT.
  • Error margin ε and confidence ratio δ.

  9. Lassos Probability Space
  • Sample space: the lassos in B_S × B_¬φ.
  • Bernoulli random variable Z:
  • outcome = 1 if the randomly chosen lasso is accepting,
  • outcome = 0 otherwise.
  • p_Z = ∑ p_i Z_i (expectation of an accepting lasso), where p_i is the probability of lasso i under a uniform random walk.

  10. Example: Lassos Probability Space
  (figure: example automaton with states 1–4; lasso probabilities ½, ¼, ⅛, ⅛ under a uniform random walk; p_Z = 1/8, q_Z = 7/8)

  11. Geometric Random Variable
  • Value of geometric RV X with parameter p_Z: the number of independent lassos sampled until the first success.
  • Probability mass function: p(N) = P[X = N] = q_Z^(N-1) p_Z
  • Cumulative distribution function: F(N) = P[X ≤ N] = ∑_{i ≤ N} p(i) = 1 - q_Z^N (checked numerically below).
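A quick numerical check of the two formulas above (my own addition; p_Z = 0.125 is an arbitrary example value, not one from the slides):

```python
# Sanity check: the geometric CDF 1 - q_Z**N should equal the summed pmf.
p_Z = 0.125          # hypothetical success probability
q_Z = 1.0 - p_Z

def pmf(i: int) -> float:
    """p(i) = q_Z^(i-1) * p_Z: first success on the i-th trial."""
    return q_Z ** (i - 1) * p_Z

for N in (1, 5, 20):
    summed = sum(pmf(i) for i in range(1, N + 1))
    closed = 1.0 - q_Z ** N
    assert abs(summed - closed) < 1e-12
    print(N, summed, closed)
```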

  12. How Many Lassos?
  • Requiring P[X ≤ N] = 1 - δ yields: N = ln(δ) / ln(1 - p_Z).
  • Lower bound on the number of trials N needed to achieve success with confidence ratio δ.

  13. What If p_Z Is Unknown?
  • Requiring p_Z ≥ ε yields: M = ln(δ) / ln(1 - ε) ≥ N = ln(δ) / ln(1 - p_Z), and therefore P[X ≤ M] ≥ 1 - δ.
  • Lower bound on the number of trials M needed to achieve success with confidence ratio δ and error margin ε (see the worked numbers below).
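For concreteness, here is the sample-size computation with the parameters used in the dining-philosophers experiments later in the talk (δ = 10^-1, ε = 1.8×10^-3). I truncate rather than round up, which reproduces the slides' N = 1278; a ceiling would also be defensible:

```python
from math import log

def num_lassos(eps: float, delta: float) -> int:
    """M = ln(delta) / ln(1 - eps): number of random lassos to sample."""
    return int(log(delta) / log(1.0 - eps))   # truncation; rounding conventions may differ

print(num_lassos(1.8e-3, 1e-1))   # 1278, matching the DPh slides
```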

  14. Statistical Hypothesis Testing
  • Null hypothesis H0: p_Z ≥ ε.
  • Alternative hypothesis H1: p_Z < ε.
  • If there is no success after M trials, then reject H0.
  • Type I error: α = P[X > M | H0] < δ,
  • since P[X ≤ M | H0] ≥ 1 - δ.

  15. Monte Carlo Model Checking (MC2)
  input: B = (Σ, Q, Q0, δ, F), ε, δ
  N = ln(δ) / ln(1 - ε)
  for (i = 1; i ≤ N; i++)
    if (RL(B) == 1) return (1, error-trace);
  return (0, “reject H0 with α = Pr[X > N | H0] < δ”);
  where RL(B) performs a uniform random walk through B to obtain a random lasso (a runnable sketch follows below).
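A minimal executable sketch of this loop (my reading of the slide, not the authors' jMocha implementation). Here `initial_states`, `successors`, and `accepting` are assumed interfaces to the product automaton B = B_S × B_¬φ:

```python
import random
from math import log

def random_lasso(initial_states, successors, accepting):
    """Uniform random walk from a random initial state until a state repeats;
    report 1 if the cycle of the resulting lasso contains an accepting state."""
    path = [random.choice(initial_states)]
    index = {path[0]: 0}
    while True:
        succs = successors(path[-1])
        if not succs:                     # dead end: no lasso through here, count as failure
            return 0, path
        nxt = random.choice(succs)        # the k-sided coin flip of the slides
        if nxt in index:                  # the walk closed a lasso
            cycle = path[index[nxt]:]
            accepted = 1 if any(accepting(s) for s in cycle) else 0
            return accepted, path + [nxt]
        index[nxt] = len(path)
        path.append(nxt)

def mc2(initial_states, successors, accepting, eps, delta):
    """Sample N = ln(delta)/ln(1-eps) random lassos of B = B_S x B_~phi."""
    n = int(log(delta) / log(1.0 - eps))
    for _ in range(n):
        accepted, lasso = random_lasso(initial_states, successors, accepting)
        if accepted:
            return 1, lasso               # counterexample: an accepting lasso (error trace)
    return 0, "reject H0 with alpha = Pr[X > %d | H0] < %g" % (n, delta)
```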

  16. Correctness of MC2
  Theorem: Given a Büchi automaton B, error margin ε, and confidence ratio δ, if MC2 rejects H0, then its type I error has probability α = P[X > M | H0] < δ.

  17. Complexity of MC2
  Theorem: Given a Büchi automaton B with diameter D, error margin ε, and confidence ratio δ, MC2 runs in time O(N·D) and uses space O(D), where N = ln(δ) / ln(1 - ε).
  Cf. DDFS, which runs in O(2^(|S|+|φ|)) time for B = B_S × B_¬φ.

  18. Implementation
  • Implemented DDFS and MC2 in the jMocha model checker for synchronous systems specified using Reactive Modules.
  • The performance and scalability of MC2 compare very favorably to DDFS.

  19. DPh: Symmetric Unfair Version (Deadlock freedom)

  20. DPh: Symmetric Unfair Version (Starvation freedom)

  21. DPh: Asymmetric Fair Version (Deadlock freedom)
  (chart; δ = 10^-1, ε = 1.8×10^-3, N = 1278)

  22. DPh: Asymmetric Fair Version (Starvation freedom)
  (chart; δ = 10^-1, ε = 1.8×10^-3, N = 1278)

  23. Related Work
  • Random walk testing:
    • Heimdahl et al.: Lurch debugger.
  • Random walks to sample the system state space:
    • Mihail & Papadimitriou (and others).
  • Monte Carlo model checking of Markov chains:
    • Herault et al.: LTL-RP, bounded MC, zero/one ET.
    • Younes et al.: time-bounded CSL, sequential analysis.
    • Sen et al.: time-bounded CSL, zero/one ET.
  • Probabilistic model checking of Markov chains:
    • ETMCC, PRISM, PIOAtool, and others.

  24. Conclusions
  • MC2 is the first randomized, Monte Carlo algorithm for the classical problem of temporal-logic model checking.
  • Future work: use BDDs to improve running time. Also, take samples in parallel!
  • Open problem: branching-time temporal logic (e.g. CTL, modal mu-calculus).

  25. Talk Outline
  • Model Checking
  • Randomized Algorithms
  • LTL Model Checking
  • Probability Theory Primer
  • Monte Carlo Model Checking
  • Implementation & Results
  • Conclusions & Open Problem

  26. Model Checking
  • S is a nondeterministic/concurrent system.
  • φ is a temporal logic formula,
  • in our case Linear Temporal Logic (LTL).
  • Basic idea: intelligently explore S's state space in an attempt to establish S ⊨ φ.

  27. Linear Temporal Logic
  • An LTL formula is made up inductively of:
  • atomic propositions p, boolean connectives ¬, ∧, ∨,
  • temporal modalities X (neXt) and U (Until).
  • Safety: “nothing bad ever happens”
  • e.g. G ¬(pc1=cs ∧ pc2=cs), where G is a derived modality (Globally).
  • Liveness: “something good eventually happens”
  • e.g. G(req → F serviced), where F is a derived modality (Finally).

  28. Emptiness Checking
  (figure: a lasso s1 s2 … sk sk+1 … sn, with DFS1 exploring the stem and DFS2 the cycle)
  • Checking non-emptiness is equivalent to finding an accepting cycle reachable from an initial state (a lasso).
  • The double depth-first search (DDFS) algorithm can be used to search for such cycles, and this can be done on-the-fly! (A sketch follows below.)
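For comparison with the randomized approach, a compact sketch of the nested depth-first search idea behind DDFS (a generic textbook formulation, not the slides' figure). Again, `initial_states`, `successors`, and `accepting` are assumed interfaces to the automaton:

```python
def ddfs(initial_states, successors, accepting):
    """Return True iff an accepting cycle is reachable from some initial state.
    Classic nested DFS: the inner search is launched in post-order of the outer one.
    Recursive for clarity; an explicit stack avoids recursion limits on large automata."""
    visited_outer, visited_inner = set(), set()

    def inner(start, state):
        # Look for a path back to `start`, i.e. an accepting cycle through it.
        for succ in successors(state):
            if succ == start:
                return True
            if succ not in visited_inner:
                visited_inner.add(succ)
                if inner(start, succ):
                    return True
        return False

    def outer(state):
        visited_outer.add(state)
        for succ in successors(state):
            if succ not in visited_outer and outer(succ):
                return True
        # Post-order: start the inner search when backtracking from an accepting state.
        return accepting(state) and inner(state, state)

    return any(s0 not in visited_outer and outer(s0) for s0 in initial_states)
```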

  29. Bernoulli Random Variable (coin flip)
  • Value of Bernoulli RV Z: Z = 1 (success) and Z = 0 (failure).
  • Probability mass function: p(1) = Pr[Z=1] = p_Z, p(0) = Pr[Z=0] = 1 - p_Z = q_Z.
  • Expectation: E[Z] = p_Z.

  30. Statistical Hypothesis Testing
  • Example: given a fair coin and a biased coin.
  • Null hypothesis H0: the fair coin was selected.
  • Alternative hypothesis H1: the biased coin was selected.
  • Hypothesis testing: perform N trials.
  • If the number of heads is LOW, reject H0 (a worked example follows below).
  • Else, fail to reject H0.
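To make “LOW” concrete, a small worked example with numbers of my own choosing: under H0 the number of heads in N fair flips is Binomial(N, 1/2), so rejecting whenever at most k heads appear has type I error equal to the lower binomial tail:

```python
from math import comb

def type_i_error(N: int, k: int) -> float:
    """P[#heads <= k | fair coin]: probability of wrongly rejecting H0."""
    return sum(comb(N, i) for i in range(k + 1)) / 2 ** N

# Hypothetical threshold: with 100 flips, rejecting H0 when at most 40 heads
# are observed has type I error of about 2.8%.
print(type_i_error(100, 40))
```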

  31. Statistical Hypothesis Testing

  32. Random Lasso (RL) Algorithm

  33. Randomized Algorithms
  • Huge impact on CS: (distributed) algorithms, complexity theory, cryptography, etc.
  • The next step the algorithm takes may depend on a random choice (coin flip).
  • Benefits of randomization include simplicity, efficiency, and symmetry breaking.

  34. Randomized Algorithms
  • Monte Carlo: may produce an incorrect result, but with bounded error probability.
  • Example: Rabin's primality testing (a sketch follows below).
  • Las Vegas: always gives the correct result, but the running time is a random variable.
  • Example: Randomized Quick Sort.
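For reference, a minimal sketch of Rabin's randomized primality test (the Miller-Rabin test), a textbook Monte Carlo algorithm; the `trials` parameter bounds the error probability at 4^-trials:

```python
import random

def is_probably_prime(n: int, trials: int = 20) -> bool:
    """If False, n is certainly composite; if True, n is prime except with
    probability at most 4**(-trials)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0                  # write n - 1 = 2^r * d with d odd
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a witnesses that n is composite
    return True                      # probably prime

print(is_probably_prime(2**61 - 1))  # True: a known Mersenne prime
```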

  35. Lassos Probability Space
  (figure: example automaton with states 1–4)
  • Lassos: L1 = 11, L2 = 1244, L3 = 1231, L4 = 12344.
  • Pr[L1] = ½, Pr[L2] = ¼, Pr[L3] = ⅛, Pr[L4] = ⅛.
  • q_Z = Pr[L1] + Pr[L2] = ¾
  • p_Z = Pr[L3] + Pr[L4] = ¼ (recomputed in the sketch below)
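The slide's numbers can be reproduced directly; the successor sets in the second half are my reconstruction from the four listed lassos, so treat them as an assumption:

```python
from fractions import Fraction as F

# Lasso probabilities as listed on the slide (uniform random walk).
lassos = {"11": F(1, 2), "1244": F(1, 4), "1231": F(1, 8), "12344": F(1, 8)}
accepting = {"1231", "12344"}            # L3 and L4 are the accepting lassos

p_Z = sum(p for l, p in lassos.items() if l in accepting)        # 1/4
q_Z = sum(p for l, p in lassos.items() if l not in accepting)    # 3/4
print(p_Z, q_Z)                          # 1/4 3/4

# Sanity check: the listed probabilities follow from a uniform walk over these
# successor sets (inferred from the four lassos; an assumption on my part).
succ = {"1": ["1", "2"], "2": ["3", "4"], "3": ["1", "4"], "4": ["4"]}

def walk_prob(lasso: str) -> F:
    """Product of 1/out-degree along the lasso's edges."""
    p = F(1)
    for a, b in zip(lasso, lasso[1:]):
        assert b in succ[a]
        p /= len(succ[a])
    return p

assert all(walk_prob(l) == p for l, p in lassos.items())
```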

  36. Alternative Sampling Strategies
  (figure: a chain automaton with states 0, 1, …, n-1, n; Pr[Ln] = O(2^-n))
  • Multilasso sampling: ignore back-edges that do not lead to an accepting lasso.
  • Probabilistic systems: there is a natural way to assign a probability to a random lasso (RL).
  • Input partitioning: partition the input into classes that trigger the same behavior (guards).
