
Monte Carlo Model Checking Radu Grosu SUNY at Stony Brook






Presentation Transcript


  1. Monte Carlo Model Checking. Radu Grosu, SUNY at Stony Brook. Joint work with Scott A. Smolka.

  2. Talk Outline • Model Checking • Randomized Algorithms • LTL Model Checking • Probability Theory Primer • Monte Carlo Model Checking • Implementation & Results • Conclusions & Open Problem

  3. Model Checking: Is system S a model of formula φ?

  4. Model Checking • S is a nondeterministic/concurrent system. • φ is a temporal logic formula, in our case Linear Temporal Logic (LTL). • Basic idea: intelligently explore S's state space in an attempt to establish S ⊨ φ.

  5. Model Checking's Fly in the Ointment: State Explosion • Size of S's state transition graph is O(2^|S|)! • Mitigation techniques: Symbolic MC (OBDDs), Symmetry Reduction, Partial Order Reduction, Abstraction Refinement, Bounded Model Checking. (Figure: computation tree and its diameter.)

  6. Monte Carlo Approach • LTL Monte Carlo: N(ε,δ) independent samples. • Error margin ε and confidence ratio δ. (Figure: computation tree and its recurrence diameter.)

  7. Randomized Algorithms • Huge impact on CS: (distributed) algorithms, complexity theory, cryptography, etc. • The next step the algorithm takes may depend on a random choice (coin flip). • Benefits of randomization include simplicity, efficiency, and symmetry breaking.

  8. Randomized Algorithms • Monte Carlo: may produce an incorrect result, but with bounded error probability. • Example: Rabin's primality testing algorithm. • Las Vegas: always gives the correct result, but its running time is a random variable. • Example: Randomized Quick Sort.

  9. Linear Temporal Logic • An LTL formula is made up of atomic propositions p, boolean connectives ¬, ∧, ∨, and temporal modalities X (neXt) and U (Until). • Safety: "nothing bad ever happens". E.g. G ¬(pc1=cs ∧ pc2=cs), where G is a derived modality (Globally). • Liveness: "something good eventually happens". E.g. G(req → F serviced), where F is a derived modality (Finally).

  10. LTL Model Checking • Every LTL formula φ can be translated to a Büchi automaton Bφ whose language is the set of infinite words satisfying φ. • Automata-theoretic approach: S ⊨ φ iff L(BS) ⊆ L(Bφ) iff L(BS × B¬φ) = ∅.

  11. Emptiness Checking • Checking non-emptiness is equivalent to finding an accepting cycle reachable from an initial state (a lasso). • The Double Depth-First Search (DDFS) algorithm can be used to search for such cycles, and this can be done on-the-fly! (Figure: lasso s1 s2 s3 … sk-1 sk sk+1 … sn, explored by DFS1 and DFS2.)
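The DDFS idea can be sketched in a few lines of Python. This is a minimal explicit-state illustration, not the actual implementation from the talk: the successor map `succ`, the `initial` and `accepting` sets, and the helper names are assumptions. DFS1 explores the graph; in post-order, DFS2 is launched from each accepting state to look for a cycle back to it.

```python
def ddfs(initial, succ, accepting):
    """Double depth-first search (sketch): returns True iff an
    accepting cycle (lasso) is reachable from an initial state,
    i.e. the automaton's language is non-empty."""
    visited1, visited2 = set(), set()

    def dfs2(state, seed):
        # Second DFS: search for a path closing a cycle back to `seed`.
        for nxt in succ.get(state, []):
            if nxt == seed:
                return True
            if nxt not in visited2:
                visited2.add(nxt)
                if dfs2(nxt, seed):
                    return True
        return False

    def dfs1(state):
        visited1.add(state)
        for nxt in succ.get(state, []):
            if nxt not in visited1 and dfs1(nxt):
                return True
        # post-order: launch the second DFS from accepting states
        return state in accepting and dfs2(state, state)

    return any(dfs1(s) for s in initial if s not in visited1)
```

Sharing `visited2` across all DFS2 launches is what keeps the whole search linear in the size of the graph; the post-order launch order makes this sharing sound.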

  12. Bernoulli Random Variable (coin flip) • Value of Bernoulli RV Z: Z = 1 (success) and Z = 0 (failure). • Probability mass function: p(1) = Pr[Z=1] = pz, p(0) = Pr[Z=0] = 1 − pz = qz. • Expectation: E[Z] = pz.

  13. Geometric Random Variable • Value of geometric RV X with parameter pz: number of independent trials until the first success. • Probability mass function: p(N) = Pr[X = N] = qz^(N−1) pz. • Cumulative distribution function: F(N) = Pr[X ≤ N] = ∑i≤N p(i) = 1 − qz^N.
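A quick numerical check that the closed-form CDF agrees with summing the pmf term by term (a sketch; `geom_pmf` and `geom_cdf` are hypothetical helper names, not from the talk):

```python
from math import isclose

def geom_pmf(n, p):
    """Pr[X = n]: first success on trial n (n >= 1)."""
    return (1 - p) ** (n - 1) * p

def geom_cdf(n, p):
    """Pr[X <= n] = 1 - q^n, in closed form."""
    return 1 - (1 - p) ** n

p = 0.3
# The geometric series sum collapses to the closed form:
for n in range(1, 20):
    assert isclose(sum(geom_pmf(i, p) for i in range(1, n + 1)),
                   geom_cdf(n, p))
```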

  14. How Many Trials? • Requiring Pr[X ≤ N] ≥ 1 − δ yields: N ≥ ln(δ) / ln(1 − pz). • Lower bound on the number of trials N needed to achieve success with confidence ratio δ.

  15. What If pz Is Unknown? • Requiring Pr[X ≤ N] ≥ 1 − δ and pz ≥ ε yields: N ≥ ln(δ) / ln(1 − ε) ≥ ln(δ) / ln(1 − pz). • Lower bound on the number of trials N needed to achieve success with confidence ratio δ and error margin ε.
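The bound translates directly into a sample-size computation (a sketch; `num_trials` is a hypothetical helper name). Rounding up gives the smallest N with (1 − ε)^N ≤ δ:

```python
import math

def num_trials(eps, delta):
    """Smallest N satisfying N >= ln(delta) / ln(1 - eps),
    i.e. the smallest N with (1 - eps)**N <= delta."""
    return math.ceil(math.log(delta) / math.log(1 - eps))

n = num_trials(0.01, 0.1)        # error margin 1%, confidence ratio 0.1
assert (1 - 0.01) ** n <= 0.1    # N trials suffice...
assert (1 - 0.01) ** (n - 1) > 0.1   # ...and N is minimal
```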

  16. Statistical Hypothesis Testing • Example: given a fair and a biased coin. • Null hypothesis H0: the fair coin was selected. • Alternative hypothesis H1: the biased coin was selected. • Hypothesis testing: perform N trials. • If the number of heads is LOW, reject H0. • Else, fail to reject H0.

  17. Statistical Hypothesis Testing

  18. Hypothesis Testing – Our Case • Null hypothesis H0: pz ≥ ε. • Alternative hypothesis H1: pz < ε. • If no success after N trials, then reject H0. • Type I error: α = Pr[X > N | H0] ≤ δ.

  19. Monte Carlo Model Checking • Sample space: lassos in BS × B¬φ. • Bernoulli random variable Z: outcome = 1 if the randomly chosen lasso is accepting, outcome = 0 otherwise. • pZ = ∑ pi Zi (the probability of an accepting lasso, i.e. the expectation of Z), where pi is the lasso's probability under a uniform random walk.

  20. Lassos Probability Space (figure: four-state automaton with states 1–4) • L1 = 11, L2 = 1244, L3 = 1231, L4 = 12344. • Pr[L1] = ½, Pr[L2] = ¼, Pr[L3] = ⅛, Pr[L4] = ⅛. • qZ = Pr[L1] + Pr[L2] = ¾, pZ = Pr[L3] + Pr[L4] = ¼.
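The probabilities on this slide can be reproduced by exhaustively enumerating lassos. The transition structure 1→{1,2}, 2→{3,4}, 3→{1,4}, 4→{4} is a reconstruction inferred from the four listed lassos (the figure itself is lost), so treat it as an assumption:

```python
from fractions import Fraction

# Assumed transition structure, inferred from lassos 11, 1244, 1231, 12344.
succ = {1: [1, 2], 2: [3, 4], 3: [1, 4], 4: [4]}

def lassos(path, prob):
    """Enumerate all lassos (walks until the first repeated state)
    with their uniform-random-walk probabilities."""
    state = path[-1]
    if path.count(state) == 2:      # state revisited: the lasso closes
        yield "".join(map(str, path)), prob
        return
    for nxt in succ[state]:
        yield from lassos(path + [nxt], prob / len(succ[state]))

table = dict(lassos([1], Fraction(1)))
# table == {'11': Fraction(1, 2), '1244': Fraction(1, 4),
#           '1231': Fraction(1, 8), '12344': Fraction(1, 8)}
```

With L3 and L4 as the accepting lassos, pZ = ⅛ + ⅛ = ¼, matching the slide.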

  21. Monte Carlo Model Checking (MC2) input: B = (Σ, Q, Q0, δ, F), ε, δ N = ln(δ) / ln(1 − ε); for (i = 1; i ≤ N; i++) if (RL(B) == 1) return (1, error-trace); return (0, "reject H0 with α = Pr[X > N | H0] < δ"); where RL(B) performs a uniform random walk through B (storing the states encountered in a hash table) to obtain a random sample (lasso).
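A compact Python rendering of MC2 and RL over an explicit successor map (an illustrative sketch, not the jMocha implementation; the state representation and the `seed` parameter are assumptions):

```python
import math, random

def random_lasso(initial, succ, accepting, rng):
    """RL: uniform random walk from a random initial state until a
    state repeats; returns (1, trace) if the closing cycle contains
    an accepting state (an accepting lasso), else (0, trace)."""
    path, seen = [rng.choice(initial)], {}
    while path[-1] not in seen:
        seen[path[-1]] = len(path) - 1          # hash table of visited states
        path.append(rng.choice(succ[path[-1]]))
    cycle = path[seen[path[-1]]:]               # the lasso's cycle portion
    return (1 if any(s in accepting for s in cycle) else 0), path

def mc2(initial, succ, accepting, eps, delta, seed=0):
    """MC2: draw N = ln(delta)/ln(1-eps) random lassos."""
    rng = random.Random(seed)
    n = math.ceil(math.log(delta) / math.log(1 - eps))
    for _ in range(n):
        found, trace = random_lasso(initial, succ, accepting, rng)
        if found:
            return 1, trace     # counter-example lasso (error trace)
    return 0, None              # reject H0 with Type I error below delta
```

Run on the product automaton B = BS × B¬φ, an accepting lasso is a counter-example to S ⊨ φ, so the returned trace doubles as an error trace.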

  22. Random Lasso (RL) Algorithm

  23. Monte Carlo Model Checking Theorem: Given a Büchi automaton B, error margin ε, and confidence ratio δ, if MC2 fails to find a counter-example, then Pr[X > N | H0] ≤ δ, where N = ln(δ) / ln(1 − ε).

  24. Monte Carlo Model Checking Theorem: Given a Büchi automaton B having diameter D, error margin ε, and confidence ratio δ, MC2 runs in time O(N·D) and uses space O(D), where N = ln(δ) / ln(1 − ε). Cf. DDFS, which runs in O(2^(|S|+|φ|)) time for B = BS × B¬φ.

  25. Implementation • Implemented DDFS and MC2 in the jMocha model checker for synchronous systems specified using Reactive Modules. • The performance and scalability of MC2 compare very favorably to DDFS's.

  26. DPh: Symmetric Unfair Version (Deadlock freedom)

  27. DPh: Symmetric Unfair Version (Starvation freedom)

  28. DPh: Asymmetric Fair Version (Deadlock freedom) δ = 10^-1, ε = 1.8·10^-4, N = 1257.

  29. DPh: Asymmetric Fair Version (Starvation freedom) δ = 10^-1, ε = 1.8·10^-4, N = 1257.

  30. Alternative Sampling Strategies (figure: chain of states 0, 1, …, n−1, n) • Multilasso sampling: ignores back-edges that do not lead to an accepting lasso; Pr[Ln] = O(2^-n). • Probabilistic systems: there is a natural way to assign a probability to a RL. • Input partitioning: partition the input into classes that trigger the same behavior (guards).

  31. Related Work • Heimdahl et al.’s Lurch debugger. • Mihail & Papadimitriou (and others) use random walks to sample system state space. • Herault et al. use bounded model checking to compute an (ε,δ)-approx. for “positive LTL”. • Probabilistic Model Checking of Markov Chains: ETMCC, PRISM, PIOAtool, and others.

  32. Conclusions • MC2 is the first randomized, Monte Carlo algorithm for the classical problem of temporal-logic model checking. • Future work: use BDDs to improve running time; also, take samples in parallel! • Open problem: branching-time temporal logic (e.g. CTL, modal mu-calculus).
