
An Integrated Sequential Verification Flow


Presentation Transcript


  1. An Integrated Sequential Verification Flow Berkeley Logic Synthesis and Verification Group Presented by Alan Mishchenko

  2. Overview • Sequential verification • Integrated verification flow • Experimental results • Ongoing and future work

  3. Sequential Verification • Motivation • Verifying equivalence after synthesis (equivalence checking) • Checking specific sequential properties (model checking) • Design analysis and estimation • Our research philosophy • Developing scalable solutions aimed at industrial problems • Exploiting synergy between synthesis and verification • Experimenting with new research ideas • Producing public implementations

  4. Verification Problems and Solutions • Taxonomy of verification • Property and equivalence checking • Combinational and sequential verification • Satisfiable and unsatisfiable problems • Single-solver and multi-solver approach • Taxonomy of solvers/engines • Bug-hunters, provers, simplifiers, multi-purpose • Simulation, BDD-, AIG-, SAT-based, hybrid, etc. • Fast/slow, weak/strong, etc.

  5. Property / Equivalence Checking • Property checking • Takes a design and a property and makes a miter • Equivalence checking • Takes two designs and makes a miter • The goal is to prove that the output of the miter is always 0 (Figure: a property-checking miter built from design D1 and property p, and an equivalence-checking miter built from designs D1 and D2, each with its output constrained to 0.)
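
To make the miter idea concrete, here is a toy sketch (not from the slides): corresponding outputs of the two designs are XOR-ed and the differences OR-ed into a single output, so proving equivalence means proving that output is constant 0. The functions below are invented for illustration; real tools build the miter structurally over AIGs and discharge it with SAT rather than by enumeration.

```python
# Toy illustration of a miter for equivalence checking: two designs are
# treated as functions from an input vector to a tuple of output bits;
# the miter output is 1 exactly when some pair of outputs differs.
# (Illustrative only; real tools build the miter structurally over AIGs.)
from itertools import product

def make_miter(design1, design2):
    def miter(inputs):
        outs1, outs2 = design1(inputs), design2(inputs)
        # XOR corresponding outputs, then OR the differences together
        return int(any(o1 != o2 for o1, o2 in zip(outs1, outs2)))
    return miter

def exhaustive_check(miter, num_inputs):
    # Proving equivalence means showing the miter is constant 0;
    # here by brute force, in practice by SAT/induction/interpolation.
    return all(miter(bits) == 0 for bits in product((0, 1), repeat=num_inputs))

# Example: two equivalent ways of computing (a AND b, a OR b)
d1 = lambda x: (x[0] & x[1], x[0] | x[1])
d2 = lambda x: (x[0] * x[1], x[0] ^ x[1] ^ (x[0] & x[1]))
assert exhaustive_check(make_miter(d1, d2), num_inputs=2)
```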

  6. Comb / Seq Verification • Combinational miter • Primary inputs • Primary output(s) • Logic gates (no FFs) • Sequential miter • Primary inputs • Primary output(s) • Logic gates • FFs with initial state • Logic is represented using AIGs (Figure: a block of comb logic and a block of seq logic, each with inputs x and an output y that must be proved equal to 0.)

  7. Verification Engines • Bug-hunters • random simulation • bounded model checking (BMC) • hybrids of the above two (“semi-formal”) • Provers • K-step induction, with or without uniqueness constraints • Interpolation (over-approximate reachability) • BDDs (exact reachability) • Transformers • Combinational synthesis • Retiming • etc.

  8. Design Representations for Verification • Decision diagrams • Conjunctive normal form • Logic networks (circuits) • And-Inverter Graphs

  9. And-Inverter Graphs (AIGs) • An AIG is a Boolean network composed of two-input ANDs and inverters • F(a,b,c,d) = ab + d(ac’+bc): 6 nodes, 4 levels • F(a,b,c,d) = ac’(b’d’)’ + bc(a’d’)’ = ac’(b+d) + bc(a+d): 7 nodes, 3 levels (Figure: the two AIG structures over inputs a, b, c, d corresponding to these factored forms.)
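
As a quick check (not on the original slide), expanding both factored forms shows they compute the same function:

```latex
\begin{align*}
ab + d(a\bar{c} + bc) &= ab + a\bar{c}d + bcd, \\
a\bar{c}(b+d) + bc(a+d) &= a\bar{c}b + a\bar{c}d + abc + bcd \\
 &= ab(\bar{c} + c) + a\bar{c}d + bcd \\
 &= ab + a\bar{c}d + bcd .
\end{align*}
```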

  10. Efficient Implementation of AIGs • Structural hashing • Makes sure AIG is always stored in a compact form • Is applied during AIG construction • Ensures each node is structurally unique • Can be combined with constant propagation • Complemented edges • Represents inverters as attributes on the edges • Leads to fast, uniform manipulation • Does not use memory for inverters • Leads to efficient structural hashing • Memory allocation • Uses fixed amount of memory for each node • Can be done by a simple custom memory manager • Even dynamic fanout manipulation is supported! • Allocates memory for nodes in a topological order • Optimized for traversal in the same topological order • Small static memory footprint for many applications (Figure: the same logic over inputs a, b, c, d built without and with structural hashing.)
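
The points about structural hashing and complemented edges can be illustrated with a small sketch (illustrative Python, not ABC's C implementation; the class and method names are invented). Literals carry the inverter as their low bit, and a hash table keyed by the ordered fanin pair guarantees each AND node is structurally unique.

```python
# Minimal sketch of AIG construction with structural hashing and
# complemented edges (illustrative only; not ABC's actual data structures).
# A literal is 2*node_id + phase, so the low bit encodes an inverter.

class Aig:
    TRUE = 1   # constant-1 literal (node 0 complemented)
    FALSE = 0  # constant-0 literal

    def __init__(self):
        self.nodes = [(None, None)]        # node 0 is the constant node
        self.strash = {}                   # (fanin0, fanin1) -> AND literal

    def new_input(self):
        self.nodes.append(("PI", None))
        return 2 * (len(self.nodes) - 1)

    @staticmethod
    def neg(lit):
        return lit ^ 1                     # inverters are edge attributes

    def and2(self, a, b):
        # constant propagation
        if a == Aig.FALSE or b == Aig.FALSE: return Aig.FALSE
        if a == Aig.TRUE:  return b
        if b == Aig.TRUE:  return a
        if a == b:         return a
        if a == Aig.neg(b): return Aig.FALSE
        key = (min(a, b), max(a, b))       # canonical fanin order
        if key in self.strash:             # structural hashing:
            return self.strash[key]        # reuse an identical AND node
        self.nodes.append(key)
        lit = 2 * (len(self.nodes) - 1)
        self.strash[key] = lit
        return lit
```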

  11. Why AIGs in Verification? • Easy to construct, relatively compact, robust • 1M AIG nodes ≈ 40 MB of RAM • Can be efficiently stored on disk • AIGER: 3-4 bytes / AIG node (1M AIG nodes ≈ 4 MB file) • Unifying representation for different engines • Used by bug-hunters, provers, transformers • Easy to pass around between the engines • Compatible with latest SAT solvers • Efficient AIG-to-CNF conversion available • AIGs + simulation + SAT recently replaced BDDs in most (but not all) applications
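
For the AIG-to-CNF point, the standard Tseitin-style conversion emits three clauses per AND node; the sketch below (again illustrative, building on the toy Aig class above) shows the idea.

```python
# Standard Tseitin-style AIG-to-CNF conversion (a sketch building on the
# toy Aig class above; variable numbering conventions differ between tools).
# For an AND node n = a & b the clauses are: (~n | a), (~n | b), (n | ~a | ~b).

def aig_to_cnf(aig, output_lit):
    def var(lit):                       # 1-based CNF variable of an AIG literal
        return lit // 2 + 1
    def cnf_lit(lit):                   # signed CNF literal (low bit = inverter)
        return -var(lit) if lit & 1 else var(lit)

    clauses = [[cnf_lit(Aig.TRUE)]]     # fix the constant node (node 0) to 0
    for node_id, fanins in enumerate(aig.nodes):
        if node_id == 0 or fanins[0] == "PI":
            continue                    # constants and inputs need no clauses
        n, (a, b) = 2 * node_id, fanins
        clauses.append([-var(n), cnf_lit(a)])
        clauses.append([-var(n), cnf_lit(b)])
        clauses.append([var(n), -cnf_lit(a), -cnf_lit(b)])
    clauses.append([cnf_lit(output_lit)])    # assert the (miter) output literal
    return clauses
```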

  12. Integrated Verification Flow • Preprocessing • Handling combinational problems • Starting with faster engines • Continuing with slower engines • Main induction loop • Last-gasp engines

  13. Command “dprove” in ABC • Preprocessors • transforming initial state (“undc”, “zero”) • converting into an AIG (“strash”) • creating sequential miter (“miter -c”) • combinational equivalence checking (“iprove”) • bounded model checking (“bmc”) • sequential sweep (“scl”) • phase-abstraction (“phase”) • most-forward retiming (“dret -f”) • partitioned register correspondence (“lcorr”) • min-register retiming (“dretime”) • combinational SAT sweeping (“fraig”) • for ( K = 1; K ≤ 16; K = K * 2 ) • signal correspondence (“scorr”) • stronger AIG rewriting (“dc2”) • min-register retiming (“dretime”) • sequential AIG simulation • interpolation (“int”) • BDD-based reachability (“reach”) • saving reduced hard miter (“write_aiger”) (Side labels on the slide group these stages into: combinational solver, faster engines, slower engines, main induction loop, and last-gasp engines.)
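
Read as control flow, the main induction loop of the slide looks roughly like the sketch below (a paraphrase only; the helper functions stand in for the listed ABC commands and are not dprove's actual code).

```python
# Sketch of the main induction loop from the slide (the helpers stand in
# for the ABC commands "scorr", "dc2", "dretime" and sequential simulation;
# this paraphrases the flow and is not dprove's actual implementation).

def main_induction_loop(miter_aig):
    k = 1
    while k <= 16:                        # for ( K = 1; K <= 16; K = K * 2 )
        miter_aig = run_scorr(miter_aig, k)    # signal correspondence, K frames
        miter_aig = run_dc2(miter_aig)         # stronger AIG rewriting
        miter_aig = run_dretime(miter_aig)     # min-register retiming
        if sequential_simulation_finds_cex(miter_aig):
            return "SAT: counterexample found"
        if is_constant_zero(miter_aig):
            return "UNSAT: property proved"
        k *= 2
    return "undecided: hand off to the last-gasp engines (int, reach)"
```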

  14. Retiming Engine • Uses most-forward retiming to “canonicize” register positions (before register correspondence) • Uses min-register retiming to minimize the number of registers (before signal correspondence) • Exposes combinational logic for optimization • Minimizes the size of state-space • Derives a new register boundary • Min-register retiming algorithm is new (A. Hurst, DAC’08) • Fast implementation based on max-flow/min-cut • Trades runtime for optimality • Guarantees existence of initial state • Minimizes register perturbation

  15. Induction Engine • Can be used to prove one or more properties • Can strengthen them using signal correspondence • Supports external constraints (under development) • Supports uniqueness constraints (under development) • Only the cone-of-influence of a property is constrained • Highly-optimized implementation • Speculative reduction • Smart simulation for better filtering of candidate properties • Aggressive filtering using structural similarity (for SEC)
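
For reference, the standard k-step induction check for a property P over initial states I and transition relation T consists of two SAT queries (a textbook formulation, not ABC-specific); uniqueness constraints add s_i ≠ s_j for i ≠ j to the inductive step.

```latex
\begin{align*}
\text{Base case (must be UNSAT):}\quad
  & I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-2} T(s_i, s_{i+1})
    \;\wedge\; \bigvee_{i=0}^{k-1} \neg P(s_i) \\
\text{Inductive step (must be UNSAT):}\quad
  & \bigwedge_{i=0}^{k-1} \bigl( P(s_i) \wedge T(s_i, s_{i+1}) \bigr)
    \;\wedge\; \neg P(s_k)
\end{align*}
```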

  16. Interpolation Engine • Uses McMillan’s interpolation algorithm for unbounded model checking • Has several improved features • Improved termination condition • “Solver-independent” proof-logging • Features experimental implementation of • Backward interpolation • Alternative proof-logging algorithms • Alternative interpolant computation methods

  17. Experimental Results • Sequential verifier in ABC • First implemented in summer 2007 • Publicly available since September 2007 • Now working on second-generation code • Very active research area - lots of new ideas to try! • Test cases • Generated by applying sequential synthesis in ABC • Public benchmarks from various sources • Industrial problems from several companies

  18. Hardware Model Checking Competition at CAV (HWMCC’08) • Competition organizers • Armin Biere (Johannes Kepler University, Linz, Austria) • Alessandro Cimatti (IRST, Trento, Italy) • Koen Lindström Claessen (Chalmers University, Gothenburg, Sweden) • Toni Jussila (OneSpin Solutions, Munich, Germany) • Ken McMillan (Cadence Berkeley Labs, Berkeley, USA) • Fabio Somenzi (University of Colorado, Boulder, USA) • A total of 16 solvers from 6 universities • A total of 645 benchmarks • 344 old and 301 new • Resource limits per problem (on Intel Pentium IV, 3 GHz, 2 GB) • Runtime limit: 900 sec • Memory limit: 1.5 GB

  19. Results Courtesy Armin Biere

  20. HWMCC’08: All Benchmarks Courtesy Armin Biere

  21. HWMCC’08: SAT Benchmarks Courtesy Armin Biere

  22. HWMCC’08: UNSAT Benchmarks Courtesy Armin Biere

  23. Competition Webpage

  24. Summary • Presented basics of formal verification • Described integrated flow in ABC • Reviewed the results of HWMCC’08

  25. Ongoing and Future Work • Improved interpolation • command “int” • New choice computation • command “dch” (not covered in this talk) • New inductive prover • command “scorr”

  26. Interpolation: Basics • Input: Sequential AIG with single output representing a property • Property holds when the output is 0 • Method: Over-approximate reachability analysis • Using over-approximations, instead of exact sets of reachable states • Output: Proof that the property holds • Implementation: A sequence of SAT calls on unrolled time-frames that is similar to bounded model checking (Figure: unrolled time-frames split into parts A and B, with reachable-set over-approximations R1, R2, R3, …, Rn, interpolants Ik and Ik+1, and the property P.)
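
The over-approximate reachability loop can be sketched as follows (a schematic of the general interpolation-based method, heavily simplified relative to the ABC command “int”; sat_bmc_split, contains, and union are hypothetical helpers).

```python
# Schematic interpolation-based model checking (simplified; the helpers
# sat_bmc_split, contains, and union are hypothetical and stand in for
# the real SAT/AIG machinery behind the ABC command "int").

def interpolation_check(init, trans, bad, max_frames):
    k = 1
    while k <= max_frames:
        R = init                                  # current over-approximation
        while True:
            # A = R(s0) & T(s0,s1);  B = T(s1,s2) & ... & "bad within k frames"
            result = sat_bmc_split(R, trans, bad, k)
            if result.satisfiable:
                if R is init:
                    return "counterexample of length <= k"   # real bug
                break                             # R too coarse: deepen the unrolling
            itp = result.interpolant              # over-approximates the image of R
            if contains(R, itp):                  # fixed point: no new states
                return "property proved"
            R = union(R, itp)
        k += 1
    return "undecided"
```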

  27. Interpolation: Experiments (Done in collaboration with Roland Jiang, National Taiwan University.) • Checking termination using induction • Quit, if interpolant is a k-step-inductive invariant • Compare two interpolation algorithms • McMillan’s vs. Pudlak’s • Backward interpolation • Interpolate the last time frame, instead of the first • Compare two different proofs • Proof logger in ABC vs. proof logger in MiniSat-1.14p

  28. Checking Termination by Induction (This idea was suggested by Ken McMillan, Cadence Research Labs.) • Traditional approach: Check termination by checking Boolean containment of Ik+1 in Ik • If so, a fixed-point is reached • New approach: Check termination by checking whether Ik is an inductive invariant • If so, iteration can stop because (i) Ik contains all reachable states and (ii) the property holds for all states in Ik • Improvement: Use k-step induction where k increases proportionally to the effort applied in the interpolation procedure
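
Phrased as formulas (a standard one-step rendering of when a set is a property-implying inductive invariant; the slide's improvement replaces the second condition with a k-step inductive version), stopping is sound once the current interpolant-derived set I satisfies:

```latex
\begin{align*}
\mathit{Init}(s) &\;\Rightarrow\; I(s)      && \text{(contains the initial states)} \\
I(s) \wedge T(s, s') &\;\Rightarrow\; I(s') && \text{(closed under the transition relation)} \\
I(s) &\;\Rightarrow\; P(s)                  && \text{(implies the property)}
\end{align*}
```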

  29. Two Interpolation Procedures • McMillan’s • Root clauses: a clause of A gets the OR of its global literals; a clause of B gets constant 1 • Learned clauses: resolution on a variable of A gets the OR of the two partial interpolants; resolution on a variable of B or C gets their AND • Pudlak’s • Root clauses: a clause of A gets constant 0; a clause of B gets constant 1 • Learned clauses: resolution on a variable of A gets the OR; on a variable of B, the AND; on a variable of C, a MUX controlled by that variable
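
The McMillan column can be turned into a small recursive procedure over a resolution proof (a sketch; the proof/clause encoding and the string-formula output are invented for illustration, and a real implementation would build an AIG and memoize over the proof DAG).

```python
# Sketch of McMillan's partial-interpolant rules over a resolution proof
# (the proof/clause encoding and the string-formula output are invented
# for illustration; a real implementation builds an AIG over the proof DAG).

def mcmillan_itp(node, a_local_vars, shared_vars):
    # node is either ('root', clause, part) with part in {'A', 'B'} and
    # clause a list of signed ints, or ('res', pivot_var, left, right).
    if node[0] == 'root':
        _, clause, part = node
        if part == 'B':
            return 'TRUE'                       # clause of B gets constant 1
        glob = [str(lit) for lit in clause if abs(lit) in shared_vars]
        return '(' + ' | '.join(glob) + ')' if glob else 'FALSE'
    _, pivot, left, right = node
    i1 = mcmillan_itp(left, a_local_vars, shared_vars)
    i2 = mcmillan_itp(right, a_local_vars, shared_vars)
    if pivot in a_local_vars:                   # pivot local to A: OR
        return '(' + i1 + ' | ' + i2 + ')'
    return '(' + i1 + ' & ' + i2 + ')'          # pivot of B or shared (C): AND
```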

  30. Backward Interpolation • Instead of interpolating init-state and the first time frame, interpolate negated property and the last frame • Unroll circuit backward rather than forward It was found experimentally that backward interpolation rarely has better runtime

  31. Two Proof Logging Procedures • ABC • Uses a sequence of learned clauses • Is largely independent of the SAT solver • Doubles the runtime of the SAT solver because the proof is re-derived using backward BCP • MiniSat-1.14p • Records the steps of conflict analysis • The SAT solver has to be heavily modified • Has little runtime overhead but may use more memory • It was found experimentally that using proof-logging in ABC results in a faster interpolation procedure

  32. Interpolation Results The table reports the runtime of the command “int” in ABC, which implements Ken McMillan’s unbounded model checking procedure. The runtime is in seconds on an IBM laptop with a 1.6 GHz Pentium 4 CPU and 2 GB of RAM. The timeout was set to 300 seconds. Default interpolation parameters: inductive check (K=2), original transition relation (no self-loop), forward interpolation, proof-logging engine in ABC.

  33. Inductive Prover: Basics • Candidate equivalences: {A,B}, {C,D} • Base case: proving the internal equivalences in initialized time-frames 0 through K-1, starting from the initial state • Inductive case: assuming the internal equivalences in uninitialized time-frames 0 through K-1, starting from a symbolic state, and proving them in a topological order in frame K (Figure: the base-case and inductive-case unrollings with primary inputs PI0 … PIk and SAT calls SAT-1 through SAT-4.)
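
At a high level, such a prover refines candidate equivalence classes until they become inductive, as in the sketch below (hypothetical helpers; the real “scorr” engine adds speculative reduction, smarter simulation, and many other optimizations).

```python
# Sketch of proving candidate equivalences by K-step induction
# (hypothetical helpers; ABC's "scorr" engine is far more involved).

def prove_equivalences(aig, k):
    classes = candidate_classes_from_simulation(aig)   # initial guesses
    while True:
        refined = False
        for cand in candidate_pairs(classes):
            # Assume all candidates in uninitialized frames 0..K-1,
            # then try to prove this candidate in frame K (inductive case).
            cex = sat_check_inductive_case(aig, classes, cand, k)
            if cex is not None:
                classes = refine_with_counterexample(classes, cex)
                refined = True
        if not refined:
            break            # surviving classes are inductive
    # Base case: check the surviving candidates in initialized frames 0..K-1.
    assert sat_check_base_case(aig, classes, k)
    return classes
```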

  34. Inductive Prover: Experiments • Simulation of additional timeframes • Counter-examples to induction can be simulated over several timeframes, resulting in additional refinement • Skipping SAT calls for some cand. equivalences • Can skip an equivalence if its cone-of-influence did not change after the last iteration of refinement • Improved implementation • Better AIG to CNF conversion • Better candidate equivalence class manipulation • More flexible simulation

  35. Inductive Prover: Results • Using a large test-case taken at random from resynthesis/retiming/resynthesis benchmarks (R. Jiang et al, ICCAD’07) • Running three versions of ABC on a laptop • Old prover (September 2007) • 171 sec • Improved old prover (September 2008) • 94 sec • New prover (September 2008) • 31 sec

  36. Inductive Prover: Next Steps • Support external sequential constraints • Use constrained instead of random simulation • Add uniqueness constraints on demand • May increase inductive power for hard properties • Use aggressive filtering of cand. equivalences • May speed up SEC after seq. synthesis when most of the circuit structure did not change (e.g. clock-gating)

  37. Future Work • Incorporate stand-alone speculative reduction into the verification engine • May extend the scope of hard problems solved • Experiment with several other “fast engines” • May provide better filtering for the problems • Re-implement CEC engine using new ideas • Tune for circuits with little or no common structure
