
ABC: An Academic “Industrial-Strength” Verification Tool (based on a tutorial given at CAV’10)






Presentation Transcript


  1. ABC: An Academic “Industrial-Strength” Verification Tool (based on a tutorial given at CAV’10) BVSRC Berkeley Verification and Synthesis Research Center, UC Berkeley. Robert Brayton, Niklas Een, Alan Mishchenko, Jiang Long, Sayak Ray, Baruch Sterin. Thanks to: NSF, NSA, SRC, and industrial sponsors Actel, Altera, Atrenta, IBM, Intel, Jasper, Magma, Oasys, Real Intent, Synopsys, Tabula, and Verific

  2. Overview • General introduction to ABC • Synergy between synthesis and verification • Introduction to AIGs • Contrast between classical synthesis and ABC synthesis • Algorithm example: re-synthesis with don’t cares using SAT and interpolation • Equivalence checking • Combinational and sequential • Recording synthesis history as a way of reducing complexity • ABC+: orchestrated verification flow • Simplification • Extraction of constraints • Phase abstraction • Forward and minimum FF retiming • K-step induction • Abstraction • Speculation • Last gasp: BMC, BDDs, interpolation • Verification example (super_prove) • Future work

  3. ABC • A synthesis and verification tool under development at Berkeley • Started 6 years ago as a replacement for SIS • Academic public domain tool • “Industrial-strength” • Has been employed in commercial offerings of various CAD companies • In both synthesis and verification • Exploits the synergy between synthesis and verification

  4. A Plethora of ABCs http://en.wikipedia.org/wiki/Abc • ABC (American Broadcasting Company) • A television network… • ABC (Active Body Control) • ABC is designed to minimize body roll when cornering, accelerating, and braking. The system uses 13 sensors which monitor body movement to supply the computer with information every 10 ms… • ABC (Abstract Base Class) • In C++, these are generic classes at the base of the inheritance tree; objects of such abstract classes cannot be created… • Atanasoff–Berry Computer • The Atanasoff–Berry Computer (ABC) was the first electronic digital computing device. Conceived in 1937, the machine was not programmable, being designed only to solve systems of linear equations. It was successfully tested in 1942. • ABC (supposed to mean “as simple as ABC”) • A system for sequential synthesis and verification at Berkeley

  5. Design Flow [Figure: System Specification → RTL → Logic synthesis → Technology mapping → Physical synthesis → Manufacturing; ABC covers the logic synthesis and technology mapping steps, with equivalence checking and property-checking verification tying it into the flow]

  6. Areas Addressed by ABC • Combinational synthesis • AIG rewriting • technology mapping • resynthesis after mapping • Sequential synthesis • retiming • structural register sweep • merging seq. equiv. nodes • Verification • combinational equivalence checking • bounded sequential verification • unbounded sequential verification • equivalence checking using synthesis history • property checking (safety and liveness)

  7. Synergy – Two Kinds • The algorithms and advancements in verification can be used in synthesis, and vice versa. • One enables the other • Verification enables synthesis - equivalence checking capability enables acceptance of sequential transformations • retiming • use of unreachable states • sequential signal correspondence, etc • Synthesis enables verification • Desire to use sequential synthesis operations (shown by superior results) spurs verification developments

  8. Examples of The Synergy • Similar solutions • e.g. retiming in synthesis / retiming in verification • Algorithm migration • e.g. BDDs, SAT, induction, interpolation, rewriting • Related complexity • scalable synthesis <=> scalable verification (approximately) • Common data-structures • e.g. combinational and sequential AIGs

  9. Evidence of Synergy Between Synthesis and Verification • IBM • Has a very capable sequential verification engine – SixthSense • Used throughout IBM to verify properties and equivalence • Designers are more willing to consider sequential transformations now • ABC • Sequential verification was developed to check that new algorithms were implemented correctly • Example of a startup company • Had developed sequential methods to reduce power • Needed a verification tool to double-check that their ideas and implementations were correct • Needed a tool to assure customers that results were correct

  10. AIGs (And-Inverter Graphs) Why AIGs? Same reasons hold for both synthesis and verification • Easy to construct, relatively compact, robust • 1M AIG ~ 12Mb RAM • Can be efficiently stored on disk • 3-4 bytes / AIG node (1M AIG ~ 4Mb file) • Unifying representation • Used by all the different verification engines • Easy to pass around, duplicate, save • Compatible with SAT solvers • Efficient AIG-to-CNF conversion available • Circuit-based SAT solvers work directly on AIG • “AIGs + simulation + SAT” works well in many cases
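As a concrete illustration of the “efficient AIG-to-CNF conversion” point, here is a minimal Tseitin-style sketch (not ABC’s code): each 2-input AND node contributes three clauses, and a final unit clause asserts the output.

```python
# Minimal sketch of AIG-to-CNF (Tseitin) conversion, assuming an AIG given as
# a list of AND nodes (out, fanin0, fanin1) where each literal is a signed
# integer in DIMACS style (negative = complemented edge). Not ABC's code.
def aig_to_cnf(and_nodes, output_lit):
    clauses = []
    for o, a, b in and_nodes:
        # o <-> (a & b) becomes three clauses
        clauses.append([-o, a])        # o -> a
        clauses.append([-o, b])        # o -> b
        clauses.append([o, -a, -b])    # (a & b) -> o
    clauses.append([output_lit])       # assert the AIG output (e.g., a miter)
    return clauses

# Example: assert the output of node 3 = AND(x1, x2)
print(aig_to_cnf([(3, 1, 2)], 3))
```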

  11. AIGs [Figure: AIGs for F = abc, G = (abc)’, and H = abc’ before and after structural hashing] • Structural hashing • Performs AIG compaction • Applied on-the-fly during construction • Propagates constants • Makes each node structurally unique
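A toy sketch of the structural hashing described above, assuming a simple integer-literal encoding (not ABC’s data structures): constants are propagated and a hash table keyed on the ordered fanin pair keeps each AND node structurally unique. The slide’s F, G, H example is reconstructed at the end.

```python
class Aig:
    TRUE, FALSE = 1, -1                  # literal 1 is the constant-1 node

    def __init__(self):
        self.next_id = 2
        self.strash = {}                 # (lit0, lit1) -> output literal

    def new_pi(self):
        lit = self.next_id; self.next_id += 1
        return lit

    def AND(self, a, b):
        # constant propagation
        if a == self.FALSE or b == self.FALSE or a == -b:
            return self.FALSE
        if a == self.TRUE: return b
        if b == self.TRUE: return a
        if a == b: return a
        key = (min(a, b), max(a, b))     # order fanins canonically
        if key not in self.strash:       # structural hashing: reuse if present
            lit = self.next_id; self.next_id += 1
            self.strash[key] = lit
        return self.strash[key]

    def NOT(self, a):
        return -a

aig = Aig()
a, b, c = aig.new_pi(), aig.new_pi(), aig.new_pi()
f = aig.AND(aig.AND(a, b), c)            # F = abc
g = aig.NOT(f)                           # G = (abc)' shares F's node
h = aig.AND(aig.AND(a, b), aig.NOT(c))   # H = abc' reuses the (ab) node
```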

  12. AIG Memory Usage • Memory allocation • Use a fixed amount of memory for each node • Can be done by a simple custom memory manager • Dynamic fanout manipulation is supported! • Allocate memory for nodes in a topological order • Optimized for traversal in the same topological order • Most of the AIG stays in cache – fewer cache misses • Small static memory footprint in many applications • Compute fanout information on demand

  13. Quick Overview of “Classical” (Technology-Independent) Logic Synthesis [Figure: a Boolean network in SIS with internal nodes x, y, z over inputs a, b, c, d, e and output f] • Boolean network • Network manipulation (algebraic) • Elimination (substituting a node into its fanouts) • Decomposition (common-divisor extraction) • Node minimization (Boolean) • Espresso • Don’t cares • Resubstitution (algebraic or Boolean)

  14. “Classical” Logic Synthesis vs. AIGs [Figure: a Boolean network in SIS and the equivalent AIG in ABC] An AIG is a Boolean network of 2-input AND nodes and inverters (shown as dotted lines)

  15. One AIG Node – Many Cuts [Figure: a combinational AIG with several different cuts drawn for the same node] An AIG can be used to compute many cuts for each node • Each cut in the AIG represents a different SIS node • The SIS node logic is the AIG between the cut and the root • No a priori fixed boundaries This implies that AIG manipulation with cuts is equivalent to working on many Boolean networks at the same time
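A small sketch of k-feasible cut enumeration (illustrative only; real implementations also prune dominated cuts): the cut set of a node is formed by merging the cut sets of its fanins.

```python
# Sketch of k-feasible cut enumeration on an AIG, assuming nodes are listed in
# topological order as (node, fanin0, fanin1) with negative literals for
# complemented edges; PIs have no entry.
def enumerate_cuts(and_nodes, pis, k=4):
    cuts = {pi: [{pi}] for pi in pis}            # trivial cut of each PI
    for n, a, b in and_nodes:
        merged = [{n}]                           # trivial cut of the node itself
        for c0 in cuts[abs(a)]:
            for c1 in cuts[abs(b)]:
                c = c0 | c1
                if len(c) <= k and c not in merged:
                    merged.append(c)
        cuts[n] = merged
    return cuts

# Each cut of a node defines a "SIS-like node": the logic between the cut
# leaves and the root.  Example AIG for f = (a & b) & (c & d):
print(enumerate_cuts([(5, 1, 2), (6, 3, 4), (7, 5, 6)], pis=[1, 2, 3, 4]))
```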

  16. Combinational Synthesis [Figure: pre-computed subgraphs 1–3 for the function f = abc; rewriting node A with subgraph 2 and rewriting node B with subgraph 1 each save one node by reusing existing logic] • AIG rewriting minimizes the number of AIG nodes without increasing the number of AIG levels • Pre-computed AIG subgraphs • Consider function f = abc • Rewriting AIG subgraphs: in both cases 1 node is saved

  17. Combinational Rewriting
  iterate 10 times {
      for each AIG node {
          for each k-cut
              derive the node output as a function of the cut variables
              if ( a smaller AIG is in the pre-computed library )
                  rewrite using the improved AIG structure
      }
  }
  Note: For 4-cuts, each AIG node has, on average, 5 cuts, compared to a SIS node with only 1 cut. Rewriting at a node can be very fast – using hash-table lookups, truth table manipulation, and disjoint decomposition
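A sketch of the “derive the node output as a function of the cut variables” step, assuming the same (node, fanin0, fanin1) encoding as above: the truth table over the cut leaves is the key used for the pre-computed library lookup (the lookup itself is omitted).

```python
from itertools import product

def cut_function(root, cut_leaves, and_nodes):
    # Compute the truth table of 'root' over the cut leaves (one bit per minterm).
    node_def = {n: (a, b) for n, a, b in and_nodes}
    tt = 0
    for i, assignment in enumerate(product([0, 1], repeat=len(cut_leaves))):
        value = {leaf: v for leaf, v in zip(cut_leaves, assignment)}
        def ev(lit):
            n = abs(lit)
            if n not in value:
                a, b = node_def[n]
                value[n] = ev(a) & ev(b)
            v = value[n]
            return 1 - v if lit < 0 else v
        tt |= ev(root) << i
    return tt

# f = (a & b) & ~(c & d) over the cut {a, b, c, d}
nodes = [(5, 1, 2), (6, 3, 4), (7, 5, -6)]
print(hex(cut_function(7, [1, 2, 3, 4], nodes)))
```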

  18. Combinational Rewriting Illustrated [Figure: Working AIG and History AIG; node n is replaced by n’ in the Working AIG, and both are recorded in the History AIG] AIG rewriting looks at one AIG node, n, at a time • A set of new nodes replaces the old fanin cone of n • The rewriting can account for a better implementation which reuses existing nodes in the network (DAG-aware) A synthesis history can be recorded easily with AIGs • the old root and the new root nodes are grouped into an equivalence class (more on this later)

  19. Comparison of Two Syntheses • “Classical” synthesis • Boolean network • Network manipulation (algebraic): elimination, decomposition (common-kernel extraction) • Node minimization: Espresso, don’t cares computed using BDDs, resubstitution • “Contemporary” synthesis • AIG network • DAG-aware AIG rewriting (Boolean): several related algorithms – rewriting, refactoring, balancing • Node minimization: Boolean decomposition, don’t cares computed using simulation and SAT, resubstitution with don’t cares • Note: here all algorithms are scalable: no SOPs, no BDDs, no Espresso

  20. Example Algorithm:Resubstitution in ABC • Illustrates computation and use of don’t cares • Illustrates use of SAT and interpolation. • All done in a scalable way

  21. Windowing a Node in a Mapped Network [Figure: a mapped network of gates or LUTs; a window around node X with n = 3 levels of transitive fanin and m = 3 levels of transitive fanout, with window PIs and window POs marked; X is a SIS node, gate, or FPGA LUT] A window for a node in the network is the context in which the don’t-cares are computed. It includes: • n levels of the TFI • m levels of the TFO • all re-convergent paths captured in this scope A window with its PIs and POs can be considered as a separate network
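A rough sketch of windowing, assuming the network is given as fanin/fanout dictionaries (hypothetical code, not ABC’s windowing): n levels of TFI, m levels of TFO, plus the fanin cones of the TFO nodes so that reconvergent paths inside this scope are captured.

```python
def window(node, fanins, fanouts, n=3, m=3):
    def levels(start, adj, depth):
        # nodes reachable from 'start' within 'depth' steps over 'adj'
        seen, frontier = {start}, {start}
        for _ in range(depth):
            frontier = {x for f in frontier for x in adj.get(f, [])} - seen
            seen |= frontier
        return seen

    tfi = levels(node, fanins, n)                  # n levels of TFI
    tfo = levels(node, fanouts, m)                 # m levels of TFO
    side = set()
    for t in tfo:                                  # fanin cones of the TFO nodes
        side |= levels(t, fanins, n + m)           # capture reconvergent paths
    return tfi | tfo | side

fanins  = {"n": ["x", "y"], "x": ["a"], "y": ["b"], "p": ["n", "a"]}
fanouts = {"x": ["n"], "y": ["n"], "a": ["x", "p"], "b": ["y"], "n": ["p"]}
print(window("n", fanins, fanouts, n=1, m=1))
```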

  22. Don’t-Care Computation Framework [Figure: two copies of the window around node n, one with an inverter inserted on n before its fanout; the window POs of the two copies are compared to form a “miter” whose output is the care function C(X) over the window PIs X] A “miter” is constructed for the window POs; its output is the care network C(X)

  23. Resubstitution [Figure: node X re-expressed using a different set of fanins] Resubstitution considers a node in a Boolean network and expresses it using a different set of fanins. The computation can be enhanced by the use of don’t cares

  24. Resubstitution with Don’t-Cares – Overview Consider all or some nodes in the Boolean network. For each node: • Create the window and its care network • Select candidate divisor nodes in the non-fanout cone within the window • For each candidate subset of divisors • If possible, rule it out with simulation (see the sketch below) • Otherwise check it with SAT (the problem is UNSAT exactly when a resubstitution exists) • If UNSAT, compute the resubstitution function using interpolation • Update the network if the cost improves
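A sketch of the “rule it out with simulation” filter mentioned above; F, C, and the divisors are plain Python callables standing in for simulation of the window AIG.

```python
# If two care minterms already seen disagree on F but agree on every candidate
# divisor, the candidate set is infeasible and the SAT call is skipped.
import random

def ruled_out_by_simulation(F, C, divisors, num_vars, patterns=256):
    seen = {}                                   # divisor signature -> F value
    for _ in range(patterns):
        x = tuple(random.randint(0, 1) for _ in range(num_vars))
        if not C(*x):
            continue                            # ignore don't-care minterms
        sig = tuple(g(*x) for g in divisors)
        if sig in seen and seen[sig] != F(*x):
            return True                         # infeasible: skip the SAT call
        seen[sig] = F(*x)
    return False                                # looks feasible; confirm with SAT
```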

  25. Resubstitution with Don’t Cares [Figure: node F(x), its care network C(x), and candidate divisors g1, g2, g3; the goal is to replace F(x) by h(g) on the care set] • Given: • the node function F(x) to be replaced • the care network C(x) for the node • a candidate set of divisors {gi(x)} • Find: • a resubstitution function h(y) such that F(x) = h(g(x)) on the care set • Substitution Theorem: A function h(y) exists if and only if for every pair of care minterms x1 and x2 where F(x1) ≠ F(x2), there exists k such that gk(x1) ≠ gk(x2)

  26. Example of Resubstitution Substitution Theorem: Any minterm pair that needs to be distinguished by F(x) should be distinguished by at least one of the candidates {gk(x)} Example: F(x) = (x1 x2)(x2 x3) Two candidate sets: {g1 = x1’x2, g2 = x1x2’x3}, {g3 = x1x2, g4 = x2x3} Set {g3, g4} cannot be used for resubstitution while set {g1, g2} can (check all minterm pairs)
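Since the operators in the formula above were lost in transcription, here is a brute-force checker for the Substitution Theorem together with a small verified stand-in example (3-input parity), not the slide’s original functions.

```python
# Check that every pair of care minterms on which F differs is distinguished
# by at least one candidate divisor.
from itertools import product

def resub_feasible(F, divisors, num_vars, C=lambda *x: 1):
    care = [x for x in product([0, 1], repeat=num_vars) if C(*x)]
    for x1, x2 in product(care, repeat=2):
        if F(*x1) != F(*x2) and all(g(*x1) == g(*x2) for g in divisors):
            return False               # an undistinguished minterm pair exists
    return True

F = lambda x1, x2, x3: x1 ^ x2 ^ x3                                      # parity
print(resub_feasible(F, [lambda a, b, c: a ^ b, lambda a, b, c: c], 3))  # True
print(resub_feasible(F, [lambda a, b, c: a & b, lambda a, b, c: c], 3))  # False
```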

  27. Checking Resubstitution Using SAT [Figure: the miter for the resubstitution check, built over two copies of the AIG network containing F] Substitution Theorem: Any minterm pair that needs to be distinguished by F(x) should be distinguished by at least one of the candidates {gk(x)} Note the use of the care set. A resubstitution function exists if and only if the problem is unsatisfiable
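A sketch of this SAT check on a tiny hand-encoded instance, using the python-sat package as a stand-in solver (an assumption; ABC uses its own circuit-based solver): two copies of the window are built, clauses force every divisor to agree across the copies while F disagrees, and UNSAT means a resubstitution exists.

```python
from pysat.solvers import Minisat22

# Copy 1: a=1, b=2, f=3 with f <-> a & b.  Copy 2: a'=4, b'=5, f'=6.
# Divisors are g1 = a and g2 = b; the care set is trivially all minterms here.
cnf  = [[-3, 1], [-3, 2], [3, -1, -2]]       # f  <-> a  & b
cnf += [[-6, 4], [-6, 5], [6, -4, -5]]       # f' <-> a' & b'
cnf += [[-1, 4], [1, -4]]                    # divisor g1 agrees: a = a'
cnf += [[-2, 5], [2, -5]]                    # divisor g2 agrees: b = b'
cnf += [[-3, -6], [3, 6]]                    # but F differs: f = ~f'
with Minisat22(bootstrap_with=cnf) as s:
    print("resubstitution exists:", not s.solve())   # expect True (UNSAT)
```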

  28. Computing the Dependency Function h – Craig Interpolation [Figure: the Boolean space (x, y, z) with the interpolant h(y) separating A(x, y) from B(y, z)] • Consider two sets of clauses, A(x, y) and B(y, z), where y are the only variables common to A and B • A Craig interpolant of the pair (A(x, y), B(y, z)) is a function h(y) depending only on the common variables y such that A(x, y) ⇒ h(y) and h(y) ∧ B(y, z) is unsatisfiable • It exists iff A(x, y) ∧ B(y, z) is unsatisfiable
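A two-assert sanity check of the definition on toy formulas chosen here (not from the tutorial):

```python
# Toy check of the interpolant conditions A -> h and (h & B) unsatisfiable,
# with A(x, y) = x & y, B(y, z) = ~y & z and candidate interpolant h(y) = y.
from itertools import product

A = lambda x, y: x and y
B = lambda y, z: (not y) and z
h = lambda y: y

assert all(h(y) for x, y in product([0, 1], repeat=2) if A(x, y))       # A -> h
assert not any(h(y) and B(y, z) for y, z in product([0, 1], repeat=2))  # h & B = 0
print("h(y) = y is a Craig interpolant of (A, B)")
```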

  29. Computing the Dependency Function h by Interpolation (Implementation) Problem: • Find a function h(y) such that C(x) ⇒ [h(g(x)) ≡ F(x)], i.e. F(x) is expressed in terms of {gk} on the care set Solution: • Prove the corresponding SAT problem “unsatisfiable” • Derive the unsatisfiability resolution proof [Goldberg/Novikov, DATE’03] • Divide the clauses into A clauses and B clauses • Derive an interpolant from the unsatisfiability proof [McMillan, CAV’03] • Use the interpolant as the dependency function h(g) • Replace F(x) by h(g) if the cost function improves Notes on this solution: • uses don’t cares • does not use Espresso • is more scalable

  30. Sequential Verification [Figure: a property-checking miter, design D1 feeding property p with output compared to 0, and an equivalence-checking miter, designs D1 and D2 with pairwise-compared outputs] • Property checking • Create a miter from the design and the safety property • Special construction for liveness • Biere, Artho, Schuppan • Equivalence checking • Create a miter from two versions of the same design • Assuming the initial state is given • The goal is to prove that the output of the miter is 0 for all states reachable from the initial state

  31. Sequential Equivalence Checking and Sequential Synthesis Complexity Problem: • Although the combination of iterated retiming and combinational synthesis (both scalable methods) has been shown to be very effective, • sequential equivalence checking for it has been shown to be as hard as general sequential equivalence checking (PSPACE-complete) How to make it simpler? Leave a trail of synthesis transformations (a History)

  32. Recording a History Observation: • Each transformation can be broken down into a sequence of small steps • Combinational rewriting • Sequential rewriting • Retiming • Using ODCs obtained from a window How do we easily and uniformly record the history of these steps?

  33. Easily Recording Synthesis History [Figure: a Working AIG (WAIG) alongside a History AIG (HAIG)] • Two AIG managers are used • the normal Working AIG (WAIG) • the History AIG (HAIG) • Combinational structural hashing is used in both managers • Two node mappings are supported • Every node in the WAIG points to a node in the HAIG • Some nodes in the HAIG point to other nodes in the HAIG that are sequentially equivalent

  34. Sequential Rewriting Example (combined retiming and combinational rewriting) [Figure: a rewriting step over the sequential cut {a, b, b1, c1, c}; the new nodes are added to the History AIG and marked sequentially equivalent to the old ones] The History AIG accumulates sequential equivalence classes

  35. Practicality Conceptually this is easy: each operation on the Working AIG is mirrored by an operation on the History AIG • createAigManager <---> createAigManager • deleteAigManager <---> deleteAigManager • createNode <---> createNode, setWaigToHaigMapping • replaceNode <---> setEquivalentHaigMapping • deleteNode_recur <---> do nothing • Use of the HAIG makes SEC easier (only coNP-complete) Practically it is more of a coding effort to record the history than we thought • Since there has been little interest so far, it is not fully implemented in ABC • It still might be of interest to a company that does both synthesis and verification

  36. Integrated Verification Flow • Simplification • Abstraction • Speculation • High Effort Verification

  37. Integrated Verification Flow • Simplification – initial fast simplification of the logic • Forward retime and do FF correspondence • Minimum FF retime • Extract implicit constraints** and use them to find signal equivalences (ABC command scorr -c) • Fold back the constraints • add a FF so that if a constraint is ever violated, the output is forced to 0 forever after that (see the sketch below) • Trim away irrelevant inputs (those that do not fan out to FFs or POs) • Try phase abstraction (look for periodic signals) • Heavy simplification • (k-step signal correspondence and deep rewriting) ** (see the paper by Cabodi et al.)
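One common way to realize the “fold back the constraints” step, sketched here as an assumption rather than ABC’s exact construction: an extra flop remembers whether every constraint has held so far and masks the property output from the first violation onward.

```python
def fold_constraint(outputs, constraints):
    ok = 1                                    # the extra flop, initial value 1
    folded = []
    for out, c in zip(outputs, constraints):  # one entry per clock cycle
        folded.append(out & ok & c)           # mask the output this cycle too
        ok &= c                               # a violation is remembered forever
    return folded

# A constraint violation at cycle 2 masks the property output from then on.
print(fold_constraint(outputs=[1, 1, 1, 1, 1], constraints=[1, 1, 0, 1, 1]))
```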

  38. Sequential SAT Sweeping (signal correspondence) [Figure: designs D1 and D2 with internal equivalences A = B and C = D proved by SAT calls SAT-1 and SAT-2 in a topological order] Related to combinational CEC • Naïve approach • Build the output miter – call SAT • works well for many easy problems • Better approach – SAT sweeping • based on incremental SAT solving • detects possibly equivalent nodes using simulation • candidate constant nodes • candidate equivalent nodes • runs SAT on the intermediate miters in a topological order • refines the candidates using counterexamples
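A sketch of the simulation step that produces the SAT candidates (the signature function is a hypothetical stand-in for bit-parallel simulation of the AIG; complemented signatures are ignored for simplicity):

```python
from collections import defaultdict

def candidate_classes(nodes_in_topo_order, signature):
    # signature(n): n's bit-parallel simulation word over random input patterns
    classes = defaultdict(list)
    for n in nodes_in_topo_order:
        classes[signature(n)].append(n)
    # classes with more than one member are candidate equivalences;
    # the all-zero signature additionally marks candidate constant nodes
    return [c for sig, c in classes.items() if len(c) > 1 or sig == 0]

# Toy usage: 'a' and 'b' simulate identically, 'c' looks like constant 0.
sigs = {"a": 0b1010, "b": 0b1010, "c": 0b0000}
print(candidate_classes(["a", "b", "c"], sigs.get))
```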

  39. Improved CEC [Figure: designs D1 and D2 with an equivalence class {A, B, A’, B’}, where A, B are in D1 and A’, B’ are in D2] • For hard CEC instances • Heuristic: skip some equivalences • results in • 5x reduction in runtime • solving previously unresolved problems • Given a combinational miter with equivalence class {A, B, A’, B’} • Possible equivalences: A = B, A = A’, A = B’, B = A’, B = B’, A’ = B’ • only try to prove A = A’ and B = B’ • do not try to prove A = B, A’ = B’, A’ = B, A = B’

  40. Sequential SAT Sweeping (signal correspondence) Similar to combinational SAT sweeping • detects node equivalences • But the equivalences are sequential • guaranteed to hold only on the reachable state space • Every combinational equivalence is a sequential one ⇒ run combinational SAT sweeping first A set of sequential equivalences is proved by k-step induction • Base case • Inductive case • Efficient implementation of induction is key!

  41. k-Step Induction [Figure: k = 2, candidate equivalences {A = B} and {C = D}. Base case (just BMC): starting from the initial state, the internal equivalences are proved in initialized frames 1 through k (SAT-1 … SAT-4). Inductive case: starting from an arbitrary state, the equivalences are assumed in uninitialized frames 1 through k and proved in frame k+1] If the proof of any one equivalence fails, the process must start over
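A brute-force sketch of plain k-step induction over an explicit state space, standing in for the SAT-based frame unrolling illustrated above; 'prop' is the candidate invariant (e.g. “node A equals node B in this state”).

```python
def base_case(init, next_states, prop, k):
    frontier = {init}
    for _ in range(k):                      # frames 1..k from the initial state
        if any(not prop(s) for s in frontier):
            return False                    # counterexample found by "BMC"
        frontier = {t for s in frontier for t in next_states(s)}
    return True

def inductive_case(states, next_states, prop, k):
    # any k consecutive frames satisfying prop, starting from an arbitrary
    # state, must force prop in frame k+1
    stack = [[s] for s in states if prop(s)]
    while stack:
        path = stack.pop()
        for t in next_states(path[-1]):
            if len(path) == k:
                if not prop(t):
                    return False            # induction fails; must start over
            elif prop(t):
                stack.append(path + [t])
    return True

# Toy example: two flops that swap every cycle; prop = "the flops are equal".
states = [(a, b) for a in (0, 1) for b in (0, 1)]
nxt = lambda s: [(s[1], s[0])]
prop = lambda s: s[0] == s[1]
print(base_case((0, 0), nxt, prop, 1) and inductive_case(states, nxt, prop, 1))
```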

  42. Efficient Implementation Two observations: • Both base and inductive cases of k-step induction are combinational SAT sweeping problems • Tricks and know-how from the above are applicable • base case is just BMC • The same integrated package can be used • starts with simulation • performs node checking in a topological order • benefits from the counter-example simulation • Speculative reduction • Deals with how assumptions are used in the inductive case

  43. Speculative Reduction [Figure: the unrolled frames with assumptions added with and without speculative reduction] Assume the equivalences are valid: • add XORs to create new POs • merge fanouts and rehash the logic Downstream logic can be simplified, and some downstream equivalences become trivial

  44. Integrated Verification Flow (continued) • Abstraction • Use the new CBA/PBA method* • Uses a single instance of the SAT solver • Uses counterexample-based abstraction, which is refined with proof-based abstraction • Checked afterwards with BMC, BDDs, and simulation for counterexamples (CEXs), and refined with CBA if one is found * N. Een, A. Mishchenko, and N. Amla, "A single-instance incremental SAT formulation of proof- and counterexample-based abstraction". Proc. IWLS'10.

  45. Counterexample-Based Abstraction (CBA) [Figure: an unrolled counterexample trace over primary inputs x0–x4 and flops s0–s2 driving ~Property] • start with the set of FFs A = Ø (all flops are treated as PIs) • the abstraction engine adds flops to A (concretizes flops) • the new flops are chosen to refute spurious counterexamples

  46. Proof-Based Abstraction (PBA) • Starts with an UNSAT k-trace • Inspects the proof: flops that do not appear in the proof are abstracted away Benefit: • More precise than CBA Drawback: • Has to unroll the full design • In contrast, CBA starts with a very small design

  47. Combined Abstraction • Use CBA to grow the abstraction bottom-up • Apply PBA only on the current abstraction (not the full design) when the k-frame unrolling is UNSAT • Do everything in a single incremental SAT instance (see the sketch below) depth 0: SAT, SAT, SAT, SAT, UNSAT depth 1: SAT, UNSAT depth 2: SAT, SAT, SAT, UNSAT … Depth = number of frames unrolled
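A high-level sketch of the combined CBA/PBA loop; the helper callables (bmc, is_spurious, refuting_flops) are hypothetical stand-ins for the underlying engines, and the real flow runs inside one incremental SAT instance as the next slide explains.

```python
def combined_abstraction(bmc, is_spurious, refuting_flops, max_depth):
    flops = set()                               # A: flops concretized so far
    for depth in range(max_depth + 1):
        while True:
            res = bmc(flops, depth)             # SAT call on the current abstraction
            if res["unsat"]:
                flops = res["proof_flops"]      # PBA: keep only flops in the proof
                break                           # move on to the next depth
            cex = res["cex"]
            if is_spurious(cex):
                flops |= refuting_flops(cex)    # CBA: concretize flops refuting the CEX
            else:
                return "counterexample", cex    # real CEX on the concrete design
    return "abstraction", flops                 # abstraction proved up to max_depth
```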

  48. Incremental SAT • Extend the solve() method in MiniSat: • Accept as input a set of literals to be treated as unit clauses for this call only • For UNSAT results, output the subset of those literals used in the proof • A very minor extension of a modern SAT solver • Allows PBA to be done without proof-logging (a major extension)
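The same style of interface is available in off-the-shelf solvers; a small demo using python-sat’s MiniSat binding (an assumption here; the tutorial describes an extension to ABC’s internal solver):

```python
from pysat.solvers import Minisat22

with Minisat22() as s:
    s.add_clause([-10, 1])             # assuming literal 10 forces x1
    s.add_clause([-20, -1])            # assuming literal 20 forces ~x1
    print(s.solve(assumptions=[10]))           # True: satisfiable
    print(s.solve(assumptions=[10, 20]))       # False: UNSAT under both assumptions
    print(s.get_core())                # the assumption literals used in the proof
```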

  49. Activation Literals • Assumptions to solve() allow selectively activating constraints: a → (f[k+1] ↔ f_in[k]) for all k, where a is an activation literal and f is a flop • Every flop in the BMC trace has its own activation literal • All activation literals are passed as assumptions to solve() • The set returned by solve() can be used for PBA

  50. Integrated Verification Flow (continued) • Speculation** • Especially useful for SEC • Simulation is used to find candidate equivalences • These are equivalences that could not be proved by induction (sequential SAT sweeping) • They are used to build a “speculative miter” • The result is double-checked with BMC, BDDs, or simulation for CEXs and refined if necessary ** H. Mony, J. Baumgartner, V. Paruthi, and R. Kanzelman, “Exploiting suspected redundancy without proving it”. Proc. DAC’05.
