
Generating Test Inputs for Fault-Tree Analyzers using Imperative Predicates

This presentation discusses the automated generation of test inputs for fault-tree analyzers using imperative predicates. It covers the challenges, the approaches, and examples of the code and data structures involved.


Presentation Transcript


  1. Generating Test Inputs for Fault-Tree Analyzers using Imperative Predicates Sasa Misailovic, Aleksandar Milicevic, Sarfraz Khurshid, Darko Marinov STEP 2007 Workshop May 7, 2007

  2. Broader Context: Automated Generation of Structurally Complex Data • Examples: red-black tree, XML document, fault tree • XML document with an XPath query /library/book[@year<2007]/title: <library> <book year=2005> <title>T1</title> <author>A1</author> </book> <book year=2006> <title>T2</title> <author>A2</author> </book> <book year=2007> <title>T3</title> <author>A3</author> </book> </library> • Fault tree in Galileo textual format: toplevel Event_0; Event_0 pand Event_1 Event_2 ISeq_0 ISeq_1 FDep_0 FDep_1; Event_1 be replication = 1; Event_2 be replication = 1; ISeq_0 seq Event_0; ISeq_1 seq Event_1; FDep_0 fdep trigger = Event_0 Event_1; FDep_1 fdep trigger = Event_1 Event_2; Event_1 dist=exponential rate=.0004 cov=0 res=.5 spt=.5 dorm=0; Event_2 dist=exponential rate=.0004 cov=0 res=.5 spt=.5 dorm=.5;

  3. Testing Setup • Examples of code under test • Abstract data type, input: data structure • XML processing program, input: XML document • Fault-tree analyzer, input: fault tree • Focus: how to generate test inputs? • [Diagram: inputs → code → outputs → testing oracle → pass/fail]

  4. Assumptions • Testers have good intuition about inputs • Know the properties of desired inputs • For abstract data types • Encapsulated data structures that satisfy invariants • For XML programs • Interesting (il)legal XML documents • Large number of desired inputs required • Manual generation tedious and error-prone • Code may be evolving • Necessary to change inputs

  5. Manual vs. Automated Generation • Manual: create inputs one by one (empty tree, …) • Proposed automated solution: • User describes properties of a set of inputs • Tool automatically generates the inputs

  6. Challenges for Automated Generation • How to describe desired test inputs? • How to efficiently generate desired test inputs? • Natural input spaces can be enumerated • Sparse: number of desired test inputs is much smaller than the total number of inputs

  7. Two Approaches for Properties • Declarative language for properties of desired inputs [SOFTMC’01, ASE’01, FME’02, OOPSLA’02, SAT’03, MIT’03, J-ASE’04, ISSTA’04, SAT’05, LDTA’06, ALLOY’06] • Properties written in the Alloy modeling language • Uses Alloy Analyzer for generation of test inputs • Imperative language for properties of desired inputs [ISSTA’02, TR’03, MIT’04, ICSE-DEMO’07] • Properties written in a standard implementation language (Java, C#, …) • Developed Korat for generation of test inputs

  8. More on Imperative Language • Tester writes imperative predicate • Code that identifies desired test inputs • Takes an input that can be desired or undesired • Returns a boolean indicating desirability • Advantages • Familiar language • Existing development tools • Predicates can be already present in code

  9. Example Predicate

      class BST {
        Node root;
        class Node { Node left, right; }

        boolean repOk(BST t) {
          if (t.root == null) return true; // empty tree
          Set visited = new HashSet();
          visited.add(t.root);
          List workList = new LinkedList();
          workList.add(t.root);
          while (!workList.isEmpty()) {
            Node current = (Node) workList.removeFirst();
            if (current.left != null) {
              if (!visited.add(current.left)) return false; // sharing
              workList.add(current.left);
            }
            if (current.right != null) {
              if (!visited.add(current.right)) return false; // sharing
              workList.add(current.right);
            }
          }
          return true; // no sharing
        }
      }
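To make the idea concrete, here is a self-contained, runnable adaptation of the slide's repOk predicate (class layout and the two test trees are ours, for illustration only). It accepts a proper tree and rejects a structure in which two parents share a child:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class Main {
    static class Node { Node left, right; }

    // true iff the structure reachable from root is a tree: no node is
    // reached twice (no sharing, no cycles), as in the slide's repOk
    static boolean repOk(Node root) {
        if (root == null) return true;            // empty tree is valid
        Set<Node> visited = new HashSet<>();
        visited.add(root);
        Deque<Node> workList = new ArrayDeque<>();
        workList.add(root);
        while (!workList.isEmpty()) {
            Node current = workList.removeFirst();
            for (Node child : new Node[] { current.left, current.right }) {
                if (child != null) {
                    // Set.add returns false if the element was already there
                    if (!visited.add(child)) return false;  // sharing/cycle
                    workList.add(child);
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Node valid = new Node();
        valid.left = new Node();
        valid.right = new Node();
        System.out.println(repOk(valid));         // proper tree -> true

        Node sharing = new Node();
        sharing.left = new Node();
        sharing.right = sharing.left;             // two parents, one child
        System.out.println(repOk(sharing));       // sharing -> false
    }
}
```

Note how the predicate only inspects the structure; it never constructs inputs itself, which is exactly what lets a tool like Korat drive the generation.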

  10. Bounded-Exhaustive Generation • Finitization bounds input space • Number of objects • Values of fields • Generate all valid inputs up to given bound • Eliminates systematic bias that testers may have • Finds all errors detectable within bound • Avoid some equivalent inputs • Reduces the number of inputs • Preserves capability to find all errors

  11. Korat • Generation from imperative predicates • Korat systematically searches the input space • Brute-force search that enumerates the entire input space and runs the predicate on each input is infeasible for sparse input spaces (#desired << #total) • Must reason about the behavior of the predicate • Korat dynamically monitors execution of predicates • Avoids some equivalent inputs • [Diagram: repOk + finitization → Korat → all desired inputs]
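To see why the input space is sparse, consider the 3-node bound used on the next slide. The brute-force sketch below (ours, not Korat's pruning search) enumerates every assignment of the root and of each node's left/right fields and filters with a repOk-style check:

```java
public class Main {
    static final int N = 3;                       // bound: node objects N0..N2

    // encoding: value 0 means null, values 1..N mean node index v-1
    static boolean isValidTree(int root, int[] left, int[] right) {
        if (root == 0) return false;              // we want exactly N nodes
        boolean[] visited = new boolean[N];
        int[] work = new int[N];                  // at most N nodes pushed
        int top = 0, count = 1;
        visited[root - 1] = true;
        work[top++] = root - 1;
        while (top > 0) {
            int cur = work[--top];
            for (int child : new int[] { left[cur], right[cur] }) {
                if (child != 0) {
                    if (visited[child - 1]) return false;  // sharing or cycle
                    visited[child - 1] = true;
                    count++;
                    work[top++] = child - 1;
                }
            }
        }
        return count == N;                        // all N nodes reachable
    }

    public static void main(String[] args) {
        int valid = 0, total = 0;
        int[] left = new int[N], right = new int[N];
        // brute force over root and all six fields: 4^7 = 16384 candidates
        for (int root = 0; root <= N; root++)
            for (int l0 = 0; l0 <= N; l0++) for (int r0 = 0; r0 <= N; r0++)
            for (int l1 = 0; l1 <= N; l1++) for (int r1 = 0; r1 <= N; r1++)
            for (int l2 = 0; l2 <= N; l2++) for (int r2 = 0; r2 <= N; r2++) {
                left[0] = l0; right[0] = r0;
                left[1] = l1; right[1] = r1;
                left[2] = l2; right[2] = r2;
                total++;
                if (isValidTree(root, left, right)) valid++;
            }
        System.out.println(valid + " valid of " + total + " candidates");
    }
}
```

Only 30 of 16,384 candidates are valid (5 tree shapes times 3! labelings of the node objects). Korat's field-access monitoring prunes most invalid candidates without visiting them one by one, and its isomorphism breaking keeps only one labeling per shape.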

  12. Example Desired Inputs • Trees with exactly 3 nodes • [Figure: the five such trees over nodes N0, N1, N2 with values 1–3]
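The count of five trees on this slide is no accident: the number of binary tree shapes with n nodes is the Catalan number C_n, and C_3 = 5. A small recursion (ours, added for illustration) confirms it:

```java
public class Main {
    // number of binary tree shapes with n nodes (the Catalan number C_n)
    static int shapes(int n) {
        if (n == 0) return 1;                    // the empty tree
        int total = 0;
        for (int l = 0; l < n; l++)              // l nodes left, n-1-l right
            total += shapes(l) * shapes(n - 1 - l);
        return total;
    }

    public static void main(String[] args) {
        System.out.println(shapes(3));           // C_3 = 5
    }
}
```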

  13. Example Testing Scenario • Fault-tree analyzer • Input: fault trees that model system failures • Two implementations • Galileo: dynamic fault-tree analyzer developed for NASA • NOVA: fault-tree analyzer based on precise semantics • [Diagram: predicate + finitization → Korat → pretty printer → inputs fed to both Galileo and NOVA → outputs compared (=?) → pass/fail]

  14. Some Results from Academia • Errors in real applications in academia • Galileo/Nova (fault-tree analyzer) • Previous work with declarative predicates • Extended manually generated test suite with ~10 million automatically generated inputs • ~90 hours to generate, ~48 hours to execute • Uncovered 20 previously unknown distinct bugs in both analyzers • This paper: Korat is orthogonal; recent: Korat better • Intentional Naming System (dynamic networks) • Alloy-alpha Analyzer (constraint solver) • Thorough evaluation for data structures • Textbook data structures and Java collections • Measured structural coverage and mutation score

  15. Korat at Microsoft Research • Korat implemented in the AsmL test tool in the Foundations of Software Engineering group • Predicates in the Abstract State Machine Language (AsmL), not in Java or C# • GUI for setting finitization and manipulating tests • Korat can be used stand-alone or to generate inputs for method sequences • Some extensions • (Controlled) non-exhaustive generation • Generation of complete tests from partial tests • Library for faster generation for common datatypes

  16. AsmL/Korat at Microsoft • Used by testers in several product groups • Enabled finding numerous errors • XML tools • XPath compiler (10 code errors, test-suite augmentation) • Serialization (3 code errors, changing spec) • Web-service protocols • WS-Policy (13 code errors, 6 problems in informal spec) • WS-Routing (1 code error, 20 problems in informal spec) • Others • SSLStream • MSN Authentication • … • Errors found in • Important real-world applications • Code already well tested using best testing practices

  17. Some Comments from Microsoft • Positive comments on AsmL and Korat • “So far our stateless AsmL models are pretty successful.” • “AsmL parameter generation tool is quite convenient and powerful.” • Negative comments on AsmL, not Korat • “Most of our testers prefer to write as much C# as possible.” • “Very difficult to debug AsmL.” • Result: new test tool for C#, Spec Explorer; Korat is Korat

  18. Related Approaches • Specification-based testing • Predicates as specs, bounded-exhaustive generation • Model-based testing • Predicates as specs (Alloy is related to OCL for UML) • Constraint-based generation • Tools typically handle only primitive types, not structures • Random generation • No guarantees; hard to generate inputs for sparse spaces • Grammar-based generation • Hard to generate inputs with complex properties • Combinatorial selection (pair-wise generation) • Easy to enumerate spaces, smart selection of inputs

  19. Ongoing Work • Parallel Korat for generation and execution of structurally complex test inputs • Speedup of 140x on 1,024 machines on GFS • Generation of fewer equivalent inputs • User-provided equivalence, not fixed isomorphism • Imperative generators instead of declarative predicates • Write code that directly generates tests instead of writing predicates/code that describes properties • Faster generation, often more involved • Some results for testing refactoring engines: 9 bugs in Eclipse, 10 bugs in NetBeans
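The imperative-generator idea above can be sketched as follows (our hypothetical illustration, not the tool's implementation): instead of filtering candidate structures through a predicate, the code constructs exactly the desired structures, here all tree shapes with 3 nodes:

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    static class Node { Node left, right; }

    // directly build every binary tree shape with exactly n nodes by
    // splitting the remaining nodes between left and right subtrees
    // (subtrees are shared across generated trees; fine for a sketch)
    static List<Node> generate(int n) {
        List<Node> out = new ArrayList<>();
        if (n == 0) { out.add(null); return out; }   // the empty subtree
        for (int k = 0; k < n; k++)                  // k nodes on the left
            for (Node l : generate(k))
                for (Node r : generate(n - 1 - k)) {
                    Node root = new Node();
                    root.left = l;
                    root.right = r;
                    out.add(root);
                }
        return out;
    }

    public static void main(String[] args) {
        // every tree is valid by construction; no filtering or search needed
        System.out.println(generate(3).size());
    }
}
```

Every output is valid by construction, which is why generators are faster than predicate-driven search, though writing them is often more involved than writing a repOk-style check.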

  20. Conclusion • Testing is important for increasing software reliability • Structurally complex data • Increasingly pervasive in modern systems • Important challenges for software testing • Korat • Automates testing with structurally complex inputs • Efficient bounded-exhaustive generation • Results • Effective for data structures, adopted in industry • Found errors in real-world programs • Try it out: http://korat.sourceforge.net
