
Parameterized Unit Testing


Presentation Transcript


  1. Parameterized Unit Testing with Pex: Introduction. Tao Xie. Adapted from TAP 08 tutorial slides by Nikolai Tillmann, Peli de Halleux, Wolfram Schulte, Microsoft Research, Redmond. http://research.microsoft.com/Pex

  2. Parameterized Unit Testing. Parameterized unit tests:
  • serve as specifications
  • can be leveraged by (automatic) test input generators
  • fit in the development environment and evolve with the code
  [Figure: a parameterized unit test]

  3. Motivation: ResourceReader. Traditional approach:
  • writing many tests that cover the code,
  • making sure it does not crash.
  [Figure: a possible test case, written by hand]

  4. Example: ResourceReader. [Figure: a parameterized unit test written by hand, and a test input generated by Pex]

  5. What is a Unit Test? A unit test is a small program with assertions.
  [TestMethod]
  public void Add() {
      HashSet<int> set = new HashSet<int>();
      set.Add(3);
      set.Add(14);
      Assert.AreEqual(2, set.Count);
  }
  Many developers write such unit tests by hand. This involves:
  • determining a meaningful sequence of method calls,
  • selecting exemplary argument values (the test inputs),
  • stating assertions.

  6. Unit Testing: Benefits
  • Design and specification by example
  • Code coverage and regression testing: confidence in correctness, preserving behavior
  • Short feedback loop: unit tests exercise little code, so failures are easy to debug
  • Documentation

  7. Unit Testing: Problems
  • Quality of unit tests: do they test what is important?
  • Amount of unit tests: how many tests to write?
  • New code with old tests
  • Even a test written as a unit test may be a hidden integration test

  8. Unit Testing: Measuring Quality
  • Coverage: are all parts of the program exercised? (statements, basic blocks, explicit/implicit branches, …)
  • Assertions: does the program do the right thing? (test oracle)
  Experience:
  • Neither high coverage alone nor a large number of assertions alone is a good quality indicator.
  • Only both together are!

  9. What is a Parameterized Unit Test? A parameterized unit test (PUT) is a small program that takes some inputs and states assumptions and assertions. [Figure: a parameterized unit test]

  10. PUTs separate concerns. PUTs separate two concerns:
  1) the specification of external behavior (i.e., assertions),
  2) the selection of internal test inputs (i.e., coverage).
  In many cases, Pex can construct a small test suite with high coverage!

  11. PUTs are algebraic specifications. A PUT can be read as a universally quantified, conditional axiom:
  ∀ string name, string data.
    name ≠ null ⋀ data ≠ null ⇒
      equals(ReadResource(name, WriteResource(name, data)), data)

  12. What is Pex?
  • Pex is a test input generator.
  • Pex starts from parameterized unit tests; generated tests are emitted as traditional unit tests.
  • Pex analyzes execution paths: the analysis works at the level of .NET instructions (MSIL); it is incremental and discovers feasible execution paths; the theorem prover / constraint solver Z3 determines satisfying assignments for the constraint systems representing execution paths.

  13. Review: Test Generation Process
  Hand-written parameterized unit test (xUnit attributes plus Pex attributes):
  // FooTest.cs
  [TestClass, PexClass]
  partial class FooTest {
      [PexMethod]
      void Test(Foo foo) { … }
  }
  Generated unit tests (a partial class completing the test class):
  // FooTest.Test.cs
  partial class FooTest {
      [TestMethod]
      void Test_1() { this.Test(new Foo(1)); }
      [TestMethod]
      void Test_2() { this.Test(new Foo(2)); }
      …
  }
  • The user writes parameterized tests; they live inside a test class.
  • The generated unit tests are plain xUnit tests; Pex is not required to re-execute them.
  http://msdn.microsoft.com/en-us/library/wa80x488(VS.80).aspx

  14. Background: Common Program Analysis Techniques
  • Static analysis: verifies properties for all possible executions; conservative ("over-approximation"); spurious warnings ("false positives").
  • Dynamic symbolic execution (Pex's test input generation technique): verifies properties for many execution paths; finds most errors within configured bounds; no spurious warnings (well, almost; see later).
  • Dynamic analysis (testing): may miss errors ("under-approximation"); no spurious warnings.

  15. Background: Different Ways to Write Contracts
  • API contracts (Eiffel, Spec#, JML, …): at the level of individual actions; goal: guarantee robustness; problem: abstraction is hard, e.g. how to describe a protocol.
  • Parameterized unit tests: scenarios specify functional correctness; confidence in robustness comes from automated test coverage.
  • Unit tests: scenarios spanning multiple actions; goal: functional correctness; problem: missing implementation coverage.

  16. Background: The Testing Problem. Starting from parameterized unit tests as specifications, we can state the testing problem as follows: given a sequential program P with statements S, compute a set of program inputs I such that for all reachable statements s in S there exists an input i in I such that P(i) executes s.

  17. Background: The Testing Problem. Remarks:
  • By sequential we mean that the program is single-threaded.
  • We consider failing an assertion, or violating an implicit contract of the execution engine (e.g. NullReferenceException when null is dereferenced), as special statements.
  • Since reachability is not decidable in general, we aim for a good approximation in practice, e.g. high coverage of the statements/branches/… of the program.

  18. Background: Test Input Generation by Symbolic Execution
  [Figure: at a branch "if (p) then … else …" with path condition C, execution forks into C′ = C ⋀ p and C′ = C ⋀ ⌝p]
  Exploration of all feasible execution paths:
  • Start execution from the initial state with symbolic values as input.
  • Operations yield terms over the symbolic values.
  • At a conditional branch, fork execution for each feasible evaluation of the condition.
  • For each path, we get an accumulated path condition.
  • For each path, check whether the path condition is feasible (using an automated constraint solver / theorem prover).

  19. Symbolic Execution Illustrated
  int Max(int a, int b, int c, int d) {
      return Max(Max(a, b), Max(c, d));
  }
  int Max(int x, int y) {
      if (x > y) return x;
      else return y;
  }
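The example above can be made concrete. The following Python sketch (the names `max2`, `max4`, and `inputs_for_path` are hypothetical, not part of the slides) records the three branch decisions of Max(Max(a, b), Max(c, d)) and brute-forces inputs for each branch combination, the job that Pex delegates to a constraint solver:

```python
from itertools import product

def max2(x, y, trace):
    taken = x > y          # the single branch in Max(int x, int y)
    trace.append(taken)
    return x if taken else y

def max4(a, b, c, d, trace):
    # Max(Max(a, b), Max(c, d)) executes three branches per run
    return max2(max2(a, b, trace), max2(c, d, trace), trace)

def inputs_for_path(path):
    # brute force stands in for the constraint solver: find inputs
    # whose run takes exactly this sequence of branch decisions
    for a, b, c, d in product(range(3), repeat=4):
        trace = []
        max4(a, b, c, d, trace)
        if trace == list(path):
            return (a, b, c, d)
    return None

covered = [p for p in product([True, False], repeat=3)
           if inputs_for_path(p) is not None]
print(len(covered))  # 8: every branch combination is feasible here
```

Three independent branches give 2³ = 8 candidate paths, and for this program all eight are feasible; symbolic execution would discover exactly these by checking each accumulated path condition.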

  20. Symbolic Execution Revisited
  [Figure: the same branching diagram as slide 18]
  Exploration of all feasible execution paths:
  • Start execution from the initial state with symbolic values as input.
  • Operations yield terms over the symbolic values.
  • At a conditional branch, fork execution for each feasible evaluation of the condition.
  • For each path, we get an accumulated path condition.
  • For each path, check whether the path condition is feasible (using an automated constraint solver / theorem prover).
  Two limitations:
  • The constraint solver cannot reason about certain operations (e.g., floating point arithmetic, interactions with the environment).
  • Execution of programs that interact with a stateful environment cannot be forked!

  21. Background: Dynamic Symbolic Execution. Dynamic symbolic execution combines static and dynamic analysis:
  • Execute the program multiple times with different inputs.
  • Build an abstract representation of the execution path on the side; plug in concrete results for operations which cannot be reasoned about symbolically.
  • Use the constraint solver to obtain new inputs, by solving a constraint system that represents an execution path not seen before.
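The loop described above can be sketched in a few lines. This is a toy illustration, not Pex: `program` and `explore` are hypothetical names, branch decisions are recorded as booleans, and a brute-force search over a small domain stands in for handing the negated path condition to Z3:

```python
from itertools import product

def program(x, y, trace):
    # unit under test; each branch decision is recorded in `trace`
    if x > 10:
        trace.append(True)
        if y == x + 1:
            trace.append(True)
            return "deep"
        trace.append(False)
        return "mid"
    trace.append(False)
    return "shallow"

def run(x, y):
    trace = []
    result = program(x, y, trace)
    return tuple(trace), result

def explore(domain=range(-5, 20)):
    # run concretely, record the path, then find inputs that drive
    # an execution path not seen before; repeat until fixpoint
    seen, worklist, results = set(), [(0, 0)], set()
    while worklist:
        x, y = worklist.pop()
        path, result = run(x, y)
        if path in seen:
            continue
        seen.add(path)
        results.add(result)
        for nx, ny in product(domain, repeat=2):
            if run(nx, ny)[0] not in seen:
                worklist.append((nx, ny))
                break
    return results

print(sorted(explore()))  # ['deep', 'mid', 'shallow']
```

Starting from the arbitrary input (0, 0), each iteration covers one new path until all three outcomes of `program` have been observed; this mirrors the run/record/solve cycle of the Pex slides that follow.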

  22. Pex Automatic Test Input Generation: whole-program, white-box code analysis. The exploration loop:
  1) Initially, choose arbitrary test inputs.
  2) Run the test and monitor it, yielding an execution path.
  3) Record the path condition among the known paths.
  4) Choose an uncovered path.
  5) Solve the constraint system to obtain new test inputs; repeat from step 2.
  Finds only real bugs, no false warnings. Result: a small test suite with high code coverage.

  23. (loop, first run) Arbitrary initial inputs, e.g. a[0] = 0; a[1] = 0; a[2] = 0; a[3] = 0; …

  24. (loop, record) Path condition: … ⋀ magicNum != 0x95673948

  25. (loop, solve) Choose the uncovered path … ⋀ magicNum == 0x95673948 and solve it.

  26. (loop, new inputs) The solver yields e.g. a[0] = 206; a[1] = 202; a[2] = 239; a[3] = 190;

  27. (loop, repeat) Continue until all discovered paths are covered.

  28. Analysis of Reachable Program Behavior
  • Most programs are not self-contained; in fact, large parts of the .NET Base Class Library are not written in .NET.
  • Dynamic symbolic execution systematically explores the conditions in the code which the constraint solver understands,
  • and happily ignores everything else, e.g. calls to native code and difficult constraints (such as the precise semantics of floating point arithmetic).
  • Result: an under-approximation, which is appropriate for testing.
  [Figure: layers of code, from safe managed .NET code, through unsafe managed .NET code (with pointers) and unmanaged x86 code, to calls to the external world]

  29. Example
  void Complicated(int x, int y) {
      if (x == Obfuscate(y))
          error();
      else
          return;
  }
  int Obfuscate(int y) {
      return (100 + y) * 567 % 2347;
  }
  Dynamic symbolic execution, starting from Complicated, runs the code twice:
  1. Call Complicated() with arbitrary values, e.g. -312 for x, 513 for y.
     • Record the branch condition "x != (100 + y) * 567 % 2347".
     • error is not hit.
     • Compute values such that "x == (100 + y) * 567 % 2347" (using the constraint solver).
  2. Call Complicated() with the computed values for x and y (e.g. x = (100 + 513) * 567 % 2347, y = 513).
     • error is hit; the coverage goal is reached.
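The two runs can be replayed directly. This is a Python transcription of the slide's Complicated/Obfuscate example; the string return values ("ok"/"error") are an illustrative stand-in for hitting or missing the error() call:

```python
def obfuscate(y):
    # the slide's Obfuscate: (100 + y) * 567 % 2347
    return (100 + y) * 567 % 2347

def complicated(x, y):
    if x == obfuscate(y):
        return "error"   # the hard-to-reach branch
    return "ok"

# run 1: arbitrary inputs (slide: x = -312, y = 513) miss the branch
first = complicated(-312, 513)

# "solving" x == obfuscate(y) for y = 513; Pex asks Z3, but here the
# path condition directly yields the required x
x = obfuscate(513)

# run 2: the computed inputs hit the branch
second = complicated(x, 513)
print(first, second)  # ok error
```

Random testing would almost never guess an x satisfying the modular equation, which is exactly why the negated path condition plus a solver reaches the branch in one additional run.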

  30. Implicit Branches
  • Pex treats all possible exceptional control flow changes like explicit branches.
  • Deterministic exceptions through constraint solving: NullReferenceException, IndexOutOfRangeException, OverflowException, DivideByZeroException.
  • Non-deterministic exceptions through exception injection (if enabled): OutOfMemoryException, StackOverflowException, ThreadAbortException, …
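The idea of an implicit branch can be sketched as follows: every element access carries a hidden "index in range?" decision, so covering a method fully means producing both the normal and the exceptional outcome. This Python analogue (the `read` helper is hypothetical) enumerates candidate indices where Pex would solve for them; note that Python, unlike .NET, accepts some negative indices:

```python
def read(a, i):
    # the implicit branch: a[i] raises IndexError when i is out of
    # range (in Python, i = -1 is still in range: it indexes from
    # the end, unlike .NET array indexing)
    try:
        a[i]
        return "ok"
    except IndexError:
        return "IndexError"

# a generator covering both outcomes must supply an in-range and an
# out-of-range index
outcomes = {read([1, 2, 3], i) for i in range(-1, 5)}
print(sorted(outcomes))  # ['IndexError', 'ok']
```

Treating the exceptional outcome as a branch is what lets the tool report, say, an IndexOutOfRangeException with a concrete triggering input rather than a vague warning.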

  31. Exercise: Implicit Branches
  • Add an ImplicitNullCheck test.
  • Run Pex. How many tests will be necessary?
  • Try it out with other instructions: allocating new arrays, accessing array indices, field dereferences.

  32. Creating Complex Objects. Problem:
  • The constraint solver determines the test inputs, i.e., the initial state of the test.
  • Most classes hide their state (private fields); the state is initialized by a constructor and can be mutated only by calling methods.
  • What sequence of method calls reaches a given target state? There may be no such sequence; in general, this is undecidable.
  Two approaches:
  • (guided) exploration of constructor/mutator methods,
  • testing with class invariants.

  33. Example: ArrayList Specification:
  [PexMethod]
  public void ArrayListTest(ArrayList al, object o) {
      PexAssume.IsTrue(al != null);
      int len = al.Count;
      al.Add(o);
      PexAssert.IsTrue(al[len] == o);
  }
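The PUT above states a property over all lists and all objects: after Add, the element at the old Count is the added object. A Python transcription (the `append_spec` name and the small input driver are illustrative; Pex would generate the inputs by exploration) shows the same separation of specification and input selection:

```python
def append_spec(lst, o):
    # transcription of the PUT's body
    assert lst is not None        # PexAssume.IsTrue in the original
    n = len(lst)                  # int len = al.Count
    lst.append(o)                 # al.Add(o)
    assert lst[n] == o            # PexAssert.IsTrue(al[len] == o)

# a tiny hand-written driver supplies exemplary inputs; a tool like
# Pex would pick them to maximize coverage instead
for base in ([], [1], [1, 2, 3]):
    for o in (0, "x", None):
        append_spec(list(base), o)
print("specification holds for all generated inputs")
```

The specification never mentions concrete values; only the driver does, which is precisely the two-concern split of slide 10.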

  34. Object Creation: Guided Exploration
  • Exploration driver: explicit, or implicit; in Pex, configurable through attributes, e.g. PexExplorableFromConstructorAttribute.
  • Result: exploration of reachable states, but only within the generally configured bounds (an under-approximation).

  35. Object Creation: Class Invariants
  • Write the class invariant as a boolean-valued parameterless method; it refers to private fields and must be placed in the implementation code.
  • Write a special constructor for testing only (it may be marked as "debug only"); the constructor sets the fields and assumes the invariant.
  • Result: exploration of feasible states, which may include states that are not reachable.

  36. Exercise: ArrayList Invariant
  public class ArrayList {
      private Object[] _items;
      private int _size, _version, _capacity;

      private bool Invariant() {
          return this._items != null
              && this._size >= 0
              && this._items.Length >= this._size;
      }

  #if DEBUG
      public ArrayList(object[] items, int size, int capacity) {
          this._items = items;
          this._size = size;
          this._capacity = capacity;
          if (!this.Invariant())
              throw new InvalidOperationException();
      }
  #endif
  }
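The mechanics of the debug-only constructor can be sketched in Python (a simplified analogue of the exercise, without `_version`/`_capacity`): fields are set directly, and states violating the invariant are rejected, which the exploration treats like an assumption violation and silently discards:

```python
class ArrayList:
    # debug-only style constructor: sets private fields directly so a
    # generator can explore feasible states without finding a sequence
    # of method calls that produces them
    def __init__(self, items, size):
        self._items = items
        self._size = size
        if not self._invariant():
            # plays the role of the InvalidOperationException: the
            # state is discarded, not reported as a failure
            raise ValueError("invariant violated")

    def _invariant(self):
        return (self._items is not None
                and 0 <= self._size <= len(self._items))

ok = ArrayList([None] * 4, 2)   # feasible state, constructed directly
try:
    ArrayList([None] * 1, 5)    # infeasible: size exceeds capacity
    rejected = False
except ValueError:
    rejected = True
print(rejected)  # True
```

As slide 35 warns, this explores all invariant-satisfying states, including some that no sequence of public method calls could ever reach.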

  37. Assumptions and Assertions
  void PexAssume.IsTrue(bool c) {
      if (!c) throw new PexAssumptionViolationException();
  }
  void PexAssert.IsTrue(bool c) {
      if (!c) throw new PexAssertionViolationException();
  }
  • Executions with assumption violation exceptions are ignored; they are not reported as errors or test cases.
  • Both assumption violation and assertion violation exceptions are "uncatchable": special code instrumentation prevents catching them, to avoid tainting the coverage data.
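The different treatment of the two exception kinds can be sketched as a small classifier. This Python sketch (the names `pex_assume`, `run_put`, and the three outcome labels are illustrative, not Pex API) shows how an assumption violation discards an execution while an assertion violation reports a failure:

```python
class AssumptionViolation(Exception):
    pass

def pex_assume(c):
    # analogue of PexAssume.IsTrue: a violation discards the execution
    if not c:
        raise AssumptionViolation()

def run_put(put, inputs):
    outcomes = []
    for i in inputs:
        try:
            put(i)
            outcomes.append("pass")
        except AssumptionViolation:
            outcomes.append("ignored")   # not an error, not a test case
        except AssertionError:
            outcomes.append("fail")      # reported as a failure
    return outcomes

def put(x):
    pex_assume(x is not None)  # filters unwanted inputs
    assert x >= 0              # the actual specification

print(run_put(put, [None, 3, -1]))  # ['ignored', 'pass', 'fail']
```

Note that real Pex additionally makes both exception types uncatchable through instrumentation; an ordinary try/except like the one above could otherwise swallow them.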

  38. Exercise • Revisit explicit ArrayList driver • TestEmissionFilter=PexTestEmissionFilter.All

  39. When Does a Test Case Fail?
  • If the test does not throw an exception, it succeeds.
  • If the test throws an exception: assumption violations are filtered out; assertion violations are failures; for all other exceptions, it depends on further annotations.
  • Annotations are a short form of common try-catch-assert test code: [PexAllowedException(typeof(T))], [PexExpectedException(typeof(T))].

  40. Exercise • DateTime.Parse • May throw FormatException

  41. When Does Exploration Stop? Loops and recursion give rise to a potentially infinite number of execution paths. In Pex, the exploration bounds are configurable:
  • TimeOut
  • MaxBranches
  • MaxCalls
  • MaxConditions (the number of conditions that depend on the test inputs)
  • MaxRuns
  • ConstraintSolverTimeOut
  • ConstraintSolverMemoryLimit

  42. The Environment
  • Code we don't want to test: we usually don't want to re-test code that we use and trust, e.g. already-tested libraries.
  • Code not accessible to white-box test generation: there is no symbolic information about uninstrumented code, or about code outside of the known execution machine (.NET for Pex).

  43. Unit Testing vs. Integration Testing
  • Unit test: while it is debatable what a 'unit' is, a 'unit' should be small.
  • Integration test: exercises large portions of a system.
  • Observation: integration tests are often "sold" as unit tests, but white-box test generation does not scale well to integration test scenarios.
  • Possible solution: introduce abstraction layers, and mock the components not under test.

  44. Example: Testing with Interfaces
  AppendFormat(null, "{0} {1}!", "Hello", "World") → "Hello World!"
  .NET implementation:
  public StringBuilder AppendFormat(
      IFormatProvider provider, char[] chars, params object[] args) {
      if (chars == null || args == null)
          throw new ArgumentNullException(…);
      int pos = 0;
      int len = chars.Length;
      char ch = '\x0';
      ICustomFormatter cf = null;
      if (provider != null)
          cf = (ICustomFormatter)provider.GetFormat(typeof(ICustomFormatter));
      …

  45. Stubs / Mock Objects
  • Introduce a mock class which implements the interface; write assertions over the expected inputs and provide concrete outputs.
  public class MFormatProvider : IFormatProvider {
      public object GetFormat(Type formatType) {
          Assert.IsTrue(formatType != null);
          return new MCustomFormatter();
      }
  }
  • Problems: it is costly to write detailed behavior by example, and how many and which mock objects do we need to write?

  46. Parameterized Mock Objects - 1
  • Introduce a mock class which implements the interface, and let an oracle provide the behavior of the mock methods.
  public class MFormatProvider : IFormatProvider {
      public object GetFormat(Type formatType) {
          …
          object o = call.ChooseResult<object>();
          return o;
      }
  }
  • Result: relevant result values can be generated by the white-box test input generation tool, just as other test inputs are generated!
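The key idea above is that a mock's return value is itself a test input. A Python sketch (the class and function names are hypothetical; the constructor argument stands in for Pex's ChooseResult, and a small enumeration stands in for the exploration tool):

```python
class MockFormatProvider:
    # the mock's result is supplied from outside instead of being
    # hard-coded, so the exploration tool can choose it
    def __init__(self, chosen_result):
        self.chosen_result = chosen_result

    def get_format(self, format_type):
        assert format_type is not None   # assertion over the arguments
        return self.chosen_result        # oracle-provided behavior

def code_under_test(provider):
    # branches on the mock's answer, like AppendFormat does on cf
    cf = provider.get_format("ICustomFormatter")
    return "custom" if cf is not None else "default"

# the "tool" explores both relevant mock behaviors; with a hand-written
# stub, one of the two branches would stay uncovered
behaviors = {code_under_test(MockFormatProvider(r))
             for r in (None, object())}
print(sorted(behaviors))  # ['custom', 'default']
```

Because the chosen result flows into a branch of the code under test, the white-box generator can solve for the mock outputs that cover both sides, which is exactly the result claimed on the slide.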

  47. Exercise • Extract Samples • Open sample solution • Show AppendFormat code • Show AppendFormat test • Run Pex

  48. Parameterized Mock Objects with Assumptions and Assertions • Problem: Without further work, parameterized mock objects might lead to spurious warnings.

  49. Parameterized Mock Objects - 2
  • Write assertions over the arguments, and assumptions on the results.
  public class MFormatProvider : IFormatProvider {
      public object GetFormat(Type formatType) {
          Assert.IsTrue(formatType != null);
          …
          object o = call.ChooseResult<object>();
          PexAssume.IsTrue(o is ICustomFormatter);
          return o;
      }
  }
  • (Note: assertions and assumptions are "reversed" when compared to parameterized unit tests.)

  50. Outlook - Mock Objects and Interface Contracts. API-level interface contracts (e.g. written in Spec#) can be leveraged to restrict the behavior of mock objects. Consider the following Spec# contract:
  interface IFormatProvider {
      object GetFormat(Type formatType)
          requires formatType != null;
          ensures result != null && formatType.IsAssignableFrom(result.GetType());
  }
