
Testing OO Software



  1. Testing OO Software. Yves Le Traon, Jean-Marc Jézéquel, Benoit Baudry. Triskell Team, Rennes, France. bbaudry@irisa.fr, http://www.irisa.fr/triskell/

  2. Testing object-oriented software • Initially, a certain distrust among testers…

  3. Mixed feelings at the code level • The Yo-Yo effect (Binder, Offutt) • [class diagram: an inheritance chain A <- B <- C, where A declares m1() to m4(), B redefines m1() to m3() and C redefines m1() and m4(); A.m1() calls m4(), which calls m2(), and some redefinitions call super(); the figure traces the flow of method calls needed to execute C.m1(), bouncing up and down the hierarchy]

  4. Generating unit tests for OO • Functional testing: • sequence diagrams -> test data • Test execution environment: • the JUnit framework (a sketch follows below) • Structural testing: • criteria not supported by the tools • Mutation analysis for OO: • measuring test effectiveness = trust!
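
  Below is a minimal sketch of what such a functional unit test can look like with the JUnit framework; the Account class, its methods and the expected values are hypothetical, chosen only to illustrate turning a scenario (e.g. from a sequence diagram) into test data and an assertion.

      import org.junit.Test;
      import static org.junit.Assert.*;

      // Hypothetical component under test, for illustration only.
      class Account {
          private int balance;
          void deposit(int amount) { balance += amount; }
          int getBalance() { return balance; }
      }

      public class AccountTest {
          // Test data derived, for instance, from one sequence-diagram scenario.
          @Test
          public void depositIncreasesBalance() {
              Account a = new Account();
              a.deposit(50);
              a.deposit(20);
              assertEquals(70, a.getBalance());
          }
      }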

  5. Context • Software components • Object-oriented analysis and design • The specific structure of OO programs calls for dedicated testing techniques • R. V. Binder, "Testing Object-Oriented Systems: Models, Patterns and Tools". Addison-Wesley, 1999.

  6. Some important notions • Code reading >> dynamic testing • Non-regression testing is fundamental and requires the ability to store and replay the tests • Contracts can serve as embedded oracles (a sketch follows below)
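
  A minimal sketch of a contract used as an embedded oracle, written in plain Java rather than Eiffel or iContract; the BoundedSet class and its clauses are hypothetical. Any test input that violates the require/ensure clauses makes the run fail, so no expected output has to be written by hand.

      import java.util.ArrayList;
      import java.util.List;

      public class BoundedSet {
          private final int capacity;
          private final List<Integer> items = new ArrayList<>();

          public BoundedSet(int capacity) { this.capacity = capacity; }

          public boolean has(int x) { return items.contains(x); }
          public boolean full()     { return items.size() == capacity; }
          public boolean empty()    { return items.isEmpty(); }

          public void put(int x) {
              if (full()) throw new IllegalStateException("precondition not_full violated"); // require
              if (!has(x)) items.add(x);
              assert has(x) && !empty() : "postcondition violated";                          // ensure
          }
      }

  Run the tests with assertions enabled (java -ea) so that the postconditions actually act as oracles.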

  7. Testing "unit" components • Trusted components / trusting components • Self-testable components • Mutation analysis • Component assembly • integration testing

  8. I) Trusting OO Components: Estimating the Quality of the Component • Based on consistency assessment • Using mutation techniques

  9. The Triangle View of a component • [diagram: Specification, Implementation and embedded tests as the three corners of a triangle] • Specification = contract between the client and the component • V & V = checking the Implementation against the Specification (the oracle, e.g. embedded tests) • Measure of trust based on the consistency of the three views

  10. Assessing Consistency • Assume the component passes all its tests according to its specification • Component’s implementation quality linked to its test & specification quality • How do we estimate test & spec consistency? • Introduce faults into the implementation • mutant implementations • Check whether the tests catch the faults • the tests kill the mutants

  11. Limited Mutation Analysis

      put (x: INTEGER) is
            -- put x in the set
         require
            not_full: not full
         do
            if not has (x) then          -- 1
               count := count + 1        -- 2
               structure.put (x, count)  -- 3
            end -- if
         ensure
            has: has (x)
            not_empty: not empty
         end -- put

  Example mutation: Remove-inst, which deletes one of the numbered instructions.

  12. Overall Process • [diagram: generation of mutants] From class A, generate mutants mutantA1 … mutantA6 and run Selftest A on each of them (automated step). • Test execution: if an error is detected, mutantAj is killed; if no error is detected, mutantAj stays alive. • An alive mutant needs a (non-automated) diagnosis: 1) enhance the Selftest, 2) add contracts to the specification (incomplete specification), or 3) consider mutantAj as an equivalent mutant. • When every mutant is killed: SelfTest OK!

  13. About Alive Mutants • What if a mutant is not killed? • Tests inadequate => add more tests • Specification incomplete => add precision • Equivalent mutant => remove mutant (or original!) • e.g., x<y ? x:y <=> x<=y ? x:y
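
  A small Java illustration of the last bullet, with hypothetical names: replacing < by <= in a minimum function yields an equivalent mutant, because when x == y both branches return the same value, so no test case can kill it.

      public class MinExample {
          // Original implementation.
          static int min(int x, int y) { return x < y ? x : y; }

          // Mutant produced by relational operator replacement (< becomes <=).
          // Equivalent: when x == y both branches yield the same result,
          // so every test case gives the same verdict on the original and the mutant.
          static int minMutant(int x, int y) { return x <= y ? x : y; }

          public static void main(String[] args) {
              for (int x = -2; x <= 2; x++)
                  for (int y = -2; y <= 2; y++)
                      assert min(x, y) == minMutant(x, y);
              System.out.println("No input distinguishes the mutant from the original.");
          }
      }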

  14. Quality Estimate = Mutation Score • Q(Ci) = mutation score for Ci = di / mi • di = number of mutants killed • mi = number of mutants generated for Ci • WARNING: Q(Ci) = 100% does not imply bug-free • it depends on the mutation operators (see next slide) • Quality of a system made of components: Q(S) = Σ di / Σ mi
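
  A small sketch of these two formulas with hypothetical figures (the components and counts are made up); it simply divides killed mutants by generated mutants, per component and over the whole system.

      public class MutationScore {
          static double componentScore(int killed, int generated) {
              return (double) killed / generated;                 // Q(Ci) = di / mi
          }

          static double systemScore(int[] killed, int[] generated) {
              int d = 0, m = 0;
              for (int i = 0; i < killed.length; i++) { d += killed[i]; m += generated[i]; }
              return (double) d / m;                              // Q(S) = sum of di / sum of mi
          }

          public static void main(String[] args) {
              int[] killed    = {60, 45};                         // hypothetical components C1, C2
              int[] generated = {75, 50};
              System.out.printf("Q(C1) = %.2f  Q(S) = %.2f%n",
                      componentScore(killed[0], generated[0]),
                      systemScore(killed, generated));
          }
      }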

  15. Quality Estimate: system test quality • [diagram: four sets of tests in the system (selftests, specification tests, implementation tests, system tests) with mutation scores 120/184, 65/100, 378/378 and 234/245] • Q(S) = (120 + 65 + 378 + 234) / (184 + 100 + 378 + 245) ≈ 87.9 %

  16. Mutation operators (1) • Exception Handling Fault • causes an exception • Arithmetic Operator Replacement • replaces e.g., ‘+’ by ‘-’ and vice-versa. • Logical Operator Replacement • logical operators (and, or, nand, nor, xor) are replaced by each of the other operators; • expression is replaced by TRUE and FALSE.

  17. Mutation operators (2) • Relational Operator Replacement • relational operators (<, >, <=, >=, =, /=) are replaced by each one of the other operators. • No Operation Replacement • Replaces each statement by the Null statement. • Variable and Constant Perturbation • Each arithmetic constant/variable: ++ / -- • Each boolean is replaced by its complement.

  18. Mutation operators (3) • Referencing Fault Insertion (Alias/Copy) • Nullify an object reference after its creation. • Suppress a clone or copy instruction. • Insert a clone instruction for each reference assignment.
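
  A hedged Java sketch of a few of the operators above applied to a small, hypothetical method; each mutant changes exactly one token of the original.

      public class MutationOperators {
          // Original (hypothetical) method.
          static int price(int quantity, int unit, boolean discount) {
              int total = quantity * unit;
              if (discount && total > 100) total = total - 10;
              return total;
          }

          // Arithmetic Operator Replacement: '*' becomes '+'.
          static int priceAOR(int quantity, int unit, boolean discount) {
              int total = quantity + unit;
              if (discount && total > 100) total = total - 10;
              return total;
          }

          // Relational Operator Replacement: '>' becomes '>='.
          static int priceROR(int quantity, int unit, boolean discount) {
              int total = quantity * unit;
              if (discount && total >= 100) total = total - 10;
              return total;
          }

          // Variable and Constant Perturbation: the constant 10 becomes 11.
          static int priceVCP(int quantity, int unit, boolean discount) {
              int total = quantity * unit;
              if (discount && total > 100) total = total - 11;
              return total;
          }

          public static void main(String[] args) {
              // One test input: it kills the ROR mutant (100 vs 90).
              System.out.println(price(10, 10, true) + " vs " + priceROR(10, 10, true));
          }
      }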

  19. Outline of a Testing Process • Select either: • Quality driven: select the wanted quality level Qwanted(Ci) • Effort driven: maximum number of test cases = MaxTC • Mutation analysis and test-case enhancement: • while Q(Ci) < Qwanted(Ci) and nTC <= MaxTC • enhance the test cases (nTC++) • apply the test cases to each mutant • eliminate equivalent mutants • compute the new Q(Ci)
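
  A rough sketch of this loop under assumed interfaces (Mutant, TestSuite and askTesterForNewTestCase are placeholders, not part of the authors' tool); it stops as soon as the target score is reached or the test-case budget is exhausted.

      import java.util.List;

      public class TestingProcess {
          interface Mutant    { boolean isKilledBy(TestSuite tests); boolean isEquivalent(); }
          interface TestSuite { int size(); void add(Object newTestCase); }

          // Quality/effort-driven enhancement loop, as outlined on the slide above.
          static double enhanceUntil(double wantedQ, int maxTC, TestSuite tests, List<Mutant> mutants) {
              mutants.removeIf(Mutant::isEquivalent);             // eliminate equivalent mutants
              double q = score(tests, mutants);
              while (q < wantedQ && tests.size() <= maxTC) {
                  tests.add(askTesterForNewTestCase());           // enhance the test cases (nTC++)
                  mutants.removeIf(Mutant::isEquivalent);
                  q = score(tests, mutants);                      // compute the new Q(Ci)
              }
              return q;
          }

          // Apply the test cases to each mutant and count the killed ones.
          static double score(TestSuite tests, List<Mutant> mutants) {
              long killed = mutants.stream().filter(m -> m.isKilledBy(tests)).count();
              return mutants.isEmpty() ? 1.0 : (double) killed / mutants.size();
          }

          static Object askTesterForNewTestCase() { return new Object(); } // placeholder
      }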

  20. A test report • [table listing, for each mutant: an identifier, an equivalence flag (EQ), the mutated method (empty, full, index_of, make, put), the original code fragment, the mutated fragment and a comment] • 119 mutants, 99 dead, 15 equivalent • MS = 99/104 = 95 %

  21. A short case study • Building a self-testable library: date-time

  22. A short case study • Results

                                         p_date.e   p_time.e   p_date_time.e
      Total number of mutants               673        275          199
      Number of equivalent mutants           49         18           15
      Mutation score                        100%       100%         100%
      Initial contracts efficiency         10.35%     17.90%         8.7%
      Improved contracts efficiency        69.42%     91.43%       70.10%
      First version test size               106         93           78
      Reduced test size                      72         33           44

  23. A short case study • Robustness of the selftest against an infected environment • p_date_time selftest robustness

  24. Partial conclusion • An effective method to build (some level of) ‘trust’ • estimate the quality of a component based on the consistency of its 3 aspects: • specification (contract) • tests • implementation • a good basis for integration and non-regression testing • A tool is currently being implemented • for languages supporting Design by Contract (DbC): • Eiffel, Java (with iContract), UML (with OCL+MSC) ...

  25. II) Transition steps: integration and regression • Integration & Non-Regression Testing Based on Component Self-Tests

  26. Efficient Strategies for Integration and Regression Testing of OO Systems • Where do we begin? How do we organize the tests? • Integration plan: what we have to deal with… • the problem: interdependencies, i.e. loops of dependencies between the components of an architecture

  27. Efficient Strategies for Integration and Regression Testing of OO Systems • [diagram: a system S layered into packages pck1 … pck4 without dependency cycles] • A simple solution, with constraints on the design: • forbid loops in the architecture • often possible, but such local optimizations are not always optimal for the architecture, and designing interdependent components may also be relevant • The solution presented here: • takes any model • optimizes the way loops of interdependent components are handled

  28. Efficient Strategies for Integration and Regression Testing of OO Systems • The Test Dependency Graph: preliminary modeling • Two basic kinds of test dependencies (carried by inheritance and client/provider relationships): • contractual dependencies = specified in the public part of the classes • implementation dependencies = not contractual, included in the body of internal methods • (a minimal data-structure sketch of this graph follows the class-to-class relationships below)

  29. Efficient Strategies for Integration and Regression Testing of OO Systems • The Test Dependency Graph: preliminary modeling • 2 types of nodes: • class node (e.g. class A) • method node (e.g. method mA1 of class A), nested inside its class node

  30. Efficient Strategies for Integration and Regression Testing of OO Systems • The Test Dependency Graph: preliminary modeling • Semantics of a directed edge A -> B: A is test-dependent on B • 3 types of edges: • class_to_class • method_to_class • method_to_method

  31. Efficient Strategies for Integration and Regression Testing of OO Systems • Method_to_class edge: a method signature refers to another class, e.g. mA1(... v1: B ...) makes method mA1 of class A depend on class B • Method_to_method edge (a refinement, expressed in an action language (ASL/OCL) or at code level): a method body calls another method, e.g. mA1(... v: B ...) { ... v.mB1 ... } makes mA1 depend on mB1

  32. Efficient Strategies for Integration and Regression Testing of OO Systems • Class_to_class edges are derived from the UML relationships between classes: inheritance, interface realization, dependency, association (with navigability), association class, aggregation and composition
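
  As announced above, here is a minimal data-structure sketch (hypothetical names, not the authors' tool) of such a test dependency graph: typed nodes (class or method), typed edges, and method nodes nested inside their class node.

      import java.util.ArrayList;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      public class TestDependencyGraph {
          enum NodeKind { CLASS, METHOD }
          enum EdgeKind { CLASS_TO_CLASS, METHOD_TO_CLASS, METHOD_TO_METHOD }

          static class Node {
              final String name; final NodeKind kind; final Node owner;   // owner = enclosing class for a method node
              Node(String name, NodeKind kind, Node owner) { this.name = name; this.kind = kind; this.owner = owner; }
          }
          static class Edge {
              final Node from, to; final EdgeKind kind;                   // "from" is test-dependent on "to"
              Edge(Node from, Node to, EdgeKind kind) { this.from = from; this.to = to; this.kind = kind; }
          }

          final Set<Node> nodes = new HashSet<>();
          final List<Edge> edges = new ArrayList<>();

          Node addClass(String name)              { Node n = new Node(name, NodeKind.CLASS, null); nodes.add(n); return n; }
          Node addMethod(Node owner, String name) { Node n = new Node(name, NodeKind.METHOD, owner); nodes.add(n); return n; }
          void addEdge(Node from, Node to, EdgeKind kind) { edges.add(new Edge(from, to, kind)); }

          public static void main(String[] args) {
              TestDependencyGraph g = new TestDependencyGraph();
              Node a = g.addClass("A"), b = g.addClass("B");
              Node mA1 = g.addMethod(a, "mA1"), mB1 = g.addMethod(b, "mB1");
              g.addEdge(a, b, EdgeKind.CLASS_TO_CLASS);        // e.g. A inherits from or is associated with B
              g.addEdge(mA1, b, EdgeKind.METHOD_TO_CLASS);     // mA1(... v: B ...)
              g.addEdge(mA1, mB1, EdgeKind.METHOD_TO_METHOD);  // mA1 calls v.mB1
              System.out.println(g.edges.size() + " test dependencies recorded");
          }
      }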

  33. Efficient Strategies for Integration and Regression Testing of OO Systems • Normalization rules • Problem: the preliminary test dependency graph is not a classical graph (method nodes are nested inside class nodes) • Solution 1: flatten it into a class-to-class graph, with a loss of information • Solution 2: map it, by a homomorphism, onto a mixed classes-and-methods graph, with no loss of information

  34. Efficient Strategies for Integration and Regression Testing of OO Systems • Integration Testing • Given a TDG, how do we order the components? • By minimizing the number of stubs • realistic stub => a dedicated simulator or an "old" version of the component: a single stub C' simulates the behavior of C while A and B (which both test-depend on C) are tested

  35. Efficient Strategies for Integration and Regression Testing of OO Systems • Minimizing the number of stubs • specific stub => deterministic component behavior: one stub C' dedicated to the test of A and another stub C'' dedicated to the test of B
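
  A small Java sketch of the two kinds of stubs, with hypothetical interfaces: the not-yet-integrated component C is replaced either by one realistic stub (a rough simulator shared by every client) or by a specific stub whose answers are fixed, hence deterministic, for the test of a single client.

      public class StubExample {
          // Interface of the not-yet-integrated component C (hypothetical).
          interface C { int lookup(String key); }

          // Realistic stub: a simplified simulator of C, reusable when testing both A and B.
          static class RealisticStubC implements C {
              public int lookup(String key) { return key.length(); }   // crude but general behavior
          }

          // Specific stub: hard-coded answers, deterministic for one test scenario of client A.
          static class StubCForA implements C {
              public int lookup(String key) { return 42; }              // exactly what A's test expects
          }

          // Client under test, depending on C.
          static class A {
              private final C c;
              A(C c) { this.c = c; }
              boolean accepts(String key) { return c.lookup(key) > 10; }
          }

          public static void main(String[] args) {
              System.out.println(new A(new StubCForA()).accepts("x"));      // deterministic: true
              System.out.println(new A(new RealisticStubC()).accepts("x")); // depends on the simulator
          }
      }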

  36. Efficient Strategies for Integration and Regression Testing of OO Systems • An efficient strategy (1) • [example test dependency graph over nodes a … l] • Finding an optimal ordering is NP-complete (exhaustive search costs n!)

  37. Efficient Strategies for Integration and Regression Testing of OO Systems • An efficient strategy (2): Tarjan's algorithm • Determination and ordering of the strongly connected components • [example: the nodes a … l collapse into the components A, B, C plus the isolated nodes (a) and (e)] • resulting order: [(e) or C] then [A or B] then [(a)] • complexity linear in the number of nodes
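
  A compact sketch of Tarjan's strongly-connected-components algorithm on a small adjacency-list graph (the graph literal is arbitrary, not the example of the slide); each resulting component is one block of the integration order, and components come out in reverse topological order, i.e. providers before their clients.

      import java.util.*;

      public class TarjanScc {
          private final Map<String, List<String>> adj;
          private final Map<String, Integer> index = new HashMap<>(), low = new HashMap<>();
          private final Deque<String> stack = new ArrayDeque<>();
          private final Set<String> onStack = new HashSet<>();
          private final List<List<String>> components = new ArrayList<>();
          private int counter = 0;

          TarjanScc(Map<String, List<String>> adj) { this.adj = adj; }

          List<List<String>> run() {
              for (String v : adj.keySet())
                  if (!index.containsKey(v)) dfs(v);
              return components;                       // reverse topological order of components
          }

          private void dfs(String v) {
              index.put(v, counter); low.put(v, counter); counter++;
              stack.push(v); onStack.add(v);
              for (String w : adj.getOrDefault(v, List.<String>of())) {
                  if (!index.containsKey(w)) { dfs(w); low.put(v, Math.min(low.get(v), low.get(w))); }
                  else if (onStack.contains(w)) low.put(v, Math.min(low.get(v), index.get(w)));
              }
              if (low.get(v).equals(index.get(v))) {   // v is the root of a strongly connected component
                  List<String> scc = new ArrayList<>();
                  String w;
                  do { w = stack.pop(); onStack.remove(w); scc.add(w); } while (!w.equals(v));
                  components.add(scc);
              }
          }

          public static void main(String[] args) {
              Map<String, List<String>> g = Map.of(
                  "a", List.of("b"), "b", List.of("c"), "c", List.of("a", "d"), "d", List.<String>of());
              System.out.println(new TarjanScc(g).run()); // e.g. [[d], [c, b, a]]: test d first, then the cycle {a, b, c}
          }
      }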

  38. Efficient Strategies for Integration and Regression Testing of OO Systems • An efficient strategy (3): Bourdoncle's algorithm • Inside a strongly connected component, the candidate node to stub is the one with the maximum number of fronds (back edges) • Break the connected component at that node and reapply Tarjan to the rest • [example: the nodes f, i, j, g, h of one component are labelled with their frond counts; after breaking the components, internal orders such as [l, k], [c, b, d] and [g, h, j, i, f] are obtained] • overall order: [(e) or C] then [A or B] then [(a)]

  39. Efficient Strategies for Integration and Regression Testing of OO Systems • Result = a partially ordered tree capturing all the possible integration strategies • [example tree over the nodes a … l] • Optimized algorithm: #specific stubs = 4, #realistic stubs = 3 • Random selection: #specific stubs = 9.9, #realistic stubs = 5

  40. Exercise • Build an integration test plan for a system made of classes A, B, C, D, E, F, G, H, I, J • [dependency graph over these classes]

  41. Case Studies • SMDS • Telecommunication switching system: Switched Multimegabit Data Service • running on top of connected networks such as the Broadband Integrated Services Digital Network (B-ISDN) • based on the Asynchronous Transfer Mode (ATM) • 22 Kloc • GNU Eiffel Compiler • open-source Eiffel compiler • 70 Kloc of Eiffel code • (http://SmallEiffel.loria.fr)

  42. Case study SMDS UML diagram

  43. Case study SMDS Test Dependency Graph

  44. SMDS realistic stubs

  45. Efficient Strategies for Integration and Regression Testing of OO Systems • A comparison with 4 strategies: • RC: Random Component selection • MC: Most Used Components • RT: Random Thread of Dependencies • MT: Most Used Components' Threads of Dependencies (intuitive integration) • the random strategies are run 100,000 times

  46. Results summary • [bar charts: minimum, mean and maximum numbers of specific stubs and of realistic stubs for the SMDS and GNU Eiffel case studies, comparing the RC, MC, RT, MT and optimized strategies]

  47. Possible variants • Mixed big-bang / strictly incremental integration • Also plan the context the component depends on (Pascale Thévenod)

  48. [diagram: a subsystem that can be integrated as one block (it would need at least 1 stub) versus the remaining part of the system]

  49. Mixed big-bang / strictly incremental integration • [example dependency graph over the classes A … J from the exercise]

  50. Mixed big-bang / strictly incremental integration • [example dependency graph over the classes A … J] • with 3 classes per connected component the problem is NP-complete … again!
