
Qualitätssicherung von Software (SWQS)


  1. Qualitätssicherung von Software (SWQS) Prof. Dr. Holger Schlingloff Humboldt-Universität zu Berlin und Fraunhofer FOKUS 15.7.2014: Modellbasierter Test (Jaroslav Svacina)

  2. Specification-based Testing • Constructing the test suite from the specification • as opposed to constructing it from the implementation (= code-based testing) • code-based testing cannot detect missing requirements • specification-based testing cannot detect additional (unspecified) features [Venn diagram: specified behaviour, implemented behaviour, tested behaviour]

  3. Model-based Testing • SUT is compared to a formal model (requirements) • Automation of test case design • Early validation of requirements • Traceability • Easy maintenance of test suites for regression testing

  4. Model-based Design and Model-based Testing • Often: same syntax, different pragmatics • e.g. test cases can be formulated in Java • e.g. system spec can be formulated with LTL [Diagram with nodes: Requirements, System Spec, Test Spec, System Model, Test Model, System Impl., Test Cases]

  5. System Models vs. Test Models • Models can help in the development of complex systems • The more concrete the formalism, the closer it is to an implementation • executable code may be generated from state diagrams • additional information such as timing, communication, or variables may be added • A test model, in contrast to an implementation model, describes properties of the targeted system • not aiming at a complete description of the system • not aiming at the generation of executable code

  6. Model-based Testing

  7. Model-based Testing

  8. Modelling • Describe system requirements for test generation • High abstraction level

  9. Modelling Notations • Pre/post (state-based) notations • state variables, operations (pre-/postconditions) • B, Spec Explorer (C#), UML with OCL • Transition-based notations • focus on describing the transitions between states of the system • UML State Machines, Simulink Stateflow • …
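The pre/post style above can be illustrated with a minimal sketch in Python. This is not B or Spec Explorer syntax; the class, its state variable, and the operation names are hypothetical, chosen to match the toaster example on the following slides. Each operation is guarded by a precondition on the state variables and establishes a postcondition.

```python
# A minimal pre/post-style model sketch (illustrative, not B or Spec Explorer
# syntax): state variables plus operations with pre- and postconditions.

class ToasterModel:
    def __init__(self):
        self.heating = False          # state variable

    def start(self):
        assert not self.heating      # precondition: toaster is idle
        self.heating = True
        assert self.heating          # postcondition: heating element is on

    def stop(self):
        assert self.heating          # precondition: toasting in progress
        self.heating = False
        assert not self.heating      # postcondition: heating element is off
```

A test generator for such a model searches for operation sequences whose preconditions are satisfiable, then checks the postconditions against the SUT.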

  10. Example: A Kitchen Toaster • A toaster • what is the technical process? • what are the states, events and signals of the (technical) process? • what are the boundaries of the system? • which information processing is to be done? • what are the interfaces between technical system and information processing component?

  11. Modeling the Toaster • User Interfaces: turning knob, side lever, stop button • Internals: heating element, retainer latch • Extra: defrost button • First approach: timing is neglected (timer event) • Advanced approach: timing depends on various parameters

  12. Toaster – Simple State Machine
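A transition-based model like the simple state machine on this slide can be sketched as a transition table. The state and event names below are illustrative assumptions (the slide's figure is not reproduced in the transcript); they follow the interface elements listed on slide 11.

```python
# Hypothetical transition table for the simple toaster state machine.
# Keys are (state, event) pairs; values are successor states.
TRANSITIONS = {
    ("idle",     "lever_down"):    "toasting",
    ("toasting", "timer"):         "ejecting",   # timing neglected: timer event
    ("toasting", "stop_button"):   "ejecting",
    ("ejecting", "toast_removed"): "idle",
}

def run(events, state="idle"):
    """Replay a sequence of external events; a KeyError signals an
    event that is not specified in the current state."""
    for ev in events:
        state = TRANSITIONS[(state, ev)]
    return state
```

A test case for this model is simply an event sequence together with the expected resulting state.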

  13. Toaster – Hierarchical Design

  14. Toaster – with Variables

  15. Test Generation

  16. Test Generation • Algorithms • graph traversal yields abstract test cases • Dijkstra's shortest-path algorithm • depth-first and breadth-first search • evolutionary algorithms • Model transformation • Static analysis for input parameters • Partition testing • classification tree method • boundary value analysis

  17. All-States with Dijkstra and DFS
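The all-states strategy named above can be sketched as a breadth-first search: since the state graph is unweighted, BFS yields the same shortest paths as Dijkstra's algorithm, and the shortest event sequence to each reachable state becomes one abstract test case. The toaster transition table below is the same hypothetical example used earlier.

```python
from collections import deque

def all_states_tests(transitions, init):
    """BFS over the state graph: for every reachable state, record the
    shortest event sequence reaching it (one abstract test case each)."""
    paths = {init: []}               # state -> event sequence reaching it
    queue = deque([init])
    while queue:
        s = queue.popleft()
        for (src, ev), dst in transitions.items():
            if src == s and dst not in paths:
                paths[dst] = paths[s] + [ev]
                queue.append(dst)
    return paths

# Hypothetical toaster transitions (illustrative state/event names):
TRANSITIONS = {
    ("idle",     "lever_down"):    "toasting",
    ("toasting", "timer"):         "ejecting",
    ("ejecting", "toast_removed"): "idle",
}
```

Every state in `paths` is covered, so this test suite achieves 100% all-states coverage; all-transitions coverage would additionally require each key of the transition table to appear in some path.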

  18. Evolutionary Test Generation

  19. Test Selection

  20. Test Generation from State Machines • Define a test case to be any execution path • How to generate such paths? How many paths to generate? When to stop testing? → coverage criteria, e.g. • all-states • all-transitions • all-transition-pairs • all-n-paths • Test goal: one particular item to be covered

  21. Tests for State Machines • For a state machine, a test case is just a finite sequence of external triggers and actions • A test goal is a particular entity of the state machine (region, pseudostate, transition, n-path, …); for each test and goal it is defined whether the test reaches this goal • The coverage of a test suite is the percentage of reached test goals • the coverage can either be measured during the execution of a test suite, or statically before execution
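The coverage definition above translates directly into code. The sketch below assumes the "reaches" relation is supplied as a predicate; as an illustrative instance, a test is represented by its visited-state sequence and reaches a state goal if that state occurs in the sequence (all-states coverage).

```python
def coverage(test_suite, goals, reaches):
    """Coverage of a test suite: the percentage of test goals reached by
    at least one test. `reaches(test, goal)` is the model-specific
    predicate deciding whether a test reaches a goal."""
    hit = {g for g in goals if any(reaches(t, g) for t in test_suite)}
    return 100.0 * len(hit) / len(goals)

# Illustrative instance for all-states coverage: a test is the sequence of
# states it visits, and it reaches a state goal if that state occurs in it.
def visits(test, state):
    return state in test
```

Measuring this during execution requires instrumenting the SUT or its test adapter; measuring it statically only requires the model and the generated paths.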

  22. Coverage for State Machines • common coverage criteria for UML state machines • all-states • all-transitions • all configurations: all combinations of parallel substates • n-transition-coverage means all reachable transition sequences of length n are covered (esp.: pairs) • All-loop-free-paths • All-n-loop-paths • decision coverage, condition coverage, MC/DC, ... • all-requirements
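For the transition-pair criterion listed above, the set of test goals can be enumerated mechanically: every pair of transitions where the second starts in the state the first ends in. A sketch, assuming the same flat `{(state, event): successor}` transition-table representation used in the earlier toaster examples (hierarchy and parallel regions are ignored here):

```python
def transition_pairs(transitions):
    """Enumerate the test goals for transition-pair (2-transition) coverage:
    all pairs (t1, t2) where t2's source state equals t1's target state."""
    flat = [(src, ev, dst) for (src, ev), dst in transitions.items()]
    return [(t1, t2) for t1 in flat for t2 in flat if t1[2] == t2[0]]
```

Only pairs that are actually reachable from the initial state count as goals; the same idea extends to sequences of length n for n-transition coverage.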

  23. Scenarios as a Test Selection Criterion • Available means of describing test selection criteria are hard to use and not sufficient • Scenarios give domain experts influence on the test selection process

  24. Scenarios as a Test Selection Criterion

  25. Automated Test Generation Tools • More than a dozen commercial and experimental research tools available • Usually quite costly (>10K€ per license) • Most generate tests from UML State Machines • Tools can be compared e.g. by mutation analysis

  26. Conformiq

  27. ParTeG – Partition Test Generator • Since we didn’t like the pricing, and wanted to experiment with different technologies, a Ph.D. student built his own… • http://parteg.sourceforge.net/ • UML class & state transition diagrams, connected by OCL • Plugin for Eclipse, supports XMI import / export
