Software analysis and testing
Software Analysis and Testing

Paolo Tonella

Alessandro Marchetto

Cu D. Nguyen

Mariano Ceccato

Software Engineering (SE) unit, Fondazione Bruno Kessler (FBK)


Paolo Tonella Cu D. Nguyen Mariano Ceccato Alessandro Marchetto

Software Engineering research unit

at Fondazione Bruno Kessler


FBK (more than 350 researchers) is a research organization of the Autonomous Province of Trento that promotes research in the areas of science, technology and humanities.

FBK objectives are to: (i) conduct research that obtains recognition at an international level; (ii) carry out applied research of strategic importance to the province; (iii) publicize scientific results and promote economic development; and (iv) encourage innovation throughout the province.

FBK CIT

Three main areas of Information Technology:

- Engineering

- Content

- Interaction

FBK is organized into Research Units.

  • Research Units are research groups that are above critical mass but still manageable.

  • Research Units include senior and young researchers, postdocs, PhD students, project managers, system architects and programmers.

  • Research Units are strongly encouraged towards collaborations and projects that can exploit synergies among different competences.

Web site

Material for the course:

- Course objectives and program

- Course agenda

- Notes and slides of the lectures

- Project deliverables

- Exam rules and dates

Lecture schedule

Wed. 16.30 -18.30 room A107

Fri. 14.30 -16.30 room A107

Please come to the lab lectures with your laptop, or ask us to provide one if you do not have a laptop.

Exam

The exam consists of an oral discussion of the project carried out during the laboratory.

  • During the course we will work on a project.

  • The exam will consist of some questions on the project implementation plus one question on the theory behind the project.

  • Project deliverables include both implementation artifacts (Eclipse project, test cases, etc.) and documentation artifacts (reports). Both are mandatory for admission to the exam.

  • The next lecture will be devoted to the presentation of the course project.

The project must be delivered one week before the exam date!

Objective

  • This course aims at teaching students how to analyze and test a software system when it is evolved to accommodate a set of change requirements (e.g., adding new functionality, fixing bugs, adapting, or restructuring), by executing a software project which involves:

    • definition of acceptance tests for the change requirements;

    • definition of tests for the user interface;

    • definition of unit tests for the modules implemented to realize the change requirements;

    • application of adequacy testing criteria to the implemented modules;

    • regression testing w.r.t. the preserved functionalities of the evolved software

Program


- Which testing to apply?

- How to define test cases?

- How to automate test cases execution?

- How to decide when to stop testing?

  • Tools:

  • FitNesse

  • JUnit

  • MuJava

  • MuClipse

  • Clover

  • Jumble

  • Emma


  • Testing:

    • Acceptance Test

    • GUI testing

    • Unit test

    • Structural test (path and data flow)

    • Mutation test

    • Automated path generation and path testing

    • Regression test

    • Test case prioritization

Background and Context:

  • Software maintenance and evolution

  • Code analysis

  • Software testing


  • program analysis and understanding

  • user interface test

  • acceptance test creation and execution

  • unit test creation and execution

  • coverage and mutation test

  • regression test

Agenda

Intro & course project: Sep 14, 16, 21, 23

Acceptance testing: Sep 28, 30; Oct 5

GUI testing: Oct 7, 12

Unit testing: Oct 14, 19

Analysis and testing theory: Oct 21, 26, 28; Nov 2, 30

Debugging: Nov 4, 9, 11, 16

Coverage and mutation testing: Nov 18, 23, 25

Regression testing: Dec 2, 7

Advanced topic: Dec 14, 16


Software Maintenance


Software Testing

Software maintenance

Software maintenance has been defined as “the modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a changed environment.” (ANSI/IEEE, 1983)

Maintenance activities can be classified into four categories:

  • Perfective maintenance

  • Adaptive maintenance

  • Corrective maintenance

  • Preventive maintenance

Software maintenance

Maintenance activities are difficult:

  • Continuing change.

  • Increasing complexity.

  • Fundamental law of program evolution.

  • Conservation of organizational stability (invariant work rate).

  • Conservation of familiarity (perceived complexity).

Software maintenance steps l.jpg
Software maintenance steps





[Slide diagram: change location → change implementation → ripple effects]

Legacy system

  • They were implemented years ago (around 1970)

  • Their technology became obsolete (obsolete languages, language styles, hardware, …)

  • They have been maintained for a long time (around 30 years)

  • Their structure is deteriorated and does not facilitate understanding

  • Their documentation (if it exists) has become obsolete

  • Original authors are not available

  • Attention!

  • They contain business rules not recorded elsewhere

  • They cannot be easily replaced (important!)

  • They represent a large investment

Legacy dilemma

What should we do with legacy code?

  • build the new system from scratch, or

  • try to understand the legacy code and reconstitute it in a new form.

Reverse Engineering

Reverse engineering is the process of taking something (a device, an electrical component, a car, a piece of software, …) apart and analyzing its workings in detail, usually with the intention of constructing a new device or program that does the same thing.

  • Forward engineering is the traditional process of moving from high-level abstractions to the physical implementation of a system.

  • Reverse engineering is “the inverse” of forward engineering: it moves from the implementation back to an “abstract code representation”.

Re-engineering

  • Re-engineering is the examination (reverse engineering) of a system to reconstitute it (forward engineering) in a new form.

  • This process may include modifications with respect to new requirements not met by the original system (Semantics cannot be preserved).

  • The re-engineering process takes many forms, depending on its objectives. Sample objectives are:

    - code migration/porting (e.g., from C to C++)

    - re-engineering code in order to reuse it

Restructuring

Restructuring is the transformation from one representation to another at the same relative abstraction level - while preserving the system external behavior (functionality and semantics).


Code level:

  - from an unstructured (“spaghetti”) form to a structured (“goto-less”) form

  - conversion of a set of “if” statements into a “case” structure

Design level: introduction of design patterns (e.g., the Model-View-Controller architecture).

Program analysis

  • Program analysis is the (automated) inspection of a program to infer some of its properties. In some cases, properties can be inferred without running the program (static analysis). In other cases, properties can be inferred only by running the program (dynamic analysis).

  • Examples are:

    • Type analysis (type inference)

    • Dead code analysis

    • Clone analysis

    • Pointer analysis
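As a tiny illustration of dead code analysis (the class and method names below are invented for the example), a static analyzer can prove that a branch guarded by an unsatisfiable condition is never executed, without ever running the program:

```java
public class DeadCodeDemo {
    // Illustrative only: a branch that static analysis can prove unreachable,
    // because its guard (x > 0 && x < 0) is unsatisfiable.
    static int f(int x) {
        if (x > 0 && x < 0) {
            return -1;          // dead code: no input can reach this statement
        }
        return 2 * x;           // the only live return
    }

    public static void main(String[] args) {
        System.out.println(f(5));   // prints 10; the dead branch never fires
    }
}
```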

Software Testing

  • Software testing is a key activity, both in software development and in software maintenance. Its objective is opposed to that of development: instead of making the system work, it aims at breaking it, thus revealing the presence of defects. This is the main reason why development and testing teams should be separate.

  • Code analyses are extremely helpful to testing. Specifically, structural testing (aka white-box testing), as opposed to functional testing (aka black-box testing), assumes the possibility to access the internal structure of the program, analyze it, and derive testing criteria from such an analysis. The outcome supports:

    • Test case production/automatic generation.

    • Definition of stopping criteria.


  • One of the practical methods commonly used to detect the presence of errors (failures) in a computer program is to test it for a set of inputs. Testing detects errors; only exhaustive testing, usually infeasible, can prove correctness (absence of errors).

[Slide diagram: the program under test is run on inputs I1, I2, I3, …, In, …; for each run, the obtained results are compared with the expected results (“= ?”) to decide whether the output is correct.]


Examples of test cases

  • Test input description:

    1. Log in to <Abc page> as administrator
    2. Go to the Reports page
    3. Click the ‘Schedule reports’ button
    4. Add reports
    5. Update

  • Expected results: the report schedule should be added to the report schedule table.

  • Test case for sort:

  • Test data: <12 -29 32 >

  • Expected output: -29 12 32
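The sort test case above can be turned into executable form. This sketch (the class name is illustrative, and Java's built-in sort stands in for the system under test) compares the obtained results against the expected output:

```java
public class SortTestDemo {
    // Runs the slide's sort test case: input <12 -29 32>, expected <-29 12 32>.
    static boolean runTestCase() {
        int[] testData = {12, -29, 32};       // test input
        int[] expected = {-29, 12, 32};       // expected output (the oracle)
        java.util.Arrays.sort(testData);      // system under test
        return java.util.Arrays.equals(testData, expected);
    }

    public static void main(String[] args) {
        // the test passes iff obtained results match expected results
        System.out.println(runTestCase() ? "PASS" : "FAIL");   // prints PASS
    }
}
```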

Terminology …

  • Failure: an observable incorrect behavior or state of a given system. The system displays a behavior that is contrary to its specifications/requirements. Thus, a failure is tied (only) to system executions/behaviors, and it occurs at runtime when some part of the system enters an unexpected state.

  • Fault: (commonly named “bug” or “defect”) a defect in a system. A failure may be caused by the presence of one or more faults in a given system. However, the presence of a fault in a system may or may not lead to a failure: e.g., a system may contain a fault in a fragment of code that is never exercised, so that fault does not lead to a failure.

  • Error: the developer mistake that produces a fault. It is often caused by human activities such as typing mistakes.


Example …

  • Failure: for input x = 3 the program outputs y = 9. Failure!

    • This is a failure of the system, since the correct output would be 6

  • Fault: the fault that causes the failure is in line 5: the * operator is used instead of +.

  • Error: the error that led to this fault may be:

    • a typing error (the developer wrote * instead of +)

    • a conceptual error (e.g., the developer doesn't know how to double a number)
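The slide's code listing is not reproduced in this transcript; the following hypothetical reconstruction, consistent with the description above, shows a doubling function whose fault is the * operator used where + was intended:

```java
public class DoubleDemo {
    // Hypothetical reconstruction of the slide's missing listing.
    // Intended behavior: return the double of x, i.e., x + x.
    static int doubleOf(int x) {
        return x * x;   // FAULT: '*' used instead of '+' (the slide's "line 5")
    }

    public static void main(String[] args) {
        // FAILURE observed at runtime: for x = 3 the program outputs 9,
        // while the specification requires 6.
        System.out.println(doubleOf(3));
    }
}
```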

Terminology …

  • Test Case: input sequence and associated expected output

    • Test Suite: a set of test cases for a system

  • Testing: testing is the process of executing a program with the intent of finding errors

    • Testing cannot guarantee the absence of faults,

    • Strategies for defining test suites,

    • Formal methods (e.g., model checking) can be used to statically verify software properties; this is not testing.

  • Debugging: finding and fixing faults in the code


Sources for test case definition …

  • The requirements of the program (its specification)

  • An informal description

  • A set of scenarios (use cases)

  • A set of sequence diagrams

  • A state machine

  • The system itself (the code or the execution of the application)

  • A set of selection criteria

  • Heuristics (e.g., guidelines for testing)

  • Experience (of the tester)

Testing: three main questions …

A randomly selected set of inputs is often not adequate …

  • At which level should testing be conducted?

    • System

    • Unit

    • Integration

  • How to choose inputs?

    • Considering the program as a black box

      • using the specifications/use cases/requirements

    • Considering the program as a white box

      • using the structure

  • How to identify the expected output?

    • Test oracles

Test phases

    • Acceptance testing – this checks whether the end-user functionalities are actually delivered. It is often a contractual prerequisite for the user to accept and pay for the software.

    • Unit testing – this is testing of a single function, procedure, class. It is usually done by the developer, not by a separate testing team.

    • Integration testing – this checks that units tested in isolation work properly when put together. It often requires drivers and stubs to simulate the missing components while integrating the system.

    • System testing – here the goal is to ensure that the whole system works properly.

    • Regression Testing – this checks that the system preserves its functionality after maintenance and/or evolution.

    Acceptance vs. Unit Testing

    • Acceptance Tests are specified by the customer and analyst to test that the overall system is functioning as required (Did developers build the right system?).

    • Acceptance tests typically test the entire system, or some large chunk of it.

    • Unit Tests are tests written by the developers to test a functionality as they write it.

    • Unit tests typically test each unit of a system in isolation.
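A minimal unit-test sketch may help here. The Account class below is invented for illustration, and plain assertions stand in for JUnit's @Test/assertEquals; the point is that a single unit is exercised in isolation:

```java
public class AccountUnitTest {
    // Hypothetical unit under test (not taken from the slides): a tiny Account.
    static class Account {
        private int balance = 0;
        void deposit(int amount) { balance += amount; }
        int getBalance() { return balance; }
    }

    // In JUnit this would be an @Test method; a plain boolean check keeps the
    // sketch self-contained with no external dependencies.
    static boolean testDeposit() {
        Account account = new Account();   // the unit is tested in isolation
        account.deposit(100);
        account.deposit(50);
        return account.getBalance() == 150;
    }

    public static void main(String[] args) {
        System.out.println(testDeposit() ? "PASS" : "FAIL");
    }
}
```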

Iterative software development

Testing activities happen at different points in the process:

  • Acceptance tests are written before development.

  • Unit tests are written and executed during development.

  • Acceptance tests are executed after development.

Testing tools

[Slide diagram of testing tools, from high level to low level:]

  • FIT/FitNesse (high level)

  • Web UI testing

  • Performance and load testing

  • JUnit (low level)
Badly designed systems make testing difficult

    • We have a thick GUI that has program logic. The interfaces between the modules are not clearly defined.

    • Testing of specific functions cannot be isolated.

    • Testing has to be done through the GUI, using GUI-test drivers.

    • Testing is difficult.

Well-architected applications make testing simple

    Design for testability

    • The GUI does not contain any program logic other than dealing with presentation.

    • The interfaces between the modules are well defined.

    • This gives us testing advantages: unit and system/acceptance testing are simpler, and they can be automated.

How good are these test cases?

    Adequacy = level of confidence of a test suite applied to the system under test

    Several adequacy criteria exist:

    • coverage of the requirements (what confidence do we have if 100% of the implemented features are covered?)

    • coverage of the code (what if 100% of the code is covered?)

    • fault detection (what if no fault is found?)

    Coverage Testing

    Coverage measures describe the degree to which a program has been tested

    There are many types of coverage measures:

    • statements

    • branches

    • paths

    • methods, classes

    • requirement specifications, etc.

    1 scanf("%d", &x);
    2 a = x + 1;
    3 b = x - 1;
    4 if (a < 10)
    5     x++;
    6 if (b > 20)
    7     x--;
    8 printf("%d\n", x);

    Example with two input data:

    (1) x = 7 ⇒ a = 8 and b = 6; executed statements: [1, 2, 3, 4, 5, 6, 8]

    (2) x = 22 ⇒ a = 23 and b = 21; executed statements: [1, 2, 3, 4, 6, 7, 8]
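To make the statement sets concrete, here is a Java transcription of the C fragment above, instrumented by hand (for illustration only; tools like Clover or Emma do this automatically) to record which numbered statements each input executes:

```java
import java.util.TreeSet;

public class CoverageDemo {
    // Java transcription of the slide's C fragment, instrumented to record
    // which numbered statements execute for a given input x.
    static TreeSet<Integer> run(int x) {
        TreeSet<Integer> executed = new TreeSet<>();
        executed.add(1);                      // 1: read x
        int a = x + 1; executed.add(2);       // 2
        int b = x - 1; executed.add(3);       // 3
        executed.add(4);                      // 4: if (a < 10)
        if (a < 10) { x++; executed.add(5); } // 5
        executed.add(6);                      // 6: if (b > 20)
        if (b > 20) { x--; executed.add(7); } // 7
        executed.add(8);                      // 8: print x
        return executed;
    }

    public static void main(String[] args) {
        System.out.println(run(7));   // [1, 2, 3, 4, 5, 6, 8]
        System.out.println(run(22));  // [1, 2, 3, 4, 6, 7, 8]
        // together the two inputs achieve 100% statement coverage (8 of 8)
    }
}
```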

    Mutation Testing

    • Mutant: a copy of the original program with a small change (seeded fault)

    • Mutant killed: if its behaviors/outputs differ from those of the original program

    • Mutation score: the ratio of killed mutants to the total number of mutants
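These definitions can be illustrated with a hypothetical original/mutant pair (the doubling example again; all names are invented). A test input kills the mutant only if it makes the two outputs differ:

```java
public class MutantDemo {
    static int original(int x) { return x + x; }  // original program: double x
    static int mutant(int x)   { return x * x; }  // mutant: '+' mutated to '*'

    // A test input kills the mutant iff the two programs' outputs differ.
    static boolean kills(int x) {
        return original(x) != mutant(x);
    }

    public static void main(String[] args) {
        System.out.println(kills(3));  // true: 6 vs 9, the mutant is killed
        System.out.println(kills(2));  // false: both return 4, mutant survives
    }
}
```

A weak test suite (e.g., one that only tries x = 2) would let this mutant survive, which is exactly the signal mutation testing provides about test-suite adequacy.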

Regression testing

Selective re-testing of a system or component to verify that modifications have not caused unintended effects (ripple effects).

Regression testing can be conducted at each of the test levels: unit, integration, system.

[Slide diagram: version X evolves into version X+1; the affected test cases are marked in red]

  • find the affected test cases (red)

  • change the affected test cases (red)

  • execute them

  • define new test cases, if necessary

Test case prioritization

    The faults revealed by a test case are unknown until the test case is executed and its output is evaluated against the oracle.

    The order in which test cases are executed affects:

    • The rate of fault detection: good orderings reveal faults earlier than bad ones;

    • The rate of code coverage: good orderings meet the required coverage level earlier than bad ones.

      This information can be used in regression testing to prioritize the re-execution of test cases, thus finding faults earlier.
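One common prioritization heuristic is the greedy "additional coverage" ordering, sketched below under the assumption that per-test statement coverage is already known (all names and data are illustrative, not from the slides):

```java
import java.util.*;

public class GreedyPrioritization {
    // Greedy "additional coverage" ordering: repeatedly pick the test case
    // that covers the largest number of not-yet-covered statements.
    static List<String> prioritize(Map<String, Set<Integer>> coverage) {
        Map<String, Set<Integer>> remaining = new LinkedHashMap<>(coverage);
        Set<Integer> covered = new HashSet<>();
        List<String> order = new ArrayList<>();
        while (!remaining.isEmpty()) {
            String best = null;
            int bestGain = -1;
            for (Map.Entry<String, Set<Integer>> e : remaining.entrySet()) {
                Set<Integer> extra = new HashSet<>(e.getValue());
                extra.removeAll(covered);           // statements this test adds
                if (extra.size() > bestGain) {
                    bestGain = extra.size();
                    best = e.getKey();
                }
            }
            covered.addAll(remaining.remove(best)); // schedule the best test
            order.add(best);
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> cov = new LinkedHashMap<>();
        cov.put("t1", Set.of(1, 2, 3));
        cov.put("t2", Set.of(1, 2, 3, 4, 5));
        cov.put("t3", Set.of(6));
        System.out.println(prioritize(cov)); // [t2, t3, t1]
    }
}
```

The ordering meets full coverage after two tests (t2 and t3), so a bad ordering such as [t1, t3, t2] would reach the same coverage level strictly later.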

Conclusions

    • Software maintenance implies several activities (reverse engineering, analysis, etc.)

    • One of the most relevant and hard activities is software testing

    • Different types of testing exist: acceptance, unit, etc.

    • The motivation for acceptance testing is demonstrating working functionalities from the end-user perspective.

    • The motivation for unit testing is finding faults in the unit the developer is working on.

    • Several issues impact testing: test case definition, test case execution, adequacy criteria, etc.

    • Badly designed systems make testing difficult.

    Additional references

    - V.R. Basili and R.W. Selby. Comparing the Effectiveness of Software Testing Strategies. IEEE Transactions on Software Engineering, 1987.

    - Jim Heumann. Generating Test Cases from Use Cases. Online IBM journal, 2001.

    - Peter Zielczynski. Traceability from Use Cases to Test Cases. Online IBM journal.

    - R.C. Martin and G. Melnik. Tests and Requirements, Requirements and Tests: A Möbius Strip. IEEE Software, 2008.

    - Y.K. Malaiya, M.N. Li, J.M. Bieman and R. Karcich. Software Reliability Growth with Test Coverage. IEEE Transactions on Reliability.

    - Q. Yang, J.J. Li and D.M. Weiss. A Survey of Coverage-Based Testing Tools. The Computer Journal, 2007.

    - B.H. Smith and L. Williams. Should Software Testers Use Mutation Analysis to Augment a Test Set? Journal of Systems and Software, 2009.

    - F. Ipate and R. Lefticaru. State-based Testing is Functional Testing! TAIC PART 2007.

    - J. Offutt, S. Liu, A. Abdurazik and P. Ammann. Generating Test Data from State-Based Specifications. Software Testing, Verification and Reliability, 2003.