
Testing

Course notes for CEN 4010



Outline

  • Introduction:

    • terminology and philosophy

  • Factors that influence testing

  • Testing techniques


Why Do We Test?

  • We test to find bugs

    • “Testing is the process of executing a program with the intent of finding errors”

  • A good test intends to find bugs

    • “A good test case is one that has a high probability of detecting an as-yet undiscovered error”

  • A test that finds no bug is a failure

    • “A successful test case is one that detects an as-yet undiscovered error”


Testing vs. Debugging

  • Testing != Debugging

  • While testing may reveal a symptom of an error, it may not uncover the exact cause of the error

  • Debugging is the process of locating the exact cause of an error, and removing that cause


Our Very Real Problem

  • We can never say for an arbitrary program:

    • This software has no errors

    • This software works only as intended

    • This software is safe

      Testing proves the presence, not the absence, of bugs

      Absence of evidence is not evidence of absence

4 Dimensions of Testing

[Diagram omitted from transcript; recoverable labels: "Outputs (i.e. Results)" and a "Good Goal" region.]


Where Does Testing Fit?


When to Test?


What Other Tests Are There?


Test activities: Analysis

  • Test the Models against the Problem specification

    • “Play computer” through the models

  • Analysis models must accurately describe the problem to be solved, and the boundaries of the problem domain


Test activities: Design

  • Test the Models against the Solution domain.

    • “Play computer” through the models

    • Incorporating all required “design” classes

      • GUI frameworks

      • Third-party software and class libraries

      • Collection/Container classes

      • Operating environment classes (wrappers)

      • Interface classes to external resources, e.g.:

        • RDBMS,

        • network,

        • communications.


Test Metrics

  • Traditional Metrics:

    • No. of bugs per 10,000 Source Lines of Code

    • IEEE Standard 982.1

    • McCabe Complexity Measure

    • Halstead Software Science Measures
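Of these, the McCabe (cyclomatic complexity) measure is easy to compute mechanically from a control-flow graph: V(G) = E − N + 2P. A minimal sketch, assuming a hand-built CFG (the node names and example function are illustrative, not from these notes):

```python
# Cyclomatic complexity (McCabe): V(G) = E - N + 2P, where E = edges,
# N = nodes, and P = connected components of the control-flow graph.

def cyclomatic_complexity(edges, num_components=1):
    nodes = {n for edge in edges for n in edge}   # collect unique nodes
    return len(edges) - len(nodes) + 2 * num_components

# Hypothetical CFG of: if (a) {...} else {...}; while (b) {...}
cfg = [
    ("entry", "if"), ("if", "then"), ("if", "else"),
    ("then", "join"), ("else", "join"), ("join", "while"),
    ("while", "body"), ("body", "while"), ("while", "exit"),
]
print(cyclomatic_complexity(cfg))  # 9 edges - 8 nodes + 2 = 3
```

The result, 3, equals the number of binary decisions (the `if` and the `while`) plus one, which is the usual shortcut reading of the measure.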


Test Techniques

  • The “Best of all Possible Worlds” technique:

    • Run it...and make sure it doesn’t crash, then ship it!

  • For the rest of us:

    • Desk Checking

    • Inspections & Walkthroughs

    • Black-box/White-box testing

      • Boundary testing

      • Path testing

      • Code coverage

      • Error message testing

      • Top-down & Bottom-up


Desk Checking

  • Simple, Labor-intensive

  • Procedure

    • Developer has his/her design or code manually reviewed by another developer

    • “a second pair of eyes”

    • Developer must justify decisions to reviewer.

    • Developer reviews/accepts/rejects reviewer’s recommendations.


Inspections & Walkthroughs

  • Formal or informal as needed.

  • Well-known technique.

  • Procedure:

    • Group gathers to review artifacts (from documents to code).

    • Group composed of “role” representatives (e.g. “user”, “QA”, “Standards”, etc.).

    • Rules of conduct apply to participants.

    • Responsibility for verification and validation of artifact is removed from the developer.



Inspections:

  • Author narrates, statement by statement, the logic of his/her program.

  • Reviewers listen to the author, raise questions and ask for clarification in their attempt to uncover errors.

  • Ironically, most of the errors are discovered by the author as he/she “teaches” the program to the group.



Walkthroughs:

  • Programmer does not narrate to the group.

  • One of the group plays the role of “tester”.

  • Tester comes to review armed with a small set of simple test cases to apply to the program.

  • Group “play computer” by performing manual simulation of the system using the test data.

  • Data are important as a vehicle to stimulate discussion.

  • Most errors are uncovered by questioning of the author rather than “execution” of data.


Inspections & Walkthroughs

  • When should I perform one?

    • Anytime, so long as the product under review is complete (i.e. tangible, understandable, and objective).

    • After each milestone with delivered artifacts (e.g. Requirements Spec., Analysis Models, Design Spec., pseudo-code or flowcharts prepared, program code written, etc.).

    • Yourdon’s milestones [YOUR79]:

      • after the design artifacts are completed

      • after the code is prepared but not compiled

      • after the first compilation

      • after the first “clean” compilation

      • after the first test data set has been executed successfully

      • after the programmer thinks all test cases have been executed successfully


Inspections & Walkthroughs

  • Who should participate?

    • Author/developer/presenter/producer

    • Moderator/coordinator

    • Scribe/secretary

    • Standards Bearer

    • Maintenance Expert

    • User Representative

    • GUI Expert


White-Box Testing

  • White-box testing is the testing of the underlying implementation of a piece of software (e.g., source code) without regard to the specification (external description) for that piece of software.

  • The goal of white-box testing of source code is to identify such items as

    • (unintentional) infinite loops,

    • paths through the code which appear to be feasible but which cannot actually be executed, and

    • dead (unreachable) code.
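A tiny illustration of the last item (the function, its name, and its values are hypothetical): only examination of the implementation, not of the spec, reveals a branch that no input can reach.

```python
# Hypothetical example of dead code that white-box analysis can catch.
# The second branch is unreachable, because the first condition already
# covers every x < 5 (any such x also satisfies x <= 10).

def classify(x):
    if x <= 10:
        return "small"
    elif x < 5:            # dead code: never executed for any input
        return "tiny"
    return "large"

# A statement-coverage measurement would report the "tiny" branch as
# never executed, no matter what test set is run against it.
print(classify(3))   # "small" -- not "tiny"
```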


Black-Box Testing

  • Black-box testing is the testing of a piece of software without regard to its underlying implementation.

  • Specifically, it dictates that test cases for a piece of software are to be generated based solely on an examination of the specification (external description) for that piece of software.

  • The goal of black-box testing is to demonstrate that the software being tested does not adhere to its external specification. (Note that if there is no "external specification" it will be difficult to conduct black-box testing.)
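As a sketch of spec-driven test selection (the leap-year spec, function name, and cases are invented for illustration, not taken from these notes): every test case is derived from a clause of the external specification, and the implementation remains opaque to the tester.

```python
# Hypothetical spec: leap_year(y) is True iff y is divisible by 4,
# except century years, unless the year is divisible by 400.

def leap_year(y):   # implementation under test (a black box to the tester)
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

# One case per clause of the spec -- none chosen by reading the code:
spec_cases = {2024: True,    # divisible by 4
              2023: False,   # not divisible by 4
              1900: False,   # century exception
              2000: True}    # divisible-by-400 exception to the exception
for year, expected in spec_cases.items():
    assert leap_year(year) == expected
print("all spec-derived cases pass")
```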

Test Categories

Black Box Tests (from [MYER79]):

  • Boundary-value Analysis

  • Equivalence Partitioning

  • Cause-Effect Graphing

  • Error Guessing

  • Error Message Generation

White Box Tests:

  • Statement Coverage

  • Decision/Condition Coverage

  • Multiple-Condition Coverage


Black Box Tests

  • Boundary-value Analysis

    • Uses test cases “generated on, and immediately around, the boundaries of the input and output for a given piece of software.”

  • Equivalence Partitioning

    • Divide test sets into equivalence partitions: “collections of items which can all be regarded as identical at a given level of abstraction. I.e., a set of data items which will all evoke the same general behavior from a given software module.”

  • Cause-Effect Graphing

    • Generates test sets based on input combinations, specified by a combinatorial logic network or Boolean relations, and the expected outputs from those input combinations.
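The first two techniques can be sketched against a hypothetical `grade(score)` function (the spec, names, and values are invented for illustration): one representative input per equivalence class, plus values on and immediately around each boundary.

```python
# Hypothetical spec: 0-59 F, 60-69 D, 70-79 C, 80-89 B, 90-100 A;
# anything outside 0..100 is rejected.

def grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    for floor, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= floor:
            return letter
    return "F"

# Equivalence partitioning: one representative per class of inputs the
# spec treats identically (each valid band, plus each invalid side).
representatives = [-5, 30, 65, 75, 85, 95, 105]

# Boundary-value analysis: values on and immediately around each edge.
boundary_values = [-1, 0, 59, 60, 69, 70, 100, 101]

print(grade(59), grade(60), grade(100))  # F D A
```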


Black Box Tests

  • Error Guessing

    • An intuitive and empirical technique of selecting test cases which elicit failures.

    • A.k.a. “a gift”.

  • Error Message Generation

    • Selecting test cases which will elicit the error messages defined in the software under test.
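Error-message testing can be sketched as one test per message the software defines (the `parse_age` function and its messages are hypothetical, invented for this example):

```python
# Hypothetical function defining exactly two error messages; the test
# set below deliberately elicits each of them once.

def parse_age(text):
    if not text.strip().isdigit():
        raise ValueError("age must be a number")
    age = int(text)
    if age > 150:
        raise ValueError("age out of range")
    return age

for bad_input, expected_msg in [("abc", "age must be a number"),
                                ("200", "age out of range")]:
    try:
        parse_age(bad_input)
    except ValueError as e:
        assert str(e) == expected_msg   # the defined message appeared

print(parse_age("42"))  # 42 -- the happy path still works
```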


White Box Tests

  • Statement Coverage

    • Use test cases which cause each executable statement to be executed at least once.

  • Decision/Condition Coverage

    • All statements are executed at least once, and all binary decisions have a TRUE and FALSE outcome at least once, all exceptions are raised at least once, and all possible interrupts are forced to occur at least once.
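The binary-decision part of this criterion can be sketched as follows (the function and cases are hypothetical; the exception and interrupt requirements are omitted for brevity): each decision must come out both TRUE and FALSE across the test set.

```python
# Hypothetical function with two independent binary decisions.

def ship_cost(weight, express):
    cost = 5.0
    if weight > 10:        # decision 1
        cost += 2.0
    if express:            # decision 2
        cost *= 2
    return cost

# Two cases suffice for decision coverage here: each decision is TRUE
# in one case and FALSE in the other.
assert ship_cost(12, True) == 14.0   # both decisions TRUE
assert ship_cost(5, False) == 5.0    # both decisions FALSE
print("decision coverage achieved with 2 cases")
```

Note that two cases achieve decision coverage here only because the decisions are independent; entangled conditions generally need more cases.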


White Box Tests

  • Multiple-Condition Coverage

    • Use test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.
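A sketch of the difference from plain decision coverage (the function and values are hypothetical): for a decision built from two atomic conditions, all four combinations must be exercised, not merely one TRUE and one FALSE outcome of the whole decision.

```python
from itertools import product

# Hypothetical decision with two atomic conditions: member AND total > 100.
def discount(member, total):
    return 0.1 if member and total > 100 else 0.0

# Multiple-condition coverage: every combination of condition outcomes.
cases = list(product([True, False], [150, 50]))   # all 4 combinations
for member, total in cases:
    expected = 0.1 if (member and total > 100) else 0.0
    assert discount(member, total) == expected

print(len(cases))  # 4 combinations exercised
```

Decision coverage alone would be satisfied by just two of these cases; the other two are what multiple-condition coverage adds.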


Path Testing

  • Why can’t we just test all possible paths?

  • The flowchart shown on the slide [omitted from this transcript] has approximately 100 trillion (10^14) possible paths that may be executed!

  • At 1 ms per test, working 24 hrs/day, 365 days/year, it would take about 3,170 years to test this simple structure.
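The arithmetic behind a figure of this size is easy to check. A sketch, assuming the omitted flowchart is the classic loop example from [MYER79] (a loop executed up to 20 times, each pass taking one of 5 branches; this reconstruction is an assumption, since the figure did not survive the transcript):

```python
# Distinct paths through a loop of up to 20 iterations, each iteration
# choosing one of 5 branches: 5 + 5^2 + ... + 5^20.
paths = sum(5 ** i for i in range(1, 21))
print(f"paths ~ {paths:.2e}")        # on the order of 10^14 ("100 trillion")

MS_PER_TEST = 1                       # the slide's 1 ms/test assumption
seconds = paths * MS_PER_TEST / 1000
years = seconds / (365 * 24 * 3600)
print(f"years at 1 ms/test: {years:.0f}")   # thousands of years, as claimed
```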
