
Lesson 09: Software Verification, Validation and Testing

  • Includes:

  • Software Testing Techniques

  • Intro to Testing

Includes materials adapted from Pressman: Software Engineering: A Practitioner’s Approach, Fifth Edition, McGraw-Hill, 2000

Vi and Ira Glickstein



Software Testing

Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.


We Design Test Cases to...

  • have a high likelihood of finding errors

  • exercise the internal logic of software components

  • exercise the input and output to uncover errors in program function, behavior, and performance

The goal is to find the maximum number of errors with the minimum amount of effort and time!


Testing is a “Destructive” Activity

  • Testing means designing and executing test cases to “break” or “demolish” the software.

  • You must change your mindset during this activity.

The objective is to find errors; therefore, errors found are good, not bad. Tell that to a manager!


Testing Objectives

  • Execute a program with the intent of finding an error

  • Good test case has high probability of finding an as-yet undiscovered error

  • Successful test case finds an as-yet undiscovered error

Successful testing uncovers errors.


Testing demonstrates ...

  • Software functions work as specified

  • Behavioral and performance requirements appear to be met

  • Data collected during testing provides an indication of software reliability and quality

TESTING CANNOT SHOW THE ABSENCE OF ERRORS AND DEFECTS; it can only show that errors and defects are present.


Basic Principles of Testing

  • All tests should be traceable to customer requirements

  • Plan testing long before testing begins: plan and design tests during design, before any code has been generated

  • Pareto Principle: 80% of errors are found in 20% of the components

  • Start small and progress to large: first test individual components (unit test), then clusters of integrated components (integration test), then the whole system

  • Exhaustive testing is not possible, but we can ensure that all conditions have been exercised

  • Not all testing should be done by the developer; an independent third party is needed


Testability

  • Operability—it operates cleanly

  • Observability—the results of each test case are readily observed

  • Controllability—the degree to which testing can be automated and optimized

  • Decomposability—control scope of testing

  • Simplicity—reduce complex architecture and logic to simplify tests

  • Stability—few changes are requested during testing

  • Understandability—of the design and documents

Testability refers to how easily a product can be tested. Design software with “testability” in mind.


What Testing Shows

  • Errors

  • Requirements conformance

  • Performance

  • An indication of quality


Who Tests the Software?

Developer: understands the system, but will test “gently” and is driven by “delivery.”

Independent Tester: must learn about the system, but will attempt to break it and is driven by quality.


Software Testing

  • Black Box Testing Methods

  • White Box Testing Methods

  • Strategies for Testing


Black Box Testing

  • Based on specified function, on the requirements

  • Tests conducted at the software interface

  • Demonstrates that the software functions are operational, input is properly accepted, output is correctly produced, and the integrity of external information is maintained

  • Uses the SRS as basis for construction of tests

  • Usually performed by independent group


White-Box Testing

Our goal is to ensure that all statements and conditions have been executed at least once.


White Box Testing -- I

  • Based on internal workings of a product; requires close examination of software

  • Logical paths are tested by providing test cases that exercise specific sets of conditions and/or loops

  • Check status of program by comparing actual results to expected results at selected points in the software

Exhaustive path testing is impossible


Exhaustive Testing

[Figure: control-flow graph containing a loop that may execute up to 20 times.]

There are 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!!


White Box Testing -- II

  • Logic errors and incorrect assumptions usually occur with special case processing

  • Our assumptions about flow of control and data may lead to errors that are only uncovered during path testing

  • We make typing errors; some are uncovered by the compiler (syntax, type checking), but others are only uncovered by testing; a typo may lie on an obscure path

Black box testing can miss these types of errors


Selective Testing

[Figure: flow graph of the “loop < 20 times” program with one selected path highlighted; instead of all possible paths, only selected paths are exercised.]


Software Testing Techniques: Testing Analysis


Test Case Design

Uncover errors in a complete manner with a minimum of effort and time!


Basis Path Testing -- I

  • A white-box testing technique proposed by Tom McCabe

  • Use this technique to derive a logical measure of complexity

  • Use as a guide for defining a “basis set” of execution paths

  • Test cases derived to execute the basis set are guaranteed to execute every statement at least one time during testing


Cyclomatic Complexity

  • This is a quantitative measure of the logical complexity of a program.

  • Used in conjunction with basis path testing, it defines the number of independent paths in the basis set

  • It provides an upper bound on the number of tests required to ensure that all statements have been executed at least once.

  • See http://www.mccabe.com/pdf/nist235r.pdf for a more detailed paper on McCabe’s cyclomatic complexity.


Basis Path Testing -- II

First, we compute the cyclomatic complexity from the flow graph:

V(G) = number of simple decisions + 1 (here the decisions labeled 1, 2, 3 give 3 + 1)

or

V(G) = number of enclosed areas + 1 (here the areas labeled A, B, C give 3 + 1)

In this case, V(G) = 4


Cyclomatic Complexity

A number of industry studies have indicated that the higher the V(G), the higher the probability of errors.

[Figure: plot of number of modules vs. V(G); modules in the high-V(G) range are more error prone.]


Basis Path Testing -- III

[Figure: flow graph with nodes numbered 1 through 8.]

Next, we derive the independent paths. Since V(G) = 4, there are four paths:

Path 1: 1,2,3,6,7,8
Path 2: 1,2,3,5,7,8
Path 3: 1,2,4,7,8
Path 4: 1,2,4,7,2,...,7,8

Note: the ... implies insertion of path 1, 2, or 3 here.

Finally, we derive test cases to exercise these paths.


Creating Flow Graphs

  • A circle (node) represents one or more statements

  • Arrows (edges) represent flow of control; each edge must terminate at a node

  • A region is an area bounded by edges and nodes; the area outside the flow graph is also counted as a region


Calculating Cyclomatic Complexity from Flow Graph

  • Count the number of regions

  • V(G) = E - N + 2, where E = number of edges and N = number of nodes

  • V(G) = P + 1, where P = number of predicate nodes (nodes from which 2 or more edges leave)

A small sketch of these formulas follows below.


Basis Path Testing Notes

  • You don’t need a flow chart or graph, but the picture helps when you trace program paths

  • Count each simple logical test as 1; compound tests count as 2 or more (depending on the number of simple conditions)

  • Basis path testing should be applied to critical modules

  • Some development environments will automate the calculation of V(G)


Deriving Test Cases

  • Using design or code as a foundation, draw a corresponding flow graph

  • Determine the cyclomatic complexity

  • Identify the basis set of linearly independent paths

  • Prepare test cases that will force execution of each path in the basis set

  • Exercise: create a flow graph from an example (a worked sketch follows below)
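As an illustration (hypothetical code, not from the slides), here is a small function with two decisions, so V(G) = 2 + 1 = 3, and one test case for each of the three basis paths:

```python
def classify(a, b):
    # Decision 1
    if a > 0:
        # Decision 2
        if b > 0:
            return "both positive"
        return "only a positive"
    return "a not positive"

# V(G) = 2 decisions + 1 = 3, so the basis set has three paths;
# one test case forces execution of each path.
assert classify(1, 1) == "both positive"     # both decisions true
assert classify(1, -1) == "only a positive"  # first true, second false
assert classify(-1, 5) == "a not positive"   # first decision false
print("all three basis paths exercised")
```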


Graph Matrices

  • Software tools exist that use a graph matrix to derive the flow graph and determine the set of basis paths

  • A graph matrix is a square matrix whose size equals the number of nodes in the flow graph

  • Each node is identified by a number and each edge by a letter

  • Link weights can be added for other, more interesting properties (e.g., processing time, memory required); a small sketch follows below
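A minimal sketch (hypothetical four-node graph) of a graph matrix, using 1 as the link weight meaning “a connection exists”:

```python
# Graph matrix sketch: square matrix with one row/column per node.
# matrix[i][j] = 1 means an edge from node i+1 to node j+1; richer
# link weights (processing time, memory required) could replace the 1s.
matrix = [
    # to:  1  2  3  4
    [0, 1, 1, 0],  # from node 1 (predicate node: two outgoing edges)
    [0, 0, 0, 1],  # from node 2
    [0, 0, 0, 1],  # from node 3
    [0, 0, 0, 0],  # from node 4 (exit)
]

# With 0/1 link weights, any row whose connections sum to 2 or more
# marks a predicate node, so V(G) = predicates + 1.
predicates = sum(1 for row in matrix if sum(row) >= 2)
print("V(G) =", predicates + 1)  # V(G) = 2
```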


Control Structure Testing

  • Basis path testing is not enough

  • Must broaden testing coverage and improve quality of testing

    • Condition Testing

    • Data Flow Testing

    • Loop Testing


Condition Testing -- I

  • Exercise the logical conditions in a program module

  • A simple condition is a Boolean variable or a relational expression

  • Compound conditions combine two or more simple conditions with Boolean operators

  • Detects errors in conditions AND also in the rest of the program: if a test set is effective for conditions, it is likely also effective for other errors.


Condition Testing -- II

  • Branch testing: test each True and False branch at least once

  • Domain testing: 3 or 4 tests for a relational expression; test for greater than, equal to, and less than, plus a test that makes the difference between the two compared values as small as possible (see the sketch below)
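A minimal sketch (hypothetical limit check) of domain-testing inputs for the relational expression x > 100:

```python
# Domain testing sketch for the relational expression: x > 100.
# Three tests cover greater-than, equal, and less-than; the fourth makes
# the difference between the two compared values as small as possible.
def is_over_limit(x, limit=100.0):
    return x > limit

test_values = [
    150.0,         # greater than the limit
    100.0,         # equal to the limit
    50.0,          # less than the limit
    100.0 + 1e-9,  # difference made as small as possible
]
for x in test_values:
    print(x, is_over_limit(x))
```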


Data Flow Testing

  • Selects test paths according to the locations of definitions and uses of variables in the program (an annotated sketch follows below)

  • Can’t be used across a large system, but can be targeted at suspect areas of the software

  • Useful for selecting test paths containing nested if and loop statements
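A minimal sketch (hypothetical code) marking definition (def) and use sites of a variable; data flow testing selects paths that cover each definition together with its uses:

```python
# Data flow testing sketch: def and use sites for the variable `total`.
def scale_sum(values, factor):
    total = 0                           # def of total
    for v in values:                    # use of values; def of v
        if factor != 0:                 # use of factor
            total = total + v * factor  # use of total; re-def of total
    return total                        # use of total

# Choose paths so every def of `total` reaches each of its uses:
print(scale_sum([], 2))      # 0: initial def flows straight to the return
print(scale_sum([1, 2], 3))  # 9: loop-body re-def flows to the return
```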


Loop Testing

  • White box technique focuses on validity of loop constructs

  • Four different types of loops:

    • Simple loops

    • Nested loops

    • Concatenated loops

    • Unstructured loops: redesign them to reflect structured constructs


Loop Testing

[Figure: flow-graph sketches of the four loop types: simple loops, nested loops, concatenated loops, and unstructured loops.]


Loop Testing: Simple Loops

Minimum conditions for simple loops, where n is the maximum number of allowable passes (a sketch follows below):

1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through the loop
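A minimal sketch (hypothetical helper) that generates the pass counts above for a loop allowed at most n passes:

```python
# Simple-loop testing sketch: pass counts to exercise for a loop that
# may execute at most n times; m is any interior value with m < n.
def simple_loop_pass_counts(n, m=None):
    if m is None:
        m = n // 2  # arbitrary "typical" interior value
    # n + 1 deliberately exceeds the allowed maximum to probe the bound.
    return sorted({0, 1, 2, m, n - 1, n, n + 1})

print(simple_loop_pass_counts(20))  # [0, 1, 2, 10, 19, 20, 21]
```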


Loop Testing: Nested Loops

Nested loops:

1. Start at the innermost loop; set all outer loops to their minimum values.
2. Test the min+1, typical, max-1, and max values for the innermost loop while holding the outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values, until the outermost loop has been tested.

Concatenated loops: if the loops are independent of each other, treat them as simple loops; otherwise treat them as nested loops.


Black-Box Testing

[Figure: black-box view; requirements drive the choice of inputs and events, and only the resulting output is examined.]

Also called behavioral testing


Black Box Testing

  • Does not replace white box testing

  • A complementary approach

  • Focuses on functional requirements of the software

  • Tries to find following types of errors:

    • incorrect or missing functions

    • interface errors

    • errors in data structures or database access

    • behavior or performance errors

    • initialization or termination errors


Black Box Testing

  • Done during later stages of testing

  • Tests are designed to answer the following questions:

    • How is functional validity tested?

    • How are system behavior and performance tested?

    • What classes of input will make good test cases?

    • Is the system sensitive to certain input values?

    • How are the boundaries of a data class isolated?

    • What data rates and data volume can the system tolerate?

    • What effect will specific combinations of data have on system operation?


Equivalence Partitioning

  • Black box method that divides the input domain of a program into classes of data from which test cases can be derived

  • Strive to design test cases that uncover whole classes of errors (e.g., incorrect processing of all character data), reducing the total number of test cases that must be developed and run


Equivalence Partitioning

[Figure: the input domain is partitioned into equivalence classes such as user queries, mouse picks, output formats, prompts, error messages, and data.]


Sample Equivalence Classes

Valid data

  • User supplied commands

  • Responses to system prompts

  • Filenames

  • Computational Data

    • physical parameters

    • bounding values

    • initiation values

  • Output data formatting

  • Responses to error messages

  • Graphical data (e.g. mouse picks)

Invalid data

  • Data outside bounds of the program

  • Physically impossible data

  • Proper value supplied in the wrong place


Equivalence Class Definition Guidelines

  • If an input condition specifies a range, one valid and two invalid classes are defined

  • If an input condition requires a specific value, one valid and two invalid classes are defined

  • If an input condition specifies a member of a set, one valid and one invalid class are defined

  • If an input condition is Boolean, one valid and one invalid class are defined

  • E.g., a telephone prefix: a 3-digit number not beginning with 0 or 1. Input condition, range: specified value > 200; input condition, value: 4-digit length (a sketch follows below)
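A minimal sketch (hypothetical validator) with one representative test value per equivalence class of the prefix example (a 3-digit number not beginning with 0 or 1):

```python
# Equivalence partitioning sketch: a valid prefix is a 3-digit number
# not beginning with 0 or 1 (i.e., 200..999).
def is_valid_prefix(text):
    return len(text) == 3 and text.isdigit() and text[0] not in "01"

# One representative per equivalence class, not exhaustive inputs.
cases = {
    "555": True,   # valid class: 3 digits, first digit 2-9
    "123": False,  # invalid class: begins with 1
    "042": False,  # invalid class: begins with 0
    "55": False,   # invalid class: wrong length
    "5a5": False,  # invalid class: non-numeric data
}
for value, expected in cases.items():
    assert is_valid_prefix(value) == expected
print("one representative per class behaves as expected")
```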


Boundary Value Analysis

  • More errors occur at the boundaries of the input domain

  • BVA leads to the selection of test cases that exercise the boundaries (a sketch follows below)

  • Guidelines:

    • Input in range a..b: select a, b, and values just above and just below a and b

    • Inputs that specify a number of values: select the min and max, and values just above and just below them

    • Apply the same guidelines to output conditions

    • Boundaries on data structures (e.g., an array with 100 entries): test at the boundary
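A minimal sketch (hypothetical helper) that generates boundary-value inputs for an integer range a..b:

```python
# Boundary value analysis sketch: candidate test inputs for a value
# required to lie in the integer range a..b (inclusive).
def boundary_values(a, b):
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# For an array indexed 0..99 (100 entries), exercise the edges:
print(boundary_values(0, 99))  # [-1, 0, 1, 98, 99, 100]
```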


Software Testing Strategies


Testing Goals - Review

  • The goal is to discover as many errors as possible with minimum effort and time

  • Testing is a destructive activity: the people who constructed the software are now asked to break it

    • They have a vested interest in showing the software is error-free, meets requirements, and will meet budget and schedule

    • This works against thorough testing

  • Therefore, should the developer do no testing? Should all testing be done independently, with testers getting involved only when developers have finished construction?


Testing Strategies ...

  • In the past, the only defense against programming errors was careful design and the intelligence of the programmer

  • Now we have modern design techniques and formal technical reviews to reduce the number of initial errors in the code

  • In Chapter 17 we discussed how to design effective test cases; now we discuss the strategy used to execute them.

  • The strategy is developed by the project manager, software engineers, and testing specialists. It may also be mandated by the customer.


Why is Testing Important?

  • Testing often accounts for more effort than any other software engineering activity

  • If done haphazardly, we

    • waste time

    • waste effort

    • let errors sneak through

  • Therefore need a systematic approach for testing software

  • Work product is a Test Specification (Test Plan)


What is a Test Plan?

  • a road map describing the steps to be conducted

  • specifies when the steps are planned and then undertaken

  • states how much effort, time, and resources will be required

  • must incorporate test planning, test case design, test execution, and data collection and evaluation

The plan should be flexible enough to allow customized testing, but rigid enough for planning and management tracking.


Strategic Issues

  • Specify requirements in a quantifiable manner so the requirement can be tested.

  • State testing objectives explicitly

  • Understand potential users and develop profiles

  • Develop the testing plan quickly, in increments

  • Build robust software with built-in error checking

  • Use effective formal technical reviews (FTRs) to find errors early; this saves time and money

  • Conduct FTRs on the tests and the test strategy themselves

  • Develop a continuous-improvement approach: collect metrics


Testing Strategy

[Figure: testing proceeds outward in stages: unit test (component level), integration test (components integrated), validation test (requirements level), and system test (system elements tested as a whole).]


Verification and Validation

  • Verification: ensure the software correctly implements a specific function

    • “Are we building the product right?”

  • Validation: ensure the software is traceable to customer requirements

    • “Are we building the right product?”

  • An Independent Test Group (ITG) performs V&V and works closely with developers to fix errors as they are found

  • The ITG is involved from the beginning of the project through to the finish

  • The ITG reports to an organization apart from the software developers


Comparison of Testing Types

Eliminate duplication of testing between different groups to save time and money.


Unit Testing

[Figure: test cases are applied to the module to be tested.]

Types of testing: interface; local data structures; boundary conditions; independent paths (basis paths); error-handling paths.


Unit Test Environment

[Figure: unit test environment; a driver feeds test cases to the module under test, stubs stand in for subordinate modules, and results are collected. The tests exercise the module’s interface, local data structures, boundary conditions, independent paths, and error-handling paths.]

Testing is simplified if a unit has only one function (high cohesion): fewer test cases are needed.


Drivers and Stubs

  • Driver: a main program that accepts test case data, passes that data to the module, and prints the relevant results (a sketch follows below)

  • Stub: replaces modules that are subordinate to the unit under test; it uses the subordinate module’s interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing

  • Writing drivers and stubs is overhead

  • Sometimes a unit can’t be adequately tested with simple overhead software; testing then waits until integration (where drivers and stubs may also be used)
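A minimal sketch (all module and function names are hypothetical) of a driver and a stub for unit testing a module that normally calls a subordinate rate-lookup module:

```python
# Unit test environment sketch: driver + stub (names are hypothetical).

def lookup_rate_stub(account_id):
    """Stub for the subordinate rate-lookup module: prints verification
    of entry, does no real work, and returns a canned value."""
    print(f"stub entered with account_id={account_id}")
    return 0.05

def compute_interest(balance, account_id, lookup_rate):
    """Module under test; its subordinate is passed in, so the stub can
    replace the real lookup module during unit testing."""
    return balance * lookup_rate(account_id)

def driver():
    """Driver: accepts test case data, passes it to the module under
    test, and prints the relevant results."""
    test_cases = [(1000.0, "A-1"), (0.0, "A-2")]
    for balance, account_id in test_cases:
        result = compute_interest(balance, account_id, lookup_rate_stub)
        print(f"balance={balance} -> interest={result}")

if __name__ == "__main__":
    driver()
```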


Types of Computation Errors

  • Misunderstood or incorrect arithmetic precedence

  • Mixed-mode operations

  • Incorrect initialization

  • Precision inaccuracy

  • Incorrect symbolic representation of an expression


Types of Control Flow Errors

  • Comparison of different data types

  • Incorrect logical operators or precedence

  • Expectation of equality when precision errors make exact equality unlikely (see the sketch after this list)

  • Incorrect comparison of variables

  • Improper or nonexistent loop termination

  • Failure to exit

  • Improperly modified loop variables
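A minimal sketch of the precision pitfall noted above: exact floating-point equality fails where a tolerance-based comparison behaves as intended:

```python
import math

# Control-flow error sketch: expecting equality when floating-point
# precision makes exact equality unlikely.
total = 0.1 + 0.2

print(total == 0.3)              # False: accumulated rounding error
print(math.isclose(total, 0.3))  # True: compares within a tolerance
```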


Error Handling Evaluation

Error conditions must be anticipated, and error handling must reroute or cleanly terminate processing when an error occurs (“antibugging”).

Typical error-handling defects:

  • Error description is unintelligible

  • Error noted doesn’t match error encountered

  • Error condition causes system intervention

  • Exception condition processing is incorrect

  • Error description doesn’t provide enough info

Make sure error handling is tested!


Integration Testing Strategies

  • Options:

    • the “big bang” approach, OR

    • an incremental construction strategy:

      • Top-down

      • Bottom-up

      • Sandwich


What is Integration Testing?

  • Take unit tested components and build a program structure by joining the components while testing to find errors associated with interfaces between components

  • Data can be lost across an interface, one module can have an inadvertent adverse effect on another, etc.

  • The program is constructed and tested in small increments: errors are easier to isolate, interfaces are more likely to be tested completely, and a systematic test approach is applied.

  • The software gains maturity as modules are integrated.


Top Down Integration

[Figure: module hierarchy with A at the top; B, F, and G below A; C below B; and D and E below C.]

  • The top module A is tested with stubs for B, F, and G

  • Stubs are replaced one at a time with real components, “depth first”

  • As new modules are integrated, some subset of tests is re-run (regression testing)

What would be replaced next?


Bottom-Up Integration

[Figure: the same module hierarchy, integrated from the bottom up.]

  • Worker modules are grouped into builds (clusters) that perform a specific subfunction, and each build is integrated and tested

  • Drivers are removed and builds are combined, moving upward one at a time, “depth first”


Top-Down vs. Bottom-Up Integration

Top-down:

  • Stubs replace low-level modules that normally supply data

  • Therefore some testing may be delayed (not good)

  • Simulating the actual module in a stub is high overhead

  • Verifies major control early

Bottom-up:

  • Integrates first the low-level modules that supply data

  • The program doesn’t exist as a whole until the last module is integrated

  • Easier test case design

  • Doesn’t need stubs, but does need drivers


Sandwich Testing

[Figure: the same module hierarchy, integrated from both ends.]

  • Top modules are tested with stubs (top-down)

  • Worker modules are grouped into builds (clusters) and integrated (bottom-up)


Critical Modules

Identify critical modules, target them for early testing, and focus regression testing on them. Critical modules:

  • Address several software requirements

  • Have a high level of control (sit high in the software structure)

  • Are complex or error prone (high V(G))

  • Have definite performance requirements


High Order Testing

  • Validation test: the Test Plan outlines the classes of tests to be performed; Test Procedures contain the specific test cases

    • After each test case runs, it either passes or produces a deviation, which is recorded as a Software Trouble Report (STR)

    • Resolution of STRs is monitored

  • Alpha and beta testing: alpha testing at the developer’s site, beta testing at the customer’s site

  • System test: verifies that system elements have been properly integrated and perform the required functions

    • This was performed as a System Level Acceptance Test (SLAT) at IBM.


Debugging: A Diagnostic Process


Debugging Process

  • Debugging is a consequence of testing

  • The debugging effort is a combination of the time required to diagnose the symptom and determine the cause of the error, AND the time required to correct the error and conduct regression tests.

  • Regression testing is the selective re-running of tests to ensure that nothing was broken when a fix or modification was implemented.


Symptoms & Causes

  • The symptom and the cause may be geographically remote from each other

  • A symptom may disappear when another error is fixed

  • A symptom may be caused by a non-error (e.g., round-off)

  • A symptom may be caused by human error, a compiler error, or bad assumptions

  • A symptom may be caused by timing problems

  • Conditions may be hard to duplicate (real-time applications)

  • A symptom may be intermittent, as with embedded systems

  • A symptom may be due to causes distributed across a number of tasks running on different processors


Consequences of Bugs

[Figure: the damage caused by a bug ranges along a scale from mild, annoying, disturbing, serious, extreme, and catastrophic up to infectious, depending on the bug type.]

Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.


Debugging Techniques

  • Brute force / testing

  • Backtracking

  • Cause elimination


Brute Force Debugging

  • “Let the computer find the error”: memory dumps, run-time traces, WRITE statements all over the program

  • Most common and least efficient method for isolating cause of error

  • Wasted effort and time

  • Think first!


Backtracking Debugging

  • Begin at the site where the symptom was uncovered and trace the source code backward manually until the cause is found.

  • As the number of lines of code increases, the number of backward paths becomes unmanageably large

  • A fairly common debugging approach; successful for small programs


Cause Elimination - Debugging

  • Data related to the error occurrence are organized to isolate potential causes

  • A “cause hypothesis” is devised, and the data are used to prove or disprove the hypothesis

  • Alternatively, a list of all possible causes is developed and tests are run to eliminate each one.


Debugging: Final Thoughts

  • Think about the symptom you are seeing

  • Use tools such as a dynamic debugger to gain more insight into the bug.

  • Get help from somebody else if you are stuck. Just talking to another person can help you see the cause of the bug.

  • Every time you touch existing code, you run the risk of injecting errors. Therefore ALWAYS run regression tests on all fixes.

  • Ask the following questions: Is the bug also in another part of program? How can we prevent the bug in the first place?


Webliography

  • Check the Webliography for some interesting cases of software bugs, such as the Therac-25 radiation bug, and for other information about testing.
