Test inventory
Test Inventory

  • A “successful” test effort may include:

    • Finding “bugs”

    • Ensuring the bugs are removed

    • Showing that the system, or parts of it, works

  • Goal of testing (per Hutcheson):

    • Establish a responsive, dependable system which satisfies and delights the users.

    • Perform the above within the agreed-upon constraints of

      • budget,

      • schedule, and

      • other resources

(“satisfies and delights” --- how would we define and measure these?)


How do we achieve that goal? – use a process

The flowchart on this slide reads as the following process:

  • Plan Test / Organize Resources

  • Establish Test Cases

  • Execute Test Cases

    • Bug? no → Record success

    • Bug? yes → Record failure → Report Problem to developers

  • Receive Response from dev. --- Bug Fix?

    • no → Record No-fix reason

    • yes → “wait” → Integrate fix and prepare for rebuild → Retest

      • Bug Fixed? yes → Record Problem fixed

      • Bug Fixed? no → Report Problem to developers again

  • Record data and produce Reports

    • by test coverage

    • by test results

    • by fix results

    • etc.
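Read as code, the flowchart is a retest loop: execute, record, report, wait for a fix, retest. A minimal Python sketch of that loop follows; every name in it (`run_test_process`, `dev_responds_fix`, the log entries) is illustrative, not from the slides.

```python
def run_test_process(test_cases, dev_responds_fix, max_retests=3):
    """Sketch of the flowchart's loop: execute each test case, record
    success/failure, report failures, and retest while fixes arrive."""
    log = []  # the "Record ..." boxes in the flowchart
    for case in test_cases:
        attempts = 0
        while True:
            passed = case["run"]()          # Execute Test Case
            attempts += 1
            if passed:
                log.append((case["name"], "success"))
                break
            log.append((case["name"], "failure"))
            # Report problem to developers; will the bug be fixed?
            if not dev_responds_fix(case) or attempts > max_retests:
                log.append((case["name"], "no-fix recorded"))
                break
            # Fix integrated, rebuild prepared; loop back and retest.
    return log
```

A case that fails once and passes on retest produces a failure record followed by a success record, matching the flowchart's Record failure → Retest → Record fixed path.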


Planning Test

Test Planning Coverage = # of test cases designed / # of scenarios

- planning is based mostly on the requirements document.

- test cases are designed from the requirements and design documents.

Test Execution Coverage = # of test cases ran / # of designed test cases

- how do we decide how much to run?

- why wouldn’t we run all the designed test cases?
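The two ratios defined above are simple enough to restate as functions; this is just a sketch of the slide's own formulas, with illustrative names:

```python
def planning_coverage(cases_designed, scenarios):
    """Test Planning Coverage = # of test cases designed / # of scenarios."""
    return cases_designed / scenarios

def execution_coverage(cases_run, cases_designed):
    """Test Execution Coverage = # of test cases ran / # of designed test cases."""
    return cases_run / cases_designed
```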


A Real Problem is Getting Bugs All Fixed

  • In large, complex systems that require several steps before one reaches the actual test case, a failure may not always be reproducible!

    • This makes debugging difficult when the developer runs the test and it executes cleanly! (Consider an internal queue-size problem: when the queue is full, some external inputs get dropped, and you may not be able to fill the queue quickly.)

  • Under the gun of schedule, not all problems can be fixed in time for rebuild and retest. (Low-priority ones get delayed and eventually forgotten!)

Products get released with both “known” bugs and some “unknown” bugs!
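The queue example can be made concrete. The sketch below is my own illustration, not the slides' code: an internal bounded queue that silently drops inputs when full. A short test run never fills the queue and never sees the bug; a sustained run does.

```python
from collections import deque

class BoundedQueue:
    """Internal queue that silently drops inputs when full --- a latent
    bug that only reproduces once the queue actually fills up."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = deque()
        self.dropped = 0

    def put(self, item):
        if len(self._items) >= self.capacity:
            self.dropped += 1   # input silently lost: the hard-to-reproduce bug
            return False
        self._items.append(item)
        return True
```

Two inputs into a capacity-3 queue lose nothing; five inputs lose two, so only the longer scenario reproduces the failure.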


Successful Testing Needs

  • Good test plan

  • Good test execution

  • Good bug fixing

  • Good fix integration

  • Good “accounting” of problems found, fixed, integrated, and released.


Keeping a “List” or Table of Test Cases

  • We must keep a quantitative list of test cases so we can ask:

    • How many items are on the list?

    • How long does it take to execute the complete list?

    • Where are we in terms of the list (test status)?

    • Can we prioritize the list?

    • Can we arrange the list to show coverage in tabular form?

Test Case   Funct. 1   Funct. 2   Funct. 3   . . .
  # 1          X          X
  # 2                     X          X
   .
   .

How do We Measure Test ?

  • Much like how we measure code ---- LOC?

    • Number of lines of test script written in some language?

  • A test case may be measured by the number of steps involved in executing the test; e.g.

    • Step 1: input field x

    • Step 2: press submit

    • Step 3: choose from displayed options

    • Step 4: press submit

    • (Note that not all 4-step test cases are the same --- much like not all 4 LOC are the same.)

  • A test case is a comparison of actual versus expected result - - - no matter how many steps are needed to get the result.

    • This may be vastly different in the test time required.

  • Every keystroke and every mouse movement should be counted!

Your thoughts -----?
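One way to make the step-count measure concrete is to store a test case as its steps plus the single actual-vs-expected comparison. This is an illustrative sketch, not a real framework API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TestCase:
    name: str
    steps: List[Callable[[], None]]   # keystrokes, clicks, submits, ...
    expected: object
    actual: Callable[[], object]      # reads the result after the steps

    def size(self):
        """Crude size measure: number of steps, like LOC for code."""
        return len(self.steps)

    def run(self):
        """Execute every step, then make the one actual-vs-expected comparison."""
        for step in self.steps:
            step()
        return self.actual() == self.expected
```

However many steps a case has, `run` still yields exactly one pass/fail comparison, matching the definition above.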


Some Typical Types of Test

  • Unit Test – testing at “chunks of code” level; done by module author

    • Small number of keystrokes and mouse movements

  • Functional Test – testing a particular application function that is usually a requirement statement.

    • Often tested as a “black box” test

  • “System” Test – testing the system internals and internal structures. (Not to be confused with total applications system test)

    • Often tested as a “white box” test


An interesting comparison of 2 tests

item                                       Prod. 1     Prod. 1.1
# of test scripts (actual)                 1,000       132
# of user functions (actual)               236         236
# of verifications / test script           1           50
Total # of verifications performed         1,000       6,600
Average # of times a test is executed      1.15        5
Total # of tests attempted (computed)      1,150       33,000
Average duration of a test (known #)       20 min.     4 min.
Total time running the test (from log)     383 hrs.    100 hrs.
# of verifications / hr of testing         2.6         66


Some more interesting numbers

  • Efficiency = work done/expended effort = verifications/ hr of testing

    • For Prod. 1 the efficiency was 2.6; for Prod. 1.1 it was 66

  • Cost is the inverse of efficiency: expended effort / work done

    • For Prod. 1: 383 p-hrs / 1,000 verifications = 0.383 p-hrs/verification

    • For Prod. 1: 383 p-hrs / 236 functions = 1.6 p-hrs/function verified

  • How big is the test? --- number of test scripts identified

  • Size of test set – number of test scripts that will be executed to completion

  • Size of test effort – total time required to perform all the test activities: plan, analyze, execute, track, retest, integrate fixes, etc.

  • Cost of total test effort - - - size of test effort in person-hours multiplied by dollars per person hour.

* The test schedule should be built from historical information on past efforts and an estimate of the current effort
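The efficiency and cost arithmetic above reduces to two reciprocal ratios; a sketch (function names are mine):

```python
def efficiency(verifications, hours):
    """Work done / expended effort: verifications per hour of testing."""
    return verifications / hours

def cost_per_unit(hours, work_done):
    """Inverse of efficiency: person-hours per unit of work."""
    return hours / work_done
```

Plugging in the slide's numbers: Prod. 1 delivered 1,000 verifications in 383 hours (2.6/hr), while Prod. 1.1 delivered 6,600 in 100 hours (66/hr).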


How do we create a test inventory?

  • Data collected from inspections/reviews of requirements/design/etc.

  • Known Analytical methods

    • Path analysis

    • Data analysis

    • Usage statistics profile

    • Environmental catalog (executing environments)

  • Non-Analytical

    • Experts’ gorilla test

    • Intuition/brainstorming from customer support’s past experience

