
COTS Testing



Presentation Transcript


  1. COTS Testing

  2. Differences from in-house components • Interfaces (pre- and post-conditions) are not clearly specified. • No access to the architecture or source code. • Black boxes to the component user. Why use COTS?

  3. Why COTS Testing • Failure of Ariane 5. • The explosion resulted from insufficiently tested software reused from the Ariane 4 launcher.

  4. COTS Evaluation and selection

  5. Why rigorous evaluation of COTS? • A large number of alternative products. • Multiple stakeholders. • A large number of quality criteria. • Compatibility with other products.

  6. Why evaluation is difficult • A large number of evaluation criteria. • Different opinions are usually encountered among different stakeholders. • Evaluation criteria are not easily measurable at evaluation time. • Gathering relevant information is prohibitively expensive. • The COTS market changes fast, so evaluation must be performed several times during the lifecycle. • Evaluation deals with uncertain information.

  7. AHP Technique • AHP (the Analytic Hierarchy Process) was originally designed for the economic and political science domains. • Requires a pairwise comparison of alternatives and a pairwise weighting of the selection criteria. • Enables consistency analysis of the comparisons and weights, making it possible to assess the quality of the gathered information.

  8. AHP Technique (contd.) • Allows alternatives to be measured on a ratio scale, so we can determine how much better one alternative is than another. • Practically usable only if the numbers of alternatives and criteria are sufficiently low, because the comparisons are made by experts.
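The pairwise weighting step can be sketched in a few lines of Python. The criteria, the judgment values, and the use of the geometric-mean method (a common approximation of AHP's principal-eigenvector computation) are illustrative assumptions, not part of the slides:

```python
import math

def ahp_weights(matrix):
    """Derive priority weights from a pairwise comparison matrix using
    the geometric-mean method, an approximation of AHP's principal
    eigenvector."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical expert judgments for three selection criteria:
# functionality vs. reliability vs. vendor support (1 = equally
# important, 3/5 = moderately/strongly more important; entries below
# the diagonal are the reciprocals of those above it).
comparisons = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
weights = ahp_weights(comparisons)
```

The resulting weights sum to one and rank the criteria, which is what makes the ratio-scale comparison between alternatives on slide 8 possible.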

  9. Selection in practice follows three stages: • Informal screening against a set of requirements using selection thresholds. • More systematic evaluation using the AHP process. • Detailed information gathering, involving testing, prototyping, and reading technical documents.

  10. State of the art in COTS testing

  11. How to provide information to user • Component meta-data approach. • Retro-components approach. • Component test bench approach. • Built-in test approach. • Component+ approach. • STECC strategy.

  12. Component metadata approach [Diagram: the component ships its binary code together with metadata, such as call graphs and testing information, produced by the provider.]

  13. Component metadata (contd.) [Diagram: the component user sends a metadata request to the component; alongside its server functionality, a metadata database answers with the metadata.]

  14. Retro-components approach [Diagram: as in the metadata approach, but the request carries test data as well as a metadata request; the component returns metadata from its metadata database.]

  15. Component test bench approach • A set of test cases, called a test operation, is associated with each interface of a component. • A test operation defines the necessary steps for testing a specific method. • The concrete test inputs and expected test outputs are packaged in the test operation.
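A minimal sketch of the idea in Python; the `Calculator` component and the `TestOperation`/`run_test_bench` names are hypothetical choices of mine, not from the original technique:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TestOperation:
    """One test operation: the method under test plus the concrete
    inputs and expected output packaged with the component's interface."""
    method: str
    inputs: tuple
    expected: Any

def run_test_bench(component, operations):
    """Execute every test operation against the component and collect
    the ones whose actual output differs from the expected output."""
    failures = []
    for op in operations:
        actual = getattr(component, op.method)(*op.inputs)
        if actual != op.expected:
            failures.append(op)
    return failures

class Calculator:  # hypothetical stand-in for a COTS component
    def add(self, a, b):
        return a + b

bench = [TestOperation("add", (2, 3), 5), TestOperation("add", (-1, 1), 0)]
failures = run_test_bench(Calculator(), bench)
```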

  16. Built-in test approach [Diagram: the component contains its functionality together with a built-in tester and a test case generator.]

  17. Built-in test approach (contd.) [Diagram: in normal mode only the functionality is exposed; in maintenance mode the built-in tester and test case generator are available as well.]

  18. Built-in test approach (contd.) [Diagram: a derived component inherits from the base component, including its built-in tests.]
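One way to sketch the two modes and the inheritance of built-in tests, using a hypothetical `Stack` component (the class and its embedded test cases are illustrative):

```python
class Stack:
    """Component with built-in test capability: the provider ships
    test logic inside the component, runnable only in maintenance
    mode (the two modes of slide 17)."""
    def __init__(self):
        self._items = []
        self.mode = "normal"

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def self_test(self):
        if self.mode != "maintenance":
            raise RuntimeError("built-in tests require maintenance mode")
        probe = type(self)()   # exercise an instance of the actual class
        probe.push(1)
        probe.push(2)
        return probe.pop() == 2 and probe.pop() == 1

class BoundedStack(Stack):
    """A derived component inherits the base component's built-in
    tests (slide 18)."""
    pass

component = BoundedStack()
component.mode = "maintenance"
passed = component.self_test()
```

Because `self_test` instantiates `type(self)`, the inherited tests exercise the derived component, not just the base.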

  19. Component+ approach [Diagram: a built-in-testing-enabled component exposes its functionality through an interface and contains a tester, a test case generator, a test executor, a handler, and failure recovery mechanisms.]

  20. Disadvantages of BIT and Component+ • Static nature. • They generally do not ensure that tests are conducted as required by the component user. • The component provider makes assumptions about the requirements of the component user, which again might be wrong or inaccurate.

  21. STECC (self-testing COTS components) strategy [Diagram: a tester sends a query and metadata request to the component's server; a test generator uses metadata from the metadata database to generate the tests.]

  22. Levels of Testing • Unit Testing. • Integration Testing. • System Testing

  23. Types of testing • Functionality Testing. • Reliability Testing. • Robustness Testing. • Performance Testing. • Load Testing. • Stress Testing. • Stability Testing. • Security Testing.

  24. Certifying COTS When considering a candidate component, developers need to ask three key questions: • Does component C fill the developer’s needs? • Is the quality of component C high enough? • What impact will component C have on system S?

  25. Certifying COTS (contd.)

  26. CERTIFICATION TECHNIQUES • Black-box component testing. • System-level fault injection. • Operational system testing. • Software wrapping. • Interface propagation analysis.

  27. Black box Testing • To understand the behavior of a component, various inputs are executed and the outputs are analyzed. • To catch all types of errors, all possible combinations of input values would have to be executed. • To make testing feasible, test cases are selected randomly from the test case space.
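The random selection from the test case space can be sketched as follows; the parameter domains below are hypothetical:

```python
import itertools
import random

def sample_test_cases(domains, n, seed=0):
    """The full test case space is every combination of the
    per-parameter value domains; executing all of it is infeasible,
    so n cases are drawn at random."""
    space = list(itertools.product(*domains))
    rng = random.Random(seed)   # seeded for reproducible test runs
    return rng.sample(space, min(n, len(space)))

# Hypothetical domains for a three-parameter interface.
domains = [[0, 1, -1, 2**31 - 1], ["", "x", None], [0.0, 1.0]]
cases = sample_test_cases(domains, 5)
```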

  28. Black box test reduction using input-output analysis • Random testing is not complete. • For complete functional testing, the number of test cases needed can be reduced by input-output analysis.

  29. How to find I/O relationships • By static analysis or execution analysis of the program.
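A sketch of the reduction, assuming the analysis has already identified which inputs influence a given output; the domains and the `influencing` set are hypothetical:

```python
import itertools

def reduced_test_cases(domains, influencing):
    """If input-output analysis shows an output depends only on the
    parameters listed in `influencing`, only their values need to be
    combined; every other parameter is held at a default value."""
    cases = []
    for combo in itertools.product(*(domains[i] for i in influencing)):
        case = [d[0] for d in domains]            # defaults everywhere
        for idx, value in zip(influencing, combo):
            case[idx] = value                     # vary only what matters
        cases.append(tuple(case))
    return cases

# Hypothetical: the output under test ignores the middle parameter.
domains = [[0, 1], ["a", "b"], [10, 20, 30]]
cases = reduced_test_cases(domains, influencing=[0, 2])
```

Here 6 cases suffice instead of the 12 a full cross-product would require.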

  30. Fault Injection [Diagram: a fault simulation tool intercepts the component request and feeds the component erroneous or malicious input, observing exceptions or no response.]
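A toy fault-injection harness in Python; the `reciprocal` operation stands in for a COTS component and is entirely hypothetical:

```python
def probe(component, inputs):
    """Drive the component with (possibly erroneous or malicious)
    inputs and record whether it returns a result or raises an
    exception, the observations a fault simulation tool collects."""
    outcomes = []
    for x in inputs:
        try:
            outcomes.append(("ok", component(x)))
        except Exception as exc:
            outcomes.append(("exception", type(exc).__name__))
    return outcomes

def reciprocal(x):   # hypothetical stand-in for a COTS operation
    return 1.0 / x

# 0 and None simulate erroneous inputs a fault injector might supply.
results = probe(reciprocal, [2, 0, None])
```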

  31. Operational System Testing • Complements system-level fault injection. • The system is operated with random inputs (valid and invalid). • Provides a more accurate assessment of COTS quality. • Ensures that a component is a good match for the system.

  32. Software Wrapping [Diagram: input passes through an input wrapper before reaching the component; the component's output passes through an output wrapper before reaching the system.]
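A minimal wrapper sketch; the `safe_div` component and the acceptance checks are hypothetical:

```python
def wrap(component_fn, check_input, check_output):
    """Software-wrapping sketch: an input wrapper screens arguments
    before they reach the COTS component, and an output wrapper
    screens the result before it reaches the rest of the system."""
    def wrapped(*args):
        if not check_input(*args):
            raise ValueError("rejected by input wrapper")
        result = component_fn(*args)
        if not check_output(result):
            raise ValueError("rejected by output wrapper")
        return result
    return wrapped

# Hypothetical component: a division routine wrapped so that a zero
# divisor never reaches it and absurd outputs never leave it.
safe_div = wrap(lambda a, b: a / b,
                check_input=lambda a, b: b != 0,
                check_output=lambda r: abs(r) < 1e9)

ok = safe_div(6, 3)
try:
    safe_div(1, 0)
    rejected = False
except ValueError:
    rejected = True
```

The wrappers let the integrator enforce the system's assumptions without access to the component's source, which is the point of the technique.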

  33. Instrumentation configuration file

  34. Interface Propagation Analysis [Diagram: a fault injector sits between COTS Component 1 and COTS Component 2.] Three perturbation modes: • Modify the input, then call the correct method. • Call the correct method, then modify the output. • Call a perturbed function.

  35. Fault Injection used for • Robustness Testing. • Error propagation Analysis. • Reliability Testing. • Security Testing.

  36. Robustness Testing

  37. COTS testing for OS failures [Diagram: a wrapper sits between the COTS component and the operating system.]

  38. Ballista approach • Based on fault injection technique. • Test cases are generated using parameter types of an interface. • Independent of internal functionality. • Testing is not complete.

  39. Test Value Database

  40. Test Value Database (contd.) • Integer data type: 0, 1, -1, MAXINT, -MAXINT, selected powers of two, powers of two minus one, and powers of two plus one. • Float data type: 0, 1, -1, +/-DBL_MIN, +/-DBL_MAX, pi, and e. • Pointer data type: NULL, -1 (cast to a pointer), a pointer to freed memory, and pointers to malloc'ed buffers of various powers of two in size.

  41. Test Value Database (contd.) • String data type (based on the pointer base type): includes NULL, -1 (cast to a pointer), a pointer to an empty string, a string as large as a virtual memory page, and a string 64K bytes in length. • File descriptor (based on the integer base type): includes -1; MAXINT; and various descriptors: to a file open for reading, to a file open for writing, to a file whose offset is set to end of file, to an empty file, and to a file deleted after the file descriptor was assigned.

  42. Test case generation • All combinations of values for the parameter types are generated. • The number of test cases generated is the product of the numbers of test values defined for each parameter's type.
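A sketch of the generation rule, using a small subset of the test value database from the previous slides; the dictionary layout and the concrete `MAXINT` choice are assumptions:

```python
import itertools

# Ballista-style test value database (a subset of slides 40-41);
# MAXINT stands in for the platform's largest signed 32-bit integer.
MAXINT = 2**31 - 1
TEST_VALUES = {
    "int": [0, 1, -1, MAXINT, -MAXINT, 2**16 - 1, 2**16, 2**16 + 1],
    "float": [0.0, 1.0, -1.0, 3.141592653589793, 2.718281828459045],
    "string": [None, "", "x" * 4096],
}

def generate_test_cases(parameter_types):
    """Generate all combinations of the test values for an interface's
    parameter types; the case count is the product of the per-type
    value counts."""
    return list(itertools.product(*(TEST_VALUES[t] for t in parameter_types)))

# An interface taking (int, string) yields 8 * 3 = 24 cases.
cases = generate_test_cases(["int", "string"])
```

Note that generation depends only on the parameter types, matching the "independent of internal functionality" point on slide 38.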

  43. Error propagation analysis • Interface propagation analysis is used by injecting faults at one component. • This is done at the component integration level. • A known faulty input is injected into the system using the fault injector. • The components affected by this input are observed (how they handle the faulty input).
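The injection-and-observation loop can be sketched as follows; the three-stage pipeline and the faulty value are hypothetical:

```python
def propagation_trace(stages, value, faulty_index, fault):
    """Replace one component's output with a known faulty value and
    record what every downstream component produces from it."""
    trace = []
    for i, stage in enumerate(stages):
        value = fault if i == faulty_index else stage(value)
        trace.append(value)
    return trace

# Hypothetical three-component chain; the first component's output
# is replaced with the faulty value -999, and the trace shows how
# the later components propagate it.
stages = [abs, lambda x: x + 1, str]
trace = propagation_trace(stages, -5, faulty_index=0, fault=-999)
```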

  44. Performance Testing

  45. Middleware • An application's execution and its middleware cannot be divorced in any meaningful way. • To predict the performance of an application component, the performance of its middleware must be analyzed.

  46. Performance prediction methodology Predicting an application's performance is a three-step process: • Obtaining the technology performance profile. • Analyzing architecture-specific behavioral characteristics. • Analyzing application-specific behavioral characteristics.

  47. Technology performance profile
