
Object-Oriented Validation and Verification Starting at the Beginning!


Presentation Transcript


  1. Object-Oriented Validation and Verification Starting at the Beginning!

  2. About Scope
The scope of a test is the collection of software components to be verified:
- system: the complete integrated application, treated as a black box
- integration: the interactions of the components of a system or subsystem
- unit: in OO, testing an instance of a class
Different testing strategies (or guidelines, a.k.a. heuristics) apply to different scopes. ** We will first learn how to create test models and then how to derive tests! **
Beyond the notions of tests and test cases, Binder introduces the idea of a test point: a test point is a specific value for the input and state variables of a test case. Some well-known heuristics for test point selection include equivalence classes, boundary value analysis and special values testing. They are examples of a general strategy called partition testing: can we define sets of equivalent test points? Later, we will need to consider how applicable this strategy is to each of the different scopes of testing we consider (see the sketch below).
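As a minimal illustration of boundary value analysis at unit scope (the Account class and its withdrawal rule below are hypothetical, not from the case study), the test points sit just below, on, and just above the boundaries of the valid partition:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical class under test: withdrawals must be in (0, balance].
class Account {
    private int balance;
    Account(int balance) { this.balance = balance; }
    void withdraw(int amount) {
        if (amount <= 0 || amount > balance) throw new IllegalArgumentException("invalid amount");
        balance -= amount;
    }
    int balance() { return balance; }
}

class AccountBoundaryTest {
    // Test points chosen by boundary value analysis on the valid partition (0, balance]:
    // just below, on, and just above each boundary.
    @Test void rejectsZeroAmount()      { assertThrows(IllegalArgumentException.class, () -> new Account(100).withdraw(0)); }
    @Test void acceptsSmallestValid()   { Account a = new Account(100); a.withdraw(1);   assertEquals(99, a.balance()); }
    @Test void acceptsFullBalance()     { Account a = new Account(100); a.withdraw(100); assertEquals(0, a.balance()); }
    @Test void rejectsJustOverBalance() { assertThrows(IllegalArgumentException.class, () -> new Account(100).withdraw(101)); }
}
```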

  3. Where to Start?
Testing (implicitly of code) is inherently bottom-up: units (methods and classes), then clusters and subsystems, and finally systems. But contrary to Binder, we are not limiting ourselves to the testing of code; we first want to consider the creation and V&V of test-ready models, that is, models from which tests can be generated. We will therefore start with the modeling process put in place in the previous course (see next slide):
- from a problem description to requirements (FRs and NFRs), assumptions, use cases and use case diagrams
- to use case maps
- to message sequence charts and class diagrams
- to finite state machines and/or code
This process is NOT formal: e.g., how to write use cases, how to go from use case maps to MSCs, and how to build hierarchical statecharts are all subjects of disagreement among researchers!

  4. A Scenario-Driven Modeling Approach
[Diagram: Problem Description → Reqs and Use Cases 1, 2, 3 → UCMs → MSCs with inter-scenario relationships → FSMs and/or code]

  5. Where to Start? (Take 2)
OO V&V requires a traceability scheme:
- This scheme will be examined incrementally as we scrutinize the supplied case study.
- At the beginning of that document, we find a problem description (section 1) from which FRs, NFRs and assumptions are obtained (section 2). Everything else is traced back to these artifacts.
- This first transformation appears to be very ad hoc… and yet our validation activities will depend on what we find at the origin of traceability links!
Eric Yu proposes the Goal-oriented Requirement Language (GRL) for capturing and reasoning about functional and, mostly, non-functional requirements (also called quality attributes):
- it distinguishes satisfiability (strict criteria) from satisficeability (acceptable limit)
- GRL + UCM = URN (User Requirements Notation), submitted to the ITU
- check out the GRL tutorial at www.cs.toronto.edu/km/GRL: it mostly emphasizes soft goals… there is also a tool and a methodology

  6. About Requirements Traceability (1)
For ITU-SG10, the definition of requirement traceability relations is important for many reasons:
- To evaluate requirements coverage. An important question a developer must be able to answer is: are all requirements addressed in the current version of the system? To answer it, one must be able to determine precisely the set of requirements that are referenced in the different system models.
- To evaluate the impact of requirements modifications. Another important question a developer must be able to answer is: what are the model elements related to a specific requirement? It must often be answered when modifications are made to requirements. The existence of traceability relationships allows evaluating the impact of modifications on the different models, and making the changes to affected models in a consistent manner.
- To allow requirements testing. In a scenario-driven (or use-case-driven) process, requirements are associated with specific scenarios. Therefore, in order to test that the current implementation of a system is correct with respect to a specific requirement, one must first determine the set of scenarios related to that requirement.

  7. About Requirements Traceability (2)
- To allow the identification of conflicting requirements. The causes of system errors are various; one important cause is conflicting requirements. This type of error is often difficult to prevent and is only discovered late in the development process. For this reason, when an error is found in the system, it is important to be able to trace it back through the different models, and ultimately to the requirements, to see where the error was introduced. If the error comes from conflicting requirements, then these requirements can be identified precisely.
- To reduce maintenance efforts. If one can determine precisely the set of model elements that can be impacted by the modification of a specific requirement (or model element), then the cost of modification is significantly reduced.
- To preserve the rationale for design decisions. Knowing the original reasons for design decisions helps maintainers and enhancers evaluate whether the implementation should be changed in the light of new circumstances. Such re-engineering of implementations may be essential to keeping a product vital and competitive in the marketplace.
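A minimal sketch of how such traceability links could be stored and queried, so that coverage and impact questions become mechanical; the requirement and model element names are illustrative, and this is not the scheme used in the case study:

```java
import java.util.*;

// Minimal traceability store: requirements are linked to the model elements
// (use cases, MSCs, classes, ...) that realize them, so coverage and impact
// questions become simple queries. All names are illustrative.
class TraceabilityMatrix {
    private final Map<String, Set<String>> reqToElements = new HashMap<>();

    void link(String requirement, String modelElement) {
        reqToElements.computeIfAbsent(requirement, r -> new HashSet<>()).add(modelElement);
    }

    // Requirements coverage: which requirements are not referenced by any model element?
    Set<String> uncovered(Collection<String> allRequirements) {
        Set<String> result = new TreeSet<>(allRequirements);
        result.removeAll(reqToElements.keySet());
        return result;
    }

    // Impact of modifying a requirement: which model elements are affected?
    Set<String> impactOf(String requirement) {
        return reqToElements.getOrDefault(requirement, Set.of());
    }

    public static void main(String[] args) {
        TraceabilityMatrix m = new TraceabilityMatrix();
        m.link("FR1-Withdraw", "UC-Withdraw");
        m.link("FR1-Withdraw", "MSC-Withdraw-OK");
        System.out.println(m.uncovered(List.of("FR1-Withdraw", "FR2-Deposit"))); // [FR2-Deposit]
        System.out.println(m.impactOf("FR1-Withdraw"));                          // [MSC-Withdraw-OK, UC-Withdraw]
    }
}
```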

  8. Use Cases Revisited (Binder section 8.3 and chapter 14)
As few use cases as possible, describing as many scenarios as possible.

  9. System Scope
Fundamental truth about software testing: individual verification of components cannot guarantee a correctly functioning system. We need to test the system against the requirements. Binder suggests 3 patterns, discussed in this section.
- Complete, consistent and testable requirements are necessary to develop an effective test suite. Each element of our Requirement Abstraction Layer must be a test model!
- UML's use cases are typically assumed to capture the requirements when in fact each captures a set of scenarios associated with some requirements… Use cases are written in natural language and thus not test-ready.
- Semantics: tables 8.1 and 8.2 (p. 280): <<uses>> and <<extends>> are transitive. Binder suggests at least checking every fully expanded extension combination… this is a form of path testing… keep that in mind (see the sketch below).
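To see why fully expanded extension combinations amount to path testing, here is a small sketch (the use case and extension names are invented) that enumerates the 2^n expanded paths obtained from n optional extensions:

```java
import java.util.*;

// Sketch: enumerating "fully expanded" extension combinations of a use case.
// With n optional <<extend>> points there are 2^n expanded paths to consider,
// which is why this quickly becomes a path-testing (and test-budget) problem.
class ExtensionCombinations {
    static List<List<String>> expand(List<String> extensions) {
        List<List<String>> combos = new ArrayList<>();
        int n = extensions.size();
        for (int mask = 0; mask < (1 << n); mask++) {      // every subset of the extensions
            List<String> combo = new ArrayList<>();
            for (int i = 0; i < n; i++)
                if ((mask & (1 << i)) != 0) combo.add(extensions.get(i));
            combos.add(combo);
        }
        return combos;
    }

    public static void main(String[] args) {
        // Hypothetical extensions of a "Withdraw cash" use case.
        List<String> ext = List.of("Print receipt", "Insufficient funds", "Card retained");
        expand(ext).forEach(c -> System.out.println("Withdraw cash + " + c)); // 8 combinations
    }
}
```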

  10. Extended Use Cases
Jacobson suggests some tests can be derived from use cases: basic paths, alternative paths, associated requirements. Several questions remain unanswered: how to choose test cases? In what order shall I apply my tests? When am I done?
Binder suggests “extended UCs”, for which we need to determine:
- the frequency of each use case
- sequential dependencies (if not all relationships) between UCs
- operational variables
Operational variables are inputs, outputs, and environment conditions that:
- lead to « significantly different » paths of a use case (this idea will carry through to UCMs)
- abstract the state of the system under test
- result in « significantly different » system responses

  11. Operational Variables
Consider a typical use case diagram for an ATM system (Figure 8.4, p. 279). Table 14.1 (p. 722) considers the different resulting paths of the same use case in terms of input and output combinations. This viewpoint can be re-expressed in terms of operational variables for each use case (table 14.2, p. 726):
- we need 4 variables to capture all combinations
- we will discuss combinational models in detail later, but we must understand NOW that the variants do not overlap: we have partitioned the input and output space successfully!
Finally, we can minimally ensure every variant is made true at least once and false at least once:
- a true test case is a set of values that satisfies all conditions in a variant
- a false test case has at least one condition false
- see table 14.3, p. 727, and the sketch below
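A minimal sketch of one variant expressed over operational variables, with one true and one false test point; the variable names and the rule are illustrative, not Binder's ATM tables:

```java
// One variant of a hypothetical "Withdraw cash" use case expressed over
// operational variables. A "true" test point satisfies every condition of the
// variant; a "false" test point violates at least one of them.
class WithdrawVariant {
    // Operational variables abstracting input and system state.
    record OpVars(boolean cardValid, boolean pinOk, int requested, int balance) {}

    // Variant: valid card, correct PIN, amount within balance -> cash dispensed.
    static boolean cashDispensed(OpVars v) {
        return v.cardValid() && v.pinOk() && v.requested() > 0 && v.requested() <= v.balance();
    }

    public static void main(String[] args) {
        OpVars truePoint  = new OpVars(true, true, 60, 100); // satisfies all conditions
        OpVars falsePoint = new OpVars(true, true, 60, 40);  // violates one condition (balance)
        System.out.println(cashDispensed(truePoint));   // true
        System.out.println(cashDispensed(falsePoint));  // false
    }
}
```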

  12. Binder's Format for Testing Patterns
We don't yet know where Binder's patterns fit in our design process; we will get to that later. For now, we consider his pattern format and study individual patterns. The proposed format is:
- Name: suggests a general approach
- Intent: kind of test suite produced by this pattern
- Context: when does this pattern apply?
- Fault Model: what kinds of faults are to be detected?
- Strategy: how is the test suite designed and coded?
- Oracle: how can we derive expected results?
- Automation: how much is possible?
- Entry and Exit Criteria: pre- and postconditions to use
- Consequences: advantages and disadvantages

  13. Pattern 1: Extended UC Test
- Intent: build a system-level test suite by modeling essential capabilities as extended use cases.
- Context: applies if most, if not all, essential requirements of the SUT can be expressed as extended use cases.
- Strategy: a UC specifies a family of responses to be produced for specific combinations of external input and system state. This pattern represents these relationships as a decision table (more on how to use decision tables later; a sketch follows). We need a complete inventory of operational variables and of their constraints (see recipe, pp. 724-728), and we should use a traceability matrix like the one of Figure 14.1, p. 729.
An extended UC model can be used to catch faults similar to the ones expected of other models based on combinational logic (see next slide).
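A minimal sketch of such a decision table, with made-up rows rather than Binder's table: each row maps a combination of operational-variable conditions to the expected system response, and doubles as an oracle for the corresponding test cases:

```java
import java.util.*;

// Minimal extended-use-case decision table: each row maps a combination of
// operational-variable conditions to the expected response. Rows are illustrative.
class WithdrawDecisionTable {
    record Row(boolean cardValid, boolean pinOk, boolean fundsSufficient, String expectedResponse) {}

    static final List<Row> ROWS = List.of(
        new Row(false, false, false, "Retain card"),
        new Row(true,  false, false, "Reject PIN, re-prompt"),
        new Row(true,  true,  false, "Refuse withdrawal, show balance"),
        new Row(true,  true,  true,  "Dispense cash, debit account")
    );

    // Oracle: look up the expected response for a given combination.
    static String expected(boolean cardValid, boolean pinOk, boolean fundsSufficient) {
        return ROWS.stream()
                   .filter(r -> r.cardValid() == cardValid && r.pinOk() == pinOk
                             && r.fundsSufficient() == fundsSufficient)
                   .findFirst().map(Row::expectedResponse)
                   .orElse("(combination not specified / don't care)");
    }

    public static void main(String[] args) {
        System.out.println(expected(true, true, true));  // Dispense cash, debit account
        System.out.println(expected(false, true, true)); // (combination not specified / don't care)
    }
}
```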

  14. Pattern 1: Expected Faults
From combinational logic:
- domain faults: usually on the boundary of conditions (go back to the triangle example… expired date… etc.)
- logic faults: the logic of the specification is incorrectly coded; ands, ors and nots got mixed up somewhere (e.g., cash status)
- incorrect handling of don't cares: the output needs to stay the same across different values for DCs (see the sketch below)
- incorrect or missing dependency on preconditions: a UC behaves correctly despite a violated precondition...
From system scope:
- undesirable feature interactions (or is it scenario interactions?), e.g., the ATM shuts down while the user is doing a transaction!
- incorrect output (e.g., wrong balance)
- abnormal termination (e.g., the ATM eats your card…)
- omissions and surprises, e.g., the ATM does not get validated, all your accounts are zeroed…
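For the don't-care fault in particular, a test can fix the condition that makes the other variables irrelevant and then vary them, checking that the response stays constant; the respond(...) stub below stands in for the real SUT:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Sketch of a "don't care" check: when the card is invalid, the ATM response
// should not depend on the other operational variables (they are don't cares).
class DontCareTest {
    // Stub standing in for the system under test: an invalid card is always retained.
    static String respond(boolean cardValid, boolean pinOk, boolean fundsSufficient) {
        if (!cardValid) return "Retain card";
        return pinOk && fundsSufficient ? "Dispense cash" : "Refuse";
    }

    @Test
    void invalidCardResponseIgnoresDontCares() {
        String expected = respond(false, false, false);
        // Vary the don't-care variables; the response must stay the same.
        assertEquals(expected, respond(false, true,  false));
        assertEquals(expected, respond(false, false, true));
        assertEquals(expected, respond(false, true,  true));
    }
}
```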

  15. Pattern 1: Other Fields
- Oracle: expected results come from human intuition.
- Automation: the appendix in Binder's book presents tools (rather distantly) relevant to use case testing.
- Entry criteria: extended UCs must have been validated (via traceability? GRL?); no execution of test cases at system level before their components have been tested (i.e., bottom-up test execution… next slide).
- Exit criteria (as a % of completeness of requirements coverage, computed in the sketch below):
  XUVC = (# of implemented UCs / # of required UCs) × (total # of variants tested / total # of variants) × 100
- Consequences: no one agrees on the level of abstraction of a UC! Leaves out performance, fault tolerance, etc. An extended UC reduces to a decision table.
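The exit criterion is simple arithmetic; a small sketch with invented counts:

```java
// XUVC exit criterion as given on the slide:
// XUVC = (implemented UCs / required UCs) * (variants tested / total variants) * 100.
class XuvcCriterion {
    static double xuvc(int implementedUCs, int requiredUCs, int variantsTested, int totalVariants) {
        return 100.0 * implementedUCs / requiredUCs * variantsTested / totalVariants;
    }

    public static void main(String[] args) {
        // e.g., 8 of 10 required use cases implemented, 30 of 40 variants tested
        System.out.printf("XUVC = %.1f%%%n", xuvc(8, 10, 30, 40)); // XUVC = 60.0%
    }
}
```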

  16. Pattern 2: Covered in CRUD
- Intent: verify that all basic operations are exercised for each class in the system under test.
- Strategy: build a use case / class coverage matrix such as the one of Figure 14.2, p. 733 (C: create; R: read; U: update; D: delete). A sketch follows.
- Exit criterion: all basic operations of each class have been exercised at least once.
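A minimal sketch of the CRUD coverage bookkeeping (class and use case names are invented; coverage is aggregated per class to check the exit criterion):

```java
import java.util.*;

// Sketch of a use case / class CRUD coverage matrix: record which CRUD
// operations the use cases exercise on each class, then report the gaps.
class CrudCoverage {
    enum Op { CREATE, READ, UPDATE, DELETE }

    private final Map<String, EnumSet<Op>> coveredByClass = new TreeMap<>();

    // useCase is kept for readability; coverage is aggregated per class here.
    void cover(String useCase, String clazz, Op op) {
        coveredByClass.computeIfAbsent(clazz, c -> EnumSet.noneOf(Op.class)).add(op);
    }

    // Exit criterion: every class has all four operations exercised at least once.
    Map<String, EnumSet<Op>> missing() {
        Map<String, EnumSet<Op>> gaps = new TreeMap<>();
        coveredByClass.forEach((clazz, ops) -> {
            EnumSet<Op> m = EnumSet.complementOf(ops);
            if (!m.isEmpty()) gaps.put(clazz, m);
        });
        return gaps;
    }

    public static void main(String[] args) {
        CrudCoverage c = new CrudCoverage();
        c.cover("Open account",  "Account", Op.CREATE);
        c.cover("Withdraw cash", "Account", Op.READ);
        c.cover("Withdraw cash", "Account", Op.UPDATE);
        System.out.println(c.missing()); // {Account=[DELETE]}
    }
}
```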

  17. Pattern 3: Allocate Tests by Profile
- Intent: allocate the overall testing budget to each use case in proportion to its relative frequency.
- Context: any time you use Pattern 1, especially in the presence of a combinatorial explosion of possible paths.
- Strategy: you must somehow (!) obtain an operational profile from the potential users; then you merely sort. Table 14.5, p. 738, for the ATM is representative. A sketch follows.
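A minimal sketch of the allocation step, with an invented operational profile rather than Binder's table 14.5:

```java
import java.util.*;

// Sketch of profile-based allocation: split a fixed test budget across use
// cases in proportion to their relative frequency in the operational profile.
class ProfileAllocation {
    static Map<String, Long> allocate(Map<String, Double> profile, long totalTests) {
        Map<String, Long> budget = new LinkedHashMap<>();
        profile.entrySet().stream()
               .sorted(Map.Entry.<String, Double>comparingByValue().reversed()) // most frequent first
               .forEach(e -> budget.put(e.getKey(), Math.round(e.getValue() * totalTests)));
        return budget;
    }

    public static void main(String[] args) {
        Map<String, Double> profile = Map.of(
            "Withdraw cash", 0.60, "Check balance", 0.25, "Deposit", 0.10, "Transfer", 0.05);
        System.out.println(allocate(profile, 200));
        // {Withdraw cash=120, Check balance=50, Deposit=20, Transfer=10}
    }
}
```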

  18. Implementation-Specific System Tests
Several issues are typically downplayed, if not ignored, by use cases:
- configuration (wrt versions of software and hardware)
- compatibility
- setup/shutdown
- performance (see next slide)
- etc.
For human-computer interaction: usability (McGrenere: « bloated UIs »), security, documentation, operator procedure testing, etc.
Beyond system testing: alpha and beta testing (by independent volunteers), acceptance testing (by the real customer), compliance testing (wrt standards and regulations).

  19. About Performance
We need quantitative formulations of performance requirements:
- throughput: number of tasks completed per unit of time
- response time: we need average and worst case
- utilization: how busy is the system?
Other issues:
- we need a worst-case analysis
- performance modeling initially requires lots of magic numbers
- load testing considers how the system responds to increases in input events
- concurrency testing: load testing with concurrent events
- stress testing: the rate of inputs exceeds design limits
- recovery testing: testing recovery from a failure mode
For real-time systems we must distinguish 3 types of events:
- repeating: must be accepted within a certain interval
- intermittent critical: aperiodic input with a response within a fixed interval of time
- repeating critical: a combination of the two previous ones
(See the sketch below for measuring throughput and response time.)
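A minimal sketch of how throughput, average and worst-case response time could be measured against a placeholder operation (real load testing would of course drive the actual SUT, possibly with concurrent clients):

```java
import java.util.*;
import java.util.function.Supplier;

// Simple load-measurement harness: drive the operation under test N times,
// then report throughput, average and worst-case response time.
class PerfProbe {
    public static void main(String[] args) {
        Supplier<Integer> operation = () -> {                 // stand-in for a real request to the SUT
            int acc = 0;
            for (int i = 0; i < 100_000; i++) acc += i;
            return acc;
        };

        int n = 1_000;
        long[] durationsNs = new long[n];
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            long t0 = System.nanoTime();
            operation.get();
            durationsNs[i] = System.nanoTime() - t0;
        }
        double elapsedSec = (System.nanoTime() - start) / 1e9;

        double throughput = n / elapsedSec;                                   // tasks per second
        double averageMs  = Arrays.stream(durationsNs).average().orElse(0) / 1e6;
        double worstMs    = Arrays.stream(durationsNs).max().orElse(0) / 1e6; // worst-case response

        System.out.printf("throughput: %.0f ops/s, avg: %.3f ms, worst: %.3f ms%n",
                          throughput, averageMs, worstMs);
    }
}
```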
