
Test, Verifica e Validazione, Qualità del Prodotto Software




  1. Test, Verifica e Validazione, Qualità del Prodotto Software G. Berio

  2. Process framework • Framework activities • work tasks • work products • milestones & deliverables • QA checkpoints • Umbrella Activities. Context: intermediate products vs. parts of the Software Product (delivered to the customer). • Communication • Planning • Modeling • Requirements analysis • Design • Construction • Code generation • Testing • Deployment. Preventing non-quality: quality forecasting on work products. Evaluating quality: quality assessment on deliverables. System engineering. E.g., the analysis model is certainly a work product and could also be a deliverable

  3. Topics • Testing in general • Conventional Unit Code Testing • Static techniques (for Verification) and Conventional Unit Code Testing • OO Unit Code Testing • Testing in the large • Testing quality attributes other than correctness, reliability, robustness, safety, and performance

  4. Testing: Definition(s) Pressman writes: “Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user” (Dijkstra, 1987). Better (SWEBOK): Testing is an activity performed for evaluating product quality, and for improving it, by identifying defects and problems. Software testing consists of the dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite executions domain (inputs), against the expected behavior. Pressman's Italian translation: Test = Collaudo

  5. Software product: test what should be delivered to the customer. Why we test: expected quality attributes. What we test: • Code (components) • User manual • Installed code • Technical documentation • Software and system requirement specifications • Design model

  6. Component <> Unit [Diagram: a component composed of several units, e.g. Next request?(), Create(), Evaluate Cost, Cost, Evaluate Cost & Availability]

  7. Symptoms (Failure) & Causes (Fault, Defect) • symptom and cause may be geographically separated • symptom may disappear when another problem is fixed • cause may be due to a combination of non-errors • cause may be due to a system or compiler error • cause may be due to assumptions that everyone believes • symptom may be intermittent [Diagram: cause (fault, defect) leading to symptom (failure)] Elaborated from Pressman

  8. Testing and lack of "continuity" • Testing behaviors of software code by examining a “set of test cases“ • Impossible to extrapolate behavior of software code from a finite set of test cases • No continuity of behavior • it can exhibit correct behavior in infinitely many cases, but may still be incorrect in some cases

  9. Testing Categories • Validation (acceptance) testing (*) • To demonstrate to the developer and the customer that the software meets its requirements; • A successful test shows that the software operates as intended. • Defect testing • To discover defects in the software where its behavior is incorrect or not in conformance with its specification; • A successful test is a test that makes the software perform incorrectly and so exposes a defect in the software itself. (*) Pressman's Italian translation: Validation testing = Collaudo di convalida or Collaudo di validazione

  10. What is a “Good” Test Case? • A good defect test case has a high probability of finding an error (failure) • A good test case is not redundant. • A good test case should be neither too simple nor too complex • A good test case should normally be repeatable (i.e. leading to the same results) Elaborated from Pressman

  11. When to stop defect testing? • If defect testing (on a set of test cases) does not detect failures, we cannot conclude that the software is defect-free • Still, we need to do testing driven by sound and systematic principles…

  12. Expected Behavior in Defect Testing • The expected behavior for a unit should be obtained from: • The requirement specification • However, the components introduced during design have no direct link to the requirement specification, but • In the design they nevertheless have their own, more or less precise specification (as opposed to the code): • Automata • Pre/post conditions • Descriptions of input and output types • Sequence diagrams • Etc. • We will call this specification the (software) component specification; referring to the specification of all components, one often speaks generically of the software specification (corresponding to the design model, just as the analysis model corresponds to the requirement specification) • The component specification must include the specification of the unit on which the test is to be executed (or allow the unit specification to be derived); from the unit specification one should obtain the expected behavior

  13. Acceptance Testing • To be performed by the customer or end users • The expected behavior (of the whole software) should be fixed from the beginning: a special section in the requirement (specification) document should explicitly be devoted to “how to perform acceptance testing”

  14. Example [Diagram] User requests the elevator. Requirements-specification use cases (stop, start), obtained from…. Acceptance use case: elevator requested for the first time. Pre: user presses the button. Post: elevator arrives within 1 minute

  15. Testability (Verificabilità) • Testability: a software quality attribute indicating whether testing can be conducted on the code • For acceptance testing, the test cases must have been defined in the requirements document • In the theory of defect testing, the term ORACLE is used to indicate that the expected behavior is given by an oracle

  16. Testing Strategy • We begin by ‘testing-in-the-small’ and move to ‘testing-in-the-large’ • The software architecture is the typical way for incrementally driving the ‘testing-in-the-large’ • For non OO software (conventional components): • The module or part of module (unit) is the initial focus (in the small) • Integration of modules • For OO software (OO components): • the OO class or part of class (unit) that encompasses attributes and operations, is the initial focus (in the small) • Integration is communication and collaboration among classes Elaborated from Pressman

  17. Software Testing Strategy • In the small: component testing (unit test, integration test), i.e. defect testing • In the large: system test and validation (acceptance) test. Elaborated from Pressman

  18. Who Tests the Software? • The developer: understands the system, but will test "gently", and is driven by "delivery" • The independent tester: must learn about the system, but will attempt to break it, and is driven by quality. From Pressman

  19. Effort to Repair Software Verification and Validation, i.e. the control over intermediate products. Systematic: use of models and of systematic steps and increments between models (defects are detected at different stages)

  20. Effort to Repair Software • It remains easier to build the correct software: the main issue of Software Engineering… eliciting and specifying requirements and moving to design are ways towards building correct software as well. • Testing, Verification and Validation (and also Quality Assurance) should only confirm that the performed work is in the right direction.

  21. Verification and Validation • Verification applies to the requirement specification, the design model, etc., and not necessarily (only) to the code; it is oriented to relations between intermediate products (e.g., whether the design model is equivalent to the analysis model) • (build the (work) product in the right way) • Validation comprises assessing whether the requirements have been well understood (called requirements validation by Pressman) and, at any time, whether what is being done corresponds to what the customer requested • (build the right (work) product)

  22. Testing the Code, V&V • Testing the code means that the code is executed, with the aim of demonstrating either the existence of defects or the absence of defects in some predetermined cases • Verification and validation (V&V) denotes a set of activities performed at various points of requirements engineering and design engineering, not only on the code • Verification and validation may be based on static analysis techniques (automatic or manual) applied to the code, but more generally they are performed on intermediate products; sometimes, the objective of verification is to prove correctness, i.e. to prove properties that formally translate the quality attributes considered • Since testing normally requires executing the code, it can be considered to coincide with the dynamic techniques of software verification and validation (but V&V is more general…) • Verification and validation are, in turn, part of product quality assurance, strongly focused on certain quality attributes (correctness, reliability, robustness, safety, and performance). Pressman's Italian translation: Convalida = Validazione

  23. Verification and Validation Techniques • Dynamic --- they execute the software code, so they are typically Testing techniques, classified as: • Black box • White box (or Glass box) • Static --- they do not execute the software code, so they are typical of V&V and, in turn, part of quality assurance; they are divided into: • Automated • Model checking • Correctness proofs • Symbolic execution • Data flow analysis • Manual (formal technical reviews) • Inspection • Walkthrough • Static and dynamic techniques can be applied together (i.e., they are not alternatives); sometimes, the result of a static technique can be used by a dynamic technique

  24. Summary Oriented to: correctness, reliability, robustness, safety, and performance, and to work products or deliverables such as the analysis model, the design model, and the code • Communication • Planning • Modeling • Requirements analysis • Design • Construction • Code generation • Testing the code • Deployment. Quality assessment and forecasting = Quality Assurance. Testing on deliverables; Verification and Validation on work products. Oriented to: correctness, reliability, robustness, safety, and performance tied to the code. Any quality attribute

  25. Foundations of Code Testing

  26. Definitions (1) • P (unit code), D (input domain), R (output domain): • P: D → R • Expected behavior defined by OR ⊆ D × R: • P(d) behaves as expected if <d, P(d)> ∈ OR • P behaves as expected if all P(d) behave as expected
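The relational definitions above can be sketched in code. A minimal Python sketch, where the unit P and the oracle relation OR are hypothetical toy examples (not from the slides):

```python
# Relational view of testing: P : D -> R is the unit under test,
# OR ⊆ D × R is the expected behavior, given here as a predicate.
# Both P and OR below are assumed toy examples for illustration.

def P(d):
    """Hypothetical unit under test: squares an integer."""
    return d * d

def OR(d, r):
    """Oracle relation: <d, r> ∈ OR iff r is an acceptable output for d."""
    return r == d ** 2

def behaves_as_expected(unit, oracle, d):
    """P(d) behaves as expected iff <d, P(d)> ∈ OR."""
    return oracle(d, unit(d))

# P behaves as expected on a finite test set if every test case does;
# as slide 8 notes, this never extrapolates to the whole (infinite) domain D.
test_set = [0, 1, -3, 10]
assert all(behaves_as_expected(P, OR, t) for t in test_set)
```

A defective unit (e.g. one returning d * d + 1) would make `behaves_as_expected` false for some test case, which is exactly what defect testing tries to achieve.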

  27. Definitions (2) • Test case t (in Italian, caso di test): an element of D • Defect test(ing) is successful if at least one test case exposes unexpected behavior (or even if all test cases do) • Acceptance test(ing) is successful if P(t) behaves as expected for all test cases t fixed with the customer

  28. Definitions (3) • Ideal set of test cases (for defect testing): • if P does not behave as expected, there is a test case t in the set such that P(t) does not behave as expected • If P: D → R is the usual behavior of a program, there is no algorithm to derive test cases that would prove program correctness • However, a set of test cases approximating an ideal set as much as possible should be designed, by designing test cases that are members of this set

  29. Test Case Design for Defect Testing "Bugs lurk in corners and congregate at boundaries ..." Boris Beizer • OBJECTIVE: to discover defects • COVERAGE: in a complete manner • CONSTRAINT: with a minimum of effort and time

  30. Software Defect Testing Methods • black-box methods • white-box methods. Note: methods are not the same as techniques in software engineering; however, here we consider method = technique according to Pressman's view: both terms will be used in this sense.

  31. Black-box vs White-box • Black-box testing can expose defects such as missing functionality or functionality not behaving according to its expected behavior • tests what the unit is supposed to do • It is less suitable for exposing defects such as unreachable code, hidden functionality (i.e., what is unexpected), or run-time errors raised by the code • White-box testing can expose defects in unit code, even disregarding the expected behavior • tests what the unit does • It is suitable for exposing (especially unexpected) defects in code (and sometimes in design) but cannot find missing or incomplete functionality • Therefore, both methods are required.

  32. Black-box vs White-box • Black-box: tests the unit code on its expected behavior; needs the expected behavior (i.e., at least the module specification); can find missing and incomplete behavior • White-box: tests the (control) structure of the unit code; does not necessarily need the expected behavior; can't find missing or incomplete behavior

  33. Conventional Unit Defect Testing

  34. Black-box methods derive test cases from the expected behavior

  35. Black-Box Testing • Unit viewed as a Black-box, which accepts some inputs and produces some outputs • Test cases are derived solely from the expected behavior, without knowledge of the internal structure of the unit code • Main problem is to design (a minimal set of) test cases increasing the probability of finding failures, if any

  36. Black Box Test-Case Design Techniques • Equivalence class partitioning • Boundary value analysis • Cause-effect graphing • Other

  37. Equivalence Class Partitioning • Partition the unit input domain into equivalence classes (i.e., assuming that data in a class are treated identically by the unit code) • The basis of this technique is that a test of a representative value of each class is equivalent to a test of any other value of the same class. • Identify valid as well as invalid equivalence classes, i.e., data not being part of the unit input domain (indeed, parameter types are usually an incomplete specification of the input domain; additionally, the expected behavior should be tested in situations where other modules do not provide correct input, addressing robustness and simplifying the integration testing) • For each equivalence class, generate one test case to exercise the unit with an input representative of that class

  38. Example (based on a parameter) • Possible input for x of type INT but with the additional specification: 0 <= x <= max • valid equivalence class for x: 0 <= x <= max • invalid equivalence classes for x: x < 0, x > max • 3 test cases can be generated. It might be part of the expected behavior (parameter types are usually an incomplete idea of the possible input); additionally, the test also evaluates robustness; finally, integration testing is simplified if we also know how a unit behaves in unexpected situations
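The three classes of this example can be turned into concrete test cases, one representative per class. A Python sketch, assuming max = 100 and a hypothetical unit `f` that rejects out-of-range input (both assumptions, not from the slides):

```python
# Equivalence class partitioning for the specification 0 <= x <= max.
# MAX = 100 is an assumed value; f is a hypothetical unit under test.
MAX = 100

def f(x):
    if not (0 <= x <= MAX):        # invalid classes: x < 0 and x > MAX
        raise ValueError("x out of range")
    return x // 2                  # arbitrary behavior on the valid class

# One representative test case per equivalence class (3 test cases in total):
valid_rep = 50                     # valid class: 0 <= x <= MAX
invalid_reps = [-7, MAX + 42]      # invalid classes: x < 0, x > MAX

assert f(valid_rep) == 25          # valid representative is accepted
for x in invalid_reps:
    try:
        f(x)
        raise AssertionError("expected rejection of invalid input")
    except ValueError:
        pass                       # invalid representatives are rejected
```

By the partitioning assumption, testing 50 stands in for every value in 0..100, and -7 and 142 stand in for the two invalid classes.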

  39. Guidelines for Identifying Equivalence Classes • Input: a range of values (e.g., 1 - 200) → Valid eq. classes: one (a value within the range); Invalid eq. classes: two (one outside each end of the range) • Input: a number N of valid values → Valid: one; Invalid: two (less than N, more than N) • Input: a set of input values each handled differently by the program (e.g., A, B, C) → Valid: one eq. class per value; Invalid: one (e.g., any value not in the valid input set)

  40. Guidelines for Identifying Equivalence Classes • Input: any condition (e.g., ID_name must begin with a letter) → Valid eq. classes: one (e.g., it is a letter); Invalid eq. classes: one (e.g., it is not a letter) • If you know that elements in an equivalence class are not handled identically by the program, split the equivalence class into smaller equivalence classes. • In very special cases, some of the generated classes cannot be tested (just because they are explicitly forbidden by the program)

  41. Identifying Test Cases for Equivalence Classes • Assign a unique identifier to each equivalence class • Until all valid equivalence classes have been covered by test cases, write a new test case covering as many of the uncovered valid equivalence classes as possible. • Cover each invalid equivalence class with a separate test case.

  42. Boundary Value Analysis • Design test cases that exercise values lying at the boundaries of an input equivalence class, and for situations just beyond the ends. • Also identify output equivalence classes, and write test cases to generate output at the boundaries of the output equivalence classes, and just beyond the ends. • Example: input range 0 <= x <= max. Test cases with values: 0, max (valid inputs); -1, max+1 (invalid inputs)
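For the same 0 <= x <= max range, boundary value analysis adds the edge values themselves and the values just beyond them. A sketch under the same assumptions as before (MAX = 100, hypothetical range-checking unit `f`):

```python
# Boundary value analysis for 0 <= x <= MAX.
# MAX = 100 is assumed; f is the same hypothetical range-checking unit.
MAX = 100

def f(x):
    if not (0 <= x <= MAX):
        raise ValueError("x out of range")
    return x // 2

# Valid boundaries are the ends of the range; invalid values lie just beyond.
for x in (0, MAX):                 # 0 and max must be accepted
    assert isinstance(f(x), int)
for x in (-1, MAX + 1):            # -1 and max+1 must be rejected
    try:
        f(x)
        raise AssertionError("boundary violation not detected")
    except ValueError:
        pass
```

A typical boundary defect, such as writing `0 < x` instead of `0 <= x` in the range check, would be missed by a mid-range representative like 50 but caught immediately by the test case x = 0.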

  43. Testing boundary values • Equivalence classes partition the input domain into classes, assuming that behavior is "similar" for all data within a class • Some typical code defects, however, happen to be precisely at the boundary between different classes

  44. Test cases and the expected behavior • Introduce test cases to cover all the equivalence classes and the boundary values • Introduce the expected behavior (output) for each test case • This is the most important point of testability: a program needs to be testable, i.e., the expected behavior should be known (i.e., how inputs and outputs are associated in black-box testing, for instance) • The expected output can be “undefined”, but this should not happen, especially when invalid equivalence classes are not explicitly forbidden (not applicable in general)
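Pairing each test case with its expected output, as the slide requires, can be written as a small oracle table. A sketch reusing the same assumed unit (MAX = 100, hypothetical `f`); the expected outputs in the table follow from f's assumed behavior, not from the slides:

```python
# Each test case is paired with its expected outcome: a minimal oracle table
# covering equivalence classes and boundary values together.
# f is the hypothetical unit with specification 0 <= x <= MAX, as before.
MAX = 100

def f(x):
    if not (0 <= x <= MAX):
        raise ValueError("x out of range")
    return x // 2

# (input, expected) pairs; ValueError marks inputs that must be rejected.
cases = [(0, 0), (50, 25), (MAX, 50), (-1, ValueError), (MAX + 1, ValueError)]

for x, expected in cases:
    if expected is ValueError:
        try:
            f(x)
            raise AssertionError(f"{x}: rejection expected")
        except ValueError:
            pass
    else:
        assert f(x) == expected, f"{x}: got {f(x)}, expected {expected}"
```

Making the expected output explicit for every case, including the invalid ones, is precisely what makes the unit testable in the sense of slide 15: the table is a finite, checkable oracle.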

  45. Example

  46. Applying equivalence class partitioning

  47. Valid and invalid

  48. Valid and invalid

  49. Invalid; valid already covered

  50. Valid and invalid
