
Product Quality?



Presentation Transcript


  1. Product Quality?

  2. What is product quality? • Quality, simplistically, means that a product should meet its specification • The software product should deliver the required functionality (functional requirements) with the required quality attributes (non-functional requirements)

  3. Problematic for software systems • Some quality requirements are difficult to specify in an unambiguous way • Software specifications are usually incomplete and often inconsistent • Quality attributes are frequently conflicting and increase development costs, so there is a need for weighting and balancing • Different users may have different opinions about what constitutes software quality.

  4. Quality attributes • Correctness: the degree to which the software product performs the required functions accurately. • Reliability: the degree to which the software behaves well and in the way the users expect. • Robustness: implies that the software is unlikely to fail or break. • Performance: relates to the response time of the software system. • Usability: relates to the user-friendliness of the software.

  5. Quality attributes • Maintainability: correcting software errors and deficiencies. • Scalability: the ease with which the software can be scaled up or evolved in response to growth in demand for its functionality. • Reusability: defines the level to which the software components can be used for construction of other products. • Portability: can the software run on various hardware/software platforms without any modification, or after undergoing minor customization or parameterization?

  6. Quality control • Quality control is mostly about testing the quality of a product (to eliminate problems). • Quality assurance is about building quality into the product.

  7. Testing – V model

  8. “V” Model • Each phase has a corresponding test or validation counterpart: Requirements Analysis pairs with Acceptance Test, System Design with Integration Test, Program Design with Unit Test; Implementation sits at the point of the V.

  9. Sawtooth Model (Bruegge) • [Diagram: the V-model phases — Requirements Analysis, System Design, Program Design, Implementation, Unit Test, Integration Test, Acceptance Test — interleaved with client demos of Prototype 1 and Prototype 2] • Quality is guaranteed at each project stage.

  10. Unit testing • The most ‘micro’ scale of testing • A unit = the smallest testable software component • Objects and methods • Procedures / functions • Performed by the programmer • A tester can help • Requires detailed knowledge of the internal program design and code • The units are tested in isolation • Ensures the component is working according to the detailed design/build specifications of the module • Also known as component, module, or program testing.
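A minimal unit-test sketch for the slide above; the `discount` function and the test names are illustrative, not from the slides — the point is that one small unit is exercised in isolation against its detailed specification.

```python
import unittest

def discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountTest(unittest.TestCase):
    """Exercises the unit in isolation, with no other components involved."""

    def test_typical_discount(self):
        self.assertEqual(discount(200, 25), 150)

    def test_zero_discount(self):
        self.assertEqual(discount(99, 0), 99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100, 120)
```

The suite would be run with `python -m unittest`, typically by the programmer who wrote the unit.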

  11. Unit test (white box) • White-box testing • Sometimes called structural testing • The tester studies the code and decides on data inputs to exercise all program statements (not all path combinations) • Test coverage measures ensure that all statements have been executed at least once • Derivation of test cases according to program structure; knowledge of the program is used to identify additional test cases • Also suitable for design models and specification documents (walkthroughs and inspections)

  12. Integration testing • Testing of more than one (tested) unit together to determine whether they function correctly • Focus on interfaces • Communication between units • Done using the integration test design prepared during the architecture design phase • Helps assemble a whole system incrementally, ensuring the correct ‘flow’ of data from the first through the final component • Done by developers/designers and testers in collaboration • Also called interface testing or assembly testing.

  13. System testing • Testing the system as a whole: black-box-type testing based on the overall requirements specification; covers all combined parts of a system • Ensures that the system meets all functional and business requirements • Focus: verifying that specifications are met, and validating that the system can be used for the intended purpose • The system test design is derived from the system design documents and is used in this phase • Can involve a number of specialized types of tests to check performance, stress, documentation, etc.; sometimes testing is automated using testing tools • Done by an independent testing group

  14. System testing • Black-box testing (testing to specifications) • An approach to testing where the program is considered a ‘black box’ that takes some inputs and produces some outputs • The tester does not know, or chooses to ignore, the internal workings of the program • The test cases are based on the system specification • Test planning can begin early in the software process • Testing is done by feeding the test unit with data inputs and verifying that the expected output is produced • Also applicable to constraint testing (performance and security) and missing functionality.

  15. Acceptance testing • Determines whether a system satisfies its acceptance criteria and business requirements • Similar to system testing in that the whole system is checked, but the important difference is the change in focus • Done by real business users • Enables the customer to decide whether to accept the system • Also called beta testing, application testing, or end-user testing • Approach • Should be performed in the real operating environment • Customers should be able to perform any test based on their business processes • Ends with final customer sign-off.

  16. Testing techniques • It is never possible to test all possible data inputs or code execution paths. Testing techniques can be classified according to five criteria: • Visibility • Automation • Partitioning • Coverage • Scripting

  17. Testing techniques • Visibility • Automation

  18. Partitioning: equivalence partitioning • Groups data inputs (and, implicitly, data outputs) into partitions constituting homogeneous test targets (testing with one member implies testing with every other member of the same partition) • Supported by black-box testing
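A sketch of equivalence partitioning in Python; the `shipping_cost` function and its three partitions are hypothetical, chosen only to show the idea of testing one representative per partition.

```python
def shipping_cost(weight_kg):
    """Hypothetical function: cost depends only on which partition the weight falls in."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5
    if weight_kg <= 10:
        return 9
    return 20

# One representative per equivalence partition: testing one member is
# assumed to stand for testing every other member of that partition.
partitions = {
    "light (0, 1]":   (0.5, 5),
    "medium (1, 10]": (4.0, 9),
    "heavy (> 10)":   (25.0, 20),
}
for name, (sample, expected) in partitions.items():
    assert shipping_cost(sample) == expected, name
```

Note that the test is purely black-box: it uses only the input-output specification, never the function body.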

  19. Boundary value • An additional data analysis technique • Boundary values are extreme cases within equivalence partitions • Example: for the partition of integers from 1 to 100, boundary value analysis recommends tests on the values at the edges, that is: 0, 1, 2 as well as 99, 100, 101
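The boundary-value recipe — test just below, at, and just above each edge of a partition — can be sketched as a small generator; the `in_range` predicate is an assumed stand-in for the system under test.

```python
def boundary_values(low, high):
    """Boundary-value analysis: just below, at, and just above each edge."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def in_range(n):
    """Hypothetical predicate for the partition of integers 1..100."""
    return 1 <= n <= 100

# For the partition 1..100 this exercises the edges where
# off-by-one errors typically hide.
edge_cases = boundary_values(1, 100)
results = {v: in_range(v) for v in edge_cases}
```

Here `results` records that 0 and 101 fall outside the partition while 1, 2, 99, and 100 fall inside it.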

  20. Coverage • Determines how much code is going to be exercised by a white-box test • Operation coverage: ensure that each operation in the code is exercised at least once by the white-box test • Path coverage • Numbering the possible execution paths in the program (infinite in a large program) • Exercising them one by one • For a large program, choose the most critical and frequently used ones • Testing can be manual or automatic

  21. Manual testing • A human tester interacts with the application under test according to a predefined test script and observes the results • A test script defines step-by-step testing actions and expected outcomes • Use cases are used to write test scripts • Problems: • Frequently the output is not presented on the screen • Live data are not predefined • Expensive

  22. Automated testing • Uses software testing tools to execute large volumes of tests without human participation • Tools can produce post-test reports • Automated testing can be divided into: • Regression testing • Exercising testing

  23. Regression testing • Repetitive execution of the same test scripts on the same data, to be sure that the system has not been broken by successive changes to the code (changes not related to the tested functionality) • Execution of the scripts at scheduled test times.
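A minimal regression-testing sketch, assuming an illustrative `add` function and a recorded set of cases: the same script runs on the same data after every change, so a break in previously working behaviour is caught immediately.

```python
# Recorded cases: fixed inputs paired with the outputs the system
# produced when the behaviour was last known to be correct.
REGRESSION_CASES = [
    ((2, 3), 5),
    ((0, 0), 0),
    ((-1, 1), 0),
]

def add(a, b):
    """Hypothetical unit whose established behaviour the suite protects."""
    return a + b

def run_regression():
    """Return the cases whose current output differs from the recorded one."""
    return [(args, expected, add(*args))
            for args, expected in REGRESSION_CASES
            if add(*args) != expected]

failures = run_regression()  # rerun unchanged at every scheduled test time
```

An empty `failures` list means no unrelated change has broken this functionality.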

  24. Exercising testing • A tool automatically and randomly generates various possible actions instead of the user • Like a ‘mad user’ hitting any possible key on the keyboard, selecting any possible menu item, etc.
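The ‘mad user’ idea can be sketched as a random event generator driving a toy input handler; both `monkey_test` and `handle_key` are assumed names, not part of any real tool.

```python
import random

def monkey_test(handler, keys, presses=1000, seed=42):
    """Fire randomly chosen key events at the handler, like a 'mad user'.

    A fixed seed makes any failure reproducible; the handler must
    survive every sequence without raising."""
    rng = random.Random(seed)
    for _ in range(presses):
        handler(rng.choice(keys))

# Hypothetical system under test: a tiny text buffer.
state = {"buffer": ""}

def handle_key(key):
    if key == "BACKSPACE":
        state["buffer"] = state["buffer"][:-1]
    else:
        state["buffer"] += key

monkey_test(handle_key, list("abc") + ["BACKSPACE"])
```

Whatever random sequence arrives, the buffer should remain a valid string containing only characters that were actually typed.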

  25. Test planning • Part of the quality management plan • Defines the testing schedule, budget, tasks (test cases) and resources • The test plan includes testing of code and other project artifacts • Specifies which test cases should be conducted • Human and material resources should be allocated • The test database is created and test software tools are installed

  26. Testing Approaches • We will look at a small sample of approaches for testing • White-box testing • Control-flow-based testing • Loop testing • Data-flow-based testing • Black-box testing • Equivalence partitioning

  27. Control-flow-based Testing • A traditional form of white-box testing • Step 1: From the source, create a graph describing the flow of control • Called the control flow graph • The graph is created (extracted from the source code) manually or automatically • Step 2: Design test cases to cover certain elements of this graph • Nodes, edges, paths

  28. Example of a Control Flow Graph (CFG) • [Figure: CFG with nodes numbered 1–8 for the code below] • Code fragment: s := 0; d := 0; while (x < y) { x := x + 3; y := y + 2; if (x + y < 100) s := s + x + y; else d := d + x - y; }
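The fragment on the slide can be rendered as runnable Python so the paths through the graph can be traced concretely; the assumption here is that the loop encloses the if/else, matching the node numbering used by the coverage examples that follow.

```python
def cfg_example(x, y):
    """Python rendering of the slide's code fragment (assumed structure)."""
    s, d = 0, 0                  # entry node
    while x < y:                 # loop predicate node
        x, y = x + 3, y + 2      # loop body
        if x + y < 100:          # branch predicate node
            s = s + x + y        # true branch
        else:
            d = d + x - y        # false branch
    return s, d

r1 = cfg_example(5, 5)   # loop never entered (path 1-2-exit)
r2 = cfg_example(0, 2)   # loops exactly twice, true branch both times
```

With `x = 5, y = 5` the loop condition fails immediately and `(0, 0)` is returned; with `x = 0, y = 2` the gap shrinks by one per iteration, giving two iterations that both take the true branch, so `d` stays 0.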

  29. Statement Coverage • Basic idea: given the control flow graph define a “coverage target” and write test cases to achieve it • Traditional target: statement coverage • Need to write test cases that cover all nodes in the control flow graph • Intuition: code that has never been executed during testing may contain errors • Often this is the “low-probability” code

  30. Example • Suppose that we write and execute two test cases • Test case #1: follows path 1-2-exit (i.e., we never take the loop) • Test case #2: 1-2-3-4-5-7-8-2-3-4-5-7-8-2-exit (loop twice, and both times take the true branch) • Do we have 100% statement coverage?

  31. Branch Coverage • Target: write test cases that cover all branches of predicate nodes • True and false branches of each IF • The two branches corresponding to the condition of a loop • All alternatives in a SWITCH statement • In modern languages, branch coverage implies statement coverage

  32. Branch Coverage • Statement coverage does not imply branch coverage • Can you think of an example? • Motivation for branch coverage: experience shows that many errors occur in “decision making” (i.e., branching) • Plus, it subsumes statement coverage.
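One classic answer to the slide's question — statement coverage without branch coverage — is an `if` with no `else`; the `clamp_negative` function below is an illustrative example, not taken from the slides.

```python
def clamp_negative(x):
    """An if without an else: one test can execute every statement
    yet still miss a branch."""
    if x < 0:        # predicate node: has a true and a false branch
        x = 0        # the only statement controlled by the predicate
    return x

# A single test case with a negative input covers 100% of the statements...
case_true = clamp_negative(-5)
# ...but the false branch (x >= 0, skipping the body) is never taken.
# Branch coverage additionally requires something like:
case_false = clamp_negative(3)
```

Both branches of the predicate must be exercised before branch coverage is complete, even though every statement was already hit by the first case.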

  33. Example • Same example as before • Test case #1: follows path 1-2-exit • Test case #2: 1-2-3-4-5-7-8-2-3-4-5-7-8-2-exit • What is the branch coverage?

  34. Black-box Testing • Unlike white-box testing, here we don’t use any knowledge about the internals of the code • Test cases are designed based on specifications • Example: search for a value in an array • Postcondition: return value is the index of some occurrence of the value, or -1 if the value does not occur in the array • We design test cases based on this spec
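The search example above can be turned into a black-box test: the oracle below is derived purely from the stated postcondition, with no knowledge of the search algorithm's internals (the function names are illustrative).

```python
def check_search(search, values, target):
    """Spec-based oracle: the result is an index of some occurrence of
    `target` in `values`, or -1 if `target` does not occur."""
    result = search(values, target)
    if target in values:
        assert values[result] == target
    else:
        assert result == -1

# Any implementation can be plugged in; here, a trivial linear search.
def linear_search(values, target):
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

check_search(linear_search, [4, 8, 15, 16], 15)   # target present
check_search(linear_search, [4, 8, 15, 16], 42)   # target absent
check_search(linear_search, [7, 7, 7], 7)         # any occurrence is acceptable
```

Because the postcondition allows *some* occurrence, the oracle deliberately does not pin down which index is returned for duplicates: the same test cases would accept a binary search returning a different valid index.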

  35. Test case design • Post-conditions:
