
CIS224 Software Projects: Software Engineering and Research Methods



Presentation Transcript


  1. CIS224 Software Projects: Software Engineering and Research Methods
  David Meredith
  d.meredith@gold.ac.uk
  www.titanmusic.com/teaching/cis224-2006-7.html
  Lecture 7
  Product Quality: Verification, Validation and Testing
  (Based on Stevens and Pooley, 2006, Chapter 19)

  2. Introduction
  • To build high-quality systems, we need techniques that focus on
    • The quality of the product
    • The quality of the process
  • Product-focused techniques for ensuring software is of high quality:
    • Verification
      • Making sure we’ve “built the product right”
      • Product satisfies design and specified requirements
      • “The purpose of Verification (VER) is to ensure that selected work products meet their specified requirements.” (Capability Maturity Model Integration for Development, version 1.2)
    • Validation
      • Making sure we’ve “built the right product”
      • Design satisfies real-world requirements and fits intended usage
      • “The purpose of Validation (VAL) is to demonstrate that a product or product component fulfills its intended use when placed in its intended environment.” (Capability Maturity Model Integration for Development, version 1.2)
  • Testing is used for both verification and validation

  3. Quality review
  • Software is high quality if it meets the users’ requirements
  • Useful and usable
    • Makes people’s lives easier or better
  • Reliable
    • Few bugs
    • Must be easy to test the software thoroughly
  • Flexible
    • Easy to change the software in response to requirements change
    • Easy to maintain and debug the software
  • Affordable
    • Not too expensive to buy and maintain
    • Implies easy and quick to develop and maintain
  • Available
    • Available for different platforms
    • Development must actually be completed, otherwise the product is never available

  4. How can high quality software be achieved?
  • Need to combine two orthogonal approaches
    • Focus on product
      • Testing for verification and validation
    • Focus on process
      • Faults in product may be due to faults in process
      • Need to evaluate process frequently and improve it when necessary

  5. Verification
  • Involves testing the product against its specification
  • If using UML, then, in the testing phase of each iteration, we might
    • Verify that use cases satisfy requirements
    • Verify that classes can provide use cases
    • Verify that code correctly implements classes
  • Includes “sanity checks” that ensure the project satisfies basic requirements, e.g.,
    • code compiles cleanly
    • program runs without errors (e.g., segmentation faults, unhandled exceptions)
    • UML diagrams are syntactically correct

  6. Verification
  • Verification can be
    • Informal: developer manually compares code with design and ensures they correspond
      • Best if the tester is not the same person as the programmer
      • Devise a checklist of the most common problems to look for
      • Reviews
    • Formal: formal proof of equivalence of code and design
      • Rarely done in practice
      • Only possible if the two things being compared have formal, well-defined semantics
      • UML does not have formal semantics, therefore formal proof of equivalence of UML with code is impossible in principle
      • UML can be guaranteed syntactically correct if produced using a UML modelling tool that enforces correct syntax
      • Equivalence of UML with code presumably ensured if using MDA or executable UML

  7. Validation
  • Validation is harder than verification because you don’t know exactly what you’re looking for
  • Have to identify ways in which the system is less useful to the customer than it should be
    • Implies customer involvement is required
  • Worst errors arise from communication failure between customer and developer on what is required
  • Need to have frequent customer evaluations, as in an iterative process
  • May need to rapidly build a throw-away prototype to ensure that customer and developer have the same idea of what is required

  8. Usability
  • System must not just provide the required functionality
  • Must also allow users to exploit that functionality effectively – they must be able to
    • Carry out their tasks quickly and easily with minimum stress
    • Recover from errors
  • Developers are bad at evaluating the usability of their own software
    • Real users make mistakes that developers could never imagine
    • If you understand how something can be done, it is hard to imagine not understanding it
    • Developers are usually not typical users
      • They typically understand the system better than users do, and the domain less well than users do
  • Employ an expert in usability and consult users frequently
  • Thomas Landauer proposes “user-centred design”
    • One day’s focus on usability can increase the work efficiency of a system by 25%
    • Landauer, Thomas K. (1996). The Trouble with Computers: Usefulness, Usability and Productivity. MIT Press, Cambridge, MA.

  9. Testing
  • Testing contributes more to verification than to validation
  • Can test designs and system components as well as complete running systems
  • Testing has three aims:
    • To find bugs (most important aim)
    • To convince the customer that there are no bugs
    • To provide information for system evolution, e.g.
      • Information on future requirements
      • Information on present performance

  10. Testing
  • A successful test is one that finds a bug
  • Different types of testing
    • Usability testing: checks whether the system is easy to use effectively
    • Module (unit) testing: tests individual modules of a system
      • in OO, modules will usually be classes
    • Integration testing: tests that parts of the system work together properly
    • System testing: checks that the system meets functional and non-functional requirements
    • Acceptance testing: validates that the system is fit for purpose
    • Performance testing: checks satisfactory performance of a module, collaborating modules or the system
    • Stress testing: puts extreme loads on the system to ensure that it degrades gracefully and does not fail catastrophically
    • Regression testing: tests that a change to a system does not introduce new bugs. Includes
      • Module tests for changed modules
      • Integration tests for subsystems including changed modules
      • Some whole-system tests (usability, system, acceptance, performance, stress)

  11. Choosing and carrying out tests
  • Tests can be
    • Black box
      • Chosen by looking at the specification of the thing to be tested
    • White box
      • Chosen by looking at the structure of the thing to be tested
  • Petschenik, N. H. (1985). Practical priorities in system testing. IEEE Software, 2(5), pp. 18-23, argues that it is more important to
    • Test the whole system than to test its components
    • Check that the system can still do what it could do before than to check that new features work
    • Test typical cases than to test boundary value cases
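  For illustration, a minimal Java sketch contrasting the two ways of choosing tests; the shippingCost() function and its rules are hypothetical. The black-box cases come straight from the stated specification (“orders of 100 or more ship free”), while the white-box case is only found by reading the code and noticing the express-delivery branch.

    // Hypothetical unit under test: shipping cost with a free-shipping threshold
    // and a surcharge for express delivery.
    class Shipping {

        static int shippingCost(int orderTotal, boolean express) {
            if (orderTotal >= 100) return 0;   // free-shipping threshold (stated in the spec)
            if (express)           return 15;  // branch only visible by reading the code
            return 5;
        }

        static void check(boolean condition, String label) {
            if (!condition) throw new AssertionError("failed: " + label);
        }

        public static void main(String[] args) {
            // Black-box tests: a typical value and the boundary value, both taken from the specification.
            check(shippingCost(40, false) == 5,  "typical order pays standard rate");
            check(shippingCost(100, false) == 0, "boundary: order of exactly 100 ships free");
            // White-box test: exercises the express branch revealed by the structure of the code.
            check(shippingCost(40, true) == 15,  "express order pays surcharge");
            System.out.println("all cases pass");
        }
    }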

  12. Choosing and carrying out tests
  • When functionality changes, must carry out regression tests, which must be
    • Repeatable
    • Documented (both tests and results)
    • Precise
  • In iterative development, the same test is carried out many times
    • Need automated testing
    • There exist specialist testing tools (e.g., JUnit)
    • But can also automate tests using scripts (e.g., Perl, shell scripts, DOS batch files, etc.)
  • Write test specifications as soon as requirements are understood
  • Some bugs only show up after a complex and unusual sequence of events
    • Have someone test the software with the aim of breaking it, however sneakily
    • e.g., IBM’s “Black team” (DeMarco, T. and Lister, T. (1987). Peopleware: Productive Projects and Teams. Dorset House, New York.)
  • Especially hard to test GUIs systematically
    • Many problems with GUIs are to do with usability
    • Usability testing cannot be automated
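  As an example, a minimal sketch of an automated, repeatable regression test written with JUnit 4 (the tool named above); the Account class and its behaviour are hypothetical. Because the tests are code, they are repeatable, they document both the test and the expected result, and a build script can re-run them unchanged after every modification to the system.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertFalse;

    // Hypothetical class under test: a minimal bank account.
    class Account {
        private int balance = 0;

        void deposit(int amount) { balance += amount; }

        // Returns false, and leaves the balance unchanged, if funds are insufficient.
        boolean withdraw(int amount) {
            if (amount > balance) return false;
            balance -= amount;
            return true;
        }

        int getBalance() { return balance; }
    }

    // Repeatable, documented, precise tests that can be run automatically.
    public class AccountTest {

        @Test
        public void depositIncreasesBalance() {
            Account a = new Account();
            a.deposit(100);
            assertEquals(100, a.getBalance());
        }

        @Test
        public void overdraftIsRejected() {
            Account a = new Account();
            a.deposit(50);
            assertFalse(a.withdraw(80));       // regression check: old behaviour preserved
            assertEquals(50, a.getBalance());
        }
    }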

  13. Unit testing in an OO system
  • Special problems arise when testing an OO system:
    • What counts as a unit?
    • Checking hidden data in an encapsulated unit
    • Problems arising from inheritance and polymorphism

  14. What counts as a unit?
  • The unit of test must be at least the class
    • Can be a collection of collaborating classes (e.g., a package)
  • A class is harder to test than a function
    • Cannot test each method in isolation, because a method might change an object’s state in a way that reveals a bug in another method
  • Can use state machine diagrams to identify important states and then test the object on every state transition
  • Avoid classes with complex state diagrams, since they are hard to understand and test!
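  A minimal sketch of this idea in Java, assuming a hypothetical Connection class whose state machine diagram has just two states, CLOSED and OPEN: each transition in the diagram gets at least one test.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical unit under test: its state machine has two states, CLOSED and OPEN.
    class Connection {
        enum State { CLOSED, OPEN }
        private State state = State.CLOSED;

        void open()  { if (state == State.CLOSED) state = State.OPEN; }
        void close() { if (state == State.OPEN)   state = State.CLOSED; }
        State getState() { return state; }
    }

    // One test per transition in the state machine diagram.
    public class ConnectionTest {

        @Test
        public void openingAClosedConnection() {
            Connection c = new Connection();
            c.open();
            assertEquals(Connection.State.OPEN, c.getState());
        }

        @Test
        public void closingAnOpenConnection() {
            Connection c = new Connection();
            c.open();
            c.close();
            assertEquals(Connection.State.CLOSED, c.getState());
        }

        @Test
        public void closingAClosedConnectionIsIgnored() {
            Connection c = new Connection();
            c.close();                      // no transition defined; state should not change
            assertEquals(Connection.State.CLOSED, c.getState());
        }
    }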

  15. Testing and encapsulation
  • Encapsulation can reduce the likelihood of bugs by not allowing access to details that the client does not need to know about
  • But checking which state an object is in during a test may require access to private attributes
  • One solution might be to provide a method that is only used during testing – but then we have to rely on the correctness of this testing method!
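  A minimal sketch of that solution, using a hypothetical BoundedQueue class: the internal state stays private, but a package-private accessor is provided purely so that tests in the same package can check which state the object is in. Note that the test then depends on this accessor itself being correct.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical encapsulated unit: clients cannot see how many items are stored.
    class BoundedQueue {
        private final int[] items = new int[10];
        private int size = 0;

        void add(int x) {
            if (size < items.length) items[size++] = x;
        }

        // For testing only: lets test code in the same package inspect hidden state.
        int currentSizeForTesting() { return size; }
    }

    public class BoundedQueueTest {

        @Test
        public void addIncreasesSize() {
            BoundedQueue q = new BoundedQueue();
            q.add(7);
            // The check relies on currentSizeForTesting() being correct.
            assertEquals(1, q.currentSizeForTesting());
        }
    }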

  16. Inheritance and testing
  • Suppose class D overrides the foo() operation of its superclass G, but not the bar() operation
  • Do we only need to test the foo() operation in class D?
    • NO! Why?
    • Because bar() may call foo()
      • Because of dynamic binding, a class D object’s overriding foo() method may be called when the bar() message is sent to it
  • To be safe, all of a subclass’s methods must be tested – even inherited ones!
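  The situation on this slide can be sketched in Java as follows (G, D, foo() and bar() are the names used above; the method bodies are invented for illustration). The inherited bar() behaves differently on a D object because dynamic binding makes it call D’s foo(), which is why even the methods that D does not override must be retested.

    // Superclass G defines foo() and bar(); bar() calls foo() internally.
    class G {
        String foo() { return "G.foo"; }
        String bar() { return "bar -> " + foo(); }
    }

    // Subclass D overrides foo() only; bar() is inherited unchanged.
    class D extends G {
        @Override
        String foo() { return "D.foo"; }
    }

    public class InheritanceDemo {
        public static void main(String[] args) {
            // Dynamic binding: the inherited bar() now calls D's foo(),
            // so bar() behaves differently on a D object and must be retested.
            System.out.println(new G().bar());   // prints: bar -> G.foo
            System.out.println(new D().bar());   // prints: bar -> D.foo
        }
    }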

  17. Testing and polymorphism
  • Two interacting objects: a of class C and b of a subclass of D
    • It could be that the class of b was invented after C
  • Must make sure that the way in which b behaves when sent messages in the public interface of class D is consistent with how an object of class D would behave in response to the same messages
    • This must be tested when the class of b is defined
  • A class is tightly coupled with its superclasses, which creates difficulties when testing
  • USE INHERITANCE ONLY WHEN ITS ADVANTAGES OUTWEIGH ITS DISADVANTAGES!
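  One way to organise such tests, sketched below under assumed names: write a “contract” check against the public interface of D and re-run it on every subclass when that subclass is defined, including subclasses (like the hypothetical SortedBag here) invented long after C was written.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // The superclass whose public interface C relies on.
    abstract class D {
        abstract void add(int x);
        abstract int size();
    }

    // A subclass invented after C was written and tested.
    class SortedBag extends D {
        private final List<Integer> xs = new ArrayList<>();

        void add(int x) { xs.add(x); Collections.sort(xs); }
        int size()      { return xs.size(); }
    }

    public class DContractTest {

        // C's expectation of any object it is given as a D:
        // adding an element increases size() by exactly one.
        static void checkAddIncreasesSize(D d) {
            int before = d.size();
            d.add(42);
            if (d.size() != before + 1) {
                throw new AssertionError("subclass breaks D's public-interface contract");
            }
        }

        public static void main(String[] args) {
            // Re-run the same contract check on each new subclass when it is defined.
            checkAddIncreasesSize(new SortedBag());
            System.out.println("SortedBag behaves consistently with D's public interface");
        }
    }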

  18. Why is testing often done badly?
  • Most people find it boring!
    • Need to automate the process as much as possible
    • Need to minimise bureaucracy
  • It’s expensive
    • Could take 30-50% of project time
  • If left until the end of the project, then it gets squeezed by deadline pressure
    • Hence should use an iterative process in which testing is done in each iteration and spread over the whole project
  • Customers often don’t realise the importance of testing and pressure developers to deliver the product even if it has not been thoroughly tested

  19. Reviews and inspections
  • Formal technical reviews (FTRs) help mostly with validation, but also with verification
  • A meeting to find problems in a deliverable (could be code or design)
    • FTR participants include the author, a moderator, a scribe and possibly one or two users
    • The artifact is studied by participants before the meeting
    • The meeting focuses on the defects found, not on solutions
    • After the meeting, the author addresses the defects and then meets with the moderator to discuss solutions
  • Problems with FTRs
    • Can feel like an attack on the developers
    • Can be very time-consuming
    • Can produce long lists of trivial defects and miss more fundamental problems

  20. Summary
  • To achieve a high-quality product, need product-focused and process-focused techniques
    • Concentrated on product-focused techniques in this lecture
    • Verification - making sure the product satisfies the stated requirements
    • Validation - making sure the product is fit for its intended purpose in the environment in which it will be used
    • Testing required to verify and validate a product
  • Reviewed criteria for a high quality software system
    • Useful, usable, reliable, flexible, affordable, available
  • Verification involves testing the product against its specification
    • can be informal (e.g., manual comparison)
    • or formal (e.g., proof of equivalence)
  • Validation involves identifying ways in which the system is less useful to the customer than it should be
    • necessitates customer involvement
    • need to have frequent customer evaluation
  • Usability - consult users frequently and employ an expert in usability
  • Testing more relevant to verification than validation
    • Different types of testing: usability, unit, integration, system, acceptance, performance, stress, regression
    • Black box testing: based on specification
    • White box testing: based on structure
  • Special problems with unit testing in an OO system
  • General problems with testing
  • Formal technical reviews
