
Test-Driven Development



Presentation Transcript


  1. CSE784 Software Studio Class Notes: Test-Driven Development. Jim Fawcett, Fall 2006

  2. Specifications and Testing
  • Requirements specifications and testing are intimately related.
  • The software requirements document completely and unambiguously defines the obligations of the software.
  • The purpose of testing is to show that these obligations are satisfied.
  • Small programs may have simple, informal specifications, e.g., our project statements.
  • Large programs and software systems need much more:
    • Each subsystem has its own requirements document.
    • Each subsystem has a suite of tests run at various levels.
    • The whole system has a suite of tests run at several levels.
  • Testing requires a lot of planning, covered by a Software Test Plan document.

  3. Testing
  Testing is conducted at several different times and levels:
  • Construction Test: Occurs during implementation. Its purpose is to support the efficient construction of working code.
  • Unit Test: Started when construction is complete. Its goal is to verify correctness. It is a very detailed examination of the operation of each function, in each module.
  • Integration Test: Started after unit tests. Its goal is to support the composition of tested modules, built by more than one team.
  • Validation Test: These are torture tests designed to show that the system does not crash and can recover from inputs or an environment that are outside specified ranges.
  • Qualification Test: A test of legal completeness, e.g., have all the requirements been met?
  • Regression Test: A summary-level test designed to show that an existing, tested system still works after a change in environment or the addition of new functionality during maintenance.

  4. Construction Tests
  • Construction tests occur during implementation:
    • The goal of Construction Test is to support efficient construction of working code.
    • We usually build construction tests in a test stub attached to the product code, adding new tests as we implement each function.
    • Small tests are added to the test stub as small pieces of code are added to the growing baseline.
  • Construction tests are universal: every module packaging an implementation has a construction test suite.
  • Test stubs provide an excellent way for a potential user to learn how a module works, at a deeper level than provided by the manual page. A minimal sketch of such a stub follows.
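  For concreteness, here is a minimal sketch of a construction test stub, assuming a hypothetical Widget module; the names Widget and TEST_WIDGET are illustrative, not from the notes. The stub compiles as the entry point only when TEST_WIDGET is defined, so the same file serves both as product code and as its own construction test.

    // widget.cpp - hypothetical module with an attached construction test stub.
    // Compiling with -DTEST_WIDGET builds the stub as the program entry point.

    #include <iostream>
    #include <string>

    class Widget {
    public:
      void setName(const std::string& name) { name_ = name; }
      const std::string& name() const { return name_; }
    private:
      std::string name_;
    };

    #ifdef TEST_WIDGET
    int main() {
      std::cout << "Testing Widget module\n";
      Widget w;
      w.setName("demo");                       // small test added alongside the code it exercises
      std::cout << "name = " << w.name() << "\n";
      return 0;
    }
    #endif

  Building with g++ -DTEST_WIDGET widget.cpp yields the stub executable; building without the flag yields the module for linking into the product.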

  5. Unit Test
  • Unit Tests are started when construction is complete.
  • The goal of Unit Test is to verify the correctness of each function.
  • Unit tests are built as separate test driver modules, one or more for each production module, organized by functionality.
  • They are very detailed examinations of the operation of each function, usually executed manually with a debugger.
  • The main issues here are to provide suitable probing inputs that exercise every path and toggle every predicate.
  • Each test should be preceded by the development of a test description and procedure, incorporated as comments in the test driver code.
  • Because Unit Tests are labor-intensive, they are expensive; so they are not universal, but are applied to every module on which many other modules depend. A small test driver is sketched below.
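  A sketch of such a test driver, using a hypothetical clamp function as the unit under test (the function and file name are assumptions for illustration). The test description and procedure appear as comments, and the inputs are chosen to exercise every path and toggle every predicate.

    // testClamp.cpp - hypothetical unit test driver, one module per tested unit.
    // Test description: verify clamp(x, lo, hi) on values below, inside, and
    // above the range, so every path and predicate outcome is exercised.
    // Procedure: run under a debugger and step through each case.

    #include <cassert>

    int clamp(int x, int lo, int hi) {         // unit under test
      if (x < lo) return lo;
      if (x > hi) return hi;
      return x;
    }

    int main() {
      assert(clamp(-5, 0, 10) == 0);           // x < lo path: first predicate true
      assert(clamp(15, 0, 10) == 10);          // x > hi path: second predicate true
      assert(clamp(5, 0, 10) == 5);            // in-range path: both predicates false
      assert(clamp(0, 0, 10) == 0);            // boundary: x < lo flips to false exactly at lo
      assert(clamp(10, 0, 10) == 10);          // boundary at hi
      return 0;
    }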

  6. Integration Test
  • Integration Testing is started after Unit Test.
  • The goal of integration testing is to support the efficient composition of modules constructed by more than one team.
  • Each test should be preceded by the development of a test description and procedure, often incorporated as comments in the test code, called an integration test driver. Essentially the driver acts like a test executive that exercises the functionality of a group of modules (see the sketch after this list).
  • Integration testing starts as a black-box test, e.g., using only an external model of the component, based on its inputs and outputs. Often, however, it is necessary to look internally to find and fix sources of integration problems.
  • Integration testing is often conducted in a series of builds, where each build adds new functionality to a test baseline, or fixes problems in the existing baseline.
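  The following sketch illustrates the idea with two hypothetical modules, tokenize and countTokens, standing in for components built by different teams; the driver checks only externally visible behavior, in black-box fashion.

    // integrationDriver.cpp - hypothetical test executive that composes two
    // modules built by different teams and checks only their external behavior.

    #include <cstddef>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Team A's module: split text into whitespace-delimited tokens.
    std::vector<std::string> tokenize(const std::string& text) {
      std::istringstream in(text);
      std::vector<std::string> toks;
      std::string t;
      while (in >> t) toks.push_back(t);
      return toks;
    }

    // Team B's module: count tokens.
    std::size_t countTokens(const std::vector<std::string>& toks) {
      return toks.size();
    }

    int main() {
      // One build of the integration baseline: feed A's output to B and
      // compare against the externally specified result.
      auto toks = tokenize("test driven development");
      bool pass = (countTokens(toks) == 3);
      std::cout << "integration test: " << (pass ? "pass" : "fail") << "\n";
      return pass ? 0 : 1;
    }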

  7. Validation Test
  • Validation Testing is started during and/or after integration testing.
  • The goal of validation testing is to ensure "reasonable" operation in the presence of inputs or an environment that are outside specified ranges. It is essentially a test of the robustness of the current system.
  • Under these conditions we expect outputs that do not meet specifications, perhaps resulting in error messages.
  • But we want to avoid crashes, or taking the system to states from which it cannot recover when inputs or environment return to their specified ranges.
  • Validation tests use out-of-specification inputs, random inputs, pattern tests, and carefully constructed torture tests, as sketched below.
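  A minimal sketch of a validation test, assuming a hypothetical process routine with a specified input range of 1 to 100. It hammers the routine with mostly out-of-spec random inputs, expects error reports rather than crashes, and then confirms the routine recovers on an in-spec input.

    // validationTest.cpp - hypothetical robustness check: out-of-specification
    // and random inputs should yield errors, not crashes, and the routine
    // should still work once inputs return to their specified range.

    #include <cstdlib>
    #include <iostream>
    #include <stdexcept>

    // Specified range: 1 <= n <= 100.  Out-of-spec input yields an error report.
    int process(int n) {
      if (n < 1 || n > 100) throw std::out_of_range("input outside specified range");
      return n * 2;
    }

    int main() {
      std::srand(42);
      for (int i = 0; i < 1000; ++i) {
        int n = std::rand() % 10000 - 5000;       // mostly out-of-spec values
        try { process(n); }
        catch (const std::out_of_range&) { /* expected: error message, no crash */ }
      }
      // Recovery check: an in-spec input still produces a correct result.
      bool recovered = (process(50) == 100);
      std::cout << (recovered ? "recovered" : "stuck") << "\n";
      return recovered ? 0 : 1;
    }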

  8. Qualification Test
  • Qualification Tests are started after product development has been completed.
  • They test the legal completeness of the implementation, e.g., do we meet every requirement in the B-level specification? Each requirement ("shall") in the B-specification is allocated a test.
  • They also ask: does the set of B-level requirements cover the requirements of the A-level specification?
  • Qualification tests come in four flavors:
    • Inspection: Verify this requirement by an inspection of code and/or documents.
    • Demonstration: Verify this requirement by using the system as built.
    • Analysis: Run the system and collect data that will be analyzed off-line.
    • Test: Build signal-insertion and measurement-extraction software to instrument the system, to evaluate requirements that cannot be verified by inspection, demonstration, or analysis.
  • Qualification testing is governed by a test plan and has test descriptions and procedures for each test. These are part of the Test Plan document and may also be incorporated as comments in qualification test drivers.

  9. Regression Test
  • Regression Tests are run any time the system's platform is changed or when new functionality is added.
  • These tests are fairly high-level but are designed to exercise all the required functionality.
  • They are intended to verify expected operation when:
    • The platform (computer and related hardware and software) has changed.
    • New functionality has been added to the software during maintenance.
    • Latent errors are fixed during maintenance.
  • Regression tests are automated:
    • Running a regression test requires a single command that establishes any required preconditions (directories, data, environment) and then exercises all the functionality of the system (see the sketch after this list).
    • Test automation is usually accomplished with a test harness.
  • New regression tests are added every time new functionality is added to the system during system maintenance.
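  As a sketch, not the course's actual harness, a single-command regression runner might establish its preconditions and then exercise the system like this; runSystemOn is a hypothetical stand-in for invoking the real system (requires C++17 for std::filesystem).

    // regress.cpp - hypothetical single-command regression runner: establish
    // preconditions (work directory, seed data), then exercise the system.

    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <string>

    namespace fs = std::filesystem;

    // Stand-in for invoking the real system; here we just verify the seeded
    // data is readable, representing "exercise all required functionality".
    bool runSystemOn(const fs::path& input) {
      std::ifstream in(input);
      std::string line;
      return static_cast<bool>(std::getline(in, line));
    }

    int main() {
      fs::create_directories("regress/data");                        // precondition: directories
      std::ofstream("regress/data/case1.txt") << "sample input\n";   // precondition: data
      bool pass = runSystemOn("regress/data/case1.txt");
      std::cout << "regression suite: " << (pass ? "pass" : "fail") << "\n";
      return pass ? 0 : 1;
    }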

  10. Extensibility of Testing
  • Notice that every level of testing assumes that tests are extensible:
    • It is easy to add new tests and integrate them with the existing tests.
  • Most testing is automatic. For all but Unit Testing:
    • We simply give a command and an extensive suite of tests is run.
    • Test reports are simple and obvious, although a lot of test data may be logged for off-line examination.
  • Each test has a well-defined pass/fail criterion.
    • For all but Unit and Integration testing this is a simple Boolean condition: Pass == true or Pass == false.
  • Clearly, with this much effort applied to testing, it is cost-effective to provide some infrastructure for testing. This is what Project #1, Fall 03 explored. One shape such infrastructure might take is sketched below.
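  One minimal shape for such infrastructure, sketched here with hypothetical tests: each test is a function returning the simple Boolean pass/fail condition described above, and adding a new test means appending one line to the suite, so the tests are extensible by construction.

    // harness.cpp - hypothetical extensible harness: each test is a named
    // function returning bool; one command runs the whole suite.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Test {
      std::string name;
      std::function<bool()> run;   // Pass == true, Fail == false
    };

    bool testAddition()   { return 2 + 2 == 4; }
    bool testComparison() { return 3 < 5; }

    int main() {
      std::vector<Test> suite = {
        { "addition",   testAddition   },
        { "comparison", testComparison },   // new tests integrate by appending here
      };
      bool allPass = true;
      for (const auto& t : suite) {
        bool ok = t.run();
        allPass = allPass && ok;
        std::cout << t.name << ": " << (ok ? "pass" : "FAIL") << "\n";
      }
      return allPass ? 0 : 1;
    }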

  11. Test-Driven Development
  • Test-Driven Development (TDD) attempts to invert the usual sequence of development.
  • In TDD we start by developing a test before we write any code that the test executes.
  • The sequence is often like this:
    • Write a test driver.
    • Add a shell for the product code the test exercises.
    • Attempt to compile and run, revising until this process succeeds.
    • Add code to the test driver to test a specific piece of required functionality.
    • Add code to the product to implement that functionality.
    • Attempt to compile and run, revising until this process succeeds.
    • Continue this process until the module is complete. (One iteration of this cycle is sketched after this list.)
  • Clearly, some kind of testing infrastructure is appropriate here. Test harnesses are almost universally used with TDD, as the process of testing is continuous throughout development.
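  A compressed sketch of one iteration of this cycle, using a hypothetical Stack module. In practice the test driver and product code would live in separate modules; conceptually, each assert below was written before the code that makes it pass.

    // tddCycle.cpp - one iteration of the TDD loop in a single file.
    // Step 1 wrote the test in main(); step 2 added an empty shell for Stack;
    // the push/size bodies were filled in only after a test demanded them.

    #include <cassert>
    #include <vector>

    // Product code: grew from an empty shell, one tested feature at a time.
    class Stack {
    public:
      void push(int v) { items_.push_back(v); }
      std::size_t size() const { return items_.size(); }
    private:
      std::vector<int> items_;
    };

    // Test driver: written before the push/size implementations existed.
    int main() {
      Stack s;
      assert(s.size() == 0);   // first test: a new stack is empty
      s.push(42);
      assert(s.size() == 1);   // second test drove the push implementation
      return 0;
    }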

  12. TDD Process
  Here is an excerpt from Test-Driven Development: A Practical Guide, David Astels, Prentice Hall, 2003:
  • You maintain an exhaustive suite of Programmer Tests.
  • No code goes into production unless it has associated tests.
  • You write the tests first.
  • The tests determine what code you need to write.
  The following slides provide excerpts and interpretations drawn from this book.

  13. Programmer Tests
  • Unit tests are written to test that the code you've written works.
  • Programmer tests, under TDD, define what it means for the implementation to work.
  • Test-driven development implies that you have an exhaustive set of tests:
    • That is so because there is no code until there is a test that requires it in order to pass.
    • There should be no code in the system that was not written in response to a test. So the test suite is, by definition, exhaustive.

  14. Extreme Programming
  • Extreme Programming is one of a family of development processes, called Agile Development methods, that use TDD.
  • Extreme Programming has several fundamental steps:
    • Plan a series of releases, with some nominal functionality associated with each.
    • Each release begins with a mini-plan, conducted with the customer. It continues with requirements analysis and top-level design.
    • Test-Driven Development is used for all implementation, using programmer pairs: one person writes the test and product code; the other watches, critiques, suggests changes, and reviews.
    • After a release is delivered, the delivered code is refactored to improve its maintainability and robustness before beginning the next release.
    • Refactoring uses the test apparatus developed for test-driven development.

  15. Refactoring
  • Refactoring is driven by three code attributes:
    • There is duplication in the code. We refactor to eliminate duplication.
    • The code and/or its intent is not clear.
    • The code "smells", i.e., there is a subtle or not-so-subtle indication that there is a problem with the code. Typical refactorings:
      • Commented code is replaced with cleaner, clearer code that needs no comments to be understood and maintained.
      • Data classes or structs are replaced with classes that provide member functions to manage all transformations of the data (sketched below).
      • Inappropriate intimacy: methods are moved so that pieces of code that need to know about each other are in the same module and, if possible, in the same class.
      • Large classes and modules are partitioned into smaller, more maintainable parts.
      • Overly dependent classes are reworked: if a change to one class requires changes in many other places, we need to reduce dependencies, perhaps by defining interfaces to avoid using concrete names, or by rearranging functionality.
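  As an illustration of the data-class refactoring listed above (this example is a sketch, not from the notes): a bare struct whose invariants every client must maintain is replaced by a class whose member functions manage all transformations of its data.

    // refactorPoint.cpp - hypothetical "data class" smell and its refactoring.

    // Before: clients mutate fields directly, duplicating logic everywhere.
    // (Kept here only for comparison with the refactored version.)
    struct PointData {
      double x, y;
    };

    // After: the class owns its transformations; clients go through methods.
    class Point {
    public:
      Point(double x, double y) : x_(x), y_(y) {}
      void translate(double dx, double dy) { x_ += dx; y_ += dy; }
      double x() const { return x_; }
      double y() const { return y_; }
    private:
      double x_, y_;
    };

    int main() {
      Point p(1.0, 2.0);
      p.translate(3.0, 4.0);   // the TDD test apparatus makes this change safe
      return p.x() == 4.0 && p.y() == 6.0 ? 0 : 1;
    }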

  16. End of Presentation
