
Programming and testing




  1. Programming and testing

  2. A Good Program • Works according to specification and is verifiable • Is well commented • Is written in an appropriate language • Has a simple design • Is modular, with independence • Uses only sequence, selection and iteration • Is independent of specific hardware constraints • Is efficient

  3. Good Commenting • 4 - 5 lines per module (subroutine/section). • 1 line per 4 - 5 lines of code. • Assembler programs should have almost one comment per line. • Comments should be brief and to the point. • Data and module names should also be brief and to the point.
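The commenting ratios above might look like this in practice. A minimal Python sketch; the module and its names are illustrative, not from the slides:

```python
# Module: interest
# Purpose: compute simple interest on a principal sum.
# Input:   principal (float), annual rate as a fraction (float), years (int)
# Output:  interest earned (float)

def simple_interest(principal, rate, years):
    # Simple interest accrues linearly: I = P * r * t.
    return principal * rate * years
```

Note the four-line header for the module and roughly one brief inline comment per few lines of code, rather than a comment restating every statement.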

  4. Pitfalls • Redundant commenting. • Obsolete comments. • Incorrect comments. • Vague comments. • Correct, but incomplete comments. • Incomprehensible comments.

  5. Top-down Design • Formal and rigorous specification of input, processing and output of each module. • When the module is properly specified, disregard internal workings. • Keep away from trivialities. • Each level of design should be expressible on a single page of flowchart. • Pay as much attention to data design as to process / algorithm design.

  6. Structure Diagrams • HIPO diagrams: Hierarchical Input Processing and Output charts.

  7. For each module: • Do an IPO chart. [Diagram: Input → Processing → Output boxes.]
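One way to carry an IPO chart into the code is to record the input, processing and output of each module in its header. A hypothetical sketch (the module and its names are invented for illustration):

```python
def average_mark(marks):
    """IPO specification for this module.

    Input:      marks -- non-empty list of numeric exam marks
    Processing: sum the marks and divide by their count
    Output:     the mean mark, as a float
    """
    return sum(marks) / len(marks)
```

Once the module is specified this way, callers can be designed against the Input and Output rows alone, disregarding the internal workings.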

  8. Top-down Coding • As a level is specified, the coding is done for that level, before subordinate levels are specified. • Design flaws discovered early on. • Dummy modules must be inserted, to allow for the running of the program.
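The dummy modules mentioned above can be sketched like this: the top-level module is coded first, and its subordinate input and output modules are stand-ins that return fixed data so the program can run. All names here are hypothetical:

```python
def read_transactions():
    # Dummy (stub) module: the real input routine is not yet written,
    # so return one fixed record to let the higher level run.
    return [("A001", 25.0)]

def report(totals):
    # Stub output module: just hand the results back for now.
    return dict(totals)

def process():
    # Top-level module, coded and testable before the levels below it
    # are properly implemented.
    totals = {}
    for account, amount in read_transactions():
        totals[account] = totals.get(account, 0.0) + amount
    return report(totals)
```

When the real `read_transactions` and `report` are written, they replace the stubs without changing `process`, which is how design flaws in the upper level are discovered early.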

  9. Some modules will take precedence over others: • A processing module cannot run without the input module being written, and the results cannot be seen without the output module. • Arrange modules in the program in an organised fashion, i.e. either horizontally or vertically.

  10. Advantages of Modularity • Easier to write and debug. • Easier to maintain and change. • Easier for a manager to control (e.g. as regards delegating programming tasks to programmers of varying abilities).

  11. Techniques for Achieving Modularity • Break the program into small independent subroutines. • Use decision tables. • Use symbolic parameters, e.g. • the size of a table, • relative locations within a table, • constants. • Centralise parameter definitions. • Separate I/O from computational functions. • Don’t share temporary storage locations.
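Symbolic parameters and centralised definitions might look like this in Python. The names and values are illustrative assumptions, not from the slides:

```python
# Centralised parameter definitions: to resize the table, change one line.
TABLE_SIZE = 10        # symbolic parameter instead of a literal 10
HEADER_SLOT = 0        # relative location within the table
SENTINEL = -1          # named constant marking an empty slot

def new_table():
    # Every use of the size, location and constant goes through the
    # symbolic names above, never through repeated literals.
    table = [SENTINEL] * TABLE_SIZE
    table[HEADER_SLOT] = 0   # the header slot holds the entry count
    return table
```

If `TABLE_SIZE` were instead written as a literal `10` in several modules, changing the table size would require finding and editing every occurrence.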

  12. Testing • Module (unit/program) testing. • Subsystem testing. • Integration testing.

  13. Bottom-up Testing (White Box) • [Diagram: modules X, Y and Z are tested individually, then combined under their calling module A.]

  14. Top-down Testing (Black Box) • Use dummy modules to represent the lower echelons. [Diagram: Main calling dummy modules A, B and C.]

  15. Benefits of Top-down Testing (Black Box Testing): • System testing eliminated • Major interfaces tested first • Prototyping enabled • Usable subset available before deadline • Testing evenly distributed • Quicker results • Natural test harness

  16. Bottom-up Testing Needed • To test a module on insertion into the top-down structure • To rigorously test a module where the calling environment cannot exercise it fully • To accommodate an imperfect top-down implementation

  17. Compiling Test Data • Comprehensive test data includes • Valid input data • Invalid input data • Data testing all possibilities of all selections • Data testing invalid possibilities of all selections

  18. Comprehensive test data … • Data testing the lower and upper constraints of iterations • Data testing invalid possibilities in iterations • Note: For every unit of data (e.g. record) entered, the expected result should be known before checking the result given by the run.
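The boundary and selection cases above can be laid out as a table of test data, with the expected result for each unit of data decided in advance of the run. A hypothetical unit under test:

```python
def grade(mark):
    # Unit under test: classify an exam mark in the range 0..100.
    if not 0 <= mark <= 100:
        raise ValueError("mark out of range")
    return "pass" if mark >= 50 else "fail"

# Each test datum is paired with its expected result, known beforehand.
cases = [
    (0, "fail"),      # lower constraint (valid)
    (49, "fail"),     # just below the selection boundary
    (50, "pass"),     # the selection boundary itself
    (100, "pass"),    # upper constraint (valid)
]
results = [grade(mark) == expected for mark, expected in cases]
```

Invalid data, such as `grade(-1)` or `grade(101)`, completes the set by testing the invalid possibilities of the selection.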

  19. 1. Desk Checking • The programmer checks the program logic by reading through it. • General error categories: • Failure to follow specification. • Commenting errors. • Standards. • Fitting-in. • Logic errors.

  20. Fitting-in • CPU/memory overload errors • Timing errors • Fallback and recovery • Hardware/system software errors

  21. Sequence Logic Errors • Overload errors on internal storage • Input errors • Uninitialised variables • Invalid termination • Improper linkages between modules • Improper data declarations • Misuse or unnecessary use of common areas

  22. Selection Logic Errors • Decision tables not used. • Compound Booleans may yield unintended results.
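A short illustration of how a compound Boolean can yield unintended results: in Python (as in most languages) `and` binds tighter than `or`, so an unbracketed condition is not grouped the way it reads. The variable names are invented for illustration:

```python
a, b, c = True, False, False

# Without brackets, 'and' is evaluated before 'or':
as_written = a or b and c       # parsed as: a or (b and c)
as_intended = (a or b) and c    # what the programmer may have meant
```

Here `as_written` is `True` while `as_intended` is `False`; a decision table listing all combinations of `a`, `b` and `c` would have exposed the discrepancy before coding.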

  23. Iteration Logic Errors • Uninitialised variables • Infinite loops • Loops never executed • Loops not executed the correct number of times • Array out of bounds
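A minimal sketch of one of these iteration errors, a loop not executed the correct number of times. The function names are hypothetical:

```python
def sum_first_n(values, n):
    # Off-by-one bug: range(1, n) runs the loop only n - 1 times,
    # so the last of the n values is never added.
    total = 0
    for i in range(1, n):
        total += values[i - 1]
    return total

def sum_first_n_fixed(values, n):
    # Correct: range(n) iterates exactly n times over indices 0..n-1.
    total = 0
    for i in range(n):
        total += values[i]
    return total
```

Test data at the loop's lower and upper constraints (an empty list, and `n` equal to the list length) catches exactly this class of error.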

  24. 2. Structured Walkthrough. • This is a presentation of a program to a group, which may include other programmers on the project, the project leader or manager and maybe a user. • All are issued with a listing of the program specification, coding, test data and results a day or two before the meeting.

  25. The purpose of the walkthrough is to provide a non-aggressive evaluation of the program with regard to its 'goodness' as described earlier. • The programmer receives advice on where the program contains errors. It is the programmer's responsibility to correct any errors uncovered and to hold another walkthrough. The idea of the walkthrough is that responsibility for the 'goodness' of the program is shared.

  26. 3.Running the Program Against Test Data. • Link the program with the required stubs, prepare the job control statements, load the test data, execute the program and print the results. • With on-line programs, a batch simulator can be used to enter transactions in batch, making test data reusable if errors are encountered.

  27. To check keystroke problems, a transaction capture facility can be used, which records keystrokes and can reconstruct them. • Checking screen output can be difficult - better to write to a file also.

  28. Evaluating Test Results • Test results can be: • Output files • Reports • Screens • Updated data on a database • To check them, they must be printed, browsed, or compared with expected results and the differences printed. • If differences exist, a storage dump may be produced. This is difficult to use, stops the test run and generally signifies serious trouble.

  29. Debugging Syntactic Errors • Syntactic errors are • Errors in punctuation or spelling • Illegal use of a reserved word • Failure to declare a variable • Use of an illegal construct • Diagnosis • Compile to get error listing • Module • Line number and text • Underlined error • Error description

  30. Semantic Errors • Errors in program logic, causing failure or incorrect results • Diagnosis • Use small and independent modules • Determine EXACT nature of error • Check for consistency • Check for programmer’s normal weaknesses • Investigate most obvious points first • Don’t assume anything is correct • Check code methodically

  31. Utilities • Traces • Core dumps • Snapshots • Desk checking • Test data loader • Test data generator • Transaction capture facility

  32. Other Testing Methods • Static program analysis • Dynamic program analyser • Mathematical proofs • Seeded bugs • Clean room approach

  33. Testing Principles • All tests should be traceable to customer requirements. • Tests should be planned long before testing begins. • 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules. • Testing should begin “in the small” and progress toward testing “in the large”. • Exhaustive testing is not possible. • To be most effective, testing should be conducted by an independent third party.

  34. Notes from Pressman (2000) • “Testing is a process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding an as-yet undiscovered error. A successful test is one that uncovers an as-yet-undiscovered error.” (Myers, 1979)

  35. Pressman (2000) • Code modules should be subjected to white-box testing, in which the control structures of the procedural design are used to derive test cases. • These test cases should guarantee that: • all independent paths within a module have been exercised at least once, • all logic decisions are exercised on both their true and their false sides, • all loops are executed at their boundaries, and • all internal data structures are exercised to ascertain their validity.
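The coverage criteria above can be sketched against a small unit containing one decision and one loop. The function and the test data are illustrative assumptions:

```python
def count_positive(values):
    # Unit under test: one loop and one logic decision.
    count = 0
    for v in values:
        if v > 0:
            count += 1
    return count

# White-box test set, one case per coverage requirement:
cases = [
    ([], 0),            # loop executed zero times (lower boundary)
    ([3], 1),           # loop once, decision taken on its true side
    ([-3], 0),          # loop once, decision taken on its false side
    ([1, -2, 5], 2),    # longer run: both sides of the decision exercised
]
ok = all(count_positive(data) == expected for data, expected in cases)
```

Between them, the four cases execute the loop at its boundaries and drive the decision down both its true and false paths, which is what the criteria above ask of a white-box test set.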
