Object-Oriented Analysis and Design • Lecture 10: Implementation (from Schach, “Object-Oriented and Classical Software Engineering”)
Which Programming Language? • Often, there is no choice • The client says “You must use ___” • The product must interoperate with existing software written in ___ • Even if there is a choice (“implement in the most suitable language”), organizational inertia matters • If you are a COBOL shop, but the project cries out for Java…
Which Language? • If there is a choice, run a cost-benefit analysis for each candidate language • Risk analysis: for each language • List the risks and their possible resolutions (these may differ from language to language) • Choose the language with the smallest overall risk
Which Language? • “It shall be O-O.” But which language? • Smalltalk (20 years ago, the only real choice) • C++ • Compilers are cheap, but • Managers in C shops assume anyone can learn C++ in 21 days (there are books with exactly that title) • Java • For a C shop, this is an abrupt transition
4GL? • Focus, Oracle, spreadsheets • Designed to pack 30-50 lines of procedural code into one statement • Many success stories, but these are often due to the associated CASE tools rather than the 4GL itself • Probably not a good fit for a (CMM) level 1 organization
“End-User Programming” • The old way: an investment manager asks for a product to display her portfolio, then waits a year • The new way: the manager builds it herself, using a 4GL • Dangerous? Computer professionals are taught to mistrust computer output; end users are taught to accept it • Do we want amateurs doing database updates? • What training do the managers get?
Good Programming Practice • Consistent, meaningful variable names • Meaningful to the maintainers! • minimumFrequency but frequencyMaximum? Pick one word order and use it consistently • Incorporate type information: Hungarian naming conventions, like Microsoft’s m_hIcon for “member-variable handle to an icon”
Self-Documenting Code • I write the code today, but I am a different programmer one week from today • Include a prologue, giving • File name • Purpose • Programmer’s name • Date coded • Date modified • Alphabetical list of variables • Location of test data
Other Considerations • Constant parameters • Really very few of these • Use const in C++, static final in Java • Or, read in from a parameter file • Code layout for readability • Nested blocks • Coding standards
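The slide names const (C++) and static final (Java); as a hedged sketch, Python's nearest analogue is typing.Final, shown here with an assumed tax-rate constant (the name and value are illustrative only):

```python
# Sketch, assuming a Python analogue of the slide's advice: mark the few
# true constant parameters explicitly, so tools flag reassignment, or
# read them from a parameter file so they can change without code edits.
from typing import Final

SALES_TAX_RATE: Final[float] = 0.0725   # assumed example value

def price_with_tax(price: float) -> float:
    # round() keeps the result to cents despite float arithmetic
    return round(price * (1 + SALES_TAX_RATE), 2)

assert price_with_tax(100.0) == 107.25
```

Declaring the constant once, at the top, gives maintainers a single place to look, which is the point the slide is making.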
Class Testing • Informal, by the programmer • Formal, by a testing group • Non-execution tests • Execution-based tests (test cases) • There is limited time for testing, so make test cases count • Test to spec (black-box) • Test to code (glass-box)
Testing to Specification • Suppose the spec says that 5 types of commission and 7 types of discount must be incorporated • That is 5 × 7 = 35 different tests, and the black-box principle says you must run them all • With 20 factors, each taking 4 possible values, that becomes 4^20 ≈ 10^12 tests
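The slide's arithmetic can be checked directly; exhaustive black-box test counts grow multiplicatively with the number of factors:

```python
# The slide's combinatorics: 5 commissions x 7 discounts, then 20
# factors with 4 values each. Exhaustive testing explodes quickly.
assert 5 * 7 == 35
assert 4 ** 20 == 1_099_511_627_776   # about 10**12 test cases
```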
Testing to Code • A flow chart containing a loop that can execute up to 18 times has over 10^12 possible paths
Testing to Code • Examining every path is no guarantee: • A needed path may simply be absent: x = n/d should be preceded by a test that d is nonzero • Exercising all paths is not reliable either: if ((x + y + z)/3 == x) print “x, y, and z are equal” else print “x, y, and z are unequal” • Both paths can be exercised and pass, yet the code is faulty (e.g., x = 2, y = 1, z = 3 takes the “equal” path)
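The slide's faulty equality check can be run directly; this sketch (function names assumed) shows both paths being exercised while the fault survives:

```python
# The slide's pseudocode, made runnable: the buggy version compares the
# mean of the three values to x alone, which is not an equality test.

def all_equal_buggy(x, y, z):
    """Faulty check from the slide."""
    return (x + y + z) / 3 == x

def all_equal_correct(x, y, z):
    return x == y == z

# Both paths of the buggy version can be exercised...
assert all_equal_buggy(5, 5, 5) is True    # takes the "equal" path
assert all_equal_buggy(1, 2, 4) is False   # takes the "unequal" path
# ...yet the fault remains: here the mean equals x, but the values differ.
assert all_equal_buggy(2, 1, 3) is True
assert all_equal_correct(2, 1, 3) is False
```

Path coverage was complete after the first two tests, but only the third input reveals the fault, which is the slide's point.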
Black-Box Testing Strategies • Equivalence classes • The spec says the database must handle any number of records from 1 to 16,383 • Equivalence class 1: fewer than 1 record • Equivalence class 2: between 1 and 16,383 records • Equivalence class 3: more than 16,383 records • Use one test case for each equivalence class • “A successful test case detects a previously undetected fault.” The flip side: a test run that detects no fault has told us little
Black-Box Testing Strategies • An extension: boundary analysis • Test case 1: 0 records • Test case 2: 1 record • Test case 3: 2 records • Test case 4: 723 records • Test case 5: 16,382 records • Test case 6: 16,383 records • Test case 7: 16,384 records
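The equivalence classes and boundary cases on these two slides can be written down as a table of test cases; this is a hedged sketch in which `handles` is a hypothetical stand-in for the system under test:

```python
# Boundary-value test cases for the slide's spec: the database must
# handle 1 to 16,383 records. `handles` is a hypothetical stand-in
# for the real system under test.
VALID_MIN, VALID_MAX = 1, 16_383

def handles(n_records):
    """Hypothetical system under test: can it accept this record count?"""
    return VALID_MIN <= n_records <= VALID_MAX

# One case per equivalence class, plus values on and around each boundary.
cases = {
    0: False,        # class 1: below the range (boundary)
    1: True,         # class 2: lower boundary
    2: True,         # just inside the lower boundary
    723: True,       # a typical in-range value
    16_382: True,    # just inside the upper boundary
    16_383: True,    # upper boundary
    16_384: False,   # class 3: above the range (boundary)
}

for n, expected in cases.items():
    assert handles(n) == expected, f"failed for {n} records"
```

Seven test cases cover all three equivalence classes and both boundaries, versus the 16,383 cases exhaustive testing of the valid range alone would need.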
Black-Box Testing Strategies • Apply boundary analysis to output specs too • Suppose outputs must lie in the range [$0.00, $4,984.90] • Choose test inputs that drive the output to, and just beyond, these boundaries
Glass-Box Techniques • Statement coverage: every statement executed at least once (necessary but weak) • Branch coverage: every branch taken both ways at least once (stronger, still incomplete) • Path coverage: every path executed at least once (usually far too many paths) • All definition-use paths: every path from a variable’s definition to each of its uses is executed (dead code causes problems here)
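The gap between statement and branch coverage can be shown with a small sketch (the function is an assumed example, echoing the slide's x = n/d hazard): one test executes every statement, yet the untaken branch hides a fault.

```python
def safe_div(n, d):
    # Intended to guard against division by zero -- but the guard
    # leaves `result` unassigned on the other branch.
    if d != 0:
        result = n / d
    return result   # fault: `result` is unbound when d == 0

# A single test executes every statement (100% statement coverage)...
assert safe_div(6, 3) == 2

# ...but branch coverage also requires the d == 0 case, which fails:
try:
    safe_div(6, 0)
    fault_found = False
except UnboundLocalError:
    fault_found = True
assert fault_found
```

This is why the slide flags "(problems)" on statement coverage: full statement coverage said nothing about the branch that was never taken.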
Other Ideas • Complexity metrics: “put your effort where it will do the most good” • We saw O-O metrics earlier, and they work • Code walkthroughs and inspections: very effective • “Cleanroom”: code inspections before compiling, in the spirit of the PSP (from the SEI)
Debug or Just Rewrite? • If the metrics say a class or design is too complex, insist that it be rewritten • If you test for “a while” and keep finding faults, insist that it be rewritten • Classes C1 and C2 • Test C1 for 4 hours, find 2 faults • Test C2 for 4 hours, find 40 faults • C2 is much more likely to still contain faults
Debug or Just Rewrite? • Empirical support: [graph omitted] the probability that additional faults exist (0.0 to 1.0) rises with the number of faults already found
Conclusions • Lots of issues in implementation • Which language? • Documentation and coding standards • Testing is hard • Tools help, but there is no “silver bullet” • Management won’t know all this stuff, so you have to help…