
Techniques for Systematic Software Reuse



Presentation Transcript


  1. Techniques for Systematic Software Reuse

  2. Techniques
     1. Domain analysis
     2. Object-oriented techniques: analysis, design, programming
     3. Metrics
     4. Standards and standard interfaces
     5. Designing for reuse
     6. Using reuse to drive requirements
     The key to any systematic program of software reuse is measurement.

  3. Domain Analysis Domain analysis is the technique of classifying existing software artifacts to determine if they can be used to meet the requirements for a new system. Classification schemes can be evaluated for success in locating reusable software assets.

  4. Two basic methods to develop classification schemes:
     • A bottom-up approach using basic building blocks (often called a faceted classification scheme)
     • A top-down approach using exhaustive methods (often called an enumerative classification scheme)
     Many domain analysis techniques use an object-oriented approach.

  5. Independent domain analysis projects analyzing the same application domain rarely result in the same classification schemes. In an ideal world of domain analysis, the different actions and objects described in two different classification schemes would be synonyms. In reality, it is not easy to map one scheme to another.

  6. Object-Oriented Approaches:
     • Promising because of well-defined interfaces for source code.
     • Consistent with domain analysis.
     • Large class libraries.
     • Class libraries becoming standardized (C++).
     • A popular current technology.
     • Patterns can be matched.

  7. Disadvantages of the Object-Oriented Approach:
     • Most existing systems are not object-oriented.
     • Technology is not very advanced at high levels of abstraction for OOD.
     • Object testing is not as advanced as procedure testing.
     • Problems with interfaces to procedural systems.
     • Non-trivial integration with relational databases.

  8. Metrics can aid in the following:
     • Determining the status and quality of a system.
     • Predicting resources needed and expected milestones.
     • Providing feedback about current practices, allowing the effects of any changes to be estimated or measured.
     Product metrics: code metrics are the most commonly used; other product metrics exist for requirements and documentation.
     Process metrics: cost, how, when, volatility, and stability.

  9. There are lots of metrics. Which ones to use? The GQM paradigm of Basili and Rombach is useful. Goals: What do we wish to achieve? Questions: What questions must we answer to determine if our goals are being met? Metrics: What metrics must be collected to answer the questions?

  10. Goal: Determine the amount of reuse in a system. Questions: • What is the nature of the software artifact being reused? (Requirements, design, architecture, source code, test plan, test cases, test results, documentation) • How much of existing systems is used in new system? • How much of the code in the new system comes from a system that has evolved from an existing subsystem? • How do we define the size of a system?

  11. Typical Metrics: • The size of the new system. • The size of the existing system. • The amount of the new system that was not included in any previous system.

  12. Code metrics:
     • Number of program statements (SLOC, ELOC, DSI).
     • Complexity of program statements:
       • statements consist of operators and operands
       • a low-level view, like that of assembly language
       • Halstead-type formulas (length, volume, effort) are based on counts of operators, operands, distinct operators, and distinct operands

  13. Code metrics:
     • Control flow metrics:
       • based on a graphical view of the program
       • a higher-level view; ignores modularity
       • McCabe cyclomatic complexity is the most common: number of logical predicates + 1
     • Information flow and coupling metrics:
       • a higher-level view; emphasizes modularity
       • fan-in, fan-out (Kafura & Henry)
       • coupling (Dunsmore, Conte, & Shen)
       • boundary value analysis (Leach & Coleman)

  14. Other Source Code Metrics:
     • Readability
     • Modularity
     Some metrics can also be used for requirements, design, and documentation.

  15. Well-known example: determine the size of:

      #include <stdio.h>

      int main(void)
      {
          int i;
          for (i = 0; i < 10; i++)
              printf("%d\n", i);
          return 0;
      }

      • It is difficult to measure the size of a system.
      • Until terms are properly defined, proper measurement is impossible.

  16. How do we measure reuse if the basic reused systems are also changing? What if our reused system must interact with other systems that are also changing? Typical of rapidly changing technology. A major configuration management problem.

  17. Measurement of reuse: • A difficult problem • Presumes existing measurement of system size. • Presumes existing measurement of cost at each life cycle phase. • Easiest to measure size of text-based systems that are reused.

  18. First step: determine the units to be measured. Some typical units:
     • Function (for source code)
     • File (for many types of artifacts)
     • Directory (for many types of artifacts)
     • Multi-level directory (for subsystems)
     Measurement should be consistent with any existing metrics schemes and also with the goals of the metrics program.

  19. An example: System A is 100 KLOC and is incorporated (with no modifications) into system B, which is 200 KLOC in total, including A.

  20. What is the percentage of reuse? (100)/100 * 100% = 100 % ? (100)/200 * 100% = 50 % ? Something else ?

  21. What is the percentage of reuse if 10% of system A is changed before system A is inserted into system B? (90)/100 * 100% = 90 % ? (90)/200 * 100% = 45 % ? Something else? Does it matter if reuse occurs early in the life cycle?

  22. Example: A1 evolves to A2, and B1 evolves to B2. System A1 (100 KLOC) is used without change in system B1 (200 KLOC). System A2 (150 KLOC) is used without change in system B2 (300 KLOC).

  23. What is the percentage of reuse? 100/100 * 100% = 100%; 100/200 * 100% = 50%; 100/300 * 100% = 33.3%; 150/300 * 100% = 50%. A measurement should be precise and consistent with the organization's software data collection policies.

  24. Reuse can be measured at any phase of the software life cycle. It is easiest to measure if the artifact is in text format:
     • LOC, DSI, ESI, etc.
     • word count
     • number of bulleted requirements
     • Albrecht function point metric for requirements
     • number of defined interfaces
     • number of test cases
     • most other metrics

  25. Standard Interfaces. There are many "standards":
     • Languages: ANSI C, Ada, C++
     • Operating systems: UNIX, POSIX, Windows, Macintosh
     • Software: database, spreadsheet, communications
     • User interfaces: Motif, X, UIL, MS Windows, etc.
     • Formats: TIF, RTF, JPEG, ...
     • Local standards, including coding standards

  26. • Compliance with standards is a major issue in system integration. • Most standards are not enforced by validation suites such as ACVS. • Many COTS (Commercial-Off-The-Shelf) products do not adhere precisely to standards in either external or internal interfaces. • Standards change over time and configuration management of reuse libraries is essential. • Compliance with standards is part of certification.

  27. Designing for reuse:
     • Designs should be reviewed for the use of standard interfaces and other standards.
     • Designs should be reviewed for the use of existing high-level building blocks and COTS products when possible.
     • Designs should be changed if they use little existing software.
     • Interface size and standards must be considered.
     • Designs should be subjected to domain analysis.

  28. • Designs should be rejected if they do not use the available reuse libraries. • Designs should be rejected if they do not contribute software artifacts to a reuse library. • Design teams must have both domain experts and domain analysts on the team. • Designing for reuse is more expensive than not, at least for the pure cost of design. Costs should be underwritten as part of a systematic reuse program.

  29. Reuse-Driven Requirements:
     • Get requirements from the customer.
     • If an existing system meets the requirements, done.
     • If not:
       1. Perform domain analysis.
       2. Determine the existing system closest to the customer's requirements.
       3. Negotiate with the customer to determine whether they will accept a system that meets fewer requirements at lower cost.

  30. • If the customer will not accept the proposed solution with reduced requirements, decompose the system into subsystems.
     • Use the classification scheme determined by domain analysis for the decomposition into subsystems.
     • For each subsystem, select an existing subsystem that either matches the subsystem requirements or is closest to them. Negotiate reduced subsystem requirements with the customer, allowing selection of a subsystem with reduced functionality.

  31. • Each negotiation with the customer will include detailed descriptions of the tradeoffs in capability vs. reduced cost of the system. Estimation of integration and maintenance costs is essential at each stage.
     • Continue the process of domain analysis and system decomposition as far as necessary.
     • New code is to be written only as a last resort.
     • COTS-only systems are the extreme case of this process.
     • Few models exist: Waund; Ellis at Loral; Kontio and Basili at Maryland; the IMMACS project at NASA/Goddard.
