
R&D SDM 1: Quality and Metrics. How to measure and predict software engineering?



  1. R&D SDM 1: Quality and Metrics. How to measure and predict software engineering? 2010, Theo Schouten

  2. Contents • Software Quality • Dimensions and factors • What are “software metrics”? • Function oriented metrics, LOC • Estimation • COCOMO 2 Book chapters 26, 15, 22, 23 (7th edition: 14, 23, 25)

  3. Views on Quality • Transcendental view: immediately recognizable, but not explicitly definable • User view: meeting user goals • Manufacturer’s view: conformance to the specification • Product view: tied to inherent characteristics (e.g., functions and features) of the product • Value-based view: how much a customer is willing to pay. For software: • Quality of design: encompasses requirements, specifications, and the design of the system • Quality of conformance: focused primarily on implementation • User satisfaction = compliant product + good quality + delivery within budget and schedule

  4. Dissatisfaction • Prof. dr. Marko van Eekelen: Leven Lang Computeren, Leven Lang Foeteren? (“A Lifetime of Computing, a Lifetime of Grumbling?”) • Finger-pointing: • customers: you deliver buggy software • developers: you change your requirements, you want it too quickly • use of software in environments it was not designed for; security issues

  5. The Software Quality Dilemma • If you produce a software system that has terrible quality, you lose because no one will want to buy it. • If you spend infinite time, extremely large effort, and huge sums of money to build the absolutely perfect piece of software, then it's going to take so long to complete and it will be so expensive that you'll be out of business. • Either you missed the market window, or you simply exhausted all your resources. • So people in industry try to get to that magical middle ground where the product is good enough not to be rejected right away, such as during evaluation, but also not the object of so much perfectionism and so much work that it would take too long or cost too much to complete. [Ven03]

  6. Cost to find and repair an error • reduce the number of errors introduced in each phase • try to find and correct as many of them as possible in the next phase, because the cost of repair grows the later an error is found

  7. Quality definitions • (degree of) conformance to: • explicitly stated functional and performance requirements • explicitly documented development standards • implicit characteristics that are expected of all professionally developed software • An effective software process, applied in a manner that creates a useful product that provides measurable value for those who produce it and those who use it

  8. Quality Dimensions (Garvin) • Performance quality. Does the software deliver all content, functions, and features that are specified? • Feature quality. Does the software provide features that surprise and delight first-time end users? • Reliability. Does the software deliver all features and capabilities without failure? • Conformance. Does the software conform to local and external standards that are relevant to the application (such as design and coding conventions, user interface expectations)? • Durability. Can the software be maintained (changed) or corrected (debugged) without unintended side effects? • Serviceability. Can the software be maintained or corrected in an acceptably short time period? • Aesthetics. Does the software have an elegance, a unique flow, and an obvious “presence” that are hard to quantify but evident nonetheless?

  9. FURPS Quality Factors • Developed at Hewlett-Packard (Grady, Caswell, 1987) • Functionality: • feature set and capability of the system • generality of the functions, security of the overall system • Usability: • human factors (aesthetics, consistency, and documentation) • Reliability: • frequency and severity of failure • accuracy of output • mean time to failure (MTTF), failure recovery, and predictability • Performance: • speed, response time, resource consumption, throughput, and efficiency • Supportability: • extensibility, maintainability, configurability, etc.

  10. ISO 9126 Quality Factors 6 key quality attributes, each with several sub-attributes: • Functionality • Reliability • Usability • Efficiency • Maintainability • Portability These are also often not directly measurable, but they give ideas for indirect measures and checklists. defect: the nonfulfilment of intended usage requirements; nonconformity: the nonfulfilment of specified requirements. ISO 9126 has been superseded by the new SQuaRE project, ISO/IEC 25000:2005.

  11. Software Quality Factors (McCall et al., 1977) • Product revision: maintainability, flexibility, testability • Product transition: portability, reusability, interoperability • Product operation: correctness, usability, efficiency, reliability, integrity • and subfactors, e.g. usability: understandability, learnability, and operability

  12. Metrics • What is a metric? • “A quantitative measure of the degree to which a system, component or process possesses a given attribute” (IEEE Software Engineering Standards, 1993): software quality • Different from: • measure: a single data point (the size of a system or component, and the like) • indicator: a metric or combination of metrics that provides insight into a process, project or product • measurement: the act of determining a measure

  13. Why important, why difficult? • Why is measurement important? • to characterize • to evaluate • to predict • to improve • Why is measuring a metric difficult? • no “exact” measure (‘measuring the unmeasurable’, subjective factors) • dependent on the technical environment • dependent on the organizational environment • dependent on the application and its ‘fitness for use’

  14. McCall metrics Metrics that affect or influence software quality factors: • software quality factors are the dependent variables, metrics are the independent variables • Metrics: auditability, accuracy, communication commonality, completeness, consistency, data commonality, error tolerance, execution efficiency, expandability, generality, hardware independence, instrumentation, modularity, operability, security, self-documentation, simplicity, software system independence, traceability, training • Software quality factor Fq = c1·m1 + c2·m2 + … + cn·mn • each ci is a regression coefficient based on empirical data, each mi a metric score (see the sketch below)
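
A minimal sketch of this weighted-sum model; the metric names, coefficients and 0..10 scores below are illustrative placeholders, not McCall's empirically calibrated data:

```python
# Sketch of McCall's linear model: Fq = c1*m1 + c2*m2 + ... + cn*mn.

def quality_factor(coeffs, scores):
    """Weighted sum of metric scores; coefficients come from regression."""
    if set(coeffs) != set(scores):
        raise ValueError("coefficients and scores must cover the same metrics")
    return sum(coeffs[m] * scores[m] for m in coeffs)

# Hypothetical assessment of a correctness-like factor (scores graded 0..10):
coeffs = {"traceability": 0.4, "completeness": 0.4, "consistency": 0.2}
scores = {"traceability": 7, "completeness": 8, "consistency": 6}
print(round(quality_factor(coeffs, scores), 2))  # 7.2
```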

  15. McCall Matrix ISO 9126 also provides a basis for indirect measurements and a checklist for assessing the quality of a system.

  16. Quantitative Metrics Desired attributes of metrics (Ejiogu, 1991): • simple and computable • empirical and intuitively persuasive • consistent and objective • consistent in the use of units and dimensions • independent of programming language, so directed at models (analysis, design, test, etc.) or at the structure of the program • an effective mechanism for quality feedback Types of metrics: • size oriented • focused on the size of the software (lines of code, errors, defects, size of documentation, etc.) • independent of programming language? • function oriented • focused on the realization of the functions of a system

  17. Function Oriented Metrics Function Point Analysis (FPA) is a method for measuring the final functions of an information system from the perspective of an end user • on the basis of a functional specification • the method is independent of programming language and operational environment • its empirical parameters are not • Usable to: • estimate the cost or effort to design, code and test the software • predict the number of components or lines of code • predict the number of errors encountered during testing • determine a ‘productivity measure’ after delivery

  18. FPA: Count System Attributes • Count the number of each ‘system attribute’: • external inputs (EI) from users (human or other systems) • external outputs (EO) to users • external inquiries (EQ) by users • internal logical master files (MF) • interfaces (IF) to other systems [Diagram: the system and its master files, with EI/EO/EQ transactions crossing the boundary to external users and IF connections to other systems in the environment]

  19. FPA: Weighting System Attributes • Determine per system attribute occurrence how difficult it is: • low • medium • high • Use a weighting matrix to determine the weighting factor; the commonly used IFPUG weights for low/medium/high are EI 3/4/6, EO 4/5/7, EQ 3/4/6, MF 7/10/15, IF 5/7/10 • Calculate the weighted sum of the system attributes: the ‘Unadjusted Function Points’ (UFP)

  20. FPA: Value Adjustment • The UFP needs to be adapted to the environment in which the system has to operate. The ‘degree of influence’ is determined for each of the 14 ‘value adjustment’ factors, each rated 0 to 5: • data communications • distributed processing • performance objectives • tight configuration • transaction volume • on-line data entry • end-user efficiency • logical file updates • complex processing • design for re-usability • conversion and installation ease • operational ease • multiple-site implementation • ease of change and use

  21. FPA: final function points • Total sum of the ‘degrees of influence’ (DI): 0-70 • Determine the value adjustment: VA = 0.65 + (0.01 × DI), so 0.65-1.35 • Function Point Index (FPI) = VA × UFP • Historical data can then be used, e.g.: • 1 FP -> 60 lines of code in an object-oriented language • 12 FPs are produced in 1 person-month • 3 errors per FP are found during analysis, etc.
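
Putting slides 18-21 together, a sketch of the complete FPA calculation. The complexity weights are the ones commonly cited for IFPUG function point counting (mapped onto the slide's MF/IF names); the attribute counts and influence ratings are invented:

```python
# FPA sketch: UFP from weighted attribute counts, then VA from the 14
# 'degree of influence' ratings, then FPI = VA * UFP.

WEIGHTS = {  # attribute -> (low, medium, high); commonly cited IFPUG values
    "EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
    "MF": (7, 10, 15), "IF": (5, 7, 10),
}
LEVEL = {"low": 0, "medium": 1, "high": 2}

def unadjusted_fp(counts):
    """counts: {attribute: {complexity: number of occurrences}}."""
    return sum(n * WEIGHTS[attr][LEVEL[cx]]
               for attr, per_cx in counts.items()
               for cx, n in per_cx.items())

def function_point_index(counts, influence_ratings):
    di = sum(influence_ratings)   # 14 ratings, each 0..5, so DI is 0..70
    va = 0.65 + 0.01 * di         # value adjustment, 0.65..1.35
    return va * unadjusted_fp(counts)

counts = {"EI": {"low": 4, "medium": 2}, "EO": {"medium": 3},
          "EQ": {"low": 2}, "MF": {"high": 1}, "IF": {"medium": 1}}
fpi = function_point_index(counts, [3] * 14)  # UFP = 63, DI = 42, VA = 1.07
print(f"{fpi:.1f} FP -> ~{fpi * 60:.0f} LOC, ~{fpi / 12:.1f} person-months")
```

The last line applies the slide's historical conversion factors (60 LOC per FP, 12 FPs per person-month).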

  22. Other metrics • chapter 15 (7e: 23): many product metrics • chapter 22 (7e: 25): many process and project metrics • chapter 23 (7e: 26): how to use them in estimation • effort is some function of the metric

  23. Lines Of Code • What is a line of code? • The measure was first proposed when programs were typed on cards, with one line per card • How does this correspond to statements, as in Java, which can span several lines, or where several statements can appear on one line? • Which programs should be counted as part of the system? • The count depends on the programming language and on the way LOCs are counted.
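
A toy counter that makes the ambiguity concrete: physical lines, non-blank non-comment lines, and a crude statement count all give different sizes for the same source text (the continuation line is even miscounted as an extra statement, which is exactly the problem):

```python
SAMPLE = """\
# initialize counters
total = 0
a = 1; b = 2

total = (a +
         b)
"""

def loc_counts(source):
    physical = source.splitlines()
    code = [ln.strip() for ln in physical
            if ln.strip() and not ln.strip().startswith("#")]
    statements = sum(ln.count(";") + 1 for ln in code)  # crude split on ';'
    return len(physical), len(code), statements

print(loc_counts(SAMPLE))  # (6, 4, 5): three different sizes for one snippet
```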

  24. Estimation based on LOCs • Determine the functional parts of the system • Estimate the LOC per part, using experience and historical data • divide the total by the average productivity for this kind of system (and/or part), e.g. 620 LOC/person-month, to obtain the effort
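
A minimal sketch of this procedure; the functional parts and their LOC estimates are invented, and 620 LOC/person-month is the slide's example productivity:

```python
parts = {"ui": 2_300, "database": 3_400, "reporting": 1_800}  # estimated LOC
productivity = 620  # LOC per person-month, from historical data

total_loc = sum(parts.values())
effort = total_loc / productivity
print(f"{total_loc} LOC -> {effort:.1f} person-months")  # 7500 LOC -> 12.1
```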

  25. Same for FPs or Object Points • Object points (also called application points) are an alternative function-related measure to function points when 4GLs or similar languages are used for development • Object points are NOT the same as object classes • The number of object points in a program is a weighted estimate of: • the number of separate screens that are displayed • the number of reports that are produced by the system • the number of program modules that must be developed to supplement the database code
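
A sketch of the weighted object-point count. The weights are those commonly quoted for COCOMO II's application composition model (screens 1/2/3 and reports 2/5/8 for simple/medium/difficult, 10 per 3GL module); the counts are invented:

```python
OP_WEIGHTS = {"screen": {"simple": 1, "medium": 2, "difficult": 3},
              "report": {"simple": 2, "medium": 5, "difficult": 8},
              "3gl_module": {"any": 10}}

def object_points(counts):
    return sum(n * OP_WEIGHTS[kind][cx]
               for kind, per_cx in counts.items()
               for cx, n in per_cx.items())

counts = {"screen": {"simple": 5, "medium": 3},
          "report": {"medium": 2}, "3gl_module": {"any": 1}}
print(object_points(counts))  # 5*1 + 3*2 + 2*5 + 1*10 = 31
```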

  26. Algorithmic cost modelling • Cost is estimated as a mathematical function of product, project and process attributes, estimated by project managers: • Effort = A × Size^B × M • A is an organisation-dependent constant, B reflects the disproportionate effort for large projects, and M is a multiplier reflecting product, process and people attributes • The most commonly used product attribute is code size; when using FPs, product attributes are (partly) contained in the FP count • Most models are similar, but they use different values for A, B and M

  27. The Software Equation • A dynamic multivariable model (Putnam and Myers, 1992), based on productivity data from 4000 projects • E = [LOC × B^0.333 / P]^3 × (1 / t^4) • E = effort in person-months or person-years • t = project duration in months or years • B = “special skills factor”; increases with the need for integration, testing, etc., and with LOC • P = “productivity parameter”
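
A worked sketch of the equation with consistent units (t in years, E in person-years); all input values are invented rather than calibrated:

```python
def software_equation_effort(loc, b, p, t):
    """E = [LOC * B^0.333 / P]^3 * (1 / t^4)."""
    return (loc * b ** 0.333 / p) ** 3 / t ** 4

# Hypothetical project: 33,000 LOC, special skills factor B = 0.28,
# productivity parameter P = 12,000, planned duration t = 1.3 years.
e = software_equation_effort(loc=33_000, b=0.28, p=12_000, t=1.3)
print(f"{e:.1f} person-years (~{e * 12:.0f} person-months)")  # 2.0 (~24)
```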

  28. The COCOMO model • COnstructive COst Model • An empirical model based on project experience. • Supported by software tools • Well-documented, ‘independent’ model which is not tied to a specific software vendor. • Long history from initial version published in 1981 (algorithmic cost model) through various instantiations to COCOMO 2. • COCOMO 2 takes into account different approaches to software development, reuse, etc.

  29. COCOMO 2 models • COCOMO 2 incorporates a range of sub-models that produce increasingly detailed software estimates: • Application composition model. Used when software is composed from existing parts. • Early design model. Used when requirements are available but design has not yet started. • Reuse model. Used to compute the effort of integrating reusable components. • Post-architecture model. Used once the system architecture has been designed and more information about the system is available.

  30. Early design model • Estimates can be made after the requirements have been agreed • Based on the standard formula for algorithmic models: • PM = A × Size^B × M, where • A = 2.94 in the initial calibration, Size is in KLOC, and B varies from 1.1 to 1.24 depending on the novelty of the project, development flexibility, risk management approaches and process maturity • M = PERS × RCPX × RUSE × PDIF × PREX × FCIL × SCED
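
A sketch of the early design estimate; A = 2.94 and the formula are from the slide, while the exponent and the seven multiplier ratings are invented (1.0 is nominal, below 1 reduces effort, above 1 increases it):

```python
A = 2.94  # initial COCOMO 2 calibration constant

def early_design_effort(ksloc, b, multipliers):
    m = 1.0
    for value in multipliers.values():  # M = product of the 7 multipliers
        m *= value
    return A * ksloc ** b * m           # PM = A * Size^B * M

multipliers = {"PERS": 0.85, "RCPX": 1.10, "RUSE": 1.00, "PDIF": 1.05,
               "PREX": 0.90, "FCIL": 1.00, "SCED": 1.00}
pm = early_design_effort(ksloc=50, b=1.17, multipliers=multipliers)
print(f"{pm:.0f} person-months")  # ~253 person-months for 50 KLOC
```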

  31. Multipliers • Multipliers reflect the capability of the developers, the non-functional requirements, the familiarity with the development platform, etc. • RCPX - product reliability and complexity; • RUSE - the reuse required; • PDIF - platform difficulty; • PREX - personnel experience; • PERS - personnel capability; • SCED - required schedule; • FCIL - the team support facilities.

  32. Post-architecture level • Uses the same formula as the early design model, but with 17 rather than 7 associated multipliers: • project attributes • product attributes • computer attributes (constraints imposed by the platform) • personnel attributes • The code size is estimated as the sum of: • LOC of new code to be developed • the equivalent number of lines of new code, computed using the reuse model • LOC that have to be modified because of requirements changes

  33. The exponent term • B depends on 5 scale factors: precedentedness, development flexibility, architecture/risk resolution, team cohesion and process maturity • their sum divided by 100 is added to 1.01, i.e. B = 1.01 + ΣSF / 100
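
A small sketch of this exponent computation (B = 1.01 + ΣSF / 100); the five scale-factor names follow COCOMO 2, the ratings are invented:

```python
scale_factors = {  # hypothetical ratings for the five COCOMO 2 scale factors
    "precedentedness": 3.72, "development_flexibility": 3.04,
    "architecture_risk_resolution": 4.24, "team_cohesion": 2.19,
    "process_maturity": 3.12,
}
B = 1.01 + sum(scale_factors.values()) / 100
print(round(B, 4))  # 1.1731
```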

  34. Remarks • Metrics are used to get a view on quality • also to predict cost, effort and time • also to improve the maturity level of a company • historical data is needed • reuse of components • the experience of software engineers and managers is very important • research trend: model-based development • rework needed for new system possibilities: internet (security), distributed systems, multi-core and parallel computing, the cloud, etc.

  35. Final • not that much theoretical guidance for managers • often an “externally” determined end date, cost and manpower • but: • work fills up the available time • leaving out “nice to have” features • less documentation
