Process Maturity Models

1. Process Maturity Models
- Evaluation of the maturity of an organization's development process
- Benchmark for SQA practice
- Breakdown of the software process into process areas
- Provides a means of measuring maturity, of the complete process and/or of separate process areas
- Examples:
  - Capability Maturity Model (CMM), from the Software Engineering Institute (SEI) at Carnegie Mellon University
  - ISO 9000 quality standard series, from the International Organization for Standardization

2. CMM Maturity Levels
- Initial – chaotic; unpredictable cost, schedule, and quality
- Repeatable – intuitive; cost and quality highly variable, some control of schedule, informal/ad hoc procedures
- Defined – qualitative; reliable costs and schedules, improving but unpredictable quality performance
- Managed – quantitative; reasonable statistical control over product quality
- Optimizing – quantitative basis for continuous improvement

3. CMM Key Process Areas
- Functions that must be present at a particular level
- Level 1: Initial
  - Any organization that does not meet the requirements of higher levels falls into this classification
- Level 2: Repeatable
  - Requirements management
  - Software project planning and oversight
  - Software subcontract management
  - Software quality assurance
  - Software configuration management

4. CMM Key Process Areas
- Level 3: Defined
  - Organizational process improvement
  - Organizational process definition
  - Training program
  - Integrated software management
  - Software product engineering
  - Intergroup coordination
  - Peer reviews

5. CMM Key Process Areas
- Level 4: Managed
  - Process measurement and analysis
    - Statistics on software design/code/test defects
    - Defect projection
    - Measurement of test coverage
    - Analysis of process-related causes of defects
    - Analysis of review efficiency for each project
  - Quality management

6. CMM Key Process Areas
- Level 5: Optimizing
  - Defect prevention
    - Mechanism for defect cause analysis to determine process changes required for prevention
    - Mechanism for initiating error prevention actions
  - Technology innovation
  - Process change management

7. ISO 9000
- Set of standards and guidelines for quality management systems
- Registration involves passing a third-party audit, and passing regular audits to ensure continued compliance
- The ISO 9001 standard applies to software engineering – 20 areas

8. ISO 9000
- Management responsibility
- Quality system
- Contract review
- Design control
- Document control
- Purchasing
- Purchaser-supplied product
- Product identification and traceability
- Process control
- Inspection and testing

9. ISO 9000
- Inspection, measuring, and test equipment
- Inspection and test status
- Control of nonconforming product
- Corrective action
- Handling, storage, packaging, and delivery
- Quality records
- Internal quality audits
- Training
- Servicing
- Statistical techniques

10. Software Quality Metrics
- Measure: quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process
- Measurement: the act of determining a measure
- Metric (IEEE Std. 610.12): quantitative measure of the degree to which a system, component, or process possesses a given attribute; may relate individual measures
- Indicator: metric(s) that provide insight into the software process, a project, or a product

11. Objective
- Use of metrics is vital for:
  - Estimates of development effort: schedule, cost, size of team
  - Progress tracking: deviations from schedule and cost; determining when corrective action is needed
- A history of project metrics can help an organization improve its process for future product releases, and improve software quality

12. Types of Metrics
- Product metrics: measurements of the size/complexity of a product
  - Help to provide an estimate of development effort
- Process metrics:
  - Progress tracking
  - Staffing pattern over the life cycle
  - Productivity
  - Determine the effectiveness of various process stages, and provide targets for improvement
- Operational metrics: collection of data after product release

13. Problems of Measurement
- Data collection: how to collect data without impacting productivity?
  - Usually need to collect data via tools, for accuracy and currency
- Inappropriate focus: developers target "ideal" metric values instead of focusing on what is appropriate for the situation
- Perception that metrics will be used to rank employees

14. Product Metrics
- Objective: obtain a sense of the size or complexity of a product, for the purpose of:
  - Scheduling: How long will it take to build? How much test effort will be needed? When should the product be released?
  - Reliability: How well will it work when released?

15. Lines of Code
- The "classic" metric for the size of a product
- Often expressed as KLOC (thousand lines of code)
- Sometimes counted as (new + modified) lines of code for subsequent releases, when determining development effort
- Advantages:
  - Easy to measure with tools
  - Appears to have a correlation with effort
- Disadvantages:
  - "What is a line of code?" Comments? Declarations? Braces?
  - Not as well correlated with functionality
  - Complexity of statements differs between programming languages
  - Automated code generation from GUI builders, parser generators, UML, etc.
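
To make the counting ambiguity concrete, here is a minimal counter sketch (not from the slides) that skips blank lines and full-line // comments; a tool with a different counting rule would report a different size for the same file.

    # Minimal LOC counter sketch: counts physical lines of C-like source,
    # skipping blank lines and full-line // comments. Real tools also handle
    # block comments, strings, and per-language rules.
    def count_loc(source: str) -> int:
        loc = 0
        for line in source.splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith("//"):
                loc += 1
        return loc

    program = """#include <stdio.h>

    int main(void)
    {
        printf("Hello World");  // greeting
    }
    """
    print(count_loc(program))  # 5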

16. "Hello, world" in C and COBOL

5 lines of C, versus 17 lines of COBOL:

    #include <stdio.h>

    int main(void)
    {
        printf("Hello World");
    }

    000100 IDENTIFICATION DIVISION.
    000200 PROGRAM-ID. HELLOWORLD.
    000300
    000400*
    000500 ENVIRONMENT DIVISION.
    000600 CONFIGURATION SECTION.
    000700 SOURCE-COMPUTER. RM-COBOL.
    000800 OBJECT-COMPUTER. RM-COBOL.
    000900
    001000 DATA DIVISION.
    001100 FILE SECTION.
    001200
    100000 PROCEDURE DIVISION.
    100100
    100200 MAIN-LOGIC SECTION.
    100300 BEGIN.
    100400     DISPLAY " " LINE 1 POSITION 1 ERASE EOS.
    100500     DISPLAY "Hello world!" LINE 15 POSITION 10.
    100600     STOP RUN.
    100700 MAIN-LOGIC-EXIT.
    100800     EXIT.

17. Function Point Analysis
- Defined in 1977 by Alan Albrecht of IBM
- Details: http://www.nesma.nl/english/menu/frsfpa.htm
- Attempts to determine a number that increases for a system with:
  - More external interfaces
  - More complexity

18. Function Point Analysis
Steps to performing function point analysis:
1. Identify the user functions of the system (5 types of functions are specified).
2. Determine the complexity of each function (low, medium, high).
3. Use the function point matrix to determine a value for each function in step 1, based on its complexity assigned in step 2.
4. Rate the system on a scale of 0 to 5 for each of 14 specified system characteristics (examples: communications, performance).
5. Combine the values determined in steps 3 and 4 based on specified weightings.
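
A sketch of the arithmetic behind these steps, assuming the commonly published IFPUG weight matrix and the 0.65 + 0.01 × TDI adjustment formula; the slides do not reproduce the actual matrix, so treat the numbers below as illustrative.

    # Sketch of an (unadjusted -> adjusted) function point calculation.
    # The weights are the commonly published IFPUG matrix (low/medium/high)
    # for the five standard function types; the 14 ratings are the general
    # system characteristics, each scored 0-5.
    WEIGHTS = {
        "EI":  (3, 4, 6),    # external inputs
        "EO":  (4, 5, 7),    # external outputs
        "EQ":  (3, 4, 6),    # external inquiries
        "ILF": (7, 10, 15),  # internal logical files
        "EIF": (5, 7, 10),   # external interface files
    }
    COMPLEXITY = {"low": 0, "medium": 1, "high": 2}

    def adjusted_function_points(functions, characteristics):
        # Steps 1-3: sum the weighted counts for each identified function.
        ufp = sum(WEIGHTS[ftype][COMPLEXITY[cx]] for ftype, cx in functions)
        # Steps 4-5: fold in the 14 characteristic ratings (0-5 each).
        tdi = sum(characteristics)
        return ufp * (0.65 + 0.01 * tdi)

    # Hypothetical system: three inputs, two outputs, one internal file.
    funcs = [("EI", "low"), ("EI", "medium"), ("EI", "high"),
             ("EO", "medium"), ("EO", "medium"), ("ILF", "high")]
    ratings = [3] * 14  # average influence for all 14 characteristics
    print(adjusted_function_points(funcs, ratings))  # 38 * 1.07 = ~40.7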

19. Using Function Point Analysis
To use function point analysis for estimation, the following information is needed:
- Standard rate of development per function point, collected from previous projects
- Special modifications (plus or minus) to the standard rate for the project to be estimated; factors: new staff, new development tools, etc.
- Additional factors not based on project size
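
A minimal sketch of the estimation step, assuming a purely multiplicative model; the rate and modifier values are hypothetical.

    # Sketch: turning a function point count into an effort estimate using
    # a historical rate; all numbers are illustrative.
    def estimate_hours(function_points, hours_per_fp, modifiers=()):
        # modifiers are multiplicative adjustments to the standard rate,
        # e.g. 1.15 for new staff, 0.90 for better tooling.
        effort = function_points * hours_per_fp
        for m in modifiers:
            effort *= m
        return effort

    print(estimate_hours(40.7, 8.0, modifiers=(1.15, 0.90)))  # ~337 hours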

20. McCabe's Cyclomatic Complexity
- Based on a flow graph model of software
- Measures the number of linearly independent paths through a program
- Uses graph components:
  - E: number of edges in the flow graph
  - N: number of nodes in the flow graph
- Cyclomatic complexity: M = E – N + 2
- Claim: if M > 10 for a module, it is "likely" to be error prone
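
A small sketch of the M = E – N + 2 calculation on a flow graph given as an adjacency list; the graph below (an if/else followed by a loop) is illustrative, not the one from the slides.

    # Sketch: cyclomatic complexity from a flow graph adjacency list.
    # M = E - N + 2 for a connected graph with one entry and one exit.
    def cyclomatic_complexity(graph):
        nodes = set(graph)
        edges = 0
        for src, targets in graph.items():
            nodes.update(targets)
            edges += len(targets)
        return edges - len(nodes) + 2

    # Illustrative graph: an if/else branch followed by a while loop.
    flow = {
        "entry": ["then", "else"],
        "then": ["loop"],
        "else": ["loop"],
        "loop": ["body", "exit"],
        "body": ["loop"],
        "exit": [],
    }
    print(cyclomatic_complexity(flow))  # E=7, N=6 -> M=3

One decision point for the if/else plus one for the loop gives M = 3, matching the formula.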

21. Calculating Cyclomatic Complexity

22. Cyclomatic Complexity
With a cyclomatic complexity of 6, there are 6 independent paths through the flow graph:
- E1-E2-E15
- E1-E5-E3-E14-E15
- E1-E5-E6-E4-E13-E14-E15
- E1-E5-E6-E7-E12-E13-E14-E15
- E1-E5-E6-E7-E8-E10-E11-E12-E13-E14-E15
- E1-E5-E6-E7-E8-E9-E11-E12-E13-E14-E15

23. Software Package Metrics
From R.C. Martin, Agile Software Development: Principles, Patterns, and Practices
- Metrics determined from a package of classes in an object-oriented language
- Number of classes and interfaces
- Coupling:
  - Number of other packages that depend on classes in this package (CA)
  - Number of packages depended on by classes in this package (CE)
- Ratio of abstract classes to total classes (0 to 1)
- "Instability": ratio CE / (CA + CE)
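
A sketch of how these coupling figures might be computed, assuming package dependencies are available as a simple mapping; the package names and class counts are made up.

    # Sketch of Martin's package coupling metrics, given a dict mapping
    # each package to the set of packages its classes use.
    def package_metrics(deps, abstract_counts, class_counts):
        metrics = {}
        for pkg in deps:
            ce = len(deps[pkg])                              # outgoing deps
            ca = sum(pkg in used for used in deps.values())  # incoming deps
            instability = ce / (ca + ce) if (ca + ce) else 0.0
            abstractness = abstract_counts[pkg] / class_counts[pkg]
            metrics[pkg] = {"CA": ca, "CE": ce,
                            "I": instability, "A": abstractness}
        return metrics

    deps = {"ui": {"core"}, "db": {"core"}, "core": set()}
    print(package_metrics(deps,
                          abstract_counts={"ui": 0, "db": 1, "core": 4},
                          class_counts={"ui": 10, "db": 5, "core": 8}))
    # "core" comes out stable (I=0) and half abstract (A=0.5), the profile
    # Martin recommends for heavily depended-on packages.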

24. Additional OO Metrics
- Class dependencies: classes used, classes used by
- Degree of inheritance:
  - Number of subclasses for a class
  - Maximum depth of inheritance from the root of the tree to a class
- Number of method calls invoked to respond to a message

25. Process Metrics
Uses:
- Keep the current project on track
- Accumulate data for future projects
- Provide data for process improvements

26. Defect Density
- Measurements of the number of defects detected, relative to a measurement of the code size:
  - Defects found per line of code
  - Defects found per function point
- Can be used to determine the effectiveness of process stages:
  - Density of defects removed by code inspection
  - Density of defects removed by testing
  - Total defect density
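
A short worked example of the per-stage density calculations; the counts are illustrative.

    # Sketch: defect density per process stage, in defects per KLOC.
    def defect_density(defects, kloc):
        return defects / kloc

    code_size_kloc = 42.0
    found_in_inspection = 63
    found_in_test = 21
    print(defect_density(found_in_inspection, code_size_kloc))  # 1.5
    print(defect_density(found_in_test, code_size_kloc))        # 0.5
    print(defect_density(found_in_inspection + found_in_test,
                         code_size_kloc))                       # 2.0 total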

27. More Defect Metrics
- Average severity of defects: classify defects by severity (e.g. a scale of 1 to 5), then determine the average severity of all defects
- Cost of defect removal: estimated dollar cost of removing a defect at each stage of the development process
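
A sketch of both metrics over an invented defect log; severities and costs are illustrative.

    # Sketch: average severity (1 = cosmetic ... 5 = critical) and average
    # removal cost per stage, from a list of defect records.
    defects = [
        {"severity": 2, "stage": "inspection", "cost": 120},
        {"severity": 4, "stage": "test",       "cost": 900},
        {"severity": 3, "stage": "test",       "cost": 750},
        {"severity": 5, "stage": "field",      "cost": 8000},
    ]
    print(sum(d["severity"] for d in defects) / len(defects))  # 3.5

    for stage in ("inspection", "test", "field"):
        costs = [d["cost"] for d in defects if d["stage"] == stage]
        print(stage, sum(costs) / len(costs))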

28. Timetable / Cost Metrics
- Timetable metrics:
  - Ratio of milestones achieved on schedule to the total number of milestones
  - Average delay of milestone completion
- Cost metrics:
  - Development cost per KLOC
  - Development cost per function point
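
A sketch of the two timetable metrics computed from (planned, actual) milestone completion dates; the dates are hypothetical.

    # Sketch: on-schedule ratio and average delay from milestone dates.
    from datetime import date

    milestones = [
        (date(2024, 1, 15), date(2024, 1, 15)),  # on time
        (date(2024, 2, 1),  date(2024, 2, 8)),   # 7 days late
        (date(2024, 3, 1),  date(2024, 3, 15)),  # 14 days late
    ]
    on_schedule = sum(actual <= planned for planned, actual in milestones)
    print(on_schedule / len(milestones))              # 0.33...
    delays = [(actual - planned).days for planned, actual in milestones]
    print(sum(delays) / len(delays))                  # 7.0 days on average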

29. Productivity Metrics
- Number of development hours per KLOC
- Number of development hours per function point
- Number of development hours per test case

30. In-Process Metrics for Software Testing
- Metrics that are effective for managing software testing
- Used to indicate "good" or "bad" in terms of quality or schedule, and thus to drive improvements
- A baseline for each metric should always be established for comparison
- Helps to answer the difficult question: how do you know your product is good enough to ship?

31. S Curve
- Tracks the progress of testing over time: is testing behind schedule?
- If testing falls behind, actions need to be taken
- Planned curves can be used for release-to-release comparison
- The planned curve needs to be constructed carefully

32. S Curve
Test progress S curve (planned, attempted, actual):
- The X-axis of the S curve is the time unit (e.g. weeks)
- The Y-axis is the number of test cases
- The S curve depicts, cumulatively for each time unit:
  - Planned test cases
  - Test cases attempted
  - Test cases completed successfully
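
A sketch of the data behind such a curve, accumulating weekly counts into the three running totals; the weekly numbers are illustrative. Feeding the three cumulative series to any charting tool yields the planned/attempted/actual S curves.

    # Sketch: cumulative series for a test-progress S curve.
    from itertools import accumulate

    planned   = [10, 25, 40, 40, 25, 10]  # test cases planned per week
    attempted = [8, 20, 35, 42, 30, 12]
    completed = [7, 18, 30, 38, 28, 11]

    for week, (p, a, c) in enumerate(zip(accumulate(planned),
                                         accumulate(attempted),
                                         accumulate(completed)), start=1):
        # A positive gap means testing is behind the plan this week.
        print(f"week {week}: planned={p} attempted={a} completed={c} "
              f"gap={p - a}")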

34. Defect Arrivals Over Time
- Concerned with the pattern of defect arrival:
  - The X-axis is the time unit (e.g. weeks) before product release
  - The Y-axis is the number of defects reported during each time unit
- Include data for a comparable baseline whenever possible
- A comparison of the patterns of several releases is useful

35. Defect arrival metric

36. Defect Arrival Metric
A positive pattern of defect arrival has:
- Higher arrivals earlier
- An earlier peak (relative to the baseline)
- A decline to a lower level, earlier, before the product ship date
The tail end is especially important, as it indicates the quality of the product in the field.

37. Defect Backlog
- Defect backlog: the number of testing defects remaining open at any given time
- Like defect arrivals, the defect backlog can be graphed to track test progress:
  - The X-axis is the time unit (weeks) before product ship
  - The Y-axis is the number of defects in the backlog
- Unlike defect arrivals, the defect backlog is under the control of development
- Comparison of actual data to the target suggests actions for defect removal
- The defect backlog should be kept at a reasonable level
- Release-to-release comparison is useful
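
A sketch of the backlog bookkeeping: each week's backlog is the previous backlog plus arrivals minus closures. The weekly counts are invented.

    # Sketch: weekly defect backlog from arrivals and closures,
    # backlog[t] = backlog[t-1] + arrivals[t] - closed[t].
    arrivals = [30, 45, 50, 35, 20, 10]  # defects opened per week
    closed   = [20, 35, 55, 40, 25, 12]  # defects fixed/verified per week

    backlog = 0
    for week, (opened, fixed) in enumerate(zip(arrivals, closed), start=1):
        backlog += opened - fixed
        print(f"week {week}: backlog={backlog}")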

38. When Is the Product Ready to Release?
- This is not an easy question, and there is no general criterion
- Applying these in-process metrics, together with their baselines, can provide good insight for answering it
- Recommended:
  - Weekly defect arrivals
  - Defects in backlog

39. Operational Metrics
- Collection of data after the release of a product
- Used for:
  - Reliability estimation
  - Customer satisfaction
  - Feedback for enhancements
- Data can be obtained from:
  - Product logs/monitors, if available
  - Support line usage
  - Customer problem reports
  - Customer surveys
  - Maintenance effort

40. Support Line Metrics
- Number of support calls, by type of issue:
  - Defect not previously known
  - Duplicate of a defect already known
  - Installation problem
  - Usability problem
  - Unclear documentation
  - User error
- Call rate per time period
- Average severity of issues

41. Customer Satisfaction
- Using customer surveys, e.g. a customer rating on a five-point scale:
  - Very satisfied
  - Satisfied
  - Neutral
  - Dissatisfied
  - Very dissatisfied
- Examples of metrics:
  - Percent of completely satisfied customers
  - Percent of satisfied customers (satisfied and completely satisfied)
  - Percent of dissatisfied customers (dissatisfied and completely dissatisfied)
  - Percent of non-satisfied customers (neutral, dissatisfied, and completely dissatisfied)
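
A sketch of these percentage metrics from hypothetical survey counts, reading "completely satisfied" in the metric names as the "very satisfied" rating on the scale above.

    # Sketch: satisfaction percentages from five-point survey counts.
    responses = {"very satisfied": 120, "satisfied": 200, "neutral": 50,
                 "dissatisfied": 20, "very dissatisfied": 10}
    total = sum(responses.values())

    def pct(*keys):
        return 100 * sum(responses[k] for k in keys) / total

    print(pct("very satisfied"))                                # 30.0
    print(pct("very satisfied", "satisfied"))                   # 80.0
    print(pct("dissatisfied", "very dissatisfied"))             # 7.5
    print(pct("neutral", "dissatisfied", "very dissatisfied"))  # 20.0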

42. Reliability
Measures the availability of the product for service. Possible measures:
- Percentage of time available, out of total time
- Expected amount of outage time over a standard time interval
- Mean time to failure (MTTF): average amount of operational time before a failure occurs
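
A sketch of the availability and MTTF calculations from a hypothetical outage log.

    # Sketch: availability and MTTF over one month of scheduled service.
    total_hours = 24.0 * 30            # scheduled service time
    outages = [0.5, 2.0, 1.5]          # durations of each failure, hours

    downtime = sum(outages)
    uptime = total_hours - downtime
    print(uptime / total_hours)        # availability ~0.994
    print(uptime / len(outages))       # MTTF ~238.7 operational hours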
