

  1. Software Productivity Research MEASUREMENT, METRICS AND INDUSTRY LEADERSHIP Capers Jones, Chief Scientist Emeritus Software Productivity Research LLC Web: www.SPR.com email: cjonesiii@CS.com January 5, 2010

  2. SOURCES OF MEASUREMENT INFORMATION • Enterprises providing data • Fortune 500 companies 170 • Small-medium companies 185 • Federal Government agencies 20 • State-local Government agencies 25 • Universities 7 • Military services 3 • (Plus web sites and literature)

  3. MEASURES AND METRICS 2010 • Usage among SPR clients Percent • Financial measures 100% • Software measures 75% • Sales and market measures 65% • Warranty and Quality Measures 60% • Shareholder measures 55% • Balanced-scorecard measures 45% • Supply-chain measures 25% • ITIL measures 25%

  4. MEASUREMENT WEB CITATIONS CIRCA 2010 • Search-engine citations Number of sites • ITIL measures 8,590,000 • Lines of code measures 8,110,000 • Return on investment (ROI) 7,300,000 • Function point metrics 6,680,000 • Total Quality Management (TQM) 6,350,000 • Benchmarks 2,890,000 • Six-Sigma Measures 1,840,000 • Balanced-scorecard measures 509,000

  5. SOFTWARE MEASURES AND REPORTS (enterprise measurement framework; function point uses highlighted in the original diagram) • Enterprise measures: employee opinion survey, demographic measures, completed project measures, annual productivity report (development, enhancement, conversion, packages, contracts) • Project and operational measures: program production library and backlog measures, soft factor measures, quality measures, operational measures, productivity measures, down time measures, user satisfaction measures, defect measures, on-going project measures, reuse measures, response time measures, non-test measures (designs, code, documents, test cases, estimates), milestone measures, testing measures, cost and budget variance measures, post-release measures • Reports: annual productivity report, annual satisfaction survey (ease of use, functionality, support), monthly progress report (completed milestones, plan vs. actuals, red flag items), monthly quality report (defect volumes, defect severities, defect origins, defect causes, removal efficiency)

  6. WHAT IS A BALANCED SCORECARD? • A balanced scorecard examines the enterprise from four perspectives: financial, customer, process, and learning. • Source: Dr. David Norton and Dr. Robert Kaplan, Harvard Business School

  7. WHAT IS ITIL? • ITIL = Information Technology Infrastructure Library • A collection of > 30 books dealing with service management • ITIL is increasing in popularity and usage • Major ITIL topics cover service desks, problem management, change reports, and service-level agreements. • Gaps in ITIL coverage include lack of discussion of error-prone modules, bad-fix injection, entropy, and correlation of defect removal efficiency to reliability.

  8. DIFFICULT MEASUREMENTS CIRCA 2010 • Data base volumes or size • Data quality • Data cost of ownership • Web content size • Web content quality • Intangible value • Outsource measures and metrics • Cancelled project measures

  9. DEALING WITH VALUE MEASUREMENTS • Tangible Financial Value Factors • Cost Reduction $ saved • Direct Revenues $ gained • Indirect Revenues $ gained • User Effectiveness, Efficiency $ saved • Non-Financial Value Factors • User Satisfaction Market share • Employee Satisfaction, Morale Turnover • Competitive Advantages Market share • Human Life or Safety ? • National Defense ? • Enterprise Security ? • Federal or regulatory mandates ? • Enterprise Prestige ?

  10. DEALING WITH RISK MEASUREMENTS • Tangible Financial Risk Factors • Cancelled Projects $ lost • Cost Overruns $ spent • Schedule Delays $ spent • Poor Quality $ spent • Breach of Contract Litigation $ spent • Intangible Risk Factors • User Dissatisfaction Market loss • Employee Dissatisfaction Turnover • Poor quality Market loss • Patent, theft litigation ? • Human Life or Safety Failures ? • Federal or regulatory complaints ? • Loss of Enterprise Prestige ?

  11. HAZARDOUS MEASUREMENTS CIRCA 2010 • Metrics and Measures Behaving Badly • Cost per Defect (Penalizes quality) • Lines of Code (Ambiguous) • Cost per Line of Code (Penalizes new languages) • Lines of Code per Month (Ignores non-code work) • Staff Work Hours per month (Ignores non-work tasks) • Industry averages (Vague and ambiguous)

  12. METRICS AND MEASURES FOR INNOVATION • External product innovation: patents and inventions; research & development spending; market share and new market growth • Internal process innovation: time to market; process assessments; development costs; CMM level > 3; quality and reliability; industry benchmarks; customer satisfaction; Six-Sigma measures

  13. METRICS AND INDUSTRY LEADERSHIP • MEASURES WHERE INDUSTRY LEADERS OFTEN EXCEL: • Research and development • Market shares and market growth • Shareholder value • Time to market • Unit development costs of products • Customer satisfaction • Service and support • Warranty repairs • Staff morale

  14. METRICS AND INDUSTRY LEADERSHIP • MEASURES WHERE INDUSTRY LEADERS OFTEN EXCEL: • Six Sigma results • Capability Maturity Model (CMMI)

  15. SOFTWARE METRICS USAGE IN 2010 • Usage among SPR clients Percent • IFPUG function points 85% • Backfiring (LOC to function points) 75% • Lines of code (LOC) metrics 30% • Other function points (COSMIC, Mark II, etc.) 20% • (IFPUG function points #1 in overall usage in 2010.) • (COSMIC function points growing in 2010.)

  16. WHAT IS A FUNCTION POINT? • In 1979 IBM placed function points in the public domain. • Function points are the weighted totals of five factors: • Factor Number Weight TOTAL • Number of Inputs 10 X 5 = 50 • Number of Outputs 10 X 4 = 40 • Number of Inquiries 50 X 5 = 250 • Number of Logical files 5 X 10 = 50 • Number of Interfaces 10 X 7 = 70 • _____ • Unadjusted function points 460 • Complexity adjustment multiplier 1.2 • Adjusted function points 552
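A minimal sketch, in Python, of the function point arithmetic in the example above: unadjusted function points are the weighted sum of the five factors, and the adjusted total applies the complexity multiplier. The counts, weights, and 1.2 multiplier are taken from the slide's example, not from the full IFPUG counting rules.

```python
# Sketch of the function point arithmetic from the example above.
# Counts, weights, and the 1.2 complexity multiplier are the slide's
# example values, not a full IFPUG counting exercise.

FACTORS = [
    # (name, count, weight)
    ("inputs",        10,  5),
    ("outputs",       10,  4),
    ("inquiries",     50,  5),
    ("logical files",  5, 10),
    ("interfaces",    10,  7),
]

def unadjusted_fp(factors):
    """Weighted total of the five function point factors."""
    return sum(count * weight for _, count, weight in factors)

def adjusted_fp(factors, complexity_multiplier):
    """Apply the complexity adjustment multiplier to the unadjusted total."""
    return unadjusted_fp(factors) * complexity_multiplier

print(unadjusted_fp(FACTORS))        # 460
print(adjusted_fp(FACTORS, 1.2))     # 552.0
```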

  17. SOFTWARE PROJECTS MEASURED CIRCA 2010 • Metrics Used for U.S. Benchmarks Projects • Backfiring (LOC to function points) 90,000 • IFPUG function points 55,000 • Lines of code (LOC) metrics 25,000 • Other function points (COSMIC, Mark II, etc.) 10,500 • New high-speed, low-cost function points 500 • Backfiring #1 in volume of data in 2010
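Backfiring converts a count of logical source statements into approximate function points by dividing by a language-specific ratio. A hedged sketch follows; the statements-per-function-point ratios shown are placeholder assumptions for illustration, since published backfiring tables vary by language and language level.

```python
# Illustrative backfiring sketch: logical source statements divided by an
# assumed statements-per-function-point ratio. These ratios are placeholder
# assumptions; real backfiring tables vary by language and language level.

ASSUMED_STATEMENTS_PER_FP = {
    "assembly": 320,
    "c":        128,
    "cobol":    107,
    "java":      53,
}

def backfired_function_points(logical_statements, language):
    """Approximate function points implied by a logical statement count."""
    return logical_statements / ASSUMED_STATEMENTS_PER_FP[language]

# e.g., a hypothetical 64,000-statement COBOL application
print(round(backfired_function_points(64_000, "cobol")))   # ~598
```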

  18. METRICS CONVERSION: A WEAK LINK IN 2010 Metrics Conversion is Ambiguous Between: • Physical lines of code and logical statements • Logical statements and function points • Physical lines and function points • IFPUG Function points and British MK II function points • IFPUG Function points and COSMIC function points • IFPUG Function points and object points • IFPUG Function points and Agile story points • IFPUG Function points and Use Case points • IFPUG complexity factors and cyclomatic complexity • Calendar months and staff months • Work hours, work days, work weeks, work months • Current dollars and inflated dollars • Burden rates when benchmarking cost data

  19. SOFTWARE MEASUREMENT STATUS IN 2010 • Fortune 500 companies with productivity measures: 30% • Fortune 500 companies with quality measures: 45% • Fortune 500 companies with complete measures: 15% • Fortune 500 companies with missing measures: 85% • Number of software measurement personnel 5,500 • Number of software projects measured: 160,000 • Number of software projects not measured 50,000,000 • We need to do much more than we have done!

  20. SUCCESSFUL AND UNSUCCESSFUL PROGRAMS SUCCESSFUL MEASUREMENT PROGRAMS – Measurements are used as baselines for process improvement – Measurements are used as benchmarks within industry – Monthly reports of key measures to senior executives – Measurements used as goals for improvement targets – Annual productivity and quality report produced UNSUCCESSFUL MEASUREMENT PROGRAMS – Measurements not used for improvement programs – Measurements not used for industry comparisons – No reports of key measures to executives – No goals for improvement targets – No annual productivity and quality report produced

  21. MEASUREMENTS AND SOFTWARE CLASSES – Systems software Best quality measurements Best software quality – Information systems Best productivity measurements Best use of function point metrics Widest use of ITIL measurements – Outsource vendors Best benchmark measurements Best baseline measurements Shortest delivery schedules – Commercial software Best user satisfaction measurements Best testing metrics – Military software Most SEI process assessments Best software reliability

  22. WHY SOFTWARE MEASUREMENT HAS VALUE Companies that measure: Companies that don’t: On-time projects: 75% On-time projects: 45% Late projects: 20% Late projects: 40% Cancelled projects: 5% Cancelled projects: 15% Defect removal: > 95% Defect removal: Unknown Cost estimates: Accurate Cost estimates: Optimistic User satisfaction: High User satisfaction: Low Software status: High Software status: Low Staff morale: High Staff morale: Low

  23. WHERE SOFTWARE MEASUREMENT ADDS VALUE MEASUREMENT VALUE FACTORS PERCENT • Litigation risk avoidance 75% • Defect removal improvement 50% • Scrap and rework reduction 50% • Reliability and availability improvement 50% • Maintenance cost reduction 45% • Poor technology acquisitions 40% • Defect prevention improvement 25% • Development cost reduction 25% • Development schedule reduction 25% • Paperwork reduction 20% • Customer satisfaction improvement 15% • Staff morale improvement 15%

  24. CORPORATE MEASUREMENT EXPERIENCES MEASUREMENT VALUE FACTORS PERCENT • Customer satisfaction on key products + 50% • Software maintenance cost reductions - 45% • Software maintenance schedule reductions - 35% • Development schedule reductions - 20% • Development productivity rates + 15% • Software staff morale + 15% (Results from 4 years of software measurements)

  25. ANNUAL “TAX” FOR SOFTWARE MEASUREMENTS YEAR ASSESSMENT PRODUCTIVITY QUALITY TOTAL Year 1 1.5% 2.0% 2.0% 5.5% Year 2 1.0% 1.5% 1.5% 4.0% Year 3 1.0% 1.5% 1.5% 4.0% Year 4 1.0% 1.5% 1.5% 4.0% Year 5 1.0% 1.0% 1.5% 3.5% (Percentage of annual software staff budget)

  26. ANNUAL ROI FOR SOFTWARE MEASUREMENTS YEAR ASSESSMENT PRODUCTIVITY QUALITY TOTAL Year 1 $1.25 $1.50 $1.75 $4.50 Year 2 $1.75 $2.00 $2.50 $6.25 Year 3 $2.50 $2.75 $3.50 $8.75 Year 4 $3.25 $3.25 $5.00 $11.50 Year 5 $4.00 $4.00 $7.00 $15.00 (Return for each $1.00 invested) Measurement has an excellent ROI!

  27. GOALS OF SOFTWARE MEASUREMENT Apply metrics and measurement techniques that enable software projects to be managed and controlled with professional levels of performance. Effective software measurement includes high levels of accuracy in determining quantitative factors: – Sizes of all deliverables – Schedules of all activities – Resources and costs expended – Staffing levels of software specialists – Defect levels and removal efficiency – Demographics of software personnel – Customer support and maintenance data

  28. SUBJECTIVE SOFTWARE MEASUREMENT Effective software measurement also includes important but subjective or qualitative factors: – SEI maturity level of project if available – Staff specialization and experience levels – Volatility of requirements change – Value of processes, languages, and tools used – Risk, value, and projected ROI of major projects – User satisfaction after deployment – Staff and management morale Source: Capers Jones, Applied Software Measurement, McGraw-Hill, 1996

  29. QUANTITATIVE AND QUALITATIVE MEASUREMENTS (matrix of quantitative and qualitative data against measuring, assessing, and improving) • Quantitative data (where you are): size, effort, schedule, documentation, defects, productivity rates, quality levels • Qualitative data (why you are): personnel, processes, technology, environment, process assessments, skills inventory • Improving (how you should be): best case models

  30. TWELVE CRITERIA FOR MEASUREMENT SUCCESS The Measurement Program Should: 1. Benefit the executives who fund it 2. Benefit the managers and staff who use it 3. Generate positive ROI within 12 months 4. Meet normal corporate ROI criteria 5. Be as accurate as financial data 6. Explain why projects vary 7. Explain how much projects vary 8. Link assessments with quantitative data 9. Support multiple metrics 10. Support multiple kinds of software 11. Support multiple activities and deliverables 12. Lead to improvement in software results

  31. EIGHT MEASUREMENT HAZARDS The Measurements Program Should Not: 1. Conceal the names of projects and units 2. Become a “religion” that is followed without question 3. Show only overall data without any details 4. Omit non-coding activities such as design 5. Omit “soft factors” that explain variances 6. Support only one metric such as LOC 7. Omit quality and show only productivity 8. Set ambiguous or abstract targets - 10 to 1 productivity improvement - 10 to 1 quality improvement - 30% schedule improvement

  32. SOFTWARE COST MEASUREMENT DIFFICULTIES Twelve Software Cost Variations 1. Occupation Group (analysts, programmers, SQA, etc.) 2. Industry (banking, telecommunications, government, etc.) 3. Geographic region (urban, rural, country, state) 4. Company size (very small, small, medium, large) 5. Longevity in position 6. Appraisals or merit system 7. Bonuses (annual, performance, profitability, etc.) 8. Equity offered to executives, managers, personnel 9. Profit sharing 10. Unpaid overtime 11. Paid overtime 12. Benefits (medical, dental, vacations, etc.)

  33. COST VARIANCES IN 2010 AVERAGE SOFTWARE ENGINEER = $75,000 PER YEAR AVERAGE SOFTWARE MANAGER = $90,000 PER YEAR Variance by country = + or - 50% Variance by length of service = + or - 25% Variance by industry = + or - 20% Variance by company size = + or - 17% Variance by geographic region = + or - 15% Variance by occupation = + or - 13%

  34. COST VARIANCES FOR THE SAME POSITION Position Industry City Annual Salary Programmer Banking Geneva $85,000 Programmer Banking New York $75,000 Programmer Telecom Chicago $70,000 Programmer Defense St. Louis $60,000 Programmer Retail Tampa $55,000 Programmer Education Biloxi $50,000 Programmer Software Bombay $20,000 Programmer Defense Beijing $15,000

  35. SOFTWARE TIME MEASUREMENT DIFFICULTIES Thirteen Software Time Variations 1. Work year variations by country 2. Vacation periods by country 3. Vacation periods by company 4. Vacation accumulations over multiple years 5. Effective work days per calendar month 6. Effective work hours per day 7. Slack time between assignments 8. Non-project time (classes, travel, department meetings) 9. Company closings (strikes, storms, fires, earthquakes) 10. Sick days per year 11. Unpaid overtime 12. Paid overtime 13. Scrap and rework

  36. PROBLEMS WITH SOFTWARE WORK PERIODS Vacation days 0 to 30 per year Vacation carry over 0 to 220 days Public holidays 0 to 15 per year Sick days 0 to ?? per year Education days 0 to 15 per year Non-project days 0 to ?? per year Travel days 0 to ?? per year Special (snow, flood) 0 to ?? per year

  37. VARIATIONS IN WORK MONTH ASSUMPTIONS Work days per month Hours per day Work hours per month 30 10 300 (crunch) 22 9 198 (workaholic) 22 8 176 (official) 20 8 160 (holidays) 22 6 132 (average) 18 6 108 (vacations) 16 4 64 (slack)
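The practical consequence of this table is that the same effort in work hours translates into very different "staff month" counts depending on which assumption is used. A small sketch, using the rows above and a hypothetical 10,000-hour project:

```python
# The same effort in hours yields very different "staff month" counts
# depending on the work-month assumption applied. Assumptions are the
# rows of the table above; the 10,000-hour project is hypothetical.

WORK_MONTH_ASSUMPTIONS = {
    # label: (work days per month, hours per day)
    "crunch":     (30, 10),
    "workaholic": (22, 9),
    "official":   (22, 8),
    "holidays":   (20, 8),
    "average":    (22, 6),
    "vacations":  (18, 6),
    "slack":      (16, 4),
}

def staff_months(effort_hours, assumption):
    days, hours = WORK_MONTH_ASSUMPTIONS[assumption]
    return effort_hours / (days * hours)

for label in WORK_MONTH_ASSUMPTIONS:
    print(f"{label:12s} {staff_months(10_000, label):6.1f} staff months")
```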

  38. U.S. SOFTWARE WORK HOURS PER DAY Regular Unpaid Total Hours Hours Hours Commercial 7.0 2.0 9.0 Web 7.0 2.0 9.0 Outsource 7.0 1.5 8.5 Military 6.5 1.0 7.5 System 6.0 1.0 7.0 MIS 5.5 1.0 6.5 Average 6.5 1.4 7.9

  39. UNPAID OVERTIME ASSUMPTIONS • UNPAID OVERTIME IS USUALLY NOT RECORDED • UNPAID OVERTIME ARTIFICIALLY LOWERS COSTS • UNPAID OVERTIME HAS MAJOR IMPACTS ON SCHEDULES TYPICAL UNPAID OVERTIME ASSUMPTIONS: JAPAN = 12 HOURS PER WEEK UNITED STATES = 5 HOURS PER WEEK CANADA = 2 HOURS PER WEEK GERMANY = 0 HOURS PER WEEK
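One way to see the effect: if unpaid hours are worked but never recorded, apparent productivity in function points per staff month is inflated and schedules look shorter than the work actually required. The sketch below uses assumed figures (project size, hours worked, and the 132-hour "average" work month from slide 37) purely for illustration.

```python
# Unrecorded unpaid overtime inflates apparent productivity.
# Project size, hours worked, and the 132-hour work month are
# assumptions chosen only to illustrate the effect.

WORK_HOURS_PER_MONTH = 132   # "average" month from the work-month table

def fp_per_staff_month(function_points, effort_hours):
    return function_points / (effort_hours / WORK_HOURS_PER_MONTH)

project_fp = 1_000                       # assumed project size
true_hours = 20_000                      # assumed hours actually worked
unpaid_hours = true_hours * (5 / 45)     # ~5 unpaid hours in a 45-hour week
recorded_hours = true_hours - unpaid_hours

print(f"Apparent: {fp_per_staff_month(project_fp, recorded_hours):.1f} FP per staff month")
print(f"Actual:   {fp_per_staff_month(project_fp, true_hours):.1f} FP per staff month")
```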

  40. INFLATION RATES AND LONG-RANGE PROJECTS • Large projects may run more than 5 years in development. • Inflation rates need to be added for long-range measurements. • Inflation rates vary by country in any given year. • Inflation rates vary within countries from year to year. • Maximum inflation rates have topped 100% per year.
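A sketch of the adjustment: each year's spending is compounded forward by the subsequent years' inflation rates so that a multi-year project can be expressed in final-year dollars. The yearly costs and inflation rates below are assumptions for illustration only.

```python
# Compounding annual inflation into a long-range project's costs.
# Yearly spending and inflation rates below are assumptions for illustration.

def total_in_final_year_dollars(yearly_costs, yearly_inflation):
    """Inflate each year's spending forward to the final year and sum."""
    total = 0.0
    for i, cost in enumerate(yearly_costs):
        factor = 1.0
        for rate in yearly_inflation[i + 1:]:
            factor *= 1.0 + rate
        total += cost * factor
    return total

costs = [2.0e6, 2.5e6, 3.0e6, 3.0e6, 2.5e6]   # assumed spend per project year
rates = [0.00, 0.03, 0.04, 0.03, 0.05]        # assumed inflation in each year

print(f"Nominal total:          ${sum(costs):,.0f}")
print(f"In final-year dollars:  ${total_in_final_year_dollars(costs, rates):,.0f}")
```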

  41. U.S. AVERAGE COSTS PER FUNCTION POINT IN 2010 Unburdened Fully Burdened End-user Software $350 $500 Web Software $450 $800 Information Systems Software $750 $1,100 Outsource Software $700 $1,500 Commercial Software $1,100 $1,800 Systems Software $1,250 $2,100 State government software $1,300 $2,200 Military Software $2,500 $5,000 Average $1,006 $1,875

  42. U.S. AVERAGE PRODUCTIVITY IN 2010 Small projects Large projects (< 1000 FP) (> 10,000 FP) End-user Software 35 --- Web Software 25 9 Information Systems Software 25 10 Outsource Software 28 12 Commercial Software 30 11 Systems Software 20 13 State government software 12 6 Military Software 8 3 Average 23 9 Data expressed in terms of function points per staff month
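Cost per function point and productivity are two views of the same data: cost per FP is roughly the fully burdened monthly compensation divided by function points per staff month. The sketch below uses the $75,000 average salary from slide 33 and productivity rates from the table above; the 80% burden rate is an assumption, not SPR data.

```python
# Linking productivity to cost: cost per function point is roughly the
# fully burdened monthly compensation divided by FP per staff month.
# The $75,000 salary is from the cost-variance slide; the 80% burden
# rate is an assumption; productivity rates come from the table above.

def cost_per_function_point(annual_salary, burden_rate, fp_per_staff_month):
    burdened_monthly_cost = annual_salary * (1 + burden_rate) / 12
    return burdened_monthly_cost / fp_per_staff_month

salary = 75_000
burden = 0.80   # assumed overhead for benefits, office space, management, etc.

for domain, rate in [("Web software, small project", 25),
                     ("Systems software, large project", 13),
                     ("Military software, large project", 3)]:
    print(f"{domain:32s} ${cost_per_function_point(salary, burden, rate):,.0f} per FP")
```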

  43. CREEPING REQUIREMENTS IN 2010 Average Monthly Rate of Domain Creeping Requirements Agile software projects 10.0% Web software projects 4.0% Commercial Software 3.5% Information technology 2.5% System, embedded software 2.0% Military Software 2.0% Outsourced Software 1.5% AVERAGE 2.6%
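Because the creep rates are monthly, they compound over the period during which requirements keep changing. The sketch below assumes a 1,000-function-point project and a 12-month change window; both figures are illustrative assumptions, while the monthly rates come from the table above.

```python
# Monthly requirements creep compounds over the change window.
# The 1,000 FP starting size and 12-month window are illustrative
# assumptions; the monthly rates are from the table above.

def grown_size(initial_fp, monthly_creep, months):
    return initial_fp * (1.0 + monthly_creep) ** months

initial, window = 1_000, 12
for domain, rate in [("Agile", 0.10), ("Web", 0.04), ("Information technology", 0.025)]:
    final = grown_size(initial, rate, window)
    print(f"{domain:24s} {final:,.0f} FP ({(final / initial - 1):.0%} growth)")
```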

  44. U.S. DEFECT REMOVAL EFFICIENCY LEVELS (Defect potential in terms of Defects per Function Point) Defect Defect Removal Delivered Origins Potentials Efficiency Defects Requirements 1.00 77% 0.23 Design 1.25 85% 0.19 Coding 1.75 95% 0.09 Document 0.60 80% 0.12 Bad Fixes 0.40 70% 0.12 Total 5.00 85% 0.75 U.S. averages usually do not include formal inspection or static analysis defect removal methods
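The arithmetic behind this table (and the best-in-class table that follows) is simple: delivered defects per function point equal the defect potential times one minus the removal efficiency. A sketch that reproduces the U.S.-average figures, with small rounding differences:

```python
# Delivered defects per FP = defect potential x (1 - removal efficiency).
# The potentials and efficiencies are the U.S.-average figures from this
# slide; results differ from the slide only by rounding.

US_AVERAGES = {
    # origin: (defect potential per FP, removal efficiency)
    "requirements": (1.00, 0.77),
    "design":       (1.25, 0.85),
    "coding":       (1.75, 0.95),
    "documents":    (0.60, 0.80),
    "bad fixes":    (0.40, 0.70),
}

delivered = {o: p * (1.0 - e) for o, (p, e) in US_AVERAGES.items()}
total_potential = sum(p for p, _ in US_AVERAGES.values())     # 5.00
total_delivered = sum(delivered.values())                     # ~0.75

for origin, value in delivered.items():
    print(f"{origin:12s} {value:.2f} delivered defects per FP")
print(f"Overall removal efficiency: {1 - total_delivered / total_potential:.0%}")
```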

  45. BEST IN CLASS REMOVAL EFFICIENCY LEVELS (Defect potential in terms of Defects per Function Point) Defect Defect Removal Delivered Origins Potentials Efficiency Defects Requirements 0.40 85% 0.08 Design 0.60 97% 0.02 Coding 1.00 99% 0.01 Document 0.40 98% 0.01 Bad Fixes 0.10 95% 0.01 Total 2.50 96% 0.13 Best in class averages do include formal inspections and static analysis defect removal methods

  46. FUNCTION POINTS AND OTHER METRICS • POTENTIAL BUSINESS METRICS • Function points - Measures software size • Data points - Measures data base size • Service points - Measures support size • Engineering points - Measures hardware size • Value points - Measures intangibles & ROI • Content points - Measures web-site contents

  47. PROPOSED SUITE OF METRICS • Function points: inputs, outputs, inquiries, logical files, interfaces • Data points: entities, sets, attributes, interfaces, constraints • Value points: time to market, cost reduction, revenue increase, market share, morale, health/safety, risk reduction, national security, mandates/statutes, customer satisfaction • Service points: customers, inputs, outputs, inquiries, logical files, interfaces, constraints, references • Engineering points: algorithms, inventions, references, feedback, constraints, inputs, outputs, interfaces, components • Web content points: primary information, derivative information, nationalization, personalization, links, services, applets

  48. RATIONALE FOR FUNCTION POINTS • Software personnel > 10% of total staffing in many large businesses. • Software portfolios > 150,000 to 5,000,000 function points in many large businesses. • Software is a critical corporate asset, and very hard to control. • Function points can assist in solving many software business problems. • Function points can assist in sizing, estimating, planning, and software quality control.

  49. RATIONALE FOR DATA POINTS • Businesses and government agencies own more data than they own software. • There are no effective metrics for measuring data base size or for measuring data quality. • Data is a critical corporate asset, and even harder to control than software. • No current methods for economic studies of data mining, data warehousing, on-line analytical processing (OLAP), etc. • Data base updates are being underestimated for Web and E-business due to lack of metrics and lack of estimating tools that can handle data creation and modification.

  50. RATIONALE FOR ENGINEERING POINTS • Engineering personnel > 10% of total staffing in many large businesses. • Many important products are hybrid and need both software and hardware engineering. • Integrated estimation and measurement across the hardware-software boundary is difficult. • Function points and engineering points could lead to new forms of estimation and measurement tools.
