Future Challenges for Systems and Software Cost Estimation and Measurement


  1. Future Challenges for Systems and Software Cost Estimation and Measurement
  Barry Boehm, USC-CSSE, Fall 2011

  2. Summary
  • Current and future trends create challenges for systems and software data collection and analysis
  • Metrics and "productivity": "equivalent" size; requirements/design/product/value metrics; productivity growth and decline phenomena
  • Cost drivers: effects of complexity, volatility, and architecture
  • Alternative processes: rapid/agile; systems of systems; evolutionary development
  • Model integration: systems and software; cost, schedule, and quality; costs and benefits
  • Updated systems and software data definitions and estimation methods are needed for good management
  • Being addressed in the Nov 4-5 workshops

  3. Metrics and "Productivity"
  • "Equivalent" size
  • Requirements/design/product/value metrics
  • Productivity growth phenomena
  • Incremental development productivity decline

  4. Size Issues and Definitions
  • An accurate size estimate is the most important input to parametric cost models
  • Desire consistent size definitions and measurements across different models and programming languages
  • The AFCAA Guide sizing chapter addresses these:
    • Common size measures defined and interpreted for all the models
    • Guidelines for estimating software size
    • Guidelines to convert size inputs between models so projects can be represented in a consistent manner
  • Using Source Lines of Code (SLOC) as the common measure
    • Logical source statements consisting of data declarations and executables
    • Rules for considering statement type, how produced, origin, build, etc.
  • Providing automated code-counting tools adhering to the definition
  • Providing conversion guidelines for physical statements
  • Addressing other size units such as requirements, use cases, etc.
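
The distinction between logical statements and physical lines is the crux of these counting rules. As a rough illustration only (real counters such as USC's Unified CodeCount implement much more detailed rules for statement type, how produced, and origin), a minimal sketch in Python:

```python
import re

def logical_sloc(source: str) -> int:
    """Very rough logical-SLOC count for C-like source.

    Strips comments and string literals, then counts statement
    terminators plus block-opening keywords. This is a toy
    approximation of the full counting rules.
    """
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)  # block comments
    source = re.sub(r"//[^\n]*", "", source)                    # line comments
    source = re.sub(r'"(?:\\.|[^"\\])*"', '""', source)         # string literals
    count = source.count(";")                                   # one per statement
    count += len(re.findall(r"\b(?:if|for|while|switch)\b", source))
    return count

print(logical_sloc('int x = 1;\nif (x > 0) { x = 2; }'))  # 3
```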

  5. Equivalent SLOC – A User Perspective*
  • "Equivalent": a way of accounting for the work done to produce software, relative to the code-counted size of the delivered software
  • "Source" lines of code: the number of logical statements prepared by the developer and used to generate the executing code
    • Usual third-generation language (C, Java): count logical 3GL statements
    • For model-driven, very-high-level-language, or macro-based development: count the statements that generate the customary 3GL code
    • For maintenance above the 3GL level: count the generator statements
    • For maintenance at the 3GL level: count the generated 3GL statements
  • Two primary effects: volatility and reuse
    • Volatility: % of ESLOC reworked or deleted due to requirements volatility
    • Reuse: either with modification (modified) or without modification (adopted)
  *Stutzke, Richard D., Estimating Software-Intensive Systems, Upper Saddle River, NJ: Addison-Wesley, 2005
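
To make the reuse and volatility effects concrete, here is a minimal sketch in the spirit of COCOMO II's adaptation adjustment factor (AAF) and requirements-evolution (REVL) adjustments. The 0.4/0.3/0.3 AAF weighting is standard COCOMO II, but the simplified treatment (no assessment/assimilation or software-understanding terms) and the parameter names are illustrative:

```python
def equivalent_sloc(new, adapted, dm, cm, im, revl):
    """Equivalent SLOC in the spirit of COCOMO II's reuse model.

    new        -- logical SLOC developed from scratch
    adapted    -- logical SLOC of reused or modified code
    dm, cm, im -- % of design modified, % of code modified, and
                  relative integration effort required (0-100 each)
    revl       -- requirements volatility: % of ESLOC reworked or
                  discarded (the slide's volatility effect)
    """
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im   # adaptation adjustment factor, %
    esloc = new + adapted * aaf / 100.0    # credit adapted code at its AAF
    return esloc * (1 + revl / 100.0)      # inflate for volatility

# 200 KSLOC new + 200 KSLOC reused at a 20% adaptation factor,
# with 10% requirements volatility:
print(equivalent_sloc(200_000, 200_000, 20, 20, 20, 10))  # 264000.0
```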

  6. "Number of Requirements"
  • Early estimation availability at the kite level
  • Data collection and model calibration at the clam level
  (Cockburn, Writing Effective Use Cases, 2001)

  7. IBM-UK Expansion Factor Experience
  Level (Cockburn)   Artifact                           Count
  Cloud              Business Objectives                5
  Kite               Business Events/Subsystems         35
  Sea level          Use Cases/Components               250
  Fish               Main Steps/Main Operations         2,000
  Clam               Alt. Steps/Detailed Operations     15,000
  Lava               SLOC*                              1,000K – 1,500K
  *(70 – 100 SLOC/Detailed Operation)
  (Hopkins & Jenkins, Eating the IT Elephant, 2008)
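
Each step down the scale expands by roughly a factor of 7-8; a quick arithmetic check of the final step reproduces the slide's SLOC range:

```python
# Cockburn-level counts from the slide
counts = {"cloud": 5, "kite": 35, "sea level": 250,
          "fish": 2_000, "clam": 15_000}
# 70-100 SLOC per detailed (clam-level) operation
low, high = counts["clam"] * 70, counts["clam"] * 100
print(f"{low/1e6:.2f}M - {high/1e6:.1f}M SLOC")  # 1.05M - 1.5M
```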

  8. SLOC/Requirement Data (Selby, 2009)

  9. Estimation Challenges: A Dual Cone of Uncertainty
  – Need early systems engineering and evolutionary development
  • Uncertainties in scope, COTS, reuse, and services
  • Uncertainties in competition, technology, organizations, and mission priorities

  10. Incremental Development Productivity Decline (IDPD)
  • Example: Site Defense BMD software
    • 5 builds, 7 years, $100M; operational and support software
    • Build 1 productivity over 200 LOC/person-month; Build 5 productivity under 100 LOC/PM
      • Including Build 1-4 breakage, integration, and rework
      • 318% change in requirements across all builds
    • IDPD factor = 20% productivity decrease per build
  • Similar trends in later unprecedented systems
    • Not unique to DoD: a key source of Windows Vista delays
  • Maintenance of full non-COTS SLOC, not ESLOC
    • Build 1: 200 KSLOC new; 200K reused @ 20% = 240K ESLOC
    • Build 2: 400 KSLOC of Build 1 software to maintain and integrate
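
A constant per-build decline rate reproduces the Site Defense pattern; a minimal sketch, assuming productivity compounds downward at the IDPD rate:

```python
def build_productivity(base_prod, idpd, build):
    """Productivity on a given build with a constant per-build IDPD
    (fractional productivity decline); builds are numbered from 1."""
    return base_prod * (1 - idpd) ** (build - 1)

# Site Defense pattern: >200 SLOC/PM on Build 1, <100 by Build 5
for b in range(1, 6):
    print(b, round(build_productivity(210, 0.20, b)))
# 1: 210, 2: 168, 3: 134, 4: 108, 5: 86
```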

  11. IDPD Cost Drivers: Conservative 4-Increment Example
  • Some savings: more experienced personnel (5-20%)
    • Depending on personnel turnover rates
  • Some increases: code base growth, diseconomies of scale, requirements volatility, user requests
    • Breakage, maintenance of full code base (20-40%)
    • Diseconomies of scale in development, integration (10-25%)
    • Requirements volatility; user requests (10-25%)
  • Best case: 20% more effort (IDPD = 6%)
  • Worst case: 85% (IDPD = 23%)
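
The best- and worst-case figures are consistent with compounding the per-increment rate across the three follow-on increments; a sketch of that reading (the exact accounting behind the slide's numbers is an assumption here):

```python
def added_effort_on_final_increment(idpd, increments=4):
    """Extra effort on the last increment relative to increment 1,
    if per-increment effort for equal-size increments compounds at
    the IDPD rate."""
    return (1 + idpd) ** (increments - 1) - 1

print(f"{added_effort_on_final_increment(0.06):.0%}")  # ~19%: 'best case 20%'
print(f"{added_effort_on_final_increment(0.23):.0%}")  # ~86%: 'worst case 85%'
```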

  12. Effects of IDPD on Number of Increments
  (Chart: cumulative SLOC, 2M to 8M, vs. number of builds)
  • Model relating productivity decline to the number of builds needed to reach an 8M-SLOC Full Operational Capability
    • Assumes Build 1 production of 2M SLOC @ 100 SLOC/PM
      • 20,000 PM / 24 mo. = 833 developers
      • Constant staff size for all builds
    • Analysis varies the productivity decline per build
  • Extremely important to determine the incremental development productivity decline (IDPD) factor per build
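
A sketch of the kind of model the slide describes, assuming constant staff and build duration so that each build's output is proportional to its productivity:

```python
def builds_to_foc(target=8e6, build1=2e6, idpd=0.20, max_builds=50):
    """Builds needed to reach Full Operational Capability when each
    build's output shrinks by the IDPD factor: build k yields
    build1 * (1 - idpd)**(k - 1) SLOC."""
    total, builds = 0.0, 0
    while total < target and builds < max_builds:
        builds += 1
        total += build1 * (1 - idpd) ** (builds - 1)
    return builds if total >= target else None  # None: FOC never reached

for idpd in (0.0, 0.10, 0.15, 0.20):
    print(f"IDPD {idpd:.0%}: {builds_to_foc(idpd=idpd)} builds")
# 0% -> 4 builds, 10% -> 5, 15% -> 6, 20% -> 8
```

Note that at an IDPD of 25% or more in this setup, cumulative output approaches 2M/0.25 = 8M SLOC only asymptotically and FOC is never reached, which is why pinning down the IDPD factor matters so much.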

  13. Incremental Development Data Challenges
  • Breakage effects on previous increments
    • Modified, added, deleted SLOC: need a code counter with a diff capability
    • Accounting for breakage effort
      • Charged to the current increment or the I&T budget (IDPD)
      • IDPD effects may differ by type of software
      • "Breakage ESLOC" added to the next increment
  • Hard to track phase and activity distributions
    • Hard to spread initial requirements and architecture effort
  • Size and effort reporting
    • Often reported cumulatively
    • Subtracting the previous increment's size may miss deleted code
  • Time-certain development
    • Which features were completed? (Fully? Partly? Deferred?)

  14. "Equivalent SLOC" Paradoxes
  • Not a measure of software size
  • Not a measure of software effort
  • Not a measure of delivered software capability
  • A quantity derived from software component sizes and reuse factors that helps estimate effort
  • Once a product or increment is developed, its ESLOC loses its identity
    • Its size expands into full SLOC
    • Reuse factors can be applied to the full SLOC to determine an ESLOC quantity for the next increment
    • But that quantity has no relation to the product's size

  15. COCOMO II Database Productivity Increases
  (Chart: SLOC per PM by five-year periods, 1970-1974 through 2005-2009)
  • Two productivity-increasing trends exist: 1970-1994 and 1995-2009
  • 1970-1999 productivity trends are largely explained by cost drivers and scale factors
  • Post-2000 productivity trends are not explained by cost drivers and scale factors

  16. Constant A Decreases Over Post-2000 Period
  (Chart: calibrated constant A by five-year periods, 1970-1974 through 2005-2009)
  • Productivity is not fully characterized by scale factors (SFs) and effort multipliers (EMs); what factors can explain the phenomenon?
  • Calibrate the constant A while holding B = 0.91 fixed
    • Constant A is the inverse of adjusted productivity; it adjusts productivity for the SFs and EMs
  • Constant A decreases over the periods: a 50% decrease over the post-2000 period
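
Holding the exponent fixed and back-solving for A per period is straightforward; a minimal sketch with hypothetical project records (the COCOMO team actually calibrates by regression in log space across the dataset):

```python
from statistics import mean

def calibrate_A(projects, B=0.91):
    """Back-solve the COCOMO II constant A from project actuals,
    holding the baseline exponent B fixed.

    Each project record: (actual_PM, KSLOC, sum_SF, prod_EM).
    COCOMO II form: PM = A * KSLOC**E * prod_EM, E = B + 0.01*sum_SF.
    """
    samples = []
    for pm, ksloc, sum_sf, prod_em in projects:
        E = B + 0.01 * sum_sf
        samples.append(pm / (ksloc ** E * prod_em))
    return mean(samples)  # simple pooling, for the sketch only

# Hypothetical project records:
print(calibrate_A([(500, 100, 15, 1.2), (420, 100, 15, 1.2)]))  # ~2.9
```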

  17. Candidate Explanation Hypotheses
  • Productivity has doubled over the last 40 years
    • But scale factors and effort multipliers do not fully characterize this increase
  • Hypotheses/questions for explanation:
    • Is the standard for rating personnel factors being raised?
      • E.g., relative to the "national average"
    • Was generated code counted as new code?
      • E.g., model-driven development
    • Was reused code counted as new code?
    • Are the ranges of some cost drivers not large enough?
      • Improvement in tools (TOOL) contributes only about a 20% reduction in effort
    • Are more lightweight projects being reported?
      • Documentation relative to life-cycle needs

  18. Summary (repeats the slide 2 summary verbatim, as a section divider)

  19. Cost Driver Rating Scales and Effects
  • Application complexity
    • Difficulty and Constraints scales
  • Architecture, criticality, and volatility effects
    • Architecture effects as a function of product size
    • Added effects of criticality and volatility

  20. Candidate AFCAA Difficulty Scale
  Difficulty would be described in terms of required software reliability, database size, product complexity, integration complexity, information assurance, real-time requirements, different levels of developmental risk, etc.

  21. Candidate AFCAA Constraints Scale
  Dimensions of constraints include electrical power, computing capacity, storage capacity, repair capability, platform volatility, physical environment accessibility, etc.

  22. Added Cost of Weak Architecting
  Calibration of the COCOMO II Architecture and Risk Resolution factor to 161 project data points

  23. Effect of Size on Software Effort Sweet Spots

  24. Effect of Volatility and Criticality on Sweet Spots

  25. Summary (repeats the slide 2 summary verbatim, as a section divider)

  26. Estimation for Alternative Processes
  • Agile methods
    • Planning Poker/Wideband Delphi
    • "Yesterday's weather" adjustment: Agile COCOMO II
  • Evolutionary development
    • Schedule/Cost/Quality as Independent Variable
    • Incremental development productivity decline
  • Systems of systems
  • Hybrid methods

  27. Planning Poker/Wideband Delphi
  • Stakeholders formulate a story to be developed
  • Developers choose and show cards indicating their estimated ideal person-weeks to develop the story
    • Card values: 1, 2, 3, 5, 8, 13, 20, 30, 50, 100
  • If card values are about the same, use the median as the estimated effort
  • If card values vary significantly, discuss why some estimates are high and some low
    • Re-vote after discussion
    • Generally, values will converge, and the median can be used as the estimated effort
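
A minimal sketch of one round; the convergence rule (ratio of the highest to the lowest card) is an illustrative stand-in for the team's judgment of "about the same":

```python
from statistics import median

CARD_DECK = (1, 2, 3, 5, 8, 13, 20, 30, 50, 100)

def poker_round(votes, spread_threshold=3.0):
    """One Planning Poker round: accept the median if votes agree,
    otherwise signal that the team should discuss and re-vote.

    votes -- each developer's card (ideal person-weeks)
    """
    assert all(v in CARD_DECK for v in votes)
    converged = max(votes) / min(votes) <= spread_threshold
    return median(votes), converged

print(poker_round([5, 8, 8, 13]))  # (8.0, True): use the median
print(poker_round([3, 8, 8, 50]))  # (8.0, False): discuss the 3 and the 50
```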

  28. Agile COCOMO II
  Adjusting agile "yesterday's weather" estimates

  29. Incremental Development Forms
  Type: Evolutionary Sequential
    Examples: Small: agile. Large: evolutionary development
    Pros: Adaptability to change; rapid fielding
    Cons: Easiest-first; late, costly breakage
    Cost estimation: Small: planning-poker-type. Large: parametric with IDPD
  Type: Prespecified Sequential
    Examples: Platform base plus PPPIs
    Pros: Prespecifiable full-capability requirements
    Cons: Emergent requirements or rapid change
    Cost estimation: COINCOMO with no increment overlap
  Type: Overlapped Evolutionary
    Examples: Product lines with ultrafast change
    Pros: Modular product line
    Cons: Cross-increment breakage
    Cost estimation: Parametric with IDPD and requirements volatility
  Type: Rebaselining Evolutionary
    Examples: Mainstream product lines; systems of systems
    Pros: High assurance with rapid change
    Cons: Highly coupled systems with very rapid change
    Cost estimation: COINCOMO, IDPD for development; COSYSMO for rebaselining
  Time-phasing terms: Scoping; Architecting; Developing; Producing; Operating (SADPO)
    Prespecified Sequential: SA; DPO1; DPO2; DPO3; …
    Evolutionary Sequential: SADPO1; SADPO2; SADPO3; …
    Evolutionary Overlapped: SADPO1; SADPO2; SADPO3; …
    Evolutionary Concurrent: SA; D1; PO1 … SA2; D2; PO2 … SA3; D3; PO3 …

  30. Evolutionary Development Implications
  • Total-package procurement doesn't work
    • Can't determine requirements and cost up front
  • Need significant, sustained systems engineering effort
    • Need best-effort up-front architecting for evolution
    • Can't dismiss systems engineers after the Preliminary Design Review
  • Feature set size becomes the dependent variable
    • Add or drop borderline-priority features to meet schedule or cost
    • Implies the prioritizing and architecting steps in the SAIV process model
    • Safer than trying to maintain a risk reserve

  31. Future DoD Challenges: Systems of Systems
  (Diagram: SoS-level Exploration, Valuation, Architecting, Development, and Operation phases with FCR/DCR/OCR anchor points and rebaseline/adjustment feedback; source selection among candidate suppliers/strategic partners 1..n providing LCO-type proposal and feasibility information; constituent systems A, B, C, ..., x, each running its own Exploration-through-Operation cycle with its own FCR/DCR/OCR milestones)

  32. Conceptual SoS SE Effort Profile
  • SoS SE activities focus on three somewhat independent activities, performed by relatively independent teams
  • A given SoS SE team may be responsible for one, two, or all activity areas
  • Some SoS programs may have more than one organization performing SoS SE activities

  33. SoS SE Cost Model
  (Diagram: size drivers and cost factors feed SoS SE effort, with calibration; SoS SE activity areas: planning, requirements management, and architecting; source selection and supplier oversight; SoS integration and testing)
  • SoSs supported by the cost model:
    • Strategically-oriented stakeholders interested in tradeoffs and costs
    • Long-range architectural vision for the SoS
    • Developed and integrated by an SoS SE team
    • System component independence
  • Size drivers and cost factors
    • Based on product characteristics, processes that impact SoS SE team effort, and SoS SE personnel experience and capabilities

  34. Comparison of SE and SoSE Cost Model Parameters

  35. Summary (repeats the slide 2 summary verbatim, as a section divider)

  36. Reasoning about the Value of Dependability – iDAVE
  • iDAVE: Information Dependability Attribute Value Estimator
  • Use the iDAVE model to estimate and track software dependability ROI
    • Helps determine how much dependability is enough
    • Helps analyze and select the most cost-effective combination of software dependability techniques
    • Use estimates as a basis for tracking performance
  • Integrates cost estimation (COCOMO II), quality estimation (COQUALMO), and value estimation relationships

  37. iDAVE Model Framework

  38. Examples of Utility Functions
  (Plots of value vs. time for: mission planning and competitive time-to-market; real-time control and event support, with a response-time critical region; event prediction, e.g., weather, vs. software size; data archiving with priced quality of service)

  39. Tradeoffs Among Cost, Schedule, and Reliability: COCOMO II
  (Chart: cost vs. schedule tradeoff curves by RELY level, with MTBF in hours)
  • Cost/Schedule/RELY: "pick any two" points for a 100-KSLOC set of features
  • Can "pick all three" with a 77-KSLOC set of features

  40. The SAIV* Process Model
  1. Shared vision and expectations management
  2. Feature prioritization
  3. Schedule range estimation and core-capability determination
     – Top-priority features achievable within the fixed schedule with 90% confidence
  4. Architecting for ease of adding or dropping borderline-priority features
     – And for accommodating post-IOC directions of growth
  5. Incremental development
     – Core capability as increment 1
  6. Change and progress monitoring and control
     – Add or drop borderline-priority features to meet schedule
  *Schedule As Independent Variable; feature set as the dependent variable
  • Also works for cost, or for schedule/cost/quality, as the independent variable

  41. How Much Testing is Enough?
  (Chart: risk exposure vs. amount of testing, with a sweet spot; separate curves by business profile)
  • Early startup: risk due to low dependability
  • Commercial: risk due to low dependability
  • High finance: risk due to low dependability
  • Risk due to market-share erosion

  42. Related Additional Measurement Challenges
  • Tracking progress of rebaselining and V&V teams
    • No global plans; individual changes or software drops
    • Earlier test preparation: surrogates, scenarios, testbeds
  • Tracking content of time-certain increments
    • Deferred or partial capabilities; effects across the system
  • Trend analysis of emerging risks
    • INCOSE Leading Indicators; SERC Effectiveness Measures
  • Contributions to systems effectiveness
    • Measures of Effectiveness models and parameters
  • Systems-of-systems progress, risk, and change tracking
    • Consistent measurement flow-up, flow-down, and flow-across

  43. Some Data Definition Topics for Discussion
  In the SW Metrics Unification workshop, Nov 4-5
  • Ways to treat data elements
    • COTS, other OTS (open source; services; GOTS; reuse; legacy code)
    • Other size units (function points, object points, use case points, etc.)
    • Generated code: counting generator directives
    • Requirements volatility
    • Rolling up CSCIs into systems and systems of systems
    • Cost model inputs and outputs (e.g., submitting estimate files)
  • Scope issues
    • Cost drivers, scale factors
    • Reuse parameters: Software Understanding, Programmer Unfamiliarity
    • Phases included: hardware-software integration; systems-of-systems integration; transition; maintenance
    • WBS elements and labor categories included
      • Parallel software WBS
  • How to involve various stakeholders
    • Government, industry, and commercial cost estimation organizations

  44. Summary (repeats the slide 2 summary verbatim, as a section divider)

  45. References
  Boehm, B., "Some Future Trends and Implications for Systems and Software Engineering Processes," Systems Engineering 9(1), pp. 1-19, 2006.
  Boehm, B., and Lane, J., "Using the ICM to Integrate System Acquisition, Systems Engineering, and Software Engineering," CrossTalk, October 2007, pp. 4-9.
  Boehm, B., Brown, A.W., Clark, B., Madachy, R., Reifer, D., et al., Software Cost Estimation with COCOMO II, Prentice Hall, 2000.
  Dahmann, J., "Systems of Systems Challenges for Systems Engineering," Systems and Software Technology Conference, June 2007.
  Department of Defense (DoD), Instruction 5000.02, Operation of the Defense Acquisition System, December 2008.
  Galorath, D., and Evans, M., Software Sizing, Estimation, and Risk Management, Auerbach, 2006.
  Lane, J., and Boehm, B., "Modern Tools to Support DoD Software-Intensive System of Systems Cost Estimation," DACS State of the Art Report; also Tech Report USC-CSSE-2007-716.
  Lane, J., and Valerdi, R., "Synthesizing System-of-Systems Concepts for Use in Cost Modeling," Systems Engineering, Vol. 10, No. 4, December 2007.
  Madachy, R., "Cost Model Comparison," Proceedings, 21st COCOMO/SCM Forum, November 2006, http://csse.usc.edu/events/2006/CIIForum/pages/program.html
  Northrop, L., et al., Ultra-Large-Scale Systems: The Software Challenge of the Future, Software Engineering Institute, 2006.
  Reifer, D., "Let the Numbers Do the Talking," CrossTalk, March 2002, pp. 4-8.
  Stutzke, R., Estimating Software-Intensive Systems, Addison-Wesley, 2005.
  Valerdi, R., Systems Engineering Cost Estimation with COSYSMO, Wiley, 2010 (to appear).
  USC-CSSE Tech Reports, http://csse.usc.edu/csse/TECHRPTS/by_author.html

  46. Backup Charts

  47. COSYSMO Operational Concept
  (Diagram: size drivers — # requirements, # interfaces, # scenarios, # algorithms, plus a volatility factor — and effort multipliers feed COSYSMO to produce an effort estimate; calibration; WBS guided by ISO/IEC 15288)
  • Effort multipliers: 8 application factors, 6 team factors, and a schedule driver
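
COSYSMO shares COCOMO II's multiplicative form, applied to a weighted sum of the four size drivers; a sketch with illustrative weights and calibration constants (the real values come from the COSYSMO calibration data):

```python
from math import prod

def cosysmo_effort(size_counts, weights, effort_multipliers,
                   A=0.254, E=1.06):
    """Sketch of COSYSMO's form: PM = A * (weighted size)**E * prod(EM).

    size_counts        -- {driver: count} for requirements, interfaces,
                          scenarios, algorithms
    weights            -- {driver: relative complexity weight}
    effort_multipliers -- ratings for the application/team factors
    A, E               -- illustrative constants, not calibrated values
    """
    size = sum(weights[d] * n for d, n in size_counts.items())
    return A * size ** E * prod(effort_multipliers)

pm = cosysmo_effort(
    size_counts={"requirements": 100, "interfaces": 12,
                 "scenarios": 8, "algorithms": 20},
    weights={"requirements": 1.0, "interfaces": 4.0,
             "scenarios": 10.0, "algorithms": 4.0},
    effort_multipliers=[1.12, 0.87],  # e.g., one penalty, one credit
)
print(f"{pm:.0f} person-months")
```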

  48. 4. Rate Cost Drivers - Application

  49. COSYSMO Change Impact Analysis – I
  – Added SysE effort for going to 3 versions
  • Size: number, complexity, volatility, and reuse of system requirements, interfaces, algorithms, and scenarios (elements)
    • 1→3 versions: add 3-6% per increment for number of elements; add 2-4% per increment for volatility
    • Exercise preparation: add 3-6% per increment for number of elements; add 3-6% per increment for volatility
  • Most significant cost drivers (effort multipliers):
    • Migration complexity: 1.10 – 1.20 (versions)
    • Multisite coordination: 1.10 – 1.20 (versions, exercise prep.)
    • Tool support: 0.75 – 0.87 (due to exercise prep.)
    • Architecture complexity: 1.05 – 1.10 (multiple baselines)
    • Requirements understanding: 1.05 – 1.10 for increments 1 and 2; 1.0 for increment 3; 0.90 – 0.95 for increment 4
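
The adjustments above combine additively on size and multiplicatively on effort; a sketch using mid-range picks from the slide's ranges (the combination rule itself is an assumption):

```python
from math import prod

def change_impact(base_pm, size_adders, effort_multipliers):
    """Grow size-driven effort by the per-increment size adders, then
    apply the product of the affected effort multipliers."""
    grown = base_pm * prod(1 + a for a in size_adders)
    return grown * prod(effort_multipliers)

# One increment: +4.5% elements, +3% volatility; migration 1.15,
# multisite 1.15, tool support 0.81
print(round(change_impact(100, [0.045, 0.03], [1.15, 1.15, 0.81]), 1))
# ~115.3 PM per 100 PM of baseline SysE effort
```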

  50. COSYSMO Change Impact Analysis – II
  – Added SysE effort for going to 3 versions
