
Critical Success Factors for Schedule Estimation and Improvement


Presentation Transcript


  1. Critical Success Factors for Schedule Estimation and Improvement Barry Boehm, USC-CSSE http://csse.usc.edu 26th COCOMO/Systems and Software Cost Forum November 2, 2011

  2. Schedule Estimation and Improvement CSFs ©USC-CSSE • Motivation for good schedule estimation & improvement • Validated data on current project state and end state • Relevant estimation methods able to use the data • Framework for improving on the estimated schedule • Guidelines for avoiding future schedule overruns • Conclusions

  3. Motivation for Good Schedule Estimation and Improvement ©USC-CSSE • Market capture / cost of delay • Overrun avoidance • Realistic time-to-complete for lagging projects • August 2011 DoD Workshop • “No small slips”

  4. Magnitude of Overrun Problem: DoD ©USC-CSSE

  5. Magnitude of Overrun Problem: Standish Surveys of Commercial Projects ©USC-CSSE

  6. How Much Testing is Enough? ©USC-CSSE (chart: sweet spot where risk due to low dependability – for early startup, commercial, and high finance market sectors – is balanced against risk due to market share erosion)
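
The sweet-spot idea behind this chart can be sketched in a few lines of code. The risk curves below are invented placeholders, not Boehm's calibrated ones: dependability risk falls as testing continues, market-erosion risk rises with time to market, and the sweet spot minimizes their sum; higher-stakes sectors push it toward more testing.

```python
# Toy sweet-spot calculation (invented illustrative curves, not calibrated):
# dependability risk falls with test time; market-erosion risk rises with it.
# Higher stakes (e.g. high finance) push the sweet spot toward more testing.

def total_risk(test_months, defect_loss, erosion_per_month=0.8):
    dependability = defect_loss * 0.7 ** (0.5 * test_months)  # falls w/ testing
    erosion = erosion_per_month * test_months                 # rises w/ delay
    return dependability + erosion

candidates = [m / 2 for m in range(0, 49)]  # 0..24 months, half-month steps
for segment, loss in [("early startup", 1), ("commercial", 10), ("high finance", 100)]:
    best = min(candidates, key=lambda m: total_risk(m, loss))
    print(f"{segment:>13}: sweet spot ~ {best} months of testing")
```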

  7. Schedule Estimation and Improvement CSFs ©USC-CSSE • Motivation for good schedule estimation & improvement • Validated data on current project state and end state • Relevant estimation methods able to use the data • Framework for improving on the estimated schedule • Guidelines for avoiding future schedule overruns • Conclusions

  8. Sources of Invalid Schedule Data ©USC-CSSE • The Cone of Uncertainty • If you don’t know what you’re building, it’s hard to estimate its schedule • Invalid Assumptions • Plans often make optimistic assumptions • Lack of Evidence • Assertions don’t make it true • Unclear Data Reporting • What does “90% complete” really mean? • The Second Cone of Uncertainty • And you thought you were out of the woods • Establishing a Solid Baseline: SCRAM

  9. The Cone of Uncertainty: Schedule highly correlated with size and cost ©USC-CSSE

  10. Invalid Planning Assumptions ©USC-CSSE • No requirements changes • Changes processed quickly • Parts delivered on time • No cost-schedule driver changes • Stable external interfaces • Infrastructure providers have all they need • Constant incremental development productivity

  11. Average Change Processing Time: Two Complex Systems of Systems ©USC-CSSE (chart: average workdays to process changes) • Incompatible with turning within the adversary’s OODA loop

  12. Incremental Development Productivity Decline (IDPD) ©USC-CSSE • Some savings: more experienced personnel (5-20%), depending on personnel turnover rates • Some increases: code base growth, diseconomies of scale, requirements volatility, user requests • Breakage, maintenance of full code base (20-40%) • Diseconomies of scale in development, integration (10-25%) • Requirements volatility; user requests (10-25%) • Best case: 20% more effort (IDPD = 6%) • Worst case: 85% more effort (IDPD = 23%)

  13. Effects of IDPD on Number of Increments ©USC-CSSE • Model relating productivity decline to number of builds needed to reach 8M SLOC Full Operational Capability • Assumes Build 1 production of 2M SLOC @ 100 SLOC/PM • 20,000 PM / 24 mo. = 833 developers • Constant staff size for all builds • Analysis varies the productivity decline per build • Extremely important to determine the incremental development productivity decline (IDPD) factor per build
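
A minimal sketch of the slide-13 build-count model, under one loudly flagged assumption of ours: with constant staff, the SLOC delivered per build shrinks geometrically by the IDPD factor. Boehm's calibrated model may use a different functional form, but the qualitative effect – small IDPD differences changing the number of builds needed to reach 8M SLOC – comes through:

```python
# Sketch of the slide-13 model: builds needed to reach 8M SLOC FOC when
# per-build productivity declines by a fixed IDPD factor.
# Assumption (ours): delivered SLOC per build shrinks geometrically by IDPD.

def builds_to_foc(target_sloc=8e6, build1_sloc=2e6, idpd=0.15, max_builds=50):
    """Return number of builds to reach target_sloc, or None if unreachable."""
    total, build_sloc = 0.0, build1_sloc
    for build in range(1, max_builds + 1):
        total += build_sloc
        if total >= target_sloc:
            return build
        build_sloc *= (1.0 - idpd)   # productivity decline per build
    return None  # FOC never reached within max_builds

for idpd in (0.0, 0.06, 0.15, 0.23):
    print(f"IDPD={idpd:.0%}: {builds_to_foc(idpd=idpd)} builds")
```

With these assumptions, zero IDPD reaches 8M SLOC in 4 builds, while an IDPD of 23% stretches the same staff to 10 builds – which is why the slide calls determining the IDPD factor per build extremely important.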

  14. Common Examples of Inadequate Evidence ©USC-CSSE • We have three algorithms that met the KPPs on small-scale nominal cases; at least one will scale up and handle the off-nominal cases • We’ll build it and then tune it to satisfy the KPPs • The COTS vendor assures us that they will have a security-certified version by the time we need to deliver • We have demonstrated solutions for each piece from our NASA, Navy, and Air Force programs; it’s a simple matter of integration to put them together • Our subcontractors are Level-5 organizations • The task is 90% complete • Our last project met a 1-second response time requirement

  15. Problems Encountered without Evidence: 15-Month Architecture Rework Delay ©USC-CSSE (chart: cost vs. response time in seconds (1-5), from original spec to spec after prototyping; $50M original architecture: modified client-server; $100M required architecture: custom, many cache processors)

  16. Sources of Invalid Schedule Data ©USC-CSSE • The Cone of Uncertainty • If you don’t know what you’re building, it’s hard to estimate its schedule • Invalid Assumptions • Plans often make optimistic assumptions • Lack of Evidence • Assertions don’t make it true • Unclear Data Reporting • What does “90% complete” really mean? • The Second Cone of Uncertainty • And you thought you were out of the woods • Establishing a Solid Baseline: SCRAM

  17. Unclear Data Reporting ©USC-CSSE • All of the requirements are defined • Sunny day scenarios? Rainy day scenarios? • All of the units are tested • Nominal data? Off-nominal data? Singularities and endpoints? • Sea of Green risk mitigation progress • Risk mitigation planning, staffing, organizing, preparing done • But the risk is still red, and the preparations may be inadequate • 90% of the problem reports are closed • We needed the numbers down, so we did the easy ones first • All of the interfaces are defined, with full earned value • Maybe not for net-centric systems

  18. “Touch Football” Interface Definition Earned Value ©USC-CSSE • Full earned value taken for defining interface dataflow • No earned value left for defining interface dynamics • Joining/leaving network • Publish-subscribe • Interrupt handling • Security protocols • Exception handling • Mode transitions • Result: all green EVMS turns red in integration
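
A toy illustration of the point above; the task names and equal weights are ours, not from the slide. Crediting earned value only for dataflow definition reports 100% while the interface-dynamics work that actually bites in integration sits at zero:

```python
# Toy illustration (our own task names/weights): earned value for interface
# definition looks "all green" when only dataflow is credited, even though
# none of the interface-dynamics work has been done.

dataflow_tasks = {"message fields defined": True}
dynamics_tasks = {                      # the work "touch football" EV ignores
    "joining/leaving network": False,
    "publish-subscribe": False,
    "interrupt handling": False,
    "security protocols": False,
    "exception handling": False,
    "mode transitions": False,
}

def percent_done(tasks):
    return 100.0 * sum(tasks.values()) / len(tasks)

print(f"Reported EV (dataflow only): {percent_done(dataflow_tasks):.0f}%")
all_tasks = {**dataflow_tasks, **dynamics_tasks}
print(f"Actual completion (all work): {percent_done(all_tasks):.0f}%")
```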

  19. The Second Cone of Uncertainty ©USC-CSSE • Uncertainties in competition, technology, organizations, mission priorities • Need evolutionary/incremental rather than one-shot development

  20. Schedule Compliance Risk Analysis Method (SCRAM) ©USC-CSSE • Root Cause Analysis of Schedule Slippage (RCASS) categories: stakeholders; requirements; subcontractors; functional assets; workload; rework; staffing & effort; management & infrastructure; schedule & duration; schedule execution • Australian MoD (Adrian Pitman, Angela Tuffley); Software Metrics (Brad & Betsy Clark)

  21. Stakeholders: Identification, Management, Communication ©USC-CSSE • “Our stakeholders are like a 100-headed hydra – everyone can say ‘no’ and no one can say ‘yes’.” • Experiences • Critical stakeholder (customer) added one condition for acceptance that removed months from the development schedule • Failed organizational relationship: key stakeholders were not talking to each other (even though they were in the same facility)

  22. Requirements: Sources, Definitions, Analysis and Validation, Management ©USC-CSSE • “What was that thing you wanted?” • Experiences • Misinterpretation of a communication standard led to an additional 3,000 requirements to implement the standard • A large ERP project had two system specifications – one with the sponsor/customer and a different specification under contract with the developer – would this be a problem?

  23. Workload • Number of work units • Requirements, configuration items, SLOC, test cases, PTRs… • Contract data deliverables (CDRLs) workload often underestimated by both contractor and customer • Experiences • Identical estimates in four different areas of software development (Cut & Paste estimation) • Re-plan based on twice the historic productivity with no basis for improvement • Five delivery iterations before CDRL approval ©USC-CSSE

  24. Schedule Estimation and Improvement CSFs ©USC-CSSE • Motivation for good schedule estimation & improvement • Validated data on current project state and end state • Relevant estimation methods able to use the data • Framework for improving on the estimated schedule • Guidelines for avoiding future schedule overruns • Conclusions

  25. Estimation Methods Able to Use the Data? ©USC-CSSE • Parametric Schedule Estimation Models • All of the parameter values available? • Critical Path Duration Analysis • Activity network kept up to date? Covers subcontractors? • Mathematical Optimization Techniques • Constraints, weighting factors mathematically expressible? • Monte Carlo based on parameter, task duration ranges • Guessing at the inputs vs. guessing at the outputs? • Expert judgment • Anybody done a system of systems with clouds before? • Use of Earned Value Management System Data • Were the net-centric interface protocols defined or not? • Root Cause Analysis of Overruns to Date • Can we get data like this from the subcontractors?
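
The Monte Carlo bullet above is easy to make concrete. A minimal sketch, with invented task names and (min, most likely, max) duration ranges – real use would draw the ranges from project data and model the full activity network rather than a simple serial sum:

```python
# Minimal Monte Carlo schedule sketch (illustrative numbers, not project data):
# sample each task's duration from a triangular (min, likely, max) range and
# estimate the probability of finishing within a target schedule.
import random

tasks = {                      # months: (min, most likely, max) -- assumed
    "requirements": (2, 3, 6),
    "architecture": (3, 4, 8),
    "build & integrate": (8, 12, 20),
    "test & transition": (3, 5, 10),
}
TARGET_MONTHS = 26
N = 100_000

hits = 0
for _ in range(N):
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())
    hits += total <= TARGET_MONTHS
print(f"P(on schedule <= {TARGET_MONTHS} mo) ~= {hits / N:.1%}")
```

This also makes the slide's "guessing at the inputs vs. guessing at the outputs" question concrete: the output probability is only as good as the duration ranges fed in.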

  26. Schedule Estimation and Improvement CSFs ©USC-CSSE • Motivation for good schedule estimation&improvement • Validated data on current project state and end state • Relevant estimation methods able to use the data • Framework for improving on the estimated schedule • Guidelines for avoiding future schedule overruns • Conclusions

  27. RAD Opportunity Tree ©USC-CSSE • Eliminating Critical Path Tasks: business process reengineering - BPRS; reusing assets - RVHL; applications generation - RVHL; schedule as independent variable - O • Reducing Time Per Task: tools and automation - O; work streamlining (80-20) - O • Reducing Risks of Single-Point Failures: increasing parallelism - RESL; reducing failures - RESL; reducing their effects - RESL • Reducing Backtracking: early error elimination - RESL; process anchor points - RESL; improving process maturity - O; collaboration technology - CLAB • Activity Network Streamlining: minimizing task dependencies - BPRS; avoiding high fan-in, fan-out - BPRS; reducing task variance - BPRS; removing tasks from critical path - BPRS • Increasing Effective Workweek: 24x7 development - PPOS; nightly builds, testing - PPOS; weekend warriors - PPOS • Better People and Incentives: personnel capability and experience - PERS; transition to Learning Organization - O • (O: covered by classic cube root model)
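
The "classic cube root model" noted above is the rule of thumb that calendar schedule grows roughly as the cube root of effort in person-months, commonly quoted with a coefficient of about 3; a minimal sketch (the coefficient is the rule-of-thumb default, and calibrations vary by domain and model):

```python
# Rule-of-thumb "cube root" schedule model: calendar months ~ 3 * PM^(1/3).
# The coefficient 3 is the commonly cited default; calibrations vary.

def cube_root_schedule_months(person_months: float, coeff: float = 3.0) -> float:
    return coeff * person_months ** (1.0 / 3.0)

for pm in (100, 1000, 10000):
    print(f"{pm:>6} PM -> ~{cube_root_schedule_months(pm):.1f} months")
```

By this rule of thumb, the "1500 man-months in a year" claim on slide 33 would actually need roughly 3 × 1500^(1/3) ≈ 34 calendar months.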

  28. Reuse at HP’s Queensferry Telecommunication Division ©USC-CSSE (chart: time to market in months, non-reuse project vs. reuse project)

  29. The SAIV* Process Model 1. Shared vision and expectations management 2. Feature prioritization 3. Schedule range estimation and core-capability determination - Top-priority features achievable within fixed schedule with 90% confidence 4. Architecting for ease of adding or dropping borderline-priority features - And for accommodating past-IOC directions of growth 5. Incremental development - Core capability as increment 1 6. Change and progress monitoring and control - Add or drop borderline-priority features to meet schedule • *Schedule As Independent Variable; Feature set as dependent variable • Also works for cost, schedule/cost/quality as independent variable ©USC-CSSE
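
A hedged sketch of SAIV steps 3 and 6 – feature set as the dependent variable. The feature names, priorities, and duration estimates below are invented for illustration: borderline-priority features are dropped, lowest priority first, until the estimate fits the fixed schedule:

```python
# Toy SAIV "feature set as dependent variable" sketch (illustrative data):
# drop borderline-priority features, lowest priority first, until the
# estimated schedule fits the fixed delivery date.

FIXED_SCHEDULE_MONTHS = 18

# (feature, priority 1 = highest, estimated months on the critical path)
features = [
    ("core transaction flow", 1, 9),
    ("reporting dashboard", 2, 4),
    ("bulk import", 3, 3),
    ("theming/branding", 4, 2),
    ("offline mode", 5, 3),
]

kept = sorted(features, key=lambda f: f[1])      # highest priority first
while sum(est for _, _, est in kept) > FIXED_SCHEDULE_MONTHS and len(kept) > 1:
    dropped = kept.pop()                         # drop lowest-priority feature
    print(f"dropping: {dropped[0]}")

print("shipping:", [name for name, _, _ in kept],
      f"({sum(e for _, _, e in kept)} months)")
```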

  30. Effect of Size on Software Schedule Sweet Spots ©USC-CSSE

  31. Schedule Estimation and Improvement CSFs ©USC-CSSE • Motivation for good schedule estimation&improvement • Validated data on current project state and end state • Relevant estimation methods able to use the data • Framework for improving on the estimated schedule • Guidelines for avoiding future schedule overruns • Conclusions

  32. Some Frequent Overrun Causes ©USC-CSSE • Conspiracy of Optimism • Effects of First Budget Shortfall • System Engineering • Decoupling of Technical and Cost/Schedule Analysis • Overfocus on Performance, Security, Functionality • Overfocus on Acquisition Cost • Frequent brittle, point-solution architectures • Assumption of Stability • Total vs. Incremental Commitment • Watch out for the Second Cone of Uncertainty • Stabilize increments; work change traffic in parallel

  33. Conspiracy of Optimism: “We can do 1500 man-months of software in a year”

  34. Achieving Agility and High Assurance - I: Using timeboxed or time-certain development; precise costing unnecessary, feature set as dependent variable (diagram: rapid change and foreseeable change (plan) feed short development increments; short, stabilized development of Increment N yields the Increment N baseline for transition/O&M; stable development increments provide high assurance)

  35. Evolutionary Concurrent Engineering: Incremental Commitment Spiral Model (diagram: unforeseeable change (adapt) drives agile rebaselining for future increment baselines, with deferrals; foreseeable change (plan) feeds short, stabilized development of Increment N toward the Increment N baseline and transition/operations and maintenance; verification and validation (V&V) of Increment N is continuous, with current and future V&V resources applied to the baseline's artifacts and concerns, providing high assurance)

  36. Top-Priority Recommendation: Evidence-Based Milestone Decision Reviews ©USC-CSSE • Not schedule-based • The Contract says PDR on April 1, whether there’s an IMP/IMS or not • Not event-based • The Contract needs an IMS, so we backed into one that fits the milestones • But evidence-based • The Monte Carlo runs of both the parametric model and the IMS probabilistic critical path analysis show an 80% on-schedule probability • We have prioritized the features and architected the system to enable dropping some low-priority features to meet schedule • The evidence has been evaluated and approved by independent experts • The evidence is a first-class deliverable, needing plans and an EVMS • Added recommendation: better evidence-generation models • Covering systems of systems, deep subcontractor hierarchies, change impact analysis, evolutionary acquisition; with calibration data

  37. Backup Charts ©USC-CSSE

  38. SEPRT Seeks Performance Evidence That Can Be Independently Validated ©USC-CSSE

  39. Conclusions: Needs For ©USC-CSSE • Validated data on current project state and end state • Relevant estimation methods able to use the data • Framework for improving on the estimated schedule • Guidelines for avoiding future schedule overruns • There are serious and increasing challenges, but the next presentations provide some ways to address them

  40. Effect of Volatility and Criticality on Sweet Spots ©USC-CSSE
