Presentation Transcript


  1. 16 Practices for Improving Software Project Success LCDR Jane Lochner, DASN (C4I/EW/Space) Software Technology Conference, 3-6 May 1999 (703) 602-6887 lochner.jane@hq.navy.mil

  2. Productivity vs. Size* [Chart: function points per person-month vs. software size in function points] * Capers Jones, Becoming Best in Class, Software Productivity Research, 1995 briefing

  3. Percent Cancelled vs. Size(1) [Chart: probability of a software project being cancelled vs. software size in function points(2)] The drivers of the bottom line change dramatically with software size. (1) Capers Jones, Becoming Best in Class, Software Productivity Research, 1995 briefing (2) Roughly 80 SLOC of Ada, or 128 SLOC of C, are needed to code one function point
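To make footnote (2) concrete, here is a minimal conversion sketch using the SLOC-per-function-point ratios cited above (80 for Ada, 128 for C). The ratios come from the slide; the 1,000-function-point example size is purely illustrative.

```python
# Rough size conversion using the SLOC-per-function-point ratios cited on the slide.
# The ratios (80 for Ada, 128 for C) come from footnote (2); the 1,000-function-point
# project size below is purely illustrative.
SLOC_PER_FUNCTION_POINT = {"Ada": 80, "C": 128}

def function_points_to_sloc(function_points, language):
    """Estimate the source lines of code needed to implement a function-point count."""
    return function_points * SLOC_PER_FUNCTION_POINT[language]

if __name__ == "__main__":
    fp = 1000  # hypothetical project size
    for lang in ("Ada", "C"):
        print(f"{fp} function points ~ {function_points_to_sloc(fp, lang):,.0f} SLOC of {lang}")
```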

  4. Management is the Problem, But Complexity is the Villain • When a software development is cancelled, management failure is usually the reason: "…The task force is convinced that today's major problems with military software development are not technical problems, but management problems."* • When the tasks of a team effort are interrelated, the total effort increases in proportion to the square of the number of persons on the team (illustrated after this slide) • Complexity of effort, difficulty of coordination * Report of the Defense Science Board Task Force on Military Software, September 1987, Fred Brooks, Chairman
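A quick illustration of the coordination point above (a sketch, not part of the original briefing): with n people on a team there are n(n-1)/2 pairwise communication paths, so coordination overhead grows roughly with the square of team size.

```python
# Pairwise communication paths for a team of n people: n * (n - 1) / 2.
# Illustrates why coordination effort grows roughly with the square of team size.
def communication_paths(team_size):
    return team_size * (team_size - 1) // 2

if __name__ == "__main__":
    for n in (3, 10, 30, 100):
        print(f"team of {n:>3}: {communication_paths(n):>5} pairwise paths")
```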

  5. Software Program Managers Network (SPMN) • Consulting • Software Productivity Centers of Excellence • Bring academia & industry together • Delta One • Pilot program to attract, train, retain software workers for Navy and DoD programs • Airlie Council • Identify fundamental processes and proven solutions essential for large-scale software project success • Board of Directors • Chaired by Bgen Nagy, USAF

  6. Airlie Council • Membership • Successful managers of large-scale s/w projects • Recognized methodologists & Metric authorities • Prominent Consultants • Executives from major s/w companies • Product Approval • Software Advisory Group (SAG)

  7. Three Foundations for Project Success

  8. 3 Foundations, 16 Essential Practices™
  Project Integrity: • Adopt Continuous Risk Management • Estimate Cost and Schedule Empirically • Use Metrics to Manage • Track Earned Value • Track Defects Against Quality Targets • Treat People as the Most Important Resource
  Construction Integrity: • Adopt Life Cycle Configuration Management • Manage and Trace Requirements • Use System-Based Software Design • Ensure Data and Database Interoperability • Define and Control Interfaces • Design Twice, Code Once • Assess Reuse Risks and Costs
  Product Integrity and Stability: • Inspect Requirements and Design • Manage Testing as a Continuous Process • Compile & Smoke Test Frequently

  9. Project Integrity. Management practices that:
  • Give early indicators of potential problems
  • Coordinate the work and the communications of the development team
  • Achieve a stable development team with needed skills
  • Are essential to deliver the complete product on time, within budget, and with all documentation required to maintain the product after delivery

  10. Project Integrity • Adopt Continuous Risk Management • Estimate Cost and Schedule Empirically • Use Metrics to Manage • Track Earned Value • Track Defects Against Quality Targets • Treat People as the Most Important Resource

  11. Risk Management Practice Essentials
  • Identify risks over the entire life cycle, including at least: cost; schedule; technical; staffing; external dependencies; supportability; sustainability; political
  • For EACH risk, estimate: likelihood that it will become a problem; impact if it does; mitigation & contingency plans; measurement method
  • Update & report risk status at least monthly
  ALARMS:
  • Trivial risks
  • Risks from unproven technology not identified
  • No trade studies for high-risk technical requirements
  • Management & workers have different understandings of risks

  12. Cost and Schedule Estimation Practice Essentials
  • Use actual costs measured on past comparable projects
  • Identify all reused code (COTS/GOTS), evaluate applicability, and estimate the amount of code modification and new code required to integrate
  • Compare the empirical top-down cost estimate with a bottom-up engineering estimate
  • Never compress the schedule to less than 85% of nominal (see the sketch after this slide)
  ALARMS:
  • High productivity estimates based on unproven technology
  • Estimators not familiar with industry norms
  • No cost associated with code reuse
  • System requirements are incomplete
  • “Bad” earned value metrics
  • No risk materialization costs
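Two of the essentials above lend themselves to simple mechanical checks, sketched below: flag a schedule compressed below 85% of nominal, and flag top-down and bottom-up estimates that diverge too far. The 85% floor comes from the slide; the 20% divergence tolerance is an illustrative assumption.

```python
# Sanity checks sketched from the practice essentials above.
# The 85% compression floor is from the slide; the 20% estimate-divergence
# tolerance is an illustrative assumption.
def schedule_compression_ok(planned_months, nominal_months):
    """True if the planned schedule is at least 85% of the nominal (empirical) schedule."""
    return planned_months >= 0.85 * nominal_months

def estimates_consistent(top_down, bottom_up, tolerance=0.20):
    """True if the top-down and bottom-up engineering estimates agree within the tolerance."""
    return abs(top_down - bottom_up) / max(top_down, bottom_up) <= tolerance

if __name__ == "__main__":
    print(schedule_compression_ok(planned_months=20, nominal_months=26))  # False: compressed below 85%
    print(estimates_consistent(top_down=1_200_000, bottom_up=1_050_000))  # True: within 20%
```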

  13. Use Metrics to Manage Practice Essentials
  • Collect metrics on: risk materialization; product quality; process effectiveness; process conformance
  • Make decisions based on data not older than one week
  • Make metrics data available to all team members
  • Define thresholds that trigger predefined actions (sketched after this slide)
  ALARMS:
  • Large price tag attached to requests for metrics data
  • Not reported at least monthly
  • Rebaselining
  • Inadequate task activity network
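A minimal sketch of "thresholds that trigger predefined actions": each metric gets a limit and a predefined response, and anything out of bounds is flagged. The metric names, limits, and actions here are hypothetical.

```python
# Threshold-trigger sketch: the metric names, limits, and actions are hypothetical;
# the pattern (a predefined action fires when a threshold is crossed) is what the
# practice calls for.
THRESHOLDS = {
    "open_defects":       (lambda v: v > 50, "convene defect review board"),
    "risks_materialized": (lambda v: v >= 1, "execute contingency plan"),
    "metrics_age_days":   (lambda v: v > 7,  "refresh data before making decisions"),
}

def evaluate(metrics):
    """Return the predefined actions triggered by the current metric values."""
    actions = []
    for name, value in metrics.items():
        breached, action = THRESHOLDS.get(name, (lambda v: False, ""))
        if breached(value):
            actions.append(f"{name}={value}: {action}")
    return actions

if __name__ == "__main__":
    print(evaluate({"open_defects": 63, "risks_materialized": 0, "metrics_age_days": 9}))
```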

  14. Track Earned Value Practice Essentials
  • Establish unambiguous exit criteria for EACH task
  • Take BCWP (budgeted cost of work performed) credit for tasks only when the exit criteria have been verified as passed, and report ACWP (actual cost of work performed) for those tasks (indices sketched after this slide)
  • Establish cost and schedule budgets that are within an uncertainty acceptable to the project
  • Allocate labor and other resources to each task
  ALARMS:
  • Software tasks not separate from non-software tasks
  • More than 20% of the total development effort is LOE (level of effort)
  • Task durations greater than 2 weeks
  • Rework doesn't appear as a separate task
  • Data is old
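The standard earned-value indices follow directly from the quantities named above, plus BCWS (budgeted cost of work scheduled). A short sketch with illustrative numbers:

```python
# Standard earned-value indices built from the quantities on the slide.
# BCWP = budgeted cost of work performed (earned value), ACWP = actual cost of work
# performed, BCWS = budgeted cost of work scheduled. Example values are illustrative.
def cost_performance_index(bcwp, acwp):
    return bcwp / acwp   # < 1.0 means over cost

def schedule_performance_index(bcwp, bcws):
    return bcwp / bcws   # < 1.0 means behind schedule

if __name__ == "__main__":
    bcwp, acwp, bcws = 400_000, 470_000, 450_000
    print(f"CPI = {cost_performance_index(bcwp, acwp):.2f}")      # 0.85: over cost
    print(f"SPI = {schedule_performance_index(bcwp, bcws):.2f}")  # 0.89: behind schedule
```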

  15. Track Defects Against Quality Targets Practice Essentials
  • Establish unambiguous quality goals at project inception: understandability; reliability & maintainability; modularity; defect density (sketched after this slide)
  • Classify defects by: type; severity; urgency; discovery phase
  • Report defects by: when created; when found; number of inspections in which the defect was present but not found; number closed and currently open, by category
  ALARMS:
  • Defects not managed by CM
  • Culture penalizes discovery of defects
  • Not aware of effectiveness of defect removal methods
  • Earned-value credit is taken before defects are fixed or formally deferred
  • Quality target failures not recorded as one or more defects
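Defect density, one of the quality goals above, is simply defects found per unit of size, commonly per KSLOC or per function point. A minimal sketch with illustrative numbers:

```python
# Defect density: defects per thousand source lines of code (KSLOC).
# Function points could be used as the size measure instead; numbers are illustrative.
def defect_density_per_ksloc(defects, sloc):
    return defects / (sloc / 1000)

if __name__ == "__main__":
    print(f"{defect_density_per_ksloc(defects=42, sloc=56_000):.2f} defects/KSLOC")  # 0.75
```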

  16. Treat People as the Most Important Resource Practice Essentials
  • Provide staff the tools to be efficient and productive: software; equipment; facilities and work areas
  • Recognize team members for performance: individual goals; program requirements
  • Make professional growth opportunities available: technical; managerial
  ALARMS:
  • Excessive or unpaid overtime
  • Excessive pressure
  • Large, unidentified staff increases
  • Key software staff not receiving competitive compensation
  • Staff turnover greater than industry/locale norms

  17. Construction Integrity. Development practices that:
  • Provide a stable, controlled, predictable development or maintenance environment
  • Increase the probability that what was to be built is actually in the product when delivered

  18. Construction Integrity • Adopt Life-cycle Configuration Management • Manage and Trace Requirements • Use System-Based Software Design • Ensure Data and Database Interoperability • Define and Control Interfaces • Design Twice, Code Once • Assess Reuse Risks and Costs

  19. Configuration Management Practice Essentials
  • Institute CM for: COTS, GOTS, NDI, and other shared engineering artifacts; design documentation; code; test documentation; defects
  • Incorporate CM activities as tasks within project plans and the activity network
  • Conduct Functional & Physical Configuration Audits
  • Maintain version and semantic consistency between CIs
  ALARMS:
  • Developmental baseline not under CM control
  • CM activities don't have budgets, products, and unambiguous exit criteria
  • CM does not monitor and control the delivery and release-to-operation process
  • Change status not reported
  • No ICWGs for external interfaces
  • CCCBs don't assess system impact

  20. Manage & Trace Requirements Practice Essentials
  • Trace system requirements down through all derived requirements and layers of design to the lowest level and to individual test cases
  • Trace each CI back to one or more system requirements
  • For the incremental release model, develop a release build-plan that traces all system requirements into planned releases
  • For the evolutionary model, trace new requirements into the release build-plan as soon as they are defined
  ALARMS:
  • Layer design began before requirements for performance, reliability, safety, external interfaces, and security had been allocated
  • System requirements: not defined by real end users; did not include operational scenarios; did not specify inputs that will stress the system
  • Traceability is not to the code level

  21. System-Based Software Design Practice Essentials
  • Develop system and software architectures IAW structured methodologies
  • Develop system and software architectures from the same partitioning perspective: information; objects; functions; states
  • Design the system and software architecture to give views of static, dynamic, and physical structures
  ALARMS:
  • Modifications and additions to reused legacy/COTS/GOTS software not minimized
  • Security, reliability, performance, safety, and interoperability requirements not included
  • Design not specified for all internal and external interfaces
  • Requirements not verified through M&S before the start of software design
  • Software engineers did not participate in architecture development

  22. Data and Database Interoperability Practice Essentials
  • Design information systems with very loose coupling between hardware, persistent data, and application software
  • Define data element names, definitions, minimum accuracy, data type, units of measure, and range of values, identified using several processes; this minimizes the amount of translation required to share data with external systems
  • Define relationships between data items based on the queries to be made on the database
  ALARMS:
  • Data security requirements, business rules, and high-volume transactions on the database not specified before database physical design begins
  • Compatibility analysis not performed because the DBMS is “SQL compliant”
  • No time/resources budgeted to translate COTS databases to DoD standards

  23. Define and Control Interfaces Practice Essentials
  • Ensure interfaces comply with applicable public, open API standards and data interoperability standards
  • Define user interface requirements through user participation
  • Avoid use of proprietary features of COTS product interfaces
  • Place each interface under CM control before developing software components that use it
  • Track external interface dependencies in the activity network
  ALARMS:
  • Assumption that two interfacing applications that comply with the JTA and DII COE TAFIM interface standards are interoperable (e.g., the JTA and TAFIM include both Microsoft and UNIX interface standards, and Microsoft and UNIX design these standards to exclude the other)
  • Interface testing not done under heavy stress
  • Reasons for using proprietary features not documented

  24. Design Twice, Code Once Practice Essentials
  • Describe: execution process characteristics and features; end-user functionality; physical software components and their interfaces; the mapping of software components onto hardware components; states and state transitions
  • Use design methods that are consistent with those used for the system and are defined in the SDP
  ALARMS:
  • Graphics not used to describe different views of the design
  • Operational scenarios not defined that show how the different views of the design interact
  • Reuse, COTS, GOTS, and program library components not mapped to the software/database components
  • System and software requirements not traced to the software/database components

  25. Assess Reuse Risks & Costs Practice Essentials
  • Conduct a trade study to select a reuse or new architecture
  • Establish quantified selection criteria and acceptability thresholds; analyze the full life-cycle costs of each candidate component
  • Identify reuse code at program inception, before the start of architecture design
  • Use architectural frameworks that dominate commercial markets: CORBA/JavaBeans; ActiveX/DCOM
  ALARMS:
  • Development of "wrappers" needed to translate reuse software external interfaces
  • Positive and negative impacts of COTS proprietary features not identified
  • No analysis of the GOTS sustainment organization
  • No plan/cost for COTS upgrades
  • Cost of reuse code is less than 30% of new code
  • Reuse code has less than 25% functionality fit

  26. Product Integrity. Quality practices that:
  • Help assure that, when delivered, the product will meet the customer's quality requirements
  • Provide an environment where defects are caught when inserted and any that leak through are caught as early as possible

  27. Product Integrity • Inspect Requirements and Design • Manage Testing As a Continuous Process • Compile and Smoke Test Frequently

  28. Inspect Requirements and Design Practice Essentials
  • Inspect products that will be inputs to other tasks
  • Establish a well-defined, structured inspection technique
  • Train employees how to conduct inspections
  • Collect & report defect metrics for each formal inspection
  ALARMS:
  • Less than 80% of defects discovered by inspections (effectiveness computation sketched after this slide)
  • Predominant inspection is informal code walkthrough
  • Less than 100% inspection of architecture & design products
  • Less than 50% inspection of test plans
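The 80% alarm above implies tracking how many of all known defects were found by inspection rather than downstream. A minimal effectiveness calculation (counts are illustrative):

```python
# Defect-removal effectiveness of inspections: share of all known defects found by
# inspection rather than by test or in the field. Counts are illustrative.
def inspection_effectiveness(found_by_inspection, found_elsewhere):
    total = found_by_inspection + found_elsewhere
    return found_by_inspection / total if total else 0.0

if __name__ == "__main__":
    eff = inspection_effectiveness(found_by_inspection=160, found_elsewhere=90)
    print(f"inspection effectiveness = {eff:.0%}")  # 64%: below the 80% target, an ALARM
```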

  29. Manage Testing as a Continuous Process Practice Essentials
  • Deliver inspected test products IAW the integration-test plan
  • Ensure every CSCI requirement has at least one test case
  • Include both white-box and black-box tests: functional, interface, error recovery, out-of-bounds input, and stress tests; scenarios designed to model field operation
  ALARMS:
  • Builds for all tests not done by CM
  • Pass/fail criteria not established for each test
  • No test stoppage criteria
  • No automated test tools
  • High-risk and safety- or security-critical code not tested early on
  • Compressed test schedules

  30. Compile and Smoke Test Frequently Practice Essentials
  • Use an orderly integration build process with a VDD that identifies the version of software units in the build and the open and fixed defects against the build (a minimal build-and-smoke-test driver is sketched after this slide)
  • Use an independent test organization to conduct integration tests
  • Include evolving regression testing
  • Document defects; CM tracks defects
  ALARMS:
  • Integration build and test done less than weekly
  • Builds not done by CM; small CM staff
  • Excessive use of patches
  • Lack of automated build tools
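A minimal sketch of a frequent build-and-smoke-test driver. The build and test commands ('make all', 'pytest tests/smoke') are placeholders rather than the briefing's tooling; the point is that the build fails fast when the smoke tests do not pass.

```python
# Minimal build-and-smoke-test driver: build the integration baseline, then run a
# small smoke-test suite and stop with a non-zero exit code if any step fails.
# The 'make all' and 'pytest tests/smoke' commands are placeholders for a real
# project's build and test tooling.
import subprocess
import sys

def run(cmd):
    print("+", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"step failed: {' '.join(cmd)}")

if __name__ == "__main__":
    run(["make", "all"])                  # build the integration baseline
    run(["pytest", "tests/smoke", "-q"])  # quick end-to-end smoke tests
    print("smoke test passed; build is usable")
```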

  31. There is Hope!!! A small number of high-leverage proven practices can be put in place quickly to achieve relatively rapid bottom-line improvements. The 16-Point Plan™
