COSYSMO-IP COnstructive SYStems Engineering Cost Model – Information Processing PSM User’s Group Conference Keystone, Colorado July 24 & 25, 2002 Dr. Barry Boehm Ricardo Valerdi University of Southern California Center for Software Engineering Version 3
Outline – Day 1 • USC Center for Software Engineering • Background & Update on COSYSMO-IP • Ops Con & EIA632 • Delphi Round 1 Results • Updated Drivers • Lessons Learned/Improvements • LMCO & INCOSE Comments • Q & A
Outline – Day 2 • Review of yesterday’s modified slides to clarify terminology • A few new slides to emphasize points • Review of current driver definitions • Definitions for two new cost drivers • Technology Maturity • Physical system/information system tradeoff analysis complexity
Objectives of the Workshop • Agree on a Concept of Operation • Converge on scope of COSYSMO-IP model • Address definitions of model parameters • Discuss data collection process
USC Center for Software Engineering • 8 faculty/research staff, 18 PhD students • Corporate Affiliates program (TRW, Aero Galorath, Raytheon, Lockheed, Motorola, et al.) • 17th International Forum on COCOMO and Software Cost Modeling, October 22-25, 2002, Los Angeles, CA • Theme: Software Cost Estimation and Risk Management • Annual research review in March 2003
COSYSMO-IP: What is it? The purpose of the COSYSMO-IP project is to develop an initial increment of a parametric model to estimate the cost of system engineering activities during system development. The focus of the initial increment is on the cost of systems engineering for information processing systems or subsystems.
What Does COSYSMO-IP Cover?
Includes:
• System engineering in the inception, elaboration, and construction phases, including test planning
• Requirements development and specification activities
• Physical system/information system tradeoff analysis
• Operations analysis and design activities
• System architecture tasks, including allocations to hardware/software and consideration of COTS, NDI, and legacy impacts
• Algorithm development and validation tasks
Defers:
• Physical system/information system operation test & evaluation, deployment
• Special-purpose hardware design and development
• Structure, power and/or specialty engineering
• Manufacturing and/or production analysis
Candidate COSYSMO Evolution Path
(Life-cycle phases spanned: Inception, Elaboration, Construction, Transition, Oper Test & Eval)
1. COSYSMO-IP: IP (Sub)system
2. COSYSMO-C4ISR: C4ISR System
3. COSYSMO-Machine: Physical Machine System
4. COSYSMO-SoS: System of Systems (SoS)
Current COSYSMO-IP Operational Concept
Inputs, Size Drivers: # Requirements, # Interfaces, # Scenarios, # Algorithms, Volatility Factor, …
Inputs, Effort Multipliers: Application factors, Team factors, Schedule driver
Model: COSYSMO-IP (calibrated; WBS guided by EIA 632)
Outputs: Effort, Duration
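The operational concept above (size drivers in, effort and duration out, adjusted by effort multipliers and calibration) follows the general COCOMO-style parametric form. As a minimal sketch, assuming the usual Effort = A x Size^E x product(effort multipliers) shape, the constants A and E and the multiplier values below are illustrative placeholders, not calibrated COSYSMO-IP values:

```python
# Sketch of the general parametric form behind COSYSMO-IP:
#   Effort = A * Size^E * product(effort multipliers)
# A, E, and the multiplier values are illustrative placeholders,
# not calibrated COSYSMO-IP values.

def estimate_effort(size, effort_multipliers, a=1.0, e=1.06):
    """Return estimated effort (person-months, uncalibrated).

    size: aggregate size-driver count (e.g. a weighted sum of
          requirements, interfaces, scenarios, and algorithms)
    effort_multipliers: rating values for application/team factors
    a, e: calibration constant and economy/diseconomy-of-scale exponent
    """
    em_product = 1.0
    for em in effort_multipliers:
        em_product *= em
    return a * (size ** e) * em_product

# Example: 100 weighted size units, two unfavorable factor ratings
effort = estimate_effort(100, [1.2, 1.1])
```

A nominal project (all multipliers at 1.0) reduces to A x Size^E; each off-nominal rating scales the estimate multiplicatively, which is what makes the EMR ranges on the later slides meaningful.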
EIA632/COSYSMO-IP Mapping
COSYSMO-IP Category                EIA632 Requirement
Supplier Performance               3
Technical Management               4-12
Requirements Definition            14-16
Solution Definition                17-19
Systems Analysis                   22-24
Requirements Validation            25-29
Design Solution Verification       30
End Products Validation - COTS     33a
EIA632 Reqs. not included in COSYSMO-IP are: 1, 2, 13, 20, 21, 31, 32, 33b
Activity Elements Covered by EIA632, COCOMOII, and COSYSMO-IP
When doing both COSYSMO-IP and COCOMOII, subtract the grey (overlapping) areas to prevent double counting.
[Venn diagram legend: COCOMOII vs. COSYSMO-IP coverage]
Past, Present, and Future (2001-2003)
• Initial set of parameters compiled by Affiliates
• Performed First Delphi Round
• PSM Workshop
• Meeting at CCII Conference
• Working Group meeting at ARR
Future Parameter Refinement Opportunities (2003-2005)
• Driver definitions
• Data collection (Delphi)
• First iteration of model
• Model calibration
Delphi Survey
• Survey was conducted to:
  – Determine the distribution of effort across effort categories
  – Determine the range for size driver and effort multiplier ratings
  – Identify the cost drivers to which effort is most sensitive
  – Reach consensus from a sample of systems engineering experts
• Distributed Delphi surveys to Affiliates and received 28 responses
• 3 sections: Scope, Size, Cost
• Also helped us refine the scope of the model elements
Delphi Round 1 Results
System Engineering Effort Distribution
Category (EIA Requirement)             Suggested   Delphi   Std. Dev.
Supplier Performance (3)                   5%        5.2%      3.05
Technical Management (4-12)               15%       13.1%      4.25
Requirements Definition (14-16)           15%       16.6%      4.54
Solution Definition (17-19)               20%       18.1%      4.28
Systems Analysis (22-24)                  20%       19.2%      5.97
Requirements Validation (25-29)           15%       11.3%      4.58
Design Solution Verification (30)          5%       10.5%      6.07
End Products Validation (33a)              5%        6.6%      3.58
Delphi Round 1 Highlights (cont.)
Range of sensitivity for Size Drivers
[Bar chart: relative effort for # TPM’s, # Modes, # Scenarios, # Algorithms, # Platforms, # Interfaces, and # Requirements; values ranged from 2.10 to 6.48]
Delphi Round 1 Highlights (cont.)
Range of sensitivity for Cost Drivers (Application Factors)
[Bar chart: EMRs for COTS, Legacy transition, Architecture und., Platform difficulty, Requirements und., Bus. process reeng., and Level of service reqs.; values ranged from 1.13 to 2.81]
Delphi Round 1 Highlights (cont.)
Range of sensitivity for Cost Drivers (Team Factors)
[Bar chart: EMRs for Tool support, Multisite coord., Process maturity, Formality of deliv., Stakeholder comm., Personnel capability, Stakeholder cohesion, and Personal experience; values ranged from 1.25 to 2.46]
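The EMR values plotted on these sensitivity slides are effort multiplier ratios: the ratio of a driver's highest to lowest multiplier value, i.e. how large a swing that one driver can put on the estimate. A minimal sketch, using a hypothetical five-point rating scale (the values below are illustrative, not Delphi results):

```python
# EMR (effort multiplier ratio) for a cost driver: the ratio of its
# highest to its lowest multiplier value. A driver with EMR near 1.0
# barely moves the estimate; an EMR of 2.5 can swing it by 2.5x.

def emr(ratings):
    """Effort multiplier ratio for one driver's rating values."""
    values = list(ratings)
    return max(values) / min(values)

# Hypothetical rating scale (Very Low .. Very High) for one driver;
# ratings below 1.0 reduce effort, above 1.0 increase it.
tool_support = {"VL": 1.25, "L": 1.10, "N": 1.00, "H": 0.90, "VH": 0.85}
print(round(emr(tool_support.values()), 2))  # -> 1.47
```

Ranking drivers by EMR, as the Delphi charts do, shows where better ratings data pays off most: a mis-rated high-EMR driver distorts the estimate far more than a mis-rated low-EMR one.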
4 Size Drivers
• Number of System Requirements
• Number of Major Interfaces
• Number of Operational Scenarios
• Number of Unique Algorithms
Reclassified as cost drivers: Number of Technical Performance Measures, Number of Different Platforms
To be merged with Operational Scenarios: Number of Modes of Operation
Size Driver Definitions (1 of 4) Number of System Requirements The number of requirements taken from the system specification. A requirement is a statement of capability or attribute containing a normative verb such as shall or will. It may be functional or system service-oriented in nature depending on the methodology used for specification. System requirements can typically be quantified by counting the number of applicable shall’s or will’s in the system or marketing specification. Note: Use this driver as the basis of comparison for the rest of the drivers.
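The "count the applicable shall's or will's" rule above can be sketched mechanically. This is only an illustration of the counting rule, not an official COSYSMO-IP tool; the specification text and the exclusion of non-normative sections are assumptions, and real counting rules would need the refinement the later slides call for:

```python
# Sketch of the "count the shalls" sizing rule for the Number of
# System Requirements driver: count normative "shall"/"will"
# statements in a specification excerpt. Illustrative only; real
# counting rules need refinement (boilerplate, compound clauses, etc.).
import re

SPEC = """
3.1 The system shall process track updates within 2 seconds.
3.2 The operator display will refresh at 1 Hz.
3.3 The system shall log all operator actions.
Note: this section describes goals, not requirements.
"""

def count_requirements(text):
    """Count whole-word 'shall'/'will' occurrences, case-insensitively."""
    return len(re.findall(r"\b(?:shall|will)\b", text, flags=re.IGNORECASE))

print(count_requirements(SPEC))  # -> 3
```

Since this driver is the basis of comparison for the others, a repeatable counting procedure matters more than the particular keyword list chosen.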
Size Driver Definitions (2 of 4) Number of Major Interfaces The number of shared major physical and logical boundaries between system components or functions (internal interfaces) and those external to the system (external interfaces). These interfaces typically can be quantified by counting the number of interfaces identified in either the system’s context diagram and/or by counting the significant interfaces in applicable Interface Control Documents.
Size Driver Definitions (3 of 4) Number of Operational Scenarios* The number of operational scenarios** that a system is specified to satisfy. These operational threads typically result in end-to-end test scenarios that are developed to validate that the system satisfies its requirements. The number of scenarios can typically be quantified by counting the number of end-to-end tests used to validate the system functionality and performance. They can also be calculated by counting the number of high-level use cases developed as part of the operational architecture. Number of Modes of Operation (to be merged with Op Scen) The number of defined modes of operation for a system. For example, in a radar system, the operational modes could be air-to-air, air-to-ground, weather, targeting, etc. The number of modes is quantified by counting the number of operational modes specified in the Operational Requirements Document. *counting rules need to be refined **Op Scen can be derived from system modes
Size Driver Definitions (4 of 4) Number of Unique Algorithms The number of newly defined or significantly altered functions that require unique mathematical algorithms to be derived in order to achieve the system performance requirements. Note: Examples could include a complex aircraft-tracking algorithm such as a Kalman filter being derived, using existing experience as the basis, for the all-aspect search function. Another example could be a brand-new discrimination algorithm being derived for the identify-friend-or-foe function in space-based applications. The number can be quantified by counting the number of unique algorithms needed to support each of the mathematical functions specified in the system specification or mode description document (for sensor-based systems).
12 Cost Drivers
Application Factors
• Requirements understanding
• Architecture complexity
• Level of service requirements
• Migration complexity
• COTS assessment complexity
• Platform difficulty
• Required business process reengineering
• Technology Maturity (new, added Day 2)
• Physical system/information subsystem tradeoff analysis complexity (new, added Day 2)
Cost Driver Definitions (1,2 of 5) Requirements understanding The level of understanding of the system requirements by all stakeholders: systems, software, and hardware engineers, customers, team members, users, etc. Architecture complexity The relative difficulty of determining and managing the system architecture in terms of IP platforms, standards, components (COTS/GOTS/NDI/new), connectors (protocols), and constraints. This includes systems analysis, tradeoff analysis, modeling, simulation, case studies, etc.
Cost Driver Definitions (3,4,5 of 5) Level of service requirements The difficulty and criticality of satisfying the Key Performance Parameters (KPPs). For example: security, safety, response time, the “ilities”, etc. Migration complexity (formerly Legacy transition complexity) The complexity of migrating the system from previous system components, databases, workflows, etc., due to new technology introductions, planned upgrades, increased performance, business process reengineering, etc. Technology Maturity The relative readiness for operational use of the key technologies.
12 Cost Drivers (cont.) Team Factors • Number and diversity of stakeholder communities • Stakeholder team cohesion • Personnel capability • Personnel experience/continuity • Process maturity • Multisite coordination • Formality of deliverables • Tool support
Cost Driver Definitions (1,2,3 of 7) Stakeholder team cohesion Leadership, frequency of meetings, shared vision, approval cycles, group dynamics (self-directed teams, project engineers/managers), IPT framework, and effective team dynamics. Personnel experience/continuity The applicability and consistency of the staff over the life of the project with respect to the customer, user, technology, domain, etc. Personnel capability The ability of the systems engineers to perform their duties and the quality of the human capital.
Cost Driver Definitions (4,5,6,7 of 7) Process maturity Maturity per EIA/IS 731, SE CMM or CMMI. Formality of deliverables The breadth and depth of documentation required to be formally delivered. Multisite coordination Location of stakeholders, team members, resources (travel). Tool support Use of tools in the System Engineering environment.
Lessons Learned/Improvements Lesson 1 – Need to better define the scope and future of COSYSMO-IP via Con Ops Lesson 2 – Drivers can be interpreted in different ways depending on the type of program Lesson 3 – COSYSMO is too software-oriented Lesson 4 – Delphi needs to take less time to fill out Lesson 5 – Need to develop examples, rating scales
LMCO Comments The current COSYSMO focus is too software-oriented. This is a good point. We propose to change the scope from "software-intensive systems or subsystems" to "information processing (IP) systems or subsystems." These include not just the software but also the associated IP hardware: processors, memory, networking, and display or other human-computer interaction devices. Systems engineering of these IP systems or subsystems includes considerations of IP hardware device acquisition lead times, producibility, and logistics. Non-IP hardware acquisition, producibility, and logistics are treated as IP systems engineering cost and schedule drivers for the IOC version of COSYSMO. Perhaps we should call it COSYSMO-IP.
LMCO Comments (cont.) The COSYSMO project should begin by working out the general framework and WBS for the full life cycle of a general system. We agree that such a general framework and WBS will eventually be needed. However, we feel that progress toward it can be most expeditiously advanced by working on definitions of and data for a key element of the general problem first. If another group would like to concurrently work out the counterpart definitions and data considerations for the general system engineering framework, WBS, and estimation model, we will be happy to collaborate with them.
Points of Contact Dr. Barry Boehm [firstname.lastname@example.org] (213) 740-8163 Ricardo Valerdi [email@example.com] (213) 440-4378 Donald Reifer [firstname.lastname@example.org] (310) 530-4493 Websites http://valerdi.com/cosysmo http://sunset.usc.edu
COCOMOII Suite COPROMO COQUALMO COPSEMO COCOMOII CORADMO COCOTS COSYSMO-IP For more information visit http://sunset.usc.edu