
COSYSMO Working Group Meeting: Industry Calibration Results


Presentation Transcript


  1. COSYSMO Working Group Meeting: Industry Calibration Results Ricardo “two months from the finish line” Valerdi USC Center for Software Engineering & The Aerospace Corporation

  2. Morning Agenda
  7:30 Continental Breakfast (in front of Salvatori Hall)
  8:30 Introductions [All]
  9:00 Brief overview of COSYSMO [Ricardo]
  9:15 Calibration results [Ricardo]
  9:45 Break
  10:15 Size driver counting rules exercise [All]
  11:15 Mini Delphi for EIA 632 activity distributions [All]
  12:00 Lunch (in front of Salvatori Hall)

  3. Afternoon Agenda
  1:00 Joint meeting with COSOSIMO workshop [JoAnn Lane]
  2:00 COSYSMO Risk/Confidence Estimation Prototype [John Gaffney]
  2:45 Break
  3:15 Open issues: local calibrations; lies, damned lies, and statistical outliers; future plans for COSYSMO 2.0 (including ties to SoS work)
  4:30 Action items for next meeting: July 2005 in Keystone, CO
  5:00 Adjourn

  4. 7-step Modeling Methodology
  1. Analyze existing literature
  2. Perform behavioral analysis
  3. Identify relative significance
  4. Perform expert-judgment Delphi assessment
  5. Gather project data ← WE ARE HERE
  6. Determine Bayesian a-posteriori update
  7. Gather more data; refine model

  5. COSYSMO Operational Concept
  Size Drivers: # Requirements, # Interfaces, # Scenarios, # Algorithms, + Volatility Factors
  Effort Multipliers: Application factors (8 factors); Team factors (6 factors)
  Size Drivers and Effort Multipliers feed COSYSMO, which, with Calibration, produces Effort; WBS guided by EIA/ANSI 632

  6. COSYSMO Cost Drivers
  Application Factors
  • Requirements understanding
  • Architecture understanding
  • Level of service requirements
  • Migration complexity
  • Technology maturity
  • Documentation match to life cycle needs
  • # and diversity of installations/platforms
  • # of recursive levels in the design
  Team Factors
  • Stakeholder team cohesion
  • Personnel/team capability
  • Personnel experience/continuity
  • Process maturity
  • Multisite coordination
  • Tool support

  7. COSYSMO 1.0 Calibration Data Set • Collected 35 data points • From 6 companies; 13 business units • No single company had > 30% influence

  8. COSYSMO Data Sources

  9. Data Champions • Gary Thomas, Raytheon • Steven Wong, Northrop Grumman • Garry Roedler, LMCO • Paul Frenz, General Dynamics • Sheri Molineaux, General Dynamics • Fran Marzotto, General Dynamics • John Rieff, Raytheon • Jim Cain, BAE Systems • Merrill Palmer, BAE Systems • Bill Dobbs, BAE Systems • Donovan Dockery, BAE Systems • Mark Brennan, BAE Systems • Ali Nikolai, SAIC

  10. Meta Properties of Data Set Almost half of the data received was from Military/Defense programs. 55% was from Information Processing systems and 32% was from C4ISR.

  11. Meta Properties of Data Set Two-thirds of the projects were software-intensive First 4 phases of the SE life cycle were adequately covered

  12. Industry Calibration Factor
  • Calculation is based on aforementioned data (n = 35)
  • This calibration factor must be adjusted for each organization
  • Evidence of diseconomies of scale (partially captured in Size driver weights)
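The calibrated effort form can be sketched as a power law with a product of effort-multiplier ratings, where an exponent above 1 captures the diseconomy of scale noted on the slide. The constants below are placeholders for illustration, not the published calibration values.

```python
# Illustrative sketch of the COSYSMO effort form: a calibration
# constant A, a scale exponent E (>1 implies diseconomies of scale),
# and a product of effort-multiplier ratings.
A = 0.25   # hypothetical calibration constant (placeholder)
E = 1.06   # hypothetical scale exponent (placeholder)

def cosysmo_effort(size, effort_multipliers):
    """Estimated SE effort for a given functional size."""
    product = 1.0
    for em in effort_multipliers:
        product *= em
    return A * size ** E * product

# Doubling size more than doubles effort when E > 1:
print(cosysmo_effort(200, []) / cosysmo_effort(100, []))
```

Local calibration, as the slide notes, would amount to re-fitting A (and possibly E) to an organization's own project data.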

  13. Size Driver Influence on Functional Size (N = 35) # of Scenarios and # of Requirements accounted for 83% of functional size; the # of Interfaces and # of Algorithms drivers proved to be less significant.
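The way the four size drivers combine into one functional size can be sketched as a weighted sum over easy/nominal/difficult counts for each driver. The weights below are invented for illustration; the real weights come from the Delphi and data calibration.

```python
# Hypothetical (easy, nominal, difficult) weights per size driver --
# illustrative only, not the calibrated COSYSMO weights.
WEIGHTS = {
    "requirements": (0.5, 1.0, 5.0),
    "interfaces":   (1.1, 2.8, 6.3),
    "algorithms":   (2.2, 4.1, 11.5),
    "scenarios":    (6.2, 14.4, 30.0),
}

def functional_size(counts):
    """counts: driver name -> (n_easy, n_nominal, n_difficult)."""
    total = 0.0
    for driver, ns in counts.items():
        w = WEIGHTS[driver]
        total += sum(n * wi for n, wi in zip(ns, w))
    return total

print(functional_size({"requirements": (10, 20, 5)}))  # → 50.0
```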

  14. Parameter Transformation

  15. Size vs. Effort
  • 35 projects; R-squared = 0.55
  • Range of SE_HRS: Min = 881, Max = 1,377,458
  • Range of SIZE: Min = 82, Max = 17,763
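A fit of this shape is typically done by ordinary least squares on the logs, since a power law effort = A·size^E becomes linear in log-log space. The sketch below reproduces the form of the fit on synthetic data spanning the slide's reported SIZE range; the data and recovered constants are not the study's.

```python
import numpy as np

# Synthetic size/effort pairs (NOT the calibration data set), drawn
# over the SIZE range reported on the slide.
rng = np.random.default_rng(0)
size = rng.uniform(82, 17_763, 35)
effort = 0.3 * size ** 1.1 * rng.lognormal(0.0, 0.5, 35)

# OLS on the logs recovers the power-law constants.
X = np.column_stack([np.ones(35), np.log(size)])
y = np.log(effort)
(beta0, beta1), *_ = np.linalg.lstsq(X, y, rcond=None)
A_hat, E_hat = np.exp(beta0), beta1

# R-squared of the log-log fit (residual mean is 0 with an intercept).
resid = y - X @ np.array([beta0, beta1])
r2 = 1 - resid.var() / y.var()
print(f"A~{A_hat:.2f}, E~{E_hat:.2f}, R2~{r2:.2f}")
```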

  16. Intra-Size Driver Correlation • REQ & INTF are highly correlated (0.63) • ALG & INTF are highly correlated (0.64)
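The correlations quoted here are plain Pearson coefficients between driver columns of the data set. A minimal sketch, with synthetic counts standing in for the actual project data:

```python
# Pearson correlation between two size-driver columns.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

req  = [120, 340, 90, 560, 210]   # synthetic # of requirements
intf = [14, 40, 9, 70, 22]        # synthetic # of interfaces
print(round(pearson(req, intf), 2))
```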

  17. A Day In the Life… Common problems • Requirements reported at “sky level” rather than “sea level” • Test: if REQS < OPSC, then investigate • Often too high; requires some decomposition • Interfaces reported at “underwater level” rather than “sea level” • Test: if INTF source = pin or wire level, then investigate • Often too low; requires investigation of physical or logical I/F We will revisit these issues later

  18. A Day In the Life… (part 2) Common problems (cont.) • Algorithms not reported • Only size driver omitted by 14 projects spanning 4 companies • Still a controversial driver; divergent support • Operational Scenarios not reported • Only happened thrice (scope of effort reported was very small in all cases) • Fixable; involved going back to V&V documentation to extract at least one OPSC We will revisit these issues later

  19. The Case for Algorithms N = 21 • Reasons to keep ALG in model • Accounts for 16% of the total size in the 21 projects that reported ALG • It is described in the INCOSE SE Handbook as a crucial part of SE • Reasons to drop ALG from model • Accounts for 9% of total SIZE contribution • Omitted by 14 projects, 4 companies • Highly correlated with INTF (0.64) • Has a relatively small (0.53) correlation with Size (compared to REQ 0.91, INT 0.69, and OPSN 0.81)

  20. Cost Drivers • Original set consisted of > 25 cost drivers • Reduced down to 8 “application” and 6 “team” factors • See correlation handout • Regression coefficient improved from 0.55 to 0.64 with the introduction of cost drivers • Some may be candidates for elimination or aggregation

  21. Cost Drivers: Application Factor Distribution(RQMT, ARCH, LSVC, MIGR)

  22. Cost Drivers: Application Factor Distribution(TMAT, DOCU, INST, RECU)

  23. Cost Drivers: Team Factor Distribution(TEAM, PCAP, PEXP, PROC)

  24. Cost Drivers: Team Factor Distribution(SITE, TOOL)

  25. Top 10 Intra Driver Correlations • Size drivers correlated to cost drivers • 0.39 Interfaces & # of Recursive Levels in the Design • -0.40 Interfaces & Multi Site Coordination • 0.48 Operational Scenarios & # of Recursive Levels in Design • Cost drivers correlated to cost drivers • 0.47 Requirements Und. & Architecture Und. • -0.42 Requirements Und. & Documentation • 0.39 Requirements Und. & Stakeholder Team Cohesion • 0.43 Requirements Und. & Multi Site Coordination • 0.39 Level of Service Reqs. & Documentation • 0.50 Level of Service Reqs. & Personnel Capability • 0.49 Documentation & # of Recursive Levels in Design

  26. Candidate Parameters for Elimination • Size Drivers • # of Algorithms*^ • Cost Drivers (application factors) • Requirements Understanding*^ • Level of Service Requirements^ • # of Recursive Levels in the Design* • Documentation^ • # of Installations & Platforms^ • Personnel Capability^ • Tool Support^ Motivation for eliminating parameters is based on the high ratio of parameters (18) to data (35) and the need for degrees of freedom By comparison, COCOMO II has 23 parameters and over 200 data points • *Due to high correlation • ^Due to regression insignificance

  27. The Case for# of Recursive Levels in the Design • Reasons to keep RECU in model • Captures emergent properties of systems • Originally thought of as independent from other size and cost drivers • Reasons to drop RECU from model • Highly correlated to • Size (0.44) • Operational Scenarios (0.48) • Interfaces (0.39) • Documentation (0.49)

  28. Size driver counting rules Are there any suggested improvements? • Requirements • Need to add guidance with respect to • “system” vs. “system engineered” vs. “subsystem” requirements • “decomposed” vs. “derived” requirements • Current guidance includes • Requirements document, System Specification, RVTM, Product Specification, Internal functional requirements document, Tool output such as DOORS, QFD.

  29. Counting Rules: Requirements Number of System Requirements This driver represents the number of requirements for the system-of-interest at a specific level of design. The quantity of requirements includes those related to the effort involved in system engineering the system interfaces, system-specific algorithms, and operational scenarios. Requirements may be functional, performance, feature, or service-oriented in nature depending on the methodology used for specification. They may also be defined by the customer or contractor. Each requirement may have effort associated with it, such as V&V, functional decomposition, functional allocation, etc. System requirements can typically be quantified by counting the number of applicable shalls/wills/shoulds/mays in the system or marketing specification. Note: some work is involved in decomposing requirements so that they may be counted at the appropriate system-of-interest. How can we prevent the requirements count from being reported too high?
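The "count the applicable shalls/wills/shoulds/mays" rule is easy to mechanize as a first pass over a specification. A toy sketch; a real count would respect spec structure and the "applicable" qualifier, which plain text scanning cannot judge.

```python
import re

# Naive modal-verb count over spec text -- a first-pass approximation
# of the counting rule, not a substitute for engineering judgment.
MODALS = re.compile(r"\b(shall|will|should|may)\b", re.IGNORECASE)

def count_requirements(spec_text):
    return len(MODALS.findall(spec_text))

spec = (
    "The system shall log all commands. "
    "The operator may override mode selection. "
    "Telemetry will be archived daily."
)
print(count_requirements(spec))  # → 3
```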

  30. Counting Rules: Interfaces Number of System Interfaces This driver represents the number of shared physical and logical boundaries between system components or functions (internal interfaces) and those external to the system (external interfaces). These interfaces typically can be quantified by counting the number of external and internal system interfaces among ISO/IEC 15288-defined system elements. • Examples would be very useful • Current guidance includes • Interface Control Document, System Architecture diagram, System block diagram from the system specification, Specification tree. How can we prevent the interface count from being reported too low?

  31. Counting Rules: Algorithms Number of System-Specific Algorithms This driver represents the number of newly defined or significantly altered functions that require unique mathematical algorithms to be derived in order to achieve the system performance requirements. As an example, this could include a complex aircraft tracking algorithm like a Kalman Filter being derived using existing experience as the basis for the all-aspect search function. Another example could be a brand-new discrimination algorithm being derived for the identify-friend-or-foe function in space-based applications. The number can be quantified by counting the number of unique algorithms needed to realize the requirements specified in the system specification or mode description document. • Current guidance includes • System Specification, Mode Description Document, Configuration Baseline, Historical database, Functional block diagram, Risk analysis. Are we missing anything?

  32. Counting Rules: Op Scn Number of Operational Scenarios This driver represents the number of operational scenarios that a system must satisfy. Such scenarios include both the nominal stimulus-response thread plus all of the off-nominal threads resulting from bad or missing data, unavailable processes, network connections, or other exception-handling cases. The number of scenarios can typically be quantified by counting the number of system test thread packages or unique end-to-end tests used to validate the system functionality and performance or by counting the number of use cases, including off-nominal extensions, developed as part of the operational architecture. • Current guidance includes • Ops Con / Con Ops, System Architecture Document, IV&V/Test Plans, Engagement/mission/campaign models. How can we encourage Operational Scenario reporting?

  33. Effort Profiling mini-Delphi • Step 4 of the 7-step methodology • Two main goals • Develop a typical distribution profile for systems engineering across 4 of the 6 life cycle stages (i.e., how is SE distributed over time?) • Develop a typical distribution profile for systems engineering across 5 effort categories (i.e., how is SE distributed by activity category?)

  34. COCOMO II Effort Distribution MBASE/RUP phases and activities Source: Software Cost Estimation with COCOMO II, Boehm et al., 2000

  35. Our Goal for COSYSMO
  [Diagram mapping the ISO/IEC 15288 life cycle stages (Conceptualize, Develop, Operational Test & Evaluation, Transition to Operation, Operate/Maintain/Enhance, Replace or Dismantle) against the five EIA/ANSI 632 process groups (Acquisition & Supply, Technical Management, System Design, Product Realization, Technical Evaluation)]

  36. Mini Delphi Part 1
  Goal: Develop a distribution profile for 4 of the 6 life cycle phases
  • 5x6 matrix of EIA 632 processes vs. ISO 15288 life cycle phases
  • 33 EIA 632 requirements (for reference)

  37. Previous Results Are Informative
  [Chart of effort distribution across the five EIA/ANSI 632 process groups: Acquisition & Supply, Technical Management, System Design, Product Realization, Technical Evaluation]

  38. Breadth and Depth of Key SE Standards
  Level of detail: ISO/IEC 15288 (system life process description), EIA/ANSI 632 (high-level practices), IEEE 1220 (detailed practices); life cycle stages span Conceptualize, Develop, Transition to Operation, Operate/Maintain/Enhance, Replace or Dismantle
  Purpose of the Standards:
  • ISO/IEC 15288 - Establish a common framework for describing the life cycle of systems
  • EIA/ANSI 632 - Provide an integrated set of fundamental processes to aid a developer in the engineering or re-engineering of a system
  • IEEE 1220 - Provide a standard for managing systems engineering
  Source: Draft Report ISO Study Group, May 2, 2000

  39. 5 Fundamental Processes for Engineering a System Source: EIA/ANSI 632 Processes for Engineering a System (1999)

  40. 33 Requirements for Engineering a System Source: EIA/ANSI 632 Processes for Engineering a System (1999)

  41. Mini Delphi Part 2
  Goal: Develop a typical distribution profile for systems engineering across 5 effort categories
  • 5 EIA 632 fundamental processes
  • 33 EIA 632 requirements (for reference)

  42. Preliminary results: 4-person Delphi done last week at GSAW

  43. COSYSMO Invasion In chronological order:

  44. COSYSMO Risk Estimation Add-on
  Justification
  • USAF (Teets) and Navy acquisition chief (Young) require "High Confidence Estimates"
  • COSYSMO currently provides a single point solution
  • Elaboration of the "Sizing confidence level" in myCOSYSMO
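One way to move from a single point solution to a confidence range, sketched here under assumptions (this is not necessarily the myCOSYSMO or Gaffney prototype algorithm), is Monte Carlo sampling: treat the size input as a triangular (low, likely, high) distribution and read confidence levels off the resulting effort distribution.

```python
import random

# Monte Carlo confidence sketch over a triangular size input.
# A and E are hypothetical calibration constants (placeholders).
random.seed(1)
A, E = 0.25, 1.06

def sample_effort(low, likely, high, n=10_000):
    """Return (P10, P50, P90) effort estimates in arbitrary units."""
    efforts = sorted(A * random.triangular(low, high, likely) ** E
                     for _ in range(n))
    pct = lambda q: efforts[int(q * n)]
    return pct(0.10), pct(0.50), pct(0.90)

lo, med, hi = sample_effort(300, 500, 900)
print(f"P10={lo:.0f}  P50={med:.0f}  P90={hi:.0f}")
```

The point estimate then becomes one quantile of a distribution, which is what a "high confidence estimate" requirement asks for.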

  45. Final Items
  Open issues
  • Local calibrations
  • Lies, damned lies, and statistical outliers
  • Future plans for COSYSMO 2.0 (including ties to SoS work)
  Action items for next meeting: July 2005 in Keystone, CO
  • Combine Delphi R3 results and perform Bayesian approximation
  Dissertation defense: May 9

  46. Ricardo Valerdi rvalerdi@sunset.usc.edu Websites http://sunset.usc.edu http://valerdi.com/cosysmo
