
Factors Affecting Effort to Integrate and Test a System of Systems


Presentation Transcript


1. Factors Affecting Effort to Integrate and Test a System of Systems
Richard D. Stutzke
Science Applications International Corp.
6725 Odyssey Drive, Huntsville, AL 35806-3301 USA
(256) 971-6528 (office) • (256) 971-6678 (facsimile) • (256) 971-6562 (asst)
Richard.D.Stutzke@saic.com
Presented at the 20th International COCOMO and Software Cost Modeling Forum, Los Angeles, California, 25-28 October 2005

2. Agenda
• Context
• Factors to Consider
• Modeling Approach
• Size Measures
• Single Build
• Multiple Builds
• Estimating Process

3. Context
• Definition: A System of Systems (SoS) connects multiple systems to solve a large-scale problem.
• Engineers have built systems of systems for a century or so:
  • Hardware-intensive systems (e.g., large ships)
  • Complex analog/mechanical systems (e.g., early airplanes)
  • Complex digital systems with servos and computers (e.g., modern airplanes)
  • Large integrated plants and information systems (e.g., oil refineries, airports)
  • Ultralarge netcentric systems (e.g., Future Combat System)
• Example: the Future Combat System
  • Design and integrate 14 major weapons systems
  • Integrate and synchronize with ~157 complementary systems
  • Use ~53 “critical technologies”

4. Key Activities — “MBASE/RUP”
(Lifecycle activities diagram; figure not reproduced in the transcript.)

5. Essential Complexity Factors
• The number of application domains
• Maturity of business processes (detail of description)
• Compliance (uniform use of the defined processes)
• The number of sites
  • Homogeneity of configurations (platforms, applications)
  • Geographic dispersion
• The number of functional threads
  • Use cases
  • Must include exception handling (90/10 rule)
  • Degree of autonomy (→ more functionality)
• Required performance
  • Speed of response (ops tempo)
  • Dependability, safety, security, etc.
• Unprecedentedness in any of the above

6. Man-Made Complexity Factors
• Choice of solution technology
  • Number of domains
  • Technology maturity and readiness (-)
  • Technology refresh (+)
• Maturity of engineering (-?)
  • Development
  • Test
  • Transition
• Maturity of management processes (-)
• The number of stakeholder organizations involved
  • Buyer, Owner, Operator, User
  • Developer, Maintainer
  • Regulator, “the Public”, “the Media”
• Personnel turnover for all types of stakeholders (+)
• Inadequate funding
  • Omitted activities (-)
  • Low estimates (assume “best case”, ignore risks) (+)
  • Incremental funding (+)
• Unprecedentedness in any of the above (-)
Note: Long project duration increases (+) or decreases (-) the adverse effects.

7. Modeling Assumptions
• The original design identifies all of the necessary interfaces.
  • In practice, some interfaces may be overlooked.
  • Misunderstood interfaces lead to volatility and rework.
• Each interface of the SoS has two parts (see the sketch below):
  • Common part: shared components that provide the interface functionality (the “API”)
  • System-specific part: glue code for each component system
• Producing multiple builds incurs additional costs for:
  • Breakage
  • Maintenance and support
• Note: these costs affect the glue code and the common API differently.
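To make the two-part interface model concrete, here is a minimal Python sketch. The class and field names are illustrative assumptions, not part of the presentation.

```python
from dataclasses import dataclass, field

@dataclass
class Interface:
    """One SoS interface: a shared API part plus per-system glue code."""
    name: str
    api_size: float  # size/complexity of the common (shared) part
    glue_sizes: dict[str, float] = field(default_factory=dict)  # system -> glue size

    def add_glue(self, system: str, size: float) -> None:
        """Record the system-specific glue module for one endpoint."""
        self.glue_sizes[system] = size

# Hypothetical example: a messaging interface shared by systems A and B.
msg = Interface("messaging", api_size=5.0)
msg.add_glue("A", 1.5)  # glue on system A's side
msg.add_glue("B", 2.0)  # glue on system B's side
```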

8. Activities Covered
• Build and unit test the interface code
  • API (common, shared code)
  • Glue code (component-specific)
• Implement and test each link
  • Identify links using an N-squared matrix
• Multiple builds are messy but doable

9. An N-Squared Matrix
(Matrix of component systems, rows = FROM, columns = TO; figure not reproduced. A sketch of building one follows.)
Notes:
1 – Assumes “symmetry” for each interface pair
2 – Each cell may identify more than one interface
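A small Python sketch of how such a triangular matrix might be represented. Under note 1's symmetry assumption only the upper triangle (unordered FROM/TO pairs) is needed, and per note 2 each cell holds a list because one pair of systems may share more than one interface. The systems and interface names are hypothetical.

```python
from itertools import combinations

systems = ["A", "B", "C", "D"]  # illustrative component systems

# Upper-triangular N-squared matrix: one cell per unordered system pair.
matrix = {pair: [] for pair in combinations(systems, 2)}

matrix[("A", "B")].append("messaging")  # hypothetical interfaces
matrix[("A", "C")].append("telemetry")
matrix[("B", "C")].extend(["messaging", "command"])

# Each (pair, interface) entry is one link to integrate and test.
links = [(pair, iface) for pair, ifaces in matrix.items() for iface in ifaces]
print(f"{len(links)} links to integrate and test")  # -> 4 links
```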

10. Counting a Single Release
(Diagram of four linked component systems, A–D, annotated with glue-module counts; figure not reproduced.)
Note: # Glue modules ≠ 2 × (# Links)
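One plausible reading of this note (an assumption, not stated on the slide) is that a component system can reuse a single glue module for the same API across several links, so the glue count falls below two per link. A hypothetical Python illustration:

```python
# Links as (from_system, to_system, api) triples -- illustrative data only.
links = [
    ("A", "B", "messaging"),
    ("A", "C", "messaging"),  # A reuses its messaging glue for both links
    ("B", "D", "telemetry"),
]

# Count distinct (system, api) pairs: one glue module per system per API.
glue_modules = {(sys, api) for a, b, api in links for sys in (a, b)}

print(len(links), "links")                # 3 links
print(len(glue_modules), "glue modules")  # 5, not 2 * 3 = 6
```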

11. Estimating a Single Release
• Assume “constant effort” for a given activity:
  • E_API = effort to build and unit test the common code
  • E_GLUE = effort to build and unit test one glue module
  • E_LINK = effort to test one link
• Estimated effort:
  E_TOTAL = (#API) × E_API      [build APIs]
          + (#GLUE) × E_GLUE    [build glue]
          + (#LINK) × E_LINK    [test links]
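A direct transcription of this formula as a Python function; the effort constants are illustrative assumptions, not values from the presentation.

```python
# Assumed per-activity effort constants (person-hours; illustrative only).
E_API, E_GLUE, E_LINK = 40.0, 16.0, 8.0

def total_effort(n_api: int, n_glue: int, n_link: int) -> float:
    """E_TOTAL = (#API)*E_API + (#GLUE)*E_GLUE + (#LINK)*E_LINK."""
    return n_api * E_API + n_glue * E_GLUE + n_link * E_LINK

# Example: 2 APIs, 5 glue modules, 3 links.
print(total_effort(n_api=2, n_glue=5, n_link=3))  # 80 + 80 + 24 = 184.0
```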

12. Notation for Multiple Incremental Builds
(Notation table not reproduced. From the recursion equations on slide 14: L(b) = total links in build b, LN(b) = links new in build b, M(b) = links modified in build b, U(b) = links unmodified in build b.)

13. Effort for Build “b”
(Formula slide; not reproduced in the transcript.)

14. Recursion Equations*
• First build:
  L(1) = LN(1)
  M(1) = 0
  U(1) = 0
• Subsequent builds:
  L(b) = L(b-1) + LN(b)
  U(b) = L(b-1) - M(b)
*These equations assume that no links are ever deleted.
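The recursion is straightforward to compute build by build. A minimal Python sketch, assuming (as the slides do) that no links are deleted and that LN(b) and M(b) are given per build:

```python
def link_counts(new_links: list[int], modified: list[int]):
    """Return (L, U) per build from LN(b) and M(b), given as 1-indexed lists."""
    L, U = [], []
    for b, (ln, m) in enumerate(zip(new_links, modified)):
        prev = L[b - 1] if b > 0 else 0  # L(b-1); zero before the first build
        L.append(prev + ln)              # L(b) = L(b-1) + LN(b)
        U.append(prev - m)               # U(b) = L(b-1) - M(b); U(1) = 0
    return L, U

# Example: three builds adding 4, 3, 2 links, with 0, 1, 2 links modified.
L, U = link_counts([4, 3, 2], [0, 1, 2])
print(L)  # [4, 7, 9]
print(U)  # [0, 3, 5]
```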

15. Steps of the Estimating Process
1. Identify the component systems (name, owner).
2. Identify and categorize the interfaces (name, owner, API complexity and/or size*).
3. Build the triangular N-squared matrix (from/to, glue code complexity and/or size*).
4. Calculate the effort by build (see the end-to-end sketch below):
   • Build and unit test APIs and glue modules
   • Integrate and test links (new, modified, and possibly deleted)
5. Sum the values for all builds.
*If the interface will evolve, specify values by build and include breakage or volatility.
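An end-to-end sketch of these steps in Python. The split of link-testing effort into new versus modified links is an assumption layered on the slide-14 recursion, and all constants and counts are illustrative:

```python
# Assumed per-activity effort constants (person-hours; illustrative only).
E_API, E_GLUE = 40.0, 16.0  # build and unit test, per API / glue module
E_NEW, E_MOD = 8.0, 4.0     # test a new link vs. retest a modified link

# Hypothetical per-build counts taken off an N-squared matrix (steps 1-3).
builds = [
    {"api": 2, "glue": 4, "new_links": 4, "mod_links": 0},
    {"api": 0, "glue": 2, "new_links": 3, "mod_links": 1},
]

# Step 4: effort per build; step 5: sum over all builds.
total = sum(
    b["api"] * E_API + b["glue"] * E_GLUE
    + b["new_links"] * E_NEW + b["mod_links"] * E_MOD
    for b in builds
)
print(f"Estimated SoS integration effort: {total} person-hours")  # 236.0
```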
