
Comparison and Assessment of Cost Models for NASA Flight Projects

Ray Madachy, Barry Boehm, Danni Wu

{madachy, boehm, danwu}@usc.edu

USC Center for Systems & Software Engineering

http://csse.usc.edu

21st International Forum on COCOMO and Software Cost Modeling, November 8, 2006

Outline
  • Introduction and background
  • Model comparison examples
  • Estimation performance analysis
  • Conclusions and future work
Introduction
  • This work is sponsored by the NASA AMES project Software Risk Advisory Tools, Cooperative Agreement No. NNA06CB29A
  • Existing parametric software cost, schedule, and quality models are being assessed and updated for critical NASA flight projects
    • Includes a comparative survey of their strengths, limitations and suggested improvements
    • Developing transformations between the models
    • Accuracies and needs for calibration are being examined with relevant NASA project data
  • This presentation describes the latest developments in ongoing research at the USC Center for Systems and Software Engineering (USC-CSSE)
    • Current work builds on previous research with NASA and the FAA
Frequently Used Cost/Schedule Models for Critical Flight Software
  • COCOMO II is a public-domain model, continually updated by USC and implemented in several commercial tools
  • SEER-SEM and TrueS are proprietary commercial models with unique features that also share some aspects with COCOMO
    • Include factors for project type and application domain
  • All three have been extensively used and tailored for flight project domains
Support Acknowledgments
  • Galorath Inc. (SEER-SEM)
    • Dan Galorath, Tim Hohmann, Bob Hunt, Karen McRitchie
  • PRICE Systems (True S)
    • Arlene Minkiewicz, James Otte, David Seaver
  • Softstar Systems (COCOMO calibration)
    • Dan Ligett
  • Jet Propulsion Laboratory
    • Jairus Hihn, Sherry Stukes
  • NASA Software Risk Advisory Tools research team
    • Mike Lowry, Tim Menzies, Julian Richardson
  • This study was performed mostly by persons highly familiar with COCOMO but not necessarily with the vendor models. The vendors do not certify or sanction the data or information contained in these charts.
Approach
  • Develop “Rosetta Stone” transformations between the models so that COCOMO inputs can be converted into corresponding inputs to the other models, or vice versa (a small illustrative sketch follows this list)
    • Crosscheck multiple estimation methods
    • Represent projects in a consistent manner in all models and help understand why estimates may vary
    • Extensive discussions with model proprietors to clarify definitions
  • Models assessed against a common database of relevant projects
    • Using the NASA 94 database, which contains effort, size, and COCOMO cost factors for completed NASA projects
      • Completion dates from the 1970s through the late 1980s
    • Additional data as it comes in from NASA or other data collection initiatives
  • Analysis considerations
    • Calibration issues
    • Model deficiencies and extensions
    • Accuracy with relevant project data
  • Repeat analysis with updated calibrations, revised domain settings, improved models and new data
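To make the “Rosetta Stone” idea above concrete, here is a minimal sketch of a one-way rating translation. The translation table and the factor names shown are hypothetical placeholders that only illustrate the form of such a mapping, not the actual published values.

```python
# Minimal sketch of a "Rosetta Stone" rating translation (hypothetical values).
# Each entry maps a COCOMO II factor rating to an assumed equivalent rating in
# another model; the real translations are defined per factor by the study.

# Hypothetical table: {COCOMO II factor: {COCOMO rating: SEER-SEM rating}}
ROSETTA_STONE = {
    "RUSE": {"XH": "XH", "VH": "VH", "N": "H", "L": "N"},   # illustrative only
    "RELY": {"VH": "VH", "H": "H", "N": "N", "L": "L"},     # illustrative only
}

def translate_to_seer(cocomo_ratings):
    """Convert COCOMO II factor ratings into SEER-SEM ratings, leaving factors
    without a mapping untouched so they can be reviewed by hand."""
    seer_ratings = {}
    for factor, rating in cocomo_ratings.items():
        mapping = ROSETTA_STONE.get(factor, {})
        seer_ratings[factor] = mapping.get(rating, rating)
    return seer_ratings

# Example: crosschecking a single project's inputs in both models
print(translate_to_seer({"RUSE": "VH", "RELY": "H", "DOCU": "N"}))
```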
Outline
  • Introduction and background
  • Model comparison examples
  • Estimation performance analysis
  • Conclusions and future work
Cost Model Comparison Attributes
  • Algorithms
  • Size definitions
    • New, reused, modified, COTS
    • Language adjustments
  • Cost factors
    • Exponential, linear
  • Work breakdown structure (WBS) and labor parameters
    • Scope of activities and phases covered
    • Hours per person-month (a small conversion sketch follows)
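Because the models assume different labor parameters, nominally identical effort numbers are not directly comparable. The sketch below converts person-months between two hours-per-person-month conventions; COCOMO II defines 152 labor hours per person-month, while the 160 used for the other model is only a placeholder to be replaced by the vendor tool's actual setting.

```python
# Convert effort between models that assume different hours per person-month.
COCOMO_HOURS_PER_PM = 152.0       # COCOMO II definition
OTHER_MODEL_HOURS_PER_PM = 160.0  # placeholder; check the vendor tool's setting

def convert_person_months(pm, from_hours_per_pm, to_hours_per_pm):
    """Re-express person-months under a different hours-per-PM convention."""
    total_hours = pm * from_hours_per_pm
    return total_hours / to_hours_per_pm

# Example: 100 PM reported under COCOMO II conventions
print(convert_person_months(100.0, COCOMO_HOURS_PER_PM, OTHER_MODEL_HOURS_PER_PM))  # 95 PM
```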
Common Effort Formula

Effort = A * Size^B * EM

  • Effort in person-months
  • A - calibrated constant
  • B - scale factor
  • EM - effort multiplier from cost factors

(Diagram: inputs are size, cost factors, and calibrations; output is effort with phase and activity decomposition.)
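A minimal sketch of this common form follows. The values of A and B shown are placeholders, not the calibrated constants of any particular model.

```python
# Common parametric effort form: Effort = A * Size^B * EM (person-months).
# The constants below are placeholders, not calibrated values of any model.

def estimate_effort(size_ksloc, A=2.94, B=1.10, effort_multipliers=None):
    """Return estimated effort in person-months for a given size in KSLOC."""
    em = 1.0
    for multiplier in (effort_multipliers or []):
        em *= multiplier
    return A * (size_ksloc ** B) * em

# Example: a 50 KSLOC project with two cost-factor multipliers applied
print(estimate_effort(50.0, effort_multipliers=[1.10, 0.90]))
```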
Example: Model Size Inputs

(Table comparing the size input definitions across the models; only its footnotes are reproduced here.)

1 - Not applicable for reused software
2 - Specified separately for Designed for Reuse and Not Designed for Reuse
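Size inputs typically distinguish new, reused, and modified code. As a hedged illustration of how adapted code is commonly folded into an equivalent size, the sketch below uses the simplified COCOMO adaptation adjustment factor (0.4·DM + 0.3·CM + 0.3·IM), ignoring the additional assessment, understanding, and unfamiliarity terms that COCOMO II and the vendor models layer on top; the function name is ours.

```python
# Equivalent size from new and adapted code, using the simplified COCOMO
# adaptation adjustment factor AAF = 0.4*DM + 0.3*CM + 0.3*IM (percentages).

def equivalent_ksloc(new_ksloc, adapted_ksloc, dm_pct, cm_pct, im_pct):
    """Combine new and adapted KSLOC into a single equivalent-size figure."""
    aaf = 0.4 * dm_pct + 0.3 * cm_pct + 0.3 * im_pct  # percent, 0-100
    return new_ksloc + adapted_ksloc * aaf / 100.0

# Example: 20 KSLOC new plus 40 KSLOC adapted, with 10% design, 20% code,
# and 40% integration modification
print(equivalent_ksloc(20.0, 40.0, dm_pct=10, cm_pct=20, im_pct=40))  # 28.8 KSLOC
```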
Example: SEER Factors with No Direct COCOMO II Mapping

PRODUCT REUSABILITY

  • Software Impacted by Reuse

DEVELOPMENT ENVIRONMENT COMPLEXITY

  • Language Type (Complexity)
  • Host Development System Complexity
  • Application Class Complexity [3]
  • Process Improvement

TARGET ENVIRONMENT

  • Special Display Requirements
  • Real Time Code
  • Security Requirements

PERSONNEL CAPABILITIES AND EXPERIENCE

  • Practices and Methods Experience

DEVELOPMENT SUPPORT ENVIRONMENT

  • Modern Development Practices
  • Logon thru Hardcopy Turnaround
  • Terminal Response Time
  • Resource Dedication
  • Resource and Support Location
  • Process Volatility

PRODUCT DEVELOPMENT REQUIREMENTS

  • Requirements Volatility (Change) [1]
  • Test Level [2]
  • Quality Assurance Level [2]
  • Rehost from Development to Target

1 – COCOMO II uses the Requirements Evolution and Volatility size adjustment factor

2 – Captured in the COCOMO II Required Software Reliability factor

3 – Captured in the COCOMO II Complexity factor

Vendor Elaborations of Critical Domain Factors

* SEER factors are supplemented with, and may be affected by, knowledge base settings for:

  • Platform
  • Application
  • Acquisition method
  • Development method
  • Development standard
  • Class
  • Component type (COTS only)
Example: Required Reusability Mapping
  • Cost to develop software module for subsequent reuse
  • Rating scales:
    • SEER-SEM Reusability Level: XH = Across organization, VH = Across product line, H = Across project, N = No requirements
    • SEER-SEM Software Impacted by Reuse: 100%, 50%, 25%, 0% reusable
    • COCOMO II: XH = Across multiple product lines, VH = Across product line, H = Across program, N = Across project, L = None
  • SEER-SEM to COCOMO II:
    • XH = XH in COCOMO II (100% reuse level = 1.50, 50% = 1.40, 25% = 1.32, 0% = 1.25)
    • VH = VH in COCOMO II (100% reuse level = 1.32, 50% = 1.26, 25% = 1.22, 0% = 1.16)
    • H = N in COCOMO II
    • N = L in COCOMO II
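A minimal sketch encoding the mapping above follows. Only the four listed reuse percentages are covered; any interpolation between them would be an assumption, since the slide gives only these anchor points, and the function name is ours.

```python
# Mapping of SEER-SEM Reusability Level (plus % of software impacted by reuse)
# to a COCOMO II rating and effort multiplier, using the values from the slide.

RATING_MAP = {"XH": "XH", "VH": "VH", "H": "N", "N": "L"}  # SEER-SEM -> COCOMO II

MULTIPLIERS = {
    ("XH", 100): 1.50, ("XH", 50): 1.40, ("XH", 25): 1.32, ("XH", 0): 1.25,
    ("VH", 100): 1.32, ("VH", 50): 1.26, ("VH", 25): 1.22, ("VH", 0): 1.16,
}

def cocomo_ruse(seer_rating, pct_impacted):
    """Return (COCOMO II rating, effort multiplier or None) for a SEER-SEM input."""
    rating = RATING_MAP[seer_rating]
    return rating, MULTIPLIERS.get((seer_rating, pct_impacted))

print(cocomo_ruse("VH", 50))  # ('VH', 1.26)
print(cocomo_ruse("H", 0))    # ('N', None) - slide gives no multiplier for H or N
```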
Outline
  • Introduction and background
  • Model comparison examples
  • Estimation performance analysis
  • Conclusions and future work
Model Analysis Flow

(Flow diagram: the analysis steps are applied to COCOMO II, SEER-SEM, and True S; not all steps are performed on iterations 2-n.)

Performance Measures
  • For each model, compare actual and estimated effort for n projects in a dataset:

Relative Error (RE) = (Estimated Effort – Actual Effort) / Actual Effort

Magnitude of Relative Error (MRE) = |Estimated Effort – Actual Effort| / Actual Effort

Mean Magnitude of Relative Error (MMRE) = (Σ MRE) / n

Root Mean Square (RMS) = ((1/n) Σ (Estimated Effort – Actual Effort)²)^½

Prediction level PRED(L) = k / n

where k = the number of projects in a set of n projects whose MRE <= L.

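A minimal sketch of these measures over paired actual and estimated effort values, written straight from the definitions above (function names are ours):

```python
# Performance measures for a set of (actual, estimated) effort pairs.

def mmre(actuals, estimates):
    """Mean magnitude of relative error."""
    mres = [abs(e - a) / a for a, e in zip(actuals, estimates)]
    return sum(mres) / len(mres)

def rms(actuals, estimates):
    """Root mean square of the estimation error."""
    return (sum((e - a) ** 2 for a, e in zip(actuals, estimates)) / len(actuals)) ** 0.5

def pred(actuals, estimates, level=0.40):
    """PRED(L): fraction of projects whose MRE is within level (0.40 = PRED(40))."""
    hits = sum(1 for a, e in zip(actuals, estimates) if abs(e - a) / a <= level)
    return hits / len(actuals)

# Example with three projects (effort in person-months)
actual = [100.0, 250.0, 60.0]
estimated = [90.0, 310.0, 55.0]
print(mmre(actual, estimated), rms(actual, estimated), pred(actual, estimated))
```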
COCOMO II Performance Examples

(Charts: MMRE calibration effect and PRED(40) calibration effect.)
SEER-SEM Performance Examples

(Charts: MMRE progressive adjustment effects and PRED(40) progressive adjustment effects.)

Outline
  • Introduction and background
  • Model comparison examples
  • Estimation performance analysis
  • Conclusions and future work
Vendor Concerns
  • Study limited to a COCOMO viewpoint only
  • Current Rosetta Stones need review and may be weak translators from the original data
  • Results not indicative of model performance due to ignored parameters
  • Risk and uncertainty were excluded by the study's ground rules
  • Data sanity checking needed

Conclusions (1/2)

  • All cost models (COCOMO II, SEER-SEM, True S) performed well against the NASA database of critical flight software
    • Calibration and knowledge base settings improved default model performance
    • Estimate performance varies by domain subset
  • Complexity and reliability factor distributions characterize the domains as expected
  • SEER-SEM and True S vendor models provide additional factors beyond COCOMO II
    • More granular factors for the overall effects captured in the COCOMO II Complexity factor
    • Additional factors for other aspects, many of which are relevant for NASA projects
  • Some difficulties mapping inputs between models, but simplifications are possible
  • Reconciliation of effort WBS necessary for valid comparison between models
Conclusions (2/2)
  • Models exhibited nearly equivalent performance trends for embedded flight projects within the different subgroups
    • Initial uncalibrated runs from COCOMO II and SEER-SEM both underestimated the projects by approximately 50% overall
    • Improvement trends between uncalibrated estimates and those with calibrations or knowledge base refinements were almost identical
      • SEER experiments illustrated that model performance measures markedly improved when incorporating knowledge base information for the domains
    • All three models have roughly the same final performance measures for either individual flight groups or combined
  • In practice no one model should be preferred over all others
    • Use a variety of methods and tools and then investigate why the estimates may vary
Future Work
  • The study has been helpful in reducing sources of misinterpretation across the models, but considerably more should be done *
    • Developing two-way and/or multiple-way Rosetta Stones
    • Explicit identification of residual sources of uncertainty across models and their estimates not fully addressable by Rosetta Stones
    • Factors unique to some models but not others
    • Many-to-many factor mappings
    • Partial factor-to-factor mappings
    • Similar factors that affect estimates in different ways: linear, multiplicative, exponential, other
    • Imperfections in data: subjective rating scales, code counting, counting of other size factors, effort/schedule counting, endpoint definitions and interpretations, WBS element definitions and interpretations
  • Repeating the analysis with improved models, new data and updated Rosetta Stones
    • COCOMO II may be revised for critical flight project applications
  • Improved analysis process
    • Revise vendor tool usage to set knowledge bases before setting COCOMO translation parameters
    • Capture estimate inputs in all three model formats; try different translation directionalities
  • With modern and more comprehensive data, COCOMO II and other models can be further improved and tailored for NASA project usage
    • Additional data always welcome

* The study participants welcome sponsorship of further joint efforts to pin down sources of uncertainty, and to more explicitly identify the limits to comparing estimates across models

Bibliography
  • Boehm B, Abts C, Brown A, Chulani S, Clark B, Horowitz E, Madachy R, Reifer D, Steece B, Software Cost Estimation with COCOMO II, Prentice-Hall, 2000
  • Boehm B, Abts C, Chulani S, Software Development Cost Estimation Approaches – A Survey, USC-CSE-00-505, 2000
  • Galorath Inc., SEER-SEM User Manual, 2005
  • Lum K, Powell J, Hihn J, Validation of Spacecraft Software Cost Estimation Models for Flight and Ground Systems, JPL Technical Report, 2001
  • Madachy R, Boehm B, Wu D, Comparison and Assessment of Cost Models for NASA Flight Projects, http://sunset.usc.edu/csse/TECHRPTS/2006/usccse2006-616/usccse-2006-616.pdf, USC Center for Systems and Software Engineering Technical Report USC-CSSE-2006-616, 2006
  • PRICE Systems, True S User Manual, 2005
  • Reifer D, Boehm B, Chulani S, The Rosetta Stone - Making COCOMO 81 Estimates Work with COCOMO II, Crosstalk, 1999