Lecture 17EstimationBased on: Software Engineering, A Practitioner’s Approach, 6/e, R.S. Pressman

Software Engineering

Fall 2005

Overview

  • Software planning involves estimating how much time, effort, money, and resources will be required to build a specific software system.
  • After the project scope is determined and the problem is decomposed into smaller problems, software managers use historical project data (as well as personal experience and intuition) to determine estimates for each.
  • The final estimates are typically adjusted by taking project complexity and risk into account. The resulting work product is called a project management plan.
Software Project Planning

The overall goal of project planning is to establish a pragmatic strategy for controlling, tracking, and monitoring a complex technical project.


So the end result gets done on time, with quality!

Project Planning
  • Software project planning encompasses five major activities:

- Estimation,

- Scheduling,

- Risk analysis,

- Quality management planning, and

- Change management planning.

Project Planning Objectives
  • To provide a framework that enables a software manager to make a reasonable estimate of resources, cost, and schedule.
  • 'Best case' and 'worst case' scenarios should be used to bound project outcomes.
  • Estimates should be updated as the project progresses.
Project Planning Task Set-I

1. Establish project scope

2. Determine feasibility

3. Analyze risks

4. Define required resources

  • Determine required human resources
  • Define reusable software resources
  • Identify environmental resources
Project Planning Task Set-II

5. Estimate cost and effort

- Decompose the problem;

- Develop two or more estimates using size, function points, process tasks or use-cases;

- Reconcile the estimates.

6. Develop a project schedule

- Establish a meaningful task set;

- Define a task network;

- Use scheduling tools to develop a timeline chart;

- Define schedule tracking mechanisms.

  • Estimation of resources, cost, and schedule for a software engineering effort requires experience, access to good historical information (metrics), and the courage to commit to quantitative predictions when only qualitative information exists.
  • Estimation carries inherent risk and this risk leads to uncertainty.
  • Estimation begins with a description of the scope of the product.
  • The problem is then decomposed into a set of smaller problems, and each of these is estimated using historical data and experience as guides.
  • Problem complexity and risk are considered before a final estimate is made.
Estimation Work Product
  • A simple table delineating:

- the task to be performed,

- the functions to be implemented,

- and the cost, effort, and time involved for each.

What is Scope?
  • Software scope describes
    • the functions and features that are to be delivered to end-users
    • the data that are input and output
    • the “content” that is presented to users as a consequence of using the software
    • the performance, constraints, interfaces, and reliability that bound the system.
  • Scope is defined using one of two techniques:
      • A narrative description of software scope is developed after communication with all stakeholders.
      • A set of use-cases is developed by end-users.

Why is a feasibility assessment part of the planning process?

  • If a project is not technically possible, there is no point in trying to build it. But technical feasibility is not the whole story: the project must also fulfill a business need, to avoid building a high-tech product that has no customers.
  • Technical feasibility alone is not a good enough reason to build a product.
  • The product must meet the customer's needs and not be available as an off-the-shelf purchase.
Software Feasibility
  • Technology – Is a project technically feasible?
  • Finance – Can development be completed at a cost the software organization, its client, or the market can afford?
  • Time – Will the project’s time-to-market beat the competition?
  • Resources – Does the organization have the resources needed to succeed?
Estimation of Resources
  • The following resources are typically included in the project estimation process:

- Human resources - the number of people required and the skills needed to complete the development project;

- Reusable software resources - off-the-shelf components, full-experience components, partial-experience components, and new components;

- Environment resources - hardware and software that must be accessible to the software team during the development process.

  • What people are available?
  • What software can you really re-use?
    • how much must be modified?
      • if over 20% modification, consider starting afresh
    • is the documentation sufficient for this customer?
      • if not or marginal, cost of re-use gets very high
    • what aspects can you re-use?
      • code
      • design
      • design concepts
      • requirements
  • What tools are available? Are they adequate?
Software Resources

Four software categories:

  • Off-the-shelf components – Existing software can be acquired from a third party or has been developed internally for a past project. COTS (commercial off-the-shelf) components are purchased from a third party, are ready for use on the current project, and have been fully validated.
  • Full-experience components – Existing specifications, designs, code, or test data developed for past projects are similar to the software to be built for the current project.
  • Partial-experience components – Existing specifications, designs, code, or test data developed for past projects are related to the software to be built for the current project but will require substantial modification. Modifications required for partial-experience components carry a fair degree of risk.
  • New components – Software components must be built by the software team specifically for the needs of the current project.
Software Project Estimation Options
  • Delay estimation until late in the project – not practical.
  • Base estimates on similar projects already completed – unfortunately, past experience is not always a good indicator of future results.
  • Use simple decomposition techniques to estimate project cost and effort. Automated tools may assist with project decomposition and estimation.
  • Use empirical models for software cost and effort estimation.
Decomposition Techniques
  • Software sizing – a project estimate is only as good as the estimate of the size of the work to be done.
  • Problem-based estimation – using lines of code (LOC), decomposition focuses on software functions; using function points (FP), decomposition focuses on information domain characteristics.
  • Process-based estimation – decomposition based on the tasks required to complete the software process framework.
  • Use-case estimation – promising, but controversial due to the lack of standardization of use-cases.
Software Sizing
  • “Fuzzy logic” sizing
    • when you have no idea what you are doing
  • Function point sizing
    • when you know roughly the overall behavior of the software.
  • Standard component sizing
    • when you know roughly what the solution looks like. The project planner estimates the number of occurrences of each standard component and then uses historical project data to determine the delivered size per standard component.
  • Change sizing
    • when you are several (>2) cycles into the spiral model. Estimate the number and type (e.g., reuse, adding, changing, deleting code) of modifications that must be accomplished.
Software Sizing
  • Each of the sizing approaches should be combined statistically to create a three-point or expected-value estimate. This is accomplished by developing optimistic (low), most likely, and pessimistic (high) values for size and combining them.
Expected-value Computation
  • A three-point or expected-value for the estimation variable (size), S, can be computed as a weighted average of the optimistic (Sopt), most likely (Sm), and pessimistic (Spess) estimates. For example,

S = (Sopt + 4Sm + Spess)/6 (23-1)

gives heaviest credence to the ‘most likely’ estimate.
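As a sketch, Equation (23-1) can be written directly; the sample sizes below are illustrative values for a single function, not figures from the slides:

```python
def expected_size(s_opt, s_m, s_pess):
    """Expected value per Eq. 23-1: S = (Sopt + 4*Sm + Spess) / 6."""
    return (s_opt + 4 * s_m + s_pess) / 6

# Illustrative: optimistic 4,600 LOC, most likely 6,900 LOC, pessimistic 8,600 LOC
s = expected_size(4600, 6900, 8600)
print(s)  # 6800.0
```

The 4x weight on the 'most likely' value is what gives it the heaviest credence.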

Problem-Based Estimation
  • Using LOC decomposition focuses on software functions.
  • Using FP decomposition focuses on information domain characteristics.
  • Regardless of the estimation variable that is used, the project planner begins by estimating a range of values for each function or information domain value. Using historical data (or intuition), the planner estimates an optimistic, most likely, and pessimistic size value for each function or count for each information domain value. An expected value can then be computed.
  • Once the expected value for the estimation variable has been determined, historical LOC or FP productivity data are applied.
Metrics for the Size of a Product
  • Lines of code (LOC, KLOC, KDSI - thousands of delivered source instructions)
  • Function Points
Estimation - Conventional Methods:LOC/FP Approach
  • Compute LOC (Line of Code)/FP using estimates of information domain values
  • Use historical data to build estimates for the project
Lines of Code (LOC)
  • The end product is delivered software; the amount of product should be an indication of the effort required to produce it.
  • It is not clear how to count lines of code
    • Executable lines of code?
    • Data definitions?
    • Comments?
    • Changed/deleted lines?
  • A report, screen, or GUI generator can generate thousands of lines of code in minutes
  • Pro: LOC is an "artifact" of all software development projects that can be easily counted.
  • Con: LOC measures are programming language dependent. They penalize well-designed but shorter programs. They cannot easily accommodate non-procedural languages.
  • Con: When used in estimation, planner must estimate the LOC long before analysis and design have been completed.
Comparing LOC and FP

Representative values developed by QSM (Quantitative Software Management)

  • One LOC of C++ provides approximately 2.4 times the 'functionality' (on average) of one LOC of C.
  • One LOC of Smalltalk provides at least four times the functionality of one LOC of a conventional programming language such as Ada, COBOL, or C.
Example 1: LOC Based Estimation
  • Let us consider a software package to be developed for a computer-aided design application for mechanical components. The software is to execute on an engineering workstation and must interface with various peripherals including a mouse, digitizer, high-resolution colour display, and laser printer.
Example 1: LOC Based Estimation
  • A preliminary scope:

The mechanical CAD software will accept two- and three-dimensional geometric data from an engineer. The engineer will interact and control the CAD system through a user interface that will exhibit characteristics of good human/machine interface design. All geometric data and other supporting information will be maintained in a CAD database. Design analysis modules will be developed to produce the required output, which will be displayed on a variety of graphics devices. The software will be designed to control and interact with peripheral devices that include a mouse, digitizer, laser printer, and plotter.

Example 1: LOC Based Estimation
  • Use three-point estimation to estimate LOC (Equation 23-1).
  • Use historical productivity data (LOC/pm) to compute effort from the estimated LOC.
  • Compute software cost from labor cost.
Example 1: LOC Based Estimation
  • A review of historical data indicates that the average productivity for systems of this type is 620 LOC/pm.
  • With a burdened labor rate of €8000 per month, the cost per line of code is approximately €13.
  • Based on the LOC estimate of 33,200 and the historical productivity data:

- the total estimated project cost is €431,000 (≈ 33,200 x 13);

- the estimated effort is 54 person-months (33,200 / 620).
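The arithmetic above can be checked with a short sketch (values as given on the slides; the quoted €431,000 reflects rounding the cost per LOC up to €13):

```python
loc_estimate = 33_200    # expected LOC for the CAD software
productivity = 620       # LOC per person-month (historical average)
labor_rate = 8000        # burdened cost per person-month, in euros

cost_per_loc = labor_rate / productivity   # ~12.9, rounded to ~13 on the slide
effort_pm = loc_estimate / productivity    # ~53.5, quoted as 54 person-months
total_cost = loc_estimate * 13             # 431,600, quoted as ~431,000
```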

Example 2: Function Point Estimation
  • Referring to the LOC Table, the project planner estimates external inputs, external outputs, external inquiries, internal logical files, and external interface files for the CAD software.
  • For the purpose of this estimate, the complexity weighting factor is assumed to be average.
Example 2: Function Point Estimation
  • Can use the expected-value computation for each count.
  • Use the complexity weighting factors:

(Σ Fi = 52)

  • Compute:

FPest = count_total x (0.65 + 0.01 x Σ Fi)

FPest = 318 x (0.65 + 0.01 x 52) = 372

Example 2: Function Point Estimation
  • Organizational average productivity for systems of this type is 6.5 FP/pm.
  • With a burdened labor rate of €8000 per month, the cost per FP is approximately €1230.
  • Based on the FP estimate and the historical productivity data:

- the total estimated project cost is €458,000 (FPest x 1230);

- the estimated effort is 57 person-months (FPest / 6.5).
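A sketch of the FP arithmetic, with the counts and adjustment factors as given on the slides:

```python
count_total = 318    # unadjusted function-point count (expected values)
sum_fi = 52          # sum of the value-adjustment factors

fp_est = round(count_total * (0.65 + 0.01 * sum_fi))   # 372 function points
productivity = 6.5   # FP per person-month (organizational average)
labor_rate = 8000    # euros per person-month

effort_pm = fp_est / productivity     # ~57 person-months
total_cost = effort_pm * labor_rate   # ~458,000 euros
```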

Process-Based Estimation
  • The most common technique for estimating a project is to base the estimate on the process that will be used.
  • That is, the process is decomposed into a relatively small set of tasks and the effort required to accomplish each task is estimated.
Process-Based Estimation

The framework activities are obtained from the “process framework”; the effort required to accomplish each framework activity is estimated for each application function.

Example 3: Process-Based Estimation
  • Consider the CAD software introduced earlier. The system configuration and all software functions remain unchanged, as indicated by the project scope.
  • Estimates of effort (in person-months) – next Figure – for each software engineering activity are provided for each CAD software function. The engineering and construction activities are subdivided into the major software engineering tasks shown.
Example 3: Process-Based Estimation
  • Gross estimates of effort are provided for customer communication, planning, and risk analysis. These are noted at the bottom of the table.
  • Horizontal and vertical totals provide an indication of the estimated effort required for analysis, design, code, and test. 53 percent of all effort is expended on front-end engineering tasks (requirements analysis and design), indicating the importance of this work.
Example 3: Process-Based Estimation
  • Based on an average burdened labor rate of $8,000 per month:

- the total estimated project cost is $368,000, and

- the estimated effort is 46 person-months.

  • If desired, labour rates could be associated with each framework activity or software engineering task and computed separately.
Estimation with Use-Cases
  • Use-cases provide a software team with insight into software scope and requirements.
  • Problematic, because use-cases:

- are described using many different formats and styles; there is no standard form;

- represent an external view (the user's view) of the software and are often written at different levels of abstraction;

- do not address the complexity of the functions and features described;

- do not describe complex behaviour (e.g. interactions) that involves many functions and features.

Estimation with Use-Cases
  • Empirical data may be used to establish the estimated number of LOC or FP per use case. Historical data are then used to compute the effort required to develop the system.

For example: we may assume that one use-case corresponds to one FP.

  • The LOC Estimate can be computed in different ways. As an illustration consider the relationship on the next slide.
Estimation with Use-Cases

LOC estimate = N x LOCavg + [(Sa/Sh - 1) + (Pa/Ph - 1)] x LOCadjust (23-2)

where:

N = actual number of use-cases

LOCavg = historical average LOC per use-case for this type of subsystem

LOCadjust = an adjustment based on n percent of LOCavg, where n is defined locally and represents the difference between this project and 'average' projects

Sa = actual scenarios per use-case

Sh = average scenarios per use-case for this type of subsystem

Pa = actual pages per use-case

Ph = average pages per use-case for this type of subsystem
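Expression (23-2) can be sketched as a function. The example call uses the user-interface subsystem values that appear later in Example 4 (N = 6, LOCavg = 800, n = 30%, historical Sh = 12 and Ph = 5), taking the actual scenario count as 10 and the actual page count as 6, which is an assumption based on the narrative:

```python
def use_case_loc(n_cases, loc_avg, adjust_pct, s_act, s_hist, p_act, p_hist):
    """LOC estimate = N x LOCavg + [(Sa/Sh - 1) + (Pa/Ph - 1)] x LOCadjust (23-2)."""
    loc_adjust = adjust_pct / 100 * loc_avg   # n percent of LOCavg
    return n_cases * loc_avg + ((s_act / s_hist - 1) + (p_act / p_hist - 1)) * loc_adjust

loc = use_case_loc(6, 800, 30, 10, 12, 6, 5)   # ~4808 LOC for the UI subsystem
```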

Example 4: Use-Case Based Estimation
  • The CAD software introduced earlier is composed of three subsystem groups:

- User interface subsystem (includes UICF).

- Engineering subsystem group (includes the 2DGA subsystem, 3DGA subsystem, and DAM subsystem).

- Infrastructure subsystem group (includes CGDF subsystem and PCF subsystem).

Example 4: Use-Case Based Estimation
  • Six use-cases describe the user interface subsystem. Each use case is described by no more than 10 scenarios and has an average length of six pages.
  • The engineering subsystem group is described by 10 use-cases (these are considered to be at a higher level of the structural hierarchy). Each of these use-cases has no more than 20 scenarios associated with it and has an average length of eight pages.
  • The infrastructure subsystem group is described by five use-cases with an average of only six scenarios and an average length of five pages.
  • Using the relationship noted in Expression (23-2) with n = 30%, the Table in the next Figure is developed.
Example 4: Use-Case Based Estimation

[Table: for each subsystem (user interface subsystem, engineering subsystem group, infrastructure subsystem group), the number of use-cases, scenarios, and pages, together with the resulting LOC estimates.]
Example 4: Use-Case Based Estimation
  • Considering the first row of the table, historical data indicate that UI software requires an average of 800 LOC per use-case when the use-case has no more than 12 scenarios and is described in fewer than five pages.
  • Using the equation 23-2, the LOC estimate for the user interface subsystem is computed.
Example 4: Use-Case Based Estimation
  • Using 620 LOC/pm as the average productivity for systems of this type and a burdened labor rate of €8000 per month, the cost per line of code is approximately €13.
  • Based on the use-case estimate and the historical productivity data:

- the total estimated project cost is €552,000

- the estimated effort is 68 person-months.

Causes of Estimation Reconciliation Problems
  • Project scope is not adequately understood or is misinterpreted by the planner.
  • Productivity data used for problem-based estimation techniques are inappropriate or obsolete for the application.
Empirical Estimation Models
  • Typically derived from regression analysis of historical software project data with estimated person-months as the dependent variable and KLOC, FP, or object points as independent variables.
  • Constructive Cost Model (COCOMO) is an example of a static estimation model.
  • COCOMO II is a hierarchy of estimation models that take the process phase into account, making it more of a dynamic estimation model.
  • The Software Equation is an example of a dynamic estimation model.
Empirical Estimation Models
  • General Structure:

Effort = A + B x (est_var)^C

where A, B, and C are constants and

est_var is either LOC or FP

  • Produce widely differing results for the same values of LOC or FP.
  • Estimation models must be calibrated for local needs.
The COnstructive COst Model (COCOMO) model
  • An empirical model based on project experience.
  • Well-documented, ‘independent’ model which is not tied to a specific software vendor.
  • COCOMO originally proposed by Boehm in 1981, now called COCOMO81
  • Later evolved to AdaCOCOMO in 1989
  • In 1995, Boehm proposed COCOMOII
COCOMO: Note on Nomenclature
  • The original model published in 1981 went by the simple name of COCOMO. This is an acronym derived from the first two letters of each word in the longer phrase Constructive Cost Model. The word "constructive" refers to the fact that the model helps an estimator better understand the complexities of the software job to be done, and by its openness permits the estimator to know exactly why the model gives the estimate it does.
  • The new model (composed of all three submodels) was initially given the name COCOMO 2.0. However, after some confusion in how to designate subsequent releases of the software implementation of the new model, the name was permanently changed to COCOMO II. To further avoid confusion, the original COCOMO model was also then re-designated COCOMO 81.
  • All references to COCOMO found in books and literature published before 1995 refer to what is now called COCOMO 81. Most references to COCOMO published from 1995 onward refer to what is now called COCOMO II.
COCOMO 81
  • It is an open system, first published by Dr Barry Boehm in 1981
  • Worked quite well for projects in the 80’s and early 90’s
  • Could estimate results within ~20% of the actual values 68% of the time
COCOMO 81 Stages
  • COCOMO has three different stages (each one increasing with detail and accuracy):
    • Basic - applied early in a project. Gives a “ball-park” estimate based on product attributes.
    • Intermediate - applied after requirements are specified. Modifies basic estimate using project and process attributes.
    • Advanced - applied after design is complete. Estimates project phases and parts separately.
COCOMO 81: Project Types
  • Organic mode: Small teams, familiar environment, well-understood applications, no difficult non-functional requirements (EASY).
  • Semi-detached mode: Project team may have experience mixture, system may have more significant non-functional constraints, organization may have less familiarity with application (HARDER).
  • Embedded Hardware/software systems mode: Tight constraints, unusual for team to have deep application experience (HARD).
Basic COCOMO 81 Formula
  • Organic mode: PM = 2.4 x (KDSI)^1.05
  • Semi-detached mode: PM = 3.0 x (KDSI)^1.12
  • Embedded mode: PM = 3.6 x (KDSI)^1.2
  • Note: KDSI is the number of thousands of delivered source instructions. Some authors use KLOC (thousands of lines of code).
COCOMO 81: Effort (PM) = A x Size^B x M

PM = person-months

KDSI = thousands of delivered source instructions

Estimate Cost and Duration Very Early in Project

1. Use the function point method to estimate lines of code

2. Use Boehm’s formulas to estimate labor required

3. Use the labor estimate and Boehm’s formula to estimate duration

Basic COCOMO Formulae (Boehm)

Effort in person-months = a x KLOC^b

Duration = c x Effort^d

where a, b, c, and d are constants that depend on the project type:

Software Project   a     b     c     d

Organic            2.4   1.05  2.5   0.38

Semidetached       3.0   1.12  2.5   0.35

Embedded           3.6   1.20  2.5   0.32
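The coefficient table translates directly into code. As an illustration, the call below applies organic mode to the CAD system's 33,200-LOC estimate from the earlier examples (treating it as organic is an assumption made here for illustration):

```python
# (a, b, c, d) per project type, from Boehm's Basic COCOMO
COCOMO81 = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, duration in months)."""
    a, b, c, d = COCOMO81[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

effort, duration = basic_cocomo(33.2, "organic")   # ~95 pm, ~14 months
```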

Basic COCOMO Model: When Should You Use It
  • Basic COCOMO is good for quick, early, rough order-of-magnitude estimates of software costs
Basic COCOMO Model:Limitations
  • Its accuracy is necessarily limited because it omits factors that have a significant influence on software costs
  • The Basic COCOMO estimates are within a factor of 1.3 only 29% of the time, and within a factor of 2 only 60% of the time
Problems with COCOMO
  • Amount of judgment required in determining values for cost adjustment factors and which mode applies to the software
  • Your data may not match the data used to develop COCOMO, and your company may not want to collect the data needed to correlate the model
  • COCOMO excludes the following activities
    • Management
    • Overhead Costs
    • Travel and Other Incidental Costs
    • Environmental Factors
  • COCOMO assumes a basic waterfall process model
    • 30% design; 30% coding; 40% integration and test
    • Your process may not match up well
COCOMO and Support Activities
  • COCOMO assumes a very basic level of effort for Configuration Management and Quality Assurance
    • COCOMO assumes about 5% of the total budget for both
    • This is based on typical commercial practice at the time COCOMO was established
  • But they can take 2-4 times this much with modern software engineering practices and typical complexity of modern software products
  • Some tools add extra effort in these areas
Basic COCOMO Model:Example 1
  • We have determined our project fits the characteristics of Semi-Detached mode
  • We estimate our project will have 32,000 Delivered Source Instructions. Using the formulas, we can estimate:
  • Effort = 3.0 x (32)^1.12 = 146 person-months
  • Schedule = 2.5 x (146)^0.35 = 14 months
  • Productivity = 32,000 DSI / 146 PM = 219 DSI/PM
  • Average staffing = 146 PM / 14 months ≈ 10 FSP
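The example's numbers can be reproduced step by step (small differences against hand calculations come from rounding intermediate results, as the slide does):

```python
effort = round(3.0 * 32 ** 1.12)         # 146 person-months (semi-detached, 32 KDSI)
schedule = round(2.5 * effort ** 0.35)   # 14 months
productivity = 32_000 // effort          # 219 DSI per person-month
avg_staff = round(effort / schedule)     # ~10 full-time staff
```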
COCOMO II
  • Main objectives of COCOMO II:
    • To develop a software cost and schedule estimation model tuned to the life cycle practices of the 1990’s and 2000’s
    • To develop software cost database and tool support capabilities for continuous model improvement.
    • To help people reason about the cost and schedule implications of their software decisions.
COCOMO II - B. Boehm (1995)
  • Why ?
    • allow for spiral model instead of waterfall model only
    • Use FP and LOC for Sizing
  • Assumptions
    • COCOMO II includes same activities as COCOMO 81
      • development activities are included: documentation, planning & control, software configuration management (CM)
      • excluded: database management, general CM, Management
    • COCOMO II has add-on effort for back-end Transition Phase (conversion, installation, training)
    • Labour categories: direct project-charged + overhead (e.g. project mgr), but not secretaries, higher management, computer centre operators, ...
COCOMO II

Hierarchy of three different models:

  • The Application Composition Model
    • Good for projects built using rapid application development tools (GUI-builders etc.). Based on new Object Points.
  • The Early Design Model
    • This model can get rough estimates before the entire architecture has been decided. It uses a small set of new Cost Drivers, and new estimating equations. Based on Unadjusted Function Points or KLOC.
  • The Post-Architecture Model
    • The most detailed model, used after you've developed your project's overall architecture. It has new cost drivers, new line-counting rules, and new equations.
COCOMO II Sizing Information
  • Three different sizing options are available as part of the model hierarchy:

- Object points

- Function points

- Lines of source code

COCOMO II Application Composition Model
  • Used during the early stages of software engineering, when prototyping of user interfaces, consideration of software and system interaction, assessment of performance, and evaluation of technology maturity are paramount.
  • Uses object points
Object Points
  • Object points (alternatively named application points) are an alternative function-related measure to function points.
  • Object points are NOT the same as object classes.
  • The number of object points in a program is a weighted estimate of
    • The number of separate screens that are displayed;
    • The number of reports that are produced by the system;
    • The number of program modules that must be developed to supplement the database code;
Object Point Estimation
  • Object points are easier to estimate from a specification than function points as they are simply concerned with screens, reports and programming language modules.
  • They can therefore be estimated at a fairly early point in the development process.
  • At this stage, it is very difficult to estimate the number of lines of code in a system.
Object Points: Complexity
  • Each object instance (e.g., a screen or report) is classified into one of three complexity levels:

- simple,

- medium, or

- difficult

  • Complexity is a function of the number and source of the client and server data tables that are required to generate the screen or report and the number of views or sections presented as part of the screen or report.
Application Composition Model I – Table 23.6
  • Once complexity is determined, the numbers of screens, reports, and components are weighted according to Table 23.6.
Application Composition Model II
  • The object point count is then determined by multiplying the original number of object instances by the weighting factor (wf) in the figure and summing to obtain a total object point count.

object points = screen x wf + report x wf + component x wf

  • When component-based development or general software reuse is to be applied, the percent of reuse (%reuse) is estimated and the object point count is adjusted.

NOP = (object points) x [(100 - %reuse)/100]

where NOP is defined as new object points.

Application Composition Model III
  • To derive an estimate of effort based on the computed NOP value, a ‘productivity rate’ must be derived.

PROD = NOP/person-month

  • Once the productivity rate has been determined, an estimate of project effort can be derived as

estimated effort = NOP/PROD
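The three steps above (object point count, reuse adjustment, productivity) can be sketched as follows. The function names and default weights are our own illustrative choices; the defaults assume medium complexity from Table 23.6 (screen = 2, report = 5, 3GL component = 10):

```python
def object_points(screens, reports, components,
                  w_screen=2, w_report=5, w_component=10):
    """Unadjusted object point count, weighted per Table 23.6."""
    return screens * w_screen + reports * w_report + components * w_component

def new_object_points(op, pct_reuse):
    """NOP: adjust object points for the estimated percentage of reuse."""
    return op * (100 - pct_reuse) / 100

def estimated_effort(nop, prod):
    """Effort in person-months, given a productivity rate PROD (NOP/person-month)."""
    return nop / prod
```

For example, `object_points(12, 10, 80)` gives 874 object points, and with 80% reuse `new_object_points(874, 80)` gives 174.8 NOP.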

Major Decision Situations Helped by COCOMO II
  • Software investment decisions
    • When to develop, reuse, or purchase
    • What legacy software to modify or phase out
  • Setting project budgets and schedules
  • Negotiating cost/schedule/performance tradeoffs
  • Making software risk management decisions
  • Making software improvement decisions
    • Reuse, tools, process maturity, outsourcing
COCOMO II Differences
  • The fixed exponent b in the effort equation is replaced with a variable value derived from five scale factors rather than a constant
  • The size of the project can be expressed in object points, function points or source lines of code (SLOC).
  • A breakage rating has been added to address the volatility of the system
COCOMO II Calibration
  • For COCOMO II results to be accurate the model must be calibrated
  • Calibration requires that all cost driver parameters be adjusted
  • Requires lots of data, usually more than any one company has
  • The plan was to release calibrations each year but so far only two calibrations have been done (II.1997, II.1998)
  • Users can submit data from their own projects to be used in future calibrations
Importance of Calibration
  • Proper calibration is very important
  • The original COCOMO II.1997 could estimate within 20% of the actual values 46% of the time. This was based on 83 data points.
  • The recalibration for COCOMO II.1998 could estimate within 30% of the actual values 75% of the time. This was based on 161 data points.
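The accuracy figures quoted above are instances of the PRED(k) metric: the fraction of projects whose estimate falls within k of the actual value. A minimal sketch (the function name and the sample data are illustrative):

```python
def pred(estimates, actuals, k=0.30):
    """Fraction of estimates within k (e.g. 0.30 = 30%) of the actual value."""
    hits = sum(1 for est, act in zip(estimates, actuals)
               if abs(est - act) <= k * act)
    return hits / len(actuals)

# Three hypothetical projects: estimated vs. actual person-months.
# 100 vs 110 and 300 vs 290 are within 30%; 150 vs 240 is not.
print(pred([100, 150, 300], [110, 240, 290], k=0.30))
```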
COCOMO II Example


Use the COCOMO II model to estimate the effort required to build software for a simple ATM that requires 12 screens, 10 reports, and approximately 80 software components. Assume average complexity and average developer/environment maturity. Use the application composition model with object points.

COCOMO II Example - Solution


Using the weightings from Table 23.6 the unadjusted object points are:

object points = 12 * 2 + 10 * 5 + 80 * 10 = 874

If we assume 80% reuse

NOP = (object points) * [(100 - %reuse)/100]

= (874) * [(100 – 80)/100]

= 874 * 0.2 = 174.8

Using the weightings from Table 23.7 for nominal developer experience

PROD = 13

The estimated effort in person months is

estimated effort = NOP/PROD = 174.8 / 13 = 13.45

Is COCOMO the Best?
  • COCOMO is the most popular method; however, for any software cost estimate you should use more than one method.
  • It is best to use another method that differs significantly from COCOMO so your project is examined from more than one angle.
  • Even companies that sell COCOMO-based products recommend using more than one method. Softstar (creators of Costar) will even provide you with contact information for their competitors' products.
COCOMO Conclusions
  • COCOMO is the most popular software cost estimation method.
  • Easy to use; small estimates can be done by hand.
  • USC (sunset.usc.edu/research/COCOMOII/cocomo_main.html) has a free graphical version available for download.
  • Many commercial versions based on COCOMO are available; they supply support and more data, but at a price.
Estimation for OO Projects-I

Lorenz and Kidd suggest the following approach:

1. Develop estimates using effort decomposition, FP analysis, and any other method that is applicable for conventional applications.

2. Using object-oriented analysis modeling, develop use-cases and determine a count.

3. From the analysis model, determine the number of key classes (called analysis classes).

Estimation for OO Projects-II
4. Categorize the type of interface for the application and develop a multiplier for support classes:

Interface type Multiplier

No GUI 2.0

Text-based user interface 2.25

GUI 2.5

Complex GUI 3.0

Multiply the number of key classes (step 3) by the multiplier to obtain an estimate for the number of support classes.

5. Multiply the total number of classes (key + support) by the average number of work-units per class. Lorenz and Kidd suggest 15 to 20 person-days per class.

6. Cross check the class-based estimate by multiplying the number of use-cases by the average number of work-units per use-case.
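Steps 3 to 5 can be sketched as follows. The multiplier table is taken from the slide above; the function name and the 18 person-days default are illustrative choices within the suggested 15 to 20 range:

```python
# Interface-type multipliers for support classes (Lorenz and Kidd).
SUPPORT_MULTIPLIER = {
    "no_gui": 2.0,
    "text_ui": 2.25,
    "gui": 2.5,
    "complex_gui": 3.0,
}

def oo_effort(key_classes, interface="gui", days_per_class=18):
    """Person-day estimate: (key + support classes) * work-units per class."""
    support = key_classes * SUPPORT_MULTIPLIER[interface]
    total_classes = key_classes + support
    return total_classes * days_per_class
```

For example, 20 key classes behind a GUI give 20 x 2.5 = 50 support classes and (20 + 50) x 18 = 1260 person-days.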

Estimation for Agile Projects

1. Each user scenario (a mini-use-case) is considered separately for estimation purposes.

2. The scenario is decomposed into the set of software engineering tasks that will be required to develop it.

3. Each task is estimated separately. Note: estimation can be based on historical data, an empirical model, or “experience.”

  • Alternatively, the ‘volume’ of the scenario can be estimated in LOC, FP or some other volume-oriented measure (e.g., use-case count).

4. Estimates for each task are summed to create an estimate for the scenario.

  • Alternatively, the volume estimate for the scenario is translated into effort using historical data.

5. The effort estimates for all scenarios that are to be implemented for a given software increment are summed to develop the effort estimate for the increment.
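Steps 1 to 5 above amount to two nested sums; a minimal sketch, where the scenario names and task values are illustrative:

```python
def scenario_estimate(task_estimates):
    """Step 4: sum the per-task estimates to get one scenario's estimate."""
    return sum(task_estimates)

def increment_estimate(scenarios):
    """Step 5: sum scenario estimates for the whole increment."""
    return sum(scenario_estimate(tasks) for tasks in scenarios.values())

# Per-scenario task estimates in person-days (design, code, test).
increment = {
    "login scenario": [0.5, 1.0, 0.5],
    "checkout scenario": [1.0, 2.0, 1.0],
}
print(increment_estimate(increment))  # 6.0 person-days
```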

Make-Buy Decision
  • It may be more cost effective to acquire a piece of software rather than develop it.
  • Decision tree analysis provides a systematic way to sort through the make-buy decision.
  • As a rule, outsourcing software development requires more skillful management than does in-house development of the same product.
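A decision-tree analysis reduces each path (build, buy, reuse, contract) to a probability-weighted expected cost and picks the cheapest. The probabilities and costs below are invented for illustration:

```python
def expected_cost(outcomes):
    """outcomes: list of (probability, cost) pairs for one decision path."""
    return sum(p * cost for p, cost in outcomes)

# Hypothetical paths: each outcome is (probability, cost in dollars).
paths = {
    "build": [(0.30, 380_000), (0.70, 450_000)],  # simple vs. difficult project
    "buy":   [(0.70, 210_000), (0.30, 310_000)],  # minor vs. major changes
}
best = min(paths, key=lambda name: expected_cost(paths[name]))
# build is about 429,000 and buy about 240,000, so best is "buy"
```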
Make-Buy Decision Importance

Why is the "make-buy" decision and deciding whether or not to outsource software development an important part of the software planning process?

It may be more cost effective to acquire a piece of software than to develop it. Similarly, outsourcing software development frees resources for other purposes (or reduces expenses), but it can make delivery times and development costs harder to control and manage.

  • The software project planner must estimate three things before a project begins: how long will it take, how much effort will be required, and how many people will be involved.
  • In addition, the planner must predict the resources (hardware and software) that will be required and the risk involved.
Basic COCOMO Model: Example 2

Example 2:

A program allows students to add courses to and remove courses from their schedules. Students can look up the database of courses to see when each course is scheduled.

The university administrator edits the database of courses: adds and removes the courses from the database.

Initial analysis of the problem has identified the use cases shown in Figure 1. Use function point analysis and Basic COCOMO to estimate the effort and time required to develop this system using Java. Use the provided tables. Weight all the use cases high, with a weight of 7.

Basic COCOMO Model: Example 2: Figure 1

[Figure 1: a use-case diagram with six use cases: Add Course, Remove Course, Look up course, Edit database of courses, Add course to database, Remove course from database.]
Basic COCOMO Model: Example 2 Solution


  • Every use case counts as an element of functionality. There are 6 use cases and so 6 function points. Weighting all the use cases high, with a weight of 7, gives an FP contribution of 42 so far (6 x 7).
  • We shall now adjust the FP value for complexity using values for the 14 factors, as shown in the table below.
Basic COCOMO Model: Example 2 Solution

FPadjusted = FP x (0.65 + 0.01 x ΣFi)

Inserting the total of 25, from the table above, into the equation for FPadjusted gives

FPadjusted = 42 x (0.65 + 0.01 x 25) = 42 x 0.9 = 37.8

Using Table 2, relating function points to lines of code, this suggests 30 lines of OO code per function point, so we would need to develop about 1134 lines of OO code to implement this system.

37.8 x 30 = 1134, rounded to roughly 1100 LOC (1.1 KLOC)

Basic COCOMO Model: Example 2 Solution

Viewing the scheduling system as an exceptionally simple organic system, take the parameters a,b from the basic COCOMO model in Table 3, to obtain Effort.

E = a x (KLOC)^b

E = 2.4 x 1.1^1.05

E = 2.4 x 1.105 = 2.65, rounded up to 2.7 person-months.

The optimal duration is then found using parameters c, d for the Basic COCOMO model in the Table 3.

D = c x E^d

D = 2.5 x 2.7^0.38 = 2.5 x 1.46 = 3.65

rounded up to 3.7 months.
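The whole Example 2 calculation can be checked in a few lines. The constants are the organic-mode Basic COCOMO parameters (a = 2.4, b = 1.05, c = 2.5, d = 0.38) and the 30 LOC per function point ratio used above:

```python
fp = 6 * 7                         # 6 use cases, each weighted 7 -> 42 FP
fp_adj = fp * (0.65 + 0.01 * 25)   # sum of the 14 complexity factors = 25 -> 37.8
loc = fp_adj * 30                  # 30 LOC of OO code per FP -> 1134 LOC
kloc = round(loc / 1000, 1)        # roughly 1.1 KLOC

effort = 2.4 * kloc ** 1.05        # Basic COCOMO effort, ~2.65 person-months
duration = 2.5 * effort ** 0.38    # optimal duration, ~3.6 months
```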



John D Lin - [email protected]


Tues & Thurs 10:00-11:30 RM. 100, Lower Block