SE 325/425 Principles and Practices of Software Engineering, Autumn 2006


Presentation Transcript


SE 325/425 Principles and Practices of Software Engineering
Autumn 2006

James Nowotarski

17 October 2006


Today’s Agenda

Topic / Duration

  • Testing recap: 20 minutes
  • Project planning & estimating: 60 minutes
    *** Break
  • Current event reports: 30 minutes
  • Software metrics: 60 minutes


Verification & Validation

  • Testing is just part of a broader topic referred to as Verification and Validation (V&V)

  • Pressman/Boehm:

    • Verification: Are we building the product right?

    • Validation: Are we building the right product?


Stage Containment

(Diagram: the framework activities, Planning & Managing over Communication (project initiation, requirements), Modeling (analysis, design), Construction (code, test), and Deployment (delivery, support), annotated with error origination and error detection points; labels: error, defect, fault.)


V-Model

(Diagram: specification activities descend the left arm, Requirements, Functional Design, Technical Design, Detailed Design, down to Code; test activities ascend the right arm, Unit Test, Integration Test, System Test, Acceptance Test. Legend: Validation, Verification, Flow of Work.)

Testing: test that the product implements the specification.


Test Coverage Metrics

  • Statement coverage: goal is to execute each statement at least once.
  • Branch coverage: goal is to execute each branch at least once.
  • Path coverage: goal is to execute each path, where a path is a feasible sequence of statements that can be taken during the execution of the program.

What % of each type of coverage does this test execution provide? (Flow-graph legend: tested vs. not tested nodes.)

  • Statement coverage: 5/10 = 50%
  • Branch coverage: 2/6 ≈ 33%
  • Path coverage: 1/4 = 25% (Where does the 4 come from?)
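The difference between statement and branch coverage can be seen in a tiny sketch (hypothetical function and test input, echoing the min/max range check used later in the flow-graph example):

```python
# A minimal sketch: one test input can give full statement coverage
# while leaving branch coverage incomplete.
def in_range(value, minimum, maximum):
    valid = False
    if minimum <= value <= maximum:  # one predicate -> two branches
        valid = True
    return valid

# A single test with an in-range value executes every statement,
# but only the "true" branch: 100% statement coverage, 50% branch coverage.
assert in_range(5, 0, 10) is True
```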


Example of pair programming

  • “Since then, [Adobe’s] Mr. Ludwig has adopted Fortify software and improved communication between his team of security experts and programmers who write software. A few years ago, each group worked more or less separately: The programmers coded, then the quality-assurance team checked for mistakes. Now, programmers and security types often sit side by side at a computer, sometimes lobbing pieces of code back and forth several times a day until they believe it is airtight. The result: ‘Issues are being found earlier,’ Mr. Ludwig says. But, he adds, ‘I'm still trying to shift that curve.’ “

    Vara, V. (2006, May 4). Tech companies check software earlier for flaws. Wall Street Journal. Retrieved October 16, 2006, from http://online.wsj.com/public/article/SB114670277515443282-qA6x6jia_8OO97Lutaoou7Ddjz0_20060603.html?mod=tff_main_tff_top


V-Model

(Diagram: the V-model with test levels grouped: Unit Test and Integration Test are white-box; System Test and Acceptance Test are black-box.)


Flow Graph Notation

(Diagram: flow-graph fragments for the basic control constructs: sequence, if, case, while, until.)

Each circle represents one or more nonbranching source code statements.


Continued…

(Flow graph: nodes 1-13 map, in order, to the statements of this procedure.)

i = 1;
total.input = total.valid = 0;
sum = 0;
DO WHILE value[i] <> -999 AND total.input < 100
  increment total.input by 1;
  IF value[i] >= minimum AND value[i] <= maximum
    THEN increment total.valid by 1;
      sum = sum + value[i]
    ELSE skip
  ENDIF
  increment i by 1;
ENDDO

IF total.valid > 0
  THEN average = sum / total.valid;
  ELSE average = -999;
ENDIF

END average


Steps for deriving test cases

(Flow graph with nodes 1-13, as above.)

1. Use the design or code as a foundation and draw the corresponding flow graph.

2. Determine the cyclomatic complexity of the resultant flow graph.

V(G) = 17 edges - 13 nodes + 2 = 6
V(G) = 5 predicate nodes + 1 = 6
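Both formulas can be checked directly (edge, node, and predicate counts taken from the flow graph above):

```python
# Cyclomatic complexity two ways, using the slide's counts.
edges, nodes, predicate_nodes = 17, 13, 5

v_g_edges = edges - nodes + 2         # V(G) = E - N + 2
v_g_predicates = predicate_nodes + 1  # V(G) = P + 1

assert v_g_edges == v_g_predicates == 6  # 6 independent paths to cover
```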


Steps for deriving test cases

(Flow graph with nodes 1-13, as above.)

3. Determine a basis set of linearly independent paths.

Path 1: 1-2-10-11-13
Path 2: 1-2-10-12-13
Path 3: 1-2-3-10-11-13
Path 4: 1-2-3-4-5-8-9-2…
Path 5: 1-2-3-4-5-6-8-9-2…
Path 6: 1-2-3-4-5-6-7-8-9-2…

4. Prepare test cases that will force execution of each path in the basis set.


Today’s Agenda

Topic / Duration

  • Testing recap: 20 minutes
  • Project planning & estimating: 60 minutes
    *** Break
  • Current event reports: 30 minutes
  • Software metrics: 60 minutes


People trump process

“A successful software methodology (not new, others have suggested it):
(1) Hire really smart people
(2) Set some basic direction/goals
(3) Get the hell out of the way
In addition to the steps above, there's another key: RETENTION”

http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html


Our focus

(Diagram: Project Management, i.e. Planning & Managing, spanning the framework activities: Communication (project initiation, requirements), Modeling (analysis, design), Construction (code, test), and Deployment (delivery, support).)

Planning & Managing

Project Management Institute knowledge areas:

  • Scope

  • Time

  • Cost

  • People

  • Quality

  • Risk

  • Integration (incl. change)

  • Communications

  • Procurement


Today’s focus

(Diagram: the estimating process.)

1. Negotiate reqts: users and requirements in; negotiated requirements out.
2. Decompose: produces the work breakdown structure.
3. Estimate size: produces deliverable size.
4. Estimate resources: applies a productivity rate to produce work-months.
5. Develop schedule: produces the schedule.

Iterate as necessary.


Work Breakdown Structure

  • Breaks project into a hierarchy.

  • Creates a clear project structure.

  • Avoids risk of missing project elements.

  • Enables clarity of high level planning.


Today’s focus

(The same estimating-process diagram repeated: negotiate reqts; decompose; estimate size; estimate resources; develop schedule; iterate as necessary.)


Units of Size

  • Lines of code (LOC)

  • Function points (FP)

  • Components


LOC

How many physical source lines are there in this C language program?

#define LOWER 0   /* lower limit of table */
#define UPPER 300 /* upper limit */
#define STEP 20   /* step size */
main() /* print a Fahrenheit->Celsius conversion table */
{
    int fahr;
    for (fahr = LOWER; fahr <= UPPER; fahr = fahr + STEP)
        printf("%4d %6.1f\n", fahr, (5.0/9.0)*(fahr-32));
}
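One way to make the question concrete is to count under an explicit convention. The sketch below assumes the simplest rule, that every non-blank physical line counts; other standards differ, which is the point of the next slide:

```python
# Count physical source lines of the C program above, under the
# (assumed) convention that every non-blank line counts.
program = """\
#define LOWER 0
#define UPPER 300
#define STEP 20
main()
{
int fahr;
for(fahr=LOWER; fahr<=UPPER; fahr=fahr+STEP)
printf("%4d %6.1f\\n", fahr, (5.0/9.0)*(fahr-32));
}"""

physical_lines = sum(1 for line in program.splitlines() if line.strip())
print(physical_lines)  # 9 under this convention
```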


LOC

Need standards to ensure repeatable, consistent size counts

  • Categories a counting standard must decide to include or exclude:

    • Executable statements
    • Nonexecutable lines: declarations; compiler directives
    • Comments: on their own lines; on lines with source
    • . . .


A Case Study

  • Computer Aided Design (CAD) for mechanical components.

  • System is to execute on an engineering workstation.

  • Interface with various computer graphics peripherals including a mouse, digitizer, high-resolution color display, & laser printer.

  • Accepts two- and three-dimensional geometric data from an engineer.

  • Engineer interacts with and controls CAD through a user interface.

  • All geometric data & supporting data will be maintained in a CAD database.

  • Required output will display on a variety of graphics devices.

Assume the following major software functions are identified


Estimation of LOC

  • CAD program to represent mechanical parts

  • Estimated LOC = (Optimistic + 4 × Likely + Pessimistic) / 6
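As a sketch, the three-point formula in code, with hypothetical optimistic/likely/pessimistic LOC figures for one of the CAD functions (the actual figures are not in the transcript):

```python
# Three-point (expected-value) LOC estimate: (opt + 4*likely + pess) / 6.
def estimated_loc(optimistic, likely, pessimistic):
    return (optimistic + 4 * likely + pessimistic) / 6

# Hypothetical figures for, say, the 3D geometric analysis function:
print(estimated_loc(4600, 6900, 8600))  # 6800.0 expected LOC
```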


LOC

  • “Lines of code is a useless measurement in the face of code that shrinks when we learn better ways of programming” (Kent Beck)


Function Points

  • A measure of the size of computer applications

  • The size is measured from a functional, or user, point of view.

  • It is independent of the computer language, development methodology, technology or capability of the project team used to develop the application.

  • Can be subjective

  • Can be estimated EARLY in the software development life cycle

  • Two flavors:

    • Delivered size = total application size delivered, including packages, assets, etc.

    • Developed size = portion built for the release


Computing Function Points

(Count table; weighted subtotals reconstructed from the slide's figures:)

  Number of inputs: 5 → 15
  Number of outputs: 8 → 32
  Number of inquiries: 10 → 40
  Number of files: 8 → 80
  Number of interfaces: 2 → 10
  Count total: 177


Calculate Degree of Influence (DI)

Rating scale: 0 = No influence, 1 = Incidental, 2 = Moderate, 3 = Average, 4 = Significant, 5 = Essential

(Each factor's rating from the slide is shown in parentheses.)

  • Does the system require reliable backup and recovery? (3)
  • Are data communications required? (4)
  • Are there distributed processing functions? (1)
  • Is performance critical? (3)
  • Will the system run in an existing, heavily utilized operational environment? (2)
  • Does the system require on-line data entry? (4)
  • Does the on-line data entry require the input transaction to be built over multiple screens or operations? (3)
  • Are the master files updated on-line? (3)
  • Are the inputs, outputs, files, or inquiries complex? (2)
  • Is the internal processing complex? (1)
  • Is the code designed to be reusable? (3)
  • Are conversion and installation included in the design? (5)
  • Is the system designed for multiple installations in different organizations? (1)
  • Is the application designed to facilitate change and ease of use by the user? (1)


The FP Calculation:

  • Inputs include:
    • Count Total (UFP)
    • DI = Σ Fi (i.e., sum of the adjustment factors F1..F14)
  • Calculate function points using the following formula:
    FP = UFP × [0.65 + 0.01 × Σ Fi]
  • In this example:
    FP = 177 × [0.65 + 0.01 × (3+4+1+3+2+4+3+3+2+1+3+5+1+1)]
    FP = 177 × [0.65 + 0.01 × 36]
    FP = 177 × [0.65 + 0.36]
    FP = 177 × [1.01]
    FP = 178.77

TCF: Technical complexity factor
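The whole calculation fits in a few lines (counts and ratings as on the preceding slides):

```python
# FP = UFP x [0.65 + 0.01 x sum(Fi)], with the example's numbers.
ufp = 177  # unadjusted count total
ratings = [3, 4, 1, 3, 2, 4, 3, 3, 2, 1, 3, 5, 1, 1]  # F1..F14

di = sum(ratings)              # degree of influence = 36
fp = ufp * (0.65 + 0.01 * di)  # technical complexity factor = 1.01

print(round(fp, 2))  # 178.77
```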


Reconciling FP and LOC

http://www.theadvisors.com/langcomparison.htm


Components

Criteria:

Simple –

Medium –

Hard –


Bottom-up estimating

  • Divide project into size units (LOC, FP, components)

  • Estimate person-hours per size unit

  • Most projects are estimated in this way, once details are known about size units


Project Management

(Diagram: the same framework activities, Communication, Modeling, Construction, and Deployment, with top-down estimating working from the overall project downward and bottom-up estimating rolling individual size units upward.)


Using FP to estimate effort:

  • If for a certain project

    • FPEstimated = 372

    • Organization’s average productivity for systems of this type is 6.5 FP/person month.

    • Burdened labor rate of $8000 per month

  • Cost per FP

    • $8000 / 6.5 ≈ $1230

  • Total project cost

    • 372 × $1230 = $457,560
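In code (a sketch; note that rounding $8000/6.5 down to $1230 per FP before multiplying gives $457,560, while the unrounded figure is about $457,846):

```python
# Cost estimate from function points, using the rounded cost-per-FP figure.
fp_estimated = 372
productivity = 6.5       # FP per person-month
labor_rate = 8000        # burdened $ per person-month

cost_per_fp = int(labor_rate / productivity)  # $1230 (rounded down)
total_cost = fp_estimated * cost_per_fp       # $457,560
effort_months = fp_estimated / productivity   # ~57 person-months

print(cost_per_fp, total_cost)
```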


Empirical Estimation Models

  • Empirical data supporting most empirical models is derived from a limited sample of projects.

  • NO estimation model is suitable for all classes of software projects.

  • USE the results judiciously.

  • General model:

    E = A + B × (ev)^C, where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the estimation variable (either LOC or FP)
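As a sketch, the general form in code; the constants below are purely illustrative, not calibrated to any real data set:

```python
# General empirical model: E = A + B * (ev)**C.
def effort(ev, A, B, C):
    """Effort in person-months; ev is size in LOC (here KLOC) or FP."""
    return A + B * ev ** C

# Illustrative (made-up) constants applied to a 33.2 KLOC system:
print(round(effort(33.2, 5.2, 0.91, 1.05), 1))
```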


Be sure to include contingency

The earlier “completed programs” size and effort data points in Figure 2 are the actual sizes and efforts of seven software products built to an imprecisely-defined specification [Boehm et al. 1984]†. The later

“USAF/ESD proposals” data points are from five proposals submitted to the U.S. Air Force Electronic Systems Division in response to a fairly thorough specification [Devenny 1976].

http://sunset.usc.edu/research/COCOMOII/index.html


Some famous words from Aristotle

It is the mark of an instructed mind to rest satisfied with the degree of precision which the nature of a subject admits, and not to seek exactness when only approximation of the truth is possible….

Aristotle (384-322 B.C.)


Gantt Schedule

  • View the project in the context of time.

  • Critical for monitoring a schedule.

  • Granularity: 1-2 weeks.


Gantt Example 1:

Suppose a project comprises five activities: A, B, C, D, and E. A and B have no preceding activities, but activity C requires that activity B must be completed before it can begin. Activity D cannot start until both activities A and B are complete. Activity E requires activities A and C to be completed before it can start. If the activity times are A: 9 days; B: 3 days; C: 9 days; D: 5 days; and E: 4 days, determine the shortest time necessary to complete this project.

Identify those activities which are critical in terms of completing the project in the shortest possible time.

http://acru.tuke.sk/doc/PM_Text/PM_Text.doc
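A sketch of the forward/backward-pass computation for this example (activity data transcribed from the problem statement):

```python
# Forward/backward pass over the five activities to find the shortest
# completion time and the critical (zero-slack) activities.
durations = {"A": 9, "B": 3, "C": 9, "D": 5, "E": 4}
preds = {"A": [], "B": [], "C": ["B"], "D": ["A", "B"], "E": ["A", "C"]}

# Forward pass: earliest finish times.
ef = {}
for act in ["A", "B", "C", "D", "E"]:  # already topologically ordered
    ef[act] = max((ef[p] for p in preds[act]), default=0) + durations[act]
project_time = max(ef.values())

# Backward pass: latest finish times; zero slack marks the critical path.
lf = {act: project_time for act in durations}
for act in ["E", "D", "C", "B", "A"]:
    for p in preds[act]:
        lf[p] = min(lf[p], lf[act] - durations[act])
critical = [a for a in durations if lf[a] - ef[a] == 0]

print(project_time, critical)  # 16 ['B', 'C', 'E']
```

The zero-slack activities B, C, and E form the critical path; A and D each carry 2 days of slack.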


Gantt Example 2:

Construct a Gantt chart which will provide an overview of the planned project.

How soon could the project be completed?

Which activities need to be completed on time in order to ensure that the project is completed as soon as possible?

http://acru.tuke.sk/doc/PM_Text/PM_Text.doc


Estimating Schedule Time

  • Rule of thumb (empirical):

    Schedule time (months) = 3.0 × (person-months)^(1/3)
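For example, applying the rule of thumb to a hypothetical 95 person-month project:

```python
# Rule of thumb: schedule time in months = 3.0 * (person-months)**(1/3).
def schedule_months(person_months):
    return 3.0 * person_months ** (1.0 / 3.0)

print(round(schedule_months(95), 1))  # ~13.7 calendar months
```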


People trump process

One good programmer will always outcode 100 hacks in the long run, no matter how good of a process or IDE you give them

http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html


Today’s Agenda

Topic / Duration

  • Testing recap: 20 minutes
  • Project planning & estimating: 60 minutes
    *** Break
  • Current event reports: 30 minutes
  • Software metrics: 60 minutes


Why Measure?

  • “You can’t control what you can’t measure” (Tom Demarco)

  • “Show me how you will measure me, and I will show you how I will perform” (Eli Goldratt)

  • “Anything that can’t be measured doesn’t exist” (Locke, Berkeley, Hume)


Scope of our discussion

(Diagram: a sample IT organization. A Director of IS/IT oversees a Manager of Systems Development & Maintenance, responsible for Manufacturing Systems, Financial Systems, and Customer Fulfillment Systems, and a Manager of Computer Operations. Our focus is the systems development side.)


Examples of systems development metrics


Example: Speed of delivery

(Scatter plot: Elapsed Months, 0-70, vs. Developed Function Points, 0-12,000. Each point is a single project release; average elapsed months = 14.8, n = 33. The Industry Average line is determined from Software Productivity Research.)


Example: Schedule reliability

(Scatter plot: Schedule Variance above commitment, 0-60%, vs. Developed Function Points, 2,000-12,000. Each point is a single project release, n = 33. The Industry Average line is determined from Software Productivity Research.)


Example: Software quality

(Scatter plot: Faults reported over the first three months in operations, 0-7,000, vs. Developed Function Points, 0-12,000; n = 27. The industry-average line assumes that half the total faults are found in the first three months of operation; it is one half of the industry average of total faults from C. Jones, Applied Software Measurement, 1996, p. 232.)


Example: Productivity

(Scatter plot: Function Points per Staff Month, 0-12, vs. Developed Function Points, 0-12,000. Each point is a single project release, n = 33. The Industry Average line is determined from Software Productivity Research.)


Objectives of Software Measurement

  • Improve planning, estimating, and staffing; bring marketing down to reality

  • Understand software quality, productivity of people, ability to deliver on time

  • Identify areas for improvement

  • Compare your performance


Objectives of Software Measurement

  • Help a systems development unit understand their performance

  • Evaluate performance relative to goals

  • Allow for comparisons to, e.g.,:

    • Other organizations

    • Alternative development approaches (custom, packaged, outsourced, etc.) and technologies

    • Other standards/targets

  • Improve estimating ability

  • Promote desired behaviors, e.g., reuse


Hawthorne Effect

  • Famous study conducted in the Hawthorne plant of the Western Electric Company

  • Plant managers implemented changes in working conditions and recorded data on the plant’s production output

  • They found that production increased no matter what changes in working conditions they implemented!

What does this example reveal about how people act when they know that an experiment is being conducted?


Goal Question Metric

(Diagram: a tree. Goals, e.g. Goal 1 and Goal 2, at the top; each goal decomposes into several questions; each question is answered by one or more metrics.)


Goal Question Metric

  • Technique for identifying suitable measurements to collect

    • Assumption: It is only worthwhile measuring things to satisfy goals

  • Goals are desired end states

  • Questions identify the information needs associated with goals, help determine whether or not goals are being met

  • Metrics are specific items that can be measured to answer the questions


GQM Example (High Level)

Goal: Improve systems delivery performance

Question → Metric:

  • What is the quality of our deliverables? → Fault density
  • How predictable is our process? → Duration variance percentage
  • How quickly do we deliver? → Delivery rate
  • How efficient are we? → Productivity rate


Case Study Exercise

  • Get team assignment

  • Read the case study

  • Review example exercise

  • Identify 1 goal

  • Identify 2-3 questions pertinent to this goal.

  • Identify at least 1 metric (indicator) per question

  • Brief the class


Measurement and Continuous Improvement

(Diagram: Measurement and Continuous Improvement feed one another.)


Measurement and Continuous Improvement

  • Clarifies measurement’s purpose and role

  • Clarifies which measures to collect

  • Provides a mechanism for acting on findings

  • Enables top-to-bottom organizational support

  • Focuses program objectives

  • Enables tracking of improvement progress

  • Enables communication of program benefit



Continuous Process Improvement

Approach to Quality and Measurement (the Plan-Do-Check-Act cycle):

1. Identify performance standards and goals (Plan)
2. Measure project performance (Do)
3. Compare metrics against goals (Check)
4. Eliminate causes of deficient performance: fix defects; fix root causes (Act)


Metrics Strategy

(Diagram: a phased change roadmap, Enable, then Achieve-1, then Achieve-2, then Sustain, spanning People, Process, and Technology, framed by Quality Management and Program Management.

People: commitment/ownership; metrics rollout education/training; ongoing metrics education/training; roles & responsibilities; metrics network; large project network; metrics awareness education; pilot project group; distributed support units.

Process: system building improvement goals; dashboard metrics implementation; vital few metrics definitions and implementation; measurement process definition and improvement; metrics definition & implementation for delivery centers.

Technology: technology strategy; metrics repository and tools; KM support for measurement community of practice.

Rollout: metrics program change plan; pilot selected projects and selected delivery centers; enable large projects and remaining centers; metrics embedded in system building methods.)

Measurement Program Mortality

Most programs fail, usually within 2 years.

(Chart: number of companies, 0-400, per year from 1980 to 1991; cumulative starts climb steeply while cumulative successes stay far lower.)


Reasons for Metric Program Failure

  • Lack of [visible] executive sponsorship

  • Lack of alignment with organizational goals

  • Tendency to collect too much data

  • Measures not calibrated, normalized, or validated

    • Not comparing apples-to-apples

  • Fear of [individual] evaluation

  • Learning curve (e.g., function points)

  • Cost overhead


Key Success Factors

  • Ensure that measurement is part of something larger, typically performance improvement

    • “Trojan Horse” strategy

    • Ensure alignment with organizational goals

  • Start small, iterate

    • Strongly recommend doing a pilot test

  • Automate capture of metrics data

  • Rigorously define a limited, balanced set of metrics

    • “Vital Few”

    • Portfolio approach

    • Comparability

  • Aggregate appropriately

    • Focus should be on processes, not individuals

  • Obtain [visible] executive sponsorship

  • Understand and address the behavioral implications


Other Quotes

“Count what is countable,

measure what is measurable,

and what is not measurable,

make measurable”

Galileo


Other Quotes

“In God we trust – All others must bring data”

W. Edwards Deming


Some Courses at DePaul

  • SE 468: Software Measurement and Estimation

    • Software metrics. Productivity, effort and defect models. Software cost estimation. PREREQUISITE(S): CSC 423 and either SE 430 or CSC 315, or consent.

  • SE 477: Software and System Project Management

    • Planning, controlling, organizing, staffing and directing software development activities or information systems projects. Theories, techniques and tools for scheduling, feasibility study, cost-benefit analysis. Measurement and evaluation of quality and productivity. PREREQUISITE(S): SE 465 or CSC 315.


For October 24

  • Read Pressman Chapters 8-9

  • Assignment 3 (see course home page)

  • Current event reports:

    • Alonzo

    • Pon

    • Rodenbostel


Extra slides


Change Control Process

(Diagram, along a time axis. While the document is under development and user change control: Create Initial Sections, then Create/Modify Draft, then Review Draft (V&V), then Incorporate Changes to Document, until Approved. Once the document is in production and under formal change control: repeated Create, Review, Revise, Review cycles, with changes needed in the approved document feeding back into the process.)


Waterfall Model

(Diagram: System requirements → Software requirements → Analysis → Program design → Coding → Testing → Operations.)

Source: Royce, W. "Managing the Development of Large Software Systems."

Core Concepts

(Diagram: People, Process, and Technology combine for the delivery of technology-enabled business solutions. The focus of SE 425 is the process component of software engineering.)


V-Model

(Diagram: the V-model once more: Code at the base, with Unit Test, Integration Test, System Test, and Acceptance Test ascending the right arm.)


PERT

  • Program Evaluation and Review Technique.

  • Help understand relationship between tasks and project activity flow.


Se 325 425 principles and practices of software engineering autumn 2006

  • List all activities in plan.

  • Plot tasks onto chart. (Tasks = arrows. End Tasks = dots)

  • Show dependencies.

  • Schedule activities – Sequential activities on critical path. Parallel activities. Slack time for hold-ups.

Start week = Resources Available.

1. High level analysis: start week 1, 5 days, sequential

2. Selection of hardware platform: start week 1, 1 day, sequential, dependent on (1)

3. Installation and commissioning of hardware: start week 3, 2 weeks, parallel, dependent on (2), any time after

4. Detailed analysis of core modules: start week 1, 2 weeks, sequential, dependent on (1)


Se 325 425 principles and practices of software engineering autumn 2006

  • Carrying out the example critical path analysis above shows us:

  • That if all goes well the project can be completed in 10 weeks

  • That if we want to complete the task as rapidly as possible, we need:

  • 1 analyst for the first 5 weeks

  • 1 programmer for 6 weeks starting week 4

  • 1 programmer for 3 weeks starting week 6

  • Quality assurance for weeks 7 and 9

  • Hardware to be installed by the end of week 7

  • That the critical path is the path for development and installation of supporting modules

  • That hardware installation is a low priority task as long as it is completed by the end of week 7


Commonly Used Metrics
(Taken from http://www.klci.com, "Software Metrics: State of the Art 2000")

  • Schedule Metrics (55%)

Tasks completed/late/rescheduled.

  • Lines of Code (46%)

KLOC, Function Point – for scheduling and costing.

  • Schedule, Quality, Cost Tradeoffs (38%)

# of tasks completed on schedule / late / rescheduled.

  • Requirements Metrics (37%)

# or % changed / new requirements (Formal RFC)

  • Test Coverage % (36%)

Fraction of lines of code covered (50-60% → 90%).

  • Overall Project Risk % (36%)

Level of confidence in achieving a schedule date.

  • Fault Density

Unresolved faults. (eg: Release at .25 /KNCSS)

  • Fault arrival and close rates

Determine readiness to deploy. Easier to find than solve.

Metrics Strategy:

  • Gather Historical Data (from source code, project schedules, RFC, reports etc)

  • Record metrics.

  • Use current metrics within the context of historical data. Compare effort required on similar projects etc.


Lines of Code

  • From this data we can develop:

  • Errors per KLOC (thousand lines of code)

  • Defects per KLOC

  • $ per LOC

  • Pages of Documentation per KLOC

  • Errors / person-month

  • LOC per person-month

  • $/page of documentation

  • Evaluating the LOC metric:

    + Artifact of ALL software development projects.

    + Easily countable.

    + Well used (many models).

    − Programming language dependent.

    − Penalizes well-designed, shorter programs.

    − Problems with non-procedural languages.

    − Level of detail required is not known early in the project.

Change in Metric usage 1999 – 2000

Lines of Code: DOWN 10%. Function Points: UP 3%.


The COCOMO Model

A hierarchy of estimation models

  • Model 1: Basic. Computes software development effort (and cost) as a function of size expressed in estimated lines of code.

  • Model 2: Intermediate. Computes effort as a function of program size and a set of “cost drivers” that include subjective assessments of product, hardware, personnel, and project attributes.

  • Model 3: Advanced. Includes all aspects of the intermediate model with an assessment of the cost drivers’ impact on each step (analysis, design, etc.).


Three classes of software projects

  • Organic: relatively small, simple. Teams with good application experience work to a set of less rigid requirements.

  • Semi-detached: intermediate in terms of size and complexity. Teams with mixed experience levels meet a mix of rigid and less rigid requirements. (Ex: transaction processing system)

  • Embedded: a software project that must be developed within a set of tight hardware, software, and operational constraints. (Ex: flight control software for an aircraft)


Basic COCOMO Model

Basic COCOMO equations

  • Nominal effort in person-months: E = a_b × (KLOC)^(b_b)

  • Development time in chronological months: D = c_b × E^(e_b)


An example of Basic COCOMO

E = a_b × (KLOC)^(b_b)

E = 2.4 × (33.2)^1.05

= 95 person-months

D = 2.5 × E^0.35

D = 2.5 × (95)^0.35

= 12.3 months.
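The same arithmetic in code (a sketch; the organic-mode coefficient a_b = 2.4 is used, since it reproduces the 95 person-month result):

```python
# Basic COCOMO, organic mode: E = 2.4 * KLOC**1.05, D = 2.5 * E**0.35.
kloc = 33.2
effort = 2.4 * kloc ** 1.05      # ~95 person-months
duration = 2.5 * effort ** 0.35  # ~12.3 chronological months

print(round(effort), round(duration, 1))
```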


Intermediate COCOMO Model

Intermediate COCOMO equations

  • Effort in person-months: E = a_b × (KLOC)^(b_b) × EAF, where EAF = an effort adjustment factor

  • Development time in chronological months: D = c_b × E^(e_b)


The same example in Intermediate COCOMO

E = a_b × (KLOC)^(b_b) × EAF

E = 3.2 × (33.2)^1.05 × 1 ≈ 127 person-months

EAF is calculated as a product of the multipliers. In this case we set them all to NOMINAL.

http://sern.ucalgary.ca/courses/seng/621/W98/johnsonk/cost.htm#Intermediate


Importance of soft skills

  • Junior developers and senior software engineers must actively develop the soft skills that are becoming crucial to their success. Managers must communicate with staff regarding future opportunities, to keep people engaged. In our experience, there have been job impacts following our move toward using offshore partners. Primarily, we’ve moved toward having more managers, project leads, and architects at the expense of more junior staff engineers.

    - Cusick, J. & Prasad, A. (2006, Sept/Oct). A practical management and engineering approach to offshore collaboration. IEEE Software. Retrieved October 15, 2006, from http://www.computer.org/portal/cms_docs_software/software/homepage/2006/20_29_0506.pdf


Difficult to measure productivity

  • “But it's exceptionally difficult to measure software developer productivity, for all sorts of famous reasons.”

    “I thought the main reason was that no one tries to do it. Question: how would you evaluate two methodologies if you really wanted to? What would you need? Answer: conduct a social science experiment using teams of volunteers. Undergraduate CS majors might do in a pinch. Question: who is going to do this experiment? Computer Science professors? Ha! CS profs want to scribble equations, Design Languages, maybe, once in a great while, they'll be willing to write a program or two... but conduct a social science experiment? Feh, that's some other department, isn't it? So instead we're stuck with anecdotes, religious fanatics, and hucksters holding seminars...”

    http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html#comment-1481897388024210108


Measures of quality

  • “The chief lesson is that the number of lines of code is not an indicator of quality. Smaller programs can have plenty of bugs while larger projects, such as the Linux kernel, can be tightly controlled. Quality is more accurately reflected by the ratio of developers to the size of the code base and by the number of users who use the software (and provide feedback).” Study of open source software quality funded by Department of Homeland Security - http://www.washingtontechnology.com/news/1_1/daily_news/28134-1.html


Hackers find use for Google code search

  • “The company's new source-code search engine, unveiled Thursday as a tool to help simplify life for developers, can also be misused to search for software bugs, password information and even proprietary code that shouldn't have been posted to the Internet, security experts said Friday.

  • Unlike Google's main Web search engine, Google Code Search peeks into the lines of code whenever it finds source-code files on the Internet. This will make it easier for developers to search source code directly and dig up open source tools they may not have known about, but it has a drawback.

  • ‘The downside is that you could also use that kind of search to look for things that are vulnerable and then guess who might have used that code snippet and then just fire away at it,’ says Mike Armistead, vice president of products with source-code analysis provider Fortify Software. “

    McMillan, R. (2006, October 6). Hackers find use for Google code search. Network World. Retrieved October 16, 2006, from http://www.networkworld.com/news/2006/100606-hackers-find-use-for-google.html


How to Measure Hours?

  • Categories an hours-counting standard must decide to include or exclude:

    • Overtime: compensated (paid); uncompensated (unpaid)
    • Contract staff: temporary employees; subcontractors; consultants
    • Management
    • Test personnel
    • Software quality assurance
    • . . .

