
Traditional FP Estimation &

Progressive Workflow FP Estimation

Improving FP Estimation and tuning for Project Specific Environments

SourceForge: Function Points

http://sourceforge.net/projects/functionpoints

Agenda
  • FPA for Telecom Environments
  • Reasons for FPA Investigation
  • Comparison of Data Functions and statistical analysis of data
  • Comparison of Transaction Functions and statistical analysis of data
  • Problems in Current Estimation Model: Range Issues, FP Counting Issues, Non-FP Scenarios, Internal Boundary
  • FPA Improvement Measures
  • Arriving at Platform FPA Ranges: Bottom-Up Approach, Top-Down Approach
  • Standard FP vs. Progressive FP
  • FP Estimation Review
  • IFPUG vs. Progressive FP
  • Progressive FP Count
  • Progressive Workflow Management
  • Integrated Process Flow (IPF)
  • Benefits of Progressive Workflow
  • Progressive IPF
  • Sample comparison of IFPUG & Progressive FP sheets in OR - Rel 1800
  • FP Template

FPA for Telecom X Environments
  • The Telecom X environment is multi-homed, consisting of two distinct environments: a BIE interface provided by the Spring Framework and a comprehensive CRM environment. To better understand the work, FPA application, and estimation practices, an independent review of the data was conducted.
  • FP data from Release P through Release Z, spanning two years, was collected as sample data.
  • Work distribution for DET ranges.
  • Probability distribution for DET ranges.
  • Derive measures to improve and fine-tune the estimation process and procedures.
  • Solve the range, distribution, and counting issues associated with the environment through proper analysis of the given data.
Reasons for FPA Investigation
  • Is it possible to derive the man-days required to complete a task from the FP estimate alone?
  • FP is used to derive the size of a given project; if the size is accurate then, in theory, it should be possible to derive the effort.
  • To begin with, we need aggregate data to determine the average man-hours/FP; the IFPUG forum states this may be possible by taking the historical data for a particular environment into account.
  • The target was to find concrete evidence that each FP requires X man-hours to complete a given task.
  • The FP values for Rel P – Rel Z over a period of two years were used to arrive at the numbers.
  • The study was inconclusive and the results were not comparable to the actual data in use. We had to identify the issues pertaining to the problem and investigate the sample data collected for accuracy.
Problems with Data Functions – Range Issue
  • When we counted the FP range for Data functions, the sample data made it evident that the value ranges for the given FP counts were too narrow and that the data was unevenly distributed outside the defined scope: nearly all data was simply measured as high-value data functions. This is accurate per the IFPUG model, but given the DET/RET counts, the dispersion graph clearly showed the disparity in the range, as shown in the Work Distribution Chart. The red marker shows that the given range in the equation takes a maximum of 50 DETs and 20 RETs for evaluation. To evaluate the model further, we built a probability graph of the actual values, as depicted in the Probability Distribution Chart.
  • The graph made it evident that most Data FPs fell in the 100-DET range, significantly outside the given measure. The probability of values falling within the given IFPUG ranges was less than 12%. This poses no problem for a rough size estimate and follows the IFPUG counting rules perfectly; the issue was that the effort was not taken into consideration, resulting in poor size estimation in which the actual effort is missed.
Problems with Transaction Functions – Range Issue
  • Here we found two problems. Being integrated with a workflow environment, the actual FTRs were not accurately measured, as shown in the Work Distribution Chart. Most of the DETs fell outside the computational boundary depicted by the red marker, which is 6 FTRs and 20 DETs per the formula.
  • The probability graph was more favorable, showing a 25% probability of the DETs falling within this range; however, there was considerable evidence that most of the values fall outside the given boundary. Another cause of the low FP count could be platform issues. Size estimation could be severely impaired because standard FP counting practices do not include all the effort on the given platform.
Problems with FPA – Counting Issues
  • With the focus on arriving at accurate estimates, we found it necessary to revisit some of the counting practices. Simply extending the range would not solve the problem of arriving at an accurate count. The fundamental issue that FP solves is size estimation. There are certain irregularities we face during the counting process, which can be illustrated in the following scenario of counting Data functions.
  • In the first user story, even though the DET count is more than five times the upper range of 50, the FP value assigned is the static 4 FP. The second story has 20 DETs and is calculated as 4 FP, whereas the third has 21 DETs, which makes the count 6 FP. Though the complexity of the second and third stories is nearly the same, they are valued differently, and the difference in cost is £1,500 because they fall under different ranges. The second problem is the ceiling limit of 20 DETs for Transaction FP and 50 for Data FP, which is good for quick rough sizing, since we are not required to calculate the entire range; but where an accurate value is required, this estimation technique falls short of our desired goal.

User Story 1: Case XML Records

1 FTR , 256 DET = (M) = 4 FP

User Story 2: View Details

2 FTR , 20 DET = (M) = 4 FP

User Story 3: View MFL Details

2 FTR, 21 DET = (H) = 6 FP

Difference in cost (stories 2, 3) = £1,500
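The ceiling effect in these three user stories can be sketched as a lookup. This is a simplified banding consistent with the three examples above (L/M/H weights 3/4/6 from the deck), not the full IFPUG complexity matrix:

```python
# Simplified transaction-function complexity banding, consistent with
# the three user stories above; the real IFPUG matrix has finer bands.
WEIGHTS = {"L": 3, "M": 4, "H": 6}

def complexity(ftr: int, det: int) -> str:
    det_band = 0 if det <= 20 else 1   # everything past the ceiling is one band
    ftr_band = 0 if ftr <= 1 else 1
    table = [["L", "M"],   # ftr <= 1
             ["M", "H"]]   # ftr >= 2
    return table[ftr_band][det_band]

# The three user stories above:
for ftr, det in [(1, 256), (2, 20), (2, 21)]:
    print(ftr, det, WEIGHTS[complexity(ftr, det)])
```

Note how 256 DETs with 1 FTR still yields only 4 FP, while 21 DETs with 2 FTRs yields 6 FP: the ceiling flattens the largest story, which is exactly the disparity described above.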

Problems with FPA – Non-FP Scenarios
  • The TMW platform has a significant number of non-FP requirements, such as:
    • Change in Business Logic Requirements with no structural changes.
    • Computing of Business Rules effort is completely missed.
    • Workflow process changes and configuration requirements.
    • Work effort related to Internal boundaries that exist within the application.
    • Screen changes in display, layout, new values added in UI controls are missed.
    • Associated project tasks to configure, manage or change settings.

We try to validate the possibility of extending FPA so that it may cover all aspects of accounting for changes in real-world project scenarios, providing fundamental counting solutions and accurate size estimation for other common occurrences, without having to maintain a long list of non-functional activities or apply alternative measuring practices.

Problems with FPA – Internal Boundaries

[Diagram: application boundary view. The external boundary encloses two internal boundaries: the X Framework (XML/XSL processing) and the CRM environment, with external components and upstream/downstream systems connected via outbound flows.]

  • The X platform has internal boundaries which need to be addressed, since the work effort spans two environments. Environment A is based on the Java Spring Framework and has specific inbound and outbound information flows, whereas the CRM environment is a workflow engine with a different set of inbound and outbound messaging constructs, rules, processes, and users. The outbound data store (EIF) from A to CRM is an inbound message (EI) for CRM, and vice versa.
  • The work in the two environments differs significantly: the A framework acts as a parser or BIE interface to external systems, while the CRM system is clearly the workflow engine that orchestrates the trouble-management tickets. Every DET/RET-FTR passed from external systems to the NEO Framework is passed on to the CRM environment, and vice versa.
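The boundary-crossing rule above (an outbound EIF on one side is counted as an inbound EI on the other) can be sketched as a simple lookup. Only the EIF/EI pair is stated in the slide; the reverse direction follows from its "vice versa":

```python
# How a function counted in one environment is classified when the same
# data crosses the internal boundary into the peer environment.
CROSS_BOUNDARY = {"EIF": "EI", "EI": "EIF"}

def seen_from_other_side(function_type: str) -> str:
    """Return the classification of the same flow in the peer environment."""
    return CROSS_BOUNDARY[function_type]
```

For example, the outbound data store from A (`"EIF"`) maps to `"EI"` on the CRM side.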

[Diagram: X Framework message flow. Inbound/outbound XML is handled by the ActionBean, XMLEngine, SaveBean, Enricher, and ProcManager components, which invoke one another to load, enrich, and save the XML data and persist it to the DB.]

FPA Improvement Measures

The following sections discuss measures to improve and tune FPA for different environments. This will help build a matrix with FP values close to the actuals and near-real-time estimates.

The scope of this work looks subjectively into the various scenarios, complexity measures, and the specific environment for which the FP count is derived. This not only provides proper ranges in terms of FTRs and DETs but also a properly defined structure that maps to real-world estimates. It does not act contrary to the VAF, but complements it in determining the scope and count.

To achieve this we look at two possible scenarios: the top-down approach, where we have historical data or an established FP practice and seek improvement, and the bottom-up approach, where we build a new solution from the ground up based on analogy and user experience in a particular environment.

Estimating FP Range using Top-Down Approach
  • X has a predefined T&M estimation approach, whereby the complexity of a user story is assessed by assigning range values such as the ones below.
  • To identify the ranges, we obtain the mean averages over a dozen previous sprints, arriving at the Derived Sample Range values.
  • Derived Sample Range
  • UL = Average of 5 fields or less with 2 tables
  • VL = Average of 10 fields or less with 3 tables
  • L = Average of 25 fields or less with 4 tables
  • SM = Average of 50 fields or less with 5 tables
  • M = Average of 90 fields or less with 6 tables
  • MH = Average of 120 fields or less with 7 tables or more
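The derived sample ranges above can be expressed as a threshold lookup (labels and field counts from the list; the table counts are carried alongside for reference):

```python
# Derived Sample Range thresholds from the sprint averages above:
# (label, max fields, tables).
DERIVED_RANGES = [
    ("UL", 5, 2),
    ("VL", 10, 3),
    ("L", 25, 4),
    ("SM", 50, 5),
    ("M", 90, 6),
    ("MH", 120, 7),
]

def t_and_m_label(fields: int) -> str:
    """Classify a user story by its field count."""
    for label, max_fields, _tables in DERIVED_RANGES:
        if fields <= max_fields:
            return label
    return "MH"  # 120+ fields: top band ("7 tables or more")
```

For example, a story with 30 fields classifies as `"SM"`.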
Estimating FP Range using Top-Down Approach

[Table: complexity matrix over DET ranges and FTR/RET ranges.]

We begin by building a complexity matrix with the DET ranges from the established T&M methodology, where the complexity measure is provided by existing FP counting practice. The missing values are determined by calculating means and creating a Derived Sample FP table; the ranges are determined from the sample data, and each user story is analyzed to determine the FP assigned and the average number of DETs and RETs that fall within that range.

The complexity ranges must include the effort variance for Data and Transaction functions. This is achieved by determining the man-hours required for each of the given scenarios (DET-RET/FTR complexity) for Transaction and Data functions from the sample data (actual hours for each user story from the PMO office). After building truth tables for EI, EO, EQ, ILF, and EIF, we normalize each table by a common denominator, counted as total hours per FP. The final Transaction/Data Function table is created by normalizing the derived T&M FP values and the effort estimated from the hours/FP values, which brings us to a real-time estimate for the targeted environment.

Estimating FP Range using Bottom-Up Approach
  • The bottom-up approach requires the construction of a truth table, which helps in determining the total work effort and can be demonstrated with a sample scenario. We define "what if" task-based scenarios to determine the values.
  • For Data functions we identify the database requirements and how much effort is required. Apart from modeling the database, the additional requirements for developing triggers, functions, and procedures are addressed, including maintenance and management requirements for ILFs. For EIFs we consider the technology used for storing the data externally through the given interfaces, and the other factors that govern data integrity, availability, reliability, and scalability. We fill in the expected man-hours based on "what if" conditions: WHAT is the estimated number of hours for a task IF the number of DETs is x and the number of RETs is y. All tables are normalized by a common denominator, which is used to compute the man-hours/FP variable.
  • For Transaction functions, we identify the platform, environment, language, framework, and deployment requirements to arrive at feasible function points. This is computed separately for EI, EO, and EQ.

Weighted Effort Index

Total hours for the task:

= 3 hrs Design + 8 hrs Dev + 3 hrs Test + 2 hrs CI
= 16 hours

Given an aggregate of 8 hours/FP for the development task, we compute the FP values by dividing the ranges by the common denominator (8), so this task is 16 / 8 = 2 FP.

[Tables: derived truth tables over DET ranges and FTR/RET ranges, filled with estimated hours.]
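The normalization step can be sketched as follows. The hour values in the table below are hypothetical "what if" entries invented for illustration; only the 8 hours/FP denominator and the 16-hour worked example come from the deck:

```python
# Hypothetical "what if" truth table: estimated hours for a transaction,
# keyed by (DET range, FTR range). A real table comes from PMO actuals.
WHAT_IF_HOURS = {
    ("1-20", "0-1"): 8,
    ("1-20", "2-3"): 16,   # e.g. 3 design + 8 dev + 3 test + 2 CI
    ("21-50", "0-1"): 16,
    ("21-50", "2-3"): 24,
    (">50", "2-3"): 40,
}

HOURS_PER_FP = 8  # common denominator stated in the slides

# Normalize every cell to FP by the common denominator.
FP_TABLE = {cell: hours / HOURS_PER_FP for cell, hours in WHAT_IF_HOURS.items()}
```

The 16-hour cell normalizes to 2.0 FP, matching the worked example above.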

Standard FP Count vs. Progressive FP Count

Static Estimation based on Ranges

Low=3, Medium=4, High=6

Progressive Estimation using coefficients

IFPUG estimates High, Medium, Low complexity for data functions and transaction functions.

Scenario 1: Case XML Records

1 FTR, 256 DET = (>50) = 256 × 0.08 = 20.48 FP

Scenario 2: View Details

2 FTR, 49 DET = (49 × 0.13) = 6.37 FP

Scenario 3: View MFL Details

5 FTR, 19 DET = (19 × 0.20) = 3.8 FP

We use the same model but apply coefficient values to derive the FP; the same formula is refined to factor in the unit-of-work count. There is no upper ceiling limit, and actual FP values can be projected.

Scenario 1: Case XML Records

1 FTR, 256 DET = (>50) = 4 FP

Scenario 2: View Details

2 FTR, 49 DET = (21-50) = 4 FP

Scenario 3: View MFL Details

5 FTR, 19 DET = (1-20) = 4 FP

Total = 12 FP standard vs. 30.65 FP using Progressive Estimation.

Since most transactions have 100-200+ DETs, we are estimating for less than half the actual size. The upper ceiling limit baselines all such transactions as high complexity with a static value of 6 FP.
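The coefficient scheme in the three progressive scenarios can be sketched as a piecewise function. Here the coefficient is keyed purely on the DET band, which matches the three scenarios above (the following slide also conditions the scenarios on FTR count, which this sketch omits):

```python
def progressive_fp(det: int) -> float:
    """Progressive FP for a transaction: DETs times a per-band coefficient.

    Coefficients from the scenarios above; small counts weigh more per
    DET, and there is no upper ceiling on the count.
    """
    if det <= 20:
        coeff = 0.20
    elif det <= 50:
        coeff = 0.13
    else:
        coeff = 0.08
    return det * coeff

# The three scenarios: 256, 49, and 19 DETs.
total = sum(progressive_fp(d) for d in (256, 49, 19))
```

This yields 20.48, 6.37, and 3.8 FP respectively, versus a flat 4 FP each under the static ranges.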

[Diagram: number lines illustrating the upper ceiling limits: the FTR scale is cut at 1/6/20 and the DET scale at 1/20/50, while actual counts continue past the ceiling (1, 2, 3, ... 24, ...).]

Progressive FP Count

FP Value Coefficients

Scenario A: Where FTR <= 1

Scenario B: Where FTR > 1 & FTR < 3

Scenario C: Where FTR >= 3

Progressive Workflow Management
  • Workflow management has become part and parcel of most projects. Many competing technologies exist today that help seamlessly bind workflow management, with business rules cutting across multiple segments and departments of an organization.
  • Workflows are transactional operations that solve a business problem through the application of business rules and logic, in collaboration with other processes, actors, and teams across one or more segments of the organization, to streamline the collaborative work process.
  • Workflows consist of various activities measured across collaboration segments or departments, which are depicted as swimlanes in model views.
Progressive Workflow Management
  • Integrated Process Flows (IPF)

The purpose of Integrated Process Flows is to include work effort by introducing the processes and collaboration activities performed within a defined scenario.

This works in complement with existing counting practices: the "ingredients" of a process are measured as DET/RET-FTR, and the "preparation" is measured through PET/LCS. The DETs form the key elements in the equation, and the PETs form the key process activities. The RET/FTR addresses the groups and sub-groups that exist, and the LCS addresses the collaboration across users and process segments, both internal and external to the application boundary.

Progressive Workflow Management - IPF

[Diagram: swimlane view across One Siebel, the X Framework, CRM, and GTC, distinguishing the existing flow from new/updated process steps (send create request, create XML, parse XML message, invoke Process Manager to save XML data, create new SaveBean and store data, create new MSF request, get fault details from GTC, get fault details and perform line test, check for new clear codes, do manual fallout for XYZ condition, proceed with existing MSF flow). Counts: DET = 32 (30 existing + 2 new: Service Level, Category); FTR = 1 (SLA); PET = 11+ (6+ existing, 5 new); LCS = 4 (One Siebel, NEO, Amdocs, GTC). FP – IPF: 9.00 – 4.20; Changed FP – IPF: 9.60 – 7.70; Enhancement FP = 4.1; Reuse FP = 13.20. A value change without a structural change can now be accounted for as PET/LCS.]

Progressive Workflow Management
  • Process Element Type (PET)

Process Element Type denotes the atomic, non-repeating activities and business rules in a workflow or process within the application boundary. All repeating groups of activities are referenced through actions and counted once per process.

(OR)

A PET is a unique, user-recognizable, non-recursive activity that represents an action, rule, or task. A dynamic activity adds more information about the processes that apply to the data or transaction function. If a PET contains recursive logic, that processing logic is mapped as a separate process and called through an action, but referenced only once in the given scope.
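A minimal sketch of this counting rule, assuming activities are identified by name and that repeated or recursive activities collapse to a single reference:

```python
def count_pets(activities: list[str]) -> int:
    """Count Process Element Types in one process scope.

    Repeated activities (including recursive re-entries) are referenced
    through actions and counted once, per the PET definition above.
    """
    return len(set(activities))

# Hypothetical activity list; "Link to AR" repeats and is counted once.
flow = [
    "Get MA Response",
    "Search AR with ref ID",
    "Link to AR",
    "Create AR",
    "Link to AR",      # repeat: referenced, not re-counted
    "Associate Information",
]
```

`count_pets(flow)` returns 5, not 6, because the repeated activity contributes a single PET.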

Process Element Type

[Diagram: two activity flows. "Create Infrastructure Event Process Flow": create request, analyze & validate, design solution, implement solution, commission solution, create AR/Tracker AI (PET = 6). "Associate Information Process Flow": get MA response, search AR with ref ID, pass/fail/inconclusive outcomes, create AR or link to existing AR, locate Tracker AI/Tracker MSO, associate information (PET = 8).]

Action Notation is an activity diagram with a reference to another sub-process/activity, represented by an inverted fork symbol.

Progressive Workflow Management
  • Logical Collaboration Segments (LCS)

Logical Collaboration Segments are the unique departments, user groups, or work segments across which the rules, processes, and workflow are managed and referenced from within the application boundary.

(OR)

An LCS is a user-recognizable group of collaboration segments within a data or transaction function, and contains references both internal and external to the application boundary.

It is best to look at PETs to identify the different collaborators.

Progressive Workflow Management

[Diagram: the same swimlane view across One Siebel, the X Framework, CRM, and GTC, with the application boundary marked. LCS = 4 (Siebel, X Framework, CRM, GTC).]

Integrated Process Flow (IPF)
  • The primary focus is to keep the process simple, uniform, and consistent with IFPUG counting practices. The IPF is evaluated using PET/LCS and is calculated similarly to the DET/RET count. The final UFP count is the sum of the UFP and IPF counts.
  • By adding PET/LCS to Data Functions we achieve the following:
    • Detailed view into the procedures, functions and triggers that may be applied.
    • The total work effort with respect to scripts and other activities that may be needed.
    • The different collaborations with respect to actor and application boundaries both within and external to the applications.
Integrated Process Flow (IPF)
  • By adding PET/LCS to Transaction Functions we achieve the following:
    • Detailed view into the various activities across the internal application(s) and boundaries.
    • Inclusion of process, rules and workflows in different process segments and interactions.
    • Allows the counting of process and rule changes where there may be no changes to the existing structure, and addresses value-change-triggered scenarios.

Ensures complete Requirements to FP accountability.

Benefits of Progressive Workflow FP

All user stories at X Telecom Company provide an activity flow with use cases; the basic flow, extensions, business rules, and alternate flows are described therein. By counting the activities performed, we can provide a complete count of the user story, covering both the depth and breadth of the work in the FPA. By including both the internal and external collaborations, we provide an in-depth view into the process, collaboration, and complexity, and arrive at the true effort required. The concept of counting activities and referencing existing activities from within another process, using an activity diagram with an inverted fork symbol, falls under the UML model specifications; this construct is termed Action Notation in UML tools such as Enterprise Architect.

  • Change in Business Logic Requirements with no structural changes.
  • Computing of Business Rules effort is completely missed.
  • Workflow process changes and configuration requirements.

Using Progressive Workflows solves the primary issue of dealing with user stories/requirements that make no significant changes to the structure but considerable changes and additions to the business logic and/or business rules, which would otherwise be classified as non-FP scenarios. Similarly, user stories with value changes in parameters that trigger new business rules can also be captured. By adding each step in the workflow and operation, we include all aspects of workflow management in the existing FPA model.

Benefits of Progressive Workflow FP
  • The sequence diagram depicted here represents the actual journey, showing the workflow associated with an internal process. It details the Logical Collaboration Segments, the business rules to be associated with the call, and the sequence of actions numbered as they are performed in the document. The task involves workflow orchestration and applies only business logic and rules. In such situations it is possible to count each activity as a PET and count the LCS to measure complexity and attain a final FP count.
Benefits of Progressive Workflow FP

    • Work effort related to Internal boundaries that exist within the application.
  • With the convergence of new and existing technologies, there will always be multiple internal application boundaries in major enterprise applications. With Progressive Workflows we establish a near-actual size estimate and effort index for the entire application, unlike solutions such as COSMIC FP (FFP), which address only the entry/exit criteria. COSMIC FFP would resort to double counting in most cases where the same data flows across multiple boundaries, as in the NEO environments.
  • The benefit of the Progressive Workflow model is that it measures the process flow across internal boundaries, thereby providing an exact measure of the work involved within the application.
Benefits of Progressive Workflow FP

    • Cosmetic changes in display, layout, new values added in UI controls are missed.
  • There has always been debate about where counting practices are applicable and how far they extend to accommodate all requirements in a project. FPA by far provides the most comprehensive method of estimation: measurable, quantifiable, and scientific. The above requirements mandate the optional inclusion of trivial screen changes, scripts, or tasks under cosmetic design changes, plus value changes to UI controls.
    • Associated project tasks to configure, manage or change settings.
  • Including "tasks" under IPF is an optional and debatable topic. If the client and the company agree to include certain tasks, such as project management, under the same accounting practice, we provide an option to extend FP to include them. This is purely speculative in nature and will require further research.
Benefits of Progressive Workflow FP

    • The benefit for the FP analyst is not having to deal with workflow objects, rules engines, and implementation details, but instead using the standard requirements document to pick up the basic flows from the existing defined process, or following the activity diagrams provided.
    • The IPF elements are user-identifiable and traceable to the requirements manifest, so adoption is easier. X Telecom, for instance, only accepts the raw FP/unadjusted FP count for the TMW platform. The VAF is highly debatable in environments where different technologies converge, such as workflows, where most tasks are automated.
    • The introduction of Integrated Process Flow (IPF) with PET/LCS is in line with IFPUG counting practices, with the addition of the progressive FP count, hence the term Progressive Workflow. Using the provided worksheet, we can measure the Total FP count, Reuse FP count, and Process FP count separately if required.
  • By incorporating FPA in a standard UML model, it becomes possible for existing tools to incorporate the FP count into UML models and BPM workflows, providing a seamless way to extend FP counting practices in modeling tools such as Enterprise Architect.
Progressive IPF

Workflow Measure and Workflow Coefficient Measure

The PETs represent the activities in the process. We assign each a complexity level and FP value, which can be ascertained using the bottom-up estimation approach.

The FP values are calculated by multiplying by the coefficient indexes, which are used to arrive at the actual FP value for that process.

This process helps to identify the distinct activity and collaboration FP for a given user story scenario.
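A sketch of the coefficient measure described above. The structure (PET and LCS counts multiplied by coefficient indexes, then added to the standard unadjusted FP per the IPF slide) follows the deck; the coefficient values themselves are hypothetical placeholders, not figures from the deck:

```python
# Hypothetical coefficient indexes; a real project would derive these
# from the bottom-up truth tables described earlier.
PET_COEFF = 0.35   # hypothetical FP per process element
LCS_COEFF = 0.50   # hypothetical FP per collaboration segment

def ipf_fp(pet: int, lcs: int) -> float:
    """IPF contribution for one process, coefficient-style."""
    return pet * PET_COEFF + lcs * LCS_COEFF

def total_ufp(standard_ufp: float, pet: int, lcs: int) -> float:
    """Final count = standard UFP + IPF count, per the IPF slide."""
    return standard_ufp + ipf_fp(pet, lcs)
```

With the worked example's PET = 11 and LCS = 4, these placeholder coefficients would give an IPF contribution of 5.85 FP on top of the standard count.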