Objectives. The aim of this topic is to introduce the concepts of measurement and metrics as well as the practical skills necessary to define and count LOC and FP, the basis for many software metrics. You will:
Anything that you need to quantify can be measured in some way that is superior to not measuring it at all. Tom Gilb
When you can measure what you are speaking about and can express it in numbers, you know something about it. But when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind. Lord Kelvin
If software development is to be viewed as an engineering discipline, it requires a measurement component that allows us to better understand, evaluate, predict and control the software process and product. Victor Basili, University of Maryland
The GQM approach is based on the idea that in order to measure in a meaningful way we must measure that which will help us to assess and meet our organisational goals.
A Goal is defined for an object, for a variety of reasons, with respect to various models of quality, from various points of view, relative to a particular environment. The objects of measurement are products, processes and resources.
Questions are used to characterize the way the assessment/achievement of a specific goal is going to be performed based on some characterizing model. Questions try to characterize the object of measurement with respect to a selected quality issue and to determine its quality from the selected viewpoint. Questions must be answerable in a quantitative manner.
Metrics are associated with every question in order to answer it in a quantitative way.
Six Sigma is a rigorous and disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating "defects" in manufacturing and service-related processes.
Commonly defined as 3.4 defects per million opportunities, Six Sigma can be defined and understood at three distinct levels:
In Six Sigma, measurement programs focus on collecting data that will help you to improve your processes in order to better satisfy your customers.
It is a common-sense notion that something you do in working on a product (some part of the process) will have an effect on the customer's perception of the quality of that product, and that identifying problems with the process and making subsequent process improvements will improve the customer's perception of the product.
The measurements that you should be making are, therefore, the ones that will help you to assess and improve processes with the goal of satisfying the needs of the customer(s).
In order to assess process improvement we must collect data over time.
This should begin with establishing baseline performance. A baseline is the average of historical data over a specified period of time.
In the example above, "Module XYZ is unavailable 10% of the time" is a statement of the baseline performance.
Then we must establish a goal.
This goal is stated as a measure of improvement over a period of time in terms of the baseline data.
The Six Sigma rate of improvement goal is often quoted as being a 10X improvement over a period of 2 years. If we take this improvement goal and apply it to the example above we would establish the 10X/2years improvement goal as Module XYZ is unavailable 1% of the time. Or stated more usefully, "Module XYZ is available 99% of the time".
So summarizing the example, we may have the following:
Operational Statement: "Module XYZ is periodically unavailable"
Historical Data: Availability metrics calculated for module XYZ over the past 6 months indicate that it is available 90% of the time.
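As a small sketch of the arithmetic above (the 10X factor and the 10% baseline are from the example; the helper name is ours):

```python
def improvement_goal(baseline_defect_rate: float, factor: float = 10.0) -> float:
    """Apply an NX improvement goal to a baseline defect (here, unavailability) rate."""
    return baseline_defect_rate / factor

# Baseline from the example: Module XYZ is unavailable 10% of the time.
baseline = 0.10
goal = improvement_goal(baseline, factor=10.0)   # 10X improvement over 2 years
availability_goal = 1.0 - goal
print(f"Unavailability goal: {goal:.0%}")         # 1%
print(f"Availability goal: {availability_goal:.0%}")  # 99%
```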
Process metrics primarily focus on organizational performance and quality achieved as a consequence of a repeatable or managed process. This characterization and evaluation of a process for achieving performance and quality outcomes is known as Quality Assurance.
Process metrics include metrics relating to:
statistical SQA data
defect categorization & analysis
defect removal efficiency (DRE): DRE = E / (E + D), where E is the number of errors found before delivery (or before a particular phase) and D is the number of defects found after delivery
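The DRE formula above can be computed directly (the example counts are hypothetical):

```python
def defect_removal_efficiency(errors_before: int, defects_after: int) -> float:
    """DRE = E / (E + D): the fraction of defects removed before delivery
    (or before a particular phase)."""
    return errors_before / (errors_before + defects_after)

# e.g. 90 errors found before delivery, 10 defects found after delivery
print(defect_removal_efficiency(90, 10))  # 0.9
```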
A measurement program is an effective means for controlling the software process performance (i.e., the actual results a project achieves by using its defined process) and guiding improvements in the software engineering processes.
It has to be based on a model in which a comprehensive measurement framework may be constructed, analyzed, and modified as the organization's goals change.
In order to define such a program, the underlying life cycle on which it is based is presented.
Implementing: data are collected in accordance with the operational measurement procedures, validated, and analyzed.
Apparently the easiest step, but also the most important.
To be effective it must be automated.
One of the easiest and most cost-effective ways to validate data is simply to observe the data collection process and make sure that data are collected in accordance with the defined procedure (to ensure at least standardization).
to evaluate the status of the project with respect to plans
to improve the software process
In-process analysis allows us to evaluate whether the project is drifting off track, so that it can be brought back under control by suitable actions.
A posteriori analysis permits control of the trend of the key measures of the whole company, and refinement of the baselines to compare against, so that it can be judged whether or not the improvement actions are working as intended and what the side effects may be.
Improving: periodically, the entire measurement program is to be reviewed to ensure that it helps the software projects to control their processes, and the whole organization to improve. Metrics can be revised or redefined, following the metric life cycle again.
In addition, metrics with dubious value, or metrics that are no longer used, can even be retired.
Primary Data are the lowest level of information to be collected and documented, when prescribed, in order to allow the calculations and analyses that help to monitor the process and evaluate the products.
Furthermore, size represents the starting point for the whole metrics system.
It is involved in both assessment and predictive metrics;
for assessment, it is used to measure an artifact or a system, or to normalize other metrics;
for prediction, it gives a concrete means of expressing forecasts: because of the strong connection between product size and the effort needed to develop the Software Product, size allows accurate estimates of the project's effort, cost, and schedule to be derived.
Measures performed at the beginning of Concept Exploration can only provide indicative information (low confidence level) on the target product size, and can be gathered only with indirect measures; the calculation is based on an estimate of the effort.
This method is based on the computation of a quantity corresponding to the effort needed for the specification of the system.
This calculation considers factors such as
number of people to be interviewed,
number of days required for the interviews,
number of input and output screens, reports, menus, and so on.
With a very simple step, it is possible to obtain a first, indicative estimate of the global effort needed for the complete development of the Software Product (under certain hypotheses).
Given a reference Productivity Parameter, the estimated global effort allows a first estimate of the size to be derived.
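A minimal sketch of this step, assuming the productivity parameter is expressed as hours per size unit (the figures below are hypothetical; an organization must use its own baseline data):

```python
def estimate_size(global_effort_hours: float, hours_per_unit: float) -> float:
    """Derive a first, indicative size estimate from the estimated global
    effort and a reference productivity parameter (hours per size unit,
    e.g. hours per function point). Both inputs are hypothetical here."""
    return global_effort_hours / hours_per_unit

# e.g. 4000 estimated hours, at 8 hours per size unit -> 500 size units
print(estimate_size(4000.0, 8.0))  # 500.0
```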
A direct estimate of the size is possible at the end of the Concept Exploration, using the Function Point Analysis method based upon the artifacts produced, i.e. System Requirements and System Architecture.
The Function Point Analysis method gives a way of sizing software through the analysis of the implemented functionality of a system from the user‘s point of view, considering the application as a black box.
to evaluate the unadjusted Function Points (UFP) count, based on the number of user functions and their complexity classification;
to evaluate the adjusted Function Points (FP) count, by scoring fourteen general system characteristics.
These steps allow the calculation of the product size, also taking into account its complexity, i.e. the degree of complication of the product owing to the structure of the aggregated components and their relationships.
Count the number of external inputs, external outputs, external inquiries, internal logical files, and external interface files required. The IFPUG provides the following definitions for these:
External Input (EI): An EI is an elementary process in which data crosses the boundary from outside to inside. This data may come from a data input screen or another application. The data may be used to maintain one or more internal logical files. The data can be either control information or business information. If the data is control information it does not have to update an internal logical file.
External Output (EO): An EO is an elementary process in which derived data passes across the boundary from inside to outside. Additionally, an EO may update an ILF. The data creates reports or output files sent to other applications. These reports and files are created from one or more internal logical files and external interface files.
External Inquiry (EQ): An EQ is an elementary process with both input and output components that result in data retrieval from one or more internal logical files and external interface files. The input process does not update any Internal Logical Files, and the output side does not contain derived data.
Internal Logical File (ILF): An ILF is a user-identifiable group of logically related data that resides entirely within the application's boundary and is maintained through external inputs.
External Interface File (EIF): An EIF is a user identifiable group of logically related data that is used for reference purposes only. The data resides entirely outside the application and is maintained by another application. The external interface file is an internal logical file for another application.
Next we calculate a value adjustment factor (VAF) based on 14 general system characteristics (GSC's) that rate the general functionality of the application being counted.
Each characteristic has associated descriptions that help determine the degrees of influence of the characteristics. The degrees of influence range on a scale of zero to five, from no influence to strong influence.
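The two steps can be put together as follows. The weights and the VAF formula (VAF = 0.65 + 0.01 × TDI, where TDI is the sum of the 14 GSC scores) are the standard IFPUG ones; the example counts and GSC scores are hypothetical:

```python
# Standard IFPUG unadjusted weights by function type and complexity.
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4,  "high": 6},
    "EO":  {"low": 4, "avg": 5,  "high": 7},
    "EQ":  {"low": 3, "avg": 4,  "high": 6},
    "ILF": {"low": 7, "avg": 10, "high": 15},
    "EIF": {"low": 5, "avg": 7,  "high": 10},
}

def unadjusted_fp(counts: dict) -> int:
    """counts maps (type, complexity) -> number of functions of that kind."""
    return sum(WEIGHTS[t][c] * n for (t, c), n in counts.items())

def adjusted_fp(ufp: int, gsc_scores: list) -> float:
    """Apply the value adjustment factor from the 14 GSCs (each scored 0-5)."""
    assert len(gsc_scores) == 14
    vaf = 0.65 + 0.01 * sum(gsc_scores)   # VAF ranges from 0.65 to 1.35
    return ufp * vaf

# Hypothetical application: 3 average EIs, 2 average EOs, 1 low EQ,
# 1 average ILF, 1 low EIF.
counts = {("EI", "avg"): 3, ("EO", "avg"): 2, ("EQ", "low"): 1,
          ("ILF", "avg"): 1, ("EIF", "low"): 1}
ufp = unadjusted_fp(counts)        # 3*4 + 2*5 + 3 + 10 + 5 = 40
fp = adjusted_fp(ufp, [3] * 14)    # VAF = 0.65 + 0.42 = 1.07
print(ufp, fp)
```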
The Object-Oriented Function Points method uses an object-oriented specification, focusing on objects, attributes, and operations. Applying this method, the boundary can be moved to surround individual classes; in this way not only is the delivered functionality measured, but also the size and the complexity of the application.
When dealing with OO programming, these steps have to be followed:
Look at each cluster of object classes as a system and count Function Points
Analyze the Object Diagrams and count:
Data Items (Internal Data Items, External Data Items)
Service Requests (Incoming, Outgoing, Status Inquiries)
The most general approach to size measurement is to count the number of text lines in a source program. In doing this, we typically ignore blank lines and lines with only comments. All other text lines are counted. This approach has the advantage of being simple and easy to automate.
This LOC counting approach has the disadvantage of being sensitive to formatting. Those programmers who write very open code will get more LOC for the same program than would their peers who used more condensed formats.
Even comments influence this count; you can strip comments, but commented code is much more maintainable than uncommented code.
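The counting rule described above (ignore blank lines and comment-only lines, count everything else) can be sketched as follows, assuming a single-line comment prefix such as Python's `#`:

```python
def count_loc(source: str, comment_prefix: str = "#") -> int:
    """Count physical LOC, ignoring blank lines and comment-only lines.
    Lines with trailing comments still count as code."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

sample = """
# a comment-only line

x = 1
y = x + 1  # a trailing comment still counts as code
"""
print(count_loc(sample))  # 2
```

Note how sensitive this measure is to formatting: reformatting the two statements onto one line would change the count without changing the program.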
In doing this, you should also establish the practice of putting a logical LOC on each physical line of the source program.
Counts of logical statements attempt to characterize size in terms of the number of software instructions, irrespective of their relationship to the physical formats in which they appear.
The operational definitions of LOC and statement are to be provided through tables that explicitly identify the values for each attribute that is to be included in or excluded from the statement counts.
Examples of such tables are given in:
Robert E. Park “Software Size Measurement: A Framework for Counting Source Statements” CMU/SEI-92-TR-020
Productivity is generally measured as the number of working hours needed for the production of a single unit.
Simple? Yes, but ….
Many productivity indicators exist in literature; these are generally obtained as mean values of a set of different projects, which were developed by people with different skills, working in different development environments.
Many factors influencing these indicators exist. They are not present in every organization at the same time, nor are they standardizable.
Each organization must collect its own data and use them for the productivity evaluation, concentrating only on factors allowed by the available data.
The level to which these factors are present differs from one organization to another; it is therefore important for a company to collect and adopt its own data over different periods. The best way to handle the productivity function is to base it on a family of factors.
It describes “how” the project is managed. Management characteristic information is recorded in four main categories:
User Participation. Record the level of participation by the user or their representative on the project. It should be considered during the project characterization.
Stability of Product Requirements. This characterizes the extent to which the requirements remained constant throughout development. It should be measured by the Requirements Change Distribution.
Constraining Management Factors. These specify the management or administrative factors that limited the project, e.g. fixed cost, fixed staff size, fixed functionality, fixed quality and reliability, fixed schedule, limited accessibility to the development system, limited accessibility to the target system.
Not Directly Productive Activities. These are activities that, although necessary, do not produce a measurable and tangible artifact, such as Project Management and Configuration Management. It is possible to collect the effort spent on these activities for each project, and these data will be considered, at the end of the project, for the productivity evaluation.
In general, a defect is defined as a product anomaly. Examples include omissions and imperfections found during early life cycle phases, and faults contained in software sufficiently mature for test or operation.
We distinguish between
if not fixed, would cause one or more of the following to occur:
a defect condition in a later inspection phase,
a defect condition during testing,
a field defect,
nonconformance to requirements and specification,
nonconformance to established standards such as performance, national language translation, and usability.
Defects are injected into the product or intermediate deliverables of the product at various phases.
In particular, for the development phases before testing, the development activities themselves are subject to defect injection, and the reviews or inspections at the end of the phase activities are the key vehicle for defect removal
The Duration of an activity is given by the difference between the start and the end date of the activity (elapsed time).
It is possible to distinguish various levels of scheduling: a general level, which concerns the whole project, the level of Concept Exploration and Iteration and a level which concerns a single activity.
Each time a Plan is revised, the duration of the activities can be re-estimated; all these estimates have to be collected.
The Estimated Duration of the activities provides information on the quality of the process, when compared with the actual values.
The collection of all estimations, compiled in the various planning activities, constitutes the complete history of the scheduling, allowing comparison of the various estimations with the actual values and between them.
The comparison between the actual and the planned duration values allows calculation of the Schedule Adherence and helps us to understand and evaluate the quality of the adopted process.
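A minimal sketch of these two measures, assuming Schedule Adherence is taken as the ratio of planned to actual duration (the text does not fix a formula, and the dates below are hypothetical):

```python
from datetime import date

def duration_days(start: date, end: date) -> int:
    """Duration of an activity: the difference between its start and end
    dates (elapsed time)."""
    return (end - start).days

def schedule_adherence(planned_days: int, actual_days: int) -> float:
    """Assumed definition: planned over actual duration; 1.0 means exactly
    on schedule, values below 1.0 indicate an overrun."""
    return planned_days / actual_days

planned = duration_days(date(2024, 3, 1), date(2024, 3, 21))  # 20 days planned
actual = duration_days(date(2024, 3, 1), date(2024, 3, 26))   # 25 days actual
print(schedule_adherence(planned, actual))  # 0.8
```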
These data indicate the number of staff members lost, and the number added either to replace those lost or to reach the estimated staff number.
These staff numbers should be recorded per level and tracked whenever there are changes, with a suggested period of one month.
By tracking the staff addition and losses, it is possible to record the actual staff for each time period and compare it with the estimates.
Metric: staff turnover, which provides information on the percentage of staff losses.
Note that staff changes allow the calculation of the 'turnover' affecting staff stability: the loss of one or more people, even if they are replaced so that the staff number at a certain level remains unchanged, can significantly affect productivity.
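A minimal sketch of the turnover metric, assuming it is computed as losses over average staff size for the period (a common definition; the text only names the metric, and the numbers below are hypothetical):

```python
def staff_turnover(losses: int, average_staff: float) -> float:
    """Percentage of staff losses over a period, relative to the average
    staff size in that period (assumed definition)."""
    return 100.0 * losses / average_staff

# e.g. 2 people lost in a month from an average staff of 25
print(staff_turnover(2, 25))  # 8.0 (percent)
```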