Use of Expanded Input and Output Sets in a General Parametric Cost Model



1. Abstract

This paper describes a collection of hardware development cost modules that have been in formulation and use for several years. The work grew out of a need to improve equation logic, to make better use of available definition information, and to better integrate resources and schedule in model output. In these models, mass is largely replaced as a size parameter by parameters that address the number and complexity of parts, resulting in the equivalent of instruction-count intermediate variables; additionally, some existing parameters are decomposed into phased subsets. The richer input sets aid definition and make possible more logical modeling relationships. They also facilitate the decomposition of project-unique subsystems into common, familiar elements, which form a database for the definition of new assemblies and subsystems. Model output includes hours, dollars, and staffing levels for functional elements at the WBS input level, with resource output calculated for specific schedule phases and time increments. This granularity of output increases both the overall amount of relevant information for analysis and, more specifically, the number of potential points of comparison between model output and other sources. The combination of a richer input set and greater output depth promotes understanding of model results, increases the usefulness of cost data, and stimulates model development through feedback.

2. SSP (Subsystem Phase model)

  • Collection of development cost modules
  • General model type, with substantial logic, calibrated to data
  • Modules: mechanical, electronic, chip, software (COCOMO II)
  • In formulation and use for several (the last 12) years
  • Successive versions of the model used on a variety of projects, including un-manned space projects (GIFTS, GLAST, MRO, SIM, assorted instruments) and manned projects (Transhab, SS Propulsion Module)
  • An ongoing development project, not static
  • Key distinguishing characteristics:
    - Developed from detailed, sub-project-level data
    - Uses a relatively large input set, including number-of-instructions-type size parameters
    - Produces granular output for analysis: hours, staffing levels, and dollars by function, in discrete phases and uniform time units, with both resource and time output at the WBS level of definition

Introduction

The current name of the subject model, for lack of a better label, is SSP (Subsystem Phase Model). It is a general-type development cost model with a moderate level of algorithmic complexity and logic. Separate modules for mechanical, electronics, chip, and software compromise the general-ness to a small extent. The primary subject of the paper is the hardware part of the model, and more emphasis has been given to the mechanical module, because that is seen as the more unique part of it.

The model algorithms have been in development for about 12 years. During that time the author has used successive versions of the model for a variety of development cost estimates and analyses, including un-manned space projects (GIFTS, GLAST, MRO, SIM, and assorted space science instruments) and conceptual manned space projects (Transhab and the Propulsion Module, both intended for the space station). The model is an ongoing development project; while the core equations have not changed substantially for some time, improvements (hopefully) in functionality occur more or less annually.

Key distinguishing characteristics of the model are: (1) It was developed from detailed, sub-project-level data, as opposed to summary project-level data. (2) It uses a relatively large input set, including number-of-instructions-type size parameters. (3) It produces relatively granular output for analysis: hours, staffing levels, and dollars by function, in discrete phases and uniform time units, with both resource and schedule output produced at the work breakdown structure (WBS) level of definition used by the modeler.

3. Goals of Cost Modeling

  • General: to provide as much useful information about cost and schedule as possible, in order to support project planning and to maximize return on investment
  • Next level:
    - To maximize use of available definition information
    - To provide detailed cost and schedule output, with many potential points of comparison for analysis
    - To model the process as faithfully as possible, providing useful feedback to the modeler and maximizing understanding of model results (constructiveness)
    - To make the modeling process as systematic as possible, minimizing analyst judgment in the process

At the risk of sounding trite, I will briefly try to list my version of parametric cost modeling goals. The general goal is to provide as much useful information about cost and schedule as possible, in order to support project planning and to maximize return on investment. A little more specifically, the objectives are: (1) To maximize the use of available definition information. (2) To provide detailed cost and schedule output, with many potential points of comparison for analysis. (3) To model the resource-consumption process as faithfully as possible, so as to provide useful feedback to the modeler and to maximize the understanding of model results (constructiveness, in the COCOMO sense). (4) To make the modeling process as systematic as possible, and to minimize the need for analyst judgment in the process.

While I doubt that many cost people would question the desirability of these goals in general, I think most of us would agree that, due to the natural limitations of data production and the complexity of the processes, we can only hope to make slow, sporadic progress toward achieving them. Additionally, I think it is difficult to make progress in all of the goal areas at once, but I have generally found that forcing progress in goals 1 and 2 leads to improvement in 3 and 4. Or, in other words, formulating a hypothesis, no matter how crude, stimulates the data collection and discovery process.

4. Original motivation for change: mass-based equations

  • In practice, observed many cases where cost should be less sensitive to mass than model results indicated
  • Logic and experience tell us that mass is not an ideal predictor of cost:
    - Mass was originally used because it was available
    - In the global (large-mass-range) domain there is often a positive correlation between mass and cost, but in the local (small-mass-range) domain results are inconsistent: correlation may be insignificant (no relationship), or even negative (inverse relationship)
  • The need to use log-log regression to obtain a good fit should be a warning sign: log-log r^2 is misleading; error in the original units is the relevant measure (see the sketch after this slide)
  • Observed that other parameters appeared to be more related to cost than mass; also true for fabrication costs when material cost is excluded

Motivation for Change

The author's original training in hardware parametric cost modeling was primarily with mass-based models, including the commercial general model PRICE H™. Initially the author was tasked mostly with working on model output (post-) processing tools, but in time was drawn into evaluation of model algorithms and (against his better instincts) into performing cost analysis from beginning to end.

It was the process of actually using the models (doing real estimates) that led to discomfort with mass as a primary model variable. It was observed that in numerous cases cost output was more sensitive to mass than seemed reasonable, and that in selected cases there seemed to be no relationship between cost and mass, or even worse, an inverse relationship. In particular, engineering costs appeared to be less related to mass than to other variables.

Gradually, it became clear that mass had become a primary cost model variable partly out of convenience: there almost always was a mass estimate available at some level of confidence, and in the large-mass-range domain, where most of the early parametric regressions were performed, there usually was a positive correlation of some magnitude between mass and cost. But the fact that so many of the regression results were power functions, obtained with log-log regression, was an indication that the results might not hold so well for analysis in the small-mass-range domain. Additionally, when the results of particular regressions were examined more closely, and the regression error was translated into the original linear units, it was evident that there were high levels of uncertainty in many of these regression results.

Examination of particular equipment types indicated not only that cost was poorly related to mass or not related at all, but also that there were other parameters that obviously were related to cost, and that many of these were recorded in data sheets and estimated to some level of confidence in conceptual design. (This was more obvious for engineering costs, but also became evident, when material costs were excluded, for fabrication labor costs.)
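
To make the r^2 warning concrete, here is a minimal sketch (Python with NumPy, entirely synthetic data; all numbers are invented for the demonstration) that fits a power function by log-log regression and then translates the residuals back into original units, where the percent errors are typically far less flattering than the log-space r^2 suggests.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cost-vs-mass data with multiplicative noise, the situation in
# which a log-log fit tends to look deceptively good.
mass = rng.uniform(5.0, 500.0, 40)
cost = 12.0 * mass**0.7 * rng.lognormal(0.0, 0.5, 40)

# Ordinary least squares in log space: log(cost) = log(a) + b*log(mass).
X = np.column_stack([np.ones_like(mass), np.log(mass)])
y = np.log(cost)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred_log = X @ coef
r2_log = 1.0 - np.sum((y - pred_log)**2) / np.sum((y - y.mean())**2)

# Translate the fit back to original units and measure the error there.
pct_err = np.abs(cost - np.exp(pred_log)) / cost

print(f"log-log r^2:             {r2_log:.2f}")
print(f"median error (original): {np.median(pct_err):.0%}")
print(f"90th pct error:          {np.quantile(pct_err, 0.9):.0%}")
```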

5. Supplementing mass in traditional, mass-based CERs

  • Introduce more variables, multivariable regressions
  • Some improvement (mass has a smaller effect on cost), but still many problems, e.g.:
    - The power function is often retained as the functional form
    - Mass often still has a significant effect (large exponent value), which in turn diminishes the effects of other parameters
    - Dependence between parameters conflicts with assumptions of independence and produces poor regression results; how frequently do analysts check for dependence between assumed independent variables? (see the sketch after this slide)
    - Effects of new parameters are not modeled well by multivariable power functions and other convenient functional forms

It is noted that many models, including single-equation-type and general-model-type, appear to have made attempts to address the mass dependence problem. With traditional mass-based cost estimating relationships (CERs), this involves adding more parameters to the analysis. While this practice usually has had the positive impact of lessening the effect of mass in the model, in the author's opinion the improvement in most of these cases falls short of solving the problem. Some common indications are: (1) In many cases the power function form is retained. (2) Mass often still has a significant effect (large exponent value), which in turn diminishes the effect of other parameters. (3) Dependence between parameters, as measured by correlation, conflicts with the assumptions of independence upon which regression techniques are based. (4) Effects of new parameters are not modeled well by multivariable power functions and other functional forms convenient for regression.
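
The following minimal sketch shows the dependence check the question above asks about: before running a multivariable regression, inspect the correlations among the supposedly independent variables. The data and variable names are hypothetical; in practice mass often tracks other size-like parameters, as simulated here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_parts = rng.integers(10, 300, 30).astype(float)
mass = 0.4 * n_parts * rng.lognormal(0.0, 0.3, 30)  # mass tracks part count
power = rng.uniform(10.0, 400.0, 30)                # mostly unrelated

names = ["mass", "n_parts", "power"]
corr = np.corrcoef(np.column_stack([mass, n_parts, power]), rowvar=False)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        flag = "  <-- dependent!" if abs(corr[i, j]) > 0.7 else ""
        print(f"corr({names[i]}, {names[j]}) = {corr[i, j]:+.2f}{flag}")
```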

6. Supplementing mass in mass-based general models

  • Add relevant parameters and logic; get "families" of cost-to-mass curves, with complexity or scale factors (intercepts) determined by the new parameters
  • Does not solve the mass problem: the exponent on mass, typically 0.7 +/- 0.2, makes its effect significant, thereby reducing the effect of other parameters
  • Result: analysts have to do separate calibrations for many cost elements, many more than would be necessary if the equations were modeling the process well; this effectively reduces the "general-ness" of the model
  • Additional problem: existing calibrations are mass dependent, because calibration (complexity) values were obtained for a given mass; i.e., we have mass/complexity "pairs" (see the sketch after this slide)
    - Makes calibration values suspect for like equipment with different mass
    - Inhibits the ability to estimate new equipment

With general mass-based models, additional parameters are added in a slightly more complex algorithmic environment, including in some cases equation forms that are not traditional regression forms. With the addition of "complexity" or calibration parameters, there is effectively a "family" of equations, usually still mass-based, in which the intercept of the equation is, or is determined by, the calibrated complexity value. In some cases the slope of the equation, usually an exponent, is also a function of the complexity parameter.

In the author's opinion, the general models (of which he is aware) that have retained mass as a primary variable have not successfully solved the mass problem. The most significant reason is that mass still has a large effect on output, thereby reducing the effect of other parameters. The most noticeable result is that new complexity or calibration values are needed for many types of hardware equipment. This indicates that the model equations with their full set of parameters are not modeling the process very well, that perhaps fewer calibrations would be needed if there were better size parameters, and that the model is not as "general" as it could be. Further adding to the problem is the observation that the calibrations, because of the strong influence of mass in the model, are essentially mass dependent. Those familiar with these models refer to their calibrations as "mass-complexity pairs" because significant fractional changes in mass require a new calibration value. Technically, if the mass changes even a little but the functionality does not, a new complexity or calibration value should be considered. This problem particularly inhibits the ability to estimate new equipment.
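
A toy illustration of the "mass-complexity pair" problem described above. The family-of-curves form matches the slide's description (calibrated complexity as the intercept of a mass power function); all numeric values are illustrative, not taken from any actual model.

```python
B = 0.7  # exponent on mass, in the typical 0.7 +/- 0.2 range

def cer(complexity: float, mass: float) -> float:
    """Family-of-curves CER: cost = complexity * mass**B."""
    return complexity * mass**B

# Calibrate the complexity value from one known point: cost 900 at mass 50.
ref_cost, ref_mass = 900.0, 50.0
complexity = ref_cost / ref_mass**B

print(cer(complexity, ref_mass))         # 900.0 by construction
print(cer(complexity, 0.7 * ref_mass))   # ~701: same functionality, 30%
# lighter, yet the CER insists cost falls -- so the calibration is only
# trustworthy at (or near) the mass it was derived from.
```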

7. Cost vs. Mass Curve

Figure 1. Cost = a * mass^b, showing the effect of the exponent in a typical mass-based power function.

Figure 1 plots relative cost versus mass for the typical power function on a linear scale, and shows the relative effect of different values of the exponent on mass.
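
A quick numeric companion to Figure 1: the relative cost change when mass doubles, for the power form cost = a * mass^b (the coefficient a cancels in the ratio). Exponent values span the "0.7 +/- 0.2" range cited for mass-based general models.

```python
# Relative cost when mass doubles, for cost = a * mass**b.
for b in (0.5, 0.7, 0.9, 1.0):
    print(f"b = {b}: doubling mass multiplies cost by {2**b:.2f}")
# b = 0.5: 1.41,  b = 0.7: 1.62,  b = 0.9: 1.87,  b = 1.0: 2.00
```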

8. More on mass-based general models

  • The strong non-linearity of the equations with respect to the size parameter limits the capability to model at lower WBS input levels (see plot)
  • Mass has been eliminated in some general models; example: PRICE M™

A more subtle problem in general models is that the strong non-linear effect of the size parameter limits WBS input level flexibility. That is, if one attempts to decompose an existing element into smaller elements, which is theoretically a highly desirable feature of the general model approach, the fact that the core equations are non-linear with respect to mass complicates the process. See Figure 2 for an example. This phenomenon in turn limits the potential depth of output. In some cases, mass has been effectively eliminated from electronics models whose precursors were mass-based. Of the well-known models, PRICE M™ is an example. The author is not aware of any structural or mechanical models, other than SSP, that are non-mass-based, and would like to be informed of such if they exist.

9. Simplified effect of varying WBS level

Figure 2. If the exponent on mass = 0.7 and we model at a lower level with an average of 5 new elements per 1 old (each with mass approximately 20% of the original element), and no other changes, the average cost increase is 62%. The effect varies with exponent size.
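
A worked check of the Figure 2 arithmetic: replace one element with five elements of roughly 20% the mass each, holding everything else constant, under cost = a * mass^b (the coefficient a cancels in the ratio).

```python
pieces = 5
for b in (0.5, 0.7, 0.9, 1.0):
    ratio = pieces * (1.0 / pieces)**b   # new total cost / old cost
    print(f"b = {b}: decomposition multiplies cost by {ratio:.2f} "
          f"({ratio - 1:+.0%})")
# At b = 0.7 the ratio is 1.62, the 62% increase quoted on the slide;
# only at b = 1.0 (linear in size) is decomposition cost-neutral.
```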

10. Inverse Example: Light-weighting

  • The degree of light-weighting (structure, optics) is often a design trade variable and typically has a large effect on cost and schedule that is inversely related to mass
  • With a typical mass-based CER, results as mass varies are probably opposite to the observed cost trend
  • Use of non-mass parameters (hog-out, material, etc.) may help, but the effect of mass is (always?) opposite to the observed cost trend; therefore:
    - Effects of non-mass parameters must be exaggerated to compensate, or
    - Calibrations are needed, but are only good for a specified percentage of light-weighting; worse still if the percentage of light-weighting was not specified in the calibration data
  • Material costs may also be off unless a separate materials algorithm uses the original, un-machined mass
  • Poor equation logic with respect to size limits the value and applicability of sensitivity analysis, ideally a key benefit of parametric modeling

Consider the problem posed by an inverse cost-to-mass relationship. The degree of light-weighting (in a structure or optical assembly) is often a design trade variable and typically has a large effect on cost and schedule that is inversely related to mass. With a typical mass-based CER, the results as mass varies are probably opposite to the observed cost trend, and there is little recourse in the form of other parameters. With a multivariable or general model, use of non-mass parameters (hog-out, material type, etc.) may help, but the effect of mass is always opposite to the observed cost trend. Therefore, either the effects of the non-mass parameters must be exaggerated to compensate, or calibrations are needed, but these are only good for a specified fraction of light-weighting. If the fraction of light-weighting was not specified in the calibration data, the problem is worse. Also, material costs may be in error unless a separate materials algorithm uses or computes the original, un-machined mass.

Thus, poor equation logic with respect to size limits the value and applicability of sensitivity analysis, ideally a key benefit of parametric modeling.
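
An illustrative calculation (numbers invented) of the compensation problem described above: a part machined down to 60% of its original mass typically costs more in practice (more machining, tighter tolerances), but the mass-based power CER can only move the other way.

```python
b = 0.7
mass_ratio = 0.6                               # light-weighted / original mass
print(f"CER cost ratio: {mass_ratio**b:.2f}")  # ~0.70: predicts *cheaper*
# Any hog-out or material factor must therefore be exaggerated just to
# cancel this built-in decrease before it can model the real increase.
```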

11. Brief history of development (1)

  • Identified a potential alternate size parameter: experimented with number of parts (NP) from mechanical data
  • First NP-based engineering equations: started with an engineer's rule of thumb for hours per drawing, adapted to a full parameter set with NP as the primary variable
  • The new equation, using NP data from mechanical drawing sets, modeled the labor-hour trend in the data well; tested as an alternate algorithm in existing models
  • Long period of use, development, and refinement: learned to estimate NP and developed guidelines for it
  • Expanded use of NP in the models: adapted the approach to electronics (number of pins); eventually used the NP relationship as the core engineering equation and developed a general model around it

Brief history of development

The point of departure, in the mechanical area first, was a common engineer's rule of thumb for engineering hours per drawing. Number of parts (NP) was identified as an initial alternate size parameter because it was relatable to drawings, because there was a small amount of heritage for it in other models, and because it could both be estimated by engineers and collected from actual drawing sets. Initial analysis of a detailed data set (including drawings) showed a strong relationship between engineering hours and NP, with number of drawings as an intermediate variable (the number of equivalent D-size detail drawings was approximately equal to the number of unique parts). An initial core equation was composed from the rule of thumb, including relationships for most relevant general model parameters, which were adapted from other general models and logic. Then this initial equation, with a little additional adaptation, was used to reproduce, to an encouraging degree, the engineering hours in the initial data set.

This core engineering equation was initially tested as a pre-processor for existing models, then incorporated into existing models as an alternate engineering algorithm, and eventually became a primary algorithm. Over time the same general process was used to expand the NP notion into a full equation set and to adapt it to electronics. Early general model prototypes contained substantial amounts of logic inherited from other models; those equations were eventually replaced with NP-type equations. This approach allowed for constant "real time" testing of the model, with results during use providing the direction for development. A substantial amount of effort was expended in developing guidelines for estimation of the size parameters.

12. Brief history of development (2)

  • Gradual realization of the need for a new parameter for mechanical
  • Went back to the drawing set data and "counted" instructions in drawings: number of instructions = sum of dimensions, parts list entries, and special instructions ("notes") (see the sketch after this slide)
  • Developed guidelines for "average number of instructions per part"
  • For model input, use the guidelines (as complexity categories, or estimate drawing size), a database (recorded technical definition), or expert judgment (engineers)
  • Use of two parameters simplifies and improves the process

During this relatively long period (years) of use, development, and refinement, it became apparent that the mechanical module needed a better specification of size. The problem was exemplified by the observation that one detail drawing, corresponding to one unique part, could range in size (and effort) from a very small size A or B to several large E drawings. Until the need to change became overwhelming, this problem was solved by defining a single part as equivalent to a size D drawing, which worked satisfactorily for a long time but made estimating NP more indirect and complicated than necessary. Therefore an additional parameter, "average number of instructions per part" (NumInstr), was postulated. Returning to the drawing set data, a set of rules was defined for counting NumInstr in the drawings. The basic definition is: number of instructions = sum of dimensions, parts list entries, and special instructions ("notes"). A remarkably good relationship between "instructions" per drawing and size of drawing was found, and this was adopted as a set of guidelines for estimating NumInstr. (See Figure 3.) For input determination, one may use these category guidelines, existing data, or engineering (expert) judgment. Although the mechanical size information is now communicated in two parameters, the input process, and especially the recording of technical data, is simplified and improved.
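
A minimal sketch of the counting rule above, applied to hypothetical per-drawing tallies (the function name and all counts are invented for illustration).

```python
# Per the definition in the text:
# instructions = dimensions + parts list entries + special instructions.
def num_instructions(dimensions: int, parts_list_entries: int,
                     notes: int) -> int:
    return dimensions + parts_list_entries + notes

# Three detail drawings from an imagined drawing set:
tallies = [(38, 5, 2), (55, 8, 4), (27, 3, 1)]
counts = [num_instructions(*t) for t in tallies]

num_parts = len(tallies)                       # one unique part per drawing
avg_instr_per_part = sum(counts) / num_parts   # the NumInstr model input
print(f"NP = {num_parts}, NumInstr = {avg_instr_per_part:.0f}")
```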

13. Brief history of development (3)

  • Decompose output into phases; integrate with schedule calculations
  • Realized that much informal data exists to support allocation of cost to phases, as for software models
  • Started with a conceptual prototype, gradually refined; still refining
  • Used standard development phases and simple assumptions to calculate resources per unit time
  • Simple level-of-effort assumptions, applied at a low level, were determined to be more appropriate than the distribution-spreading functions traditionally used for high-level calculations (see the sketch after this slide)
  • Composite sum results vary with input assumptions; they sometimes differ significantly from results obtained with traditional higher-level functions, but in a reasonable way, according to schedule assumptions
  • Modeling of the effect of schedule constraints on the cost profile is now possible

Another problem that gradually became apparent was the need to produce output in time increments. The approach employed for a long time had been to use the standard triple-Beta weighted-sum spread function to distribute total development cost over time according to an "x % spent by x % time" guideline. However, when the standard approach produced a result substantially different from a particular project plan under review, and the difference became an issue, the author decided that the spread-function approach was overly simplistic for a serious comparison to a project plan.

It was determined that enough informal data and expertise existed, and that the current model output was detailed enough, to support an initial conceptual allocation of resources to phases. Since the standard WBS level of definition was fairly low, it was decided to start with essentially simple level-of-effort allocations of hours and dollars to standard development phases at the WBS level of definition, and then, after translating the results to monthly increments at the low level, to sum the monthly costs for accumulation at higher levels. Testing of the new algorithms showed that under ideal assumptions of development schedule, the results were comparable to those obtained with the classic spreading functions, but that when realistic schedule constraints were introduced, the project-level output would deviate from the traditional forms in reasonable ways. This was very encouraging, and the basic method was adopted, with a substantial amount of refinement over time.
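
A minimal sketch of the level-of-effort allocation described above: each WBS element's phase cost is spread uniformly over that phase's months, then the monthly values are summed upward to build the composite profile. All element names, phase dates, and hours are hypothetical.

```python
from collections import defaultdict

elements = {
    # element: [(phase start month, end month exclusive, total hours), ...]
    "bracket": [(0, 6, 1200.0), (6, 14, 2400.0)],   # design, then protos
    "housing": [(2, 9, 2100.0), (9, 14, 1500.0)],
}

monthly = defaultdict(float)
for phases in elements.values():
    for start, end, hours in phases:
        per_month = hours / (end - start)   # level of effort within a phase
        for month in range(start, end):
            monthly[month] += per_month

for month in sorted(monthly):
    print(f"month {month:2d}: {monthly[month]:6.1f} hours")
```

Shifting any element's phase dates (a schedule constraint) reshapes the summed profile directly, which is how the composite can deviate from the classic spread-function forms in schedule-driven ways.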

14. Brief history of development (4)

  • More detail in output leads to more input changes: allocation of prototypes and other associated parameters to phases; probably more to go in this area
  • Beginning of a common element database: so far a collection of elements from prior work, organized in subsystems as originally used. A more sophisticated approach is not yet defined; the structure needs to be considered carefully because much relevant information is in the subsystem context (above the element level).

The most noticeable of these refinements has been to expand the phases, in particular to decompose the prototype phase into several phases, in order to better model the actual prototype strategy used by a project. This was accompanied by a corresponding decomposition of the prototype inputs, with associated guidelines. Since this has been a fairly recent development, it probably will continue to undergo refinement.

One other noteworthy development, quite immature at this point, is the beginning of a common element database for streamlining and improving the input process. Currently the element data is contained in old project input sets, in the standard subsystem input/output format. More sophisticated ways of storing and accessing the data are under consideration for future application. However, some care will need to be given to the method used, because so much of the relevant input information is in the subsystem context (i.e., the next higher level above elements).

15. Model Description – Input

  • Modules: mechanical, electronic, chip, software (COCOMO II)
  • Subsystems: the basic unit of a module; elements: the basic unit of a subsystem
  • Subsystem-level parameters: all schedule-related parameters, by phase; labor rates, overheads, and global factors for recording data, calibration, and schedule definition (their use sometimes indicates areas of algorithm weakness)
  • Element-level parameters:
    - Size parameters: number of parts/pins/gates, number of instructions per part (mechanical), mass for structural material costs
    - Other parameters: engineering and design levels; material description, tolerances, other fabrication detail; number of units, including prototypes per phase; integration and test difficulty; make-buy decisions; design integration

In the integrated higher-level model are four modules: mechanical, electronic, chip, and software. The software module is an adaptation of COCOMO II with output in the form used by the other modules.

Within a module, the basic unit of input is the subsystem, which is composed of elements. Subsystem-level parameters include all schedule-related parameters by phase, labor rates, overheads, global parameters, and global factors for adjusting the effects of model output in functional elements and schedule phases. Global factors are needed for recording data (hopefully in decreasing amounts as the data collection process proceeds), for calibration, and for schedule definition. Sometimes their use indicates poor modeling or areas of algorithm weakness in the model.

Element-level size parameters are number of parts and average number of instructions per part for mechanical, number of pins for board mode, and number of gates or transistors for chip mode. Other parameters at the element level include or address engineering and design levels, material description, tolerances, other fabrication detail, number of units (including prototypes per phase), integration and test difficulty, make-buy decisions, and design integration. Figures 4 and 5 show typical input sets for the mechanical and board modules. (An illustrative sketch of this input hierarchy follows.)
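
A sketch of the input hierarchy described above (module > subsystem > element), for the mechanical mode. Field names are illustrative, not the model's actual identifiers, and only a fraction of the parameter set is shown.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    num_parts: int                  # size parameter
    instr_per_part: float           # average number of instructions per part
    mass_kg: float                  # retained for structural material costs
    eng_level: float = 1.0
    protos_per_phase: tuple[int, ...] = (1, 1)

@dataclass
class Subsystem:
    name: str
    labor_rate: float               # subsystem level: rates, overheads, schedule
    phase_months: dict[str, int] = field(default_factory=dict)
    elements: list[Element] = field(default_factory=list)

bench = Subsystem("optical bench", labor_rate=120.0,
                  phase_months={"design": 8, "proto": 10})
bench.elements.append(Element("main plate", num_parts=35,
                              instr_per_part=40.0, mass_kg=14.0))
print(bench)
```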

16. Model Description – Input, mechanical

Figure 4. Input for mechanical mode

17. Model Description – Input, board

Figure 5. Input for electronics mode

18. Model Description – Output

  • Basic model output is at the subsystem level and consists of hours, dollars, and staffing levels, by functional elements, in discrete phases and uniform time units (months)
  • More granularity in input yields more detail in output
  • Detailed comparison by function, phase, and time is possible at the lowest subsystem level
  • Output is accumulated at higher levels: hours or staffing levels and dollars, in summary form and by month

Basic model output is at the subsystem level and consists of hours, dollars, and staffing levels, by functional elements, in discrete phases and uniform time units (months). The lower the level of input, the more detailed the output. Since the phasing and monthly distribution is done at the subsystem level, detailed comparison and analysis can be done at a relatively low level. Figures 6 and 7 show typical subsystem-level output by phase and month. Output is accumulated at higher levels by hours (or staffing level) and dollars, in both summary form and by month. Figures 8 and 9 show typical higher-level output.

19. Output, subsystem level, by phase

Figure 6. Output, subsystem level, by phase

20. Output, subsystem level, by month

Figure 7. Output, subsystem level, by month

21. Output, summary level, by category

Figure 8. Output, summary level, by category

22. Output, summary level, staff level by month

Figure 9. Output, summary level, staff level by month

23. Model Description – Algorithms (1)

  • Sources:
    - Data from a variety of sources, collected at as detailed a level as possible: resources, schedule, definition (including drawing sets)
    - Interviews with project personnel
    - Expert judgment: interviews with engineers in their areas of expertise
    - Other models and the cost analysis literature: secondary and generic algorithms, logic (schedule, integration), etc.
  • Model maturity:
    - The mechanical module is the most developed and unique
    - The board module is less well developed but has a long heritage of use
    - The chip module is less mature, not a primary tool; used to integrate estimates from other sources
    - COCOMO II is used for software, with some adaptations for compatibility with the other modules

Information for a model of this kind necessarily comes from a large variety of sources. Resource, schedule, and definition information was collected, mostly at a sub-project level, over a long period of time (mostly in spare time). Interviews with project personnel were essential to get the level of detail needed, and sometimes led to unexpected new sources. Sources also included non-project-specific interviews with engineers (expert judgment), for verification of conclusions drawn from data and for logic for which data was unavailable. Information also came from algorithms used in other cost models and from the cost analysis literature, although over time most of these algorithms have been replaced. In sum, no potential source of data was rejected. Of course, the actual data had an enormous range of usefulness, but all of it was kept, because there was always the possibility of new information coming in that could corroborate or verify formerly un-usable data.

Of the hardware modules, the mechanical module is the most mature and unique. The board module has been used with solid results for a long time and is, in the author's opinion, a significant improvement over its mass-based predecessor, but its input set is less well developed than is desired. The chip module is the least well developed of the three; it is typically not used as a primary estimating tool, but as an alternate model and also to import outside estimates into the larger model input/output framework.

24. Model Description – Algorithms (2): Core Engineering Equations (Mechanical)

  • Linear function of the size parameters (number of parts, number of instructions per part)
  • Num drawings = num parts * (num instructions/part) * (1/instructions per D-drawing) * unique fraction * assembly drawing fac (one detail drawing per unique part, plus assembly drawings); a check on the process is the engineers' own estimate of drawing count
  • Eng base = num drawings * fab cplx fac * platform fac * eng level * calibration fac
  • The fabrication complexity factor is much less influential than in mass-based models, because the size parameters are dominant
  • Board and chip modes are similar: board mode uses number of pins as the size parameter; chip mode uses number of gates or transistors

The core engineering equations are linear with respect to size. In the mechanical module, the basic scheme is as follows (num = number, fac = factor, cplx = complexity, fab = fabrication):

  Num drawings = num parts * (num instructions/part) * (1/instructions per D-drawing) * unique fraction * assembly drawing fac

  Eng base = num drawings * fab cplx fac * platform fac * eng level * calibration fac

The estimate of the number of drawings essentially amounts to one detail drawing per unique part, with scaling for part complexity, plus an estimate for assembly drawings based on the total number of parts. This intermediate result may be compared with independent estimates of drawing count by project engineers. The fabrication complexity factor used here is much less influential than is typical for mass-based models, because of the dominance of the size parameters, which account for much of the design complexity.

The core engineering equations for the board and chip modules are similar in concept to the mechanical, but are based on their own size parameters (number of pins, transistors, or gates). (A runnable transcription of the mechanical scheme follows.)
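
A direct transcription of the core mechanical engineering scheme above. The equation structure comes from the text; every numeric value below is a placeholder for illustration, not a model calibration.

```python
def num_drawings(num_parts: int, instr_per_part: float,
                 instr_per_d_drawing: float, unique_fraction: float,
                 assembly_drawing_fac: float) -> float:
    # One equivalent D-size detail drawing per unique part, scaled by the
    # instruction count, plus an allowance for assembly drawings.
    return (num_parts * instr_per_part / instr_per_d_drawing
            * unique_fraction * assembly_drawing_fac)

def eng_base(drawings: float, fab_cplx_fac: float, platform_fac: float,
             eng_level: float, calibration_fac: float) -> float:
    return drawings * fab_cplx_fac * platform_fac * eng_level * calibration_fac

# Hypothetical element: 30 parts averaging 40 instructions each, 50
# instructions per equivalent D drawing, 80% unique parts, and a 15%
# adder for assembly drawings.
d = num_drawings(30, 40.0, 50.0, 0.80, 1.15)
print(f"estimated drawings: {d:.1f}")   # cross-check against engineers
print(f"engineering base:   {eng_base(d, 1.1, 1.2, 1.0, 1.0):.1f}")
```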

25. Model Description – Algorithms (3): Core Fabrication Equations (Mechanical)

- Linear functions of the size parameters (number of parts, number of instructions per part, surface area)
- Complexity factors (relative cost per size unit) are nonlinear functions of precision, material (machinability index), hog-out, assembly difficulty, surface finish, and spec level
- unit fab = num instructions/part * num parts * non-unique learning fac * cplx fac * yield fac * calib fac
- unit surface finish = surface area * cplx fac * yield fac * calib fac
- protos fab = unit fab * num protos * protos learn fac * eng level fac
- prod fab = unit fab * mfg qty * mfg learn fac * eng change fac
- material cost = cplx fac (matl) * mass * hog-out fac * yield fac
- (The board and chip modules are similar in concept)

The core fabrication equations are also linear with respect to the size parameters, including the as yet unmentioned parameter surface area. In the mechanical module, the basic scheme is as shown above (protos = prototypes, eng = engineering, mfg = manufacturing). The complexity factors (relative cost per size unit) are nonlinear functions of precision, material (machinability index), hog-out, assembly difficulty, surface finish, and specification level. Again, the core fabrication equations for the board and chip modules are similar in concept to the mechanical ones, but are based on their own size parameters, and are somewhat less well developed than the mechanical equations.
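A companion sketch of the fabrication-side scheme, under the same caveat: the function and parameter names follow the slide's abbreviations, but the signatures and any values are assumptions for illustration only.

```python
# Hypothetical sketch of the mechanical-module core fabrication equations,
# following the factor names on the slide. Values are placeholders.

def unit_fab(instr_per_part, num_parts, non_unique_learning_fac,
             cplx_fac, yield_fac, calibration_fac):
    """Single-unit fabrication effort, linear in total instruction count."""
    return (instr_per_part * num_parts * non_unique_learning_fac
            * cplx_fac * yield_fac * calibration_fac)

def unit_surface_finish(surface_area, cplx_fac, yield_fac, calibration_fac):
    """Single-unit surface-finish effort, linear in surface area."""
    return surface_area * cplx_fac * yield_fac * calibration_fac

def protos_fab(unit, num_protos, protos_learn_fac, eng_level_fac):
    """Prototype-phase fabrication effort."""
    return unit * num_protos * protos_learn_fac * eng_level_fac

def prod_fab(unit, mfg_qty, mfg_learn_fac, eng_change_fac):
    """Production fabrication effort."""
    return unit * mfg_qty * mfg_learn_fac * eng_change_fac

def material_cost(material_cplx_fac, mass, hogout_fac, yield_fac):
    """Material cost: note that mass survives here as a driver even though
    it is not a primary size parameter elsewhere in the model."""
    return material_cplx_fac * mass * hogout_fac * yield_fac
```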

26. Model Description – Algorithms (4): Optics and Secondary Equations

Optics algorithms:
- Parameters and algorithms added for an optics sub-model within the mechanical module: surface area (size), surface finish, number of optics elements, optics complexity
- Core engineering equation derived from the optics literature; nonlinear with respect to surface area; moderate maturity
- Shares common fabrication equations with non-optics mechanical elements; linear with respect to surface area

Secondary equations:
- Use output from the core equations
- Range from linear factors to slightly more complex algorithms
- Cover engineering and fabrication sub-categories, tooling, material, learning factors, project management, QA, etc.

The mechanical module also contains an optics sub-model. It uses the existing parameters surface area and surface finish, plus parameters added specifically for optics: number of optics elements and optics complexity. The core engineering equation is derived from published optics sources, with additional logic and calibration, and is nonlinear with respect to surface area. Although it has been used for some time with good results, it needs to be expanded and further developed. The fabrication equations are the same ones used in the regular mechanical module for general fabrication and surface finish, which are linear with respect to surface area.

The secondary equations, which use output from the core equations, range from simple linear-factor equations to slightly more complex algorithms. They cover engineering and fabrication sub-categories, tooling, material cost, project management, QA, etc. Learning factors use the standard numerical approximation of the Boeing curve.
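The learning-factor calculation is the one secondary algorithm named explicitly. A unit-theory ("Boeing") learning curve can be sketched as follows; whether this matches the model's exact numerical approximation is an assumption.

```python
import math

def unit_learning_factor(n, slope):
    """Cost of unit n relative to unit 1 on an s% unit learning curve
    (e.g., slope=0.85 means each doubling of quantity cuts unit cost 15%)."""
    b = math.log2(slope)  # negative exponent for slopes below 1.0
    return n ** b

def cumulative_learning_factor(qty, slope):
    """Average cost per unit over `qty` units, relative to unit 1.
    Direct summation; a common closed-form numerical approximation
    replaces the sum with
    ((qty + 0.5)**(1 + b) - 0.5**(1 + b)) / ((1 + b) * qty)."""
    b = math.log2(slope)
    return sum(i ** b for i in range(1, qty + 1)) / qty

# Example: an 85% curve over 20 production units gives an average
# per-unit factor of roughly 0.62 relative to the first unit.
```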

27. Model Description – Algorithms (5): Integration and Test, Next-Level SE/PM

Integration and test algorithms:
- I&T calculations are done with composite parameter sets drawn from the contributing elements
- Similar in concept to early PRICE H I&T, but with more user control through new parameters that specify test levels for different assemblies and phases (needed to complement the level of input granularity and the project's approach to model testing)

Next-level systems engineering/PM:
- Accounts for higher-level SE effort to coordinate requirements and designs of separate project elements coming from different organizational elements
- Uses a composite of lower-level elements, similar to I&T
- May be used at any assembly/WBS level where appropriate; typically at the project system level or at the point where products from different organizations are integrated

The model's integration and test concept is taken from the approach used in the early (and possibly current) PRICE H™ model, in which the elements being integrated feed into a composite integration element whose size parameter is scaled. This approach is mathematically elegant and appealing, in part because it captures the lower-level definition and because it can be repeated at progressively higher WBS levels; in practice it has both advantages and disadvantages. In this implementation the size parameter is NP, and the size value is determined both by the integration complexity values in the contributing elements (as in PRICE H) and, unlike PRICE H, by user-input test levels for all the prototype phases and manufacturing, which are intended to let the user model a particular project's testing strategy more flexibly. A sketch of the composite idea follows below.

The model also employs an algorithm for next-level systems engineering and project management, which is intended to account for the higher-level effort to coordinate requirements and designs of separate project elements contributed by different organizational elements. This calculation, like I&T, uses a composite representation of lower-level elements and may be employed at whatever assembly or WBS level is appropriate. It is typically used at the project system level or where products from different organizations are being integrated.
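Here is a rough sketch of the composite I&T element just described: each contributing element's NP size is weighted by its integration complexity, and a user-supplied test level scales the composite for a given phase. The field names and the exact combination rule are assumptions, not the model's published formula.

```python
# Sketch of a composite integration-and-test element. The combination rule
# (a complexity-weighted sum of contributor sizes, scaled by a per-phase
# test level) is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    np_size: float           # NP size parameter of the contributing element
    integration_cplx: float  # integration complexity input on the element

def composite_np(elements, test_level):
    """Size of the composite I&T element for one phase."""
    return test_level * sum(e.np_size * e.integration_cplx for e in elements)

# The same composite can be formed again at the next WBS level up, which is
# what allows the scheme to repeat at progressively higher levels.
subsystem = [Element("structure", 120, 0.8), Element("board_stack", 300, 1.2)]
proto_it_size = composite_np(subsystem, test_level=1.0)
```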

28. Model Description – Algorithms (6): Schedule and Probability

Schedule algorithms:
- Core equations based on a power function of effort, similar to COCOMO, plus some additional logic; the real complexity is in the number of phases and the depth of subsystem definition
- Resource profiles are determined by accumulating lower-level, discrete phase output, not by applying theoretical functional forms to high-level output; costs are accumulated at the activity level and bubbled up
- Profiles are therefore sensitive to the nuances of the current schedule input set and input depth

Probability distributions:
- The model currently has three-case output, with the probability distribution calculated off-line
- Low/modal/high (LMH) input for a selected set of parameters
- Monte Carlo simulation, with inputs in the form of probability distributions, may be added later (reusing code from earlier model versions)

The basic schedule algorithms used in the model are simple power functions of effort, similar to those used in COCOMO, with some additional logic necessitated by the interaction between development phases. The real complexity of the schedule is realized in the number of phases and in the user-controlled depth of subsystem definition. The resource profiles, as mentioned earlier, are determined by accumulating lower-level, discrete phase output, as opposed to using high-level theoretical functional forms. This technique makes the output profile sensitive to the whole of the project input set, including the depth of subsystem definition and the effects of the current set of schedule inputs and constraints. (A sketch follows below.)

Unlike previous versions, the current model implementation does not include a Monte Carlo simulation capability for calculating probability distributions of output; this may be added later. It does, however, allow three levels of input for selected parameters, so that three-case output can be obtained. Using this output, probability analysis is done off-line with special-purpose tools, including a program for summing random variables under assumptions of partial dependence.
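As a sketch of the schedule scheme, the core relationship can be written as a COCOMO-style power law, with the project profile produced by accumulating each element's discrete phase output. The coefficients and the uniform monthly spreading below are placeholders, not the model's calibrated logic.

```python
from collections import defaultdict

def phase_duration_months(effort_pm, a=3.0, b=0.33):
    """COCOMO-style schedule law: duration grows as a power of effort.
    The coefficients here are notional, not calibrated values."""
    return a * effort_pm ** b

def spread_uniform(effort_pm, start_month, duration_months):
    """Crude uniform spreading of one phase's effort over its months;
    the real model's within-phase shape is not described in the paper."""
    months = max(1, round(duration_months))
    return {start_month + m: effort_pm / months for m in range(months)}

def accumulate(profiles):
    """Bubble up: sum the discrete monthly output of all lower-level
    phases into one project-level profile."""
    total = defaultdict(float)
    for profile in profiles:          # each profile: {month_index: effort}
        for month, effort in profile.items():
            total[month] += effort
    return dict(sorted(total.items()))
```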

29. Potential Advantages for Modeling (1)

A rich input set and flexibility in WBS level:
- Simplify the input process
- Maximize the capability to capture definition and data, technical and non-technical
- Facilitate the decomposition of project-unique subsystems into more common, familiar elements
- Enable input modifications for differences between historical data and current definition
- Mean that normal use of the input/output set produces a framework for data collection and updates the common element definition

Improved algorithms:
- Increase confidence that modifications to the historical data set will result in reasonable changes in cost, maximizing the usefulness of existing data
- Enhance understanding of modeling results, provide more feedback for the user, and make training exercises more meaningful

The combination of the relatively large input set and WBS-level flexibility simplifies the process of describing the current subject material by serving as a comprehensive checklist for entering relevant information. The goal is to make it as easy as possible for the user to record all of the description and information that bears on the resources and time needed for the project in question; in this way the input process is, as much as possible, self-documenting. The process also produces a framework for data collection: the more depth there is in the input definition, the easier it is to record cost and schedule data (and changes in definition) as the various pieces of information are generated incrementally by the project in progress.

The concept of a list of elements within a subsystem unit has proven very useful for decomposing seemingly project-unique subsystems into lower-level components that are more familiar, and for which there may be pre-existing definition and perhaps even cost data. At the lower level it is also easier to account for the differences between the database elements or subsystems and those in the current study. By the completion of input, updates to the common element definition may also have been accomplished. Having improved equations, which ideally simulate the process reasonably well, helps in modeling by increasing confidence that parameter changes made to existing data sets will produce a reasonable change in cost. This in turn reduces the need for hard cost data, because existing data points are more easily generalized to related elements. Better algorithms also make it easier to understand model results: the truer to the process the equations are, the better the feedback to the user, and the greater the value of the model as a training tool.

30. Potential Advantages for Modeling (2)

- Enhanced capability to cost new technology or unconventional elements, for which data is scarce; better algorithms and the capability to decompose subsystems also make it easier to extend historical data sets to new definition
- Subjects of very low mass may be modeled with a different size parameter
- The input set allows modeling of projects under different sets of conditions or with alternate strategies, e.g., exploring the effects of funding constraints, or comparing a high-reuse plan with complications against all-new development
- Input/output depth allows incorporation of results from other sources, to integrate system output: re-model with calibration to the result obtained with the preferred model; the depth of output may improve understanding of the results

The previously described effects of better algorithms and the capability to decompose subsystems also make it easier to extend historical data to new definition. This aids in modeling the development of new-technology items, for which existing data is scarce. Because the model does not use mass as a primary variable, it is also better able to model items of very low mass, to the extent that the model parameter set can describe the features that impact cost. The flexibility of the input parameter set facilitates modeling projects under different sets of conditions or with alternate strategies. For example, the combination of schedule input flexibility and phase structure creates the capability to explore the effects of funding constraints. Another example would be to investigate the outcome of a development plan with high reuse but many complications, versus a plan with all new development but few complications.

Another capability enabled by the depth of input and output is the importing of results from other sources. For example, results for a particular subsystem or element obtained with a different model (perhaps one more appropriate for that task) can be re-modeled with calibration to produce an equivalent result. This procedure may be used to integrate the total system output in time, and the output detail, in resources and time, may contribute to the understanding of the modeled result.

31. Potential Advantages for Analysis (1)

Many points of comparison between model output and the project plan make it:
- Easier to identify where the agreements and differences are
- Easier to uncover relevant cost issues that might otherwise be missed or be difficult to quantify
- Easier to measure how well the model is simulating the process, which helps determine the level of confidence in the estimate and analysis, and thus how to report the results
- More difficult to get agreement by accident (the fewer the comparisons, the easier it is to get false agreement)

This leads to stronger findings, with less chance of failing to identify real problems or of finding non-existent ones, and to more feedback for project planning (if the project wants it).

There are numerous advantages to detailed output, but they can mostly be summarized by the observation that the more points of comparison you have, the easier it is to understand the output. In general, more comparisons help to identify where the agreements and differences are, and to uncover the relevant cost issues that might otherwise be missed or difficult to quantify. More comparisons also help to measure how well the modeling process is working and what level of confidence we may have in the results, which in turn affects how the results are reported.

One of the problems encountered in parametric cost analysis is the possibility of getting agreement by accident. When there are few points of comparison, or worse still just one total cost, and especially if that one cost is viewed as a range or probability distribution, the probability increases that the results will compare favorably at the high level but, when examined at a lower level (if that is possible), show significant differences. Closer examination may mitigate some of the lower-level differences, but more information is always better. On the other hand, lower-level comparisons might yield the valuable finding that significant higher-level differences are concentrated in a relatively small number of lower-level elements.

Overall, more information for analysis leads to a stronger, and therefore more useful, set of findings, whether the findings are negative or positive. A weak finding of problems with a project plan could be deemed useless, but it might also lead to ill-advised changes; likewise, a weak finding of confidence in a project plan could produce false confidence and lead to a failure to address problems. More analysis information can also produce more specific feedback to the project, which may be useful for planning.

32. Potential Advantages for Analysis (2)

Enhanced sensitivity analysis:
- To the extent that the model simulates real processes, there is more confidence that as model parameters change, cost and schedule will change in a reasonable manner
- Input depth helps to describe the differences between alternatives; output depth helps to show the differences in results
- Since mass is not a cost driver, a potential inverse relationship between cost and mass does not cause problems

Greater depth of output, especially in schedule, enhances analysis at a set point in time:
- Estimates-to-complete: reflect a project plan up to a point in time, then model the remaining effort in time
- Potentially complements or enhances earned value analysis, or improves understanding of EV results (yet to be tried)

Another potential strength of the model is in sensitivity analysis and trade studies. To the extent that the model algorithms simulate real processes, the user has more confidence that as model parameters are changed, cost and schedule will change in a reasonable manner. Then, to the extent that the input set lets the user describe the differences between study alternatives, and the output set is granular enough to show where the resulting differences are or are not, the sensitivity study is enhanced. Also, since mass is not a cost driver in this model, situations where cost and mass are inversely related do not complicate the sensitivity analysis.

Greater input/output depth, especially schedule granularity, also enhances analysis of results at a given point in time. Estimates-to-complete may be improved by the ability to model in some depth what has happened in a project up to a point in time, and consequently what may happen, in both resources and schedule, after that point. There may also be some way to complement or enhance earned value analysis, although the model has not yet been used in this way.

33. Example 1: Time-Phased Output

Spacecraft development cost estimate:
- Chart 1: cumulative probability curve
- Chart 2: time-phased probability

This example is a spacecraft development cost estimate for which the probability analysis was done in two ways. Before the analysis, three cases of model output were generated using a pre-conceived set of probability assumptions about the input and the output: a low case (10% probability estimate), a modal case (the regular point estimate), and a high case (90% probability estimate). For the first analysis, a standard probability distribution was calculated from the total cost of the three cases, under the original assumptions. For the second analysis, the results of the three cases were accumulated at the total level by month and by year, and then plotted by year. In each case the results were compared with the project plan.
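The off-line probability tools are not described in detail, but the text mentions a program for summing random variables under assumptions of partial dependence. A one-factor Monte Carlo is one conventional way to do this; the normal marginals and the single correlation value below are assumptions, not the author's actual method.

```python
import math
import random

def correlated_total(elements, rho, trials=20000):
    """Sum element costs with pairwise correlation rho via a one-factor
    model: z_i = sqrt(rho)*z_common + sqrt(1-rho)*z_i_own.
    elements: list of (mean, sigma) pairs; marginals assumed normal."""
    totals = []
    for _ in range(trials):
        z_common = random.gauss(0.0, 1.0)
        total = 0.0
        for mean, sigma in elements:
            z = (math.sqrt(rho) * z_common
                 + math.sqrt(1.0 - rho) * random.gauss(0.0, 1.0))
            total += mean + sigma * z
        totals.append(total)
    totals.sort()
    return totals

def percentile(sorted_vals, q):
    """q-th quantile of a sorted sample (0 <= q <= 1)."""
    return sorted_vals[int(q * (len(sorted_vals) - 1))]

# Example: 10th/modal/90th percentiles of a three-subsystem sum at rho = 0.3.
t = correlated_total([(10.0, 2.0), (25.0, 5.0), (8.0, 1.5)], rho=0.3)
low, mode, high = percentile(t, 0.1), percentile(t, 0.5), percentile(t, 0.9)
```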

34. Probability: Development Cost

Figure 10. Probability: Development Cost.

The cumulative probability distribution is shown in Figure 10, and the results indicate that there is no significant difference between the project plan and the parametric estimate. Of course, this is a comparison at the total cost level, without regard for time, and it implicitly assumes that the expenditure of resources in time is not encumbered.

35. Compare Development By Year

Figure 11. Compare Development By Year.

Figure 11 shows the comparison by year. Now there is a significant difference: the resources available to the project in the early phases are significantly less than the levels suggested by the cost modeling, and the difference is made up in the later phases of development and during I&T.

36. Example 1: Time-Phased Output Analysis

- Chart 1 (cumulative probability curve) shows no significant difference between the project plan and the parametric estimate
- Chart 2 (time-phased probability) shows a significant difference: the resources available to the project in the early phases are significantly less than the levels suggested by the cost modeling
- The difference was mostly explained by externally imposed constraints on early funding for the project
- Review team finding: "Probability of problems resulting from inadequate early definition may be higher than desired." (Project mission success potentially at increased risk due to inadequate early funding)

In this case the review team also found that staffing levels for the first year were low for the intended plan, and that key activities were being postponed by the project until after the start of the second fiscal year. The project manager acknowledged that the project plan was influenced by the project's funding constraints, and the project was reportedly seeking ways of securing more funding for the early phases.

37. Example 2: Modeling with Different Assumptions

Instrument (Large-Area Gamma-Ray Space Telescope), modeled at the end of Phase B and after a re-plan:
- Chart 1: project modeled with a mostly optimal schedule; minimal constraints, primarily development start and end
- Chart 2: project modeled with a schedule reflecting actual expenditures in the early phases; constraints on the schedule, input at the subsystem level, result in subsystem resource consumption close to the actual levels experienced by the project in the early development phase, with later phases minimally constrained by current project milestones
- Chart 3: project modeled with new assumptions resulting from the project re-plan; modeled as in chart 2, but reflecting the re-plan in which the project negotiated a later launch date and more funding (note that the time distribution is different for the L, M, and H cases)

The subject of this example is a large space instrument at the time of its Critical Design Review. In the first case, the project was modeled under the assumption of a mostly optimal development schedule; only minimal constraints (development start and end, I&T start) were levied. For the second case, the project was modeled with assumptions reflecting the actual early development funding: the schedule was stretched so that resource consumption closely tracked actual records for each major subsystem up to the current time, with later phases minimally constrained as in case 1. For the third case, the project was modeled with assumptions reflecting a project re-plan, in which the project was negotiating for more funding and a later launch date. The assumptions were as for case 2, except that in the modal case the development schedule was allowed to extend 2 months, and the total schedule, including ATLO, 5 months. (Note that the time distribution differs for the low, modal, and high cases.)

38. Project Modeled with Minimal Constraints

Figure 12. Project Modeled with Minimal Constraints.

Figure 12 shows the high-level result for case 1, which illustrates how constrained, in comparison to the independent parametric estimate, the project expenditures were in the first two years.

39. Project Modeled with Full Constraints

Figure 13. Project Modeled with Full Constraints.

Figure 13 shows the high-level result for case 2, which illustrates, in the independent parametric estimate, the effect of constraining expenditures in the first two years. The brunt of the effect is felt in FY04, with a 35% increase in the modal case required to stay on schedule. This reflects the fact that the ATLO begin date has not been allowed to slip.

40. Project Modeled with Full Constraints After Re-Plan

Figure 14. Project Modeled with Full Constraints After Re-Plan.

Figure 14 shows the high-level result for case 3, which illustrates the effect of the project re-plan. Now the independent estimate profile has slipped slightly to the right, with a small decrease in FY04, a substantial increase in FY05, and smaller increases thereafter, reflecting the 2-month slip in instrument development and the extended ATLO period. Notice that the project now compares much better with the independent estimate, due to increases in funding from FY03 forward.

41. Compare Modes by Month, Before Re-Plan

Figure 15. Compare Modes by Month, Before Re-Plan.

Figures 15-18 show a month-by-month representation of the results for cases 1, 2, and 3, in which the differences are more easily matched with project phases. First the modes are compared, for the project plan versus the modeled optimal and constrained cases, before the re-plan (Fig. 15) and after the re-plan (Fig. 16). Then the modeled 10%, modal, and 90% cases are compared for the re-plan (Fig. 17), and to this is added the project plan, including reserves, for comparison (Fig. 18). From Figures 16 and 18 we can see that although the project total after the re-plan is close to the model total for both the modal and 90% (reserve) cases, there are significant differences in the late subsystem development period, where the model results are significantly higher, and in instrument system I&T, where the project is significantly higher.

There are several possible reasons for this difference, but the result certainly suggests a closer examination of the project test plan in order to review the modeling assumptions regarding testing at the subsystem and system levels. This was not done at the time of the actual review, partly because the analysis at the time did not go far enough, and partly because the model input set then had less flexibility for specifying test levels.

42. Compare Modes by Month, After Re-Plan

Figure 16. Compare Modes by Month, After Re-Plan. (See the discussion of Figures 15-18 under Figure 15.)

43. Compare Model LMH Range by Month, After Re-Plan

Figure 17. Compare Model LMH Range by Month, After Re-Plan. (See the discussion of Figures 15-18 under Figure 15.)

44. Compare Project w/ Reserves, by Month, After Re-Plan

Figure 18. Compare Project w/ Reserves to Model LMH, by Month, After Re-Plan. (See the discussion of Figures 15-18 under Figure 15.)

45. Advantages for Model Development

A rich input set increases the usefulness of available definition:
- The capability to define at lower WBS levels allows small subsets of projects, for which good data is available, to be modeled, increasing the usefulness of the data
- A deep input set allows more historical technical definition to be captured in the model

A rich output set increases the usefulness of available cost and resources data:
- Many points of comparison allow better matching with cost data and increase the capability to crosscheck, validate, or calibrate results against known quantities
- Detailed output generates more feedback, exposes modeling errors, and speeds up model development

The use of improved size parameters has resulted in more linear functional forms, which have numerous advantages for modeling and for development.

The relatively well-developed input and output sets of this model have been useful not only for modeling and analysis, but also for model development. The depth of the input set helps to capture both technical definition and project-specific variables and, along with the WBS-level flexibility, allows small subsets of projects, for which good data is available, to be modeled. This both maximizes the usefulness of available data and increases the size of the body of data that can be useful. Likewise, the more detailed output, with its many points of comparison that are so useful for cost analysis, allows better matching of output with cost data, increasing the capability to crosscheck, validate, or calibrate results against known quantities. This in turn generates more feedback for the modeler, helping to expose weak areas of modeling and therefore to speed up development. Also, the use of improved size parameters has helped to produce more linear functional forms in the model algorithms, which have numerous advantages for modeling and model development.

46. Practical Implications for Use of the Model

- To use the model well, the user must become familiar with the new size parameters
- The process is somewhat like instruction-count estimating in the use of software models; as with software instructions, it increases the analyst's knowledge of cause and effect in cost analysis
- Engineers are familiar with these parameters, but the analyst must also become familiar with them, more or less according to the level of interaction with engineers; building a database lessens dependence on engineers and increases knowledge
- Learning the new technique initially requires more time; after learning the process there is a time trade-off: more time is spent on the new size input, but less on calibration or "complexity" values

To use this model effectively, the user must, of course, become familiar with the new size parameters. The process is, in the author's opinion, somewhat like estimating instruction counts in software estimating, and more so if function point techniques are used. As in software, going through the process has the beneficial effect of increasing the analyst's knowledge of cause and effect in the cost analysis. Engineers are familiar with these parameters, but the analyst must also become familiar with them, more or less according to the level of interaction with engineers. Building a database of common elements and subsystems makes the analyst much less dependent, and more knowledgeable. Any new technique requires a startup investment of time for learning, and this model is no exception. After the user becomes familiar with the new size parameters, there is, in the author's experience, no significant difference in total time: more time is needed to estimate the size parameters of unfamiliar elements, but less time for other parameters, notably the complexity or calibration parameters.

47. Use of Expanded Input and Output Sets in a General Parametric Cost Model

Author Bias: Value of Information

An analysis is only as good as the information that went into it: you don't get something for nothing. If you want more confidence in cost and schedule estimates, you have to consider more information in the analysis. To the extent that you are able to effectively include relevant information in your analysis, the value of the result increases and the uncertainty decreases.

48. Use of Expanded Input and Output Sets in a General Parametric Cost Model

Author Bias: Use of Informal Logic

- Define informal logic (IL): a relationship that has a good basis in informal observation but has not been empirically verified.
- Tools that model complex processes necessarily supplement empirical knowledge with IL in order to model the complexity of the system.
- Complex models (weather, economics, etc.) are built around prevailing hypotheses.
- Resource consumption (cost) is a complex process.
- Also, cost data (even the best) is not experimental data, but really is observational (field) data.
- Therefore cost models need to be supplemented with IL, and there is a wealth of informal data available for this purpose: unofficial data, expert opinion, context logic.
- Unless you are clueless, plugging a gap with your best guess is better than ignoring the issue; ignoring it amounts to making implicit or unknown assumptions about it, while including it ensures that you know why you got what you got (see the sketch after this slide).
- Progress is made by creating and testing hypotheses.

Author bias: use of informal logic. Define informal logic (IL) as a relationship that has a good basis in informal observation but has not been empirically verified. Tools that model complex processes necessarily supplement empirical knowledge with IL in order to model the complexity of the system. For example, models that predict the weather, or economic processes, are built around prevailing hypotheses, which are tested and refined as model development proceeds. Resource consumption, at least for the development of the complex systems that we analyze, is a complex process. While we will probably never perceive a need for a cost model as complex as a weather prediction model, it is desirable to have cost models complex enough to capture the effects of those known causes of cost variation that we care about. Also, cost data (even the best) is not experimental data and should not be treated as such. It is observational, or field, data, and has many limitations. Therefore cost models need to be supplemented with IL, and there is a wealth of informal data available for this purpose, including unofficial data, expert opinion, context logic, and more. Another way of expressing this: unless you are clueless, plugging a gap with your best guess is better than ignoring an issue. Ignoring the issue amounts to making implicit or unknown assumptions about it, whereas including it in the model ensures that you understand and can explain your result, and that you begin immediately to get feedback about your assumptions. Progress is made by creating and testing hypotheses.
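The argument for explicit best guesses over implicit assumptions can also be illustrated with a small sketch. Everything below is hypothetical: the factor name, its value, and the basis note. The idea is simply that an IL relationship carried as a named, documented input can be seen, varied, and eventually tested against feedback, whereas a hard-coded constant cannot.

```python
# Sketch: carry an informal-logic (IL) gap-filler as a named, documented
# input rather than burying it in a constant. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class Assumption:
    name: str
    value: float
    basis: str  # "empirical", or "informal logic" plus its source

# IL gap-filler: expert opinion, not yet empirically verified.
integration_penalty = Assumption(
    name="multi-site integration factor",
    value=1.15,
    basis="informal logic: expert opinion from two analogous programs",
)

def estimate_hours(base_hours: float, a: Assumption) -> float:
    # The assumption enters the estimate visibly; omitting it would amount
    # to the implicit assumption that the factor is exactly 1.0.
    return base_hours * a.value

print(estimate_hours(10_000, integration_penalty),
      "| basis:", integration_penalty.basis)
```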

49. Use of Expanded Input and Output Sets in a General Parametric Cost Model

End
