
System Availability Modeling



Presentation Transcript


  1. Systems Reliability, Supportability and Availability Analysis System Availability Modeling

  2. Availability Modeling Requirements and Figures of Merit Analytical versus Simulation Modeling Availability Model Development Blue Flame Aircraft Case Study Summary and Discussion

  3. Requirements and Figures of Merit: Why Model?

  4. Availability Analysis provides a mathematical basis for evaluating system design and development decisions based on system level performance measures in order to influence the air vehicle design concurrently with support system design.

  5. System Design Evaluation Categories
  System/Segment (Type A) Functional Baseline: • Requirements • MOEs/MOSs • Critical Issues • Test Objectives • Thresholds • Deficiency and Failure Tracking
  Operational Effectiveness Evaluation: “To what degree does this system satisfactorily support mission accomplishment when used by representative personnel in the expected or potential environment for operational employment of the system considering organization, doctrine, tactics, survivability, vulnerability, and threat?”
  Operational Suitability Evaluation: “To what degree can this system satisfactorily be deployed considering availability, compatibility, transportability, interoperability, reliability, wartime usage rates, maintainability, safety, human factors, manpower supportability, documentation and training requirements?”
  Functional Effectiveness Evaluation (a combined, system-level assessment): “How and to what degree will this system satisfactorily contribute to the required mission(s) in the predicted operational environment?”
  Also: Schedule Evaluation • Cost Evaluation

  6. RM&S Integration into System Engineering Process
  Pre-FSD: • Logistics Concept Planning and Development • Life Cycle Cost Goals • Supportability Specifications
  Technical Disciplines: • Operations Analysis • Life Cycle Cost • Survivability/Vulnerability • Safety • Reliability/Parts Standardization • Maintainability • Human Factors • Maintenance Concept/Plan • Spares Provisioning • Support Equipment • Training Equipment • Training • Technical Publications • Packaging, Handling, Storage & Transportation • Facilities • Manpower Requirements & Personnel • Logistics Support Resource Funding • Energy Management • Computer Resources • ILS Test & Evaluation • ILS Planning • ILS Management
  FSD (System / Support System / Training System, Transition to Production): • Update ILS Plans • Quantification of Support Requirements • Integration of Support Studies and Analyses • Design Support Trade-Off Studies • Evaluation of Support System Effectiveness
  Operations (System Engineering, LSA, Integrated Logistics Support): • Identification & Resolution of Support Problems • Analyses for Operational/Support Concept Changes • Evaluation of System Mods Impacts on Support

  7. The System View
  Operational Concept: • Availability • Sortie Generation Rates • Basing
  Product: • Reliability • Maintainability • Supportability • Testability
  Support Concept: • Organization • Requirements
  Maintenance Concept: • Scheduled Maintenance • Unscheduled Maintenance • Spares • Technical Publications • Training • Support Equipment

  8. Analytical versus Simulation

  9. General Modeling Options Analytical Representations • Mathematical formulas and symbolic models • May use computers to process the formulas Computer Simulations • Imitation of the physical phenomena (movement, war, performance over time) using computer-generated activities and results • Human decision making represented by pre-programmed and/or probabilistic decision rules Assemblage of Gaming People and Tools • Human-based “game playing” to achieve insights (e.g. war games) Field Experiments • Replications of a physical situation under controlled and limited-scale environments to estimate total system level performance

  10. When simulation models make sense • When mathematical models do not exist, or analytical methods of solving them have not yet been developed • When analytical methods are available, but mathematical solution methods are too complex to use • When analytical solutions exist and are possible, but are beyond the mathematical capabilities of available personnel

  11. When simulation models make sense • When it is desired to observe a simulated history of the process over a period of time in addition to estimating relevant parameters • When it may be the only possibility because of difficulty in conducting experiments and observing phenomena in their actual environment • When time compression may be required for systems over long time frames

  12. Advantages of Simulation • Permits controlled experimentation with: • consideration of many factors • manipulation of many individual units • ability to consider alternative policies • little or no disturbance of the actual system • Effective training tool • Provides operational insight • May dispel operational myths

  13. Advantages of Simulation • May make middle management more effective • May be the only way to solve problem

  14. Disadvantages of Simulation • Costly (very costly?) • Uses scarce and expensive resources • Requires fast, high capacity computers (use of PCs?) • Takes a long time to develop • May hide critical assumptions • May require expensive field studies • Very much dependent on availability of data and its validity

  15. Typical A/R/S Analysis Models • Analytical Models • Inherent Availability Models • Expected Value Models • Stochastic/Markov Models • SAVE (System Availability Estimator) • Differential Equation Models • Parametric Models • Simulation Models (Mainframe & PC-based) • Top-Level A/R/S Models • Theater Simulation of Air Base Resources (TSAR) - Rand Corp. • Douglas Aircraft Company Availability Model (DACAM) • System Inventory Analysis Model (SIAM) • More Detailed A/R/S Models • Modified Logistics Composite Model (LCOM) - USAF • Comprehensive A/C Support Effectiveness Evaluation (CASEE) Model - US Navy
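The first entry above, an inherent availability model, also illustrates the analytical-versus-simulation distinction: inherent availability has the closed form Ai = MTBF / (MTBF + MTTR), and a Monte Carlo simulation of alternating up/down cycles should converge to the same number. A minimal Python sketch, with illustrative MTBF/MTTR values not drawn from any model listed above:

```python
import random

def inherent_availability(mtbf, mttr):
    """Closed-form (analytical) inherent availability Ai = MTBF/(MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def simulated_availability(mtbf, mttr, cycles=100_000, seed=1):
    """Monte Carlo estimate: alternate exponentially distributed up-times
    (mean MTBF) and repair times (mean MTTR), then take the uptime ratio."""
    rng = random.Random(seed)
    up = down = 0.0
    for _ in range(cycles):
        up += rng.expovariate(1.0 / mtbf)    # time to next failure
        down += rng.expovariate(1.0 / mttr)  # time to repair
    return up / (up + down)

mtbf, mttr = 50.0, 5.0  # illustrative values (hours)
print(inherent_availability(mtbf, mttr))   # 0.9090909090909091
print(simulated_availability(mtbf, mttr))  # ~0.909, within sampling error
```

The closed form is exact and instant; simulation earns its keep only once resource limits, queueing, or non-exponential distributions enter the picture.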

  16. When Simulation Models Make Sense (An Analyst’s Checklist) • When mathematical models do not exist, or analytical methods of solving them have not yet been developed • When analytical methods are available, but mathematical solution methods are too complex to use • When analytical solutions exist and are possible, but are beyond the mathematical capabilities of available personnel • When it is desired to observe a simulated history of the process over a period of time in addition to estimating relevant parameters • When it may be the only possibility because of difficulty in conducting experiments and observing phenomena in their actual environment • When time compression may be required for systems or processes over long time frames

  17. System Life Cycle Utility of Models/Analyses [matrix chart relating model/analysis utility to life-cycle phases; * = high-level analysis, ** = ECP/Changes/Problem Resolution]

  18. Thoughts To Remember • The overall objective of availability modeling and analysis is to provide support to the system design, development, and deployment process in order to influence system design by considering all aspects of its reliability, maintainability, and support system characteristics • The objective remains unaffected by the choice of using one model solution technique (e.g. simulation) over the other. • The efficacy of choosing one method over the other will be influenced primarily by outside factors (e.g. cost, schedule, availability of data, personnel and facility capabilities).

  19. Availability Model Development

  20. Model Development Overview • Analysis Objectives • Analysis Planning • Development Approach • Development Considerations • Inputs and Outputs • Data Requirements • Algorithm Development • Implementation Examples

  21. RMS Analysis Objectives • Specification Requirements Evaluation • Requirement Integration - Conflicts? Attainable? • Verify and Demonstrate Compliance • Verify and Demonstrate Adequacy of Logistics Support • Support System Design Influence • Evaluate Impacts of Changes to Operation and Maintenance Concepts • Analyze & Evaluate Operational Suitability • Support Functional Trade-off Analyses on Alternative Designs • System Design Assessment • Examine the Total Picture at the System Level • Address Impacts of All Variables at Once • Evaluate Impacts of Flight/Scenario/Usage Rate Changes • Management Visibility • Provide Useful Predictions for All Levels of Management • Assist Management in Identification and Resolution of Reliability, Maintainability, and Supportability Issues

  22. RMS Analysis Objectives by Program Phase • Concept Definition • Support Contractual Requirements Analysis • Examine Operations, Maintenance, and Support Concepts • Support Design Concept Trade-off Studies • Identify Cost, Schedule, Risk, and Support Drivers • Demonstration/Validation • Refine Concept Definitions • Support Requirements Allocation Process • Provide Capability to Influence Design • Estimate Fielded System Performance Levels • Full-Scale Engineering Development • Support Detailed Trade-off Studies • Establish Support System Requirements Baseline • Assess/Validate Operations, Maintenance, Support Concepts • Production and Deployment • Assess Fielded System Performance Levels • Refine Support Concepts/Levels • Identify System Improvement Requirements

  23. RMS Analysis Planning Considerations Evaluate A/R/S Analysis Requirements • Where does data come from? • Experiment? • Field tests? • Previous experience? • Simulation? • Other resources? • What will data be used for? • How will data be collected and managed? • What tests/simulations need to be executed, and when? • How will results be developed and recorded? • How does everything fit together to meet the system test & evaluation objectives? Develop Test / Analysis Plans • Critical Issues • Objectives • MOEs & MOSs • Success Criteria • Schedule • Test Design • Analysis Plan • Data Collection & Management Plan • Test Execution Plan • Documentation Plan • Test and Evaluation Master Plan

  24. RMS Model Development Approach • Define Model Elements and Specifications • Operational Activity Element Specifications • System State Conditions and Attribute Specifications • Operational Activity Demand Generation • System Component Level of Detail Determination • Support System Resource Definition and Specifications • Define Model Structure • Model Processing Definition(s) • System Failure Processing • System Unscheduled Maintenance Processing • Model Inputs • Model Outputs • Implement Model Structure on the Computer • Model Activities • Model Output Measure Calculation Implementation • Perform Full Model Test & Evaluation Using Sample Data • Install Model at User Site and Perform Checkout, Train Users

  25. Probabilistic Modeling (probabilistic analysis) • Purpose: To simulate probabilistic situations using a random number generator and the cumulative probability distribution of interest. • Example: Distribution of unscheduled maintenance times: no action required (none), repair in place (RIP), remove and replace (R&R), and cannot duplicate (CND)
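The technique this slide describes can be sketched as inverse-CDF sampling: draw a uniform random number and map it through the cumulative distribution over the four maintenance outcomes. The probabilities below are illustrative placeholders, not data from the case study:

```python
import random

# Illustrative unscheduled-maintenance outcome distribution (must sum to 1.0)
OUTCOMES = [("none", 0.30), ("RIP", 0.25), ("R&R", 0.35), ("CND", 0.10)]

def draw_outcome(rng):
    """Map one uniform random number through the cumulative distribution."""
    u = rng.random()
    cumulative = 0.0
    for action, p in OUTCOMES:
        cumulative += p
        if u <= cumulative:
            return action
    return OUTCOMES[-1][0]  # guard against floating-point round-off

rng = random.Random(42)
counts = {action: 0 for action, _ in OUTCOMES}
for _ in range(10_000):
    counts[draw_outcome(rng)] += 1
print(counts)  # observed frequencies track the input probabilities
```

The same mechanism generalizes to any discrete distribution a model needs, which is why a random number generator plus a cumulative distribution is the basic building block of the simulation models discussed here.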

  26. Analysis/Model Development Considerations • Data Input/Output Formats • Data and Output Result Configuration Management & Control • Input/Output Data Approval by Management • Baseline and Excursion Data Definitions/Conditions • Data Screening/Editing Capabilities • Model Restart Capabilities • Ease of Development and Modification • Transparency to the Users (changes to system and data) • Degree of integration with other models and Analyses • Convenient Man-in-the-Loop Interfaces • Growth/Flexibility/Change Capabilities • Others

  27. Typical RMS Model Requirements • Work Unit Code (WUC) Structure • Total system WUC structure • Two-digit level definitions (or to levels of interest) • Probability Distributions for Activity Times (by WUC) • Mission durations and types • Trouble-shooting times • On/off aircraft repair times • Remove, replace, checkout times • Delay times (spares, personnel, equipment) • Service and turnaround times • Preflight and return service times • Probabilities (by WUC) • Probability of in-flight failure (gripe) • Spares, personnel, equipment availability – when called • On-equipment vs. off-equipment repair rates • No defect found rates
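As a hypothetical example of how activity-time distributions like those listed above feed a model, the sketch below samples trouble-shooting, remove-and-replace, and spares-delay times for one maintenance action and sums them into a downtime draw. The distribution families and parameters are illustrative assumptions, not values from any WUC database:

```python
import random

def sample_downtime(rng):
    """Sample one unscheduled-maintenance downtime (hours) as the sum of
    trouble-shooting, remove-and-replace, and spares-delay activity times.
    Distribution families and parameters are illustrative stand-ins."""
    troubleshoot = rng.lognormvariate(0.0, 0.5)    # median ~1.0 h
    remove_replace = rng.lognormvariate(0.7, 0.4)  # median ~2.0 h
    spares_delay = rng.expovariate(1.0 / 1.5)      # mean 1.5 h
    return troubleshoot + remove_replace + spares_delay

rng = random.Random(7)
n = 20_000
mean_downtime = sum(sample_downtime(rng) for _ in range(n)) / n
print(f"average downtime per maintenance action: {mean_downtime:.2f} h")
```

In a full model each WUC would carry its own fitted distributions, and draws like this one would drive the "average downtime per sortie" and "average unscheduled maintenance time" outputs listed on a later slide.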

  28. Input Data Sources & Parameters Model Data Element Definitions derived from Air Force Terms Air Force RAM Data Sources • AFR 66-1 (Maintenance Data Collection System - MDCS) Data Elements • CORE Automated Maintenance System (CAMS) • AFR 65-110 (Air Vehicle Inventory Status and Reporting System - AVISURS) • Others • “Reliability” in terms of MTBM • Types 1, 2, & 6 • “On-equipment” & “Off-equipment” Maintenance Action Definitions: • Repair in Place • Cannot Duplicate • Bench Check - Repair • Bench Check - Serviceable • Not Repairable This Station • Work Unit Code Definitions • Others

  29. Typical RMS Output Parameters(for sensitivity analysis) • Availability Parameters • Average mission capable rates (full, partial, not capable) • Instantaneous mission capability status at any time in the simulation/analysis period • System Level Performance Parameters • Average downtime per sortie • Average unscheduled maintenance time • Percent of scheduled sorties accomplished (over time) • Number of sorties cancelled due to pre-sortie failure • Number of unscheduled maintenance actions required • Maintenance Resource Utilization Statistics • Total resource hours used during simulated period (by resource type) • Maximum number in use at any time during simulation • Total number of subsystem spare parts used

  30. Characteristics of PC-based Modeling • Can provide stochastic network processing with discrete events using simulation languages implemented on PCs (SLAM II) • Can simulate system operational environments: • Basic operations and maintenance processing defined by established input networks • Specific task information (times, required resources, task attributes, etc.) supplied through input data • Will treat system maintenance simulated at line replaceable unit (LRU) level of detail with input and output data aggregated at the subsystem level of detail • Provides real-time system capability assessment over a wide range of design and development parameters with relatively small set of input data required • Use of real-time graphics capabilities promotes model understanding and display of results of different execution conditions and constraints • Portability permits use in remote and dispersed locations for examining impacts of local environmental and support conditions
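A full SLAM II network is beyond the scope of a slide, but the discrete-event mechanism such languages implement can be sketched in plain Python: a time-ordered event list drives failures and repairs for a small fleet competing for a limited pool of repair crews, and the model reports the fleet-average mission-capable rate. Fleet size, crew count, and rates below are illustrative assumptions:

```python
import heapq
import random

def fleet_mc_rate(n_aircraft=4, n_crews=1, mtbf=30.0, mttr=6.0,
                  horizon=50_000.0, seed=11):
    """Discrete-event sketch: aircraft fail at random, queue for a limited
    pool of repair crews, and return to service when repaired. Returns the
    fleet-average mission-capable rate over the simulated horizon."""
    rng = random.Random(seed)
    events = []  # time-ordered event list: (time, kind, aircraft id)
    for i in range(n_aircraft):
        heapq.heappush(events, (rng.expovariate(1.0 / mtbf), "fail", i))
    free_crews, repair_queue = n_crews, []
    now, up, up_hours = 0.0, n_aircraft, 0.0
    while events and events[0][0] < horizon:
        t, kind, i = heapq.heappop(events)
        up_hours += up * (t - now)  # integrate mission-capable aircraft-hours
        now = t
        if kind == "fail":
            up -= 1
            repair_queue.append(i)
        else:  # repair complete: aircraft returns to service, crew freed
            up += 1
            free_crews += 1
            heapq.heappush(events, (now + rng.expovariate(1.0 / mtbf), "fail", i))
        while free_crews and repair_queue:  # assign crews to waiting aircraft
            free_crews -= 1
            j = repair_queue.pop(0)
            heapq.heappush(events, (now + rng.expovariate(1.0 / mttr), "done", j))
    up_hours += up * (horizon - now)
    return up_hours / (n_aircraft * horizon)

print(f"mission capable rate: {fleet_mc_rate():.3f}")
```

With one crew shared by four aircraft, the simulated rate falls well below the single-aircraft inherent availability MTBF/(MTBF + MTTR) ≈ 0.83: exactly the kind of resource-contention effect that motivates simulation over closed-form models.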

  31. What Talents are Required? • System Operators • Develop operational and support requirements and concepts • Develop measures of effectiveness (MOEs) and supportability (MOSs) • System Modelers • Develop system-specific modeling and analysis requirements, parameter definitions, input/output requirements • Translate requirements into algorithmic definitions • Applications Programmers • Implement the model(s) in the appropriate medium • Systems Analysts • Perform the required analyses and interpret results in terms of system level impacts

  32. Previous Availability Model Applications

  33. Summary and Conclusion

  34. The Benefits of Availability Modeling & Analysis • Availability Modeling and Analysis provides the “glue” which ties system RMS performance evaluation together: • Considers operational environments/stresses • Identifies dominant failure modes • Balances overall support system performance • It provides one of the few methods capable of estimating fielded system performance levels during the design and development process. • Applies to commercial as well as DoD systems

  35. Availability Analysis: A value-added process 1) Availability analysis provides the “glue” which ties system RMS performance evaluation together: • Considers operational environments and stresses • Identifies dominant failure modes • Incorporates repair and replace times estimates • Evaluates overall support system performance

  36. Availability Analysis: A value-added process 2) It provides a rational structure for evaluating system design and development decisions based on system level performance measures.
