
Reliability Block Diagram Modeling – A Comparison of Three Software Packages




  1. Reliability Block Diagram Modeling – A Comparison of Three Software Packages Aron Brall, SRS Technologies, Mission Support Division William Hagen, Ford Motor Company, Powertrain Manufacturing Engineering Hung Tran, SRS Technologies, Mission Support Division

  2. THE SOFTWARE PACKAGES - 1 • ARINC RAPTOR 7.0.07 • From RAPTOR web site: • “Raptor is a software tool that simulates the operations of any system.” • “Sophisticated Monte Carlo simulation algorithms are used to achieve these results.” • Our Take: • Pure Monte Carlo simulation tool to solve reliability block diagrams.
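To make "pure Monte Carlo simulation" concrete, here is a minimal sketch of how a simulation-only RBD tool estimates mission reliability; it is our own illustration, not RAPTOR's engine, and the MTBFs, mission time, and run count are made-up values: draw a random life for each block, evaluate the system structure, and average over many trials.

```python
# Minimal sketch of a pure Monte Carlo RBD solver (illustration only, not
# RAPTOR's actual algorithm). All parameter values are made up.
import random

def simulate_mission(mtbf_a=500.0, mtbf_b=800.0, mission_time=100.0, runs=100_000):
    successes = 0
    for _ in range(runs):
        life_a = random.expovariate(1.0 / mtbf_a)   # exponential block lifetimes
        life_b = random.expovariate(1.0 / mtbf_b)
        # Two blocks in series: the system survives the mission only if both do
        if life_a > mission_time and life_b > mission_time:
            successes += 1
    return successes / runs

print(f"Estimated mission reliability: {simulate_mission():.4f}")
```

For this simple series case the exact answer is exp(-100/500) * exp(-100/800), about 0.7225, so the estimate should land near that, with the sampling noise shrinking as the run count grows.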

  3. THE SOFTWARE PACKAGES - 2 • ReliaSoft BlockSim 6.5.2 • From BlockSim web site: • “Flexible Reliability Block Diagram (RBD) creation.” • “Exact reliability results/plots and optimum reliability allocation.” • “Repairable system analysis via simulation (reliability, maintainability, availability) plus throughput, life cycle cost and related analyses.” • Our Take: • Monte Carlo simulation with algorithms used to speed processing time. • Also provides analytical calculation of reliability.

  4. THE SOFTWARE PACKAGES - 3 • Relex Reliability Block Diagram • From Relex web site: • “At the core of Relex RBD is a highly intelligent computational engine.” • “First, each diagram is analyzed to determine the best approach for problem solving using pure analytical solutions, simulation, or a combination of both.” • “Once a methodology is determined, the powerful Relex RBD calculations are engaged to produce fast, accurate results.” • Our Take: • Relex RBD appears to be a hybrid tool that uses algorithms and simulation in varying combinations to solve reliability block diagrams.
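For contrast with the simulation-only approach, the "pure analytical solutions" a hybrid engine can fall back on reduce series and parallel structures to closed form. The snippet below is a textbook reduction, our illustration rather than anything from Relex's computational engine.

```python
# Closed-form RBD reduction for series and parallel structures (standard
# textbook formulas; illustrative, not Relex's internals).
from math import prod

def series(*r):
    """All blocks must survive: R = R1 * R2 * ... * Rn."""
    return prod(r)

def parallel(*r):
    """At least one block must survive: R = 1 - (1-R1)(1-R2)...(1-Rn)."""
    return 1.0 - prod(1.0 - ri for ri in r)

# Example: two redundant pumps (0.90 each) feeding one controller (0.99).
print(series(parallel(0.90, 0.90), 0.99))   # 0.9801
```

Structures that do not decompose into nested series/parallel groups are where simulation or approximation takes over, which is consistent with the hybrid description quoted above.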

  5. Why Compare Reliability Software • Analysts (especially new analysts) tend to report reliability software results as exact values • Engineering judgment, caution, and experience are being supplanted by software analysis • Error checking is often absent • Number of runs, confidence limits, and garbage-in/garbage-out all affect the value of a software analysis

  6. One Block Model

  7. Simple Model

  8. Large Model

  9. Complex Model

  10. Results of Simulations

  11. What Do the Results Tell Us • If precision is required, it isn’t there • One- to two-significant-figure agreement at best between packages • Confidence limits are necessary for reported results • Some parameters are either defined differently or calculated with such diverse algorithms or methodologies that they aren’t comparable • Errors in modeling or application of the software can go undiscovered when only one software package and one analyst are used • The complexity of large models and the different issues with each software interface open up many opportunities for human failure • Checking a model for errors can be more time-intensive than creating the original model

  12. Cautions - 1 • Use of a single model, especially a highly complex model, to demonstrate compliance with a requirement is error-prone and risky • Simulation results are often used to demonstrate compliance with a specified reliability or availability requirement. • A result showing a Reliability of 0.85 against a requirement of 0.90 might trigger a redesign, a request for waiver, or other action to address the shortfall. • The apparent shortfall may instead be due to the parameters used for the simulation, the algorithms used by the software, or a lack of understanding of how long to simulate, how many independent random number streams to use, and/or how many runs to use. • Analytical solutions for highly complex models are based on approximations.
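A quick worked version of the 0.85-versus-0.90 scenario above (a generic normal-approximation binomial interval, not any package's output; the run counts are illustrative): whether the apparent shortfall is conclusive depends heavily on how many runs produced it.

```python
# Each simulation run is a pass/fail trial, so R_hat = successes/runs
# carries binomial sampling error. Normal-approximation 95% interval.
from math import sqrt

def ci95(r_hat, runs):
    se = sqrt(r_hat * (1.0 - r_hat) / runs)   # standard error of R_hat
    return r_hat - 1.96 * se, r_hat + 1.96 * se

for runs in (100, 1_000, 10_000):
    lo, hi = ci95(0.85, runs)
    verdict = "covers 0.90: inconclusive" if hi >= 0.90 else "excludes 0.90"
    print(f"{runs:6d} runs: 95% CI ({lo:.3f}, {hi:.3f}) -> {verdict}")
```

At 100 runs the interval reaches past 0.90, so the "shortfall" could be pure sampling noise; only at roughly 1,000 runs and beyond does the estimate genuinely exclude the requirement.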

  13. Cautions - 2 • The programs do not necessarily describe variables in the same manner. • For example, when using the Lognormal distribution, there was a difference in terminology between Raptor and BlockSim. • Raptor allows the Lognormal to be entered as Mean and Std Dev or as Mu and Sigma. • BlockSim only uses Mean and Std Dev, but these are the same quantities as Raptor’s Mu and Sigma. • A novice could waste a great deal of time clarifying what needs to be entered as data.
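The trap behind this terminology difference is that the same labels can denote parameters of different distributions: "Mu and Sigma" conventionally describe the underlying normal (of ln t), while "Mean and Std Dev" can describe the lognormal variable t itself. The conversions below are the standard lognormal relationships, not taken from either package's documentation.

```python
# Standard lognormal parameter conversions (textbook identities, not
# package documentation).
from math import exp, log, sqrt

def normal_params_from_lognormal(mean, sd):
    """(mean, sd) of t  ->  (mu, sigma) of ln t."""
    sigma2 = log(1.0 + (sd / mean) ** 2)
    mu = log(mean) - sigma2 / 2.0
    return mu, sqrt(sigma2)

def lognormal_params_from_normal(mu, sigma):
    """(mu, sigma) of ln t  ->  (mean, sd) of t."""
    mean = exp(mu + sigma ** 2 / 2.0)
    sd = mean * sqrt(exp(sigma ** 2) - 1.0)
    return mean, sd

# Entering Mean=1000, Std Dev=500 into a field that actually expects Mu and
# Sigma would model a wildly different distribution; convert explicitly.
print(normal_params_from_lognormal(1000.0, 500.0))   # (~6.796, ~0.472)
```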

  14. Cautions - 3 • Modeling special cases can be difficult because of the way the programs handle standby (which was in our models) and phasing (which was not in our models). • Output parameters were not consistently labeled. The user should understand the difference between MTTF, MTTFF, MTBDE, and MTBF for reliability and MDT and MTTR for maintainability.
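As one illustration of how these metrics diverge (using our own working definitions, which should be checked against each package's manual), the sketch below computes several of them from a single alternating up/down event history. Here every downing event is a failure, so MTBDE exceeds MTBF only by the mean downtime; with preventive maintenance or standby switching in the model, the two diverge further.

```python
# Illustrative (not package-specific) metric definitions, computed from one
# alternating up/down history in which every downing event is a failure.
up   = [120.0, 95.0, 210.0, 160.0]   # operating intervals (hours)
down = [4.0, 30.0, 6.0, 12.0]        # downtime following each failure

n = len(up)
mtbf  = sum(up) / n                   # mean operating time between failures
mdt   = sum(down) / n                 # mean down time per downing event
mtbde = (sum(up) + sum(down)) / n     # mean time between downing events
mttff = up[0]                         # time to *first* failure (one sample of MTTFF)
availability = sum(up) / (sum(up) + sum(down))

print(f"MTBF={mtbf:.1f} h  MDT={mdt:.1f} h  MTBDE={mtbde:.1f} h  "
      f"MTTFF sample={mttff:.1f} h  A={availability:.3f}")
```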

  15. Cautions - 4 • The products provide reliability and availability results with various adjectives such as “mean”, “point”, “conditional”, etc. • A review of the literature provided with the packages is necessary to understand these terms and relate them to those found in specifications, handbooks, references, and texts. • The apparent lack of standard, consistent terminology and notation, both from one program to another and relative to the standard literature in the field, is a serious issue.
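To ground two of those adjectives, the sketch below uses the textbook single-unit exponential failure/repair model (standard results, not any package's definitions; the rates are illustrative): point availability A(t) decays from 1 toward the steady-state value, while mean availability averages A(u) over [0, t].

```python
# Point vs. mean availability for the single-unit exponential model
# (standard textbook results; lam = failure rate, mu = repair rate).
from math import exp

def point_availability(t, lam, mu):
    """A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu)*t)."""
    s = lam + mu
    return mu / s + (lam / s) * exp(-s * t)

def mean_availability(t, lam, mu):
    """Average of A(u) over [0, t], integrated in closed form."""
    s = lam + mu
    return mu / s + (lam / s**2) * (1.0 - exp(-s * t)) / t

lam, mu = 1 / 1000.0, 1 / 10.0        # illustrative rates (per hour)
for t in (10.0, 100.0, 10_000.0):
    print(f"t={t:>8}: point={point_availability(t, lam, mu):.5f}, "
          f"mean={mean_availability(t, lam, mu):.5f}")
```

Both converge to the steady-state value mu/(lam+mu), but at short mission times they differ, which is exactly the kind of distinction a specification may or may not intend.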

  16. Cautions - 5 • Flexibility • Each package has tabs, checkboxes, preferences, defaults, multiple random number streams, selectable seeds for random numbers, etc., to facilitate the modeling, analysis, and simulation process. • Flexibility can also create huge pitfalls for the analyst. • Care in modeling, and use of the support services provided by the software supplier, is good practice. • Numerous runs and reruns may be necessary due to idiosyncrasies of the software. • Beware of errors in modeling, confusion over parameter definitions, etc. • Problems compound as a variety of failure distributions are intermixed with a similar grouping of repair distributions. • As a model becomes more complex, simulation becomes mandatory
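One concrete defensive habit here (a NumPy sketch of the "multiple streams, selectable seeds" idea; the per-stream assignment is our own illustration): spawn independent, reproducible streams from a single recorded master seed, so that reruns for auditing reproduce exactly the same draws.

```python
# Independent, reproducible random number streams via NumPy SeedSequence.
import numpy as np

master = np.random.SeedSequence(20240517)          # record this seed in the audit trail
streams = [np.random.default_rng(s) for s in master.spawn(3)]

# e.g., one stream each for failure times, repair times, standby switching
fail_rng, repair_rng, switch_rng = streams
print(fail_rng.exponential(scale=500.0, size=3))   # identical on every rerun
```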

  17. Observations - 1 • The models can run quickly even on old Pentium II PCs, or they can take hours to run. • Simulated mission length, number of runs, and the failure rate of the system can all lengthen execution time. • One of the models took in excess of 1 hour on a 3 GHz Pentium IV. • Convergence of the results is heavily dependent on how consistent the block failure rates are. • For example, one block with an MTBF of 1000 hours can double or triple simulation time. • The display during simulation on some of the packages shows the general trend, but there can be a lot of outliers. • One model failed to converge on one of the packages; again, this may have been due to a subtle preference selection (or non-selection).

  18. Observations - 2 • The display of Availability and/or Reliability during simulation can be useful for seeing how the simulation is behaving. • For most models, this rapidly stabilizes to the first decimal place; the second decimal place then tends to bounce around. • Usually you get the first 2 significant figures within a hundred runs. • We have the impression that most of the user interfaces were designed by software developers working with R&M engineers. • The problem is that the result seems to be what an R&M engineer would describe to someone who had never used the product. • For example, it’s annoying to have to double-click and work through tabs to put data into blocks in the block diagrams; the alternative is to use the Item Properties Table, which doesn’t let you create blocks and, in some cases, change probability distributions.
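A minimal sketch of that running-estimate display (ours; the per-run values are simply noise around an assumed true value of 0.95, purely to show the mechanics): track the cumulative mean run by run and watch which digits stop moving.

```python
# Running-mean convergence tracker for a per-run availability estimate.
import random

true_value, running_sum = 0.95, 0.0
for run in range(1, 501):
    running_sum += random.gauss(true_value, 0.05)   # stand-in for one run's result
    if run in (10, 50, 100, 500):
        print(f"after {run:3d} runs: {running_sum / run:.4f}")
```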

  19. Recommendations • When demonstration of compliance with a requirement is required • Model the system using one of the following approaches to reduce human error • Have one analyst model the system in two different software packages • The software methodologies are sufficiently different to avoid repeating errors • Have a second analyst perform a detailed audit of the model and data entry • Have two analysts independently model and enter data • Compare results • Results should agree within +/- 3 Standard Errors of the Mean (see the sketch below) • Make detailed notes of assumptions, methods, simulation values, etc., to provide an audit trail
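A hedged sketch of that +/- 3 standard-error agreement check (the estimates and standard errors below are illustrative, not results from this comparison): treat each package's result as a sample mean and flag disagreement beyond three combined standard errors of the difference.

```python
# Agreement check between two independent simulation estimates, using the
# standard error of the difference of two means. Numbers are illustrative.
from math import sqrt

def agree_within_3_sem(r1, se1, r2, se2):
    combined_se = sqrt(se1**2 + se2**2)   # SE of the difference of the means
    return abs(r1 - r2) <= 3.0 * combined_se

# Package A: R = 0.912 (SE 0.004); Package B: R = 0.901 (SE 0.005)
print(agree_within_3_sem(0.912, 0.004, 0.901, 0.005))   # True: 0.011 <= 3 * 0.0064
```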
