Presentation Transcript


  1. Efficient, Accurate, and Non-Gaussian Statistical Error Propagation Through Nonlinear System Models Travis V. Anderson July 26, 2011 Graduate Committee: Christopher A. Mattson David T. Fullwood Kenneth W. Chase

  2. Presentation Outline Section 1: Introduction & Motivation Section 2: Uncertainty Analysis Methods Section 3: Propagation of Variance Section 4: Propagation of Skewness & Kurtosis Section 5: Conclusion & Future Work 2

  3. Section 1: Introduction & Motivation 3

  4. Engineering Disasters: Tacoma Narrows Bridge, Space Shuttle Challenger, Hindenburg, Chernobyl

  5. F-35 Joint-Strike Fighter 5

  6. Research Motivation • Allow the system designer to quantify system model accuracy more quickly and accurately • Allow the system designer to verify design decisions at the time they are made • Prevent unnecessary design iterations and system failures by creating better system designs 6

  7. Section 2: Uncertainty Analysis Methods 7

  8. Uncertainty Analysis Methods • Error Propagation via Taylor Series Expansion • Brute Force Non-Deterministic Analysis (Monte Carlo, Latin Hypercube, etc.) • Deterministic Model Composition • Error Budgets • Univariate Dimension Reduction • Interval Analysis • Bayesian Inference • Response Surface Methodologies • Anti-Optimizations 8

  9. Brute Force Non-Deterministic Analysis • Fully-described, non-Gaussian output distribution can be obtained • Simulation must be executed again each time any input changes • Computationally expensive 9
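
A minimal sketch of the brute-force approach, assuming numpy and a hypothetical two-input nonlinear model (not one of the thesis models):

    import numpy as np

    # Brute-force Monte Carlo propagation through a hypothetical nonlinear model.
    rng = np.random.default_rng(0)
    N = 100_000

    # Hypothetical independent Gaussian inputs
    x1 = rng.normal(loc=2.0, scale=0.1, size=N)
    x2 = rng.normal(loc=5.0, scale=0.3, size=N)

    # Hypothetical nonlinear system model
    y = x1**2 * np.sin(x2)

    # A fully described, non-Gaussian output distribution is obtained,
    # but any change to an input requires re-running all N model evaluations.
    print("mean    :", y.mean())
    print("variance:", y.var())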

  10. Deterministic Model Composition • A compositional system model is created • Each component’s error is included in an error-augmented system model • Component error values are varied as the model is executed repeatedly to determine max/min error bounds 10

  11. Error Budgets • The error in one component is perturbed at a time • Each perturbation’s effect on the model output is observed • Either the errors must be independent or a separate model of error interactions is required 11

  12. Univariate Dimension Reduction • Data is transformed from a high-dimensional space to a lower-dimensional space • In some situations, analysis in reduced space may be more accurate than in the original space 12

  13. Interval Analysis • Measurement and rounding errors are bounded • Arithmetic can be performed using intervals instead of a single nominal value • Many software languages, libraries, compilers, data types, and extensions support interval arithmetic • XSC, Profil/BIAS, Boost, Gaol, Frink, MATLAB (Intlab) • IEEE Interval Standard (P1788) 13
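
A minimal hand-rolled sketch of the idea; real applications would use one of the libraries listed above (this toy Interval class is purely illustrative):

    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            p = (self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi)
            return Interval(min(p), max(p))

    # Bounded measurement errors propagate as guaranteed enclosures.
    radius = Interval(0.99, 1.01)   # e.g. a measured quantity with tolerance
    force  = Interval(9.8, 10.2)
    print(radius * force)           # enclosure of every possible product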

  14. Bayesian Inference • Combines common-sense knowledge with observational evidence • Meaningful relationships are declared, all others are ignored • Attempts to eliminate needless model complexity 14

  15. Response Surface Methodologies • Typically uses experimental data and design of experiments techniques • An n-dimensional response surface shows the output relationship between n-input variables 15
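
A minimal least-squares sketch of a second-order response surface fit, assuming numpy and a hypothetical two-input model (the thesis case studies are not reproduced here); the fitted surface acts as a cheap surrogate for the real model:

    import numpy as np

    # "Experimental" observations of a hypothetical two-input model
    rng = np.random.default_rng(1)
    x1, x2 = rng.uniform(-1, 1, (2, 200))
    y = np.exp(0.5 * x1) * np.sin(2.0 * x2)

    # Design matrix for a full second-order polynomial in two variables
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Evaluate the fitted response surface at a new point
    def surface(u, v):
        return coeffs @ np.array([1.0, u, v, u * v, u**2, v**2])

    print(surface(0.2, 0.3))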

  16. Anti-Optimizations • Two-tiered optimization problem • Uncertainty is anti-optimized on the lower level to find the worst-case scenario • The overall design is then optimized on the higher level to find the best design 16
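
A toy sketch of the two-tier structure; the objective function and the bounded uncertainty set are hypothetical, not taken from the thesis. The inner level anti-optimizes the uncertainty, the outer level optimizes the design against that worst case:

    import numpy as np

    def performance(design, uncertainty):
        # Hypothetical objective: lower is better
        return (design - 2.0) ** 2 + design * uncertainty

    designs = np.linspace(0.0, 4.0, 81)
    uncertainties = np.linspace(-0.5, 0.5, 101)   # bounded uncertainty set

    def worst_case(d):
        # Lower tier: anti-optimize the uncertainty for a fixed design
        return max(performance(d, u) for u in uncertainties)

    # Upper tier: pick the design that is best under its worst case
    best_design = min(designs, key=worst_case)
    print(best_design, worst_case(best_design))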

  17. Section 3: Propagation of Variance 17

  18. Central Moments • 0th Central Moment is 1 • 1st Central Moment is 0 • 2nd Central Moment is variance • 3rd Central Moment is used to calculate skewness • 4th Central Moment is used to calculate kurtosis 18
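
For reference, the standard definition behind this slide (general statistics, not specific to the thesis): the k-th central moment of a random variable X with mean μ is

    \mu_k = E\!\left[(X - \mu)^k\right], \qquad \mu_0 = 1, \quad \mu_1 = 0, \quad \mu_2 = \sigma^2 ,

and μ₃ and μ₄ feed the skewness and kurtosis definitions used in Section 4.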

  19. First Order Taylor Series 19

  20. First-Order Formula Derivation • Subtract the expectation from the first-order Taylor series, then square and take the expectation of both sides • The resulting covariance term vanishes under the assumption that the inputs are independent 20
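
The slide's equations are not reproduced in the transcript; the standard first-order result it derives, for an expansion about the input means with independent inputs (so the covariance terms vanish), is

    y \approx f(\boldsymbol{\mu}) + \sum_{i=1}^{n} \left.\frac{\partial f}{\partial x_i}\right|_{\boldsymbol{\mu}} (x_i - \mu_i)
    \quad\Rightarrow\quad
    \sigma_y^2 \approx \sum_{i=1}^{n} \left(\left.\frac{\partial f}{\partial x_i}\right|_{\boldsymbol{\mu}}\right)^{\!2} \sigma_{x_i}^2 .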

  21. First-Order Error Propagation • The error-propagation formula most often cited in the literature • Frequently used “blindly”, without an appreciation of its underlying assumptions and limitations 21

  22. Assumptions and Limitations • The approximation is generally more accurate for linear models → this section • Only variance is propagated and higher-order statistics are neglected → Section 4 • All inputs are assumed to be Gaussian → Section 4 • System outputs and output derivatives can be obtained • Taking the Taylor series expansion about a single point causes the approximation to be of local validity only • The input means and standard deviations must be known • All inputs are assumed to be independent 22

  23. First-Order Accuracy • Function: y = 1000 sin(x) • Input variance: 0.2 • The first-order variance estimate is off by ~100%: unacceptable! 23
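
A quick check of this failure mode; the input mean is not given in the transcript, so μ = π/2 is assumed here because the first derivative of sin vanishes there and the first-order estimate collapses:

    import numpy as np

    # First-order propagated variance vs. Monte Carlo for y = 1000*sin(x)
    mu, var_x = np.pi / 2, 0.2                           # mean is an assumption

    first_order = (1000 * np.cos(mu)) ** 2 * var_x       # (dy/dx)^2 * sigma_x^2

    x = np.random.default_rng(0).normal(mu, np.sqrt(var_x), 1_000_000)
    monte_carlo = np.var(1000 * np.sin(x))

    # Near mu = pi/2 the first-order estimate is ~0 while the true variance
    # is large, i.e. roughly 100% error.
    print(f"first-order: {first_order:.1f}   Monte Carlo: {monte_carlo:.1f}")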

  24. Second-Order Error Propagation Just as before: • Subtract the expectation of a second-order Taylor series from a second-order Taylor series • Square both sides, and take the expectation • Assumption: inputs are Gaussian → odd central moments are zero 24
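
The slide's equations are again image-only; for a single Gaussian input x (odd central moments zero, E[(x − μ)⁴] = 3σ⁴), the commonly cited second-order result is

    \sigma_y^2 \approx \left(\frac{df}{dx}\right)^{\!2} \sigma_x^2 + \frac{1}{2}\left(\frac{d^2 f}{dx^2}\right)^{\!2} \sigma_x^4 ,

with the derivatives evaluated at the input mean; the multi-input Gaussian case adds the analogous second-derivative terms for each input pair.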

  25. Second-Order Error Propagation • The second-order error-propagation formula most often cited in the literature • Like the first-order approximation, the second-order approximation is also frequently used “blindly” without an appreciation of its underlying assumptions and limitations

  26. Second-Order Accuracy • Function: y = 1000 sin(x) • Input variance: 0.2 26

  27. Higher-Order Accuracy • Function: y = 1000 sin(x) • Input variance: 0.2 27

  28. Computational Cost 28

  29. Predicting Truncation Error • How can we achieve higher-order accuracy with lower-order cost? 29

  30. Predicting Truncation Error • Can Truncation Error Be Predicted? 30

  31. Adding A Correction Factor Trigonometric (2nd Order): y = sin(x) or y = cos(x) 31

  32. Trigonometric Correction Factor 32

  33. Correction Factors Natural Log (1st Order): y = ln(x) Exponential (1st Order): y = exp(x) 33

  34. Correction Factors Exponential (1st Order): y = b^x, where: 34

  35. So What Does All This Mean? • We can achieve higher-order accuracy with lower-order computational cost (chart: average error vs. computational cost) 35

  36. Kinematic Motion of Flapping Wing 36

  37. Accuracy of Variance Propagation • RMS relative error by propagation order: 2nd: 40.97%, 3rd: 11.18%, 4th: 1.32%, CF (correction factor): 1.96% 37

  38. Computational Cost • Execution time was reduced from ~70 minutes to ~4 minutes → a 17.5× reduction in computational cost • Fourth-order accuracy was obtained with only second-order computational cost 38

  39. Section 4: Propagation of Skewness & Kurtosis 39

  40. Non-Gaussian Error Propagation (plots: predicted Gaussian output vs. actual system output, and predicted non-Gaussian output vs. actual system output) 40

  41. Skewness • Measure of a distribution’s asymmetry • A symmetric distribution has zero skewness 41
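
For reference, the standard definition (not transcript-specific):

    \gamma = \frac{\mu_3}{\sigma^3} = \frac{E\!\left[(X - \mu)^3\right]}{\sigma^3} ,

which is zero for any symmetric distribution, as the slide notes.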

  42. Propagation of Skewness • Based on a second-order Taylor series 42

  43. Kurtosis & Excess Kurtosis • Measure of a distribution’s “peakedness” or the thickness of its tails 43
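
For reference, the standard definitions (not transcript-specific):

    \beta = \frac{\mu_4}{\sigma^4}, \qquad \beta_{\mathrm{excess}} = \frac{\mu_4}{\sigma^4} - 3 ,

so a Gaussian distribution has kurtosis 3 and excess kurtosis 0.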

  44. Propagation of Kurtosis • Based on a second-order Taylor series 44

  45. Flat Rolling Metalworking Process • Output: maximum change in material thickness achieved in a single pass • Inputs include the roller radius and the coefficient of friction 45

  46. Input Distribution 46

  47. Gaussian Error Propagation • Probability overlap between the predicted Gaussian output and the actual system output: 53% 47

  48. Non-Gaussian Error Propagation • Probability overlap between the predicted non-Gaussian output and the actual system output: 93% 48

  49. Benefits of Higher-Order Statistics (at a 99.5% success rate) • Gaussian: 53% accuracy, max ΔH 3.0 cm • Non-Gaussian: 93% accuracy, max ΔH 7.9 cm • That is a 2.63× reduction in the number of passes required! 49

  50. Section 5: Conclusion & Future Work 50
