
Variance vs Entropy Based Sensitivity Indices



  1. Variance vs Entropy Based Sensitivity Indices Julius Harry Sumihar

  2. Outline • Background • Variance-based Sensitivity Index • Entropy-based Sensitivity Index • Estimates from Samples • Results • Conclusions

  3. Background • Applications of computational models to complex real-world situations are often subject to uncertainty • The aim of sensitivity analysis is to quantify how much the uncertainty from each specific source contributes to the resulting uncertainty of the final model output

  4. Variance-Based Sensitivity Index • Results from the principle of “expected reduction in variance” • This principle leads to the expression Var(Y) - E[Var(Y|Xi)] • Interpreted as “the amount of variance of output Y that is expected to be removed if the true value of parameter Xi becomes known” • By the law of total variance this is identical to Var(E[Y|Xi]), the variance of the conditional mean (the form used on slide 17)
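As an illustration (my addition, not part of the original slides), here is a minimal double-loop Monte Carlo sketch of this quantity for the model Y = U1 + 2U2 that appears later in the talk; the function names and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(u1, u2):
    # One of the test models from the talk: Y = U1 + 2*U2, with U1, U2 ~ U[0, 1].
    return u1 + 2 * u2

def variance_index(n_outer=2_000, n_inner=2_000):
    """Estimate Var(Y) - E[Var(Y|U1)], i.e. Var(E[Y|U1]), by double-loop Monte Carlo."""
    u1 = rng.uniform(size=n_outer)
    # For each fixed value of U1, average out the remaining uncertainty in U2.
    cond_means = np.array([model(x, rng.uniform(size=n_inner)).mean() for x in u1])
    return cond_means.var()

print(variance_index())  # analytical value: Var(U1) = 1/12 ~ 0.0833
```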

  5. Main characteristic: it considers the variance of a probability distribution as an overall scalar measure of the uncertainty represented by this distribution • Intuitively, over a bounded interval, the highest possible degree of uncertainty is expressed by the uniform distribution

  6. [Figure: two discrete distributions on the points 0, 1/3, 2/3, 1. Left: p = 1/3 at 0 and 1, p = 1/6 at 1/3 and 2/3. Right: the uniform distribution, p = 1/4 at every point] • A scalar measure of uncertainty should attain its maximum value for the uniform distribution • Inconsistency: this is not the case for variance, since Var(X) = 19/108 for the non-uniform distribution but only 15/108 for the uniform one, whereas entropy behaves as expected: H(X) = 1.32966 vs H(X) = 1.38629 = ln 4
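A quick numeric check of these values (my addition); it reproduces Var(X) = 19/108 vs 15/108 and H(X) = 1.32966 vs 1.38629 = ln 4.

```python
import numpy as np

xs = np.array([0, 1/3, 2/3, 1])                # the four support points
p_skewed  = np.array([1/3, 1/6, 1/6, 1/3])     # mass piled on the endpoints
p_uniform = np.full(4, 1/4)                    # uniform over the four points

def var(p):
    m = (p * xs).sum()
    return (p * (xs - m) ** 2).sum()

def entropy(p):
    return -(p * np.log(p)).sum()

print(var(p_skewed), var(p_uniform))            # 0.17593 (19/108) vs 0.13889 (15/108)
print(entropy(p_skewed), entropy(p_uniform))    # 1.32966 vs 1.38629 (= ln 4)
```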

  7. Entropy-Based Sensitivity Index* • Entropy, H(Y) = -∫ fY(y) ln fY(y) dy, is an overall scalar uncertainty measure that is maximized by the uniform distribution • H(Y) is ‘a measure of the total uncertainty of Y coming from all parameters’ *Bernard Krzykacz-Hausmann, “Epistemic Sensitivity Analysis Based On The Concept Of Entropy”

  8. ‘a measure of uncertainty of Y coming from the other parameters if the value of parameter X is known to be x’: H(Y|X=x) = -∫ f(y|x) ln f(y|x) dy • ‘expected uncertainty of Y if the true value of parameter X will become known’: H(Y|X) = E[H(Y|X=x)] = ∫ fX(x) H(Y|X=x) dx

  9. ‘the amount of entropy of output Y that is expected to be removed if the true value of parameter X will become known’: H(Y) - H(Y|X) • By some manipulations this equals the mutual information of X and Y: H(Y) - H(Y|X) = ∫∫ f(x,y) ln[ f(x,y) / (fX(x) fY(y)) ] dx dy (spelled out below)
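Spelled out (my reconstruction, using only the definitions from the previous slides), the manipulation is:

```latex
\begin{align*}
H(Y) - H(Y\mid X)
  &= -\int f_Y(y)\,\ln f_Y(y)\,dy
     + \int f_X(x)\!\int f(y\mid x)\,\ln f(y\mid x)\,dy\,dx\\
  &= -\iint f(x,y)\,\ln f_Y(y)\,dx\,dy
     + \iint f(x,y)\,\ln\frac{f(x,y)}{f_X(x)}\,dx\,dy\\
  &= \iint f(x,y)\,\ln\frac{f(x,y)}{f_X(x)\,f_Y(y)}\,dx\,dy .
\end{align*}
```

The last expression is the mutual information of X and Y, which is the form used for the sample estimates below.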

  10. Estimates From Samples [Figure: the sample space of (X, Y) partitioned into a regular grid, with cell boundaries a1, a2, …, ai, …, a_imax on the X axis and b1, b2, b3, …, bj, …, b_jmax on the Y axis; both estimates are computed from the counts of samples falling in these cells]

  11. Entropy-based: H(Y) - H(Y|X) ≈ Σij (nij/N) ln[ (nij N) / (ni nj) ] • Variance-based: Var(Y) - E[Var(Y|X)] ≈ Σi (ni/N)(ȳi - ȳ)², where nij is the number of samples in grid cell (i, j), ni = Σj nij and nj = Σi nij are the marginal counts, ȳi is the mean of Y over the samples in X-cell i, and ȳ is the overall mean
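A minimal Python sketch of both grid estimators (my implementation of the formulas above, not the author's code; the names entropy_index and variance_index are mine):

```python
import numpy as np

def entropy_index(x, y, grid):
    """Plug-in grid estimate of H(Y) - H(Y|X) from the cell counts n_ij."""
    xb = np.arange(x.min(), x.max() + grid, grid)
    yb = np.arange(y.min(), y.max() + grid, grid)
    n, _, _ = np.histogram2d(x, y, bins=[xb, yb])
    p = n / n.sum()                        # n_ij / N
    px = p.sum(axis=1, keepdims=True)      # marginal n_i / N
    py = p.sum(axis=0, keepdims=True)      # marginal n_j / N
    m = p > 0                              # skip empty cells (0 * log 0 = 0)
    return float((p[m] * np.log(p[m] / (px * py)[m])).sum())

def variance_index(x, y, grid):
    """Grid estimate of Var(Y) - E[Var(Y|X)]: variance of the per-cell means of Y."""
    xb = np.arange(x.min(), x.max() + grid, grid)
    cell = np.digitize(x, xb[1:-1])        # X-cell index of every sample
    ybar = y.mean()
    return sum(
        (y[cell == i].size / y.size) * (y[cell == i].mean() - ybar) ** 2
        for i in np.unique(cell)
    )

rng = np.random.default_rng(0)
u1, u2 = rng.uniform(size=10_000), rng.uniform(size=10_000)
y = u1 + 2 * u2                            # model from the next slide
print(entropy_index(u2, y, 0.05))          # slide 14 table: ~0.906 (analytical 0.943)
print(variance_index(u2, y, 0.05))         # slide 15 table: ~0.334 (analytical 1/3)
```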

  12. Results • Models: • Y = U1 + U2 • Y = U1 + 2U2 • Y = N1 + N2 • Y = N1 + 2N2 • U1, U2 ~ U[0,1]; N1, N2 ~ N(0.5, 0.3) (mean 0.5, standard deviation 0.3) • Number of samples: 1,000 and 10,000 (10 repetitions each) • Grid size: 0.025, 0.05, 0.1, 0.2
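For concreteness, here is a minimal data-generation sketch for these four models (my addition, not from the slides); the analytical variance-based indices quoted in the later tables follow from Var(E[Y|Xi]) = ai² Var(Xi) for these additive models, with Var(U) = 1/12 and Var(N) = 0.3² = 0.09.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # sample size recommended on the next slide

u1, u2 = rng.uniform(0, 1, N), rng.uniform(0, 1, N)
n1, n2 = rng.normal(0.5, 0.3, N), rng.normal(0.5, 0.3, N)

# Each entry: model output plus (label, input samples, coefficient, Var(X_i)).
# The (x, y) pairs can be fed to the grid estimators sketched earlier.
models = {
    "Y = U1 + U2":  (u1 + u2,    [("U1", u1, 1, 1/12), ("U2", u2, 1, 1/12)]),
    "Y = U1 + 2U2": (u1 + 2*u2,  [("U1", u1, 1, 1/12), ("U2", u2, 2, 1/12)]),
    "Y = N1 + N2":  (n1 + n2,    [("N1", n1, 1, 0.09), ("N2", n2, 1, 0.09)]),
    "Y = N1 + 2N2": (n1 + 2*n2,  [("N1", n1, 1, 0.09), ("N2", n2, 2, 0.09)]),
}

for name, (y, inputs) in models.items():
    for label, x, a, var_x in inputs:
        # Analytical variance-based index: Var(E[Y|X_i]) = a_i^2 * Var(X_i).
        print(f"{name:13s} {label}: analytical index = {a**2 * var_x:.6f}")
```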

  13. Effect of Sample Number • Estimates based on 10,000 samples are clearly more accurate than those based on 1,000 samples • 10,000 samples are therefore used for all results that follow

  14. Effect of Grid Size: H(Y)-H(Y|Xi)

  Model        | Xi | 0.025     | 0.05      | 0.1        | 0.2       | Analytical
  Y = U1 + U2  | U1 | 0.5629084 | 0.4847560 | 0.4381850  | 0.3740526 | 0.5
               | U2 | 0.5618695 | 0.4838189 | 0.4378267  | 0.3759858 | 0.5
  Y = U1 + 2U2 | U1 | 0.4104407 | 0.2723420 | 0.2263539  | 0.1890477 | 0.25
               | U2 | 0.9948456 | 0.9062422 | 0.83651837 | 0.7288735 | 0.943147
  Y = N1 + N2  | N1 | 0.5493945 | 0.4147645 | 0.35574700 | 0.3236304 | 0.346573
               | N2 | 0.5524757 | 0.4150046 | 0.35787368 | 0.3255338 | 0.346573
  Y = N1 + 2N2 | N1 | 0.5112249 | 0.2500562 | 0.15475538 | 0.1155110 | 0.111572
               | N2 | 0.9992094 | 0.8611518 | 0.79986998 | 0.7280601 | 0.804719

  15. Effect of Grid Size: Var(Y)-E[Var(Y|Xi)]

  Model        | Xi | 0.025      | 0.05      | 0.1       | 0.2       | Analytical
  Y = U1 + U2  | U1 | 0.08324767 | 0.0829774 | 0.0822966 | 0.0796876 | 0.083333
               | U2 | 0.08371050 | 0.0833495 | 0.0825837 | 0.0802683 | 0.083333
  Y = U1 + 2U2 | U1 | 0.08395208 | 0.0832341 | 0.0823905 | 0.0794308 | 0.083333
               | U2 | 0.33438259 | 0.3335535 | 0.3309643 | 0.3212748 | 0.333333
  Y = N1 + N2  | N1 | 0.08959163 | 0.0892577 | 0.0884411 | 0.0856991 | 0.09
               | N2 | 0.08977242 | 0.0894751 | 0.0887430 | 0.0860685 | 0.09
  Y = N1 + 2N2 | N1 | 0.09116694 | 0.0901768 | 0.0886759 | 0.0859255 | 0.09
               | N2 | 0.35729517 | 0.3565734 | 0.3537290 | 0.3437929 | 0.36

  16. The entropy-based estimate is very sensitive to the grid size, whereas the variance-based estimate is almost unaffected by it • No rule exists for choosing the grid size

  17. Best Estimates

  Model        | Xi | H(Y)-H(Y|Xi) | Analytical | Var(E[Y|Xi]) | Analytical
  Y = U1 + U2  | U1 | 0.484756     | 0.5        | 0.083248     | 0.083333
               | U2 | 0.483819     | 0.5        | 0.083350     | 0.083333
  Y = U1 + 2U2 | U1 | 0.272342     | 0.25       | 0.083234     | 0.083333
               | U2 | 0.906242     | 0.943147   | 0.333553     | 0.333333
  Y = N1 + N2  | N1 | 0.355747     | 0.346573   | 0.089591     | 0.09
               | N2 | 0.357874     | 0.346573   | 0.089772     | 0.09
  Y = N1 + 2N2 | N1 | 0.115511     | 0.111572   | 0.090177     | 0.09
               | N2 | 0.799870     | 0.804719   | 0.357295     | 0.36

  18. [Plots: estimates of H(Y)-H(Y|Xi) and Var(Y)-E[Var(Y|Xi)]]

  19. [Plots: estimates of H(Y)-H(Y|Xi) and Var(Y)-E[Var(Y|Xi)]]

  20. Conclusions • The entropy-based sensitivity index is difficult to estimate: it is very sensitive to the grid size, and no rule exists for choosing one • The variance-based sensitivity index is therefore the better choice in practice

  21. Model: Y = U1 + U2

  22. Model: Y = U1 + U2

  23. Model: Y = U1 + U2

  24. Model: Y = U1 + 2U2

  25. Model: Y = U1 + 2U2

  26. Model: Y = U1 + 2U2

  27. Model: Y = U1 + 2U2

  28. Model: Y = U1 + 2U2

  29. Model: Y = U1 + 2U2

  30. Model: Y = a1N1 + a2N2 + a3N3 + … • Bernard Krzykacz-Hausmann: for this linear model with independent Gaussian inputs the index has the closed form H(Y) - H(Y|Ni) = (1/2) ln[ Var(Y) / (Var(Y) - ai² Var(Ni)) ]
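A quick check (my addition) that this closed form reproduces the analytical column of the earlier tables for σ = 0.3:

```python
import math

def gaussian_entropy_index(a, i, sigma=0.3):
    """H(Y) - H(Y|N_i) for Y = sum_k a_k * N_k with independent N_k ~ N(mu, sigma)."""
    var_y = sum(ak ** 2 for ak in a) * sigma ** 2
    return 0.5 * math.log(var_y / (var_y - a[i] ** 2 * sigma ** 2))

print(gaussian_entropy_index([1, 1], 0))   # 0.5 ln 2    ~ 0.346573  (Y = N1 + N2)
print(gaussian_entropy_index([1, 2], 0))   # 0.5 ln 1.25 ~ 0.111572  (Y = N1 + 2N2, N1)
print(gaussian_entropy_index([1, 2], 1))   # 0.5 ln 5    ~ 0.804719  (Y = N1 + 2N2, N2)
```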

  31. Derivation of H(Y)-H(Y|X): H(Y) = -∫ fY(y) ln fY(y) dy; H(Y|X) = -∫∫ f(x,y) ln f(y|x) dx dy; substituting f(y|x) = f(x,y)/fX(x) and fY(y) = ∫ f(x,y) dx gives H(Y) - H(Y|X) = ∫∫ f(x,y) ln[ f(x,y) / (fX(x) fY(y)) ] dx dy

  32. Estimate of H(Y)-H(Y|X): Ĥ(Y) - Ĥ(Y|X) = Σij (nij/N) ln[ (nij N) / (ni nj) ], the plug-in estimate obtained by replacing the densities in the mutual-information integral with the grid-cell relative frequencies

  33. Estimate of Var(Y)-E(Var(Y|X)): Var(Y) - E[Var(Y|X)] ≈ Σi (ni/N)(ȳi - ȳ)², the variance of the per-cell conditional means ȳi about the overall mean ȳ

  34. Estimate of Var(Y)-E(Var(Y|X)) (continued)
