
Stochastic Linear Programming by Series of Monte-Carlo Estimators


Presentation Transcript


  1. Stochastic Linear Programming by Series of Monte-Carlo Estimators Leonidas SAKALAUSKAS Institute of Mathematics & Informatics, Vilnius, Lithuania E-mail: <sakal@ktl.mii.lt>

  2. CONTENT • Introduction • Monte-Carlo estimators • Stochastic differentiation • Dual solution approach (DS) • Finite difference approach (FD) • Simultaneous perturbation stochastic approximation (SPSA) • Likelihood ratio approach (LR) • Numerical study of stochastic gradient estimators • Stochastic optimization by series of Monte-Carlo estimators • Numerical study of the stochastic optimization algorithm • Conclusions

  3. Introduction We consider a stochastic approach to stochastic linear programming problems that is distinguished by • adaptive regulation of the Monte-Carlo estimators • a statistical termination procedure • a stochastic ε-feasible direction approach to avoid "jamming" or "zigzagging" when solving a constrained problem

  4. Two-stage stochastic programming problem with recourse: F(x) = c^T x + E[Q(x, ξ)] → min over the feasible set D = {x | A x = b, x ≥ 0}, where the recourse value is Q(x, ξ) = min_y {q^T y | W y = h − T x, y ≥ 0}, and W, T, h are random in general, defined by an absolutely continuous probability density

  5. Monte-Carlo estimators of the objective function. Let a certain number N of scenarios y^1, …, y^N be provided for some x ∈ D. The sampling estimator of the objective function, F̃(x) = (1/N) Σ_{j=1..N} f(x, y^j), as well as the sampling variance, D̃²(x) = (1/(N−1)) Σ_{j=1..N} (f(x, y^j) − F̃(x))², are computed
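A minimal numpy sketch of these two estimators; the per-scenario value f(x, y) is a hypothetical stand-in for the second-stage objective of slide 4:

```python
import numpy as np

def objective_estimators(f, x, scenarios):
    """Sampling mean and variance of f(x, y) over the given scenarios."""
    values = np.array([f(x, y) for y in scenarios])
    F_tilde = values.mean()          # estimate of F(x)
    D2_tilde = values.var(ddof=1)    # unbiased sampling variance
    return F_tilde, D2_tilde
```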

  6. Monte-Carlo estimators of the stochastic gradient. The gradient estimator, G̃(x) = (1/N) Σ_{j=1..N} g(x, y^j), as well as the sampling covariance matrix, A(x) = (1/(N−1)) Σ_{j=1..N} (g(x, y^j) − G̃(x)) (g(x, y^j) − G̃(x))^T, are evaluated using the same random sample, where g(x, y) denotes the stochastic gradient of f at the scenario y
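The same sample can be reused for the gradient; in this sketch, g(x, y) is a hypothetical per-scenario gradient, to be supplied by one of the estimators of the following slides:

```python
import numpy as np

def gradient_estimators(g, x, scenarios):
    """Sample mean G~(x) and covariance A(x) of the stochastic gradient."""
    grads = np.array([g(x, y) for y in scenarios])   # shape (N, n)
    G_tilde = grads.mean(axis=0)
    A = np.cov(grads, rowvar=False, ddof=1)          # n x n sampling covariance
    return G_tilde, A
```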

  7. Statistical testing of the optimality hypothesis under asymptotic normality. The optimality hypothesis is rejected if 1) the statistical hypothesis of equality of the gradient to zero is rejected, i.e., the Hotelling statistic T² = N · G̃(x)^T A(x)^{-1} G̃(x) exceeds the Fisher critical value, (N−n) T² / (n (N−1)) > F_α(n, N−n); 2) or the confidence interval of the objective function, 2 t_β · D̃(x)/√N, exceeds the admissible value ε
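A sketch of this two-part test, assuming the estimators above and the standard one-sample Hotelling T²/Fisher relation (the slide's own critical-value convention is not visible in the source):

```python
import numpy as np
from scipy import stats

def optimality_rejected(G_tilde, A, D2_tilde, N, n, alpha=0.05, beta=0.05, eps=0.2):
    """Reject optimality if the gradient is non-zero or the CI is too wide."""
    T2 = N * G_tilde @ np.linalg.solve(A, G_tilde)   # Hotelling statistic
    F_stat = (N - n) * T2 / (n * (N - 1))            # ~ F(n, N - n) under H0
    gradient_nonzero = F_stat > stats.f.ppf(1 - alpha, n, N - n)
    interval = 2 * stats.t.ppf(1 - beta / 2, N - 1) * np.sqrt(D2_tilde / N)
    return gradient_nonzero or interval > eps
```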

  8. Stochastic differentiation • We examine several estimators of the stochastic gradient: • Dual solution approach (DS); • Finite difference approach (FD); • Simultaneous perturbation stochastic approximation (SPSA); • Likelihood ratio approach (LR).

  9. Dual solution approach (DS). The stochastic gradient is expressed as g(x, y) = c − T^T u*(x, y), using the set of solutions u* of the dual problem max_u {(h − T x)^T u | W^T u ≤ q}
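A sketch with SciPy's LP solver: the second-stage problem is solved for one scenario and the equality-constraint marginals (the duals, exposed as res.eqlin.marginals by the HiGHS methods, SciPy ≥ 1.7) give the gradient by the chain rule. The names c, q, W, T, h follow the slides; the rest is illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def stochastic_gradient_ds(c, q, W, T, h, x):
    """DS gradient for one scenario (W, T, h): g = c - T^T u*."""
    res = linprog(q, A_eq=W, b_eq=h - T @ x, method="highs")  # y >= 0 by default
    u = res.eqlin.marginals      # dQ/d(h - Tx): duals of the equality constraints
    return c - T.T @ u           # chain rule: dQ/dx = -T^T u
```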

  10. Finite difference (FD) approach. In this approach each i-th component of the stochastic gradient is computed as g_i(x, y) = (f(x + δ e_i, y) − f(x, y)) / δ, where e_i is the vector with zero components except the i-th one, equal to 1, and δ is a certain small value
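A direct sketch of this estimator, again with a hypothetical per-scenario objective f(x, y); it costs n + 1 evaluations of f per scenario:

```python
import numpy as np

def stochastic_gradient_fd(f, x, y, delta=1e-4):
    """Forward-difference stochastic gradient on a single scenario y."""
    f0 = f(x, y)
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = 1.0                                 # i-th unit vector
        g[i] = (f(x + delta * e, y) - f0) / delta
    return g
```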

  11. Simultaneous perturbation stochastic approximation (SPSA): g_i(x, y) = (f(x + δΔ, y) − f(x − δΔ, y)) / (2 δ Δ_i), where Δ is a random vector whose components take the values 1 or −1 with probabilities p = 0.5, and δ is some small value (Spall (2003))
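A sketch of the SPSA estimate, whose cost is two evaluations of f per scenario regardless of the dimension n:

```python
import numpy as np

def stochastic_gradient_spsa(f, x, y, delta=1e-4, rng=None):
    """SPSA stochastic gradient on a single scenario y."""
    rng = rng or np.random.default_rng()
    Delta = rng.choice([-1.0, 1.0], size=len(x))    # +/-1 with p = 0.5
    diff = f(x + delta * Delta, y) - f(x - delta * Delta, y)
    return diff / (2 * delta * Delta)               # componentwise division
```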

  12. Likelihood ratio (LR) approach. The stochastic gradient is expressed through the score function ∇_x ln p(x, y) of the scenario density; see Rubinstein and Shapiro (1993), Sakalauskas (2002)
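The slide's own formula is not reproduced in the source; as a generic illustration of the score-function idea, here is a one-dimensional sketch estimating d/dx E[f(y)] for y ~ N(x, σ²), where the weight is the score (y − x)/σ²:

```python
import numpy as np

def lr_gradient_1d(f, x, sigma=1.0, N=10000, rng=None):
    """Score-function estimate of d/dx E[f(y)], y ~ N(x, sigma^2)."""
    rng = rng or np.random.default_rng(0)
    y = rng.normal(loc=x, scale=sigma, size=N)
    score = (y - x) / sigma**2       # d/dx of log N(y; x, sigma^2)
    return np.mean(f(y) * score)
```

For instance, with f(y) = y² the estimate converges to 2x, the derivative of E[y²] = x² + σ².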

  13. Numerical study of stochastic gradient estimators (1). The methods for stochastic differentiation have been explored with test functions

  14. Numerical study of stochastic gradient estimators (2). Stochastic gradient estimators from samples of size (number of scenarios) N were computed at the known optimum point X* (i.e., where the gradient vanishes) for test functions depending on n parameters. This was repeated 400 times, and the corresponding sample of Hotelling statistics was analyzed according to the ω² and A² goodness-of-fit criteria

  15. ω² criterion on the number of variables n and the Monte-Carlo sample size N (critical value 0.46)

  16. A² criterion on the number of variables n and the Monte-Carlo sample size N (critical value 2.49)

  17. Statistical criteria on the Monte-Carlo sample size N for the number of variables n = 40 (critical values 0.46 and 2.49)

  18. Statistical criteria on the Monte-Carlo sample size N for the number of variables n = 60 (critical values 0.46 and 2.49)

  19. Statistical criteria on the Monte-Carlo sample size N for the number of variables n = 80 (critical values 0.46 and 2.49)

  20. Numerical study of stochastic gradient estimators (8). Conclusion: the T²-statistics distribution may be approximated by the Fisher law when the number of scenarios N is sufficiently large relative to the number of variables n

  21. Frequency of the optimality hypothesis versus the distance to the optimum (n = 2)

  22. Frequency of the optimality hypothesis versus the distance to the optimum (n = 10)

  23. Frequency of the optimality hypothesis versus the distance to the optimum (n = 20)

  24. Frequency of the optimality hypothesis versus the distance to the optimum (n = 50)

  25. Frequency of the optimality hypothesis versus the distance to the optimum (n = 100)

  26. Numerical study of stochastic gradient estimators (14). Conclusion: stochastic differentiation by the Dual Solution and Finite Difference approaches enables us to reliably estimate the stochastic gradient when …; the SPSA and Likelihood Ratio approaches work when …

  27. Gradient search procedure. Let some initial point x⁰ be chosen, a random sample of a certain initial size N⁰ be generated at this point, and the Monte-Carlo estimators be computed. The iterative stochastic procedure of gradient search is x^{t+1} = π_ε(x^t − ρ · G̃(x^t)), where π_ε denotes the projection onto the ε-feasible set and ρ > 0 is a step length; see the sketch below
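A sketch of the whole loop under the assumptions above; sample, grad_est, and project_eps are hypothetical callables for scenario generation, the slide-6 gradient estimator, and the ε-feasible projection:

```python
import numpy as np

def gradient_search(x0, sample, grad_est, project_eps, rho=1.0, N0=100, max_iter=100):
    """Iterate x_{t+1} = proj_eps(x_t - rho * G~(x_t)) with Monte-Carlo gradients."""
    x, N = np.asarray(x0, dtype=float), N0
    for t in range(max_iter):
        scenarios = sample(N)            # N Monte-Carlo scenarios
        G = grad_est(x, scenarios)       # stochastic gradient estimate
        x = project_eps(x - rho * G)     # eps-feasible direction step
        # N is then adapted by the rule of the next slide
    return x
```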

  28. The rule to choose the number of scenarios. We propose the following rule to regulate the number of scenarios: the next sample size is taken inversely proportional to the squared norm of the gradient estimate, N^{t+1} ≈ n · F_α(n, N^t − n) / (ρ · G̃(x^t)^T A(x^t)^{-1} G̃(x^t)), clipped to [N_min, N_max]. Thus, the iterative stochastic search is performed until the statistical criteria no longer contradict the optimality conditions
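A sketch of this reading of the rule (the proportionality constant and the clipping bounds are assumptions, patterned after Sakalauskas (2002)):

```python
import numpy as np
from scipy import stats

def next_sample_size(G_tilde, A, N, n, rho=1.0, alpha=0.05, N_min=100, N_max=10000):
    """Next Monte-Carlo sample size, growing as the gradient estimate shrinks."""
    fisher = stats.f.ppf(1 - alpha, n, N - n)
    quad = G_tilde @ np.linalg.solve(A, G_tilde)   # G~^T A^{-1} G~
    return int(np.clip(np.ceil(n * fisher / (rho * quad)), N_min, N_max))
```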

  29. Linear convergence. Under some conditions on the finiteness and smooth differentiability of the objective function, the proposed algorithm converges a.s. to a stationary point at a linear rate, where K, L, C, l are some constants (Sakalauskas (2002), (2004))

  30. Linear convergence. Since the Monte-Carlo sample size increases at a geometric progression rate, it follows that the total number of scenarios generated over all iterations stays proportional to the final sample size. Conclusion: the proposed approach enables us to solve SP problems by computing the expected objective function only a finite number of times
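A short calculation behind this conclusion, assuming the sample sizes N_t grow roughly geometrically with some ratio q > 1 up to the final size N_T:

```latex
\sum_{t=0}^{T} N_t \;\le\; N_T \sum_{k=0}^{\infty} q^{-k} \;=\; \frac{q}{q-1}\, N_T
```

so the total number of generated scenarios exceeds those needed for a single evaluation of the expected objective function by only the constant factor q/(q − 1).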

  31. Numerical study of the stochastic optimization algorithm. Test problems have been solved from the database of two-stage stochastic linear optimization problems: http://www.math.bme.hu/~deak/twostage/l1/20x20.1/. The dimensionality of the tasks ranges from n = 20 to n = 80 (30 to 120 variables at the second stage). All solutions given in the database were reproduced, and in a number of cases we succeeded in improving the known solutions, especially for large numbers of variables

  32. Two-stage stochastic programming problem (n = 20) • The estimate of the optimal value of the objective function given in the database is 182.94234 ± 0.066 (improved to 182.59248 ± 0.033) • N0 = Nmin = 100, Nmax = 10000 • Maximal number of iterations …; generation of trials was stopped when the estimated confidence interval of the objective function exceeded the admissible value … • Initial data as follows: … • Solution repeated 500 times

  33. Frequency of stopping versus the number of iterations and the admissible confidence interval

  34. Change of the objective function versus the number of iterations and the admissible interval

  35. Change of the confidence interval versus the number of iterations and the admissible interval

  36. Change of the Hotelling statistics versus the admissible interval

  37. Change of the Monte-Carlo sample size versus the number of iterations and the admissible interval

  38. Ratio … versus the admissible interval (1)

  39. Ratio … versus the admissible interval (2)

  40. Solving DB Test Problems (1). Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. DB given solution: 649.604 ± 0.053. Solution by the developed algorithm: 646.444 ± 0.999

  41. Solving DB Test Problems (2). Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. DB given solution: 6656.637 ± 0.814. Solution by the developed algorithm: 6648.548 ± 0.999

  42. Solving DB Test Problems (3). Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. DB given solution: 586.329 ± 0.327. Solution by the developed algorithm: 475.012 ± 0.999

  43. Comparison with Benders decomposition

  44. Conclusions • A stochastic iterative method has been developed to solve SLP problems by a finite sequence of Monte-Carlo sampling estimators • The approach presented relies on a statistical termination procedure and adaptive regulation of the Monte-Carlo sample size • The computational results show that the developed approach provides estimators for reliable solution and testing of the optimality hypothesis over a wide range of SLP problem dimensionalities (2 < n < 100) • The developed approach enables us to generate an almost unbounded number of scenarios and to solve SLP problems with admissible accuracy • The total volume of computations for solving an SLP problem exceeds by only several times the volume of scenarios needed to evaluate one value of the expected objective function

  45. References • Rubinstein, R. Y., and Shapiro, A. (1993). Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method. Wiley & Sons, N.Y. • Shapiro, A., and Homem-de-Mello, T. (1998). A simulation-based approach to two-stage stochastic programming with recourse. Mathematical Programming, 81, 301-325. • Sakalauskas, L. (2002). Nonlinear stochastic programming by Monte-Carlo estimators. European Journal of Operational Research, 137, 558-573. • Spall, J. C. (2003). Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Wiley & Sons. • Sakalauskas, L. (2004). Application of the Monte-Carlo method to nonlinear stochastic optimization with linear constraints. Informatica, 15(2), 271-282. • Sakalauskas, L. (2006). Towards implementable nonlinear stochastic programming. In K. Marti et al. (Eds.), Coping with Uncertainty. Springer Verlag.

  46. Announcements Welcome to the EURO Mini Conference “Continuous Optimization and Knowledge Based Technologies (EUROPT-2008)” May 20-23, 2008, Neringa, Lithuania http://www.mii.lt/europt-2008
