
Nonlinear Stochastic Programming by the Monte-Carlo method


Presentation Transcript


  1. Nonlinear Stochastic Programming by the Monte-Carlo method Lecture 4 Leonidas Sakalauskas Institute of Mathematics and Informatics Vilnius, Lithuania EURO Working Group on Continuous Optimization

  2. Content
  • Stochastic unconstrained optimization
  • Monte Carlo estimators
  • Statistical testing of optimality
  • Gradient-based stochastic algorithm
  • Rule for Monte-Carlo sample size regulation
  • Counterexamples
  • Nonlinear stochastic constrained optimization
  • Convergence analysis
  • Counterexample

  3. Stochastic unconstrained optimization Let us consider the stochastic unconstrained optimization problem $\min_{x \in \mathbb{R}^n} F(x) = \mathbf{E} f(x, \omega)$, where $\omega$ is an elementary event in the probability space $(\Omega, \Sigma, \mathbf{P})$, $f : \mathbb{R}^n \times \Omega \to \mathbb{R}$ is a random function, and $\mathbf{P}$ is the measure defined by the probability density function $p$, so that $F(x) = \int_{\Omega} f(x, y)\, p(y)\, dy$.

  4. Monte-Carlo samples We assume here that for any $x$ a Monte-Carlo sample $y^1, y^2, \dots, y^N$ of a certain size $N$ is provided, and the sampling estimator of the objective function is computed: $\tilde{F}(x) = \frac{1}{N} \sum_{j=1}^{N} f(x, y^j)$. The sampling variance $\tilde{D}^2(x) = \frac{1}{N-1} \sum_{j=1}^{N} \big( f(x, y^j) - \tilde{F}(x) \big)^2$ is also computed; it is useful for evaluating the accuracy of the estimator.
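These two estimators are straightforward to implement. A minimal sketch in Python, assuming a user-supplied integrand f(x, y) and a sampler for the random variable (both names are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def objective_estimate(f, x, sample):
    """Sample mean of f(x, .) and its sampling variance over a Monte-Carlo sample."""
    vals = np.array([f(x, y) for y in sample])
    f_hat = vals.mean()        # estimator of F(x)
    d2 = vals.var(ddof=1)      # sampling variance, for accuracy assessment
    return f_hat, d2

# Example: f(x, y) = (x - y)^2 with y ~ N(0, 1), so F(x) = x^2 + 1.
f = lambda x, y: (x - y) ** 2
print(objective_estimate(f, 0.5, rng.normal(size=1000)))  # roughly (1.25, ...)
```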

  5. Gradient The gradient is evaluated using the same random sample: $\nabla \tilde{F}(x) = \frac{1}{N} \sum_{j=1}^{N} \nabla_x f(x, y^j)$.

  6. Covariance matrix The sampling covariance matrix $Z(x) = \frac{1}{N-1} \sum_{j=1}^{N} \big( \nabla_x f(x, y^j) - \nabla \tilde{F}(x) \big) \big( \nabla_x f(x, y^j) - \nabla \tilde{F}(x) \big)^{\top}$ is computed later on for normalising the gradient estimator.
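A sketch of the gradient and covariance estimators, assuming the gradient of the integrand with respect to x, grad_f(x, y), is available in closed form (the name is a placeholder):

```python
import numpy as np

def gradient_estimate(grad_f, x, sample):
    """Sample-average gradient of F and the sampling covariance of its components."""
    grads = np.array([grad_f(x, y) for y in sample])  # shape (N, n)
    g_hat = grads.mean(axis=0)                        # estimator of grad F(x)
    Z = np.cov(grads, rowvar=False, ddof=1)           # n x n sampling covariance
    return g_hat, Z
```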

  7. Gradient search procedure Let some initial point $x^0$ be given, let a random sample of a certain initial size $N_0$ be generated at this point, and let the Monte-Carlo estimates be computed. The iterative stochastic procedure of gradient search can then be used: $x^{t+1} = x^t - \rho \nabla \tilde{F}(x^t)$, with step length $\rho > 0$.
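A self-contained sketch of the resulting loop, with a fixed sample size and a fixed iteration budget for simplicity (the lecture instead regulates the sample size and stops via a statistical test, as the next slides describe; draw_sample is an assumed sampler):

```python
import numpy as np

def gradient_search(grad_f, x0, draw_sample, rho=0.1, n0=100, n_iter=100):
    """Monte-Carlo gradient descent: x <- x - rho * (sample-average gradient)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        sample = draw_sample(n0)  # fresh Monte-Carlo sample at each iterate
        g_hat = np.mean([grad_f(x, y) for y in sample], axis=0)
        x = x - rho * g_hat
    return x
```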

  8. Monte-Carlo sample size problem There is no great need to compute the estimators with high accuracy when starting the optimisation, because at that stage it suffices to evaluate only approximately the direction leading to the optimum. Therefore, one can draw rather small samples at the beginning of the search and increase the sample size later on, so as to obtain an estimate of the objective function with the desired accuracy exactly at the time of deciding that a solution to the optimisation problem has been found.

  9. Monte-Carlo sample size problem The following version for regulating the sample size is proposed:
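The formula on this slide is not recoverable from the transcript. The sketch below follows the kind of rule used in the author's related papers, where the next sample size is taken inversely proportional to the squared gradient normalized by its covariance, and clipped to a range [N_min, N_max]; the constant c and the bounds are assumptions:

```python
import numpy as np

def next_sample_size(g_hat, Z, n_min=100, n_max=10_000, c=50.0):
    """Small samples while the gradient is large; large samples near the optimum."""
    t2 = float(g_hat @ np.linalg.solve(Z, g_hat))  # squared normalized gradient
    n_next = int(c / max(t2, 1e-12))               # assumed proportionality rule
    return min(max(n_next, n_min), n_max)
```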

  10. Statistical testing of the optimality hypothesis The optimality hypothesis could be accepted for some point $x^t$ with significance $\mu$ if the following condition on the Hotelling statistic $T^2 = N_t\, (\nabla \tilde{F})^{\top} Z^{-1} \nabla \tilde{F}$ is satisfied: $\frac{N_t - n}{n (N_t - 1)}\, T^2 \le \mathrm{Fish}(\mu, n, N_t - n)$, where $\mathrm{Fish}$ denotes the quantile of the Fisher distribution. Next, we can use the asymptotic normality again and decide that the objective function is estimated with a permissible accuracy $\varepsilon$ if its confidence bound does not exceed this value: $2 \eta_{\beta} \tilde{D} / \sqrt{N_t} \le \varepsilon$, with $\eta_{\beta}$ the normal quantile.
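Both tests are easy to code. A sketch using SciPy, under the reconstruction above (the exact constants on the lecture's slides may differ):

```python
import numpy as np
from scipy import stats

def optimality_test(g_hat, Z, n_sample, mu=0.05):
    """Hotelling-type test of the hypothesis grad F(x) = 0 at significance mu."""
    n = g_hat.size
    t2 = n_sample * float(g_hat @ np.linalg.solve(Z, g_hat))
    f_stat = (n_sample - n) / (n * (n_sample - 1)) * t2
    return f_stat <= stats.f.ppf(1 - mu, n, n_sample - n)

def accuracy_test(d2, n_sample, eps, beta=0.05):
    """Check that the confidence bound of the objective estimate is within eps."""
    eta = stats.norm.ppf(1 - beta / 2)
    return 2 * eta * np.sqrt(d2 / n_sample) <= eps
```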

  11. Importance sampling Let us consider an application of stochastic programming to the estimation of probabilities of rare events, i.e., events whose probability is too small to be estimated reliably by crude Monte-Carlo sampling.

  12. Importance sampling Assume that $a$ is the parameter of the sampling density that should be chosen; the choice is guided by the second moment of the importance-sampling estimator.

  13. Importance sampling Select the parameter $a$ so as to minimize the variance of the estimator.

  14. Importance sampling
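The distributions and formulas on these slides are not recoverable from the transcript; the sketch below illustrates the technique on a generic example, estimating a Gaussian tail probability with the proposal mean shifted to the rare-event threshold (a standard near-optimal choice; all values are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rare_event_is(a=4.0, n=100_000):
    """Estimate P(Z > a) for Z ~ N(0,1) by importance sampling from N(a, 1)."""
    x = rng.normal(loc=a, size=n)          # proposal shifted to the threshold
    w = np.exp(-a * x + a * a / 2)         # likelihood ratio phi(x) / phi_a(x)
    vals = (x > a) * w
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

est, se = rare_event_is()
print(est, se, stats.norm.sf(4.0))         # estimate, its std. error, exact value
```

Crude Monte-Carlo with the same budget would see only a handful of exceedances of $a = 4$ (the exact probability is about $3.2 \times 10^{-5}$), which is why the variance-minimizing choice of the sampling parameter matters.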

  15. Manpower-planning problem The employer must decide upon a base level of regular staff at various skill levels. The recourse actions available are regular-staff overtime or outside temporary help, used in order to meet an unknown demand for services at minimum cost (Ermoliev and Wets (1988)).

  16. Manpower-planning problem Notation:
  • base level of regular staff at skill level j = 1, 2, 3
  • amount of overtime help
  • amount of temporary help
  • cost of regular staff at skill level j = 1, 2, 3
  • cost of overtime, cost of temporary help
  • demand for services at period t
  • absentee rate for regular staff at time t
  • ratio of the amount of skill level j per amount of skill level j-1 required

  17. Manpower-planning problem The problem is to choose the numbers of staff at the three skill levels so as to minimize the expected costs, subject to the demands being normally distributed.
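A minimal Monte-Carlo sketch of the recourse structure, collapsed to a single period; all costs, the demand distribution, and the overtime cap are placeholder values, not the lecture's data:

```python
import numpy as np

rng = np.random.default_rng(0)

c_staff = np.array([40.0, 30.0, 20.0])  # cost of regular staff, skill levels 1..3
c_over, c_temp = 15.0, 25.0             # cost of overtime / temporary help
mu_d, sigma_d = 100.0, 15.0             # normally distributed demand

def expected_cost(x, n_samples=10_000):
    """Staffing cost plus Monte-Carlo estimate of the expected recourse cost."""
    capacity = x.sum()
    d = rng.normal(mu_d, sigma_d, size=n_samples)
    shortfall = np.maximum(d - capacity, 0.0)
    overtime = np.minimum(shortfall, 0.2 * capacity)  # overtime capped at 20%
    temp = shortfall - overtime                       # the rest goes to temp help
    return c_staff @ x + (c_over * overtime + c_temp * temp).mean()

print(expected_cost(np.array([50.0, 35.0, 25.0])))
```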

  18. Manpower-planning problem Manpower amounts and costs in USD, with a confidence interval of 100 USD.

  19. Nonlinear Stochastic Programming The constrained continuous (nonlinear) stochastic programming problem is in general: $\min_x F(x) = \mathbf{E} f(x, \omega)$, subject to $G_i(x) = \mathbf{E} g_i(x, \omega) \le 0$, $i = 1, \dots, m$.

  20. Nonlinear Stochastic Programming Let us define the Lagrange function $L(x, \lambda) = F(x) + \sum_{i=1}^{m} \lambda_i G_i(x)$.

  21. Nonlinear Stochastic Programming Procedure for stochastic optimization: the initial values $x^0$, $\lambda^0$ and the parameters of optimization (step length, initial sample size) are given, and the iterates are updated by stochastic gradient steps on the Lagrange function.
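The slide's exact update formulas are not recoverable; below is a hedged sketch of one standard primal-dual step of this kind, using Monte-Carlo estimates of the objective gradient, the constraint values, and the constraint Jacobian (all argument names are illustrative):

```python
import numpy as np

def lagrange_step(x, lam, grad_f_hat, jac_g_hat, g_hat, rho=0.1):
    """Descend in x along the Lagrangian gradient, ascend in lambda,
    projecting the multipliers back onto lambda >= 0."""
    grad_L = grad_f_hat + jac_g_hat.T @ lam  # d/dx of L(x, lambda)
    x_new = x - rho * grad_L
    lam_new = np.maximum(lam + rho * g_hat, 0.0)
    return x_new, lam_new
```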

  22. Conditions and testing of optimality

  23. Analysis of Convergence In general, the sample size is increased as a geometric progression.
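A hedged sketch of why this happens, under the sample-size rule reconstructed earlier (an assumption about the slide's missing derivation, not the lecture's exact argument): if, away from the optimum, each gradient step shrinks the normalized gradient by roughly a constant factor, then

$$ N_{t+1} \approx \frac{c}{(\nabla \tilde{F}^{t+1})^{\top} Z^{-1} \nabla \tilde{F}^{t+1}} \approx \frac{N_t}{q}, \qquad 0 < q < 1, $$

so $N_t$ grows like a geometric progression until the optimality test is triggered.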

  24. Wrap-Up and Conclusions
  • The approach presented here is grounded in a stopping procedure and a rule for adaptive regulation of the Monte-Carlo sample size that takes the statistical modeling accuracy into account.
  • Several stochastic gradient estimators were compared by computer simulation, studying the workability of the estimators for testing the optimality hypothesis by statistical criteria.
  • It was demonstrated that the minimal Monte-Carlo sample size necessary to approximate the distribution of the Hotelling statistic, computed using gradient estimators, by the Fisher distribution depends on the approximation approach and on the dimensionality of the task.

  25. Wrap-Up and Conclusions
  • The computational results show that the approach developed provides an estimator for reliably checking the optimality hypothesis over a wide range of dimensionalities of the stochastic optimization problem (2 < n < 100).
  • The proposed termination procedure allows us to test the optimality hypothesis and to reliably evaluate the confidence intervals of the objective and constraint functions.
