
2. Constrained Optimization



Presentation Transcript


  1. Graduate School of Information, Production and Systems, Waseda University 2. Constrained Optimization

  2. 2. Constrained Optimization • Unconstrained Optimization 1.1 Introduction to Unconstrained Optimization 1.2 Ackley’s Function 1.3 Genetic Approach for Ackley’s Function • Nonlinear Programming 2.1 Handling Constraints 2.2 Penalty Function 2.3 Genetic Operators 2.4 Numerical Examples • Stochastic Optimization 3.1 Stochastic Programming Model 3.2 Monte Carlo Simulation 3.3 Genetic Algorithm for Stochastic Optimization Problems WASEDA UNIVERSITY , IPS

  3. 2. Constrained Optimization • Nonlinear Goal Programming 4.1 Formulation of Nonlinear Goal Programming 4.2 Genetic Approach for Nonlinear Goal Programming • Interval Programming 5.1 Interval Programming 5.2 Interval Inequality 5.3 Order Relation of Interval 5.4 Transforming Interval Programming 5.5 Pareto Solution for Interval Programming 5.6 GA Procedure for Interval Programming

  4. 2. Constrained Optimization • Unconstrained Optimization 1.1 Introduction to Unconstrained Optimization 1.2 Ackley’s Function 1.3 Genetic Approach for Ackley’s Function • Nonlinear Programming • Stochastic Optimization • Nonlinear Goal Programming • Interval Programming

  5. 1.1 Introduction to Unconstrained Optimization • For a real function of several real variables, we want to find an argument vector which corresponds to a minimal function value. • The optimization problem: find x* = argmin_x { f(x) | xL ≤ x ≤ xR }, where the function f is called the objective function and x* is the minimizer. • In some cases we want a maximum of a function; this is easily obtained by finding a minimum of the function with the opposite sign. • Optimization models play an important role in many branches of science and engineering: • Economics • Operations Research • Network Analysis • Engineering Design • Electrical Systems

  6. 1.1 Introduction to Unconstrained Optimization Example • In this example we consider functions of one variable. • The function f(x) = (x - x*)^2 has one unique minimum x*. • Periodical optimization: the function f(x) = -2cos(x - x*) has many minima, at x = x* + 2pπ, where p is an integer. [Figure: plots of f(x) = (x - x*)^2 and f(x) = -2cos(x - x*) over -4 ≤ x ≤ 4]

  7. 1.1 Introduction to Unconstrained Optimization • Multi-modal optimization: the function f(x) = 0.015(x - x*)^2 - 2cos(x - x*) has a unique global minimum x*. Besides that, it also has several so-called local minima, such as x1 and x2, each giving the minimal function value inside a certain region. [Figure: plot of f(x) showing the global minimum x* and local minima x1 and x2]

  8. 1.1 Introduction to Unconstrained Optimization • The ideal situation for optimization computations is that the objective function has a unique minimum. We call this the global minimum. • In some cases the objective function has several (or even infinitely many) minima. In such problems it may be sufficient to find one of them. • Many objective functions from applications have a global minimum and several local minima. It is very difficult to develop methods which can find the global minimum with certainty in this situation. • The methods described here can find a local minimum of the objective function. • When a local minimum has been discovered, we do not know whether it is the global minimum or one of the local minima. • We cannot even be sure that our optimization method will find the local minimum closest to the starting point. • In order to explore several local minima we can try several runs with different starting points, or better still, examine intermediate results produced by a global optimizer.

  9. 1.1 Introduction to Unconstrained Optimization • In general, an unconstrained optimization problem can be mathematically represented as follows: min f(x) s.t. x ∈ Ω • Luenberger, D.: Linear and Nonlinear Programming, 2nd ed., Addison-Wesley, Reading, MA, 1984. where f is a real-valued function and Ω, the feasible set, is a subset of E^n. When attention is restricted to the case where Ω = E^n, it corresponds to the completely unconstrained case. • A point x* ∈ Ω is said to be a local minimum of f over Ω if there is an ε > 0 such that f(x) ≥ f(x*) for all x ∈ Ω within a distance ε of x*. • A point x* ∈ Ω is said to be a global minimum of f over Ω if f(x) ≥ f(x*) for all x ∈ Ω.

  10. 1.2 Ackley’s Function • Ackley’s function is a continuous and multimodal test function obtained by modulating an exponential function with a cosine wave of moderate amplitude. • Ackley, D.: A Connectionist Machine for Genetic Hillclimbing, Kluwer Academic Publishers, Boston, 1987. • As Ackley pointed out, this function causes moderate complications for the search: although a strictly local optimization algorithm that performs hill-climbing would surely get trapped in a local optimum, a search strategy that scans a slightly bigger neighborhood would be able to cross intervening valleys towards increasingly better optima. • Therefore, Ackley's function provides a reasonable test case for genetic search.
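As a concrete reference for the sections that follow, the two-variable form of Ackley's function (the same constants appear on a later slide) can be sketched in a few lines of Python; the function name `ackley` is ours, not from the slides:

```python
import math

def ackley(x1, x2):
    """Two-variable Ackley test function: continuous, multimodal,
    with a unique global minimum f(0, 0) = 0."""
    return (-20.0 * math.exp(-0.2 * math.sqrt(0.5 * (x1 ** 2 + x2 ** 2)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x1) + math.cos(2 * math.pi * x2)))
            + 20.0 + math.e)
```

At the origin the two exponential terms cancel the constants exactly, so ackley(0, 0) evaluates to 0; nearby lattice points such as (1, 0) are local minima with strictly larger values.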

  11. 1.3 Genetic Approach for Ackley’s Function • To minimize Ackley’s function, we simply use the following implementation of the Genetic Algorithm (GA): • Real Number Encoding • Arithmetic Crossover • Nonuniform Mutation • Top popSize Selection • Real Number Encoding: v = [x1, x2, …, xn], xi a real number, i = 1, 2, …, n • Arithmetic Crossover • The arithmetic crossover is defined as the combination of two chromosomes v1 and v2: v1' = α·v1 + (1-α)·v2, v2' = α·v2 + (1-α)·v1, where α ∈ (0, 1).
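The arithmetic crossover above is a one-liner per gene; the sketch below (the names `arithmetic_crossover` and `alpha` are ours) shows it on list-encoded chromosomes:

```python
import random

def arithmetic_crossover(v1, v2, alpha=None):
    """Arithmetic crossover: each offspring gene is a convex combination
    of the corresponding parent genes, with the same alpha in (0, 1)."""
    if alpha is None:
        alpha = random.random()
    c1 = [alpha * a + (1 - alpha) * b for a, b in zip(v1, v2)]
    c2 = [alpha * b + (1 - alpha) * a for a, b in zip(v1, v2)]
    return c1, c2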

  12. 1.3 Genetic Approach for Ackley’s Function • Nonuniform Mutation • For a given parent v, if the element xk of it is selected for mutation, the resulting offspring is v' = [x1, …, xk', …, xn], where xk' is randomly selected from two possible choices: xk' = xk + Δ(t, xkU - xk) or xk' = xk - Δ(t, xk - xkL), where xkU and xkL are the upper and lower bounds for xk. The function Δ(t, y) returns a value in the range [0, y] such that the value of Δ(t, y) approaches 0 as t increases (t is the generation number): Δ(t, y) = y·(1 - r^((1 - t/T)^b)), where r is a random number from [0, 1], T is the maximal generation number, and b is a parameter determining the degree of nonuniformity.
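The nonuniform mutation described above can be sketched as follows, assuming the Δ(t, y) = y·(1 - r^((1 - t/T)^b)) form common in the real-coded GA literature (the function names are ours):

```python
import random

def delta(t, y, T, b=2.0, rng=random):
    """Return a value in [0, y] whose expected size shrinks to 0 as t -> T."""
    r = rng.random()
    return y * (1.0 - r ** ((1.0 - t / T) ** b))

def nonuniform_mutation(v, k, t, T, lower, upper, b=2.0, rng=random):
    """Mutate gene k of chromosome v at generation t, staying within
    the per-gene bounds lower[k] and upper[k]."""
    child = list(v)
    if rng.random() < 0.5:
        child[k] = v[k] + delta(t, upper[k] - v[k], T, b, rng)
    else:
        child[k] = v[k] - delta(t, v[k] - lower[k], T, b, rng)
    return child
```

Early in the run (t small) the mutation can jump anywhere between the gene and its bound; at t = T the step size is exactly zero, so the search becomes purely local.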

  13. 1.3 Genetic Approach for Ackley’s Function • Top popSize Selection • Top popSize selection produces the next generation by selecting the best popSize chromosomes from among parents and offspring. • For this case, we can simply use the objective function values as fitness values and rank chromosomes according to these values. • Fitness Function: eval(v) = f(x) • Parameter setting of the Genetic Algorithm (GA): • Population size: popSize = 10 • Crossover probability: pC = 0.20 • Mutation probability: pM = 0.03 • Maximum generation: maxGen = 1000

  14. 1.3 Genetic Approach for Ackley’s Function • GA for Unconstrained Optimization
procedure: GA for Unconstrained Optimization (uO)
input: uO data set, GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by real number encoding;
  fitness eval(P);
  while (not termination condition) do
    crossover P(t) to yield C(t) by arithmetic crossover;
    mutation P(t) to yield C(t) by nonuniform mutation;
    fitness eval(C);
    select P(t+1) from P(t) and C(t) by top popSize selection;
    t ← t + 1;
  end
  output best solution;
end
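Putting the pieces together, a minimal self-contained version of this procedure for the two-variable Ackley function might look like the sketch below. The per-pair crossover rate and per-chromosome mutation rate are simplifying assumptions, and all names are ours:

```python
import math
import random

def ackley(x1, x2):
    """2-D Ackley function; global minimum f(0, 0) = 0."""
    return (-20.0 * math.exp(-0.2 * math.sqrt(0.5 * (x1 ** 2 + x2 ** 2)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x1) + math.cos(2 * math.pi * x2)))
            + 20.0 + math.e)

def run_ga(pop_size=10, max_gen=1000, pc=0.2, pm=0.03, lo=-5.0, hi=5.0, b=2.0, seed=1):
    """Elitist GA: arithmetic crossover, nonuniform mutation,
    top-popSize selection from parents plus offspring."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(pop_size)]
    for t in range(max_gen):
        offspring = []
        # arithmetic crossover on adjacent pairs
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                a = rng.random()
                p1, p2 = pop[i], pop[i + 1]
                offspring.append([a * u + (1 - a) * v for u, v in zip(p1, p2)])
                offspring.append([a * v + (1 - a) * u for u, v in zip(p1, p2)])
        # nonuniform mutation: step size shrinks as t approaches max_gen
        for p in pop:
            if rng.random() < pm:
                child = list(p)
                k = rng.randrange(2)
                r = rng.random()
                d = (1.0 - t / max_gen) ** b
                if rng.random() < 0.5:
                    child[k] += (hi - child[k]) * (1.0 - r ** d)
                else:
                    child[k] -= (child[k] - lo) * (1.0 - r ** d)
                offspring.append(child)
        # top popSize selection from parents and offspring
        pop = sorted(pop + offspring, key=lambda v: ackley(*v))[:pop_size]
    best = pop[0]
    return best, ackley(*best)
```

Because selection is elitist, the best objective value is non-increasing over generations; with the slide's parameters it typically settles very close to the optimum (x1*, x2*) = (0, 0).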

  15. 1.3 Genetic Approach for Ackley’s Function
f = -20 Exp[-0.2 Sqrt[0.5 (x1^2 + x2^2)]] - Exp[0.5 (Cos[2 Pi x1] + Cos[2 Pi x2])] + 20 + 2.71828;
Plot3D[f, {x1, -5, 5}, {x2, -5, 5}, AxesLabel -> {x1, x2, "f(x1,x2)"}];
[Figure: surface plot of f(x1, x2)] Optimal solution: (x1*, x2*) = (0, 0), f(x1*, x2*) = 0

  16. 1.3 Genetic Approach for Ackley’s Function • Initial Population • The initial population v is randomly created within [-5, 5]:

  17. 1.3 Genetic Approach for Ackley’s Function • The corresponding fitness function values eval(v) are calculated:

  18. 1.3 Genetic Approach for Ackley’s Function • Crossover • The sequence of random numbers for crossover was generated: • This means that the chromosomes v2, v6, v8 and v9 were selected for crossover. • The offspring generated by the arithmetical operation were as follows:

  19. 1.3 Genetic Approach for Ackley’s Function • A sequence of random numbers rk (k = 1, 2, …, 20) is generated from the range [0, 1]. The corresponding gene to be mutated is:
bitPos  chromNum  variable  randomNum
11      6         x1        0.081393
• The resulting offspring is: • The fitness value for each offspring is:

  20. 1.3 Genetic Approach for Ackley’s Function • Selection • The best 10 chromosomes among parents and offspring and the corresponding fitness values are calculated as follows:

  21. 1.3 Genetic Approach for Ackley’s Function • After the 1000th generation, we have the following chromosomes: • The fitness value:

  22. 1.3 Genetic Approach for Ackley’s Function • Evolutional Process • Simulation

  23. 2. Constrained Optimization • Unconstrained Optimization • Nonlinear Programming 2.1 Handling Constraints 2.2 Penalty Function 2.3 Genetic Operators 2.4 Numerical Examples • Stochastic Optimization • Nonlinear Goal Programming • Interval Programming

  24. 2. Nonlinear Programming • Nonlinear programming (or constrained optimization) deals with the problem of optimizing an objective function in the presence of equality and/or inequality constraints. • Many practical problems can be successfully modeled as a nonlinear program (NP). The general NP model may be written as follows: max f(x) s.t. gi(x) ≤ 0, i = 1, …, m1; hi(x) = 0, i = m1+1, …, m1+m2; x ∈ X where f is the objective function, each gi(x) ≤ 0 is an inequality constraint, each hi(x) = 0 is an equality constraint, and the set X is the domain constraint, which includes lower and upper bounds on the variables. • The nonlinear programming problem is to find a feasible point y such that f(y) ≥ f(x) for any feasible point x.

  25. 2.1 Handling Constraints • Several techniques have been proposed to handle constraints with Genetic Algorithms (GAs): • Rejecting Strategy • The rejecting strategy discards all infeasible chromosomes created throughout the evolutionary process. • Repairing Strategy • Repairing a chromosome involves taking an infeasible chromosome and generating a feasible one through some repairing procedure. • The repairing strategy depends on the existence of a deterministic repair procedure to convert an infeasible offspring into a feasible one. • Modifying Genetic Operator Strategy • One reasonable approach for dealing with the issue of feasibility is to invent problem-specific representations and specialized genetic operators to maintain the feasibility of chromosomes. • Penalty Strategy • The three strategies above have the advantage that they never generate infeasible solutions, but have the disadvantage that they consider no points outside the feasible region; the penalty strategy instead admits infeasible solutions and penalizes them in the evaluation function.

  26. 2.2 Penalty Function • Penalty Methods • Gen, M. & R. Cheng: Genetic Algorithms and Engineering Design, John Wiley & Sons, New York, 1997. • Penalty techniques transform a constrained problem into an unconstrained problem by penalizing infeasible solutions: a penalty term is added to the objective function for any violation of the constraints. • The basic idea of the penalty technique is borrowed from conventional optimization. • Genetic Algorithm: how to determine the penalty term so as to strike a proper balance between information preservation and selection pressure for the infeasible solutions, avoiding both under-penalty and over-penalty; keep some infeasible solutions in the population so as to force the genetic search toward the optimal solution from both the feasible and infeasible sides of the region. • Conventional Optimization: how to choose a proper value of the penalty term so as to get fast convergence and avoid premature termination; use the penalty method to generate a sequence of feasible points whose limit is an optimal solution to the original problem.

  27. 2.2 Penalty Function • Evaluation Function with Penalty Term • There are two possible methods to construct the evaluation function with a penalty term. • One method is the addition form: eval(x) = f(x) + p(x), where x represents a chromosome, f(x) the objective function of the problem, and p(x) the penalty term. For maximization problems, we usually require that p(x) = 0 if x is feasible and p(x) < 0 otherwise. • Let |p(x)|max and |f(x)|min be the maximum of |p(x)| and the minimum of |f(x)| among infeasible solutions in the current population, respectively. We also require that |p(x)|max ≤ |f(x)|min to avoid negative fitness values.
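As an illustration of the addition form for a maximization problem, the sketch below builds eval(x) = f(x) + p(x) with p(x) = 0 for feasible points and p(x) < 0 proportional to the total constraint violation. This is one simple way to realize the requirement, not the only one, and all names (`make_eval`, `penalty_weight`) are ours:

```python
def make_eval(f, inequality_constraints, penalty_weight=1.0):
    """Addition-form penalized evaluation for maximization.
    inequality_constraints: list of functions g, feasible when g(x) <= 0.
    Returns eval(x) = f(x) + p(x), where p(x) = -penalty_weight * total violation
    (so p(x) = 0 exactly when x is feasible)."""
    def evaluate(x):
        violation = sum(max(0.0, g(x)) for g in inequality_constraints)
        return f(x) - penalty_weight * violation
    return evaluate
```

For example, with f(x) = -(x - 1)^2 + 4 and the constraint x - 2 <= 0, feasible points keep their objective value while points beyond x = 2 are penalized linearly in the violation.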

  28. 2.2 Penalty Function • The second method is the multiplication form: eval(x) = f(x) · p(x). • In this case, for maximization problems we require that p(x) = 1 if x is feasible and 0 ≤ p(x) < 1 otherwise; for minimization problems we require that p(x) = 1 if x is feasible and 1 < p(x) otherwise. • Note that for minimization problems, the fitter chromosome has the lower value of eval(x).

  29. 2.2 Penalty Function • Several techniques for handling infeasibility have been proposed in the area of genetic algorithms. In general, we can classify them into two classes: constant penalty and variable penalty approaches. • The constant penalty approach is known to be less effective for complex problems, so most recent research puts its attention on the variable penalty. • The variable penalty approach contains two components: • a variable penalty ratio, which can be adjusted according to the degree of violation of constraints and the iteration number of the genetic algorithm; • the penalty amount for the violation of constraints. • Essentially, the penalty is a function of the distance from the feasible area. This can be given in three possible ways: • the absolute distance of a single infeasible solution; • the relative distance of all infeasible solutions in the current population; • an adaptive penalty term. • The penalty approaches can be further distinguished by problem dependence or the existence of parameters.

  30. 2.3 Genetic Operators • Real Coding Technique • Each chromosome is encoded as a vector of real numbers: x = [x1, x2, …, xn] • Such coding is known as floating point representation, real number representation, or continuous representation. • Several genetic operators have been proposed for such coding, which can be roughly classified into three classes: • Conventional Operators: for binary representation, extend the operators into the real coding case. • Arithmetic Operators: borrow the concept of linear combination of vectors from the theory of convex sets. • Direction-based Operators: introduce the approximate gradient (sub-gradient) direction or negative direction into genetic operators.

  31. 2.3 Genetic Operators • Conventional Operators • Simple Crossover: one-cut-point crossover is the basic one. [Figure: crossing point at the kth position, parents and offspring] • Davis, L.: Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991. • Generalizations of one-cut-point crossover are two-cut-point, multi-cut-point, and uniform crossover. • Spears, W. & K. De Jong: “On the virtues of parameterized uniform crossover,” Proc. of the 4th Inter. Conf. on GA, pp.230-236, 1991. • Syswerda, G.: “Uniform crossover in genetic algorithms,” Proc. of the 3rd Inter. Conf. on GA, pp.2-9, 1989.
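One-cut-point crossover carries over directly from binary strings to real-coded chromosomes: the tails after the cut position are swapped. A minimal sketch (names ours):

```python
import random

def one_point_crossover(p1, p2, cut=None, rng=random):
    """One-cut-point crossover: swap the parents' tails after position `cut`.
    When cut is None, it is drawn uniformly from 1 .. len(p1) - 1."""
    if cut is None:
        cut = rng.randrange(1, len(p1))
    c1 = p1[:cut] + p2[cut:]
    c2 = p2[:cut] + p1[cut:]
    return c1, c2
```

The same slicing idea extends to two-cut-point and multi-cut-point variants by swapping alternating segments.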

  32. 2.3 Genetic Operators • Conventional Operators • Random Crossover: • Essentially, these kinds of crossover operators create offspring randomly within a hyper-rectangle defined by the parent points. • Flat Crossover: makes an offspring by uniformly picking a value for each gene from the range formed by the values of the two corresponding parents' genes. • Blend Crossover: introduces more variance; it uniformly picks values from an interval that contains, and extends beyond, the two parents.
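Both random crossover variants can be sketched as below; blend crossover is commonly parameterized as BLX-alpha in the real-coded GA literature, where each parent interval is extended by alpha times its width on both sides (names ours):

```python
import random

def flat_crossover(p1, p2, rng=random):
    """Pick each offspring gene uniformly from the interval
    spanned by the two parents' genes."""
    return [rng.uniform(min(a, b), max(a, b)) for a, b in zip(p1, p2)]

def blend_crossover(p1, p2, alpha=0.5, rng=random):
    """BLX-alpha: sample each gene from the parents' interval extended by
    alpha times its width on both sides, introducing extra variance."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        ext = alpha * (hi - lo)
        child.append(rng.uniform(lo - ext, hi + ext))
    return child
```

With alpha = 0, blend crossover degenerates to flat crossover; larger alpha lets offspring escape the hyper-rectangle defined by the parents.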

  33. 2.3 Genetic Operators • Conventional Operators • Mutation • Mutation operators are quite different from the traditional ones: a gene, being a real number, is mutated in a context-dependent range. [Figure: mutating point at the kth position, parent and offspring] • Uniform Mutation • This one simply replaces a gene (real number) with a randomly selected real number within a specified range. • This kind of variation is called boundary mutation when the range of xk' is formed as [xkL, xkR], while it is called plain mutation when the range of xk' is formed as [xk-1, xk+1].
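Uniform mutation is straightforward to sketch. The boundary variant below follows the common definition in the real-coded GA literature, where the gene is set to one of its bounds; both function names are ours:

```python
import random

def uniform_mutation(v, k, lower, upper, rng=random):
    """Replace gene k with a random real from its full range [lower[k], upper[k]]."""
    child = list(v)
    child[k] = rng.uniform(lower[k], upper[k])
    return child

def boundary_mutation(v, k, lower, upper, rng=random):
    """Replace gene k with one of its bounds, chosen at random
    (a common definition of boundary mutation for real coding)."""
    child = list(v)
    child[k] = lower[k] if rng.random() < 0.5 else upper[k]
    return child
```

Boundary mutation is useful when optima are expected to lie on the boundary of the feasible region, as is common in constrained problems.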

  34. 2.3 Genetic Operators • Arithmetical Operators • Crossover • Suppose there are two parents x1 and x2; the offspring can be obtained by x1' = λ1x1 + λ2x2 and x2' = λ1x2 + λ2x1 with different multipliers λ1 and λ2: • Convex crossover: λ1 + λ2 = 1, λ1 > 0, λ2 > 0 • Affine crossover: λ1 + λ2 = 1 • Linear crossover: λ1 + λ2 ≤ 2, λ1 > 0, λ2 > 0 Fig. 2.1 Illustration showing the convex, affine, and linear hulls of two points in R2

  35. 2.3 Genetic Operators • Arithmetical Operators • Nonuniform Mutation (Dynamic Mutation) • For a given parent x, if the element xk of it is selected for mutation, the resulting offspring is x' = [x1 … xk' … xn], where xk' is randomly selected from two possible choices: xk' = xk + Δ(t, xkU - xk) or xk' = xk - Δ(t, xk - xkL), where xkU and xkL are the upper and lower bounds for xk. • The function Δ(t, y) returns a value in the range [0, y] such that the value of Δ(t, y) approaches 0 as t increases (t is the generation number): Δ(t, y) = y·(1 - r^((1 - t/T)^b)), where r is a random number from [0, 1], T is the maximal generation number, and b is a parameter determining the degree of nonuniformity.

  36. 2.3 Genetic Operators • Direction-based Operators • These operators use the values of the objective function in determining the direction of genetic search. The search direction d has components di = [f(x1, …, xi + Δxi, …, xn) - f(x1, …, xi, …, xn)] / Δxi • Direction-based crossover • Generate a single offspring x' from two parents x1 and x2 according to the rule x' = r·(x2 - x1) + x2, where 0 < r ≤ 1 and x2 is not worse than x1. • Directional mutation • The offspring after mutation is x' = x + r·d, where r is a random nonnegative real number.
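A sketch of both direction-based operators, assuming minimization and a forward-difference estimate of the direction d (the convention that x2 is the not-worse parent follows the rule above; all names are ours):

```python
import random

def direction_based_crossover(x1, x2, f, rng=random):
    """x' = r*(x2 - x1) + x2: step from the worse parent through and
    beyond the better one. Assumes f is minimized."""
    if f(x1) < f(x2):
        x1, x2 = x2, x1   # ensure x2 is the not-worse parent
    r = rng.random()
    return [r * (b - a) + b for a, b in zip(x1, x2)]

def directional_mutation(x, f, step=1e-6, rng=random):
    """x' = x + r*d, with d approximating the negative gradient of f
    by forward differences so that the move tends to decrease f."""
    fx = f(x)
    d = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += step
        d.append(-(f(xp) - fx) / step)  # descent direction component
    r = rng.random()
    return [xi + r * di for xi, di in zip(x, d)]
```

For a maximization problem the sign of d would be flipped, matching the slide's remark that either the approximate gradient or its negative can be used.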

  37. 2.3 Genetic Operators • GA for Nonlinear Programming Problem
procedure: GA for Nonlinear Programming (NP) Problem
input: NP data set, GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by real number encoding;
  fitness eval(P) by penalty function method;
  while (not termination condition) do
    crossover P(t) to yield C(t) by convex crossover;
    mutation P(t) to yield C(t) by nonuniform mutation;
    fitness eval(C) by penalty function method;
    select P(t+1) from P(t) and C(t) by top popSize selection;
    t ← t + 1;
  end
  output best solution;
end

  38. 2.4 Numerical Examples • Example 1: This problem was originally given by Bracken and McCormick. • Bracken, J. & G. McCormick: Selected Applications of Nonlinear Programming, John Wiley & Sons, New York, 1968. • Homaifar, Qi and Lai solved it with genetic algorithms, using the penalty function p(x) = r1g1(x) + r2g2(x), where r1 and r2 are penalty factors. • Homaifar, A., C. Qi & S. Lai: “Constrained optimization via genetic algorithms,” Simulation, Vol.62, No.4, pp.242-254, 1994.

  39. 2.4 Numerical Examples [Table 2.1: Solutions of Numerical Experimentation: reference, GRG, and GA solutions for f(x), x1, x2, and the constraint values g1(x), g2(x)] • Solutions of Numerical Experimentation • The GA's parameters are chosen as follows: • popSize = 400 • pC = 0.85 • pM = 0.02 • The results show that both constraints are satisfied by the GA solutions, where GRG stands for the generalized reduced gradient method. • Gabriele, G. & K. Ragsdell: “Large scale nonlinear programming using the generalized reduced gradient method,” ASME J. of Mechanical Design, Vol.102, pp.566-573, 1980. • Simulation

  40. 2.4 Numerical Examples • Evolutional Process [Figure: objective value z versus generation (×100)]

  41. 2.4 Numerical Examples • Example 2: This problem is an interesting nonlinear optimization problem provided by Himmelblau. • Himmelblau, D.: Applied Nonlinear Programming, McGraw-Hill, New York, 1972. • Homaifar, Qi and Lai solved it with genetic algorithms. • Homaifar, A., C. Qi & S. Lai: “Constrained optimization via genetic algorithms,” Simulation, Vol.62, No.4, pp.242-254, 1994.

  42. 2.4 Numerical Examples Table 2.2: Solutions of Numerical Experimentation
Items (bounds)           Reference    GRG         GA
f(x)                     -30665.500   -30373.950  -30876.964
x1 (78 ≤ x1 ≤ 102)       78.000       78.620      78.632
x2 (33 ≤ x2 ≤ 45)        33.000       33.440      33.677
x3 (27 ≤ x3 ≤ 45)        29.995       31.070      27.790
x4 (27 ≤ x4 ≤ 45)        45.000       44.180      43.554
x5 (27 ≤ x5 ≤ 45)        36.776       35.220      43.574
g1(x) (0 ≤ g1 ≤ 92)      90.715       90.521      91.898
g2(x) (90 ≤ g2 ≤ 110)    98.841       98.893      100.595
g3(x) (20 ≤ g3 ≤ 25)     24.073       20.047      20.132
• Solutions of Numerical Experimentation • The GA's parameters are chosen as follows: • popSize = 400 • pC = 0.8 • pM = 0.088 • The results indicate that the solution based on the local reference is slightly better than the global reference. • Simulation

  43. 2.4 Numerical Examples • Evolutional Process [Figure: objective value z versus generation; final objective f(x) = -30876.9646]

  44. 2. Constrained Optimization • Unconstrained Optimization • Nonlinear Programming • Stochastic Optimization 3.1 Stochastic Programming Model 3.2 Monte Carlo Simulation 3.3 Genetic Algorithm for Stochastic Optimization Problems • Nonlinear Goal Programming • Interval Programming

  45. 3. Stochastic Optimization • In the practical application of mathematical programming, it is difficult to determine the proper values of model parameters. • Liu, B.: Uncertainty Theory, Springer-Verlag, New York, 2004. • Some or all of them may be random variables; that is, they are often influenced by random events that are impossible to predict. • We need to formulate the problem so that the optimization directly takes the uncertainty into account. • Stochastic Programming: an approach to mathematical programming under uncertainty.

  46. 3.1 Stochastic Programming Model • Let f(x, ω) be a random function for any given x and ω, where x = [x1, x2, …, xn] is a vector of real variables and ω = [ω1, ω2, …, ωl] is a vector of random variables. Since it is meaningless to maximize a random variable, it is natural to replace it with its expected value E[f(x, ω)]. Thus the stochastic programming problem can be generally written as follows: max E[f(x, ω)] s.t. E[gi(x, ω)] ≤ 0, i = 1, 2, …, m1; E[hi(x, ω)] = 0, i = m1+1, …, m1+m2 • If the distribution and density functions of the stochastic vector ω are Φ(ω) and φ(ω), respectively, then we have E[f(x, ω)] = ∫ f(x, ω) φ(ω) dω.

  47. 3.2 Monte Carlo Simulation • Monte Carlo simulation is a scheme employing independent and identically distributed random variables to approximate the solutions of problems. • One of the standard applications is to evaluate the integral I = ∫ab g(x) dx, where g(x) is a real-valued function that is not analytically integrable. • To see how this deterministic problem can be approached by Monte Carlo simulation, let Y be the random variable (b-a)g(X), where X is a continuous random variable distributed uniformly on [a, b], denoted by U(a, b). • Then the expected value of Y is E[Y] = ∫ab (b-a)g(x) fX(x) dx = ∫ab g(x) dx = I,

  48. 3.2 Monte Carlo Simulation • where fX(x) = 1/(b-a) is the probability density function of a U(a, b) random variable. • Thus the problem of evaluating the integral has been reduced to one of estimating the expected value E(Y). • In particular, we can estimate the value by the sample mean Ȳ(n) = (1/n) Σi Yi with Yi = (b-a)g(Xi), where X1, X2, …, Xn are independent and identically distributed U(a, b) random variables. Furthermore, it can be shown that Ȳ(n) is an unbiased estimator of I.
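The sample-mean estimator above translates directly into code; the sketch below (names ours) estimates I = ∫ab g(x) dx by averaging Y = (b - a)·g(X) over uniform draws X ~ U(a, b):

```python
import random

def mc_integral(g, a, b, n=100_000, seed=0):
    """Monte Carlo estimate of the integral of g over [a, b]:
    the sample mean of Y = (b - a) * g(X) with X ~ U(a, b)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.uniform(a, b)
        total += (b - a) * g(x)
    return total / n
```

For instance, estimating ∫01 x² dx gives a value near the exact answer 1/3, with the standard error shrinking like 1/√n.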

  49. 3.2 Monte Carlo Simulation • How to evaluate a stochastic integral • For a fixed vector x, let Y be the random variable |D| f(x, ω) φ(ω), where ω is a random vector uniformly distributed over a bounded domain D ⊂ Rm and |D| is the volume of the bounded domain. E(Y) is estimated by the sample mean Ȳ(n) = (1/n) Σj |D| f(x, ωj) φ(ωj), where ω1, ω2, …, ωn are independent random vectors uniformly distributed over the bounded domain. Ȳ(n) is an unbiased estimator of the value of the integral and Var[Ȳ(n)] = Var[Y]/n.

  50. 3.2 Monte Carlo Simulation • Procedure of Monte Carlo Simulation
procedure: Monte Carlo Simulation
input: the number of simulations ns
output: the objective obj
begin
  obj ← 0;
  for j ← 1 to ns do
    ωj ← a random vector uniformly sampled from D;
    obj ← obj + f(x, ωj)·φ(ωj);
  end
  obj ← obj · |D| / ns;
  output the objective obj;
end
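The procedure translates almost line for line into code. The sketch below (all names ours) estimates E[f(x, ω)] by uniform sampling over D, weighting each sample by the density φ(ωj) and scaling by |D|/ns, exactly as in the loop above:

```python
import random

def mc_expectation(f, x, sample_omega, phi, volume, ns=50_000, seed=0):
    """Estimate E[f(x, omega)] = integral over D of f(x, omega)*phi(omega)
    by uniform sampling: obj = (|D| / ns) * sum of f(x, omega_j)*phi(omega_j).
    sample_omega(rng) draws omega uniformly from D; phi is the density of
    omega; volume is |D|."""
    rng = random.Random(seed)
    obj = 0.0
    for _ in range(ns):
        omega = sample_omega(rng)
        obj += f(x, omega) * phi(omega)
    return obj * volume / ns
```

As a check, with ω ~ U(0, 1) on D = [0, 1] (so φ = 1 and |D| = 1) and f(x, ω) = x + ω, the estimate approaches x + 0.5.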
