Stochastic Differentiation. Lecture 3.
Leonidas Sakalauskas
Institute of Mathematics and Informatics, Vilnius, Lithuania
EURO Working Group on Continuous Optimization

Content:
- Concept of stochastic gradient
- Analytical differentiation of expectation
Stochastic programming deals with objective and/or constraint functions defined as the expectation of a random function:

F(x) = E f(x, ξ) = ∫ f(x, z) p(z) dz,

where the measure of the random variable ξ is defined by the probability density function p(·).
The methods of nonlinear stochastic programming are built on the concept of the stochastic gradient. The stochastic gradient of the function

F(x) = E f(x, ξ)

is a random vector g(x, ξ) whose expectation equals the gradient of F:

E g(x, ξ) = ∇F(x).
Assume that the density of the random variable does not depend on the decision variable x. Then the gradient and the expectation can be interchanged, and the analytical stochastic gradient coincides with the gradient of the random function under the integral:

g(x, ξ) = ∇ₓ f(x, ξ),   since   ∇F(x) = ∇ₓ ∫ f(x, z) p(z) dz = ∫ ∇ₓ f(x, z) p(z) dz = E ∇ₓ f(x, ξ).
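As a toy illustration (my own example, not from the lecture), take f(x, ξ) = (x − ξ)² with ξ ~ N(0, 1), so that F(x) = x² + 1 and ∇F(x) = 2x. Averaging the analytical stochastic gradient g(x, ξ) = 2(x − ξ) over a Monte Carlo sample recovers this value:

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x, xi):
    # Analytical stochastic gradient: the gradient of the integrand
    # f(x, xi) = (x - xi)^2 with respect to x.
    return 2.0 * (x - xi)

x = 1.5
xi = rng.standard_normal(100_000)      # Monte Carlo sample of xi
g_mean = stoch_grad(x, xi).mean()      # close to grad F(x) = 2x = 3.0
```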
Let us consider the two-stage SLP:

F(x) = c·x + E [ min_y  q·y ],   subject to  W y = h − T x,  y ≥ 0,

where the vectors q, h and the matrices W, T can be random in general. The analytical stochastic gradient is defined as

g(x, ξ) = c − Tᵀ u*(x, ξ),

where u*(x, ξ) belongs to the set of solutions of the dual problem

max_u { u·(h − T x) :  Wᵀ u ≤ q }.
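A minimal sketch of this dual-based gradient for a toy two-stage problem (illustrative data of my own; it assumes SciPy's HiGHS solver reports `eqlin.marginals` as the sensitivity of the objective to the equality right-hand side, i.e., the dual solution):

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem data (hypothetical, deterministic for clarity; random
# q, h, W, T would simply be sampled before each call).
c = np.array([1.0])          # first-stage cost
q = np.array([1.0, 2.0])     # second-stage cost
W = np.array([[1.0, 1.0]])
T = np.array([[1.0]])
h = np.array([5.0])

def stoch_grad(x):
    # Solve the second-stage LP: min q.y  s.t.  W y = h - T x, y >= 0.
    res = linprog(q, A_eq=W, b_eq=h - T @ x,
                  bounds=[(0, None)] * len(q), method="highs")
    u = res.eqlin.marginals      # dual solution u*: d(objective)/d(rhs)
    return c - T.T @ u           # analytical stochastic gradient

x = np.array([1.0])
g = stoch_grad(x)
```

A quick check: with these numbers the total objective c·x + Q(x) is locally constant in x, so the gradient should vanish, matching a finite-difference derivative of the full two-stage value.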
Let us approximate the gradient of the random function by finite differences. Each i-th component of the stochastic gradient is then computed as

gᵢ(x, ξ) = ( f(x + δ eᵢ, ξ) − f(x, ξ) ) / δ,

where eᵢ is the vector whose components are zero except for the i-th one, which equals 1, and δ > 0 is some small value.
Alternatively, all components can be estimated at once by simultaneous perturbation:

g(x, ξ) = Δ · ( f(x + δ Δ, ξ) − f(x, ξ) ) / δ,

where Δ is a random vector whose components take the values 1 or −1, each with probability p = 0.5, and δ > 0 is some small value.
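Both difference schemes can be sketched as follows (a toy integrand of my own, f(x, ξ) = (x₁ − ξ)² + x₂², with true gradient ∇F(x) = (2x₁, 2x₂)); note that simultaneous perturbation needs only one extra function evaluation regardless of the dimension:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, xi):
    # Toy random function with xi ~ N(0, 1).
    return (x[0] - xi) ** 2 + x[1] ** 2

def fd_grad(x, xi, delta=1e-4):
    # Forward finite differences: one extra evaluation per coordinate.
    g = np.empty_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + delta * e, xi) - f(x, xi)) / delta
    return g

def sp_grad(x, xi, delta=1e-4):
    # Simultaneous perturbation: a single extra evaluation for all
    # coordinates; components of Delta are +/-1 with probability 0.5.
    d = rng.choice([-1.0, 1.0], size=len(x))
    return d * (f(x + delta * d, xi) - f(x, xi)) / delta

x = np.array([1.5, -0.5])
sample = rng.standard_normal(20_000)
g_fd = np.mean([fd_grad(x, s) for s in sample], axis=0)
g_sp = np.mean([sp_grad(x, s) for s in sample], axis=0)
# Both averages approach grad F(x) = (3.0, -1.0); the simultaneous
# perturbation estimate has a larger variance.
```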
Let us consider an integral over a set given by an inclusion:

F(x) = ∫_{ z : f(x, z) ≤ 0 } p(x, z) dz.

The gradient of this function can again be written in integral form, with an integrand expressed through the derivatives of p and f (see Uryasev (1994), (2002)).
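A minimal one-dimensional sketch (my own toy case, not Uryasev's general formula): for the inclusion set {z : z − x ≤ 0} and ξ ~ N(0, 1), the probability function is P(x) = Φ(x), and its derivative is expressed through the density, dP/dx = p(x):

```python
from scipy.stats import norm

# Probability function over the set {z : f(x, z) <= 0} with f(x, z) = z - x:
# P(x) = Pr{xi <= x} for xi ~ N(0, 1), so dP/dx equals the density p(x).
x = 0.7
dP_exact = norm.pdf(x)

# Verify against a central finite difference of the probability itself.
eps = 1e-5
dP_fd = (norm.cdf(x + eps) - norm.cdf(x - eps)) / (2 * eps)
```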
We assume here that a Monte Carlo sample of a certain size N is provided for any x:

Y = ( y¹, y², …, yᴺ ),

where y¹, …, yᴺ are independent random copies of ξ, i.e., distributed according to the density p(·). Then the sampling estimator of the objective function

F̃(x) = (1/N) · Σⱼ f(x, yʲ)

and the sampling variance

D̃²(x) = (1/(N−1)) · Σⱼ ( f(x, yʲ) − F̃(x) )²

are computed.
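These two estimators can be sketched directly (same toy integrand f(x, ξ) = (x − ξ)² as above, assumed for illustration; the true value is F(x) = x² + 1):

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x, xi):
    # Toy integrand with xi ~ N(0, 1).
    return (x - xi) ** 2

x, N = 1.5, 10_000
y = rng.standard_normal(N)        # Monte Carlo sample y^1, ..., y^N
vals = f(x, y)
F_hat = vals.mean()               # sampling estimator of F(x)
D2_hat = vals.var(ddof=1)         # sampling variance, 1/(N-1) normalisation
se = np.sqrt(D2_hat / N)          # standard error of F_hat
```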
The gradient is evaluated using the same random sample:

G̃(x) = (1/N) · Σⱼ g(x, yʲ).

The sampling covariance matrix

Z(x) = (1/(N−1)) · Σⱼ ( g(x, yʲ) − G̃(x) ) · ( g(x, yʲ) − G̃(x) )ᵀ

is applied later on for normalising the gradient estimator. Say, the Hotelling T² statistic can be used for testing the hypothesis that the gradient is zero:

T² = N · G̃(x)ᵀ · Z(x)⁻¹ · G̃(x);

under H₀: ∇F(x) = 0, the quantity (N − n) · T² / (n · (N − 1)) follows the Fisher distribution F(n, N − n), where n is the dimension of x.
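A hedged sketch of the Hotelling test on a gradient sample (synthetic zero-mean data of my own, mimicking the situation at a stationary point, where H₀ should typically not be rejected):

```python
import numpy as np
from scipy.stats import f as fisher_f

rng = np.random.default_rng(3)

# Sample of stochastic gradients g(x, y^j): N observations in n dimensions.
N, n = 200, 2
G = rng.standard_normal((N, n))

G_bar = G.mean(axis=0)                        # gradient estimator
Z = np.cov(G, rowvar=False, ddof=1)           # sampling covariance matrix
T2 = N * G_bar @ np.linalg.solve(Z, G_bar)    # Hotelling T^2 statistic

# Under H0: grad F(x) = 0, (N - n) T^2 / (n (N - 1)) ~ F(n, N - n).
stat = (N - n) * T2 / (n * (N - 1))
p_value = 1.0 - fisher_f.cdf(stat, n, N - n)  # reject H0 if p_value is small
```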