Engineering Optimization
Chapter 3: Functions of Several Variables (Part 1)
Presented by: Rajesh Roy, Networks Research Lab, University of California, Davis
June 18, 2010

Introduction
We seek to minimize a function f(x), where x is a vector of design variables of dimension N and there are no constraints on x.
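In symbols, the unconstrained problem is

\min_{x \in \mathbb{R}^N} f(x)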
Test candidate points to see whether they are (or are not) minima, maxima, saddle points, or none of the above.
Given x^{(0)}, a point that does not satisfy the above-mentioned optimality criteria, what is a better estimate x^{(1)} of the solution x^*?
The nonlinear objective f will typically not be convex and may therefore be multimodal (possess multiple local optima).
We examine optimality criteria for basically two reasons:
(1) because they are necessary to recognize solutions
(2) because they provide motivation for most of the useful methods
Consider the Taylor expansion of a function of several variables about a point \bar{x}:

f(x) = f(\bar{x}) + \nabla f(\bar{x})^T \Delta x + \frac{1}{2} \Delta x^T \nabla^2 f(\bar{x}) \Delta x + O(\|\Delta x\|^3), \quad \Delta x = x - \bar{x}

At a stationary point x^* the gradient vanishes, \nabla f(x^*) = 0; x^* is then a minimum, a maximum, or a saddle point according to whether the Hessian \nabla^2 f(x^*) is positive definite, negative definite, or indefinite.
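As an illustration, here is a minimal sketch (assuming NumPy; the tolerance tol is an arbitrary choice) that classifies a candidate point from the eigenvalues of its Hessian:

import numpy as np

def classify(hessian, tol=1e-10):
    # Classify a stationary point by the eigenvalues of its (symmetric) Hessian.
    eig = np.linalg.eigvalsh(hessian)
    if np.all(eig > tol):
        return "minimum"        # positive definite
    if np.all(eig < -tol):
        return "maximum"        # negative definite
    if np.any(eig > tol) and np.any(eig < -tol):
        return "saddle point"   # indefinite
    return "inconclusive"       # semidefinite: higher-order terms decide

# Example: f(x, y) = x^2 - y^2 has a saddle point at the origin.
print(classify(np.array([[2.0, 0.0], [0.0, -2.0]])))   # -> saddle point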
The methods can be classified into three broad categories, according to the information they require: (1) direct-search methods, which use function values only; (2) gradient methods, which also require first derivatives; and (3) second-order (Newton-type) methods, which require second derivatives as well.
Motivation behind different methods:
Step 1. Set up a regular simplex* in the space of the independent variables and evaluate the function at each vertex.
Step 2. Locate the vertex with the highest function value.
Step 3. Reflect this ‘‘worst’’ vertex through the centroid of the remaining vertices to generate a new point, which is used to complete the next simplex.
Step 4. Return to Step 2 as long as the performance index decreases smoothly.
Suppose x^{(j)} is the point to be reflected. Then the centroid of the remaining N points is

x_c = \frac{1}{N} \sum_{i=0,\ i \neq j}^{N} x^{(i)}

All points on the line from x^{(j)} through x_c are given by

x = x^{(j)} + \lambda (x_c - x^{(j)})

New vertex point (the reflection obtained with \lambda = 2):

x_{new}^{(j)} = 2 x_c - x^{(j)}
*In N dimensions, a regular simplex is a polyhedron composed of N+1 mutually equidistant points, which form its vertices.
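A minimal sketch of one reflection step under these formulas, assuming NumPy and a placeholder objective f:

import numpy as np

def reflect_worst(simplex, f):
    # simplex: array of shape (N+1, N), one vertex per row.
    values = np.array([f(v) for v in simplex])
    j = int(np.argmax(values))                     # index of the worst vertex
    others = np.delete(simplex, j, axis=0)
    centroid = others.mean(axis=0)                 # centroid of the remaining N points
    new_simplex = simplex.copy()
    new_simplex[j] = 2.0 * centroid - simplex[j]   # reflection (lambda = 2)
    return new_simplex

# Example with f(x) = ||x||^2 and a 2-D starting simplex (hypothetical values).
f = lambda x: float(np.dot(x, x))
s = np.array([[1.0, 1.0], [1.5, 1.0], [1.0, 1.5]])
for _ in range(10):
    s = reflect_worst(s, f)
print(min(s, key=f))   # best vertex approaches the minimum at the origin

Practical versions add rules to avoid cycling (for example, reflecting the second-worst vertex when the newest vertex is again the worst) and shrink the simplex as the optimum is approached.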
Exploratory move: given a specified step size, the exploration proceeds from an initial base point by taking a trial step of that size in each coordinate direction in turn.
If the function value does not increase, the step is considered successful.
Otherwise, the step is retracted and replaced by a step in the opposite direction, which in turn is retained or rejected depending upon whether it succeeds or fails.
Pattern move: a single step from the present base point along the line from the previous base point to the current base point.
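A minimal sketch of this exploratory/pattern logic, assuming NumPy; the objective, step sizes, and tolerance below are illustrative choices:

import numpy as np

def explore(f, base, step):
    # Exploratory move: try +step, then -step, along each coordinate,
    # keeping any trial that does not increase the function value.
    x = base.copy()
    fx = f(x)
    for i in range(len(x)):
        for delta in (step, -step):
            trial = x.copy()
            trial[i] += delta
            ft = f(trial)
            if ft <= fx:
                x, fx = trial, ft
                break
    return x

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    base = np.asarray(x0, dtype=float)
    while step > tol:
        x = explore(f, base, step)
        if f(x) < f(base):
            while f(x) < f(base):
                # Pattern move: step from the new base along the line from
                # the previous base to the current base, then explore again.
                pattern = x + (x - base)
                base = x
                x = explore(f, pattern, step)
        else:
            step *= shrink      # exploration failed: reduce the step size
    return base

# Example on a hypothetical quadratic objective.
f = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2
print(hooke_jeeves(f, [0.0, 0.0]))   # approaches [1.0, -0.5]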
If a quadratic function in N variables can be transformed so that it is just the sum of perfect squares, then the optimum can be found after exactly N single-variable searches, one with respect to each of the transformed variables.
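For instance (a hypothetical two-variable case), completing the square decouples the variables:

q(x_1, x_2) = x_1^2 + x_1 x_2 + x_2^2 = \left(x_1 + \tfrac{1}{2} x_2\right)^2 + \tfrac{3}{4} x_2^2

With the transformed variables y_1 = x_1 + \tfrac{1}{2} x_2 and y_2 = x_2, the function is a sum of perfect squares, so one single-variable search in each of y_1 and y_2 locates the optimum x^* = (0, 0).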
All of the methods considered here employ a similar iteration procedure:

x^{(k+1)} = x^{(k)} + \alpha^{(k)} s(x^{(k)})

where x^{(k)} is the current estimate of x^*, \alpha^{(k)} is the step-length parameter, and s(x^{(k)}) is the search direction.
Taylor expansion of the objective about x^{(k)}, to first order:

f(x) \approx f(x^{(k)}) + \nabla f(x^{(k)})^T \Delta x, \quad \Delta x = x - x^{(k)}

To reduce f as quickly as possible we want the scalar product \nabla f(x^{(k)})^T \Delta x to be as negative as possible; for a fixed step length, the greatest negative scalar product results from the choice

s(x^{(k)}) = -\nabla f(x^{(k)})

This is the motivation for the simple gradient (steepest-descent) method:

x^{(k+1)} = x^{(k)} - \alpha^{(k)} \nabla f(x^{(k)})
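A minimal sketch of this update with a fixed step length (NumPy assumed; the objective and alpha are illustrative):

import numpy as np

def gradient_descent(grad, x0, alpha=0.1, iters=200):
    # Simple gradient method: x_{k+1} = x_k - alpha * grad(x_k).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - alpha * grad(x)
    return x

# Example: f(x, y) = (x - 1)^2 + 2*(y + 0.5)^2, so grad f = [2(x - 1), 4(y + 0.5)].
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])
print(gradient_descent(grad, [0.0, 0.0]))   # approaches [1.0, -0.5]

In practice the step length \alpha^{(k)} is usually chosen by a line search at each iteration rather than held fixed.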
Consider again the Taylor expansion of the objective, now retaining the second-order term:

f(x) \approx f(x^{(k)}) + \nabla f(x^{(k)})^T \Delta x + \frac{1}{2} \Delta x^T \nabla^2 f(x^{(k)}) \Delta x

We form a quadratic approximation to f(x) by dropping terms of order three and higher, and we force x^{(k+1)}, the next point in the sequence, to be a point where the gradient of the approximation is zero. Therefore,

\nabla f(x^{(k)}) + \nabla^2 f(x^{(k)}) \Delta x = 0

So, according to Newton's optimization method:

x^{(k+1)} = x^{(k)} - \left[\nabla^2 f(x^{(k)})\right]^{-1} \nabla f(x^{(k)})
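A minimal sketch of the Newton step (NumPy assumed; the quadratic objective below is a hypothetical example):

import numpy as np

def newton_step(grad, hess, x):
    # Solve  H(x) dx = -grad(x)  instead of forming the inverse explicitly.
    dx = np.linalg.solve(hess(x), -grad(x))
    return x + dx

# Example: the quadratic f(x, y) = (x - 1)^2 + 2*(y + 0.5)^2.
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])

print(newton_step(grad, hess, np.array([5.0, 5.0])))   # -> [1.0, -0.5]

Because the quadratic approximation is exact for a quadratic objective, a single Newton step lands on the minimum; for a general f the step is iterated, and it is a descent direction only where the Hessian is positive definite.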