
Some Zero-th Order / Direct Search Algorithms


Presentation Transcript


  1. Some Zero-th Order / Direct Search Algorithms

  2. Coordinate Descent Methods (see section 7.2)
  Each coordinate axis is searched in turn, and a descent step is made only along a unit (coordinate) vector. The Cyclic Coordinate Descent method minimizes a function f(x1, x2, ..., xn) cyclically with respect to the coordinate variables: first x1 is searched, then x2, and so on. Various variations are possible. Can you think of some?
  Plus: generally attractive because of their easy implementation.
  Minus: their convergence properties are generally poorer than those of steepest descent.
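As an illustration of the idea on this slide, here is a minimal Python sketch of cyclic coordinate descent. It is not taken from the presentation: the golden-section line search, the bracket width, and the test function are assumptions made for the example.

    import numpy as np

    def golden_section(phi, a, b, tol=1e-6):
        # Minimize a one-dimensional function phi on [a, b] by golden-section search.
        gr = (np.sqrt(5) - 1) / 2
        c, d = b - gr * (b - a), a + gr * (b - a)
        while abs(b - a) > tol:
            if phi(c) < phi(d):
                b, d = d, c
                c = b - gr * (b - a)
            else:
                a, c = c, d
                d = a + gr * (b - a)
        return (a + b) / 2

    def cyclic_coordinate_descent(f, x0, step=1.0, tol=1e-6, max_cycles=100):
        # Minimize f along one coordinate axis at a time, cycling x1, x2, ..., xn.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_cycles):
            x_old = x.copy()
            for i in range(len(x)):
                e = np.zeros_like(x)
                e[i] = 1.0                        # unit vector along coordinate i
                phi = lambda t: f(x + t * e)      # one-dimensional slice of f along axis i
                x = x + golden_section(phi, -step, step) * e
            if np.linalg.norm(x - x_old) < tol:   # no progress over a full cycle
                break
        return x

    # Example: a simple quadratic with minimum at (1, -0.5)
    f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
    print(cyclic_coordinate_descent(f, [0.0, 0.0]))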

  3. Hooke and Jeeves Pattern Search (see section 7.3)
  1. Define: a starting point x0, increments Di for all variables (i = 1, ..., n), a step reduction factor a, and a termination parameter e.
  2. Perform an exploratory search. If the exploratory move is successful, go to 5; if not, continue.
  3. Check for termination: is ||Di|| < e? Yes: stop; the current point is x*. No: continue.
  4. Reduce the increments, Di = Di / a for i = 1, ..., n, and go to 2.
  5. Perform the pattern move: xp(k+1) = xk + (xk - xk-1).
  6. Perform an exploratory move with xp as the base point; let the result be xk+1. Is f(xk+1) < f(xk)? Yes: set xk-1 = xk and xk = xk+1, and go to 5. No: go to 4.
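The following Python sketch shows one way the steps above could be coded. It is only illustrative: the exploratory-move details, the parameter defaults, and the test function are assumptions rather than part of the slides.

    import numpy as np

    def exploratory_move(f, base, delta):
        # Probe +/- delta[i] along each coordinate, keeping any move that lowers f.
        x, fx = base.copy(), f(base)
        for i in range(len(x)):
            for trial in (x[i] + delta[i], x[i] - delta[i]):
                x_try = x.copy()
                x_try[i] = trial
                f_try = f(x_try)
                if f_try < fx:
                    x, fx = x_try, f_try
                    break
        return x, fx

    def hooke_jeeves(f, x0, delta0=0.5, alpha=2.0, eps=1e-6):
        x_curr = np.asarray(x0, dtype=float)                    # step 1: define x0, Di, a, e
        delta = np.full_like(x_curr, delta0)
        while True:
            x_new, f_new = exploratory_move(f, x_curr, delta)   # step 2: exploratory search
            if f_new < f(x_curr):                               # exploratory move succeeded
                x_prev, x_curr = x_curr, x_new
                while True:
                    x_p = x_curr + (x_curr - x_prev)            # step 5: pattern move
                    x_new, f_new = exploratory_move(f, x_p, delta)   # step 6: explore around xp
                    if f_new < f(x_curr):
                        x_prev, x_curr = x_curr, x_new
                    else:
                        break
            if np.linalg.norm(delta) < eps:                     # step 3: termination test
                return x_curr
            delta = delta / alpha                               # step 4: reduce increments

    # Example run on a quadratic with minimum at (3, -1)
    f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
    print(hooke_jeeves(f, [0.0, 0.0]))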

  4. Another Pattern Search Algorithm

  5. Pattern Search Graphically

  6. Pattern Search
  Advantages: simple; relatively robust; no gradients necessary.
  Disadvantages: can get stuck, and special "tricks" are needed to get the search going again; may require many function evaluations.
  Nonlinear, multi-objective implementations using pattern search algorithms have been made.

  7. Exploratory Search
  [Figure: (a) points evaluated and (b) contours of the function generated, plotted with LBP (meters), 116 to 134, on the horizontal axis and BEAM (meters), 10 to 18, on the vertical axis.]

  8. Exploratory Search Algorithms
  Two options currently exist to identify the vector of design variables, x.
  1) Randomly select n points in the bounded space. A uniform distribution is assumed for the variables between the defined bounds (an enhancement could be to allow alternative distributions). Monte Carlo methods are based on this approach; a sketch follows this slide.
  2) Use a systematic search algorithm such as that of Aird and Rice (1977). While they claim that their systematic algorithm searches a region (and provides starting points) better than a random algorithm, their method reuses a small number of variable values on each axis. This becomes a problem when the number of variables is large and one attempts to "visualize" the effect of changes: for a problem in 15 variables, 2^15 = 32,768 candidate designs would have to be evaluated just to ensure two sample values on each axis.
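Here is a small Python sketch of option 1, sampling each design variable uniformly between its bounds. The bounds (taken from the LBP/BEAM axes of the earlier figure) and the sample size are illustrative assumptions, not values from the slides.

    import numpy as np

    def random_starting_points(lower, upper, n_points, seed=0):
        # Draw n_points candidate design vectors, each variable uniform between its bounds.
        rng = np.random.default_rng(seed)
        lower = np.asarray(lower, dtype=float)
        upper = np.asarray(upper, dtype=float)
        return rng.uniform(lower, upper, size=(n_points, len(lower)))

    # Example: LBP in [116, 134] m and BEAM in [10, 18] m
    points = random_starting_points([116.0, 10.0], [134.0, 18.0], n_points=50)
    print(points[:3])

For a problem in 15 variables, a modest number of such random samples is far cheaper than the 2^15 = 32,768 evaluations a two-level systematic grid would require, although random sampling gives no guarantee of even coverage along each axis.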
