Introduction to Non-Linear Optimization

  1. Introduction to Non-Linear Optimization

  2. Nonlinear Optimization. Note: Unlike for linear problems, a global optimum for a nonlinear problem cannot be guaranteed, except in special cases, e.g., if you know the space is unimodal or convex, or that monotonicity holds. Two standard heuristics that most people use: 1) find local extrema starting from widely varying starting points of the variables, and then pick the most extreme of these extrema (if they are not all the same); 2) perturb a local extremum by taking a finite-amplitude step away from it, and then see whether your routine returns you to a better point or "always" to the same one. Question: How would you "automate" a search for a global extremum?
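
  A minimal sketch of heuristic (1) in Python, assuming SciPy is available; scipy.optimize.rosen (the built-in Rosenbrock function) stands in for an arbitrary nonlinear objective, and the number of starts and their range are arbitrary choices:

      # Heuristic (1): run a local minimizer from widely varying starting
      # points and keep the best of the local minima found.
      import numpy as np
      from scipy.optimize import minimize, rosen

      rng = np.random.default_rng(0)
      results = []
      for _ in range(20):                          # widely varying starting points
          x0 = rng.uniform(-5.0, 5.0, size=2)
          results.append(minimize(rosen, x0, method="BFGS"))

      best = min(results, key=lambda r: r.fun)     # most extreme of the extrema
      print(best.x, best.fun)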

  3. Basic Steps in Nonlinear Optimization. In its simplest form, a numerical search procedure consists of four steps when applied to unconstrained minimization problems: 1) selection of an initial design in the n-dimensional space, where n is the number of design variables; 2) a procedure for the evaluation of the (objective) function at a given point in the design space; 3) comparison of the current design with all of the preceding designs; 4) a rational way to select a new design and repeat the process. Constrained minimization requires a step for the evaluation of the constraints as well; the same applies when evaluating multiple objective functions.
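
  A minimal sketch of the four steps, assuming an unconstrained problem; the "rational way to select a new design" is deliberately naive here (a random perturbation of the best design so far), which real algorithms replace with a principled search direction:

      import numpy as np

      def objective(x):                            # step 2: evaluate f at a point
          return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

      rng = np.random.default_rng(1)
      best_x = rng.uniform(-5.0, 5.0, size=2)      # step 1: initial design, n = 2
      best_f = objective(best_x)

      for _ in range(1000):
          trial = best_x + rng.normal(scale=0.1, size=2)   # step 4: new design
          trial_f = objective(trial)
          if trial_f < best_f:                     # step 3: compare designs
              best_x, best_f = trial, trial_f

      print(best_x, best_f)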

  4. Nonlinear Optimization Process. Simplistically, most design tasks seek to find a perturbation to an existing design which will lead to an improvement. Thus, we seek a new design which is the old design plus a change: X_new = X_old + dX. Optimization algorithms use the same formula, but apply a two-step process: X^k = X^(k-1) + a * S^k. You (the engineer) have to provide an initial design X^0. The optimization will then determine a search direction S^k that will improve the design. The next question is how far we can move in direction S^k before we must find a new search direction. This is a one-dimensional search, since we only have to determine the value of the scalar a that improves the design as much as possible.
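
  A sketch of a single update X^k = X^(k-1) + a * S^k, assuming the search direction is the negative gradient of a simple quadratic and the scalar a is found by a crude one-dimensional scan (a real line search would be smarter):

      import numpy as np

      def f(x):
          return x[0]**2 + 10.0 * x[1]**2

      def grad(x):
          return np.array([2.0 * x[0], 20.0 * x[1]])

      x_old = np.array([3.0, 1.0])                 # X^0, supplied by the engineer
      s = -grad(x_old)                             # search direction S^1

      # one-dimensional search: choose the a that improves the design most
      alphas = np.linspace(0.0, 0.2, 201)
      a_best = min(alphas, key=lambda a: f(x_old + a * s))

      x_new = x_old + a_best * s                   # X^1 = X^0 + a * S^1
      print(a_best, x_new, f(x_new))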

  5. A Good Algorithm. A good algorithm is (among other things):
     Robust – the algorithm must be reliable for general design applications and thus must, in theory, converge to the solution point from any given starting point.
     General – it should not impose restrictions on the model's constraints and objective functions.
     Accurate – the ability to converge to the precise mathematical optimum is important, though it may not be required in practice.
     Easy to use – by both experienced and inexperienced users; it should not have problem-dependent tuning parameters.
     Efficient – the number of repeated analyses should be kept to a minimum. Hence, an efficient algorithm has 1) a fast rate of convergence, requiring few iterations, and 2) the least number of calculations within one (design) iteration.
     Note: tradeoffs have to be made.

  6. Zero- and First-Order Algorithms. You often must choose between algorithms that need only evaluations of the objective function and methods that also require the derivatives of that function. Algorithms using derivatives are generally more powerful, but the extra power does not always compensate for the additional cost of calculating the derivatives. Note that you may not be able to compute the derivatives at all.
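
  A sketch contrasting the two classes with SciPy, assuming its built-in Rosenbrock function rosen and analytic gradient rosen_der; Nelder-Mead is a zero-order method (function values only), BFGS a first-order method (uses the supplied derivatives):

      import numpy as np
      from scipy.optimize import minimize, rosen, rosen_der

      x0 = np.array([-1.2, 1.0])

      r0 = minimize(rosen, x0, method="Nelder-Mead")          # no derivatives needed
      r1 = minimize(rosen, x0, jac=rosen_der, method="BFGS")  # derivatives supplied

      print("Nelder-Mead:", r0.nfev, "function evaluations")
      print("BFGS:       ", r1.nfev, "function evaluations")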

  7. Basic Descent Methods. Basic descent methods are the fundamental techniques for iteratively solving unconstrained minimization problems. They are important in practice because they offer the simplest and most direct alternatives for obtaining solutions. They also serve as a good benchmark.

  8. General Basic Descent Method Algorithm. Basic steps: start at an initial point; determine, according to a fixed rule, a direction of movement; and move in that direction to a (relative) minimum of the objective function on that line. At the new point, a new direction is determined and the same process is repeated. The primary difference between algorithms (steepest descent, Newton's method, etc.) is the rule by which successive directions of movement are selected. The process of determining the minimum point on a line is called line search.
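
  A minimal steepest-descent loop with a backtracking (Armijo) line search, assuming a smooth objective; only the line choosing the direction d distinguishes it from other basic descent methods:

      import numpy as np

      def f(x):
          return x[0]**2 + 5.0 * x[1]**2

      def grad(x):
          return np.array([2.0 * x[0], 10.0 * x[1]])

      x = np.array([4.0, -2.0])                    # initial point
      for _ in range(100):
          d = -grad(x)                             # fixed rule: steepest descent
          if np.linalg.norm(d) < 1e-8:             # converged
              break
          a = 1.0
          # line search: shrink a until the Armijo sufficient-decrease test holds
          while f(x + a * d) > f(x) - 1e-4 * a * np.dot(d, d):
              a *= 0.5
          x = x + a * d                            # move to a (relative) minimum
      print(x, f(x))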
