
CVEN 5393 Apr 11, 2011



Presentation Transcript


  1. Water Resources Development and Management Optimization (Nonlinear Programming & Time Series Simulation) CVEN 5393 Apr 11, 2011

  2. Acknowledgements • Dr. Yicheng Wang (Visiting Researcher, CADSWES during Fall 2009 – early Spring 2010) for slides from his Optimization course during Fall 2009 • Introduction to Operations Research by Hillier and Lieberman, McGraw Hill

  3. Today’s Lecture • Nonlinear Programming • Time Series Simulation • R-resources / demonstration

  4. Nonlinear Programming

  5. IV. Nonlinear Programming: a nonlinear programming example. [Example shown on slide; its visible constraints are x1 + x2 + x3 ≤ Q and x1, x2, x3 ≥ 0.]

  6. In one general form, the nonlinear programming problem is to find x = (x1, x2, …, xn) so as to maximize f(x), subject to g_i(x) ≤ b_i for i = 1, 2, …, m, and x ≥ 0. There are many types of nonlinear programming problems, and different algorithms have been developed for different types.
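To make this general form concrete, here is a minimal sketch of solving such a problem with SciPy's SLSQP solver; the objective, constraint, and starting point are illustrative placeholders, not the slides' example.

```python
# Minimal sketch (illustrative data, not from the slides) of solving a
# problem in the general form: maximize f(x) s.t. g(x) <= b, x >= 0.
# SciPy minimizes, so the concave objective is negated.
import numpy as np
from scipy.optimize import minimize

def neg_f(x):
    return -(3*x[0] - x[0]**2 + 2*x[1] - x[1]**2)   # -f(x), f concave

# one nonlinear constraint, x1^2 + x2^2 <= 4, written as b - g(x) >= 0
cons = [{"type": "ineq", "fun": lambda x: 4.0 - (x[0]**2 + x[1]**2)}]
bnds = [(0, None), (0, None)]                       # x >= 0

res = minimize(neg_f, x0=np.array([0.5, 0.5]),
               bounds=bnds, constraints=cons, method="SLSQP")
print(res.x, -res.fun)   # maximizer (about [1.5, 1.0]) and maximum (3.25)
```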

  7. 1. Graphical Illustration of Nonlinear Programming [Figure: graphical solution of a nonlinear programming problem]

  8. [Figure only; no recoverable text]

  9. [Figure: f(x) with local optima labeled A through F] A local maximum need not be a global maximum. Nonlinear programming algorithms generally cannot distinguish between a local optimal solution and a global optimal solution.

  10. 2. Convexity. Convex or concave functions of a single variable [Figure: examples of convex and concave curves]

  11. To be more precise, if f(x) possesses a second derivative everywhere, then f(x) is convex if and only if d²f/dx² ≥ 0 for all possible values of x. In terms of the second derivative, the convexity test is: f(x) is convex if d²f/dx² ≥ 0 for all x, strictly convex if d²f/dx² > 0, concave if d²f/dx² ≤ 0, and strictly concave if d²f/dx² < 0. The geometric interpretation is that f(x) is convex if it "bends upward".
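As a quick illustration of this test, the following sketch uses SymPy to ask whether the second derivative can ever be negative; the three functions are illustrative choices, not from the slides.

```python
# Sketch of the second-derivative convexity test with SymPy; the three
# functions are illustrative choices, not taken from the slides.
import sympy as sp

x = sp.Symbol("x", real=True)
examples = {"x**4 + x**2": x**4 + x**2,   # convex everywhere
            "exp(x)":      sp.exp(x),     # convex everywhere
            "x**3":        x**3}          # neither convex nor concave on R
for name, f in examples.items():
    d2 = sp.diff(f, x, 2)
    # f is convex on R iff d2f/dx2 >= 0 everywhere, i.e. the set where
    # d2f/dx2 < 0 is empty
    where_negative = sp.solve_univariate_inequality(d2 < 0, x,
                                                    relational=False)
    print(f"{name}: f'' = {d2}, convex everywhere: {where_negative.is_empty}")
```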

  12. [Figures: a strictly convex function, a convex function, a strictly concave function, a concave function]

  13. [Figures: a function that is neither convex nor concave; a function that is both convex and concave (a linear function)]

  14. Convex or concave functions of several variables

  15. Two important properties of convex or concave functions: (1) If f(x1, x2, …, xn) is a convex function, then g(x1, x2, …, xn) = -f(x1, x2, …, xn) is a concave function, and vice versa. (2) The sum of convex functions is a convex function, and the sum of concave functions is a concave function.
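A small numeric spot-check of property (2), using illustrative functions:

```python
# Quick numeric spot-check (illustrative) of property (2): a sum of convex
# functions is convex. Convexity is probed via the midpoint inequality
#   h((a+b)/2) <= (h(a) + h(b)) / 2   on randomly sampled pairs (a, b).
import random

f = lambda t: t**2           # convex
g = lambda t: abs(t)         # convex
h = lambda t: f(t) + g(t)    # their sum should also be convex

pairs = ((random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(10000))
print(all(h((a + b) / 2) <= (h(a) + h(b)) / 2 + 1e-12 for a, b in pairs))
# -> True
```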

  16. Convex Sets. A set is convex if, for any two points in the set, the entire line segment joining them also lies in the set.

  17. Condition for a local optimal solution to be a global optimal solution. (1) Nonlinear programming problems without constraints: if the problem has no constraints, the objective function being concave guarantees that a local maximum is a global maximum (similarly, the objective function being convex ensures that a local minimum is a global minimum). (2) Nonlinear programming problems with constraints: if the problem has constraints, the objective function being concave and the feasible region being a convex set together guarantee that a local maximum is a global maximum (similarly, the objective function being convex and the feasible region being a convex set together ensure that a local minimum is a global minimum). For any linear programming problem, the linear objective function is both convex and concave and the feasible region is a convex set, so its optimal solution is certainly a global optimal solution.

  18. [Figure: f(x) with stationary points labeled A through E]

  19. 3. Classical Optimization Methods. Unconstrained optimization of a function of a single variable: the necessary condition for x = x* to be optimal is df/dx = 0 at x = x*. If f(x) is strictly convex in the vicinity of x*, this is also the sufficient condition for x = x* to be a local minimum. If f(x) is strictly convex everywhere, there is only one local minimum, which is the global minimum. [Figure: f(x) with stationary points A through E; global minimum at point A, global maximum at point E]
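A short sketch of the classical method, applied to an illustrative cubic rather than the slide's curve: solve df/dx = 0 and classify each stationary point by the sign of the second derivative.

```python
# Sketch of the classical method for one variable: solve df/dx = 0, then
# classify each stationary point by the sign of d2f/dx2. The cubic below
# is an illustrative stand-in for the curve on the slide.
import sympy as sp

x = sp.Symbol("x", real=True)
f = x**3 - 6*x**2 + 9*x + 1              # one local max, one local min

for xs in sp.solve(sp.diff(f, x), x):    # stationary points: df/dx = 0
    curvature = sp.diff(f, x, 2).subs(x, xs)
    kind = "local min" if curvature > 0 else "local max"
    print(f"x = {xs}: f = {f.subs(x, xs)}, f'' = {curvature} -> {kind}")
```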

  20. Unconstrained optimization of a function of several variables: the necessary condition for x = x* to be optimal is that all the partial derivatives vanish there, ∂f/∂xj = 0 at x = x* for j = 1, 2, …, n.

  21. Constrained optimization with equality constraints: optimize f(x) subject to g_i(x) = b_i for i = 1, …, m. The classical approach is the method of Lagrange multipliers: form the Lagrangian L(x, λ) = f(x) - Σ_i λ_i [g_i(x) - b_i] and set all of its partial derivatives to zero. For any feasible solution, we have g_i(x) = b_i, so L(x, λ) = f(x).

  22. Example: [problem shown on slide]. Solutions: f = 2 and f = -2.
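Since the example's equations did not survive the transcript, the following SymPy sketch uses an assumed stand-in problem, optimize f = x1 + x2 subject to x1² + x2² = 2, chosen only because its stationary points reproduce the quoted values f = 2 and f = -2:

```python
# The slide's equations are not recoverable from this transcript. As an
# assumed stand-in that reproduces the quoted values f = 2 and f = -2, take:
#   optimize f = x1 + x2   subject to   x1**2 + x2**2 = 2.
# Form the Lagrangian L = f - lam*(g - b) and set its partials to zero.
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam", real=True)
f = x1 + x2
g = x1**2 + x2**2 - 2                    # equality constraint: g = 0

L = f - lam * g
eqs = [sp.diff(L, v) for v in (x1, x2, lam)]
for sol in sp.solve(eqs, [x1, x2, lam], dict=True):
    print(sol, "-> f =", f.subs(sol))    # f = 2 (maximum), f = -2 (minimum)
```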

  23. 4. Types of Nonlinear Programming Problems. Nonlinear programming problems come in many different shapes and forms, and no single algorithm can solve all of them; instead, algorithms have been developed for various special types. (1) Unconstrained optimization

  24. (2) Convex programming. The assumptions in convex programming are that f(x) is a concave function and each g_i(x) is a convex function; these assumptions are enough to ensure that a local maximum is a global maximum. (3) Linearly constrained optimization (a special case of convex programming). (4) Quadratic programming (a special case of linearly constrained optimization).

  25. (5) Separable programming (a special case of convex programming), where the one additional assumption is that all the f(x) and g_i(x) functions are separable, i.e., each can be written as a sum of functions of the individual variables.

  26. (6) Nonconvex programming

  27. 5. One-Variable Unconstrained Optimization: One-Dimensional Search Procedure. The idea behind the one-dimensional search procedure is a very intuitive one: it checks whether the slope is positive or negative at a trial solution. As shown in the figure, x* is the optimum. If the first derivative at a particular value of x is positive, then x* must be larger than this x, so this x becomes a lower bound. Conversely, if the first derivative at a particular value of x is negative, then x* must be smaller than this x, so this x becomes an upper bound. A number of efficient one-dimensional search procedures are available, for example sequential-search techniques, three-point interval search, Fibonacci search, golden-mean search, etc. Next, only one of the sequential-search techniques, the midpoint method, is introduced.

  28. The entire process of the midpoint method is summarized as follows. Initialization: select an error tolerance ε and initial bounds x_lower and x_upper bracketing x*; set the trial solution x′ = (x_lower + x_upper) / 2. Iteration: evaluate df/dx at x = x′; if df/dx ≥ 0, reset x_lower = x′; if df/dx ≤ 0, reset x_upper = x′; set the new x′ = (x_lower + x_upper) / 2. Stopping rule: if x_upper - x_lower ≤ 2ε, stop; x′ is the final approximation of x*.
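The summary above translates directly into code; the sketch below uses an illustrative concave function, not the slide's example:

```python
# Direct implementation of the midpoint method for maximizing a concave
# f(x); only the derivative df is needed. The example f(x) = 4x - x**2 is
# illustrative, not the slide's example.
def midpoint_search(df, x_lo, x_hi, eps=1e-6):
    """Bisect on the sign of df until the bracket [x_lo, x_hi] is small."""
    while x_hi - x_lo > 2 * eps:
        x_mid = (x_lo + x_hi) / 2
        if df(x_mid) > 0:        # f still increasing: x* is to the right
            x_lo = x_mid
        else:                    # f decreasing: x* is to the left
            x_hi = x_mid
    return (x_lo + x_hi) / 2

df = lambda x: 4 - 2 * x         # derivative of f(x) = 4x - x**2
print(midpoint_search(df, x_lo=0.0, x_hi=5.0))   # -> approximately 2.0
```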

  29. Example: [worked midpoint-method example shown on slide]

  30. 6. Multivariable Unconstrained Optimization. One-variable unconstrained optimization: the first derivative of the objective function is used to select one of the just two possible directions (increase x or decrease x) in which to move from the current trial solution to the next one; the goal is to reach a point eventually where the first derivative is 0. Multivariable unconstrained optimization: there are innumerable possible directions in which to move, so the gradient of the objective function is used to select the specific direction; the goal is to reach a point eventually where all the partial derivatives are 0. The gradient at a specific point x = x′ is the vector whose elements are the respective partial derivatives evaluated at x = x′, so that ∇f(x′) = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn) at x = x′.

  31. Gradient Search Procedure. The gradient search procedure keeps moving in the direction of the gradient from the current trial solution, not stopping until f(x) stops increasing. That stopping point becomes the next trial solution, and the gradient is then recalculated to determine the new direction in which to move. With this approach, each iteration changes the current trial solution x′ as follows: reset x′ = x′ + t*∇f(x′), where t* is the positive value of t that maximizes f(x′ + t∇f(x′)).

  32. Summary of the Gradient Search Procedure. Initialization: select a tolerance ε and an initial trial solution x′. Iteration: express f(x′ + t∇f(x′)) as a function of t, find the t* that maximizes it for t ≥ 0, and reset x′ = x′ + t*∇f(x′). Stopping rule: stop when |∂f/∂xj| ≤ ε for all j; the current x′ is the approximation of the optimal solution x*.

  33. Example: It can be verified that f(x) is concave by the convexity test. To begin the gradient search procedure, x = (0, 0) is selected as the initial trial solution, the gradient at x = (0, 0) is computed, and the first iteration sets x = (0, 0) + t∇f(0, 0); substituting these expressions into f(x) gives f as a function of t alone. [The objective function and the intermediate expressions are shown on the slide.]

  34. By continuing in this way, the subsequent solutions can be obtained as shown in the figure. Because these points are converging to x* = (1, 1), this solution is the optimal solution, as verified by the fact that the gradient vanishes there: ∇f(1, 1) = (0, 0). However, because this converging sequence of trial solutions never reaches its limit, the procedure actually will stop somewhere slightly below (1, 1) as its final approximation of x*.
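The slides' objective function itself is not in the transcript; the sketch below assumes the classic Hillier and Lieberman example f(x) = 2x1x2 + 2x2 - x1² - 2x2², which is concave and whose gradient search moves from (0, 0) toward x* = (1, 1) exactly as described:

```python
# The slide's objective function is not recoverable from this transcript.
# Assumed here (consistent with the start point (0,0) and the limit (1,1)
# cited on the slides) is the classic Hillier & Lieberman example
#   f(x) = 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2,
# which is concave with its maximum at x* = (1, 1).
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return 2*x[0]*x[1] + 2*x[1] - x[0]**2 - 2*x[1]**2

def grad_f(x):
    return np.array([2*x[1] - 2*x[0], 2*x[0] + 2 - 4*x[1]])

def gradient_search(x, eps=1e-4):
    g = grad_f(x)
    while np.max(np.abs(g)) > eps:       # stop when all partials are ~0
        # one-dimensional search: maximize f(x + t*g) over t in [0, 1]
        t_star = minimize_scalar(lambda t: -f(x + t * g),
                                 bounds=(0.0, 1.0), method="bounded").x
        x = x + t_star * g               # move to where f stops increasing
        g = grad_f(x)                    # recompute the direction
    return x

print(gradient_search(np.array([0.0, 0.0])))   # -> approximately [1. 1.]
```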

  35. 7. Karush-Kuhn-Tucker (KKT) Conditions for Constrained Optimization: the conditions that must be satisfied by an optimal solution of a nonlinear programming problem. They are necessary in general, and also sufficient under the convexity assumptions given on slide 39.

  36. What are Karush-Kuhn-Tucker (KKT) conditions? [Definition and derivation shown on slide.]

  37. Example: [example problem shown on slide]

  38. The necessary conditions, if x and y are to be nonnegative, are: [conditions shown on the slide]

  39. Karush-Kuhn-Tucker (KKT) conditions for the general case (maximize f(x) subject to g_i(x) ≤ b_i for i = 1, …, m, and x ≥ 0): x* can be optimal only if there exist multipliers u_1, …, u_m such that, for j = 1, …, n and i = 1, …, m, (1) ∂f/∂xj - Σ_i u_i ∂g_i/∂xj ≤ 0 at x*; (2) xj* (∂f/∂xj - Σ_i u_i ∂g_i/∂xj) = 0; (3) g_i(x*) - b_i ≤ 0; (4) u_i (g_i(x*) - b_i) = 0; (5) xj* ≥ 0; (6) u_i ≥ 0. These conditions are sufficient if f is concave and the g_i are all convex.
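These conditions can be checked numerically at a candidate solution. The sketch below uses an illustrative problem adapted from Hillier and Lieberman, not necessarily the slides' example:

```python
# Numeric spot-check of the KKT conditions at a candidate (x*, u*) for the
# standard form: maximize f(x) s.t. g(x) <= b, x >= 0. The problem below is
# illustrative (adapted from Hillier & Lieberman), not the slide's example:
#   maximize ln(x1 + 1) + x2   s.t.   2*x1 + x2 <= 3,  x1, x2 >= 0,
# with candidate optimum x* = (0, 3) and multiplier u* = 1.
import numpy as np

def grad_f(x):
    return np.array([1.0 / (x[0] + 1.0), 1.0])   # partials of ln(x1+1) + x2

def kkt_holds(x, u, tol=1e-9):
    slack = 3.0 - (2 * x[0] + x[1])               # b - g(x)
    stat = grad_f(x) - u * np.array([2.0, 1.0])   # df/dxj - u * dg/dxj
    return (np.all(stat <= tol)                   # condition (1)
            and np.all(np.abs(x * stat) <= tol)   # condition (2)
            and slack >= -tol                     # condition (3)
            and abs(u * slack) <= tol             # condition (4)
            and np.all(x >= -tol)                 # condition (5)
            and u >= -tol)                        # condition (6)

print(kkt_holds(np.array([0.0, 3.0]), u=1.0))     # -> True
```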
