Introduction to Non-Linear Optimization
PART I



Optimization Tree

Figure 1: Optimization tree.


What is Optimization?

  • Optimization is an iterative process by which a desired solution (max/min) of a problem is found while satisfying all of its constraints or bound conditions.

Figure 2: Optimum solution is found while satisfying its constraint (derivative must be zero at optimum).

  • An optimization problem can be linear or non-linear.

  • Non-linear optimization is accomplished by numerical 'search methods'.

  • Search methods are applied iteratively until a solution is achieved.

  • The search procedure is termed the algorithm.


What is Optimization?(Cont.)

  • Linear problems are solved by the Simplex or graphical methods.

  • The solution of a linear problem lies on the boundary of the feasible region.

Figure 3: Solution of linear problem

Figure 4: Three dimensional solution of non-linear problem

  • The solution of a non-linear problem may lie within or on the boundary of the feasible region.



Maximize X1 + 1.5 X2

Subject to:
X1 + X2 ≤ 150
0.25 X1 + 0.5 X2 ≤ 50
X1 ≥ 50
X2 ≥ 25
X1 ≥ 0, X2 ≥ 0
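As a quick check (not part of the original slides; the variable names are illustrative), this linear program can be solved in MATLAB with linprog, which minimizes, so the objective is negated:

f  = -[1; 1.5];          % maximize x1 + 1.5*x2  ->  minimize -(x1 + 1.5*x2)
A  = [1 1; 0.25 0.5];    % x1 + x2 <= 150,  0.25*x1 + 0.5*x2 <= 50
b  = [150; 50];
lb = [50; 25];           % x1 >= 50, x2 >= 25 (these also imply x >= 0)
[xopt, fval] = linprog(f, A, b, [], [], lb, []);
maxval = -fval           % optimal value of the original maximization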

Fundamentals of Non-Linear Optimization

  • Single Objective function f(x)

    • Maximization

    • Minimization

  • Design Variables, xi , i=0,1,2,3…..

  • Constraints

    • Inequality

    • Equality

Figure 5: Example of design variables and constraints used in non-linear optimization.

  • Optimal points

    • Local minimum/maximum: a point x* is a local optimum if no other point x in its neighborhood gives a better value of f than x*.

    • Global minimum/maximum: a point x** is a global optimum if no other point x in the entire search space gives a better value of f than x**.
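In symbols, for the minimization case (standard definitions, not verbatim from the slide):

f(x^*) \le f(x) \quad \text{for all } x \text{ with } \|x - x^*\| < \varepsilon \qquad \text{(local minimum)}

f(x^{**}) \le f(x) \quad \text{for all } x \text{ in the search space} \qquad \text{(global minimum)}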



Fundamentals of Non-Linear Optimization (Cont.)

Figure 6: Global versus local optimization.

Figure 7: The local optimum equals the global optimum if the function is convex.



Fundamentals of Non-Linear Optimization (Cont.)

  • A function f is convex if, for any point Xa between X1 and X2, f(Xa) lies at or below the straight line (chord) joining f(X1) and f(X2).

  • Convexity condition – the Hessian (second-order derivative) matrix of f must be positive semi-definite (all eigenvalues positive or zero); see the numerical check below.

Figure 8: Convex and nonconvex set

Figure 9: Convex function
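As an illustration (the quadratic below is an assumed example, not from the slides), the positive semi-definiteness test can be checked numerically in MATLAB:

% Convexity check for f(x) = x1^2 + x1*x2 + x2^2 (assumed example).
H = [2 1; 1 2];              % constant Hessian of the quadratic above
lambda = eig(H)              % eigenvalues: 1 and 3, both positive
isConvex = all(lambda >= 0)  % positive semi-definite => f is convex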



Mathematical Background

  • Slope or gradient of the objective function f – represents the direction in which the function increases most rapidly; the negative gradient gives the direction of steepest decrease.

  • Taylor series expansion – local approximation of f around a point using its derivatives.

  • Jacobian – matrix of the gradients of f (or of a vector of functions) with respect to several variables.
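The formulas on this slide were images and did not survive extraction; in standard notation they read (assumed reconstruction):

\nabla f(\mathbf{x}) = \left[\frac{\partial f}{\partial x_1},\; \dots,\; \frac{\partial f}{\partial x_n}\right]^{T}

f(\mathbf{x}+\Delta\mathbf{x}) \approx f(\mathbf{x}) + \nabla f(\mathbf{x})^{T}\,\Delta\mathbf{x} + \tfrac{1}{2}\,\Delta\mathbf{x}^{T} H(\mathbf{x})\,\Delta\mathbf{x}

J_{ij} = \frac{\partial f_i}{\partial x_j}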



Mathematical Background (Cont.)

  • First order condition (FOC)

  • Hessian – matrix of second derivatives of f with respect to several variables

  • Second order condition (SOC)

    • Eigenvalues of H(X*) are all positive

    • Determinants of all leading principal minors of H(X*) are positive
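In symbols (standard statement, assumed reconstruction of the slide's formulas):

\text{FOC:}\quad \nabla f(\mathbf{x}^*) = \mathbf{0}

\text{SOC:}\quad H(\mathbf{x}^*) = \nabla^{2} f(\mathbf{x}^*) \succ 0 \quad (\text{all eigenvalues of } H(\mathbf{x}^*) \text{ positive})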



Optimization Algorithm

  • Deterministic – specific rules (e.g. gradient and Hessian information) are used to move from one iteration to the next.

  • Stochastic – probabilistic rules are used for the subsequent iteration.

  • Optimal design – engineering design based on an optimization algorithm.

  • Lagrangian method – the sum of the objective function and a linear combination of the constraints; see the expression below.
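For an objective f(x) with equality constraints h_j(x) = 0 and inequality constraints g_k(x) ≤ 0, the Lagrangian takes the standard form (assumed reconstruction):

L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = f(\mathbf{x}) + \sum_{j} \lambda_j\, h_j(\mathbf{x}) + \sum_{k} \mu_k\, g_k(\mathbf{x})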



Optimization Methods

  • Deterministic

    • Direct search – uses objective function values only to locate the minimum.

    • Gradient based – uses first or second order derivatives of the objective function.

    • A minimization routine handles a maximization problem by minimizing –f(x).

  • Single variable techniques

    • Newton–Raphson – gradient-based technique (uses the FOC); a minimal sketch follows this list.

    • Golden Section search – iterative method that reduces the search interval (step size) at each step.

  • Multivariable techniques (make use of single-variable techniques, especially Golden Section)

    • Unconstrained optimization

      a.) Powell's method – approximates the objective with a quadratic (degree 2) polynomial; non-gradient based.

      b.) Gradient based – Steepest Descent (FOC) or least mean squares (LMS).

      c.) Hessian based – Conjugate Gradient (FOC) and BFGS (SOC).
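As a minimal sketch of the single-variable Newton–Raphson idea (the function below is an assumed example, not from the slides), the iteration drives the first derivative to zero, i.e. enforces the FOC:

% Newton-Raphson for minimizing f(x) = (x - 2)^2 + 1 (assumed example):
% iterate x <- x - f'(x)/f''(x) until the FOC f'(x) = 0 is met.
fp  = @(x) 2*(x - 2);        % first derivative
fpp = @(x) 2;                % second derivative (constant here)
x = 0;                       % initial guess
for k = 1:20
    step = fp(x)/fpp(x);
    x = x - step;
    if abs(step) < 1e-6, break; end
end
x                            % converges to the minimizer x = 2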



Optimization Methods …Constrained

  • Constrained optimization

    a.) Indirect approach – transform into an unconstrained problem.

    b.) Exterior Penalty Function (EPF) and Augmented Lagrange Multiplier methods.

    c.) Direct methods – Sequential Linear Programming (SLP), Sequential Quadratic Programming (SQP) and the Generalized Reduced Gradient (GRG) method.

  • Figure 10: Descent gradient or LMS



Optimization Methods (Cont.)

  • Global optimization – stochastic techniques

  • Simulated Annealing (SA) – based on the minimum-energy principle of cooling a metal's crystalline structure

  • Genetic Algorithm (GA) – based on the survival-of-the-fittest principle of evolutionary theory



Optimization Methods (Example)

Multivariable gradient-based optimization: J is the cost function to be minimized in two dimensions. The contours of the paraboloid J shrink as the cost decreases.

function retval = Example6_1(x)

% example 6.1

retval = 3 + (x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;

>> SteepestDescent('Example6_1', [0.5 0.5], 20, 0.0001, 0, 1, 20)

where

[0.5 0.5] - initial guess

20 - number of iterations

0.0001 - golden search tolerance

0 - initial step size

1 - step interval

20 - scanning steps

>> ans

2.7585 1.8960

Figure 11: Multivariable Gradient based optimization

Figure 12: Steepest Descent
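The SteepestDescent routine called above comes with the textbook code (Venkataraman, reference 2) and is not a built-in MATLAB function. As a rough stand-in, a minimal fixed-step gradient descent on the same cost function might look like this (the step size and iteration limit are assumptions; the textbook routine uses a golden-section line search instead):

% Minimal stand-in for the textbook SteepestDescent routine (assumed fixed step).
f     = @(x) 3 + (x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;
gradf = @(x) [ 2*(x(1) - 1.5*x(2));
              -3*(x(1) - 1.5*x(2)) + 2*(x(2) - 2) ];
x = [0.5; 0.5];              % initial guess from the example
alpha = 0.1;                 % assumed fixed step size
for k = 1:500
    g = gradf(x);
    if norm(g) < 1e-6, break; end
    x = x - alpha*g;         % move opposite the gradient direction
end
x                            % approaches the true minimizer [3; 2]

Note that the true minimizer of this cost is x = [3, 2] with J = 3; the textbook run above was stopped after 20 iterations, which is why its answer (2.7585, 1.8960) has not yet fully converged.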



Presentation Outline

  • Introduction

    • Function Optimization

    • Optimization Toolbox

    • Routines / Algorithms available

  • Minimization Problems

    • Unconstrained

    • Constrained

      • Example

      • The Algorithm Description

  • Multiobjective Optimization

    • Optimal PID Control Example



Function Optimization

  • Optimization concerns the minimization or maximization of functions.

  • Standard Optimization Problem:

Subject to:

Equality Constraints

Inequality Constraints

Side Constraints

where:

f(x) is the objective function, which measures and evaluates the performance of a system. In a standard problem, we minimize the function; maximization is equivalent to minimizing the negative of the objective function.

x is a column vector of design variables, which can affect the performance of the system.


Function Optimization (Cont.)

  • Constraints – limitations on the design space. They can be linear or nonlinear, explicit or implicit functions.

Equality Constraints

Inequality Constraints (most algorithms require the "less than or equal to" form)

Side Constraints



Optimization Toolbox

  • The Optimization Toolbox is a collection of functions that extends the capability of MATLAB.

  • The toolbox includes routines for:

  • Unconstrained optimization

  • Constrained nonlinear optimization, including goal attainment problems, minimax problems, and semi-infinite minimization problems

  • Quadratic and linear programming

  • Nonlinear least squares and curve fitting

  • Solving nonlinear systems of equations

  • Constrained linear least squares

  • Specialized algorithms for large-scale problems







Implementing Opt. Toolbox

  • Most of these optimization routines require the definition of an M-file containing the function, f, to be minimized.

  • Maximization is achieved by supplying the routines with –f.

  • Optimization options passed to the routines change the optimization parameters.

  • Default optimization parameters can be changed through an options structure.



Unconstrained Minimization

  • Consider the problem of finding a set of values [x1 x2]T that solves

    minimize f(x) = exp(x1)·(4 x1² + 2 x2² + 4 x1 x2 + 2 x2 + 1)

  • Steps:

    • Create an M-file that returns the function value (objective function). Call it objfun.m

    • Then invoke the unconstrained minimization routine. Use fminunc


Step 1 – Obj. Function

function f = objfun(x)

f=exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);

Objective function



Step 2 – Invoke Routine

Starting with a guess:

x0 = [-1,1];

options = optimset('LargeScale','off');

[xmin,feval,exitflag,output] = fminunc('objfun',x0,options);

Here options carries the optimization parameter settings; 'objfun', x0 and options are the input arguments, and xmin, feval, exitflag and output are the output arguments.



Results

xmin =

0.5000 -1.0000

feval =

1.3028e-010

exitflag =

1

output =

iterations: 7

funcCount: 40

stepsize: 1

firstorderopt: 8.1998e-004

algorithm: 'medium-scale: Quasi-Newton line search'

xmin – minimum point of the design variables

feval – objective function value at the minimum

exitflag – tells whether the algorithm converged; if exitflag > 0, a local minimum was found

output – some other information about the run



More on fminunc – Input

[xmin,feval,exitflag,output,grad,hessian]=

fminunc(fun,x0,options,P1,P2,…)

fun : The objective function (name or handle) whose value is returned.

x0 : The initial guess. The guess must be a vector whose size is the number of design variables.

options : Sets some of the optimization parameters (more in a few slides).

P1,P2,… : Additional parameters to pass to the objective function.



More on fminunc – Output

[xmin,feval,exitflag,output,grad,hessian]=

fminunc(fun,x0,options,P1,P2,…)

  • xmin : Vector of the minimum point (optimal point). The size is the number of design variables.

  • feval : The objective function value at the optimal point.

  • exitflag : A value that shows whether the optimization routine terminated successfully (converged if > 0).

  • output : A structure that gives more details about the optimization.

  • grad : The gradient value at the optimal point.

  • hessian : The Hessian value at the optimal point.



Options Setting – optimset

Options = optimset('param1',value1, 'param2',value2,…)

  • The routines in the Optimization Toolbox have a set of default optimization parameters.

  • However, the toolbox allows you to alter some of those parameters, for example: the tolerances, the step size, the gradient or Hessian values, the maximum number of iterations, etc.

  • There is also a list of features available, for example: displaying the values at each iteration, comparing the user-supplied gradient or Hessian, etc.

  • You can also choose the algorithm you wish to use.



Options Setting (Cont.)

Options = optimset('param1',value1, 'param2',value2,…)

  • Type help optimset in the command window and a list of the available option settings will be displayed.

  • How to read it? For example:

LargeScale - Use large-scale algorithm if possible [ {on} | off ]

Here LargeScale is the parameter (param1), on/off are its possible values (value1), and the default is the one shown in { }.



Options Setting (Cont.)

Options = optimset('param1',value1, 'param2',value2,…)

LargeScale - Use large-scale algorithm if possible [ {on} | off ]

Since the default is on, to turn it off we just type:

Options = optimset('LargeScale', 'off')

and pass it to the input of fminunc.



Useful Option Settings

Highly recommended to use!!!

  • Display - Level of display [ off | iter | notify | final ]

  • MaxIter - Maximum number of iterations allowed [ positive integer ]

  • TolCon - Termination tolerance on the constraint violation [

    positive scalar ]

  • TolFun - Termination tolerance on the function value [ positive

    scalar ]

  • TolX - Termination tolerance on X [ positive scalar ]
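Combining several of these settings, a typical call might look as follows (the specific values are illustrative, not from the slides):

options = optimset('Display','iter','MaxIter',200,'TolFun',1e-8,'TolX',1e-8);
[xmin,feval,exitflag] = fminunc('objfun',x0,options);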



fminunc and fminsearch

  • fminunc uses algorithms with gradient and Hessian information. Two modes:

    • Large-scale: interior-reflective Newton

    • Medium-scale: quasi-Newton (BFGS)

  • Not preferred for solving highly discontinuous functions.

  • This function may only give local solutions.

  • fminsearch is generally less efficient than fminunc for problems of order greater than two. However, when the problem is highly discontinuous, fminsearch may be more robust.

  • It is a direct search method that does not use numerical or analytic gradients as fminunc does.

  • This function may only give local solutions.



Constrained Minimization

The output lambda is the vector of Lagrange multipliers at the optimal point.

[xmin,feval,exitflag,output,lambda,grad,hessian] = fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,…)



Example

function f = myfun(x)

f=-x(1)*x(2)*x(3);

Subject to (see the constraint setup on the following slides):

0 ≤ x1 + 2 x2 + 2 x3 ≤ 72

2 x1² + x2 ≤ 0

0 ≤ x1, x2, x3 ≤ 30



Example (Cont.)

For the nonlinear inequality constraint 2 x1² + x2 ≤ 0, create a function called nonlcon which returns the two constraint vectors [C,Ceq]:

function [C,Ceq]=nonlcon(x)

C=2*x(1)^2+x(2);

Ceq=[];

Remember to return a null matrix if the constraint does not apply.



Example (Cont.)

Initial guess (3 design variables)

x0=[10;10;10];

A=[-1 -2 -2;1 2 2];

B=[0 72]';

LB = [0 0 0]';

UB = [30 30 30]';

[x,feval]=fmincon(@myfun,x0,A,B,[],[],LB,UB,@nonlcon)

CAREFUL!!! Keep the argument order:

fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,…)


Example (Cont.)

Warning: Large-scale (trust region) method does not currently solve this type of problem, switching to medium-scale (line search).

> In D:\Programs\MATLAB6p1\toolbox\optim\fmincon.m at line 213

In D:\usr\CHINTANG\OptToolbox\min_con.m at line 6

Optimization terminated successfully:

Magnitude of directional derivative in search direction less than 2*options.TolFun and maximum constraint violation is less than options.TolCon

Active Constraints:

2

9

x =

0.00050378663220

0.00000000000000

30.00000000000000

feval =

-4.657237250542452e-035

The constraint numbers reported as active follow the sequence A, B, Aeq, Beq, LB, UB, C, Ceq; here constraints 2 and 9 are active at the solution.



Multiobjective Optimization

  • Previous examples involved problems with a single objective function.

  • Now let us look at solving a problem with a multiobjective function using lsqnonlin.

  • The example is taken from data curve fitting.

  • In a curve-fitting problem, the error at each data point is reduced simultaneously, producing a multiobjective (vector-valued) function.


lsqnonlin in MATLAB – Curve Fitting

clc; %recfit.m
clear;
global data;
data= [ 0.6000 0.999
        0.6500 0.998
        0.7000 0.997
        0.7500 0.995
        0.8000 0.982
        0.8500 0.975
        0.9000 0.932
        0.9500 0.862
        1.0000 0.714
        1.0500 0.520
        1.1000 0.287
        1.1500 0.134
        1.2000 0.0623
        1.2500 0.0245
        1.3000 0.0100
        1.3500 0.0040
        1.4000 0.0015
        1.4500 0.0007
        1.5000 0.0003 ]; % experimental data, 1st column x, 2nd column R
x=data(:,1);
Rexp=data(:,2);
plot(x,Rexp,'ro'); % plot the experimental data
hold on
b0=[1.0 1.0]; % start values for the parameters
b=lsqnonlin('recfun',b0) % run lsqnonlin with start value b0; returned parameter values are stored in b
Rcal=1./(1+exp(1.0986/b(1)*(x-b(2)))); % calculate the fitted values with parameter b
plot(x,Rcal,'b'); % plot the fitted values on the same graph

Find b1 and b2:

>> recfit
b = 0.0603    1.0513

%recfun.m
function y=recfun(b)
global data;
x=data(:,1);
Rexp=data(:,2);
Rcal=1./(1+exp(1.0986/b(1)*(x-b(2)))); % the calculated value from the model
%y=sum((Rcal-Rexp).^2); % sum of squared differences (not needed here)
y=Rcal-Rexp; % vector of differences between the calculated and experimental values; lsqnonlin minimizes the sum of their squares

Source: Short tutorial on Model Fitting, westlake.che.gatech.edu (last edited 26 October 2003).



Simulink Example

Shooting a ball at a flying basket (Simulink model jeff_flybasket.mdl). The slide's figure shows:

  • the equation of ball motion in the horizontal (z) direction

  • the equation of ball motion in the vertical (h) direction

  • the aerodynamic drag force

  • the angle of the ball



Simulink Example – Shooting Ball

%% Start_flyBasketBall.m

InitialGuess= pi/2.5 ;

X = fminsearch('Distflysim', InitialGuess)*180/pi;

fprintf('\nShoot at %f deg \n', X);

function P = Distflysim(theta_0)

F0=25.0; %N

cart_mass=2; %kg

x_dot_max=50; %m/sec

ro_air=1.224; %kg/m^3

h0=0.5; %m

z0=0;

Cd=1;

r_ball=0.05; %m

A_ball=pi*r_ball^2;

ball_mass=0.1; %kg

g=-9.8; %m/sec^2

theta_0; %rad (launch angle passed in as the input argument)

V0=50; %m/sec
%F0=15.0; %N

AeroFac=Cd*A_ball*ro_air/2;

theta_0 % display the current launch angle

assignin('base','F0',F0);

assignin('base','cart_mass',cart_mass);

assignin('base','x_dot_max',x_dot_max);

assignin('base','AeroFac',AeroFac);

assignin('base','ball_mass',ball_mass);

assignin('base','g',g);

assignin('base','V0',V0);

assignin('base','theta_0',theta_0);

% Newrtp=rsimgetrtp('jeff_basket');

% save ShotParams.mat Newrtp;

% !jeff_basket -p ShotParams.mat

% load jeff_basket;

[t,x,y]=sim('jeff_flybasket',[0 10]); % run the Simulink model for 10 s

np=max(size(y));

xf=y(np,1); % final value of the first logged output

zf=y(np,2); % final value of the second logged output

%hf=y(np,3);

P=(xf-zf)^2; %+(hf-25)^2;  cost: squared difference between the two final positions

% BasketflyBallnit1.m

F0=25.0; %N

cart_mass=1; %kg

x_dot_max=50; %m/sec

ro_air=1.224; %kg/m^3

Cd=1;

r_ball=0.05; %m

A_ball=pi*r_ball^2;

ball_mass=0.05; %kg

g=-9.8; %m/sec^2

theta_0=pi/2.5; %rad

V0=50; %m/sec

AeroFac=Cd*A_ball*ro_air/2;



REFERENCES

1. Optimization Toolbox for Use with MATLAB, User's Guide, The MathWorks Inc., 2006.
2. Applied Optimization with MATLAB Programming, P. Venkataraman, Wiley InterScience, 2002.
3. Optimization for Engineering Design, Kalyanmoy Deb, Prentice Hall, 1996.
4. http://mathdemos.gcsu.edu/mathdemos/maxmin/max_min.html
5. http://www.math.ucdavis.edu/~kouba/CalcOneDIRECTORY/maxmindirectory/MaxMin.html
6. http://users.powernet.co.uk/kienzle/octave/optim.html
7. http://www.cse.uiuc.edu/eot/modules/optimization/SteepestDescent/