
Chapter 6 Method of Successive Quadratic Programming



  1. Chapter 6 Method of Successive Quadratic Programming Professor Shi-Shang Jang National Tsing-Hua University Chemical Engineering Department

  2. 6-1 Quadratic Programming Problems • Problem: minimize a quadratic objective subject to linear constraints, i.e. minimize f(x) = cᵀx + ½xᵀQx subject to Ax ≤ b, x ≥ 0, where Q is a symmetric (and, for a convex problem, positive semidefinite) matrix.

  3. Classical Constrained Regression Problem • Minimize eᵀe subject to Y = Xβ + e (plus any linear side constraints on β), where β is an m×1 vector of regression coefficients to be estimated, e is an n×1 vector of error variables, Y is an n×1 vector of observations on the dependent variable, and X is an n×m matrix of observations on the independent variables. Since the objective is quadratic in (β, e) and the constraints are linear, the classical regression problem is a quadratic programming problem.
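As a concrete sketch, the constrained regression can be passed to quadprog directly; the data, the nonnegativity constraint β ≥ 0, and all names below are illustrative assumptions, not part of the slides.

% Constrained regression as a QP: minimize e'e = (Y - X*beta)'(Y - X*beta)
% subject to beta >= 0 (an illustrative side constraint)
n = 50; m = 3;
X = [ones(n,1) rand(n,2)];          % n x m matrix of regressors
beta_true = [1; 2; 0.5];
Y = X*beta_true + 0.1*randn(n,1);   % n x 1 vector of observations
H = 2*(X'*X);                       % quadratic term of e'e in beta
f = -2*X'*Y;                        % linear term of e'e in beta
A = -eye(m); b = zeros(m,1);        % -beta <= 0, i.e. beta >= 0
beta_hat = quadprog(H,f,A,b)        % constrained estimate of beta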

  4. Example - A numerical problem

  5. Example - A numerical problem

  6. Example - A scientist's model • A scientist has observed a certain quantity Q as a function of t. She has good reason to believe that there is a physical law relating t and Q that takes the form: Q(t) = a sin t + b cos t + c. She wants the best possible estimate of the coefficients a, b, and c using the results of her n experiments:

  7. Example - A scientist's model • (t1,q1; t2,q2; …; tn,qn). Taking into account that her experiments are not perfect, and perhaps also that the model is not rigorous enough, she does not expect a perfect fit, so she defines an error term Ei = qi - Q(ti) for each observation. Her idea is to minimize the sum of squared errors Σi Ei².

  8. Example - A scientist's model • What if a=3, b=2, c=5 with 15 experimental data points?
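A minimal MATLAB sketch of this fit, assuming 15 equally spaced sample times on [0, 2π] and Gaussian measurement noise (both assumptions of mine); since Q(t) is linear in a, b, and c, ordinary least squares applies.

% Fit Q(t) = a*sin(t) + b*cos(t) + c with a=3, b=2, c=5 from 15 noisy points
t = linspace(0,2*pi,15)';               % 15 experimental times (assumed spacing)
q = 3*sin(t) + 2*cos(t) + 5 + 0.2*randn(15,1);
X = [sin(t) cos(t) ones(15,1)];         % design matrix
theta = (X'*X)\(X'*q)                   % estimates of [a; b; c]
qfit = X*theta;
plot(t,q,'o',t,qfit,'-')                % data versus fitted curve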

  9. Classical Least Square Fitting • Problem: Given a set of data (x1,y1; x2,y2; …; xn,yn), find a set of parameters θ = {a, b, c, …} such that: • ei = yi - (a xi + b xi² + …) • and Σi ei² is minimized. • The solution of the unconstrained case: • θ = (XᵀX)⁻¹XᵀY, i.e. the pseudo-inverse (quasi-inverse) of X applied to Y.

  10. Example: Curve Fitting
>> x=linspace(0,5);
>> for i=1:100
     y(i)=3*x(i)^2+2*x(i)+5;                      % true quadratic model
     y(i)=y(i)+randn(1,1);                        % add measurement noise
     xx(i,1)=x(i)*x(i); xx(i,2)=x(i); xx(i,3)=1;  % row of the design matrix
   end
>> theta=inv(xx'*xx)*xx'*y'                       % least-squares estimate
theta =
    2.9983
    2.1215
    4.5839
>> for i=1:100
     yy(i)=theta(1)*x(i)^2+theta(2)*x(i)+theta(3);
   end
>> plot(x,yy)

  11. Example: Curve Fitting - Continued

  12. Program QUADPROG
QUADPROG  Quadratic programming.
   X=QUADPROG(H,f,A,b) solves the quadratic programming problem:

       min 0.5*x'*H*x + f'*x    subject to:  A*x <= b
        x

   X=QUADPROG(H,f,A,b,Aeq,beq) solves the problem above while additionally
   satisfying the equality constraints Aeq*x = beq.

   X=QUADPROG(H,f,A,b,Aeq,beq,LB,UB) defines a set of lower and upper
   bounds on the design variables, X, so that the solution is in the range
   LB <= X <= UB. Use empty matrices for LB and UB if no bounds exist.
   Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if X(i)
   is unbounded above.
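As a quick usage sketch (the two-variable problem and its numbers are illustrative, not from the slides):

% Small illustrative QP:  min 0.5*x'*H*x + f'*x  subject to  x1 + x2 <= 1
H = [2 0; 0 2];        % convex quadratic term
f = [-2; -4];
A = [1 1]; b = 1;      % single linear inequality
x = quadprog(H,f,A,b)  % returns approximately x = [0; 1]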

  13. Curve Fitting Using Quadprog
x=linspace(0,5);
for i=1:100
  y(i)=3*x(i)^2+2*x(i)+5;
  y(i)=y(i)+randn(1);
  xx(i,1)=x(i)*x(i); xx(i,2)=x(i); xx(i,3)=1;
end
% decision vector is [theta; e]: 3 coefficients followed by 100 error variables
a=eye(100); a1=zeros(1,100);
H=[a1; a1; a1; a];
f=zeros(103,1);
HH=[f f f H];                  % 0.5*z'*HH*z = 0.5*sum(e.^2)
Aeq=[xx eye(100)]; beq=y';     % equality constraints: xx*theta + e = y
A=[]; b=[];
X=quadprog(HH,f,A,b,Aeq,beq);
XX=X';
XX(1)
XX(2)
XX(3)

  14. The Solution Method - Complementary Pivot Problem • K-T conditions for the QP (minimize cᵀx + ½xᵀQx subject to Ax ≤ b, x ≥ 0): there exist multipliers u ≥ 0 (for Ax ≤ b) and v ≥ 0 (for x ≥ 0) such that c + Qx + Aᵀu - v = 0, Ax + s = b with slacks s ≥ 0, x ≥ 0, and the complementarity conditions vᵀx = 0 and uᵀs = 0 hold.

  15. The Solution Method - Complementary Pivot Problem • Find vectors w ≥ 0 and z ≥ 0 such that w = Mz + q and wᵀz = 0, where z = (x, u), w = (v, s), M = [Q Aᵀ; -A 0], and q = (c, b). This is the linear complementarity problem solved by the complementary pivot (Lemke) algorithm.

  16. The Solution Method - Complementary Pivot Problem • Solving the complementary pivot problem is equivalent to finding a K-T point of the original QP. Because the QP has a convex (quadratic) objective and its linear constraints satisfy a constraint qualification, the K-T conditions are sufficient for optimality, so the point found is a solution of the QP.
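A small check of this equivalence in MATLAB, reusing the illustrative QP shown after the QUADPROG help above: the multipliers returned by quadprog satisfy K-T stationarity and complementary slackness, which is exactly what the complementary pivot formulation encodes.

% K-T check at the quadprog solution of the small illustrative QP:
%   stationarity:            H*x + f + A'*lambda = 0
%   complementary slackness: lambda'*(A*x - b)   = 0
H = [2 0; 0 2]; f = [-2; -4]; A = [1 1]; b = 1;
[x,FVAL,EXITFLAG,OUTPUT,LAMBDA] = quadprog(H,f,A,b);
stationarity = H*x + f + A'*LAMBDA.ineqlin    % approximately [0; 0]
slackness    = LAMBDA.ineqlin'*(A*x - b)      % approximately 0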

  17. 6-2 The Successive QP Approach • Consider the quadratic approximation of the objective about a point x0: q(d) = f(x0) + ∇f(x0)ᵀd + ½dᵀ∇²f(x0)d • The objective is to find d = (x - x0) that minimizes q(d) subject to the constraints linearized at x0, h(x0) + ∇h(x0)ᵀd = 0 and g(x0) + ∇g(x0)ᵀd ≥ 0, which is a QP in d.

  18. Example

  19. Example

  20. Algorithm • Step 1: Formulate the QP subproblem at the current point x(t) • Step 2: Solve the QP for the direction d, choose a step size α, and set x(t+1) = x(t) + α·d • Step 3: Check convergence; if not converged, return to Step 1.
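A bare-bones sketch of these three steps on an illustrative equality-constrained problem (the problem, the fixed step size α = 1, and the tolerance are my assumptions; the analytic Hessian of f is used here, which is exactly the limitation discussed in the remarks below):

% Basic SQP iteration:  min f(x) = x1^2 + x2^2  subject to  x1 + x2 = 1
x = [2; 1]; alpha = 1;
for t = 1:20
  df  = [2*x(1); 2*x(2)];               % gradient of f at x
  H   = [2 0; 0 2];                     % Hessian of f (Step 1: formulate the QP)
  Aeq = [1 1];                          % gradient of h(x) = x1 + x2 - 1
  beq = 1 - (x(1)+x(2));                % linearized equality: Aeq*d = -h(x)
  d   = quadprog(H,df,[],[],Aeq,beq);   % Step 2: solve the QP for the direction d
  x   = x + alpha*d;                    % take the step
  if norm(d) < 1e-8, break; end         % Step 3: check convergence
end
x                                       % converges to [0.5; 0.5]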

  21. Remarks • The original SQP method suffers from two problems: (1) the second derivative (Hessian) of the objective function is generally difficult to obtain; (2) there is no guarantee that the Hessian is positive definite at an arbitrary point. If the Hessian is not positive definite, the QP subproblem may fail (it can be nonconvex or unbounded).

  22. 6-3 Quadratic Approximation of the Lagrangian Function • To avoid both difficulties, the QP subproblem is built from the Lagrangian function L(x,u,v) of the problem rather than from f alone: minimize ∇f(x(t))ᵀd + ½dᵀH(t)d subject to the linearized constraints, where H(t) is a symmetric positive definite quasi-Newton approximation to the Hessian of the Lagrangian that is built from first-derivative information only.

  23. Algorithm - The Variable Metric Method • Given initial estimates x0, u0, v0, and a symmetric positive definite matrix H0 • Step 1: Solve the QP subproblem above (with H(t) in place of the true Hessian) for the direction d(t) and the multipliers u(t+1), v(t+1) • Step 2: Select the step size α along d(t), and set x(t+1) = x(t) + α d(t)

  24. Algorithm - The Variable Metric Method (VMCON) • Step 3: Check convergence • Step 4: Update H(t) using the gradient difference of the Lagrangian. With z = x(t+1) - x(t) and y = ∇L(x(t+1)) - ∇L(x(t)), the Powell-modified BFGS update is: θ = 1 if zᵀy ≥ 0.2 zᵀHz, otherwise θ = 0.8 zᵀHz/(zᵀHz - zᵀy); w = θy + (1-θ)Hz; H ← H - (Hz zᵀH)/(zᵀHz) + (w wᵀ)/(zᵀw)
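The same update written as a standalone helper, as a sketch (the function name is mine; the formulas mirror the H-update in the VMCON program on the following slides):

function H = bfgs_powell_update(H, z, y)
% Powell-modified BFGS update of the Lagrangian Hessian approximation.
%   z = x(t+1) - x(t),   y = grad L(x(t+1)) - grad L(x(t))
zHz = z'*H*z;
zy  = z'*y;
if zy > 0.2*zHz
  theta = 1;                         % plain BFGS step
else
  theta = 0.8*zHz/(zHz - zy);        % damp y to keep H positive definite
end
w = theta*y + (1-theta)*H*z;
H = H - (H*(z*z')*H)/zHz + (w*w')/(z'*w);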

  25. Line Search on the SQP direction • Given the direction d from the QP, instead of searching along d on the original objective function, it is more effective to search on the exact penalty function P(x; μ, σ) = f(x) + Σk μk|hk(x)| + Σj σj|min(0, gj(x))|, with the weights updated at each iteration as μk ← max(|vk|, ½(μk + |vk|)) and σj ← max(|uj|, ½(σj + |uj|)); this is exactly how mu and sigma are updated in the program below.

  26. Numerical Example

  27. MATLAB Program - VMCON
% DEFINE GLOBAL VARIABLES
global mu
global sigma
global vv
global uu
global dd
global X0
%
% INITIALIZE THE PROBLEM
%
H=eye(2); X0=[2;1]; uu=0; vv=0;
dx=100; mu=0; sigma=0; iter=0;
%
% THE WHILE LOOP
%
while abs(dx)>0.000001
  iter=iter+1;
  %
  % FIND THE GRADIENT df AND THE LAGRANGIAN GRADIENT dlgn
  %
  [df,dlgn]=sqp_Lagrangian(X0);
  %
  % DEFINE Aeq,beq SUCH THAT Aeq*d=beq,
  % AND A,b SUCH THAT A*d<=b
  %

  28. MATLAB Program - VMCON - Continued
  A=[-1 -1]; b=(X0(1)+X0(2)-1);   % linearization of g(x) = x1 + x2 - 1 >= 0
  Aeq=[X0(2) X0(1)];              % gradient of h(x) = x1*x2 - 2
  beq=-(X0(1)*X0(2)-2);           % linearized equality: Aeq*d = -h(X0)
  %
  % SOLVE THE SUB-QUADRATIC PROBLEM FOR d
  %
  [ss,FVAL,EXITFLAG,OUTPUT,LAMBDA]=quadprog(H,df,A,b,Aeq,beq);
  dd=ss';
  %
  % GET THE LAGRANGE MULTIPLIERS
  %
  uu=LAMBDA.ineqlin;
  vv=LAMBDA.eqlin;
  %
  % PREPARE THE LINE SEARCH
  %
  mu=max(abs(vv),0.5*(mu+abs(vv)));
  sigma=max(abs(uu),0.5*(sigma+abs(uu)));
  a0=0.1;
  %
  % LINE SEARCH ALONG dd WITH THE PENALTY FUNCTION AS THE OBJ
  %
  alopt=fminsearch('sqp_penalty',a0);

  29. MATLAB Program - VMCON - Continued
  %
  % GET THE NEW POINT
  %
  X=X0+alopt*dd';
  %
  % PREPARE TO UPDATE THE APPROXIMATE HESSIAN
  %
  z=X-X0;
  [df,dlgn_1]=sqp_Lagrangian(X0);
  [df,dlgn_2]=sqp_Lagrangian(X);
  y=dlgn_2-dlgn_1;
  zz=z'*H*z;
  zzp=0.2*zz;
  zy=z'*y;
  if(zy>zzp)
    theta=1;
  else
    theta=0.8*z'*H*z/(zz-zy);
  end
  w=theta*y+(1-theta)*H*z;
  %
  % UPDATE THE APPROXIMATE HESSIAN
  %
  H=H-(H*z*z'*H)/zz+w*w'/(z'*w);

  30. MATLAB Program - VMCON - Continued
  %
  % CALCULATE THE OBJ AND EQUALITY CONSTRAINT AT THE NEW POINT
  %
  hh=X(1)*X(2)-2;
  ddx=X-X0;
  obj=sqp_obj(X);
  %
  % UPDATE THE NEW POINT
  %
  X0=X;
  %
  % CHECK CONVERGENCE
  %
  dx=sqrt(ddx(1)^2+ddx(2)^2);
end
%
% OUTPUT THE RESULT
%
iter
X
obj
hh
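The program above calls three helper M-files that are not reproduced in the transcript: sqp_obj, sqp_Lagrangian, and sqp_penalty. The sketches below show one plausible shape for them. The constraints g(x) = x1 + x2 - 1 ≥ 0 and h(x) = x1*x2 - 2 = 0 are taken from the linearizations in the program, but the objective f(x) = x1² + x2² is only a stand-in (the actual objective of the numerical example is not shown), and the multiplier sign convention in sqp_Lagrangian is likewise an assumption.

% file: sqp_obj.m  -- stand-in objective (the original file is not shown)
function f = sqp_obj(x)
f = x(1)^2 + x(2)^2;

% file: sqp_Lagrangian.m  -- gradient of f and of L = f - uu*g + vv*h,
% using quadprog's multiplier sign convention (an assumption)
function [df,dlgn] = sqp_Lagrangian(x)
global uu vv
df   = [2*x(1); 2*x(2)];   % grad f for the stand-in objective
dg   = [1; 1];             % grad of g(x) = x1 + x2 - 1
dh   = [x(2); x(1)];       % grad of h(x) = x1*x2 - 2
dlgn = df - uu*dg + vv*dh;

% file: sqp_penalty.m  -- exact penalty along the QP direction dd,
% evaluated as a function of the step length al (minimized by fminsearch)
function p = sqp_penalty(al)
global mu sigma dd X0
x = X0 + al*dd';
h = x(1)*x(2) - 2;
g = x(1) + x(2) - 1;
p = sqp_obj(x) + mu*abs(h) + sigma*abs(min(0,g));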

  31. MATLAB Program Using Fmincon
>> help fmincon
FMINCON  Finds a constrained minimum of a function of several variables.
   FMINCON solves problems of the form:
       min F(X)   subject to:  A*X <= B, Aeq*X = Beq  (linear constraints)
        X                      C(X) <= 0, Ceq(X) = 0  (nonlinear constraints)
                               LB <= X <= UB

  32. MATLAB Program Using Fmincon-Continued X=FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON) subjects the minimization to the constraints defined in NONLCON. The function NONLCON accepts X and returns the vectors C and Ceq, representing the nonlinear inequalities and equalities respectively. FMINCON minimizes FUN such that C(X)<=0 and Ceq(X)=0.

  33. MATLAB Program Using Fmincon - Example
X0=[2;1];
A=[-1 -1]; B=-1;                  % -x1-x2 <= -1, i.e. x1+x2 >= 1
Aeq=[]; Beq=[]; LB=[]; UB=[];
X=fmincon('sqp',X0,A,B,Aeq,Beq,LB,UB,'sqp_nlcon')

% file: sqp_nlcon.m  (nonlinear equality constraint x1*x2 = 2)
function [c,ceq]=sqp_nlcon(x)
c=[];
ceq=x(1)*x(2)-2;
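The objective file 'sqp' named in the call above is not reproduced in the transcript; a stand-in of the required form (the actual objective of the numerical example may differ) would be:

% file: sqp.m  -- stand-in objective for the fmincon call (placeholder only)
function f = sqp(x)
f = x(1)^2 + x(2)^2;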
