
NLPQL

References:
- Schittkowski, K. "Solving Nonlinear Least Squares Problems by a General Purpose SQP Method." In Trends in Mathematical Optimization.
- Schittkowski, K. "NLPQL: A FORTRAN Subroutine Solving Constrained Nonlinear Programming Problems." Annals of Operations Research, Vol. 5.
- NLPQL User Manual.
- Belegundu, A. D. Optimization Concepts and Applications in Engineering.





Presentation Transcript


    1. NLPQL Sequential Quadratic Programming 6/17/05

    3. Sequential Quadratic Programming
    IMHO: this is the algorithm of choice for smooth (i.e., continuously differentiable) constrained optimization problems.
    - Preferred by Vanderplaats, Onwubiko, Belegundu, Rao, and Lasdon.
    - Works with both feasible and infeasible initial design points.
    - Works well with equality constraints.
    - Requires fewer function evaluations than GRG.
    - Superior rate of convergence.

    4. NLPQL
    - Developed and continually improved by Prof. Dr. K. Schittkowski at the University of Bayreuth. First version in 1981; latest version from 2000.
    - Probably the most heavily tested implementation, with over 900 test cases.
    - Hundreds of commercial applications. Customers include General Electric, Rolls-Royce, Siemens, BMW, and Dow Chemical.

    5. NLPQL Formulation
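    The formulation slide itself is an image that did not survive the transcript. For reference, the general problem that NLPQL solves, as stated in Schittkowski's papers, is:

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{subject to} \quad & g_j(x) = 0, \qquad j = 1, \dots, m_e, \\
                        & g_j(x) \ge 0, \qquad j = m_e + 1, \dots, m, \\
                        & x_l \le x \le x_u,
\end{aligned}
```

    where f and the g_j are assumed to be continuously differentiable.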

    6. Search Direction In Steepest Descent and GRG the search direction was found using gradient information. For SQP, the search direction is found by solving a subproblem with a quadratic objective and linear constraints. The quadratic objective is an approximation to the Lagrangian.
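    As a sketch in standard SQP notation (the symbols here are assumptions, not taken from the slide): at iterate x_k, with B_k the current quasi-Newton approximation of the Hessian of the Lagrangian, the subproblem is

```latex
\begin{aligned}
\min_{d \in \mathbb{R}^n} \quad & \nabla f(x_k)^T d + \tfrac{1}{2}\, d^T B_k\, d \\
\text{subject to} \quad & \nabla g_j(x_k)^T d + g_j(x_k) = 0, \qquad j = 1, \dots, m_e, \\
                        & \nabla g_j(x_k)^T d + g_j(x_k) \ge 0, \qquad j = m_e + 1, \dots, m.
\end{aligned}
```

    Its solution d_k is the search direction for the current iteration.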

    8. Line Search
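    The line-search slide is a figure that did not survive the transcript. In outline (a paraphrase, not the exact formula from the slide), the new iterate is

```latex
x_{k+1} = x_k + \alpha_k d_k, \qquad \alpha_k \in (0, 1],
```

    where the step length alpha_k is chosen to sufficiently decrease a merit function that balances objective reduction against constraint violation (Schittkowski's papers describe an augmented Lagrangian merit function). This is the stabilizing line search referred to on slide 9.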

    9. NLPQL Implementation
    - Generates a sequence of quadratic programming subproblems, obtained by a quadratic approximation of the Lagrangian function and a linearization of the constraints.
    - Second-order information is updated by a quasi-Newton formula (similar to BFGS); see the sketch below.
    - A line search is used to stabilize the method.
    - The only two user parameters are the maximum number of iterations and the desired final accuracy; the final accuracy should not be smaller than the minimum absolute gradient step.
    - As a user, you cannot tune the algorithm. You can primarily tune the problem formulation: ensure the problem is scaled, verify that your gradients are computed to adequate accuracy, and start the algorithm from multiple points.
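    As a rough illustration of the quasi-Newton update mentioned above, here is a minimal sketch of Powell's damped BFGS formula, the variant commonly used in SQP codes; this illustrates the general technique, not Schittkowski's actual source:

```python
import numpy as np

def damped_bfgs_update(B, s, y):
    """One damped BFGS update of the Lagrangian Hessian approximation B.

    B : current symmetric positive-definite approximation, shape (n, n)
    s : step in the variables, s = x_{k+1} - x_k
    y : change in the gradient of the Lagrangian between the two iterates

    Powell's damping keeps B positive definite even when y^T s <= 0,
    which plain BFGS cannot guarantee on nonconvex problems.
    """
    Bs = B @ s
    sBs = s @ Bs          # > 0 while B stays positive definite
    sy = s @ y
    if sy < 0.2 * sBs:
        # Blend y toward B s until the curvature condition holds.
        theta = 0.8 * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs
        sy = s @ y        # now equals 0.2 * sBs > 0
    # Standard BFGS rank-two update with the (possibly damped) y.
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```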

    10. NLPQL within iSIGHT
    - Engineous has limited the line search to a maximum of 10 evaluations. You cannot change this.
    - The print level is 4 for full printout. You cannot change this.
    - Diagnostics are minimal. Look at the Karush-Kuhn-Tucker conditions and see whether they are approaching zero (see the sketch below).
    - The Lagrange multipliers suggest which constraints to relax for the greatest gain; worth investigating with a Tradeoff Analysis.
    - If the objective or gradient values are greater than SCBOU (1000), NLPQL automatically scales them by 1/sqrt(value).
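    To make the KKT check concrete, here is a minimal sketch of the residual one can compute offline from the multipliers the optimizer reports; the helper is hypothetical and, for simplicity, assumes inequality constraints only, written as g_j(x) >= 0:

```python
import numpy as np

def kkt_residual(grad_f, grad_g, lam, g):
    """Rough measure of how close a design point is to KKT optimality.

    grad_f : gradient of the objective at x, shape (n,)
    grad_g : constraint gradients at x, one row per constraint, shape (m, n)
    lam    : Lagrange multipliers reported by the optimizer, shape (m,)
    g      : constraint values g_j(x) >= 0 at x, shape (m,)
    """
    # Stationarity: grad f - sum_j lam_j * grad g_j should vanish.
    stationarity = np.linalg.norm(grad_f - grad_g.T @ lam, np.inf)
    # Feasibility: a negative g_j is a violated inequality.
    feasibility = np.linalg.norm(np.minimum(g, 0.0), np.inf)
    # Complementarity: lam_j * g_j should be zero for every j.
    complementarity = np.linalg.norm(lam * g, np.inf)
    return max(stationarity, feasibility, complementarity)
```

    A residual approaching zero is the "KKT conditions approaching zero" behavior the slide describes.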


    14. NLPQL Termination
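    The termination slide is a figure that did not survive the transcript. In outline (a paraphrase of the usual criterion, not the exact test in the code): NLPQL stops when the Karush-Kuhn-Tucker conditions are satisfied to within the user-specified final accuracy ACC, roughly

```latex
\left\| \nabla_x L(x_k, \lambda_k) \right\| \le \text{ACC}
\qquad \text{and} \qquad
\sum_{j=1}^{m_e} |g_j(x_k)| + \sum_{j=m_e+1}^{m} |\min(0,\, g_j(x_k))| \le \text{ACC},
```

    which is why slide 15 lists an overly small termination parameter as one possible reason for failure to terminate cleanly.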

    15. NLPQL Termination Reasons
    If you have scaled the model, are using a good finite-difference step size, and have tried multiple starting points, then the three remaining possibilities are:
    - The termination parameter is too small.
    - The constraints are contradictory, so the set of feasible solutions is empty.
    - The constraints are feasible, but some are degenerate.

    16. Basic Parameters

    17. Advanced Parameters

    18. Lab

    19. Lab

    20. Lab Continued
    Task 2: Run the cantilevered beam again, but this time provide a scale factor for vol of 100000, a scale factor of 5 for b1-b5, and a scale factor of 40 for h1-h5. (Note: these should be the same settings that you used for exterior penalty; a sketch of what scaling does follows below.)
    - What was the optimum objective value?
    - How many function evaluations did it take?
    - What was the reason for termination?
    - Compare the number of function evaluations with that of exterior penalty. Which is better?
    - Review the Lagrange multipliers. Which constraint can be relaxed for the greatest gain in the objective? Verify your analysis by running a tradeoff analysis.
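    For intuition about what the scale factors do, here is a minimal sketch of the usual convention, under which the optimizer works on value/scale so that every design variable and response is roughly order one. The function names are hypothetical and iSIGHT's exact convention may differ:

```python
# Hypothetical illustration of design-variable scaling (not the iSIGHT API).
# The optimizer sees value / scale, so vol ~ 1e5, b ~ 5, h ~ 40 all become
# roughly O(1), which gradient-based codes such as NLPQL handle far better.
SCALE = {"vol": 100000.0, "b1": 5.0, "h1": 40.0}  # factors from Task 2

def to_optimizer(name: str, value: float) -> float:
    """Map a physical value to the scaled value the optimizer works on."""
    return value / SCALE[name]

def to_simulation(name: str, value: float) -> float:
    """Map a scaled optimizer value back to physical units."""
    return value * SCALE[name]
```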

    21. Lab Continued
