
MA05136 Numerical Optimization

Presentation Transcript


  1. MA05136 Numerical Optimization 05 Conjugate Gradient Methods

  2. Conjugate gradient methods
  • For convex quadratic problems:
    • the steepest descent method converges slowly;
    • Newton's method is expensive, since it requires solving Ax = b directly;
    • the conjugate gradient method solves Ax = b iteratively.
  • Outline:
    • Conjugate directions
    • Linear conjugate gradient method
    • Nonlinear conjugate gradient method

  3. Conjugate Gradient (CG)
  • A method for minimizing quadratic functions.
  • Low-storage method: CG stores only vector information.
  • CG attains superlinear convergence for nice problems or when it is properly scaled.
  • Great for solving QP subproblems.

  4. Quadratic optimization problem
  • Consider the quadratic optimization problem
      min f(x) = (1/2) x^T A x − b^T x,
    where A is symmetric positive definite.
  • The optimal solution is where ∇f(x) = 0, i.e., where Ax = b.
  • Define the residual r(x) = ∇f(x) = Ax − b.
  • Goal: solve Ax = b without inverting A (an iterative method).
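
  As a concrete illustration, here is a minimal Python/NumPy sketch of this setup; the problem size, random data, and function names are illustrative assumptions, not part of the slides.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)      # symmetric positive definite by construction
    b = rng.standard_normal(n)

    def f(x):
        # quadratic objective f(x) = 1/2 x^T A x - b^T x
        return 0.5 * x @ A @ x - b @ x

    def residual(x):
        # r(x) = grad f(x) = A x - b
        return A @ x - b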

  5. Steepest descent + line search
  • Given an initial guess x_0.
  • The search direction: p_k = −∇f_k = −r_k = b − A x_k.
  • The optimal step length minimizes f(x_k + α p_k) over α.
  • Its solution is α_k = −(r_k^T p_k) / (p_k^T A p_k) = (r_k^T r_k) / (r_k^T A r_k).
  • Update x_{k+1} = x_k + α_k p_k. Go to step 2.
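
  A hedged sketch of this loop in Python/NumPy, reusing the A and b defined in the snippet above; the tolerance and iteration cap are assumptions.

    def steepest_descent(A, b, x0, tol=1e-10, max_iter=10000):
        x = x0.copy()
        for _ in range(max_iter):
            r = A @ x - b                    # r_k = grad f(x_k)
            if np.linalg.norm(r) < tol:
                break
            p = -r                           # steepest-descent direction p_k = -r_k
            alpha = (r @ r) / (p @ A @ p)    # exact line search step for the quadratic
            x = x + alpha * p                # x_{k+1} = x_k + alpha_k p_k
        return x

    x_sd = steepest_descent(A, b, np.zeros(n))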

  6. Conjugate direction
  • For a symmetric positive definite matrix A, one can define the A-inner-product as ⟨x, y⟩_A = x^T A y.
  • The A-norm is defined as ||x||_A = sqrt(x^T A x) = sqrt(⟨x, x⟩_A).
  • Two vectors x and y are A-conjugate, for a symmetric positive definite matrix A, if x^T A y = 0,
    i.e., x and y are orthogonal under the A-inner-product.
  • Conjugate directions are a set of search directions {p_0, p_1, p_2, …} such that p_i^T A p_j = 0 for any i ≠ j.
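
  These definitions translate directly into code; a small Python/NumPy sketch (the helper names are assumptions):

    def a_inner(x, y, A):
        # A-inner product <x, y>_A = x^T A y
        return x @ A @ y

    def a_norm(x, A):
        # A-norm ||x||_A = sqrt(x^T A x)
        return np.sqrt(a_inner(x, x, A))

    def is_a_conjugate(x, y, A, tol=1e-10):
        # x and y are A-conjugate (orthogonal in the A-inner product) if x^T A y = 0
        return abs(a_inner(x, y, A)) < tol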

  7. Example: steepest descent vs. conjugate directions (figure comparing the search paths of the two methods).

  8. Conjugate gradient
  • A better result can be obtained if the current search direction combines the previous one:
      p_{k+1} = −r_{k+1} + β_{k+1} p_k.
  • Choose β_{k+1} so that p_{k+1} is A-conjugate to p_k, which gives
      β_{k+1} = (r_{k+1}^T A p_k) / (p_k^T A p_k).

  9. CG — preliminary version
  • Given x_0, set r_0 = A x_0 − b, p_0 = −r_0.
  • For k = 0, 1, 2, … until ||r_k|| = 0:
    • α_k = −(r_k^T p_k) / (p_k^T A p_k)
    • x_{k+1} = x_k + α_k p_k
    • r_{k+1} = A x_{k+1} − b
    • β_{k+1} = (r_{k+1}^T A p_k) / (p_k^T A p_k)
    • p_{k+1} = −r_{k+1} + β_{k+1} p_k

  10. The linear CG algorithm
  • With some linear algebra, the algorithm can be simplified as follows.
  • Given x_0, set r_0 = A x_0 − b, p_0 = −r_0.
  • For k = 0, 1, 2, … until ||r_k|| = 0:
    • α_k = (r_k^T r_k) / (p_k^T A p_k)
    • x_{k+1} = x_k + α_k p_k
    • r_{k+1} = r_k + α_k A p_k
    • β_{k+1} = (r_{k+1}^T r_{k+1}) / (r_k^T r_k)
    • p_{k+1} = −r_{k+1} + β_{k+1} p_k
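
  A minimal Python/NumPy sketch of this simplified iteration (the function name, tolerance, and iteration cap are assumptions; the updates follow the formulas above):

    def linear_cg(A, b, x0, tol=1e-10, max_iter=None):
        x = x0.copy()
        r = A @ x - b                          # r_0 = A x_0 - b
        p = -r                                 # p_0 = -r_0
        max_iter = b.size if max_iter is None else max_iter
        for _ in range(max_iter):
            if np.linalg.norm(r) < tol:
                break
            Ap = A @ p                         # the single matrix-vector product
            alpha = (r @ r) / (p @ Ap)         # alpha_k = r_k^T r_k / p_k^T A p_k
            x = x + alpha * p                  # x_{k+1} = x_k + alpha_k p_k
            r_new = r + alpha * Ap             # r_{k+1} = r_k + alpha_k A p_k
            beta = (r_new @ r_new) / (r @ r)   # beta_{k+1} = r_{k+1}^T r_{k+1} / r_k^T r_k
            p = -r_new + beta * p              # p_{k+1} = -r_{k+1} + beta_{k+1} p_k
            r = r_new
        return x

  On the small test problem above, linear_cg(A, b, np.zeros(n)) should agree with np.linalg.solve(A, b) up to the tolerance.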

  11. Properties of linear CG
  • One matrix-vector multiplication per iteration.
  • Only four vectors are required (x_k, r_k, p_k, A p_k).
  • The matrix A can be stored implicitly: only the product A p_k is needed.
  • CG is guaranteed to converge in at most r iterations, where r is the number of distinct eigenvalues of A.
  • If A has eigenvalues λ_1 ≤ λ_2 ≤ … ≤ λ_n, then
      ||x_{k+1} − x*||_A^2 ≤ ((λ_{n−k} − λ_1) / (λ_{n−k} + λ_1))^2 ||x_0 − x*||_A^2.
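
  A quick numerical check of the finite-termination property, reusing linear_cg and rng from the sketches above; the two-eigenvalue construction is an assumption made purely for illustration.

    # SPD matrix with only two distinct eigenvalues (1 and 10); in exact arithmetic
    # CG then reaches the exact solution in at most 2 iterations.
    Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
    A2 = Q @ np.diag([1.0] * 10 + [10.0] * 10) @ Q.T
    b2 = rng.standard_normal(20)
    x2 = linear_cg(A2, b2, np.zeros(20), tol=0.0, max_iter=2)
    print(np.linalg.norm(A2 @ x2 - b2))   # round-off level: solved after 2 iterations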

  12. CG for nonlinear optimization: the Fletcher-Reeves method
  • Given x_0, set p_0 = −∇f_0.
  • For k = 0, 1, …, until ∇f_k = 0:
    • Compute an optimal step length α_k and set x_{k+1} = x_k + α_k p_k.
    • Evaluate ∇f_{k+1}.
    • β_{k+1}^FR = (∇f_{k+1}^T ∇f_{k+1}) / (∇f_k^T ∇f_k)
    • p_{k+1} = −∇f_{k+1} + β_{k+1}^FR p_k
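
  A hedged Python/NumPy sketch of the Fletcher-Reeves loop; the crude backtracking line search is a stand-in for the step-length computation (in practice a strong-Wolfe line search is used), and the function names and tolerances are assumptions.

    def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=1000):
        x = x0.copy()
        g = grad(x)
        p = -g                                    # p_0 = -grad f_0
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            # crude backtracking (Armijo) line search, capped for safety
            alpha = 1.0
            for _ in range(60):
                if f(x + alpha * p) <= f(x) + 1e-4 * alpha * (g @ p):
                    break
                alpha *= 0.5
            x = x + alpha * p                     # x_{k+1} = x_k + alpha_k p_k
            g_new = grad(x)                       # evaluate grad f_{k+1}
            beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves beta_{k+1}
            p = -g_new + beta * p                 # p_{k+1} = -grad f_{k+1} + beta p_k
            g = g_new
        return x

  For instance, on the quadratic from slide 4 one can call fletcher_reeves(f, residual, np.zeros(n)), since the residual there is exactly the gradient.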

  13. Other choices of β (write y_k = ∇f_{k+1} − ∇f_k)
  • Polak-Ribière: β_{k+1}^PR = (∇f_{k+1}^T y_k) / (∇f_k^T ∇f_k)
  • Hestenes-Stiefel: β_{k+1}^HS = (∇f_{k+1}^T y_k) / (y_k^T p_k)
  • Dai-Yuan (Y. Dai and Y. Yuan, 1999): β_{k+1}^DY = (∇f_{k+1}^T ∇f_{k+1}) / (y_k^T p_k)
  • Hager-Zhang (W. W. Hager and H. Zhang, 2005): β_{k+1}^HZ = (y_k − 2 p_k (y_k^T y_k) / (y_k^T p_k))^T ∇f_{k+1} / (y_k^T p_k)
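
  A small sketch of these formulas in Python/NumPy, in terms of the new gradient g_new, the old gradient g_old, and the previous direction p (with y = g_new − g_old); these are the standard textbook forms, so treat the exact variants as an assumption.

    def beta_pr(g_new, g_old):
        # Polak-Ribiere
        return (g_new @ (g_new - g_old)) / (g_old @ g_old)

    def beta_hs(g_new, g_old, p):
        # Hestenes-Stiefel
        y = g_new - g_old
        return (g_new @ y) / (y @ p)

    def beta_dy(g_new, g_old, p):
        # Dai-Yuan (1999)
        y = g_new - g_old
        return (g_new @ g_new) / (y @ p)

    def beta_hz(g_new, g_old, p):
        # Hager-Zhang (2005)
        y = g_new - g_old
        yp = y @ p
        return ((y - (2.0 * (y @ y) / yp) * p) @ g_new) / yp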
