
Presentation Transcript


  1. Jang, Sun, and Mizutani, Neuro-Fuzzy and Soft Computing, Chapter 6: Derivative-Based Optimization. Dan Simon, Cleveland State University

  2. Outline • Gradient Descent • The Newton-Raphson Method • The Levenberg–Marquardt Algorithm • Trust Region Methods

  3. Contour plot Gradient descent: head downhill http://en.wikipedia.org/wiki/Gradient_descent

  4. Fuzzy controller optimization: Find the MF parameters that minimize tracking error: minimize E(θ) with respect to θ, where θ = n-element vector of MF parameters and E(θ) = controller tracking error

  5. Contour plot of x^2 + 10y^2. Step size η too small: convergence takes a long time; η too large: overshoot the minimum. MATLAB code for the contour plot: x = -3:0.1:3; y = -2:0.1:2; for i = 1:length(x), for j = 1:length(y), z(i,j) = x(i)^2 + 10*y(j)^2; end, end, contour(x, y, z')   % z is transposed so its rows correspond to y, as contour expects
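
A minimal MATLAB sketch (not from the slides) of gradient descent on this same quadratic, with an assumed fixed step size eta; try eta = 0.005 (slow convergence) versus eta = 0.15 (overshoots and diverges along y, where the curvature is largest):

gradE = @(t) [2*t(1); 20*t(2)];      % gradient of E(x,y) = x^2 + 10*y^2
theta = [-3; 2];                     % starting point (x, y)
eta = 0.05;                          % fixed step size (illustrative value)
for k = 1:50
    theta = theta - eta*gradE(theta);    % gradient descent update
end
disp(theta)                          % approaches the minimum at (0, 0)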

  6. Gradient descent is a local optimization method: it can get stuck in a local minimum (illustrated with the Rastrigin function, which has many local minima)

  7. Step Size Selection. How should we select the step size ηk in the gradient descent update θk+1 = θk − ηk gk, where gk = ∇E(θk)? • ηk too small: convergence takes a long time • ηk too large: overshoot the minimum. Line minimization: choose ηk = argmin over η of E(θk − η gk)
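
A sketch of line minimization on the quadratic example, using fminbnd to pick ηk at each iteration (the search interval [0, 1] is an assumed, illustrative choice):

E = @(t) t(1)^2 + 10*t(2)^2;
gradE = @(t) [2*t(1); 20*t(2)];
theta = [-3; 2];
for k = 1:20
    g = gradE(theta);
    eta = fminbnd(@(a) E(theta - a*g), 0, 1);   % minimize E along the descent direction
    theta = theta - eta*g;
end
disp(theta)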

  8. Step Size Selection. Recall the general Taylor series expansion: f(x) = f(x0) + f'(x0)(x – x0) + … Therefore, E(θk+1) ≈ E(θk) + gkᵀ(θk+1 − θk) + ½(θk+1 − θk)ᵀH(θk+1 − θk), where gk = ∇E(θk) and H is the matrix of second derivatives of E. The minimum of E(θk+1) occurs when its derivative with respect to θk+1 is 0, which means that gk + H(θk+1 − θk) = 0, that is, θk+1 = θk − H⁻¹gk

  9. Step Size Selection. Compare the two equations: the gradient descent update θk+1 = θk − ηk gk and the result above, θk+1 = θk − H⁻¹gk. We see that replacing the step size ηk with H⁻¹ gives Newton's method, also called the Newton-Raphson method

  10. Jang, Fig. 6.3 – The Newton-Raphson method approximates E(θ) as a quadratic function

  11. Step Size Selection. The Newton-Raphson method: θk+1 = θk − H⁻¹gk. The matrix of second derivatives H is called the Hessian (after Otto Hesse, 19th century). How easy or difficult is it to calculate the Hessian? What if the Hessian is not invertible?
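
For the quadratic contour-plot example the Hessian is constant, and a single Newton-Raphson step lands exactly on the minimum; a minimal sketch:

theta = [-3; 2];
g = [2*theta(1); 20*theta(2)];   % gradient of E = x^2 + 10*y^2
H = [2 0; 0 20];                 % Hessian (constant for a quadratic E)
theta = theta - H\g;             % Newton-Raphson update: theta - inv(H)*g
disp(theta)                      % reaches the minimum [0; 0] in one step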

  12. Step Size Selection. The Levenberg–Marquardt algorithm: θk+1 = θk − (H + λI)⁻¹gk. λ is a parameter selected to balance between steepest descent (λ = ∞) and Newton-Raphson (λ = 0). We can also control the step size with another parameter η: θk+1 = θk − η(H + λI)⁻¹gk
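
A sketch of the Levenberg–Marquardt update on the same quadratic; the values of lambda and eta below are arbitrary, illustrative choices:

theta = [-3; 2];
lambda = 1;     % large lambda -> step approaches the steepest descent direction
eta = 1;        % optional overall step-size control
for k = 1:20
    g = [2*theta(1); 20*theta(2)];
    H = [2 0; 0 20];
    theta = theta - eta * ((H + lambda*eye(2)) \ g);   % Levenberg-Marquardt update
end
disp(theta)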

  13. Jang, Fig. 6.5 – Illustration of Levenberg–Marquardt gradient descent

  14. Step Size Selection. Trust region methods: These are used in conjunction with the Newton-Raphson method, which approximates E as quadratic in θ: E(θk + δk) ≈ E(θk) + gkᵀδk + ½δkᵀHδk. If we use Newton-Raphson to minimize E with a step δk, then we are implicitly assuming that E(θk + δk) will be equal to the above expression.

  15. Step Size Selection. In practice, we expect the actual improvement in E to be somewhat smaller than the improvement predicted by the quadratic approximation. We limit the step δk so that |δk| < Rk. Our “trust” in the quadratic approximation is proportional to ρk.

  16. Step Size Selection. ρk = ratio of actual to expected (predicted) improvement in E. Rk = trust region radius: the maximum allowable size of the step δk
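
A sketch of the trust-region bookkeeping described above: compute ρk, accept or reject the step, and grow or shrink Rk. The thresholds (0.25, 0.75) and factors (0.5, 2) are conventional choices, not taken from the slides:

E = @(t) t(1)^2 + 10*t(2)^2;
gradE = @(t) [2*t(1); 20*t(2)];
H = [2 0; 0 20];
theta = [-3; 2];
R = 1;                                     % initial trust region radius
for k = 1:20
    g = gradE(theta);
    delta = -(H\g);                        % step that minimizes the quadratic model
    if norm(delta) > R
        delta = R * delta / norm(delta);   % enforce |delta| <= R
    end
    predicted = -(g'*delta + 0.5*delta'*H*delta);   % improvement predicted by the model
    actual = E(theta) - E(theta + delta);           % improvement actually obtained
    rho = actual / predicted;
    if rho > 0
        theta = theta + delta;             % accept the step only if E decreased
    end
    if rho < 0.25
        R = 0.5*R;                         % poor agreement: shrink the trust region
    elseif rho > 0.75
        R = 2*R;                           % good agreement: expand the trust region
    end
end
disp(theta)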
