ChE 250 Numeric Methods

Lecture #15, Chapra, Chapter 14: Multidimensional Unconstrained Optimization

2007-02-23

Multidimensional Unconstrained Optimization
  • In two or more dimensions, simple bracketing methods like the golden-section search are no longer feasible, so we must resort to open methods that generally start from a single point
  • If the function and its partials are readily calculated, then gradient, or ‘descent’ methods can be employed
  • Otherwise a non-gradient, or direct method is used
Non-Gradient
  • Random Search
    • Brute force attack by calculating many points and simply recording the point with the highest function value
    • Only useful for situations where the function is easily calculated on computer
    • Might be useful to find initialization points for another, faster direct method
    • Can be used for any function, whether nonlinear or discontinuous, which is a distinct advantage over the other methods of this chapter
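A minimal sketch of brute-force random search; the objective f(x, y) = 2xy + 2x - x² - 2y² (maximum value 2 at (2, 1)) and the sampling box are illustrative assumptions, not prescribed by the slides:

```python
import random

def f(x, y):
    # Illustrative objective (assumed): maximum value 2 at x=2, y=1
    return 2*x*y + 2*x - x**2 - 2*y**2

def random_search(f, xlo, xhi, ylo, yhi, n=20000, seed=0):
    """Sample n random points in the box and record the best one seen."""
    rng = random.Random(seed)
    fbest, xbest, ybest = float("-inf"), None, None
    for _ in range(n):
        x = xlo + (xhi - xlo) * rng.random()
        y = ylo + (yhi - ylo) * rng.random()
        fxy = f(x, y)
        if fxy > fbest:
            fbest, xbest, ybest = fxy, x, y
    return fbest, xbest, ybest

fbest, xbest, ybest = random_search(f, -2, 4, -2, 4)
```

The answer is only as good as the sampling density, which is why the slide suggests using the best point found to initialize a faster method.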
Pattern Searches
  • Univariate
    • Use 1-D methods and iterate
    • Find the extremum of one variable while holding the others constant
    • Then move to the next variable, and the next, etc.
    • This method gets to the general neighborhood of the maximum quickly, but then can get stuck
  • Pattern searches make use of the lines 2-4-6 and 1-3-5 to find the maximum more quickly
  • Questions?
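The univariate steps above can be sketched by cycling a 1-D golden-section maximizer over one coordinate at a time; the quadratic objective and the search bounds [-10, 10] are illustrative assumptions:

```python
def f(x, y):
    # Illustrative quadratic (an assumption, not from the slides): max 2 at (2, 1)
    return 2*x*y + 2*x - x**2 - 2*y**2

def golden_max(g, a, b, tol=1e-8):
    # 1-D golden-section search for the maximum of a unimodal g on [a, b]
    R = (5**0.5 - 1) / 2
    x1, x2 = b - R*(b - a), a + R*(b - a)
    g1, g2 = g(x1), g(x2)
    while b - a > tol:
        if g1 < g2:                 # maximum lies in [x1, b]
            a, x1, g1 = x1, x2, g2
            x2 = a + R*(b - a)
            g2 = g(x2)
        else:                       # maximum lies in [a, x2]
            b, x2, g2 = x2, x1, g1
            x1 = b - R*(b - a)
            g1 = g(x1)
    return (a + b) / 2

def univariate_search(f, x, y, sweeps=30, lo=-10.0, hi=10.0):
    # Maximize over x with y held constant, then over y with x held constant
    for _ in range(sweeps):
        x = golden_max(lambda t: f(t, y), lo, hi)
        y = golden_max(lambda t: f(x, t), lo, hi)
    return x, y

x, y = univariate_search(f, -1.0, 1.0)
```

Each sweep moves the point toward the maximum along one axis at a time, which is exactly the stair-step pattern the slide's figure lines trace out.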
Gradient Methods
  • Gradient methods use the derivative of the objective function to locate the optima
  • The gradient is a vector of the partial derivatives for each variable
  • It tells us the direction of the steepest incline as well as the magnitude of that incline
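For concreteness, here is the gradient of an assumed quadratic f(x, y) = 2xy + 2x - x² - 2y² evaluated at (-1, 1); both the function and the evaluation point are illustrative:

```python
import math

def f(x, y):
    # Illustrative objective, not prescribed by the slides
    return 2*x*y + 2*x - x**2 - 2*y**2

def grad(x, y):
    # Analytic partials: df/dx = 2y + 2 - 2x, df/dy = 2x - 4y
    return (2*y + 2 - 2*x, 2*x - 4*y)

gx, gy = grad(-1.0, 1.0)        # direction of steepest ascent at (-1, 1)
steepness = math.hypot(gx, gy)  # magnitude of that incline
```

Here the gradient is (6, -6): the surface climbs fastest toward larger x and smaller y, with slope 6√2 along that direction.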
Gradient Methods
  • The Hessian
    • For 1-D problems, we know the optimum occurs where the derivative equals zero
    • For multi-D cases, it is more complicated
  • A point can appear to be a minimum when viewed in only the x or only the y cross-section, yet not be a true extremum of the surface
Gradient Methods
  • The Hessian is an indication of second-order curvature and is also used in Newton’s method
  • If |H| < 0 we have a saddle point; if |H| > 0 the point is a minimum (fxx > 0) or a maximum (fxx < 0); if |H| = 0 the test is inconclusive
  • Scilab example
  • Questions?
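The determinant test can be written out directly for the 2×2 case; the example Hessian entries below come from the illustrative quadratic f = 2xy + 2x - x² - 2y², not from the slides:

```python
def classify(fxx, fyy, fxy):
    """Classify a stationary point of f(x, y) from its second partials."""
    detH = fxx * fyy - fxy**2   # |H| for the symmetric 2x2 Hessian
    if detH < 0:
        return "saddle point"
    if detH > 0:
        return "minimum" if fxx > 0 else "maximum"
    return "inconclusive"       # |H| = 0: higher-order terms decide

# For f = 2xy + 2x - x^2 - 2y^2: fxx = -2, fyy = -4, fxy = 2, so |H| = 8 - 4 = 4
kind = classify(-2.0, -4.0, 2.0)
```

With |H| = 4 > 0 and fxx = -2 < 0, the stationary point is a maximum, which matches the x- and y-cross-section picture from the previous slide.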
Steepest Ascent Method
  • Assuming you can calculate the gradient at any point, start at (x0, y0), head in the direction of the gradient, and stop at the maximum along that straight line
  • This is accomplished with a transformed variable, h, measured along the gradient direction from (x0, y0)
Steepest Ascent Method
  • First calculate the partial equations
  • Then substitute in the initial x,y values
  • Then define h in the direction of the gradient
  • Substitute x and y equations into the function
  • Use 1-D methods to find the maximum
  • Then start the process again at that maximum point
  • Example 14.4
  • Questions?
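The steps above can be sketched as follows. The objective and gradient are the illustrative quadratic used earlier (treat them as assumptions rather than the textbook's exact example), and the bound h ∈ [0, 2] on the line search is also an assumption:

```python
def f(x, y):
    # Illustrative objective (assumed): maximum 2 at (2, 1)
    return 2*x*y + 2*x - x**2 - 2*y**2

def grad(x, y):
    # The "partial equations", evaluated at the current point
    return (2*y + 2 - 2*x, 2*x - 4*y)

def golden_max(g, a, b, tol=1e-10):
    # 1-D golden-section maximizer used for the line search in h
    R = (5**0.5 - 1) / 2
    x1, x2 = b - R*(b - a), a + R*(b - a)
    g1, g2 = g(x1), g(x2)
    while b - a > tol:
        if g1 < g2:
            a, x1, g1 = x1, x2, g2
            x2 = a + R*(b - a)
            g2 = g(x2)
        else:
            b, x2, g2 = x2, x1, g1
            x1 = b - R*(b - a)
            g1 = g(x1)
    return (a + b) / 2

def steepest_ascent(f, grad, x, y, iters=50):
    for _ in range(iters):
        gx, gy = grad(x, y)
        # transformed variable h: the function restricted to the gradient line
        line = lambda h: f(x + gx*h, y + gy*h)
        h = golden_max(line, 0.0, 2.0)   # 1-D maximization along that line
        x, y = x + gx*h, y + gy*h        # restart from the line's maximum
    return x, y

x, y = steepest_ascent(f, grad, -1.0, 1.0)
```

Each outer iteration is one pass through the recipe on the slide: evaluate the partials, build the 1-D function of h, maximize it, and restart from the new point.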
Newton’s Method
  • The Newton method is implemented using the inverse of the n by n Hessian matrix
  • So all the partials must be evaluated and the inverse found for every iteration
  • It is important to be ‘close’ to the solution to reduce the number of iterations with this method, especially if n is large
  • Equations for the partials are given in Chapra, page 365
  • Questions?
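A sketch of the Newton iteration for the 2-D case, writing out the 2×2 Hessian inverse explicitly (for larger n you would solve the linear system instead of inverting). The quadratic objective is illustrative; because it is quadratic, Newton lands on the optimum in a single step:

```python
def grad(x, y):
    # Gradient of the illustrative f = 2xy + 2x - x^2 - 2y^2 (assumed objective)
    return (2*y + 2 - 2*x, 2*x - 4*y)

def hess(x, y):
    # Hessian of f; constant here because f is quadratic
    return ((-2.0, 2.0), (2.0, -4.0))

def newton_opt(grad, hess, x, y, iters=20, tol=1e-12):
    for _ in range(iters):
        gx, gy = grad(x, y)
        (hxx, hxy), (hyx, hyy) = hess(x, y)
        det = hxx*hyy - hxy*hyx
        # Newton step: solve H @ [dx, dy] = -[gx, gy] using the 2x2 inverse
        dx = (-gx*hyy + gy*hxy) / det
        dy = ( gx*hyx - gy*hxx) / det
        x, y = x + dx, y + dy
        if abs(dx) < tol and abs(dy) < tol:
            break
    return x, y

x, y = newton_opt(grad, hess, -1.0, 1.0)
```

Note that Newton's method heads for any stationary point, not just maxima, so starting close to the solution matters for non-quadratic functions, as the slide warns.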
Multivariate Approach
  • An advanced algorithm would use a hybrid method: perhaps start with random search or steepest ascent, then switch to Newton’s method after a few iterations to quickly find the solution
  • Questions?
Preparation for 26Feb
  • Reading
    • Chapter 15: Constrained Optimization