
Optimization Techniques


Presentation Transcript


  1. Optimization Techniques

  2. Problem Description • Definition • Modeling • Solution algorithm

  3. Problem Definition • Decision (independent) and dependent variables • Constraint functions • Objective functions

  4. Solution Algorithms • Mathematical Techniques • Heuristic Techniques

  5. Mathematical Algorithms • Calculus Methods • Linear Programming (LP) Method • Non Linear Programming (NLP) Method • Dynamic Programming (DP) Method • Integer Programming Method

  6. Calculus Methods • Lagrange Multipliers • Kuhn-Tucker conditions
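
  As a hedged illustration of the Lagrange multiplier idea (the toy problem and use of sympy are my own choices, not from the slides), the sketch below maximizes f(x, y) = x·y subject to x + y = 10 by solving the stationarity conditions of the Lagrangian:

    import sympy as sp

    x, y, lam = sp.symbols('x y lam', real=True)

    f = x * y            # objective (illustrative toy problem)
    g = x + y - 10       # equality constraint g(x, y) = 0

    # Lagrangian L = f - lam * g; stationary points satisfy grad L = 0
    L = f - lam * g
    stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
    print(stationary)    # [{x: 5, y: 5, lam: 5}] -> maximum at x = y = 5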

  7. Linear Programming (LP) Method LP is an optimization method in which both the objective function and the constraints are linear functions of the decision variables. • Any LP problem can be stated as a minimization problem, since maximizing C(x) is equivalent to minimizing -C(x). • All constraints may be stated as equalities, since any inequality constraint of the form gi(x) ≤ bi can be transformed into the equality constraint gi(x) + si = bi by adding a nonnegative slack variable si ≥ 0. • All decision variables can be considered nonnegative, as any xj unrestricted in sign can be written as xj = xj′ − xj″, where xj′ ≥ 0 and xj″ ≥ 0.
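
  A minimal sketch of solving a small LP in Python with scipy.optimize.linprog (the library choice and the toy problem are my additions, not part of the slides); linprog minimizes by default, so a maximization objective is negated exactly as described above:

    from scipy.optimize import linprog

    # Maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
    # linprog minimizes, so pass the negated objective coefficients.
    c = [-3, -2]
    A_ub = [[1, 1], [1, 3]]
    b_ub = [4, 6]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # optimum at x = 4, y = 0 with objective value 12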

  8. Non Linear Programming (NLP) Method The objective function and/or the constraints are nonlinear functions of the decision variables. Solution methods for unconstrained problems: • direct search (or non-gradient) methods • descent (or gradient) methods, e.g. the steepest descent method. Solution methods for constrained problems: • direct methods: constraint approximation • indirect methods: penalty functions
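
  As an illustration of the indirect (penalty-function) approach, here is a hedged Python sketch (the test problem, the penalty-weight schedule, and the use of scipy.optimize.minimize are my assumptions): the constrained problem min f(x) subject to g(x) ≤ 0 is replaced by a sequence of unconstrained problems min f(x) + mu * max(0, g(x))**2 with increasing mu:

    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2   # objective (made-up example)
    g = lambda x: x[0] + x[1] - 2                 # constraint g(x) <= 0

    x0 = np.zeros(2)
    for mu in [1.0, 10.0, 100.0, 1000.0]:         # increasing penalty weight
        penalized = lambda x, mu=mu: f(x) + mu * max(0.0, g(x))**2
        x0 = minimize(penalized, x0, method='Nelder-Mead').x
    print(x0)   # approaches the constrained optimum near (1.5, 0.5)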

  9. Dynamic Programming (DP) Method A mathematical technique used for multistage decision problems • Optimal decisions have to be made over some stages • The stages may be different times, different spaces, different levels, etc. • The output of each stage is the input to the next serial stage.
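
  The backward-recursion sketch below (a toy three-stage shortest-path network of my own, not from the slides) shows the DP idea that the output of each stage feeds the next: the best cost-to-go at a node depends only on the best costs at the following stage.

    # Arc costs of a small 3-stage network (made-up data); T is the terminal node.
    costs = [
        {'A': {'B1': 2, 'B2': 4}},                             # stage 0
        {'B1': {'C1': 7, 'C2': 3}, 'B2': {'C1': 1, 'C2': 5}},  # stage 1
        {'C1': {'T': 6}, 'C2': {'T': 4}},                      # stage 2
    ]

    # Backward recursion: value[node] = min over arcs of (arc cost + value[next]).
    value = {'T': 0.0}
    for stage in reversed(costs):
        value = {node: min(c + value[nxt] for nxt, c in arcs.items())
                 for node, arcs in stage.items()}
    print(value['A'])   # minimum total cost from A to T (here, 9)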

  10. Integer Programming Method • If all decision variables are of integer type, the problem is referred to as an Integer Programming problem. • If some decision variables are of integer type while others are of non-integer type, the problem is known as a mixed integer programming problem.
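
  A hedged brute-force sketch (the toy problem and the enumeration approach are mine; practical solvers use branch-and-bound or cutting planes): when all decision variables are integers and the feasible region is small, it can simply be enumerated.

    from itertools import product

    # Maximize 5x + 4y subject to 6x + 4y <= 24, x + 2y <= 6, x, y integer >= 0.
    best = None
    for x, y in product(range(5), range(4)):      # small integer grid
        if 6*x + 4*y <= 24 and x + 2*y <= 6:
            value = 5*x + 4*y
            if best is None or value > best[0]:
                best = (value, x, y)
    print(best)   # (20, 4, 0): the LP relaxation optimum (x=3, y=1.5) is fractional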

  11. Heuristic Algorithms • Genetic Algorithm (GA), based on genetics and evolution, • Simulated Annealing (SA), based on some thermodynamics principles, • Particle Swarm (PS), based on bird flocking and fish schooling movements, • Tabu Search (TS), based on memory structures, • Ant Colony (AC), based on how ants behave.

  12. Genetic Algorithm (GA): Main Steps • Initialization • Selection • Mutation • Crossover • Termination

  13. GA: Initialization • Individual solutions are randomly generated to form an initial population • The population size depends on the nature of the problem
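
  A minimal Python sketch of random initialization for a bit-string encoding (the encoding choice and function name are illustrative assumptions):

    import random

    def init_population(pop_size, n_genes):
        # Each individual is a random bit string; pop_size is problem-dependent.
        return [[random.randint(0, 1) for _ in range(n_genes)]
                for _ in range(pop_size)]

    population = init_population(pop_size=20, n_genes=16)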

  14. GA: Selection • Roulette Wheel Selection • Rank Selection • Steady-State Selection • Elitism
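
  As one example from the list, a hedged sketch of roulette-wheel (fitness-proportionate) selection; the helper name and the assumption of nonnegative fitness values are mine:

    import random

    def roulette_wheel_select(population, fitnesses):
        # Probability of selection is proportional to (nonnegative) fitness.
        total = sum(fitnesses)
        pick = random.uniform(0.0, total)
        running = 0.0
        for individual, fit in zip(population, fitnesses):
            running += fit
            if running >= pick:
                return individual
        return population[-1]   # guard against floating-point round-off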

  15. GA: Mutation • Bit string mutation • Flip Bit • Boundary • Non-Uniform • Uniform • Gaussian
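
  A sketch of bit-string (flip-bit) mutation, the first two variants in the list; the per-gene mutation rate is an assumed parameter:

    import random

    def mutate(individual, rate=0.01):
        # Flip each bit independently with probability `rate`.
        return [1 - gene if random.random() < rate else gene
                for gene in individual]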

  16. GA: Crossover • One-point crossover • Two-point crossover • Cut and splice • Uniform crossover and half-uniform crossover
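
  A sketch of one-point crossover from the list (the two-point and uniform variants differ only in how cut points or swap masks are chosen); the function name is mine:

    import random

    def one_point_crossover(parent_a, parent_b):
        # Choose a cut point and exchange the tails of the two parents.
        point = random.randint(1, len(parent_a) - 1)
        child1 = parent_a[:point] + parent_b[point:]
        child2 = parent_b[:point] + parent_a[point:]
        return child1, child2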

  17. Termination • A solution is found that satisfies minimum criteria • A fixed number of generations is reached • The allocated budget (computation time/money) is reached • The highest-ranking solution's fitness has reached a plateau, so that successive iterations no longer produce better results • Manual inspection • Combinations of the above
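
  Tying slides 12-17 together, a hedged end-to-end GA loop built from the sketches above (the OneMax-style fitness, the elitism detail, the fixed-generation termination rule, and all parameter values are illustrative assumptions):

    def genetic_algorithm(pop_size=20, n_genes=16, generations=100):
        fitness = lambda ind: sum(ind)             # toy "OneMax" objective
        population = init_population(pop_size, n_genes)
        best = max(population, key=fitness)
        for _ in range(generations):               # fixed-generation termination
            fits = [fitness(ind) for ind in population]
            nxt = [max(population, key=fitness)]   # elitism: keep the best
            while len(nxt) < pop_size:
                a = roulette_wheel_select(population, fits)
                b = roulette_wheel_select(population, fits)
                c1, c2 = one_point_crossover(a, b)
                nxt += [mutate(c1), mutate(c2)]
            population = nxt[:pop_size]
            best = max(best, max(population, key=fitness), key=fitness)
        return best

    print(genetic_algorithm())   # tends toward the all-ones string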

  18. Simulated Annealing (SA) Step 1: Initialize – Start with a random initial placement and a very high "temperature". Step 2: Move – Perturb the placement through a defined move. Step 3: Calculate score – Calculate the change in score due to the move made. Step 4: Choose – Depending on the change in score, accept or reject the move; the probability of acceptance depends on the current "temperature". Step 5: Update and repeat – Lower the temperature and go back to Step 2. The process continues until the "freezing point" is reached.

  19. SA: main parameters • Initial temperature, T0 • Number of transitions performed at each temperature level, Nk • Final temperature, Tf (the stopping criterion) • Cooling sequence, given by Tk+1 = g(Tk) · Tk, where g(Tk) is a function that controls the temperature
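
  A compact Python sketch of the five SA steps from slide 18, using the parameters of slide 19 with a constant geometric cooling function g = 0.95 (the 1-D test function, move size, and parameter values are my assumptions):

    import math, random

    def simulated_annealing(f, x0, T0=100.0, Tf=1e-3, n_k=50, g=0.95):
        x, T = x0, T0
        while T > Tf:                                      # Tf: freezing point
            for _ in range(n_k):                           # Nk moves per level
                candidate = x + random.uniform(-1.0, 1.0)  # Step 2: move
                delta = f(candidate) - f(x)                # Step 3: score change
                # Step 4: accept improvements always, worse moves with
                # probability exp(-delta / T), which shrinks as T cools.
                if delta < 0 or random.random() < math.exp(-delta / T):
                    x = candidate
            T *= g                                         # Step 5: cool, repeat
        return x

    print(simulated_annealing(lambda x: (x - 3)**2, x0=0.0))   # near x = 3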

  20. Particle Swarm (PS) • For each particle i = 1, ..., S do: • Initialize the particle's position with a random vector: xi ~ U(blo, bup), where blo and bup are the lower and upper boundaries of the search space • Initialize the particle's best known position to its initial position: pi ← xi • If f(pi) < f(g), update the swarm's best known position: g ← pi • Initialize the particle's velocity: vi ~ U(−|bup − blo|, |bup − blo|) • Until a termination criterion is met (e.g. a number of iterations performed, or a solution with an adequate objective function value is found), repeat:

  21. Particle Swarm (PS) • For each particle i = 1, ..., S do: • Pick random numbers: rp, rg ~ U(0, 1) • For each dimension d = 1, ..., n do: • Update the particle's velocity: vi,d ← ω vi,d + φp rp (pi,d − xi,d) + φg rg (gd − xi,d) • Update the particle's position: xi ← xi + vi • If f(xi) < f(pi): • Update the particle's best known position: pi ← xi • If f(pi) < f(g), update the swarm's best known position: g ← pi • Now g holds the best solution found. The parameters ω, φp, and φg are selected by the practitioner and control the behavior and efficacy of the PSO method.
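
  A direct Python transcription of the pseudocode on slides 20-21 (the sphere test function, the bounds, and the parameter values ω = 0.7, φp = φg = 1.5 are illustrative assumptions, not prescribed by the slides):

    import random

    def pso(f, n=2, S=30, iters=200, blo=-5.0, bup=5.0,
            w=0.7, phi_p=1.5, phi_g=1.5):
        X = [[random.uniform(blo, bup) for _ in range(n)] for _ in range(S)]
        V = [[random.uniform(-(bup - blo), bup - blo) for _ in range(n)]
             for _ in range(S)]
        P = [x[:] for x in X]                    # per-particle best positions
        g = min(P, key=f)[:]                     # swarm's best known position
        for _ in range(iters):
            for i in range(S):
                rp, rg = random.random(), random.random()
                for d in range(n):
                    V[i][d] = (w * V[i][d]
                               + phi_p * rp * (P[i][d] - X[i][d])
                               + phi_g * rg * (g[d] - X[i][d]))
                    X[i][d] += V[i][d]
                if f(X[i]) < f(P[i]):
                    P[i] = X[i][:]
                    if f(P[i]) < f(g):
                        g = P[i][:]
        return g

    print(pso(lambda x: sum(v * v for v in x)))  # converges near the origin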

  22. Tabu Search (TS) • Generate an initial solution. • Select a move. • Update the solution. The next solution is chosen from the list of neighbors that are either desirable (aspirant) or not tabu, and for which the objective function is optimal. • The process is repeated according to a stopping rule.
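
  A hedged Python sketch of this loop for a 0/1 bit-string problem (the single-bit-flip neighborhood, tabu tenure, toy fitness, and aspiration rule are my assumptions):

    import random
    from collections import deque

    def tabu_search(f, n_bits=12, iters=100, tenure=5):
        x = [random.randint(0, 1) for _ in range(n_bits)]
        best = x[:]
        tabu = deque(maxlen=tenure)             # recently flipped bit positions
        for _ in range(iters):
            candidates = []
            for j in range(n_bits):             # neighbors: flip one bit
                y = x[:]
                y[j] = 1 - y[j]
                # Aspiration: allow a tabu move if it beats the best found so far.
                if j not in tabu or f(y) < f(best):
                    candidates.append((f(y), j, y))
            score, j, x = min(candidates)       # best admissible neighbor
            tabu.append(j)
            if score < f(best):
                best = x[:]
        return best

    print(tabu_search(lambda b: -sum(b)))       # tends to the all-ones string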

  23. Ant Colony (AC) • Initialization, in which the problem variables are encoded and an initial population is generated randomly within the feasible region; the ants then crawl in different directions at a radius not greater than R. • Evaluation, in which the objective function is calculated for all ants. • Trail adding, in which a trail quantity is added for each ant in proportion to its calculated objective function value (the so-called fitness). • Ants sending, in which the ants are sent to their next nodes according to the trail density and visibility.

  24. AC • Trail density reflects the pheromone already deposited; the ants are not completely blind and also move, to some extent, based on node visibilities. These two mechanisms resemble the intensification and diversification steps of the PS and TS algorithms, and help avoid getting trapped in local optima. • Evaporation, in which the trail deposited by an ant gradually evaporates, and the starting point is updated with the best combination found. The steps are repeated until a stopping criterion is met.
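
  A compact ant-colony sketch for a tiny travelling-salesman instance (the city coordinates, the parameter values α, β, ρ, and the pheromone-update rule used here are standard textbook choices and my own assumptions, not taken from the slides); trail density and visibility (1/distance) jointly bias each ant's next move, and evaporation decays old trails:

    import math, random

    cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 1)]   # made-up coordinates
    n = len(cities)
    dist = [[math.dist(a, b) for b in cities] for a in cities]

    def ant_colony(n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5):
        tau = [[1.0] * n for _ in range(n)]             # pheromone trail density
        best_tour, best_len = None, float('inf')
        for _ in range(iters):
            tours = []
            for _ in range(n_ants):
                tour = [random.randrange(n)]
                while len(tour) < n:                    # "ants sending"
                    i = tour[-1]
                    choices = [j for j in range(n) if j not in tour]
                    # trail density^alpha * visibility (1/distance)^beta
                    w = [tau[i][j]**alpha / dist[i][j]**beta for j in choices]
                    tour.append(random.choices(choices, weights=w)[0])
                length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                tours.append((length, tour))
                if length < best_len:                   # "evaluation"
                    best_len, best_tour = length, tour
            tau = [[(1 - rho) * t for t in row] for row in tau]   # evaporation
            for length, tour in tours:                  # "trail adding"
                for k in range(n):
                    a, b = tour[k], tour[(k + 1) % n]
                    tau[a][b] += 1.0 / length
                    tau[b][a] += 1.0 / length
        return best_tour, best_len

    print(ant_colony())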
