
MOO General Form



1. MOO General Form
Minimize f_k(x), k = 1, 2, …, K
Subject to g_j(x) <= 0, j = 1, 2, …, J
h_m(x) = 0, m = 1, 2, …, M
x_i^(L) <= x_i <= x_i^(U), i = 1, 2, …, N
where x ∈ R^N (continuous variables), x ∈ I^N (integer variables), or x_i ∈ (x_i^(1), x_i^(2), …) (discrete variables)
Dave Powell, Elon University, dpowell2@elon.edu
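As a running illustration (not part of the original slides), here is a minimal Python sketch of a problem in this general form; the objective functions f1 and f2, the constraint g1, and the bounds are hypothetical and chosen only so the later sketches have something concrete to work with.

```python
import numpy as np

# Hypothetical two-objective problem in the general MOO form:
#   minimize f1(x), f2(x)
#   subject to g1(x) <= 0 and bounds on x
def f1(x):
    return x[0] ** 2 + x[1] ** 2                     # objective 1

def f2(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2     # objective 2

def g1(x):
    return x[0] + x[1] - 3.0                         # feasible when g1(x) <= 0

bounds = [(-2.0, 4.0), (-2.0, 4.0)]                  # x_i^(L) <= x_i <= x_i^(U)

x = np.array([1.0, 1.0])
print(f1(x), f2(x), g1(x))
```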

2. Ideal or Utopian Solution Vector
• For each of the K objectives, there exists a different optimal solution.
• The objective vector constructed from these individually optimal objective values constitutes the ideal objective vector, or utopian vector.
• In general, it is not obtainable by any single feasible solution.
• What is it used for?
• The individual optimal objective values are used for normalization.
• It is used by some classical techniques, since solutions closer to the ideal are considered better.
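A minimal sketch of assembling the ideal (utopian) vector by optimizing each objective on its own, assuming SciPy's minimize is an acceptable stand-in for the single-objective optimizer; the toy objectives, constraint, and bounds are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objectives and constraint (illustration only)
objectives = [
    lambda x: x[0] ** 2 + x[1] ** 2,
    lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2,
]
constraints = [{"type": "ineq", "fun": lambda x: 3.0 - x[0] - x[1]}]  # g(x) <= 0 written as >= 0
bounds = [(-2.0, 4.0), (-2.0, 4.0)]

# Optimize each objective separately; the collected optima form the ideal vector
ideal = []
for f in objectives:
    res = minimize(f, x0=np.zeros(2), bounds=bounds, constraints=constraints)
    ideal.append(res.fun)

print("ideal (utopian) objective vector:", ideal)
```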

3. Utopian Objective Vector
Nadir – upper bound of each individually optimized objective
Utopia – lowest value of each objective
Figure from Deb, p. 27

4. Domination
• A solution x^(1) is said to dominate another solution x^(2) if the following two conditions are true:
• The solution x^(1) is no worse than x^(2) in all objectives, j = 1, 2, …, K.
• The solution x^(1) is strictly better than x^(2) in at least one objective.
(A small sketch of this check follows.)
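A minimal sketch of the domination test for a pair of objective vectors, assuming all objectives are minimized; the sample vectors are made up.

```python
def dominates(f_a, f_b):
    """Return True if objective vector f_a dominates f_b (minimization assumed)."""
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better

print(dominates([1.0, 2.0], [1.5, 2.0]))  # True:  no worse in both, strictly better in one
print(dominates([1.0, 3.0], [1.5, 2.0]))  # False: worse in the second objective
```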

5. Pareto Optimal
• Globally Pareto-optimal set: the non-dominated set of the entire feasible search space S is the globally Pareto-optimal set.
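A minimal sketch of extracting the non-dominated set from a list of objective vectors (reusing the same domination helper as above); the sample points are hypothetical and all objectives are assumed to be minimized.

```python
def dominates(f_a, f_b):
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better

def non_dominated(points):
    """Keep only the points not dominated by any other point (the Pareto set)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(non_dominated(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0) and drops out
```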

6. Pareto Optimal Front
Figure from Anderson

7. Classification of MOO Techniques
• No articulation of preference information: Global Criterion (SC), MinMax (SN)
• Prior: Weighted Sum (C,SC), Goal Programming (SN), Lexicographic (SN), aka Preemptive Goal Programming
• Posterior: Weighted Sum (SC), eConstraint (SN)
• Progressive: Satisficing Tradeoff Analysis (SN), Surrogate Worth Tradeoff, Analytical Hierarchy Process
Legend: S = simple programming/modeling required, C = convex objective space, N = nonconvex objective space
For clarity I will present in category order but deviate on individual techniques.

8. No Articulation of Preferences
• Global Criterion (simple programming required)
• Unweighted
• Weighted (falls into Prior & Posterior Articulation, but covered here)
• Can work in nonconvex spaces depending on the value of the exponent
• MinMax (simple programming required)
• Unweighted
• Weighted (falls into Prior & Posterior Articulation, but covered here)
• Works in nonconvex objective spaces

9. Global Criterion Lp Norms
• L1 = sum
• L2 = least squares
• L∞ = min-max
After Min Max?
(A small sketch of the three metrics follows.)
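A minimal sketch of the three Lp global-criterion metrics applied to an objective vector, assuming deviations from the ideal point are normalized by the ideal-to-nadir range; the sample values are hypothetical.

```python
import numpy as np

def global_criterion(f, ideal, nadir, p):
    """L_p distance of objective vector f from the ideal point, with each
    deviation normalized by (nadir - ideal).  p = np.inf gives min-max."""
    d = (np.asarray(f) - np.asarray(ideal)) / (np.asarray(nadir) - np.asarray(ideal))
    return np.linalg.norm(d, ord=p)

f, ideal, nadir = [2.0, 3.0], [1.0, 1.0], [5.0, 6.0]
print(global_criterion(f, ideal, nadir, 1))       # L1: sum of normalized deviations
print(global_criterion(f, ideal, nadir, 2))       # L2: least squares
print(global_criterion(f, ideal, nadir, np.inf))  # L-infinity: worst (min-max) deviation
```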

10. Weighted Global Criterion

11. Weighted Global Criterion

12. Prior Articulation of Preference Information
• Weighted Sum (no programming required)
• Lexicographic (no programming required)
• Goal Programming
• Introduces the idea of soft constraints
• Weighted Min-Max (previously shown)
• Weighted Global Criterion (previously shown)

13. Weighted Sum Approach
• Minimize f(x) = Σ_{k=1..K} w_k f_k(x)
• Subject to g_j(x) <= 0, j = 1, 2, …, J
• h_m(x) = 0, m = 1, 2, …, M
• Σ_{k=1..K} w_k = 1 and w_k >= 0
• x_i^(L) <= x_i <= x_i^(U), i = 1, 2, …, N
• Need to normalize each objective for the weights to be meaningful (see the sketch below).
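A minimal sketch of the weighted sum scalarization, assuming SciPy's minimize as the optimizer; the objectives and the f_min/f_max ranges used for normalization are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objectives with assumed normalization ranges (illustration only)
f1 = lambda x: x[0] ** 2 + x[1] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
f_min = np.array([0.0, 0.0])     # assumed individual optima (ideal values)
f_max = np.array([20.0, 25.0])   # assumed worst-case values used for scaling

def weighted_sum(x, w):
    f = np.array([f1(x), f2(x)])
    f_norm = (f - f_min) / (f_max - f_min)   # normalize so the weights are meaningful
    return np.dot(w, f_norm)

w = np.array([0.7, 0.3])                     # w_k >= 0 and sum(w) = 1
res = minimize(weighted_sum, x0=np.zeros(2), args=(w,),
               bounds=[(-2.0, 4.0), (-2.0, 4.0)])
print(res.x, f1(res.x), f2(res.x))
```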

14. Pareto Optimal Theorems (Deb)
• The solution to the weighted sum problem is Pareto-optimal if the weight is positive for all objectives.
• If x* is a Pareto-optimal solution of a convex multi-objective optimization problem, then there exists a non-zero positive weight vector w such that x* is a solution to the weighted sum problem.

15. Goal Programming
• Choose a target value for each objective (working in criterion space)
• Minimize the deviation between each objective and its target
• Create a constraint for each objective
• If minimizing a goal:
• Minimize PositiveDeviation from the target
• Subject to: F(x) - PositiveDeviation <= target, PositiveDeviation >= 0
• If maximizing:
• Minimize NegativeDeviation from the target
• Subject to: F(x) + NegativeDeviation >= target, NegativeDeviation >= 0
• If equal to the target:
• Minimize PositiveDeviation + NegativeDeviation
• Subject to: F(x) - PositiveDeviation + NegativeDeviation = target
(A small sketch for the minimization case follows.)
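A minimal sketch of the goal-programming idea for two minimized goals, assuming SciPy's SLSQP handles the soft constraints; the objectives and target values are hypothetical. The positive deviation variables are simply appended to the design vector.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objectives to be driven toward chosen targets (illustration only)
f1 = lambda x: x[0] ** 2 + x[1] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
targets = [0.5, 0.5]

# Decision vector z = [x1, x2, d1, d2], where d_k is the positive deviation of f_k
def total_deviation(z):
    return z[2] + z[3]

cons = [
    # Soft constraint f_k(x) - d_k <= target_k, written as target_k + d_k - f_k(x) >= 0
    {"type": "ineq", "fun": lambda z: targets[0] + z[2] - f1(z[:2])},
    {"type": "ineq", "fun": lambda z: targets[1] + z[3] - f2(z[:2])},
]
bounds = [(-2.0, 4.0), (-2.0, 4.0), (0.0, None), (0.0, None)]  # deviations >= 0

res = minimize(total_deviation, x0=[0.0, 0.0, 1.0, 1.0],
               bounds=bounds, constraints=cons)
x, d = res.x[:2], res.x[2:]
print("x =", x, "deviations =", d)
```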

16. Goal Programming for Objective Minimization
Define targets for each objective. The targets may or may not be reachable. Weights could be added.

17. Goal Programming
Figure from Deb

18. Minimization Formulation
• A single-level task requires a calculation to compute the new scaled constraint values.
• Add one variable for each objective to track the positive deviation.
• Minimize the sum of the positive deviations.
• Add a soft constraint for each objective: (f(x) - f_target)/(f_max - f_optimal) - deviation <= 0
• Add a constraint to ensure each deviation is >= 0.

19. Lexicographic / Preemptive GP
• Min Objective 1
• Constrain Objective 1 = Optimal Value 1
• Min Objective 2
• Constrain Objective 2 = Optimal Value 2
• etc.
• Each objective takes advantage of alternative optima from the higher-priority objectives.
• Can be combined with Goal Programming (see the sketch below).
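A minimal sketch of the lexicographic idea for two priorities, assuming SciPy's minimize; the objectives, bounds, and the small tolerance used to hold the first optimum are hypothetical choices.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objectives in priority order (illustration only)
f1 = lambda x: x[0] ** 2 + x[1] ** 2                      # highest priority
f2 = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2      # lower priority
bounds = [(-2.0, 4.0), (-2.0, 4.0)]

# Step 1: optimize the highest-priority objective alone
res1 = minimize(f1, x0=np.zeros(2), bounds=bounds)
f1_star = res1.fun

# Step 2: optimize the next objective while holding f1 at (near) its optimum.
# A small tolerance keeps the constraint numerically workable.
cons = [{"type": "ineq", "fun": lambda x: f1_star + 1e-6 - f1(x)}]
res2 = minimize(f2, x0=res1.x, bounds=bounds, constraints=cons)
print(res2.x, f1(res2.x), f2(res2.x))
```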

20. e-Constraint Approach
The idea is to keep a single objective and convert the others into constraints (see the sketch below).
Figure from Deb
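A minimal sketch of the e-constraint approach, assuming SciPy's minimize; the objectives, bounds, and the range of epsilon values swept are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Keep f1 as the objective and turn f2 into a constraint f2(x) <= eps (illustration only)
f1 = lambda x: x[0] ** 2 + x[1] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
bounds = [(-2.0, 4.0), (-2.0, 4.0)]

front = []
for eps in np.linspace(0.5, 4.0, 8):          # sweep the bound to trace the front
    cons = [{"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}]   # f2(x) <= eps
    res = minimize(f1, x0=np.zeros(2), bounds=bounds, constraints=cons)
    if res.success:
        front.append((f1(res.x), f2(res.x)))

for p in front:
    print(p)
```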

21. e-Constraint
Figure from Deb

22. e-Constraint
• Advantages
• Works in convex and nonconvex spaces
• Can find all points on the Pareto-optimal front
• Fully supported without programming in iSIGHT-FD
• Disadvantage
• Requires the user to select appropriate values for the constraints

23. Progressive Articulation of Preference Information
• Satisficing Tradeoff Analysis
• Surrogate Worth Tradeoff
• Analytical Hierarchy Process

24. Satisficing Trade-off Analysis
• It does not consider the whole Pareto-optimal front
• Looks near the user's desired point
• One Pareto solution is calculated per trade-off trial
• The calculation effort for one trade-off trial is roughly equal to a single-objective optimization
• Intuitive and quick solution
Figure labels: Utopia Point (reference point for the Pareto solution search); Aspirant/Request Point (user's desired value); A Pareto Solution (solution found near the request point)

25. Basically a MinMax Formulation
• Solve the problem interactively by adjusting the aspirant values and possibly adding objective constraints.
• Need to add a calculation to compute the constraint values for each objective.
• Need to add a design variable Z (see the sketch below).
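A minimal sketch of this min-max formulation with the added variable Z, assuming SciPy's minimize; the objectives, weights, and aspirant (request) point are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objectives and a user-chosen aspirant point (illustration only)
f1 = lambda x: x[0] ** 2 + x[1] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
aspirant = [0.5, 1.0]
weights = [1.0, 1.0]   # could reflect normalization; assumed equal here

# Decision vector z = [x1, x2, Z]; minimize the auxiliary variable Z
objective = lambda z: z[2]
cons = [
    # w_k * (f_k(x) - aspirant_k) <= Z for each objective
    {"type": "ineq", "fun": lambda z: z[2] - weights[0] * (f1(z[:2]) - aspirant[0])},
    {"type": "ineq", "fun": lambda z: z[2] - weights[1] * (f2(z[:2]) - aspirant[1])},
]
bounds = [(-2.0, 4.0), (-2.0, 4.0), (None, None)]

res = minimize(objective, x0=[0.0, 0.0, 1.0], bounds=bounds, constraints=cons)
print("x =", res.x[:2], "Z =", res.x[2], "f =", (f1(res.x[:2]), f2(res.x[:2])))
```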

26. Interactive Surrogate Worth Trade-Off (ISWT) Method (Chankong, Haimes)
• Idea: approximate the (implicit) utility function U by surrogate worth values, using the trade-offs from the e-constraint method
• Assumptions:
• a continuously differentiable U is implicitly known
• the functions are twice continuously differentiable
• S is compact and trade-off information is available
• The KKT multipliers λ_li > 0 are partial trade-off rates between f_l and f_i
• For each i the DM is told: "If the value of f_l is decreased by λ_li, the value of f_i is increased by one unit, or vice versa, while the other values are unaltered"
• The DM must state the desirability with an integer in [-10, 10] (or [-2, 2]), called the surrogate worth value

27. Analytical Hierarchy Method
• Method introduced by Thomas L. Saaty
• Requires hierarchical organization of the problem
• Performed by comparing activities at different levels
• Uses pair-wise comparisons

28. An Example: Purchase a Car
• Criteria: Cost, Comfort, Reliability, Power
• Alternatives: Rabbit, Subaru Wagon, Jaguar

29. The Scale
With respect to Cost, compare alternatives Rabbit and Subaru.
Score  Meaning
1/9    A is absolutely less important than B
1/7    A is demonstrably or very strongly less important than B
1/5    A is less important than B
1/3    A is weakly less important than B
1      A and B are equally important
3      A is weakly more important than B
5      A is more important than B
7      A is demonstrably or very strongly more important than B
9      A is absolutely more important than B

30. The Process of AHP
• Pair-wise comparison matrices are generated at each level
• The principal eigenvector of each matrix is computed for a final ranking of the alternatives at each level
• Measures of consistency are generated (see the sketch below)
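A minimal sketch of the eigenvector step, using NumPy; the pairwise-comparison matrix for the three cars with respect to Cost is hypothetical.

```python
import numpy as np

def ahp_priorities(A):
    """Priority weights from a pairwise-comparison matrix via the principal
    eigenvector, plus the consistency index CI = (lambda_max - n) / (n - 1)."""
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # normalize to a weight vector
    ci = (eigvals[k].real - len(A)) / (len(A) - 1)
    return w, ci

# Hypothetical comparison of Rabbit, Subaru Wagon, Jaguar with respect to Cost
A = [[1,     3,   9],
     [1 / 3, 1,   5],
     [1 / 9, 1 / 5, 1]]
w, ci = ahp_priorities(A)
print("priorities:", w, "consistency index:", ci)
```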
