
Tier I: Mathematical Methods of Optimization


Presentation Transcript


  1. Tier I: Mathematical Methods of Optimization Section 2: Linear Programming

  2. Linear Programming (LP) • Linear programming (linear optimization) is the area of optimization problems with linear objective functions and constraints. Example:
  minimize: f(x) = 6x1 + 5x2 + 2x3 + 7x4
  subject to: 2x1 + 8x3 + x4 ≥ 20
  x1 – 5x2 – 2x3 + 3x4 = –5

  3. Linear Programming con’t • None of the variables are multiplied by another variable, raised to a power, or used in a nonlinear function • Because the objective function and constraints are linear, they are convex. Thus, if an optimal solution to an LP problem is found, it is the global optimum

  4. LP Standard Form • LP standard form:
  minimize: f = cx
  subject to: Ax = b
  xi ≥ 0; i = 1, …, n
  where c is called the cost vector (1 by n), x is the vector of variables (n by 1), A is the coefficient matrix (m by n), and b is an m by 1 vector of given constants.

  5. Standard Form Basics • For a maximization problem, we can transform using: max f(x) → min –f(x) • For inequality constraints, use “slack” variables: 2x1 + 3x2 ≤ 5 → 2x1 + 3x2 + s1 = 5, where s1 ≥ 0

  6. Using Slack Variables When we transform the equation 2x1 + 3x2 ≤ 5 to 2x1 + 3x2 + s1 = 5, if the left-hand side (LHS) (2x1 + 3x2) is less than the right-hand side (RHS) (5), then s1 will take a positive value to make the equality true. The nearer the value of the LHS is to the RHS, the smaller the value of s1 is. If the LHS is equal to the RHS, s1 = 0. s1 cannot be negative because the LHS cannot be greater than the RHS.
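
To make the slack idea concrete, here is a tiny sketch in plain Python (the tool choice is mine; the slides don't prescribe one) evaluating s1 at a few feasible points:

```python
# Slack s1 for 2*x1 + 3*x2 <= 5, rewritten as 2*x1 + 3*x2 + s1 = 5, s1 >= 0
for x1, x2 in [(0.0, 0.0), (1.0, 0.5), (1.0, 1.0)]:
    lhs = 2 * x1 + 3 * x2
    s1 = 5 - lhs                   # the slack absorbs the difference
    print(f"LHS = {lhs:.1f} -> s1 = {s1:.1f}")
# LHS = 0.0 -> s1 = 5.0   (far from the constraint)
# LHS = 3.5 -> s1 = 1.5   (closer, smaller slack)
# LHS = 5.0 -> s1 = 0.0   (constraint is binding)
```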

  7. Standard Form Example Example: Write in standard form:
  maximize: f = x1 + x2
  subject to: 2x1 + 3x2 ≤ 6
  x1 + 7x2 ≥ 4
  x1 + x2 = 3
  x1 ≥ 0, x2 ≥ 0
  Define slack variables x3 ≥ 0 & x4 ≥ 0

  8. Example Problem Rewritten The problem now can be written:
  minimize: g = –x1 – x2
  subject to: 2x1 + 3x2 + x3 = 6
  x1 + 7x2 – x4 = 4
  x1 + x2 = 3
  x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0
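
As a quick sanity check of the transcription, the standard-form data can be handed to a solver; this sketch uses scipy.optimize.linprog (an assumed tool choice, not part of the slides). Note the slide is only a formulation exercise: for these particular numbers, x1 + x2 = 3 together with 2x1 + 3x2 ≤ 6 forces x2 = 0, which then violates x1 + 7x2 ≥ 4, so a solver will report infeasibility.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0, 0.0, 0.0])     # g = -x1 - x2 (slacks cost nothing)
A_eq = np.array([[2.0, 3.0, 1.0, 0.0],   # 2x1 + 3x2 + x3 = 6
                 [1.0, 7.0, 0.0, -1.0],  # x1 + 7x2 - x4 = 4
                 [1.0, 1.0, 0.0, 0.0]])  # x1 + x2 = 3
b_eq = np.array([6.0, 4.0, 3.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq)   # x >= 0 is linprog's default bound
print(res.status, res.message)           # reports that no feasible point exists
```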

  9. Linear Algebra Review • The next few slides review several concepts from linear algebra that are the basis of the methods used to solve linear optimization problems

  10. Vectors & Linear Independence • Vectors • A k-vector is a row or column array of k numbers. It has a dimension of k. • Linear Independence (LI) • A collection of vectors a1, a2, …, ak, each of dimension n, is called linearly independent if the equation λ1a1 + λ2a2 + … + λkak = 0 implies that λj = 0 for j = 1, 2, …, k

  11. Linear Independence con’t • In other words, a set of vectors is linearly independent if no vector in the set can be written as a linear combination of the others. • The maximum number of LI vectors in an n-dimensional space is n.

  12. Linear Independence con’t For example, in a 2-dimensional space: two vectors x1 and x2 related by x2 = 5x1 are not linearly independent, because one is a constant multiple of the other. Two vectors are LI when there is no constant you can multiply one by to get the other.

  13. Spanning Sets • A set of vectors a1, a2, …, ak in an n-dimensional space is said to span the space if any other vector in the space can be written as a linear combination of the vectors • In other words, for any vector b, there must exist scalars λ1, λ2, …, λk such that b = λ1a1 + λ2a2 + … + λkak

  14. Bases • A set of vectors is said to be a basis for an n-dimensional space if: • The vectors span the space • If any of the vectors are removed, the set will no longer span the space • A basis of an n-dimensional space must have exactly n vectors • There may be many different bases for a given space

  15. Bases con’t • An example of a basis is the set of coordinate axes of a graph. For a 2-D graph, you cannot remove one of the axes and still form any line with just the remaining axis. • Likewise, you cannot have three axes in a 2-D plot, because you can always represent the third using the other two.

  16. Systems of Equations • Linear algebra can be used to solve a system of equations Example: 2x1 + 4x2 = 8 & 3x1 – 2x2 = 11 This can be written as an augmented matrix:
  [ 2   4 |  8 ]
  [ 3  –2 | 11 ]

  17. Systems of Equations con’t • Row operations may be performed on the matrix without changing the result • Valid row operations include the following: • Multiplying a row by a nonzero constant • Interchanging two rows • Adding a multiple of one row to another

  18. Solving SOE’s • In the previous example, we want to change the A matrix to be upper triangular
  Multiply the top row by ½:
  [ 1   2 |  4 ]
  [ 3  –2 | 11 ]
  Add –3 times the top row to the bottom row:
  [ 1   2 |  4 ]
  [ 0  –8 | –1 ]

  19. Solving SOE’s con’t
  Multiply the bottom row by –1/8:
  [ 1  2 |  4  ]
  [ 0  1 | 1/8 ]
  • From the upper triangular augmented matrix, we can easily see that x2 = 1/8 and use this to get x1: x1 = 4 – 2 · (1/8) = 15/4
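
The same elimination is easy to script. A minimal numpy sketch mirroring the row operations of the last two slides (numpy is an assumed tool choice; the slides are tool-agnostic):

```python
import numpy as np

# Augmented matrix [A | b] for 2x1 + 4x2 = 8 and 3x1 - 2x2 = 11
M = np.array([[2.0, 4.0, 8.0],
              [3.0, -2.0, 11.0]])

M[0] *= 0.5              # multiply the top row by 1/2
M[1] += -3.0 * M[0]      # add -3 times the top row to the bottom row
M[1] *= -1.0 / 8.0       # multiply the bottom row by -1/8

x2 = M[1, 2]                    # 1/8, read off the bottom row
x1 = M[0, 2] - M[0, 1] * x2     # back-substitute: 4 - 2*(1/8) = 15/4
print(x1, x2)                   # 3.75 0.125
```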

  20. Matrix Inversion • The inverse of a matrix can be found by using row operations Example: Form the augmented matrix (A, I) and transform it to (I, A⁻¹) using row operations
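
A sketch of the (A, I) → (I, A⁻¹) procedure in numpy; the partial-pivoting step is an addition of mine for numerical safety, not something the slide requires:

```python
import numpy as np

def invert(A):
    """Row-reduce the augmented matrix (A, I) to (I, A^-1)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])        # form (A, I)
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]              # interchange two rows
        M[col] /= M[col, col]                          # scale the pivot row
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]             # clear the column
    return M[:, n:]                                    # right half is A^-1

A = np.array([[2.0, 4.0], [3.0, -2.0]])
print(invert(A) @ A)    # ~ identity matrix
```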

  21. Optimization Equations • We have seen that the constraints can be written in the form Ax = b. • We should have more variables than equations so that we have some degrees of freedom to optimize. • If the number of equations is greater than or equal to the number of variables, the values of the variables are already fully determined, leaving nothing to optimize.

  22. General Solution to SOE’s • Given a system of equations in the form Ax = b • Assume m (number of equations) < n (number of variables) → an underspecified system • We can split the system into (n – m) independent variables and m dependent variables. The values of the dependent variables will depend on the values we choose for the independent variables.

  23. General Solution con’t • We call the dependent variables the basic variables because their A-matrix coefficients will form a basis. The independent variables will be called the nonbasic variables. • By changing the variables in the basis, we can change bases. It will be shown that this allows examining different possible optimum points.

  24. General Solution con’t Separate the A matrix in the following way:
  Ax = [a1 a2 … an] x = b
  Or, writing the product column by column:
  x1a1 + x2a2 + … + xnan = b

  25. General Solution con’t Define matrices B & N as the following:
  B = [aB1 aB2 … aBm], N = [aN1 aN2 … aN(n–m)]
  where B is an m by m matrix, N is an m by (n – m) matrix, & aj is the jth column of the A matrix • B is called the “basic matrix” and N is called the “nonbasic matrix”

  26. General Solution con’t • The B matrix contains the columns of the A-matrix that correspond to the x-variables that are in the basis. Order must be maintained. • So, if x4 is the second variable of the basis, a4 must be the second column of the B-matrix • The N matrix is just the columns of the A-matrix that are left over.

  27. General Solution con’t Similarly, define xB (the vector of basic variables) & xN (the vector of nonbasic variables), so that Ax = BxB + NxN. We will see later how to determine which variables to put into the basis. This is an important step in examining all possible optimal solutions.

  28. General Solution con’t Now, we have:
  BxB + NxN = b
  Multiply both sides by B⁻¹:
  xB + B⁻¹NxN = B⁻¹b
  So,
  xB = B⁻¹b – B⁻¹NxN

  29. Basic Solution • We can choose any values for (n – m) variables (the ones in xN) and then solve for the remaining m variables in xB • If we choose xN = 0, then xB = B⁻¹b. This is called a “basic solution” to the system
  Basic solution: xB = B⁻¹b, xN = 0
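
A short numpy sketch of this step, reusing the two equality constraints 2x1 + 3x2 + x3 = 6 and x1 + 7x2 – x4 = 4 from the earlier example; the particular basis chosen here is my own illustration:

```python
import numpy as np

A = np.array([[2.0, 3.0, 1.0, 0.0],     # m = 2 equations,
              [1.0, 7.0, 0.0, -1.0]])   # n = 4 variables
b = np.array([6.0, 4.0])

basis = [0, 1]                  # choose x1, x2 as the basic variables
B = A[:, basis]                 # the basic matrix B
x_B = np.linalg.solve(B, b)     # xB = B^-1 b, with xN = 0

x = np.zeros(A.shape[1])
x[basis] = x_B
print(x)    # [30/11, 2/11, 0, 0]; every entry >= 0, so this is also a BFS
```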

  30. Basic Feasible Solutions Now we have a solution to Ax = b. But that was just one of two sets of constraints for the optimization problem. The other was: xi ≥ 0, i = 1, …, n (non-negativity) • A basic feasible solution (BFS) is a basic solution where every x is non-negative • A BFS satisfies all of the constraints of the optimization problem

  31. Extreme Points • A point is called an extreme point (EP) if it cannot be represented as a strict (0 < λ < 1) convex combination of two other feasible points. • Remember: a convex combination of two points lies on the line segment between them. • So, an EP cannot lie on a segment between two other feasible points.

  32. Extreme Points (Graphical) • Given a feasible region, an extreme point cannot lie on a line between two other feasible points (it must be on a corner) • In an n-dimensional space, an extreme point is located at the intersection of n constraints
  [Figure: a feasible region; points on edges or in the interior are labeled “Not Extreme Points”, a corner point is labeled “Extreme Point”]

  33. Optimum & Extreme Points • We have a maximization problem, so we want to go as far in the direction of the c (objective function) vector as we can • Can we determine anything about the location of the optimum point?
  [Figure: feasible region with a starting point and the objective vector c]

  34. Optimum & Extreme Points • If we start on a line, we can move along that line in the direction of the objective function until we get to a corner • In fact, for any c vector, an optimum point can always be found on a corner

  35. Basic Feasible Solutions • In an n-dimensional space, a BFS is formed by the intersection of n equations. • In 2-D:
  [Figure: two constraint lines intersecting; the intersection point is labeled “Basic Feasible Solution”]
  • But, we just saw that an extreme point is also a corner point. So, a BFS corresponds to an EP.

  36. Tying it Together • We just saw that a basic feasible solution corresponds to an extreme point. • This is very important because for LP problems, the optimum point is always at an extreme point. • Thus, if we could solve for all of the BFS’s (EP’s), we could compare them to find the optimum. Unfortunately, the number of BFS’s grows combinatorially with problem size, so enumerating them all takes too much time.

  37. Simplex Method Introduction • The simplex method is the most common method for solving LP problems. • It works by finding a BFS, determining whether it is optimal, and, if it isn’t, moving to a “better” BFS, repeating until the optimum is reached. • This way, we don’t have to calculate every solution.

  38. Simplex Method Algebra Recall:
  xB = B⁻¹b – B⁻¹NxN = B⁻¹b – Σj∈N (B⁻¹aj) xj  (sum over all nonbasic variables)
  Objective function: f = cx = cBxB + cNxN = cBxB + Σj∈N cj xj
  Substitute the expression for xB into the equation above:
  f = cB(B⁻¹b – Σj∈N (B⁻¹aj) xj) + Σj∈N cj xj

  39. Simplex Method Algebra Multiply through and collect the xj terms:
  f = cBB⁻¹b + Σj∈N (cj – zj) xj
  where zj = cBB⁻¹aj

  40. Simplex Method Equations So, the problem becomes:
  minimize: f = cBB⁻¹b + Σj∈N (cj – zj) xj
  subject to: xB = B⁻¹b – B⁻¹NxN, xB ≥ 0, xN ≥ 0
  If (cj – zj) ≥ 0 for all j ∈ N, then the current BFS is optimal for a minimization problem. Because, if it were < 0 for some j, that nonbasic variable, xj, could enter the basis and reduce the objective function.

  41. Entering Variables • A nonbasic variable may enter the basis and replace one of the basic variables • Since xN = 0, and we have non-negativity constraints, the entering variable must increase in value. • The entering variable’s value will increase, reducing the objective function, until a constraint is reached.

  42. Entering Variable Equation • The equation to determine which variable enters is: cj – zj = cj – cBB⁻¹aj. Calculate this for all nonbasic indices j • For a minimization problem, choose the index j for which cj – zj is the most negative • If cj – zj ≥ 0 for all j, the solution is optimal • For a maximization problem, choose the index j for which cj – zj is the most positive • If cj – zj ≤ 0 for all j, the solution is optimal
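
This rule translates directly into code. A minimal sketch for the minimization case (the function and variable names are my own; c and A are numpy arrays, basis and nonbasis are lists of column indices):

```python
import numpy as np

def entering_index(c, A, basis, nonbasis):
    """Return the entering variable's index: the nonbasic j with the most
    negative reduced cost cj - zj, where zj = cB B^-1 aj.
    Returns None when every cj - zj >= 0, i.e. the current BFS is optimal."""
    B_inv = np.linalg.inv(A[:, basis])
    y = c[basis] @ B_inv                     # the row vector cB B^-1
    reduced = {j: c[j] - y @ A[:, j] for j in nonbasis}
    j_best = min(reduced, key=reduced.get)   # most negative cj - zj
    return None if reduced[j_best] >= 0 else j_best
```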

  43. Leaving Variables • As the value of the entering variable increases, the value of at least one basic variable will usually decrease • If not, the problem is called “unbounded” and the value of the minimum objective function is –∞ • The variable whose value reaches zero first will be the variable that leaves the basis

  44. Entering & Leaving Variables • Example: x1 is entering the basis while x2, x3, & x4 are the current basic variables. As soon as x2 reaches zero, we must stop because of the non-negativity constraints. But now x2 = 0, so it is a nonbasic variable, and x1 > 0, so it is a basic variable. So, x2 leaves the basis & x1 enters the basis.
  [Figure: values of the basic variables x2, x3, and x4 plotted as the entering variable x1 increases; x2 reaches zero first]

  45. Leaving Variable Equation • Let j be the index of the variable that is entering the basis and i* be the index of the variable that is leaving the basis:
  i* = argmin over { i in the basis : (B⁻¹aj)i > 0 } of (B⁻¹b)i / (B⁻¹aj)i
  Meaning, for every index i that is in the basis and has (B⁻¹aj)i > 0, calculate (B⁻¹b)i / (B⁻¹aj)i. The index that gives the minimum value is the index of the leaving variable.
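
The matching ratio test as a sketch, under the same assumed conventions (B_inv_b holds B⁻¹b and B_inv_aj holds B⁻¹aj, indexed by position in the basis):

```python
def leaving_position(B_inv_b, B_inv_aj):
    """Min-ratio test: among positions i with (B^-1 aj)_i > 0, pick the one
    minimizing (B^-1 b)_i / (B^-1 aj)_i. Returns None if no entry of
    B^-1 aj is positive, i.e. the problem is unbounded."""
    ratios = [(B_inv_b[i] / B_inv_aj[i], i)
              for i in range(len(B_inv_b)) if B_inv_aj[i] > 0]
    if not ratios:        # the entering variable never drives a basic
        return None       # variable to zero
    return min(ratios)[1]
```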

  46. Leaving Variable Equation The previous expression is obtained from the equation:
  (xB)i = (B⁻¹b)i – (B⁻¹aj)i xj = 0
  which applies when a constraint is reached (the basic variable hits zero)

  47. The Example Revisited • x2, x3, & x4 start out at (B⁻¹b)i ; (i = 2, 3, 4) and have slopes of (–B⁻¹aj)i ; (i = 2, 3, 4), where j = 1 because 1 is the index of the entering variable (x1) • Thus, the distance we can go before a basic variable reaches zero is (B⁻¹b)i / (B⁻¹a1)i for (B⁻¹a1)i > 0. But, if (B⁻¹a1)i < 0 (like x3), it won’t ever reach zero.

  48. The Example Revisited • We can also see that if none of the basic variables decreased, we could keep increasing x1 and improving the objective function without ever reaching a constraint. This gives an unbounded solution.
  [Figure: the same variable-value plot, illustrating the unbounded case]

  49. Example Problem Minimize: f = –x1 – x2
  Subject to: x1 + x2 ≤ 5
  2x1 – x2 ≤ 4
  x1 ≤ 3; x1, x2 ≥ 0
  Given: The starting basis is x1, x2, & x3. Insert slack variables x3, x4, & x5.
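
For reference, the example can be checked numerically before working the simplex iterations by hand; a sketch with scipy.optimize.linprog (an assumed tool choice):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0])            # f = -x1 - x2
A_ub = np.array([[1.0, 1.0],          # x1 + x2 <= 5
                 [2.0, -1.0],         # 2x1 - x2 <= 4
                 [1.0, 0.0]])         # x1 <= 3
b_ub = np.array([5.0, 4.0, 3.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # x >= 0 is linprog's default bound
print(res.x, res.fun)   # optimal value f = -5; every point on the edge
                        # x1 + x2 = 5 between (0, 5) and (3, 2) is optimal
```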
