EE/Econ 458: The Simplex Method. J. McCalley.

An approach.

Definitions.
Adjacent corner points are connected by a single line segment on the boundary of the feasible region.
One corner point is better than another if it has a higher value of the objective function f.
Optimality (stopping) condition: If a corner point feasible solution is equal to or better than all its adjacent corner point feasible solutions, then it is equal to or better than all other corner point feasible solutions, i.e., it is optimal.
Main ideas of proof:
If the objective function monotonically increases (decreases) in some direction within the decision-vector space, then moving from corner point to adjacent corner point in that direction gives progressively better solutions, so that the last corner point reached must have only adjacent corner points that are worse.
The monotonicity of objective function increase (decrease) is guaranteed by its linearity.
Many problems require non-negativity constraints on decision variables. SCED is one such problem, due to the requirement that generation and demand be positive.
The simplex method requires non-negativity constraints on the decision variables; they are part of the standard form the method assumes.
When a decision variable must be allowed to take negative values, it can be converted into one or two decision variables with non-negativity constraints (e.g., replacing a free variable x by x' - x'', with x' ≥ 0 and x'' ≥ 0).
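A minimal sketch of this conversion (the helper name `split_free_variable` is illustrative, not from the slides):

```python
# Represent a free (sign-unrestricted) variable x as the difference of
# two non-negative variables: x = x_plus - x_minus, x_plus, x_minus >= 0.
def split_free_variable(x):
    """Return (x_plus, x_minus) with x = x_plus - x_minus and both >= 0."""
    return (x, 0.0) if x >= 0 else (0.0, -x)

x_plus, x_minus = split_free_variable(-3.5)
assert x_plus >= 0 and x_minus >= 0
assert x_plus - x_minus == -3.5
```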
x3 is the slack variable.
It gives the "slack" between the two sides of the inequality x1 ≤ 4.
We can replace the first inequality x1 ≤ 4 with the equality x1 + x3 = 4, x3 ≥ 0.
If the "slack" is zero, then the inequality is satisfied with equality.
Observe that slack variables cannot be negative, because then the inequality would be violated.
And we can do that with all inequalities, leading to…
Here, we have introduced slack variables within all inequalities.
EQUALITY FORM: Has all inequality constraints converted to equality constraints via introduction of slack variables.
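The conversion to equality form can be sketched in a few lines; the arrays below encode the running example's constraints (x1 ≤ 4, 2x2 ≤ 12, 3x1 + 2x2 ≤ 18):

```python
# Convert the inequality constraints A x <= b into equality form
# [A | I] x_aug = b by appending one slack variable per row.
A = [[1, 0],       # x1          <= 4
     [0, 2],       # 2*x2        <= 12
     [3, 2]]       # 3*x1 + 2*x2 <= 18
b = [4, 12, 18]

m = len(A)  # number of constraints = number of slack variables
A_eq = [row + [1 if j == i else 0 for j in range(m)]
        for i, row in enumerate(A)]
# Row i now reads: a_i1*x1 + a_i2*x2 + x_{2+i+1} = b_i
```

Each appended identity column carries the slack variable for its constraint, so the right-hand sides b are unchanged.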
AUGMENTED SOLUTION: A solution to the LP that includes values of the decision and slack variables.
Augmented solution: (x1,x2, x3, x4, x5)=(3,2,1,8,5).
BASIC SOLUTION: An augmented corner-point solution, e.g., (x1,x2,x3,x4,x5)=(4,6,0,0,-6). Basic solutions may be feasible or infeasible (this one is infeasible, since x5 = -6 violates non-negativity).
BASIC FEASIBLE SOLUTION: A feasible augmented corner-point solution, e.g., (x1,x2,x3,x4,x5)=(0,0,4,12,18).
Each constraint equation in equality form, ak1x1 + … + aknxn + xn+k = bk, is satisfied by the basic solution xj = 0 (j = 1,…,n), xn+k = bk.
Because bk ≥ 0, we are assured xn+k ≥ 0, which satisfies variable non-negativity.
And so an initial BFS is found by letting all decision variables be zero (the origin!).
(x1, x2, x3, x4, x5)=(0,0,4,12,18) is the initial BFS.
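As a quick sanity check, the initial BFS can be verified against the example's equality-form constraints:

```python
# Verify that the initial BFS (0,0,4,12,18) satisfies every equality-form
# constraint of the example and is non-negative.
A_eq = [[1, 0, 1, 0, 0],
        [0, 2, 0, 1, 0],
        [3, 2, 0, 0, 1]]
b = [4, 12, 18]
x = [0, 0, 4, 12, 18]

residuals = [sum(a * xi for a, xi in zip(row, x)) - bi
             for row, bi in zip(A_eq, b)]
assert residuals == [0, 0, 0]       # all equalities hold
assert all(xi >= 0 for xi in x)     # non-negativity holds
```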
A feasible corner point is the simultaneous solution of a set of n constraint equations that violates none of the remaining constraints.
5 feasible corner points
The slack variable, for a particular corner point, is the weighted distance between that corner point and the slack variable’s constraint (weights are the coefficients from constraint equation).
For the constraint corresponding to x1 ≤ 4 (or x1 + x3 = 4), because the coefficient for x1 is 1, the slack variable is exactly the "distance" between the given corner point and this constraint (from Table 2, this would be 4, 4, 2, 0, 0 for points 1, 2, 3, 4, 5, respectively).
All BFSs have exactly n variables equal to 0, because any BFS is the simultaneous solution of a system of n constraint equations. There are 2 types of such constraint equations that can define a BFS: (1) a non-negativity constraint holding with equality (a decision variable xj = 0), and (2) a functional constraint holding with equality (its slack variable xn+k = 0).
Any BFS has n variables equal to 0; these are its n non-basic variables.
One constraint equation becomes inactive while one constraint equation becomes active.
One variable enters the basis (becomes nonzero) while one variable leaves the basis (becomes zero).
The candidates for the entering basic variable are the n nonbasic variables. In the first step of our example (solution 1 to 2), the candidates are x1 and x2.
Select the one that improves the objective at the highest rate (i.e., the largest amount of objective per unit change in variable).
Increasing either variable, x1 or x2, increases the objective, but x2 increases it the most for a given unit change, since 5>3; so x2 is the entering variable.
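This selection rule is a one-liner; the dictionary below assumes the example's objective F = 3x1 + 5x2:

```python
# Entering-variable rule: among the nonbasic variables, pick the one whose
# objective coefficient gives the largest per-unit improvement.
obj_coeffs = {"x1": 3, "x2": 5}
entering = max(obj_coeffs, key=obj_coeffs.get)
assert entering == "x2"   # 5 > 3, so x2 enters the basis
```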
The candidates for the leaving basic variable are the m basic variables. In the first step of our example (solution 1 to 2), the candidates are x3, x4, and x5. How much the entering variable may increase without pushing any current basic variable negative is controlled by the m constraints.
Recall the initial point, the origin, has all n decision variables zero and all m slack variables non-zero. There is 1 slack variable per constraint, so each constraint contains exactly 1 basic variable at initialization. Since iterations always cause each constraint to exchange 1 basic and 1 non-basic variable, each constraint always has just 1 basic variable.
Criterion is: Choose the leaving variable to be the basic variable that hits 0 first as the entering variable is increased, as dictated by one of the m constraint equations.
Recall x2 is our entering variable. We want to identify the constraint that most limits the increase of x2, i.e., the constraint whose basic variable hits 0 first.
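For the example's first step this check can be computed directly (a minimal sketch; holding the other nonbasic variable x1 at 0):

```python
# Minimum-ratio test for the first step: x2 is entering, x1 stays 0.
# Each constraint limits x2 to b_i / a_i2, but only when a_i2 > 0.
import math

rows = [("x3", 0, 4),    # x1 + x3 = 4           -> no x2 term, no limit
        ("x4", 2, 12),   # 2*x2 + x4 = 12        -> x2 <= 6
        ("x5", 2, 18)]   # 3*x1 + 2*x2 + x5 = 18 -> x2 <= 9

ratios = {basic: (rhs / coef if coef > 0 else math.inf)
          for basic, coef, rhs in rows}
leaving = min(ratios, key=ratios.get)
assert leaving == "x4"   # x4 hits 0 first, at x2 = 6
```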
But we need a more structured way (that we can teach to a computer).
Write the objective function like this: F - 3x1 - 5x2 = 0.
For the initial solution, we can write F together with the constraint equations:

F - 3x1 - 5x2 = 0      (7)
x1 + x3 = 4            (8)
2x2 + x4 = 12          (9)
3x1 + 2x2 + x5 = 18    (10)
The above is set up for our initial solution: (x1, x2, x3, x4, x5)=(0,0,4,12,18). We observe that the values of the basic variables are the right-hand sides of (8), (9), (10).
Because x1 and x2 are non-basic (0), the right-hand-side of (7) is the objective value at this solution.
Goal: Have x2 enter the basis & x4 leave the basis so that, at the new solution, our system of equations is in the same form as above where:
• the values of the basic variables at the solution can be directly read off as the right-hand-sides of those equations and
• the value of the objective at that solution can be directly read off as the right-hand-side of the objective equation.
To accomplish this…
• Because x2 is going to be non-zero, it must show up in only one constraint equation, with a 1 as its coefficient (so it can be directly read off), and it must not appear in the objective equation (since it will not be 0).
• Because x4 is going to be 0, it can safely appear in the objective equation (since its value is 0, the right-hand-side of the objective equation will still directly give F).
Join the coefficient matrix with the right-hand-side vector, like this:

[ 1  -3  -5   0   0   0 |  0 ]
[ 0   1   0   1   0   0 |  4 ]
[ 0   0   2   0   1   0 | 12 ]
[ 0   3   2   0   0   1 | 18 ]

(columns correspond to F, x1, x2, x3, x4, x5)
Use Gaussian Elimination! For an n × n matrix:
Choose the row from which the leaving variable was identified to be the pivot row. Let this be row k so that akk is the pivot.
Divide row k by akk.
Eliminate all ajk, j=1,…,n except j=k. This means to make all elements directly above and beneath the pivot equal to 0 by adding an appropriate multiple of the pivot row to each row above or beneath the pivot.
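The steps above can be sketched as a pivot routine (a minimal illustration, not necessarily the slides' exact tableau layout):

```python
# Pivot operation: divide the pivot row by the pivot element, then
# eliminate the pivot column from every other row by row operations.
def pivot(T, pr, pc):
    """Pivot the tableau T (list of row lists) on element T[pr][pc]."""
    p = T[pr][pc]
    T[pr] = [v / p for v in T[pr]]
    for r in range(len(T)):
        if r != pr and T[r][pc] != 0:
            factor = T[r][pc]
            T[r] = [v - factor * pv for v, pv in zip(T[r], T[pr])]
    return T

# Columns: F, x1, x2, x3, x4, x5 | rhs (the example's initial system)
T = [[1, -3, -5, 0, 0, 0,  0],
     [0,  1,  0, 1, 0, 0,  4],
     [0,  0,  2, 0, 1, 0, 12],
     [0,  3,  2, 0, 0, 1, 18]]

pivot(T, pr=2, pc=2)          # x2 enters via the x4 row
assert T[0][-1] == 30         # objective value after the first pivot
assert T[2] == [0, 0, 1, 0, 0.5, 0, 6]
```

After this single pivot the system is again in the required form: the right-hand sides give the new basic-variable values, and the objective row gives F = 30.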
How to identify the entering variable?
Select the variable that improves the objective at the highest rate (i.e., the largest amount of objective per unit change in variable).
How to identify the leaving variable?
Choose the leaving variable to be the one that hits 0 first as the entering variable is increased, as dictated by one of the m constraint equations.
How is the new BFS found?
Using the equation used to identify the leaving variable as the pivot row, eliminate the entering variable from all other equations.
If any remaining variable has a positive coefficient in the current objective function expression, then the current solution may still be improved; take another iteration.
If not, the current solution may not be further improved, and it is therefore optimal.
Since x1 has a positive coefficient, F may still be improved. Therefore this solution is not optimal.
In the right-hand-form, optimality occurs when all coefficients are negative.
In the left-hand-form, optimality occurs when all coefficients are positive.
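Pulling the pieces together, a minimal tableau simplex (an illustrative sketch using the entering/leaving rules described above, not production code) reproduces the example's optimum:

```python
# End-to-end sketch: pivot until no objective-row coefficient is negative
# (left-hand form), i.e., no variable can still improve F.
import math

def simplex(T, n_vars):
    """Minimal tableau simplex; T row 0 is the objective row."""
    while True:
        # Entering column: most negative objective-row coefficient.
        costs = T[0][1:1 + n_vars]
        pc = 1 + min(range(n_vars), key=lambda j: costs[j])
        if costs[pc - 1] >= 0:
            return T                      # optimal: no improving variable
        # Leaving row: minimum-ratio test over rows with positive entry.
        ratios = [(T[r][-1] / T[r][pc]) if T[r][pc] > 0 else math.inf
                  for r in range(1, len(T))]
        pr = 1 + min(range(len(ratios)), key=lambda r: ratios[r])
        p = T[pr][pc]
        T[pr] = [v / p for v in T[pr]]
        for r in range(len(T)):
            if r != pr and T[r][pc] != 0:
                f = T[r][pc]
                T[r] = [v - f * pv for v, pv in zip(T[r], T[pr])]

# Columns: F, x1, x2, x3, x4, x5 | rhs (the example's initial system)
T = [[1, -3, -5, 0, 0, 0,  0],
     [0,  1,  0, 1, 0, 0,  4],
     [0,  0,  2, 0, 1, 0, 12],
     [0,  3,  2, 0, 0, 1, 18]]
simplex(T, n_vars=5)
assert T[0][-1] == 36    # optimal objective value
```

With the example's data this terminates after two pivots at F = 36 with x1 = 2, x2 = 6.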