
ESI 6448 Discrete Optimization Theory

Section number 5643

Lecture 12


Last class

  • Little linear algebra review

  • Polyhedral theory


Linear algebra review

  • A finite collection of vectors x1, ..., xk ∈ Rn is linearly independent if the unique solution to ∑i=1..k λixi = 0 is λi = 0, i = 1, ..., k. Otherwise, the vectors are linearly dependent.

  • A finite collection of vectors x1, ..., xk ∈ Rn is affinely independent if the unique solution to ∑i=1..k λixi = 0, ∑i=1..k λi = 0, is λi = 0, i = 1, ..., k.

  • x1, ..., xk Rn are affinely independent iffx2– x1, ..., xk – x1 are linearly independent iff(x1, 1), ..., (xk, 1) Rn+1 are linearly independent

  • If {x  Rn : Ax = b}  , the maximum number of affinely independent solutions to Ax = b is n + 1 – rank(A).


Linear algebra review

  • A nonempty subset H ⊆ Rn is called a subspace if αx + βy ∈ H ∀ x, y ∈ H and α, β ∈ R.

  • A linear combination of a collection of vectors x1, ..., xk ∈ Rn is any vector y ∈ Rn s.t. y = ∑i=1..k λixi for some λ ∈ Rk.

  • The span of a collection of vectors x1, ..., xk ∈ Rn is the set of all linear combinations of those vectors.

  • Given a subspace H ⊆ Rn, a collection of linearly independent vectors whose span is H is called a basis of H. The number of vectors in the basis is the dimension of the subspace.


Linear algebra review

  • The span of the columns of a matrix A is a subspace called the column space or the range, denoted range(A).

  • The span of the rows of a matrix A is a subspace called the row space.

  • rank(A) = dimension of the column space = dimension of the row space.

  • Clearly, rank(A) ≤ min{m, n}. If rank(A) = min{m, n}, then A is said to have full rank.

  • The set {x  Rn : Ax = 0} is called the nullspace of A (null(A)) and has dimension n – rank(A).


Polyhedra

  • A polyhedron is a set of the form {x ∈ Rn : Ax ≤ b}, where A ∈ Rm×n and b ∈ Rm.

  • A polyhedron P  Rn is bounded if there exists a constant K s.t. |x| < K x  S, i  [1, n].

  • A bounded polyhedron is called a polytope.

  • Let a  Rn and b  R be given.

    • The set {x  Rn : aTx = b}is called a hyperplane.

    • The set {x  Rn : aTx  b} is called a half-space.


Convex

  • A set S  Rn is convex if x, y  S,  [0, 1], we have x + (1 – )y  S.

  • Let x1, ..., xk Rn and  Rk be given such that T1 = 1. Then

    • the vector ki=1 ixi is said to be a convex combination of x1, ..., xk.

    • the convex hull of x1, ..., xk is the set of all convex combinations of these vectors.

  • A set is convex iff for any two points in the set, the line segment joining those two points lies entirely in the set.

  • All polyhedra are convex.
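The segment characterization can be verified directly for a polyhedron: any convex combination of two feasible points again satisfies Ax ≤ b. A sketch, using the unit square as our own example:

```python
from fractions import Fraction

# Unit square {x in R^2 : 0 <= x1, x2 <= 1} written as Ax <= b
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [1, 1, 0, 0]

def in_P(x):
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b))

x, y = (Fraction(0), Fraction(0)), (Fraction(1), Fraction(1))
# every point lam*x + (1-lam)*y on the segment stays in P
segment_in_P = all(
    in_P(tuple(lam * xi + (1 - lam) * yi for xi, yi in zip(x, y)))
    for lam in (Fraction(k, 10) for k in range(11)))
print(segment_in_P)  # True
```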


Dimensions

  • A polyhedron P is of dimension k, denoted dim(P) = k, if the maximum number of affinely independent points in P is k + 1.

  • A polyhedron P  Rn is full-dimensional if dim(P) = n.

  • Let

    • M = {1, ..., m},

    • M= = {i  M : aix = bix  P} (the equality set),

    • M = M \ M= (the inequality set).

    • (A=, b=); (A, b) be the corresponding rows of (A, b).

  • If P  Rn, then dim(P) + rank(A=, b=) = n.


Inner (interior) points

  • x  P is called an inner point of P if aix < bii  M.

  • x  P is called an interior point of P if aix < bii  M.

  • Every nonempty polyhedron has an inner point.

  • A polyhedron has an interior point iff it is full-dimensional.


Valid inequalities

  • The inequality denoted by (π, π0) is called a valid inequality for P if πx ≤ π0 ∀ x ∈ P.

    • (π, π0) is a valid inequality iff P lies in the half-space {x ∈ Rn : πx ≤ π0} iff max{πx : x ∈ P} ≤ π0.

  • If (,0) is a valid inequality for P and F = {x  P : x = 0}, F is called a face of P and we say that (,0) represents or defines F.

    • A face is said to be proper if F ≠ ∅ and F ≠ P.

    • The face represented by (π, π0) is nonempty iff max{πx : x ∈ P} = π0.

    • If the face F is nonempty, we say it supports P.

    • The set of optimal solutions to an LP is always a face of the feasible region.
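For a polytope given by its extreme points, validity and the induced face can be checked by enumeration: (π, π0) is valid iff the maximum of πx over the vertices is at most π0, and the maximizing vertices form the face. A sketch on the unit square (our own example):

```python
# Vertices of the unit square P = conv{(0,0),(1,0),(0,1),(1,1)}
V = [(0, 0), (1, 0), (0, 1), (1, 1)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

pi, pi0 = (1, 1), 2           # candidate inequality x1 + x2 <= 2
best = max(dot(pi, v) for v in V)
valid = best <= pi0           # valid iff the max over vertices is <= pi0
face = [v for v in V if dot(pi, v) == pi0]
print(valid, face)            # True [(1, 1)] -- the face is the vertex (1,1)
```

Since the maximum equals π0 here, the face is nonempty and the inequality supports P.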


Descriptions

  • If P = {x  Rn: Ax  b}, then the inequalities corresponding to therows of (A, b) are called a description of P.

    • Every polyhedron has an infinite number of descriptions.

    • We assume that all inequalities are supporting.

  • If (, 0) and (, 0) are two valid inequalities for apolyhedron P  R+n, we say (, 0) dominates (, 0) if there existsu > 0 such that   u and 0 u0.

  • A valid inequality (, 0) is redundant in the descriptionof P if there exists a linear combination of the inequalities in thedescription that dominates (, 0).


Facets

  • A face F is said to be a facet of P if dim(F) = dim(P) – 1.

    • Facets are all we need to describe polyhedra.

  • If F is a facet of P, then in any description of P, there exists some inequality representing F.


Representations

  • Every full-dimensional polyhedron P has a unique (to within scalar multiplication) representation that consists of one inequality representing each facet of P.

  • If dim(P) = n – k with k > 0, then P is described by a maximal set of linearly independent rows of (A=, b=), as well as one inequality representing each facet of P.

  • If a facet F of P is represented by (π, π0), then the set of all representations of F is obtained by taking scalar multiples of (π, π0) plus linear combinations of the equality set of P.


Extreme points

  • x is an extreme point of P if there do not exist x1, x2 ∈ P, x1 ≠ x2, s.t. x = (1/2)x1 + (1/2)x2.

  • x is an extreme point of P iff x is a zero-dimensional face of P.

  • If (A, b) is a description of P ≠ ∅, and rank(A) = n − k, then P has a face of dimension k and no proper face of lower dimension.

  • P has an extreme point iff rank(A) = n.
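In R², the extreme points of {x : Ax ≤ b} can be enumerated by intersecting each pair of constraints (Cramer's rule) and keeping the feasible intersection points; each survivor is a zero-dimensional face. A sketch on a square (example data our own):

```python
from fractions import Fraction
from itertools import combinations

A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [2, 2, 0, 0]               # the square 0 <= x1, x2 <= 2

def feasible(x):
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b))

extreme = set()
for i, j in combinations(range(len(A)), 2):
    det = A[i][0] * A[j][1] - A[i][1] * A[j][0]
    if det == 0:
        continue               # parallel rows: no unique intersection point
    x = (Fraction(b[i] * A[j][1] - b[j] * A[i][1], det),
         Fraction(A[i][0] * b[j] - A[j][0] * b[i], det))
    if feasible(x):
        extreme.add(x)
print(sorted(extreme))         # the four corners of the square
```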


Extreme rays

  • Let P0 = {r ∈ Rn : Ar ≤ 0}. r ∈ P0 \ {0} is called a ray of P.

  • r is an extreme ray of P if there do not exist rays r1 and r2 of P, with r1 not a scalar multiple of r2, s.t. r = (1/2)r1 + (1/2)r2.

  • If P , then r is an extreme ray of P iff {r : R+} is a one-dimensional face of P0.

    • A polyhedron has a finite number of extreme points and extreme rays.


Polarity

  •  = {(, 0)  Rn+1 : Tx  0 x  P} is the polar of the polyhedron P = {x  Rn : Ax  b}.

  • Let P  Rn be a polyhedron with extreme points {xk}kK and extreme rays {rj}jJ. Then  = {(, 0)}is a (polyhedral) cone that satisfies :

    • Txk – 0  0 k  K

    • Trj  0 j  J


Polarity

  • Duality between P and Π

    • Assume dim(P) = n and rank(A) = n.

    • The facets of P are the extreme rays of the polar of P

    • Tx  0 defines a facet of  iff x is an extreme point of P

    • Tr  0 defines a facet of  iff r is an extreme ray of P


Polarity

  • If aTx  b is a valid inequality for P, b > 0

    • Scale each inequality by the RHS and rewrite the polytope as P = {x  Rn : Ax  1}.

  • The 1-polar of P is Π1 = {π ∈ Rn : πTxk ≤ 1 ∀ k ∈ K}, where {xk}k∈K are the extreme points of P.

  • If P = {x  Rn : Ax  1} is a full-dimensional polytope, then 1 is a full-dimensional polytope and P is the 1-polar of 1.


Polarity

  • If P is full-dimensional and bounded, and 0 is an interior point of P, then

    • P = {x : tx  1 for t T, {t}tT are the extreme points of 1}

    • 1 = { : xk  1 for k K, {xk}kK are the extreme points of P}

    • x*  P iff max{x* :   1}  1

    • π* ∈ Π1 iff max{π*x : x ∈ P} ≤ 1

  • Given a linear program, if we can solve the optimization problem in polynomial time, then we can solve the separation problem in polynomial time using polarity.
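These membership tests can be checked on the square [−1, 1]² = {x : Ax ≤ 1}, whose 1-polar works out to the cross-polytope with extreme points (±1, 0), (0, ±1). The maximum over a polytope is attained at an extreme point, so both tests reduce to finite maxima (the example data is our own):

```python
from fractions import Fraction

verts_P = [(1, 1), (1, -1), (-1, 1), (-1, -1)]   # extreme points of P
verts_Pi1 = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # extreme points of its 1-polar

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def in_P(x):      # x in P  iff  max{pi.x : pi extreme point of Pi1} <= 1
    return max(dot(p, x) for p in verts_Pi1) <= 1

def in_Pi1(pi):   # pi in Pi1  iff  pi.xk <= 1 for every extreme point of P
    return max(dot(pi, x) for x in verts_P) <= 1

assert in_P((Fraction(1, 2), Fraction(1, 2))) and not in_P((2, 0))
assert in_Pi1((Fraction(1, 2), Fraction(1, 2))) and not in_Pi1((1, 1))
print("1-polar membership tests pass")
```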


Ellipsoid algorithm

  • First polynomial-time algorithm for linear programming

  • Computationally impractical, but provides a connection between separation and optimization problems

    • Efficient Optimization Property

      • For a given class of optimization problems (P) max{cx : x ∈ X ⊆ Rn}, there exists an efficient (polynomial) algorithm.

    • Efficient Separation Property

      • There exists an efficient algorithm for the separation problem associated with the problem class.


Ellipsoid Property

  • Ellipsoid with center y: E = E(D, y) = {x ∈ Rn : (x − y)TD−1(x − y) ≤ 1}, where D is an n×n positive definite matrix.

  • Ellipsoid property

    • Given an ellipsoid E = E(D, y), the half-ellipsoid H = E ∩ {x ∈ Rn : dx ≤ dy} obtained from any inequality dx ≤ dy through its center is contained in an ellipsoid E′ with the property that vol(E′) / vol(E) ≤ e^(−1/(2(n+1))).

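The shrink factor also bounds the iteration count: driving the volume from vol(E0) down to a target v takes at most 2(n+1)·ln(vol(E0)/v) cuts. A quick computation (the numbers are our own illustration):

```python
import math

n = 2
shrink = math.exp(-1 / (2 * (n + 1)))   # per-step volume ratio bound, < 1
# iterations needed to shrink an initial volume V0 below a target v:
V0, v = 1000.0, 1.0
steps = math.ceil(2 * (n + 1) * math.log(V0 / v))
print(round(shrink, 4), steps)
```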


Ellipsoid algorithm

1. Find an ellipsoid E0 ⊇ P.

2. Find the center x0 of E0.

3. Test whether x0 ∈ P.

4. If x0 ∈ P, stop. Otherwise, find a violated inequality (π, π0) passing through x0.

5. From (π, π0), get a half-ellipsoid HE ⊇ P.

6. Find a new ellipsoid E1 ⊇ HE s.t. vol(E1) / vol(E0) ≤ e^(−1/(2(n+1))) < 1.

7. Set E0 := E1 and go to 2.
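The loop above can be sketched for the feasibility question in R², maintaining the pair (y, D) and applying the standard center and shape updates for the new ellipsoid. A minimal floating-point sketch (constraint data, starting ball, and iteration cap are our own; a serious implementation needs careful numerics):

```python
import math

# Find a point in P = {x : Ax <= b}, here the box 1 <= x1, x2 <= 2
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [2, 2, -1, -1]

def violated_row(y):
    """Return a row of A whose inequality y violates, or None if y is in P."""
    for row, bi in zip(A, b):
        if sum(a * yi for a, yi in zip(row, y)) > bi:
            return row
    return None

n = 2
y = [0.0, 0.0]                        # center of an initial ball E0 containing P
D = [[100.0, 0.0], [0.0, 100.0]]      # E0 = ball of radius 10

for _ in range(200):
    d = violated_row(y)
    if d is None:
        break                         # step 4: the current center is in P
    # steps 5-6: cut with d.x <= d.y and build the smaller ellipsoid
    Dd = [sum(D[i][j] * d[j] for j in range(n)) for i in range(n)]
    norm = math.sqrt(sum(d[i] * Dd[i] for i in range(n)))
    y = [y[i] - Dd[i] / ((n + 1) * norm) for i in range(n)]
    D = [[(n * n / (n * n - 1.0)) *
          (D[i][j] - 2.0 * Dd[i] * Dd[j] / ((n + 1) * norm * norm))
          for j in range(n)] for i in range(n)]

print(y, violated_row(y) is None)     # a feasible point of the box
```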


Ellipsoid method

  • Shrinking ellipsoid

  • In a polynomial number of steps, we can either

    • find a point x in P, or

    • show that P is empty.

  • Given a linear program, if we can solve the separation problem in polynomial time, then we can solve the optimization problem in polynomial time using the ellipsoid algorithm.


Equivalence of separation and optimization

Ellipsoid

  • Separate over P in polynomial time ⇒ Solve LP over P in polynomial time

    Polarity

  • Solve LP over P in polynomial time ⇒ Separate over Π1 in polynomial time

    Ellipsoid

  • Separate over Π1 in polynomial time ⇒ Solve LP over Π1 in polynomial time

    Polarity

  • Solve LP over Π1 in polynomial time ⇒ Separate over P in polynomial time


Today

  • Polynomial equivalence of separation and optimization problems

    • Polarity

    • Ellipsoid algorithm

