Boundary oracle and its use in convex optimization
Boundary oracle and its use in convex optimization

Boris Polyak

Institute for Control Science

Moscow, Russia

Luminy, April 2007


Contents

  • Convex optimization problem

  • Boundary oracle

  • Examples

  • Center of gravity and Radon theorem

  • Algorithm 1: Hit-and-Run

  • Algorithm 2: Stochastic approximation

  • Algorithm 3: Monte-Carlo

  • Implementation

  • Simulation

  • Future directions


Convex optimization problem

max (c,x) s.t. x ∈ K

K is a convex closed bounded set in R^n

[Figure: the set K with optimal point x* and a direction d]

Main problem: random vs deterministic algorithms


Boundary oracle

Given x ∈ K, d ∈ R^n:

T = max{t : x + td ∈ K}

B = x + Td = B(x, d, K)

L is a support vector to K at B: (L,B) ≥ (L,y) ∀ y ∈ K

{B, L} – boundary oracle

[Figure: a ray from x in direction d hits the boundary of K at B; L supports K at B]
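As a minimal illustration (not from the slides), the boundary oracle has a closed form for a Euclidean ball; the function name and interface below are illustrative:

```python
import numpy as np

def ball_boundary_oracle(x, d, center, radius):
    """Boundary oracle {B, L} for K = {y : ||y - center|| <= radius}.

    Returns T = max{t : x + t*d in K}, the boundary point B = x + T*d,
    and a support vector L to K at B (the outward normal at B)."""
    u = x - center
    # Largest root t of ||u + t*d||^2 = radius^2 (x is assumed inside K).
    a = d @ d
    b = 2 * u @ d
    c = u @ u - radius**2
    T = (-b + np.sqrt(max(b**2 - 4 * a * c, 0.0))) / (2 * a)
    B = x + T * d
    L = B - center   # outward normal: (L,B) >= (L,y) for all y in K
    return T, B, L
```

For example, from the center of the unit ball along the first axis the oracle returns T = 1, B = (1, 0), L = (1, 0).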


Examples of K

Boundary oracle is available for many sets K:

  • Linear inequalities (LI): Ax ≤ b

  • Quadratic inequalities (QI): (A_i x, x) + (b_i, x) ≤ g_i, i = 1,…,m

  • Linear matrix inequalities (LMI): A_0 + Σ x_i A_i ⪯ 0

    Solution: A = A_0 + Σ x_i A_i, C = –Σ d_i A_i, λ_i = eig(A, C),
    T = λ_k = min{λ_i : λ_i ≥ 0}, B = x + Td, e_k – the eigenvector corresponding
    to λ_k, L = ((A_1 e_k, e_k), …, (A_n e_k, e_k))

  • Conic quadratic inequalities (CQI): ||A_i x + b_i|| ≤ (c_i, x) + g_i

  • Nonnegative polynomials: x(s) = x_0 + x_1 s + … + x_n s^n ≥ 0 ∀ s

    Solution: let s_i be the real roots of the polynomial x(s)d′(s) – x′(s)d(s).
    Then T = min{–x(s_i)/d(s_i) : d(s_i) < 0}, b(s) = x(s) + Td(s), s* – the single
    real root of b(s), L = (1, s*, …, s*^n).
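For the simplest case, linear inequalities Ax ≤ b, the oracle is a ratio test over the constraint rows. A minimal sketch (function name illustrative):

```python
import numpy as np

def li_boundary_oracle(x, d, A, b):
    """Boundary oracle for K = {y : A y <= b}.

    T is the largest step along d keeping x + t*d in K, B = x + T*d,
    and L is the normal of the constraint that becomes active at B."""
    Ad = A @ d
    slack = b - A @ x              # nonnegative since x is in K
    mask = Ad > 0                  # only these rows can be hit moving along d
    steps = slack[mask] / Ad[mask]
    i = np.argmin(steps)
    T = steps[i]
    B = x + T * d
    L = A[mask][i]                 # support vector: (L,B) >= (L,y) on K
    return T, B, L
```

For the box [-1,1]^2 and d = (1, 0) from the origin this gives T = 1, B = (1, 0), L = (1, 0).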


Robust versions of K

Boundary oracle can also be constructed for robust versions of the above sets. Examples:

  1. Robust linear inequalities (RLI): K = {x : (A + ΔA)x ≤ b + Δb, ∀ ΔA ∈ 𝒜, Δb ∈ ℬ}

  2. Robust linear matrix inequalities (RLMI): K = {x : A_0 + Δ_0 + Σ x_i(A_i + Δ_i) ⪯ 0, ∀ Δ_i ∈ 𝒟_i}


Center of gravity and Radon theorem

Let g be the center of gravity of a convex set K in R^n.

Radon theorem (1916)

If f+ = max_{x∈K} (c,x), f- = min_{x∈K} (c,x), f_g = (c,g),
then 1/n ≤ (f+ - f_g)/(f_g - f-) ≤ n

Worst case – a pyramid.

[Figure: pyramid; f_g divides the range [f-, f+] in proportion n/(n+1) to 1/(n+1)]
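The worst case can be checked numerically: for the standard simplex (a pyramid) conv{0, e_1, …, e_n} with c = e_1, the ratio equals n exactly. A small sketch (function name illustrative):

```python
import numpy as np

def radon_ratio(n):
    """Ratio (f+ - f_g) / (f_g - f-) for the standard simplex
    conv{0, e_1, ..., e_n} in R^n with objective c = e_1."""
    vertices = np.vstack([np.zeros(n), np.eye(n)])
    g = vertices.mean(axis=0)          # center of gravity of the simplex
    c = np.zeros(n)
    c[0] = 1.0
    vals = vertices @ c                # linear functional attains extrema at vertices
    f_plus, f_minus, f_g = vals.max(), vals.min(), c @ g
    return (f_plus - f_g) / (f_g - f_minus)

# The pyramid attains the upper bound n of the Radon theorem.
```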


Idea of algorithms

max xK (c,x)

Find approximate center of gravity g for K using random

search and boundary oracle. Then construct

Knew = K{x: (c,x)(c,g)}

For Knew boundary oracle is also available and we can proceed.

If fk is the value of (c,x) obtained at k-th iteration and f* is the

optimal value, we can expect geometric rate of convergence:

fk-f*(f0-f*)qk, q=n/(n+1)

Values of L can be exploited to calculate upper bound for f*.


Comparison with center of gravity and ellipsoids methods

MinxK f(x) f, K –convex, g – center of gravity for K

Knew = K{x: (f(g),x-g)0}

Newman (1965), Levin (1965)

Another validation : Vol(K)/Vol(Knew), Grunbaum theorem.

How to find g?

Ellipsoids method – K and Knew are placed into ellipsoids.

Lower rate of convergence.


Algorithm 1: Hit-and-Run

Hit-and-Run (Smith 1984, Lovász 1999) is an algorithm for generating uniformly distributed points in a convex set K. It exploits the boundary oracle. The arithmetic mean of the points can be taken as an approximation of g.

This method was used for convex minimization by Bertsimas and Vempala (2004).

The simplest and well-working version was: generate points x_k recursively:

x_{k+1} = r_k B(x_k, d_k) + (1 - r_k) B(x_k, -d_k),  r_k = rand

where d_k is a random direction uniformly distributed on the unit sphere. Then after N samples the estimate for g is

g_N = (1/N) Σ x_k


Algorithm 2: Stochastic approximation

x_{k+1} = x_k + γ_k T_k d_k,  γ_k ~ 1/k,  T_k = T(x_k, d_k)

d_k is random, uniformly distributed on the unit sphere.

Then x_k → g.

There are many versions of the algorithm.
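A sketch of one such version for the ball, where g is the center; this is an assumption-laden toy (step size γ_k = 1/k, explicit oracle), not the authors' implementation. Note γ_k ≤ 1 guarantees x_k stays in K, since T_k is exactly the distance to the boundary along d_k:

```python
import numpy as np

rng = np.random.default_rng(1)

def sa_center(x0, center, radius, N):
    """x_{k+1} = x_k + (1/k) T_k d_k for K = {y : ||y - center|| <= radius}.

    For the ball, E[T(x,d) d] = -(x - center)/n, so the iterates drift
    toward the center of gravity g = center."""
    x = x0.astype(float)
    n = x.size
    for k in range(1, N + 1):
        d = rng.standard_normal(n)
        d /= np.linalg.norm(d)                      # unit direction
        u = x - center
        b, c = 2 * u @ d, u @ u - radius**2
        T = (-b + np.sqrt(max(b**2 - 4 * c, 0.0))) / 2   # T(x_k, d_k)
        x = x + (1.0 / k) * T * d
    return x
```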


Algorithm 3: Monte-Carlo

g_k is the center of gravity of the element of volume dV_k:

g_k = a + (n/(n+1)) T_k d_k

dV_k = γ T_k^n dS_k

g = ∫ g_k dV_k / ∫ dV_k

Thus the estimate for g after N samples generated is

g_N = a + (n/(n+1)) Σ T_k^{n+1} d_k / Σ T_k^n

[Figure: a ray from a in direction d_k reaches the boundary at B_k = a + T_k d_k; g_k lies on this ray]


General scheme of the method

  • Choose aK, e>0.

  • Starting from a, generate approximation gN and oracles

  • Bk, Lk, k=1,…,N. Calculate f–=max (c,Bk), f+=max {(c,x):

  • (Lk, x-Bk)_0}.

  • If f+- f–<e, stop. Else

  • KK{x: (c,x-gN) 0 }, agN.


Implementation

  • Finding g_N works best if K is ball-like and a is close to its center. Hence we can reshape K using the transformation W = (1/N) Σ (B_k - a)(B_k - a)^T.

  • Accelerating step. We can use relaxation: if G_i, G_{i+1} are two successive estimates for the center of gravity, then a = G_{i+1} + λ(G_{i+1} - G_i), λ > 0, is a better approximation for a.

  • Choosing N adaptively.

  • Final polishing.

  • + many other tricks to accelerate convergence.
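The reshaping matrix W is just the second-moment matrix of the boundary points, so it can be sketched directly; the whitening step via a Cholesky factor is one plausible way to use it (the slides do not specify the exact transformation), shown here for an elongated box:

```python
import numpy as np

rng = np.random.default_rng(4)

# K = [-10, 10] x [-1, 1]: strongly elongated along the first axis.
a = np.zeros(2)
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([10.0, 1.0, 10.0, 1.0])

# Collect boundary points B_k by casting random rays from a.
Bks = []
for _ in range(500):
    d = rng.standard_normal(2)
    d /= np.linalg.norm(d)
    Ad, slack = A @ d, b - A @ a
    mask = Ad > 0
    T = (slack[mask] / Ad[mask]).min()
    Bks.append(a + T * d)
Bks = np.array(Bks)

# W = (1/N) sum (B_k - a)(B_k - a)^T: the shape of K as seen from a.
W = (Bks - a).T @ (Bks - a) / len(Bks)
C = np.linalg.cholesky(W)
# Searching along directions C @ d amounts to working in coordinates
# where K is roughly ball-like, which is where finding g_N works best.
```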


Simulation results – SDP

SDP: max (c,x) s.t. LMI constraint A(x) ⪯ 0,

A(x) = A_0 + x_1 A_1 + … + x_n A_n,  A_i – m x m matrices

Results comparable with standard SDP solvers (YALMIP):

n ~ 40, m ~ 40, N ~ 1000, k ~ 15, ε ~ 10^-8

n ~ 300, m ~ 10, N ~ 2000, k ~ 15, ε ~ 10^-7

n ~ 10, m ~ 100, N ~ 200, k ~ 15, ε ~ 10^-9

Simulation for robust SDP (low-dimensional)



Future directions

  • Rigorous validation (complexity estimation)

  • Random algorithms for LP

  • Convex problems lacking interior-point methods (nonnegative polynomials?)

  • In general: can random algorithms compete with deterministic ones in convex optimization?

