The Smoothed Analysis of Algorithms

Daniel A. Spielman

MIT

With Shang-Hua Teng (Boston University), John Dunagan (Microsoft Research), and Arvind Sankar (Goldman Sachs)


Outline

Why?

What?

The Simplex Method

Gaussian Elimination

Other Problems

Conclusion


Problem: heuristics that work in practice, but with no sound theoretical explanation.

Exponential worst-case complexity, yet fast in practice.

Polynomial worst-case complexity, yet much faster in practice.

A heuristic that speeds up the code, yet has poor worst-case behavior.


Attempted resolution: average-case analysis.

Measure the expected performance on random inputs.


Random is not typical.


Critique of Average-case Analysis

Random objects have very special properties with exponentially high probability.

Actual inputs might not look random.


Smoothed Analysis: a hybrid of worst and average case

Worst case: $\max_{x} T(x)$

Average case: $\mathbb{E}_{r}\bigl[T(r)\bigr]$

Smoothed complexity: $\max_{x}\,\mathbb{E}_{g}\bigl[T(x + g)\bigr]$, where $g$ is Gaussian of standard deviation $\sigma$


Smoothed Complexity

Interpolates between the worst case and the average case.

Considers a neighborhood of every input.

If the smoothed complexity is low, then all bad inputs are unstable: small perturbations of them are easy.
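
Informally, and ignoring the norm constraints on the center $x$, the "interpolates" claim can be written out as a sandwich (a sketch using the definition from the previous slide):

% Average case (pure Gaussian input) <= smoothed <= worst case:
\[
  \mathbb{E}_{g \sim \mathcal{N}(0,\sigma^{2} I)}\bigl[T(g)\bigr]
  \;\le\;
  \max_{x}\; \mathbb{E}_{g \sim \mathcal{N}(0,\sigma^{2} I)}\bigl[T(x+g)\bigr]
  \;\le\;
  \max_{y}\; T(y).
\]
% As sigma -> 0 the middle quantity climbs toward the worst case;
% as sigma grows, x is swamped by the noise and it falls toward the average case.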


Complexity Landscape

[figure: running time plotted over the input space, with the worst case and the average case marked]

Smoothed Complexity Landscape (convolved with a Gaussian)

[figure: the same running-time landscape convolved with a Gaussian; its maximum is the smoothed complexity]


Classical Example: Simplex Method for Linear Programming

$\max\ c^{\mathsf T}x \quad \text{s.t.}\quad Ax \le b$

Worst case: exponential

Average case: polynomial

Widely used in practice




Smoothed Analysis of the Simplex Method

Original LP:  $\max\ c^{\mathsf T}x \quad \text{s.t.}\quad Ax \le b$

Perturbed LP:  $\max\ c^{\mathsf T}x \quad \text{s.t.}\quad (A + G)\,x \le b$, where $G$ is Gaussian of standard deviation $\sigma$

Theorem: For all $A$, $b$, $c$, the simplex method takes expected time polynomial in the dimensions of $A$ and in $1/\sigma$.
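
A minimal MATLAB sketch of this perturbation model (not of the analysis itself): the sizes and LP data below are placeholders, linprog from the Optimization Toolbox is assumed to be available just to solve the resulting LP, and a box constraint is added to keep the random LP bounded.

% Perturb the constraint matrix by Gaussian noise of standard
% deviation sigma, then solve the perturbed linear program
%   max c'*x   s.t.  (A + sigma*G)*x <= b
% (linprog minimizes, so we pass -c).
m = 50;  n = 20;  sigma = 1e-2;            % placeholder sizes
A = randn(m, n);  b = ones(m, 1);          % placeholder LP data
c = randn(n, 1);
G = randn(m, n);                           % standard Gaussian perturbation
Ap = A + sigma * G;
lb = -10*ones(n, 1);  ub = 10*ones(n, 1);  % box to keep the LP bounded
x = linprog(-c, Ap, b, [], [], lb, ub);    % solve the perturbed LP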


Analysis of the Simplex Method

Using the shadow-vertex pivot rule


Shadow-vertex pivot rule

[figure: the feasible polytope, with the start vertex and the objective direction marked]
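
A brief sketch of the idea behind the rule (the standard description, not taken verbatim from the slides): fix a start objective $u$ that is optimized by the known start vertex, and project the feasible polytope $P$ onto the plane spanned by $u$ and the true objective $c$:

\[
  \mathrm{shadow}_{u,c}(P) \;=\; \bigl\{\, (u^{\mathsf T}x,\ c^{\mathsf T}x) \;:\; x \in P \,\bigr\} \;\subseteq\; \mathbb{R}^{2}.
\]

The rule pivots along the boundary of this two-dimensional shadow polygon from the $u$-optimal vertex to the $c$-optimal vertex, so the number of pivot steps is at most the number of vertices of the shadow.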



Polar Form of Linear Programming

$\max\ \lambda \quad \text{s.t.}\quad \lambda c \in \mathrm{ConvexHull}(a_1, a_2, \ldots, a_m)$



Count facets by discretizing to $N$ directions, $N \to \infty$


Count pairs in different facets

$\Pr\bigl[\,\text{consecutive directions lie in different facets}\,\bigr] \;<\; c/N$

So, expect at most $c$ facets.
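
Spelling out the expectation step (a sketch; $F$ here denotes the number of the $N$ consecutive direction pairs that straddle two different facets, which bounds the number of facets seen):

\[
  \mathbb{E}[F] \;=\; \sum_{i=1}^{N} \Pr\bigl[\text{directions } i \text{ and } i+1 \text{ lie in different facets}\bigr]
  \;<\; N \cdot \frac{c}{N} \;=\; c .
\]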



[figure: the two geometric quantities driving the probability bound: an angle and a distance]



Smoothed Analysis of the Simplex Method

Original LP:  $\max\ c^{\mathsf T}x \quad \text{s.t.}\quad Ax \le b$

Perturbed LP:  $\max\ c^{\mathsf T}x \quad \text{s.t.}\quad (A + G)\,x \le b$, where $G$ is Gaussian of standard deviation $\sigma$

Theorem: For all $A$, $b$, $c$, the simplex method takes expected time polynomial in the dimensions of $A$ and in $1/\sigma$.


Interior Point Methods for Linear Programming

Number of iterations, by analysis method:

Observation
Worst case, upper bound
Worst case, lower bound
Average case
Smoothed, upper bound   [Dunagan-S-Teng], [S-Teng]
Conjecture



Gaussian Elimination for Ax = b

>> A = randn(2)

A =

-0.4326 0.1253

-1.6656 0.2877

>> b = randn(2,1)

b =

-1.1465

1.1909

>> x = A \ b

x =

-5.6821

-28.7583

>> norm(A*x - b)

ans =

8.0059e-016




Gaussian Elimination for Ax = b

>> A = 2*eye(70) - tril(ones(70));
>> A(:,70) = 1;
>> b = randn(70,1);
>> x = A \ b;
>> norm(A*x - b)
ans =
   3.5340e+004

Failed!

Perturb A:

>> Ap = A + randn(70) / 10^9;
>> x = Ap \ b;
>> norm(Ap*x - b)
ans =
   5.8950e-015

>> norm(A*x - b)
ans =
   3.6802e-008

Solved original too!
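
A minimal sketch of why the unperturbed solve fails, assuming backslash uses LU with partial pivoting on this matrix: the eliminated entries grow by a factor of roughly 2^(n-1), wiping out most of the sixteen digits of double precision. (The growth measure below, max|U| / max|A|, is just the usual rough estimate.)

% Growth factor of Gaussian elimination with partial pivoting
% on the 70x70 matrix from the example above.
n = 70;
A = 2*eye(n) - tril(ones(n));
A(:,n) = 1;
[L, U, P] = lu(A);                          % LU with partial pivoting
growth = max(abs(U(:))) / max(abs(A(:)))    % roughly 2^(n-1) = 2^69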








Gaussian Elimination with Partial Pivoting

A fast heuristic for maintaining precision by trying to keep the entries small:

pivot not just to avoid zeros, but to bring the entry of largest magnitude in the current column up to the pivot position.
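
A minimal sketch of elimination with this pivot rule (not the implementation behind MATLAB's backslash; the function name and interface are my own, for illustration):

function [L, U, p] = ge_partial_pivot(A)
% Gaussian elimination with partial pivoting on a square matrix A.
% Returns unit lower-triangular L, upper-triangular U, and a row
% permutation vector p such that A(p,:) = L*U.
n = size(A, 1);
p = (1:n)';
L = eye(n);
U = A;
for k = 1:n-1
    % Partial pivoting: bring the largest-magnitude entry of
    % column k (among rows k..n) up to the pivot position.
    [~, r] = max(abs(U(k:n, k)));
    r = r + k - 1;
    if r ~= k
        U([k r], :)     = U([r k], :);
        L([k r], 1:k-1) = L([r k], 1:k-1);
        p([k r])        = p([r k]);
    end
    % Eliminate the entries below the pivot.
    for i = k+1:n
        L(i, k)   = U(i, k) / U(k, k);
        U(i, k:n) = U(i, k:n) - L(i, k) * U(k, k:n);
    end
end
end

For example, after [L, U, p] = ge_partial_pivot(A), the residual norm(A(p,:) - L*U) should be at the level of roundoff.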




Gaussian Elimination with Partial Pivoting

“Gaussian elimination with partial pivoting is utterly stable in practice. In fifty years of computing, no matrix problems that excite an explosive instability are known to have arisen under natural circumstances … Matrices with large growth factors are vanishingly rare in applications.”

— Nick Trefethen

Theorem [Sankar-S-Teng]


Mesh Generation

Parallel complexity of Ruppert's Delaunay refinement is $O\bigl((\log (n/s))^{2}\bigr)$.

[Spielman-Teng-Üngör]


Other Smoothed Analyses

Perceptron  [Blum-Dunagan]

Quicksort  [Banderier-Beier-Mehlhorn]

Parallel connectivity in digraphs  [Frieze-Flaxman]

Complex Gaussian elimination  [Yeung]

Smoothed analysis of κ(A)  [Wschebor]

On smoothed analysis in dense graphs and formulas  [Krivelevich-Sudakov-Tetali]

Smoothed number of extreme points under uniform noise  [Damerow-Sohler]

Typical properties of winners and losers in discrete optimization  [Beier-Vöcking]

Multi-level feedback scheduling  [Becchetti-Leonardi-Marchetti-Schäfer-Vredeveld]

Smoothed motion complexity  [Damerow, Meyer auf der Heide, Räcke, Scheideler, Sohler]


Future Smoothed Analyses

Multilevel graph partitioning: smoothed analysis of Chaco and METIS

Differential evolution and other optimization heuristics

Computing Nash equilibria


Future Smoothed Analyses

Perturb less!
  Preserve zeros
  Preserve magnitudes of numbers
  Property-preserving perturbations

More discrete smoothed analyses

New algorithms

For more, see the Smoothed Analysis Homepage.

