Phase transition behaviour
Toby Walsh
Dept of CS, University of York

Outline
What have phase transitions to do with computation?
How can you observe such behaviour in your favourite problem?
Is it confined to random and/or NP-complete problems?
Can we build better algorithms using knowledge about phase transition behaviour?
What open questions remain?
Achlioptas, Chayes, Dunne, Gent, Gomes, Hogg, Hoos, Kautz, Mitchell, Prosser, Selman, Smith, Stergiou, Stutzle, … Walsh
A little history ...
In 1991, I had just finished my PhD and was looking for some new research topics!
Enough of the history, what has this got to do with computation?
Ice melts. Steam condenses. Now that’s a proper phase transition ...
(x1 v x2) & (-x2 v x3 v -x4)
x1/ True, x2/ False, ...
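As an aside (not part of the original slides), here is a minimal brute-force check of this toy formula; the clause encoding and variable names are just my own illustration.

```python
from itertools import product

# The slide's formula (x1 v x2) & (-x2 v x3 v -x4),
# encoded as lists of (variable index, required sign) pairs.
clauses = [[(1, True), (2, True)],
           [(2, False), (3, True), (4, False)]]

def satisfies(assignment, clauses):
    """True if the 1-indexed truth assignment satisfies every clause."""
    return all(any(assignment[var] == sign for var, sign in clause)
               for clause in clauses)

# Enumerate all 2^4 assignments and print the satisfying ones,
# e.g. x1/True, x2/False, ... as on the slide.
for values in product([True, False], repeat=4):
    assignment = dict(enumerate(values, start=1))
    if satisfies(assignment, clauses):
        print(assignment)
```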
What happens with larger problems?
Why are some dots red and others blue?
What’s so special about 4.3?
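As a sketch of how one might see this for oneself (my own code, not the speaker's), the fragment below generates random 3-SAT instances at a range of clause-to-variable ratios and estimates the probability of satisfiability with a tiny DPLL solver; the instance size n = 30 and 50 samples per point are arbitrary illustrative choices.

```python
import random

def random_3sat(n, m):
    """m random 3-clauses over variables 1..n, each literal negated with prob 1/2."""
    return [[v if random.random() < 0.5 else -v
             for v in random.sample(range(1, n + 1), 3)]
            for _ in range(m)]

def simplify(clauses, lit):
    """Set lit to True: drop satisfied clauses, shorten the rest; None on an empty clause."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        c = [l for l in c if l != -lit]
        if not c:
            return None
        out.append(c)
    return out

def dpll(clauses):
    """Naive DPLL with unit propagation; True iff the clauses are satisfiable."""
    if not clauses:
        return True
    for c in clauses:
        if len(c) == 1:                       # unit propagation
            reduced = simplify(clauses, c[0])
            return reduced is not None and dpll(reduced)
    lit = clauses[0][0]                       # naive branching choice
    return any(r is not None and dpll(r)
               for r in (simplify(clauses, lit), simplify(clauses, -lit)))

n, samples = 30, 50
for ratio in (3.0, 3.5, 4.0, 4.3, 4.6, 5.0, 5.5):
    sat = sum(dpll(random_3sat(n, round(ratio * n))) for _ in range(samples))
    print(f"m/n = {ratio:.1f}   prob(sat) ~ {sat / samples:.2f}")
```

Around m/n of roughly 4.3 the estimated probability should fall from near 1 to near 0, and the solver's running time peaks in the same region.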
Little evidence yet to support any of these claims!
No, you just need a suitable ensemble of problems to sample from.
Ignoring trivial cases (like O(1) algorithms)
How do you identify phase transition behaviour in your favourite problem?
dividing a bag of numbers into two so their sums are as balanced as possible
other distributions work (Poisson, …)
e.g. ignores structural features that cluster solutions together
kappa = 1 - log2(<Sol>)/n
where <Sol> is the expected number of solutions and n is the dimension of the state space
Now, kappa > 1 implies <Sol> < 1
Hence, kappa > 1 implies prob(Sol) < 1, since by the Markov inequality prob(soluble) <= <Sol>
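To make this concrete, here is a small sketch of mine, under the standard annealed assumption that for random k-SAT <Sol> = 2^n (1 - 2^-k)^m; kappa then simplifies to -(m/n) log2(1 - 2^-k), so it depends only on the ratio m/n.

```python
from math import log2

def kappa_random_ksat(n, m, k):
    """kappa = 1 - log2(<Sol>)/n with <Sol> = 2^n * (1 - 2^-k)^m."""
    log2_sol = n + m * log2(1 - 2.0 ** -k)
    return 1 - log2_sol / n

# For 3-SAT, kappa crosses 1 near m/n ~ 5.2; the observed
# satisfiability transition sits lower, around m/n ~ 4.3.
n = 100
for ratio in (3.0, 4.3, 5.2, 6.0):
    print(f"m/n = {ratio}:  kappa = {kappa_random_ksat(n, round(ratio * n), 3):.2f}")
```

On this estimate, kappa > 1 only gives an upper bound on where the transition can lie (prob(sat) < 1 beyond it); the 4.3 on the earlier slides is the empirically observed crossover.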
Prob(perfect partition) against kappa
Prob(perfect partition) against gamma
Optimization cost against gamma
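A rough sketch (my own) of how the first of these curves can be reproduced on small instances: draw n numbers uniformly below 2^b, test exactly for a perfect partition (difference 0, or 1 for odd totals), and take kappa roughly equal to b/n following the Gent & Walsh analysis cited at the end; n = 14 and 100 samples per point are only illustrative choices.

```python
import random

def has_perfect_partition(nums):
    """Exact subset-sum check: can the bag be split with difference 0 (or 1 if the total is odd)?"""
    target = sum(nums) // 2
    reachable = {0}                      # subset sums not exceeding target
    for a in nums:
        reachable |= {r + a for r in reachable if r + a <= target}
    return target in reachable

n, samples = 14, 100
for bits in (8, 10, 12, 14, 16, 18):
    hits = sum(has_perfect_partition([random.randrange(1, 2 ** bits) for _ in range(n)])
               for _ in range(samples))
    print(f"kappa ~ {bits / n:.2f}   prob(perfect partition) ~ {hits / samples:.2f}")
```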
What do we understand about problem hardness at the phase boundary?
How can this help build better algorithms?
=> problems become less constrained
Aside: can anyone explain simple scaling?
l/n against depth/n
=> problems become more constrained
Aside: why is there again such simple scaling?
Clause length, k against depth/n
kappa against depth/n
must take the branching rate into account; max-kappa is therefore often not a good move!
discontinuity at phase boundary!
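As a toy illustration (mine, not from the talk) of tracking kappa down a branch: fix variables one at a time, simplify the clauses, and recompute kappa of the residual formula from the remaining clause lengths. Whether the trace stays pinned near the phase boundary, as the knife-edge picture suggests, depends on the branching heuristic; the random choices below are only a placeholder.

```python
import random
from math import log2

def random_3sat(n, m):
    return [[v if random.random() < 0.5 else -v
             for v in random.sample(range(1, n + 1), 3)]
            for _ in range(m)]

def residual_kappa(clauses, n_free):
    """kappa of the simplified formula, treating its clauses as independent."""
    return -sum(log2(1 - 2.0 ** -len(c)) for c in clauses) / n_free

def assign(clauses, lit):
    """Fix lit = True; return the simplified clauses, or None on an empty clause."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        c = [l for l in c if l != -lit]
        if not c:
            return None
        out.append(c)
    return out

n = 50
clauses = random_3sat(n, round(4.3 * n))
free = list(range(1, n + 1))
for depth in range(n):
    if clauses is None or not clauses or not free:
        break
    print(f"depth/n = {depth / n:.2f}   kappa = {residual_kappa(clauses, len(free)):.2f}")
    var = random.choice(free)                 # placeholder branching choice
    free.remove(var)
    clauses = assign(clauses, var if random.random() < 0.5 else -var)
```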
Can we make algorithms that identify and exploit the backbone structure of a problem?
Search cost against n
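The backbone is the set of variables frozen to the same value in every solution. Here is a brute-force identification sketch for toy instances (my own illustration; backbone detection at scale needs a SAT solver rather than enumeration).

```python
from itertools import product

def solutions(clauses, n):
    """All satisfying assignments of a clause list (signed integer literals) over variables 1..n."""
    for values in product([True, False], repeat=n):
        assignment = dict(enumerate(values, start=1))
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield assignment

def backbone(clauses, n):
    """Variables (with their forced value) that take the same value in every model."""
    sols = list(solutions(clauses, n))
    if not sols:
        return None                        # insoluble: no backbone to speak of
    return {v: sols[0][v] for v in range(1, n + 1)
            if all(s[v] == sols[0][v] for s in sols)}

# Toy formula: x1 is frozen True; x2 and x3 can each go either way.
print(backbone([[1], [2, 3], [-2, -3]], 3))   # -> {1: True}
```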
Can we model structural features not found in uniform random problems?
How does such structure affect our algorithms and phase transition behaviour?
Can we identify structural features common in real world problems?
dense random graphs contain lots of (rare?) structure
Real graphs tend to have short path lengths
as do random graphs
Real graphs tend to be clustered
unlike sparse random graphs
L, average path length
C, clustering coefficient
(fraction of neighbours connected to each other, cliqueness measure)
mu, proximity ratio, is C/L normalized by that of a random graph of the same size and density

Real versus Random
real problems often contain similar structure and stochastic components?
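A sketch of these measurements (my own, assuming the networkx library is available): compute C, L and the proximity ratio mu for a Watts-Strogatz small-world graph, normalising by a random graph with the same number of nodes and edges; the graph size and rewiring probability are arbitrary.

```python
import networkx as nx

def proximity_ratio(g):
    """mu = (C/L) divided by (C/L) of one random graph with the same size and density."""
    c, l = nx.average_clustering(g), nx.average_shortest_path_length(g)
    rand = nx.gnm_random_graph(g.number_of_nodes(), g.number_of_edges())
    rand = rand.subgraph(max(nx.connected_components(rand), key=len))  # keep paths defined
    c_r, l_r = nx.average_clustering(rand), nx.average_shortest_path_length(rand)
    return (c / l) / (c_r / l_r), c, l

# A ring lattice with a little random rewiring: clustered, yet short paths.
g = nx.connected_watts_strogatz_graph(n=500, k=10, p=0.05)
mu, c, l = proximity_ratio(g)
print(f"C = {c:.3f}   L = {l:.2f}   mu = {mu:.1f}")
```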
It’s not just small world graphs that have been studied
prob(leading digit = i) = log10(1 + 1/i)
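This is Benford's law. A quick check (my own example, not from the talk) against the leading digits of powers of 2, a standard Benford-distributed sequence:

```python
from collections import Counter
from math import log10

# Benford's law: prob(leading digit = i) = log10(1 + 1/i)
benford = {i: log10(1 + 1 / i) for i in range(1, 10)}

# Empirical leading-digit frequencies for 2^0 .. 2^4999.
counts = Counter(int(str(2 ** k)[0]) for k in range(5000))
for i in range(1, 10):
    print(f"digit {i}:  Benford {benford[i]:.3f}   powers of 2 {counts[i] / 5000:.3f}")
```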
What open questions remain?
Where to next?
That’s nearly all from me!
Cheeseman, Kanefsky, Taylor, Where the really hard problems are, Proc. of IJCAI-91
Gent et al, The Constrainedness of Search, Proc. of AAAI-96
Gent & Walsh, The TSP Phase Transition, Artificial Intelligence, 88:349-358, 1996
Gent & Walsh, Analysis of Heuristics for Number Partitioning, Computational Intelligence, 14 (3), 1998
Gent & Walsh, Beyond NP: The QSAT Phase Transition, Proc. of AAAI-99
Gent et al, Morphing: combining structure and randomness, Proc. of AAAI-99
Hogg, Huberman & Williams (eds), special issue of Artificial Intelligence, 81 (1-2), 1996
Mitchell, Selman, Levesque, Hard and Easy Distributions of SAT problems, Proc. of AAAI-92
Monasson et al, Determining computational complexity from characteristic ‘phase transitions’, Nature, 400, 1999
Walsh, Search in a Small World, Proc. of IJCAI-99
Watts & Strogatz, Collective dynamics of small world networks, Nature, 393, 1998