
Adaptive annealing: a near-optimal connection between sampling and counting


Presentation Transcript


  1. Adaptive annealing: a near-optimal connection between sampling and counting. Daniel Štefankovič (University of Rochester), Santosh Vempala, Eric Vigoda (Georgia Tech)

  2. Counting: independent sets, spanning trees, matchings, perfect matchings, k-colorings.

  3. Compute the number of independent sets of a graph. Independent set = subset S of vertices such that no two vertices in S are neighbors (hard-core gas model).

  4. # independent sets = 7 [figure: small example graph]. Independent set = subset S of vertices, no two in S are neighbors.

  5. # independent sets = 5598861 [figure: larger example graph]. Independent set = subset S of vertices, no two in S are neighbors.

  6. graph G → # independent sets in G is #P-complete, even for 3-regular graphs (Dyer, Greenhill, 1997).

  7. graph G → # independent sets in G? Approach: approximation + randomization.

  8. We would like to know Q. Goal: a random variable Y such that P( (1−ε)Q ≤ Y ≤ (1+ε)Q ) ≥ 1 − δ; "Y gives a (1±ε)-estimate".

  9. (approx) counting ⇐ sampling: Valleau, Card '72 (physical chemistry), Babai '79 (for matchings and colorings), Jerrum, Valiant, V. Vazirani '86. The outcome of the JVV reduction: random variables X1, X2, ..., Xt such that 1) E[X1 X2 ⋯ Xt] = "WANTED", and 2) the Xi are easy to estimate: the squared coefficient of variation (SCV) V[Xi]/E[Xi]² is O(1).

  10. (approx) counting ⇐ sampling: 1) E[X1 X2 ⋯ Xt] = "WANTED", 2) the Xi are easy to estimate: V[Xi]/E[Xi]² = O(1). Theorem (Dyer-Frieze '91): O(t²/ε²) samples in total (O(t/ε²) from each Xi) give a (1±ε)-estimator of "WANTED" with probability ≥ 3/4.
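To make the Dyer-Frieze estimator concrete, here is a minimal Python sketch (my illustration, not code from the talk; the constant 16 is an arbitrary stand-in for the hidden constant): average O(t/ε²) samples of each Xi, then multiply the averages.

```python
# Sketch of the Dyer-Frieze product estimator: for X_1..X_t with bounded
# SCV, averaging O(t/eps^2) samples of each and multiplying the averages
# gives a (1 +/- eps)-estimate of E[X_1]...E[X_t] with probability >= 3/4.
import random

def product_estimator(samplers, eps):
    """samplers: list of t zero-argument functions, each drawing one X_i."""
    t = len(samplers)
    m = max(1, int(16 * t / eps**2))   # O(t/eps^2) samples per variable
    est = 1.0
    for draw in samplers:
        est *= sum(draw() for _ in range(m)) / m
    return est

# Toy check: each X_i uniform on {0.5, 1.5}, so E[X_i] = 1, target = 1.
random.seed(0)
samplers = [lambda: random.choice([0.5, 1.5]) for _ in range(10)]
print(product_estimator(samplers, eps=0.1))   # ~1.0
```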

  11. JVV for independent sets. GOAL: given a graph G, estimate the number of independent sets of G. 1 / (# independent sets) = P(∅), the probability that a uniformly random independent set is the empty set.

  12. P(AB)=P(A)P(B|A) JVV for independent sets ? ? ? ? ? ? P() = P() P() P( ) P( ) X1 X2 X3 X4 V[Xi] Xi [0,1] and E[Xi] ½  = O(1) E[Xi]2

  13. Self-reducibility for independent sets: P(v ∉ S) = 5/7, since the graph has 7 independent sets and the graph with v removed has 5 [graph figures omitted].

  14. Self-reducibility for independent sets: hence 7 = (7/5) · 5.

  15. Self-reducibility for independent sets: 7 = (7/5) · 5, where 5 counts the independent sets of the smaller graph.

  16. Self-reducibility for independent sets: repeating on the smaller graph, P(u ∉ S) = 3/5, so 5 = (5/3) · 3.

  17. Self-reducibility for independent sets: 5 = (5/3) · 3.

  18. Self-reducibility for independent sets: telescoping, 7 = (7/5) · 5 = (7/5)(5/3) · 3 = (7/5)(5/3)(3/2) · 2.
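The telescoping product can be checked by brute force on tiny graphs. A sketch (my illustration; the slides' graph figures are lost, but a 4-cycle matches the counts 7, 5, 3, 2 above, so it is used here):

```python
# Self-reducibility check: removing a vertex v relates #IS(G - v) to
# #IS(G) via the ratio P(v not in S) for a uniform independent set S.
from itertools import combinations

def num_independent_sets(vertices, edges):
    """Brute-force count (exponential; fine for tiny graphs)."""
    count = 0
    for r in range(len(vertices) + 1):
        for S in combinations(vertices, r):
            if all(not (u in S and v in S) for u, v in edges):
                count += 1
    return count

V, E = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]   # 4-cycle: 7 ind. sets
print(num_independent_sets(V, E))                        # 7
# JVV ratio: P(v not in S) = #IS(G - v) / #IS(G)
V2, E2 = [1, 2, 3], [(1, 2), (2, 3)]                     # remove vertex 4
print(num_independent_sets(V2, E2) / num_independent_sets(V, E))  # 5/7
```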

  19. JVV: if we have a sampler oracle (graph G → SAMPLER ORACLE → random independent set of G), then FPRAS using O(n²) samples.

  20. JVV: if we have a sampler oracle (graph G → SAMPLER ORACLE → random independent set of G), then FPRAS using O(n²) samples. ŠVV: if we have a sampler oracle (graph G, β → SAMPLER ORACLE → set from the gas-model Gibbs distribution at β), then FPRAS using O*(n) samples.

  21. Application: independent sets. O*(|V|) samples suffice for counting. Cost per sample (Vigoda '01, Dyer-Greenhill '01): time O*(|V|) for graphs of degree ≤ 4. Total running time: O*(|V|²).

  22. Other applications (total running time): matchings O*(n²m) (using Jerrum, Sinclair '89); spin systems, e.g., Ising model, O*(n²) for β < βC (using Marinelli, Olivieri '95); k-colorings O*(n²) for k > 2Δ (using Jerrum '95).

  23. easy = hot, hard = cold

  24. Hamiltonian [figure: example configurations with H values 4, 2, 1, 0]

  25. Big set = Ω. Hamiltonian H: Ω → {0,...,n}. Goal: estimate |H⁻¹(0)|. Plan: express |H⁻¹(0)| as |Ω| · E[X1] ⋯ E[Xt].

  26. Distributions between hot and cold: β = inverse temperature; β = 0 is hot, uniform on Ω; β = ∞ is cold, uniform on H⁻¹(0). μβ(x) ∝ exp(−β·H(x)) (Gibbs distributions).

  27. Distributions between hot and cold: μβ(x) ∝ exp(−β·H(x)), i.e., μβ(x) = exp(−β·H(x)) / Z(β). Normalizing factor = partition function Z(β) = Σx exp(−β·H(x)).

  28. Partition function Z(β) = Σx exp(−β·H(x)). Have: Z(0) = |Ω|. Want: Z(∞) = |H⁻¹(0)|.
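A quick sanity check of Z(0) = |Ω| and Z(β) → |H⁻¹(0)| as β → ∞, using the hard-core Hamiltonian H(S) = number of edges with both endpoints in S (a brute-force sketch of my own, feasible only for tiny graphs):

```python
# Omega = all subsets of V; H(S) counts the edges violated by S, so
# H^{-1}(0) is exactly the set of independent sets.
import math
from itertools import combinations

def Z(beta, vertices, edges):
    total = 0.0
    for r in range(len(vertices) + 1):
        for S in combinations(vertices, r):
            H = sum(1 for u, v in edges if u in S and v in S)
            total += math.exp(-beta * H)
    return total

V, E = [1, 2, 3], [(1, 2), (2, 3)]
print(Z(0.0, V, E))     # 8.0 = 2^3 = |Omega|
print(Z(30.0, V, E))    # ~5.0 = number of independent sets
```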

  29. Assumption: we have a sampler oracle for μβ (graph G, β → SAMPLER ORACLE → subset of V drawn from μβ), where μβ(x) = exp(−β·H(x)) / Z(β).

  30. Assumption: we have a sampler oracle for μβ: draw W ← μβ.

  31. Assumption: we have a sampler oracle for μβ: draw W ← μβ and set X = exp(H(W)·(β − α)).

  32. Assumption: we have a sampler oracle for μβ: with W ← μβ and X = exp(H(W)·(β − α)) we can obtain the following ratio: E[X] = Σs μβ(s) X(s) = Z(α) / Z(β).
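The identity E[X] = Z(α)/Z(β) can be verified exactly on a toy state space (my example, not from the talk: Ω = {0,...,n} with H(x) = x):

```python
# Exact check of E[X] = Z(alpha)/Z(beta) for W ~ mu_beta and
# X = exp(H(W) * (beta - alpha)), by summing over the whole state space.
import math

n = 5
beta, alpha = 0.3, 1.0          # sample at beta, target the colder alpha
states = range(n + 1)           # toy chain with H(x) = x
H = lambda x: x

Zb = sum(math.exp(-beta * H(s)) for s in states)
Za = sum(math.exp(-alpha * H(s)) for s in states)

EX = sum(math.exp(-beta * H(s)) / Zb * math.exp(H(s) * (beta - alpha))
         for s in states)
print(EX, Za / Zb)              # equal
```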

  33. Our goal restated. Partition function Z(β) = Σx exp(−β·H(x)). Goal: estimate Z(∞) = |H⁻¹(0)|. Write Z(∞) = Z(0) · [Z(β1)/Z(β0)] · [Z(β2)/Z(β1)] ⋯ [Z(βt)/Z(βt−1)], where 0 = β0 < β1 < β2 < ⋯ < βt = ∞.

  34. Our goal restated. Z(∞) = Z(0) · [Z(β1)/Z(β0)] ⋯ [Z(βt)/Z(βt−1)]. Cooling schedule: 0 = β0 < β1 < β2 < ⋯ < βt = ∞. How to choose the cooling schedule? Minimize its length while satisfying V[Xi]/E[Xi]² = O(1), where E[Xi] = Z(βi)/Z(βi−1).

  35. Parameters: A and n. Z(β) = Σ_{k=0}^{n} a_k e^{−βk}, where a_k = |H⁻¹(k)|; H: Ω → {0,...,n}; Z(0) = A.

  36. Parameters: Z(0) = A, H: Ω → {0,...,n}.
      problem              A     n
      independent sets     2^V   E
      matchings            V!    V
      perfect matchings    V!    V
      k-colorings          k^V   E

  37. Previous cooling schedules. Z(0) = A, H: Ω → {0,...,n}, 0 = β0 < β1 < β2 < ⋯ < βt = ∞. "Safe steps": β → β + 1/n; β → β·(1 + 1/ln A); ln A → ∞ (once β ≥ ln A, the jump to β = ∞ is safe) (Bezáková, Štefankovič, Vigoda, V. Vazirani '06). These give cooling schedules of length O(n ln A) and O((ln n)(ln A)) (Bezáková, Štefankovič, Vigoda, V. Vazirani '06).
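A sketch of how a fixed schedule along these lines could be generated (my rendering of the safe steps, not the authors' code; treating β = ln A as the point from which the jump to β = ∞ is safe):

```python
# "Safe steps" schedule: one additive step off beta = 0, then
# multiplicative steps, giving O((ln n)(ln A)) steps in total.
import math

def safe_schedule(n, A, beta_max):
    lnA = math.log(A)
    schedule = [0.0]
    beta = 1.0 / n                  # safe additive step: beta -> beta + 1/n
    while beta < beta_max:
        schedule.append(beta)
        beta *= 1.0 + 1.0 / lnA     # safe multiplicative step
    schedule.append(beta_max)       # beta >= ln A ...
    schedule.append(float('inf'))   # ... so the jump to infinity is safe
    return schedule

# Toy parameters, roughly independent sets with |V| = 100 (A = 2^100).
s = safe_schedule(n=100, A=2.0**100, beta_max=math.log(2.0**100))
print(len(s))                       # ~620 steps here
```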

  38. No better fixed schedule possible. Z(0) = A, H: Ω → {0,...,n}. A schedule that works for all of the partition functions Zₐ(β) = (A/(1+a)) · (1 + a·e^{−βn}) (with a ∈ [0, A−1]) has LENGTH Ω( (ln n)(ln A) ).

  39. Parameters: Z(0) = A, H: Ω → {0,...,n}. Our main result: can get an adaptive schedule of length O*( (ln A)^{1/2} ). Previously: non-adaptive schedules of length Ω*( ln A ).

  40. Related work (compare: adaptive schedule of length O*( (ln A)^{1/2} )): Lovász-Vempala, volume of convex bodies in O*(n⁴), using a schedule of length O(n^{1/2}) (a non-adaptive cooling schedule).

  41. Existential part (of: can get an adaptive schedule of length O*( (ln A)^{1/2} )). Lemma: for every partition function there exists a cooling schedule of length O*((ln A)^{1/2}).

  42. Express the SCV using the partition function. With W ← μβ and X = exp(H(W)·(β − α)) (going from β to α): E[X] = Z(α)/Z(β), and E[X²]/E[X]² = Z(2α−β)·Z(β) / Z(α)² ≤ C.
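The SCV formula can be verified exactly on the same toy chain as before (my check, not from the talk):

```python
# Exact check of E[X^2]/E[X]^2 = Z(2*alpha - beta) * Z(beta) / Z(alpha)^2
# on Omega = {0,...,n} with H(x) = x.
import math

n = 5
beta, alpha = 0.3, 1.0                    # beta < alpha as in the schedule
states = range(n + 1)
Z = lambda g: sum(math.exp(-g * s) for s in states)   # H(x) = x

mu = [math.exp(-beta * s) / Z(beta) for s in states]  # W ~ mu_beta
X  = [math.exp(s * (beta - alpha)) for s in states]

EX  = sum(m * x for m, x in zip(mu, X))
EX2 = sum(m * x * x for m, x in zip(mu, X))
print(EX2 / EX**2, Z(2*alpha - beta) * Z(beta) / Z(alpha)**2)   # equal
```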

  43. E[X²]/E[X]² = Z(2α−β)·Z(β)/Z(α)² ≤ C. In terms of f(β) = ln Z(β), the condition reads f(β) + f(2α−β) − 2·f(α) ≤ ln C, i.e., with C′ = (ln C)/2, the value of f at the midpoint α lies within C′ of the average of its values at β and 2α−β [figure: β, α, 2α−β on the inverse-temperature axis].

  44. f(β) = ln Z(β): f is decreasing, f is convex, f′(0) ≥ −n, f(0) ≤ ln A. So either f or f′ changes a lot. Proof idea: each step of the schedule forces either f or ln |f′| to change by a definite amount; f(0) ≤ ln A bounds the total change of f, and f′(0) ≥ −n controls the range of ln |f′| (made quantitative by the approximation lemma on the next slide).

  45. f: [a,b] → R, convex, decreasing, can be "approximated" using √( (f(a) − f(b)) · ln( f′(a)/f′(b) ) ) segments.
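The lemma can be explored numerically. The sketch below (my construction; "approximated" is read here as "within additive 1", which is an assumption) greedily covers [a, b] with maximal chords for a toy f = ln Z and compares the segment count with the bound:

```python
# Greedy chord cover of a convex decreasing f = ln Z, with
# Z(beta) = 1 + (A-1) e^{-n beta}; deviation checked on a grid.
import math

A, n = 2.0**40, 50                       # toy parameters
c = A - 1.0
f  = lambda x: math.log(1 + c * math.exp(-n * x))            # f = ln Z
df = lambda x: -n * c * math.exp(-n * x) / (1 + c * math.exp(-n * x))

a, b = 0.0, 1.0
grid = [a + i * (b - a) / 2000 for i in range(2001)]

segments, lo = 0, a
while lo < b:
    hi = b
    while True:
        # chord of f over [lo, hi]; for convex f it lies above f
        chord = lambda x: f(lo) + (f(hi) - f(lo)) * (x - lo) / (hi - lo)
        if all(chord(x) - f(x) <= 1.0 for x in grid if lo <= x <= hi):
            break                        # chord stays within 1 of f
        hi = lo + (hi - lo) / 2          # otherwise shorten the segment
    segments, lo = segments + 1, hi

bound = math.sqrt((f(a) - f(b)) * math.log(df(a) / df(b)))
print(segments, "segments; sqrt((f(a)-f(b)) ln(f'(a)/f'(b))) ~", round(bound))
```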

  46. Technicality: getting to 2α − β. Proof: [figure: β, α, 2α−β on the inverse-temperature axis]

  47. Technicality: getting to 2α − β. Proof: [figure: schedule points βi, βi+1 added]

  48. Technicality: getting to 2α − β. Proof: [figure: intermediate point βi+2 added]

  49. Technicality: getting to 2α − β. Proof: handling these overshoots costs only ln ln A extra steps [figure: βi, βi+1, βi+2, βi+3]

  50. Existential → Algorithmic: from "there exists an adaptive schedule of length O*( (ln A)^{1/2} )" to "we can compute an adaptive schedule of length O*( (ln A)^{1/2} )".
