
COMS 6998-06 Network Theory Week 5: February 21, 2008



Presentation Transcript


  1. COMS 6998-06 Network Theory, Week 5: February 21, 2008. Dragomir R. Radev. Thursdays, 6-8 PM, 233 Mudd. Spring 2008

  2. (8) Random walks and electrical networks

  3. Random walks • Stochastic process on a graph • Transition matrix E • Simplest case: a regular 1-D graph on nodes 0, 1, 2, 3, 4, 5

  4. Gambler’s ruin • A has N pennies and B has M pennies. • At each turn, one of them wins a penny from the other, each with probability 0.5. • Stop when one of them loses all his money.

  5. Harmonic functions • Harmonic functions: • P(0) = 0 • P(N) = 1 • P(x) = ½·P(x-1) + ½·P(x+1), for 0 < x < N • (in general, replace ½ with the bias in the walk)
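These boundary-value equations can be solved numerically by simple iteration; a minimal sketch (the function name `ruin_probabilities` and the sweep count are my own choices, not from the slides):

```python
# Solve P(0)=0, P(N)=1, P(x) = 1/2*P(x-1) + 1/2*P(x+1) for 0 < x < N
# by repeated in-place averaging (Gauss-Seidel sweeps).
def ruin_probabilities(N, sweeps=5000):
    p = [0.0] * (N + 1)
    p[N] = 1.0                      # boundary conditions P(0)=0, P(N)=1
    for _ in range(sweeps):
        for x in range(1, N):
            p[x] = 0.5 * (p[x - 1] + p[x + 1])
    return p

p = ruin_probabilities(10)          # converges to the linear solution x/N
```

For the unbiased walk, the iteration converges to the linear function P(x) = x/N discussed on slide 10.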

  6. Simple electrical circuit • Unit resistors in series joining nodes 0, 1, 2, 3, 4, 5, with boundary voltages V(0) = 0 and V(N) = 1

  7. Arbitrary resistances

  8. The Maximum principle • Let f(x) be a harmonic function on a sequence S. • Theorem: • A harmonic function f(x) defined on S takes on its maximum value M and its minimum value m on the boundary. • Proof: • Let M be the largest value of f, and let x be an interior point with f(x)=M. Since f(x) is the average of f(x-1) and f(x+1), and neither neighbor can exceed M, f(x+1)=f(x-1)=M. If x-1 is still an interior point, continue with x-2, etc. Eventually we reach the boundary point x=0, for which f(x)=M.

  9. The Uniqueness principle • Let f(x) be a harmonic function on a sequence S. • Theorem: • If f(x) and g(x) are harmonic functions on S such that f(x)=g(x) on the boundary points B, then f(x)=g(x) for all x. • Proof: • Let h(x)=f(x)-g(x). Then, if x is an interior point, h(x)=½h(x-1)+½h(x+1), so h is harmonic. But h(x)=0 for x in B, and therefore, by the Maximum principle, its minimal and maximal values are both 0. Thus h(x)=0 for all x, which proves that f(x)=g(x) for all x.

  10. How to find the unique solution? • Try a linear function: f(x) = x/N. • This function has the following properties: • f(0) = 0 • f(N) = 1 • ½(f(x-1) + f(x+1)) = x/N = f(x)

  11. Reaching the boundary • Theorem: • The random walker will reach either 0 or N. • Proof: • Let h(x) be the probability that the walker never reaches the boundary. Then h(x) = ½h(x+1) + ½h(x-1), so h(x) is harmonic. Also h(0) = h(N) = 0. According to the maximum principle, h(x) = 0 for all x.

  12. Number of steps to reach the boundary • m(0) = 0 • m(N) = 0 • m(x) = 1 + ½m(x+1) + ½m(x-1), for 0 < x < N (each step adds 1 to the count) • The expected number of steps until a one-dimensional random walk goes up to b or down to -a is ab. • Examples: (a=1, b=1) gives 1 step; (a=2, b=2) gives 4 steps. • (Also: the displacement varies as sqrt(t), where t is time.)
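The hitting-time recurrence m(x) = 1 + ½m(x+1) + ½m(x-1) can be relaxed numerically to confirm the ab formula; a small sketch (the `expected_steps` helper is my own name):

```python
# Expected hitting time of {0, N} for an unbiased walk: m(0)=m(N)=0,
# m(x) = 1 + 1/2*m(x-1) + 1/2*m(x+1) for interior x (one step is paid first).
def expected_steps(a, b, sweeps=10000):
    N = a + b                       # walk runs until it goes up b or down a
    m = [0.0] * (N + 1)
    for _ in range(sweeps):
        for x in range(1, N):
            m[x] = 1.0 + 0.5 * (m[x - 1] + m[x + 1])
    return m[a]                     # theory: exactly a*b
```

The closed form behind this is m(x) = x(N-x), which evaluates to ab at the starting point x = a.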

  13. Fair games • In the penny game, after one iteration, the expected fortune is ½(k-1) + ½(k+1) = k • Fair game = martingale • Now if A has x pennies out of a total of N, his expected final fortune is: (1-p(x))·0 + p(x)·N = p(x)·N • Is the game fair if A can stop when he wants? No; e.g., stop playing when your fortune reaches $x.

  14. (9) Method of relaxations and other methods for computing harmonic functions

  15. 2-D harmonic functions • [Figure: a 2-D grid with boundary values 0 and 1 and interior points x, y, z]

  16. The original Dirichlet problem • Distribution of temperature in a sheet of metal. • One end of the sheet is held at temperature u=0, the other end at u=1. • Laplace’s differential equation: ∂²u/∂x² + ∂²u/∂y² = 0 • This is a special (steady-state) case of the (transient) heat equation: ∂u/∂t = k(∂²u/∂x² + ∂²u/∂y²) • In general, the solutions to this equation are called harmonic functions.

  17. Learning harmonic functions • The method of relaxations • Discrete approximation. • Assign fixed values to the boundary points. • Assign arbitrary values to all other points. • Adjust their values to be the average of their neighbors. • Repeat until convergence. • Monte Carlo method • Perform a random walk on the discrete representation. • Compute f as the probability of a random walk ending in a particular fixed point. • Linear equation method • Eigenvector methods • Look at the stationary distribution of a random walk
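The method of relaxations listed above is only a few lines of code; a sketch on a 5×5 grid whose boundary values are chosen so the exact solution is linear (all names are my own):

```python
# Method of relaxations on a small 2-D grid: border cells stay fixed,
# each interior cell is repeatedly replaced by the mean of its 4 neighbours.
def relax(grid, sweeps=5000):
    rows, cols = len(grid), len(grid[0])
    for _ in range(sweeps):
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                grid[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                     + grid[i][j - 1] + grid[i][j + 1])
    return grid

# 5x5 sheet: left edge held at 0, right edge at 1, top/bottom interpolated,
# so the exact harmonic solution is linear: u(i, j) = j / 4.
g = [[j / 4 if i in (0, 4) else (0.0 if j == 0 else (1.0 if j == 4 else 0.5))
     for j in range(5)] for i in range(5)]
g = relax(g)
```

After convergence, every interior cell equals the average of its four neighbours, which is exactly the discrete harmonic condition.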

  18. Monte Carlo solution • Least accurate of the methods: the error shrinks only as 1/√n in the number of runs. Example: 10,000 runs for an accuracy of 0.01
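The Monte Carlo method estimates the same harmonic function by averaging the boundary value at the cell where each random walk is absorbed; a sketch (the `mc_harmonic` helper and the 5×5 set-up are illustrative, not from the slides):

```python
import random

# Monte Carlo estimate of a harmonic function: run random walks from the
# start cell and average the fixed value at the boundary cell where each
# walk is absorbed.
def mc_harmonic(start, boundary, trials=20000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        i, j = start
        while (i, j) not in boundary:
            di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary[(i, j)]
    return total / trials

# Border of a 5x5 grid: left edge 0, right edge 1, top/bottom linear,
# so the exact value at the centre (2, 2) is 0.5.
boundary = {}
for j in range(5):
    boundary[(0, j)] = boundary[(4, j)] = j / 4
for i in range(1, 4):
    boundary[(i, 0)] = 0.0
    boundary[(i, 4)] = 1.0

estimate = mc_harmonic((2, 2), boundary)
```

With 20,000 trials the standard error is a few thousandths, consistent with the slide's 1/√n accuracy claim.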

  19. Example • x = 1/4·(y+z+0+0) • y = 1/2·(x+1) • z = 1/3·(x+1+1) • In matrix form: Ax = u • x = A⁻¹u
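The linear equation method on this example can be carried out exactly; a sketch using rational arithmetic (the `solve` helper is a bare-bones Gaussian elimination of my own, with no pivoting, which is fine for this small system):

```python
from fractions import Fraction as F

# Linear equation method: rewrite the harmonic conditions as A*x = u and
# solve exactly with Gaussian elimination over rationals.
def solve(A, b):
    n = len(A)
    M = [[F(v) for v in row] + [F(rhs)] for row, rhs in zip(A, b)]
    for i in range(n):
        M[i] = [v / M[i][i] for v in M[i]]          # scale pivot row to 1
        for k in range(n):
            if k != i:
                M[k] = [vk - M[k][i] * vi for vk, vi in zip(M[k], M[i])]
    return [row[n] for row in M]

# x = (y+z)/4, y = (x+1)/2, z = (x+2)/3  =>  4x-y-z=0, -x+2y=1, -x+3z=2
x, y, z = solve([[4, -1, -1], [-1, 2, 0], [-1, 0, 3]], [0, 1, 2])
```

The exact solution is x = 7/19, y = 13/19, z = 15/19.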

  20. Effective resistance • Series: R = R1 + R2 • Parallel: conductances add, C = C1 + C2, i.e., 1/R = 1/R1 + 1/R2, so R = R1R2/(R1+R2)
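The series and parallel rules are one-liners; a trivial sketch (function names are my own):

```python
# Effective resistance of series / parallel combinations.
def series(*rs):
    return sum(rs)                      # resistances add in series

def parallel(*rs):
    # conductances C = 1/R add in parallel: 1/R = 1/R1 + 1/R2 + ...
    return 1.0 / sum(1.0 / r for r in rs)
```

For example, parallel(1, 2) gives 2/3, matching R1R2/(R1+R2).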

  21. Example • Doyle/Snell page 45

  22. Electrical networks and random walks • Ergodic (connected) Markov chain with transition matrix P • The stationary distribution w satisfies w = Pw • [Figure: circuit on nodes a, b, c, d with 1 Ω and 0.5 Ω resistors; from Doyle and Snell 2000]

  23. Electrical networks and random walks • vx is the probability that a random walk starting at x will reach a before reaching b. • The random walk interpretation allows us to use Monte Carlo methods to solve electrical circuits. • [Figure: the same circuit with a 1 V battery applied between a and b]

  24. Energy-based interpretation • The energy dissipation through a resistor is E = i²xy·Rxy • Over the entire circuit, E = ½ Σx,y i²xy·Rxy • The flow from x to y is defined by Ohm’s law: ixy = (vx - vy)/Rxy • Conservation of current: Σy ixy = 0 for every interior point x

  25. Thomson’s principle • One can show that: • The energy dissipated by the unit current flow (for vb=0 and for ia=1) equals the effective resistance Reff. • This value is the smallest among all possible unit flows from a to b (Thomson’s Principle)

  26. Eigenvectors and eigenvalues • An eigenvector is an implicit “direction” for a matrix: Av = λv, where v (eigenvector) is non-zero, though λ (eigenvalue) can be any complex number in principle • Computing eigenvalues: solve the characteristic equation det(A - λI) = 0

  27. Eigenvectors and eigenvalues • Example: A = [[-1, 3], [2, 0]] • det(A - λI) = (-1-λ)·(-λ) - 3·2 = 0 • Then: λ² + λ - 6 = 0; λ1 = 2; λ2 = -3 • For λ1 = 2: the eigenvector equations reduce to x1 = x2
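The 2×2 case can be checked directly from the trace and determinant; a sketch (the matrix A = [[-1, 3], [2, 0]] is reconstructed from the slide's determinant expansion and the x1 = x2 eigenvector condition):

```python
import math

# Eigenvalues of a 2x2 matrix via the characteristic polynomial
# det(A - lam*I) = lam^2 - tr(A)*lam + det(A) = 0.
def eig2(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # assumes real eigenvalues
    return (tr + disc) / 2, (tr - disc) / 2

# A = [[-1, 3], [2, 0]] gives det(A - lam*I) = (-1-lam)*(-lam) - 3*2,
# i.e. lam^2 + lam - 6 = 0, as on the slide.
lam1, lam2 = eig2(-1, 3, 2, 0)
```

The roots are λ1 = 2 and λ2 = -3, matching the slide.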

  28. Stochastic matrices • Stochastic matrices: each row (or column) adds up to 1 and no value is less than 0. • The largest eigenvalue of a stochastic matrix E is real: λ1 = 1. • For λ1, the left (principal) eigenvector is p, the right eigenvector = 1 • In other words, ETp = p.

  29. Markov chains • A homogeneous Markov chain is defined by an initial distribution x and a Markov kernel E. • Path = sequence (x0, x1, …, xn); xi = xi-1·E • The probability of a path can be computed as a product of probabilities for each step i. • Random walk = find xj given x0, E, and j.

  30. Stationary solutions • The fundamental Ergodic Theorem for Markov chains [Grimmett and Stirzaker 1989] says that the Markov chain with kernel E has a stationary distribution p under three conditions: • E is stochastic • E is irreducible • E is aperiodic • To make these conditions true: • All rows of E add up to 1 (and no value is negative) • Make sure that E is strongly connected • Make sure that E is not bipartite • Example: PageRank [Brin and Page 1998]: use “teleportation”
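The teleportation fix amounts to mixing E with the uniform transition matrix; a sketch (α = 0.85 is the damping factor commonly quoted for PageRank, not a value from the slides):

```python
# PageRank-style teleportation: mix the chain E with the uniform chain so
# the result is stochastic, irreducible and aperiodic.
def teleport(E, alpha=0.85):
    n = len(E)
    return [[alpha * E[i][j] + (1.0 - alpha) / n for j in range(n)]
            for i in range(n)]

# A bipartite (periodic) 2-cycle becomes aperiodic after teleportation:
# every entry of the mixed matrix is strictly positive.
G = teleport([[0.0, 1.0], [1.0, 0.0]])
```

Because every entry of G is positive, G is strongly connected and not bipartite, so the three conditions of the ergodic theorem hold.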

  31. Example • [Figure: an 8-node graph shown at t=0 and t=1] • This graph E has a second graph E’ (not drawn) superimposed on it: E’ is the uniform transition graph.

  32. Eigenvectors • An eigenvector is an implicit “direction” for a matrix. Ev = λv, where v is non-zero, though λ can be any complex number in principle. • The largest eigenvalue of a stochastic matrix E is real: λ1 = 1. • For λ1, the left (principal) eigenvector is p, the right eigenvector = 1 • In other words, ETp = p.

  33. Computing the stationary distribution
      function PowerStatDist(E):
      begin
          p(0) = u   (or p(0) = [1, 0, …, 0])
          i = 1
          repeat
              p(i) = ETp(i-1)
              L = ||p(i) - p(i-1)||1
              i = i + 1
          until L < ε
          return p(i-1)
      end
  Solution for the stationary distribution. Convergence rate is O(m).
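A direct Python transcription of the power-iteration idea on slide 33 (the function name and the small test chain are my own):

```python
# Power iteration for the stationary distribution of a row-stochastic E:
# repeat p <- E^T p until the L1 change falls below eps.
def power_stat_dist(E, eps=1e-12):
    n = len(E)
    p = [1.0 / n] * n                       # uniform start, p(0) = u
    while True:
        q = [sum(E[i][j] * p[i] for i in range(n)) for j in range(n)]
        if sum(abs(a - b) for a, b in zip(q, p)) < eps:
            return q
        p = q

# A small irreducible, aperiodic chain; its stationary distribution
# solves p = E^T p, here (5/6, 1/6).
p = power_stat_dist([[0.9, 0.1], [0.5, 0.5]])
```

The number of iterations needed is governed by the second-largest eigenvalue of E (here 0.4), so convergence is geometric.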

  34. Example • [Figure: the same 8-node graph at t=0, t=1, and t=10]

  35. More dimensions • Polya’s theorem says that 1-D and 2-D random walks are recurrent (they return to the origin with probability 1). However, a 3-D walk has a non-zero escape probability (p ≈ 0.66). • http://mathworld.wolfram.com/PolyasRandomWalkConstants.html
