Lecture 6
  • Calculating P^n – how do we raise a matrix to the n-th power?
  • Ergodicity in Markov Chains.
    • When does a chain have equilibrium probabilities?
  • Balance Equations
    • Calculating equilibrium probabilities without the fuss.
  • The leaky bucket queue
    • Finally, an example that has to do with networks.
  • For more information:
    • Norris: Markov Chains (Chapter 1)
    • Bertsekas: Appendix A and Section 6.3
How to calculate P^n
  • If P is diagonalisable (3x3) then we can find some invertible matrix U such that:

    P = U diag(λ1, λ2, λ3) U^-1, and hence P^n = U diag(λ1^n, λ2^n, λ3^n) U^-1

    where the λi are the eigenvalues.

    Therefore pij(n) = A λ1^n + B λ2^n + C λ3^n

    assuming the eigenvalues are distinct.
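
A minimal numpy sketch (not from the slides) of this diagonalisation approach; the 3x3 matrix P used here is an arbitrary stochastic matrix chosen for illustration:

    import numpy as np

    # Illustrative 3x3 stochastic matrix (assumed example, not from the lecture).
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])

    # Diagonalise: columns of U are eigenvectors, lam holds the eigenvalues.
    lam, U = np.linalg.eig(P)

    def P_to_the_n(n):
        # P^n = U diag(lam^n) U^(-1), valid when P is diagonalisable.
        return U @ np.diag(lam ** n) @ np.linalg.inv(U)

    n = 10
    print(P_to_the_n(n))
    print(np.linalg.matrix_power(P, n))   # should agree with the line above
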

General Procedure
  • For an M-state chain, compute the eigenvalues λ1, λ2, ..., λM.
  • If the eigenvalues are distinct then pij(n) has the general form:

    pij(n) = c1 λ1^n + c2 λ2^n + ... + cM λM^n

  • If an eigenvalue λ is repeated once then the general form includes a term (an + b)λ^n.
  • As roots of a polynomial with real coefficients, complex eigenvalues come in conjugate pairs and can be written as sine and cosine terms.
  • The coefficients of the general form can be found by calculating pij(n) by hand for n = 0, ..., M-1 and solving.
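
A sketch of this procedure in numpy, assuming distinct eigenvalues; the example matrix and the choice of entry (i, j) are illustrative only:

    import numpy as np

    # Recover the coefficients c_k of pij(n) = sum_k c_k * lam_k^n by computing
    # pij(0), ..., pij(M-1) and solving the resulting linear (Vandermonde) system.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])
    i, j = 0, 0                                  # entry of interest
    M = P.shape[0]

    lam = np.linalg.eigvals(P)                   # eigenvalues, assumed distinct
    A = np.vander(lam, M, increasing=True).T     # A[n, k] = lam_k ** n
    rhs = np.array([np.linalg.matrix_power(P, n)[i, j] for n in range(M)])
    c = np.linalg.solve(A, rhs)                  # coefficients of the general form

    # Check the closed form against a direct matrix power for a larger n.
    n = 12
    print((c * lam ** n).sum().real)
    print(np.linalg.matrix_power(P, n)[i, j])    # should agree
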
Example of P^n

    P = [  0    1    0
           0   1/2  1/2
          1/2   0   1/2 ]

    (where states are numbered 1, 2 and 3)

    The eigenvalues solve det(P - λI) = 0, where I is the identity matrix.

    Eigenvalues are 1, i/2, -i/2. Therefore p11(n) has the form:

    p11(n) = α + (1/2)^n ( β cos(nπ/2) + γ sin(nπ/2) )

    where the substitution of sine and cosine terms for (±i/2)^n can be made since p11(n) must be real.

    We can calculate that p11(0) = 1, p11(1) = 0 and p11(2) = 0.

Example of P^n (2)
  • We now have three simultaneous equations in α, β and γ:

    α + β = 1,   α + γ/2 = 0,   α - β/4 = 0

  • Solving, we get α = 1/5, β = 4/5 and γ = -2/5, so that:

    p11(n) = 1/5 + (1/2)^n ( (4/5) cos(nπ/2) - (2/5) sin(nπ/2) )
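
To check the worked example numerically, the sketch below uses the transition matrix of Norris, Example 1.1.4, which matches the eigenvalues and the values of p11(0), p11(1), p11(2) quoted above:

    import numpy as np

    # Transition matrix with eigenvalues 1, i/2, -i/2 (states 1, 2, 3).
    P = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5]])

    def p11_closed_form(n):
        # p11(n) = 1/5 + (1/2)^n ( (4/5) cos(n pi/2) - (2/5) sin(n pi/2) )
        return 0.2 + 0.5 ** n * (0.8 * np.cos(n * np.pi / 2) - 0.4 * np.sin(n * np.pi / 2))

    for n in range(8):
        print(n, np.linalg.matrix_power(P, n)[0, 0], p11_closed_form(n))  # columns agree
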
Equilibrium Probabilities
  • Recall the distribution vector π of equilibrium probabilities. If π_n is the distribution vector after n steps, π is given by:

    π = lim (n→∞) π_n

  • This is also the distribution which solves:

    π = πP
  • When does this limit exist? When is there a unique solution to the equation?
  • This is when the chain is ergodic:
    • Irreducible
    • Recurrent non-null (also called positive recurrent)
    • Aperiodic
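
As a small illustration of the limit definition, the sketch below iterates π_(n+1) = π_n P on an assumed ergodic example chain until the distribution stops changing:

    import numpy as np

    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])

    pi = np.array([1.0, 0.0, 0.0])   # any starting distribution works for an ergodic chain
    for _ in range(1000):
        new_pi = pi @ P
        if np.allclose(new_pi, pi, atol=1e-12):
            break
        pi = new_pi

    print(pi)        # equilibrium distribution
    print(pi @ P)    # unchanged: pi also solves pi = pi P
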
Irreducible
  • A chain is irreducible if any state can be reached from any other.
  • More formally, for all i and j:

    pij(n) > 0 for some n

[Diagram: a two-state chain (states 1 and 2) with transition probabilities α, 1-α, β and 1-β.]

For what values of α and β is this chain irreducible?
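
One way to check irreducibility numerically (an illustrative sketch, not part of the lecture): state j is reachable from state i iff some power of P has a positive (i, j) entry, and for an M-state chain it is enough to look at I + P + P^2 + ... + P^(M-1). Applied to the two-state chain above, this confirms that the chain is irreducible exactly when α > 0 and β > 0:

    import numpy as np

    def is_irreducible(P):
        # Chain is irreducible iff I + P + ... + P^(M-1) has all entries positive.
        M = P.shape[0]
        reach = np.eye(M)
        acc = np.eye(M)
        for _ in range(M - 1):
            acc = acc @ P
            reach += acc
        return bool(np.all(reach > 0))

    def two_state(alpha, beta):
        return np.array([[1 - alpha, alpha],
                         [beta, 1 - beta]])

    print(is_irreducible(two_state(0.0, 0.5)))   # False: state 1 can never be left
    print(is_irreducible(two_state(0.3, 0.5)))   # True
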

Aperiodic chains
  • A state i is periodic if returns to it can occur only at multiples of some time period > 1.
  • Formally, it is periodic if there exists an integer k > 1 such that pii(n) > 0 only when n = jk for some integer j.
  • Equivalently, a state is aperiodic if there is a sufficiently large n such that, for all m > n:

    pii(m) > 0
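
A small sketch (not from the slides) that computes the period of a state as gcd{ n ≥ 1 : pii(n) > 0 }; the state is aperiodic iff this gcd is 1. Scanning n up to a modest bound is enough for small examples:

    import numpy as np
    from math import gcd

    def period(P, i, max_n=50):
        # gcd of all n <= max_n with pii(n) > 0 (0 means no return seen at all).
        g = 0
        Pn = np.eye(P.shape[0])
        for n in range(1, max_n + 1):
            Pn = Pn @ P
            if Pn[i, i] > 1e-12:
                g = gcd(g, n)
        return g

    cycle = np.array([[0.0, 1.0],
                      [1.0, 0.0]])          # deterministic two-state cycle
    lazy = np.array([[0.5, 0.5],
                     [1.0, 0.0]])           # self-loop makes state 0 aperiodic
    print(period(cycle, 0))                 # 2: periodic
    print(period(lazy, 0))                  # 1: aperiodic
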
A useful aperiodicity lemma
  • If P is irreducible and has one aperiodic state i then all states are aperiodic. Proof:

By irreducibility there exist r, s ≥ 0 with

pji(r), pik(s) > 0

Therefore there is an n such that for all m > n:

pjk(r + m + s) ≥ pji(r) pii(m) pik(s) > 0

And therefore all the states are aperiodic (consider j = k in the above equation).

Return (Recurrence) Time
  • If a chain is in state i, when will it next return to state i?
  • This is known as the "return time".
  • First we must define the probability that the first return to state i is after exactly n steps: fi(n)
  • The probability that we ever return is:

    fi = Σ (n ≥ 1) fi(n)

  • A state where fi = 1 is called recurrent; a state where fi < 1 is called transient.
  • For a recurrent state, the expectation of the return time is the "mean recurrence time" or "mean return time":

    Mi = Σ (n ≥ 1) n fi(n)

  • Mi = ∞: recurrent null;  Mi < ∞: recurrent non-null.
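
A simulation sketch (illustrative only, on an assumed example chain) that estimates the return probability and the mean return time by sampling:

    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])

    def sample_return_time(P, i, max_steps=10_000):
        # Number of steps until the chain, started in i, first returns to i.
        state = i
        for n in range(1, max_steps + 1):
            state = rng.choice(P.shape[0], p=P[state])
            if state == i:
                return n
        return None   # no return observed within max_steps (suggests transience)

    samples = [sample_return_time(P, 0) for _ in range(5000)]
    times = [t for t in samples if t is not None]
    print(len(times) / len(samples))   # estimate of f_0 (probability of ever returning)
    print(np.mean(times))              # estimate of the mean return time M_0
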
Return (Recurrence) Time (2)
  • A finite irreducible chain is always recurrent non-null.
  • In an irreducible aperiodic Markov Chain the limiting probabilities

    π_j = lim (n→∞) pij(n)

    always exist and are independent of the starting distribution. Either:

    • All states are transient or recurrent null, in which case π_j = 0 for all states and no stationary distribution exists.
    • All states are recurrent non-null and a unique stationary distribution exists with:

      π_j = 1/Mj
Ergodicity (summary)
  • A chain which is irreducible, aperiodic and recurrent non-null is ergodic.
  • If a chain is ergodic, then there is a unique invariant distribution, which coincides with the limit:

    π_j = lim (n→∞) pij(n)
  • In Markov Chain theory, the phrases invariant, equilibrium and stationary are often used interchangeably.
Invariant Density in Periodic Chains
  • It is worth noting that an irreducible, recurrent non-null chain which is periodic has a solution to the invariant density equation, but the limit distribution does not exist. Consider the two-state chain below:
  • However, it should be clear that lim (n→∞) π_n does not exist in general, though it may for specific starting distributions.

[Diagram: a two-state chain in which each state moves to the other with probability 1.]

π = (½, ½) solves π = πP

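A short numerical check of this point for the two-state chain above: π = (½, ½) is invariant, yet the powers of P oscillate and never converge:

    import numpy as np

    P = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    pi = np.array([0.5, 0.5])

    print(pi @ P)                          # (0.5, 0.5): pi solves pi = pi P
    print(np.linalg.matrix_power(P, 10))   # identity matrix
    print(np.linalg.matrix_power(P, 11))   # off-diagonal: p_ij(n) has no limit
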
Balance Equations
  • Sometimes it is not practical to calculate the equilibrium probabilities using the limit.
  • If a distribution is invariant then, at every iteration, the probability flowing into a state must add up to that state's probability.
  • The input to a state i from each state j that leads into it is that state's probability π_j multiplied by the transition probability pji.
Balance Equations (2)
  • More formally, if π_i is the probability of state i:

    π_i = Σ_j π_j pji

  • And to ensure it is a distribution:

    Σ_i π_i = 1

  • Which, for an n-state chain, gives us n+1 equations for n unknowns.
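
A sketch of solving these equations directly with linear algebra (the example chain is assumed for illustration): stacking the n balance equations with the normalisation gives the n+1 equations above, and a least-squares solve picks out the unique solution for an ergodic chain:

    import numpy as np

    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])
    n = P.shape[0]

    A = np.vstack([(P - np.eye(n)).T,    # balance equations: pi (P - I) = 0
                   np.ones(n)])          # normalisation: sum(pi) = 1
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(pi)
    print(pi @ P)    # equals pi, so the balance equations hold
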
Queuing Analysis of the Leaky Bucket
  • A “leaky bucket” is a mechanism for managing buffers to smooth the downstream flow.
  • What is described here is what is sometimes called a “token bucket”.
  • A queue holds a stock of "permits", which arrive at a rate r (one every 1/r seconds); up to W permits may be held.
  • A packet cannot leave the queue if there is no permit stored.
  • The idea is that the scheme limits downstream flow but can deal with bursts of traffic.
Modelling the Leaky Bucket
  • Let us assume that the arrival process is a Poisson process with rate λ.
  • Consider how many packets arrive in 1/r seconds. The probability ak that k packets arrive is:

    ak = e^(-λ/r) (λ/r)^k / k!

[Diagram: a queue of permits (one arriving every 1/r seconds) and a queue of packets (Poisson arrivals) feed an exit queue for packets that have obtained permits, which then leave the buffer.]
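
A tiny sketch of these arrival probabilities (the values of λ and r are assumed for illustration):

    import math

    # Number of arrivals in a slot of length 1/r is Poisson with mean lam/r,
    # so a_k = exp(-lam/r) * (lam/r)^k / k!
    def a(k, lam, r):
        mean = lam / r
        return math.exp(-mean) * mean ** k / math.factorial(k)

    lam, r = 0.5, 1.0
    print([round(a(k, lam, r), 4) for k in range(5)])
    print(sum(a(k, lam, r) for k in range(50)))   # sums to 1
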

A Markov Model
  • Model this as a Markov Chain which changes state every 1/r seconds.
  • States 0 ≤ i ≤ W represent no packets waiting and W-i permits available. States W+i (where i ≥ 1) represent 0 permits and i packets waiting.
  • Transition probabilities:

[Diagram: chain on states 0, 1, 2, ..., W, W+1, ...; state 0 has a self-loop with probability a0 + a1 and moves to state k-1 with probability ak for k ≥ 2; every other state i has a self-loop with probability a1, moves to state i-1 with probability a0, and in general moves to state i-1+k with probability ak.]

Solving the Markov Model
  • By solving the balance equations we get:

    π_0 = π_0 (a0 + a1) + π_1 a0,   so   π_1 = π_0 (1 - a0 - a1) / a0

    π_1 = π_0 a2 + π_1 a1 + π_2 a0,   so   π_2 = [ π_1 (1 - a1) - π_0 a2 ] / a0

    Similarly, we can get expressions for π_3 in terms of π_2, π_1 and π_0. And so on...
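
A numerical sketch of this model (not from the lecture): truncate the infinite chain at N states, build the transition matrix described on the previous slide, and solve the balance equations for the stationary distribution. The values of λ, r, W and N are assumed for illustration, and the last line previews the permit-rate argument used on the next slide:

    import math
    import numpy as np

    lam, r, W = 0.5, 1.0, 4
    N = 60                                   # truncation point (N >> W)
    mean = lam / r
    a = [math.exp(-mean) * mean ** k / math.factorial(k) for k in range(N + 1)]

    # From state 0, k arrivals move the chain to state max(k - 1, 0);
    # from state i >= 1, one permit arrives and k packets arrive: i -> i - 1 + k.
    P = np.zeros((N, N))
    for k, ak in enumerate(a):
        P[0, min(max(k - 1, 0), N - 1)] += ak
        for i in range(1, N):
            P[i, min(i - 1 + k, N - 1)] += ak

    A = np.vstack([(P - np.eye(N)).T, np.ones(N)])
    b = np.concatenate([np.zeros(N), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(pi[:W + 2])                # pi_0, pi_1, ..., pi_(W+1)
    print((1 - pi[0] * a[0]) * r)    # accepted-permit rate; comes out close to lam
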

Solving the Markov Model (2)
  • Normally we would solve this using the remaining equation (the normalisation):

    Σ_i π_i = 1

  • This is difficult analytically in this case.
  • Instead we note that a permit is added at every step except when we are in state 0 and no packets arrive (W permits are already stored and none is used).
  • This means permits are generated at a rate (1 - π_0 a0) r.
  • This must be equal to λ, since each packet gets a permit (assuming none are dropped while waiting), which gives π_0 = (1 - λ/r) / a0.
And Finally
  • The average delay for a packet to get a permit is given by:
  • Of course this is not a closed-form expression. To complete the analysis, see Bertsekas, p. 515.

[Formula annotations: the delay sums, over those states j with a packet queue, the amount of time spent in each state, multiplied by the number of iterations taken to get out of the queue from state j and by the time taken for each iteration of the chain (1/r).]