CS433: Modeling and Simulation. Lecture 10: Discrete Time Markov Chains. Dr. Shafique Ahmad Chaudhry, Department of Computer Science. E-mail: [email protected], Office # GR-02-C, Tel # 2581328.
Markov Processes. A stochastic process X(t) is a collection of random variables indexed by time; informally, a random variable that varies with time.
A path is a sequence of states, where each transition has a positive probability of occurring.
State j is reachable (or accessible) from state i (i → j) if there is a path from i to j; equivalently, pij(n) > 0 for some n ≥ 0, i.e., the probability of going from i to j in n steps is greater than zero.
States i and j communicate (i ↔ j) if i is reachable from j and j is reachable from i.
(Note: a state i always communicates with itself)
A set of states C is a communicating class if every pair of states in C communicates with each other, and no state in C communicates with any state not in C.
A state i is said to be an absorbing state if pii = 1.
A subset S of the state space X is a closed set if no state outside of S is reachable from any state in S (like an absorbing state, but with multiple states); this means pij = 0 for every i ∈ S and j ∉ S.
A closed set S of states is irreducible if every state j ∈ S is reachable from every state i ∈ S.
A Markov chain is said to be irreducible if the state space X is irreducible.
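These reachability and communication definitions can be checked mechanically from a transition matrix. Below is a minimal Python sketch (the 4-state matrix and the helper names reachable_from and communicating_classes are illustrative assumptions, not lecture material): it finds the states reachable from each state by graph search and groups mutually reachable states into communicating classes; the chain is irreducible exactly when there is a single class containing every state.

```python
import numpy as np

# Illustrative 4-state transition matrix (assumed values, not from the lecture).
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.4, 0.6, 0.0, 0.0],
    [0.2, 0.0, 0.5, 0.3],
    [0.0, 0.0, 0.0, 1.0],   # state 3 is absorbing: p33 = 1
])

def reachable_from(P, i):
    """States j with pij(n) > 0 for some n >= 0, found by graph search."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v in np.nonzero(P[u] > 0)[0]:
            if v not in seen:
                seen.add(int(v))
                stack.append(int(v))
    return seen

def communicating_classes(P):
    """Group states into classes of mutually reachable states."""
    n = P.shape[0]
    reach = [reachable_from(P, i) for i in range(n)]
    classes, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        cls = {j for j in range(n) if j in reach[i] and i in reach[j]}
        classes.append(cls)
        assigned |= cls
    return classes

classes = communicating_classes(P)
print("communicating classes:", classes)
print("irreducible:", len(classes) == 1)   # single class covering all states
```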
Irreducible Markov Chain
Closed irreducible set
State i is a transient state if there exists a state j such that j is reachable from i but i is not reachable from j.
A state that is not transient is recurrent. There are two types of recurrent states:
Positive recurrent: if the expected time to return to the state is finite.
Null recurrent (less common): if the expected time to return to the state is infinite (this requires an infinite number of states).
A state i is periodic with period k > 1 if k is the smallest number such that every path leading from state i back to state i has a length that is a multiple of k transitions.
A state is aperiodic if it has period k = 1.
A state is ergodic if it is positive recurrent and aperiodic.
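Positive recurrence can also be illustrated empirically: simulate the chain and average the time it takes to return to a chosen state. A minimal sketch, assuming an illustrative irreducible 3-state matrix and trial count (none of the numbers below are from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative irreducible 3-state chain (assumed values).
P = np.array([
    [0.2, 0.5, 0.3],
    [0.6, 0.1, 0.3],
    [0.3, 0.3, 0.4],
])

def mean_return_time(P, state, trials=10_000):
    """Monte Carlo estimate of Mi = E[Tii], the expected time to return to `state`."""
    total = 0
    for _ in range(trials):
        x, steps = state, 0
        while True:
            x = rng.choice(len(P), p=P[x])
            steps += 1
            if x == state:
                break
        total += steps
    return total / trials

# For a positive recurrent state the estimate settles to a finite value Mi.
print("estimated M0 =", mean_return_time(P, 0))
```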
Example from Book
Introduction to Probability: Lecture Notes
D. Bertsekas and J. Tsitsiklis – Fall 2000
We define the hitting time Tij as the random variable that represents the time to go from state i to state j, expressed as: Tij = min{ k ≥ 1 : Xk = j }, given X0 = i.
Here k is the number of transitions in a path from i to j.
Tij is thus the minimum number of transitions in a path from i to j.
We define the recurrence time Tii as the first time that the Markov Chain returns to state i.
The probability that the first recurrence to state i occurs at the nth step is fi(n) = Pr{Tii = n}, where Tii is the time of the first visit (return) to i given X0 = i.
The probability of recurrence to state i is fi = Σn≥1 fi(n).
We define Ni as the number of visits to state i given X0 = i.
Theorem: If Ni is the number of visits to state i given X0 = i, then E[Ni] = 1/(1 − fi).
Here pii(n) denotes the transition probability from state i to state i after n steps.
fij(n): the probability of reaching state j for the first time in n steps starting from X0 = i.
The probability of ever reaching j starting from state i is fij = Σn≥1 fij(n).
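These first-passage quantities can be approximated by Monte Carlo simulation: run many trajectories from state i and record how often state j is ever reached. A minimal sketch, assuming an illustrative chain with an absorbing state and a cutoff on trajectory length (the matrix, trial count, and cutoff are assumptions, not lecture values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative chain: state 2 is absorbing, so state 0 is transient (f00 < 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
    [0.0, 0.0, 1.0],
])

def estimate_f(P, i, j, trials=20_000, max_steps=1_000):
    """Estimate fij = Pr{ever reaching j | X0 = i}; long runs count as 'never'."""
    hits = 0
    for _ in range(trials):
        x = i
        for _ in range(max_steps):
            x = rng.choice(len(P), p=P[x])
            if x == j:
                hits += 1
                break
            if P[x, x] == 1.0 and x != j:
                break   # absorbed elsewhere, so j can no longer be reached
    return hits / trials

f00 = estimate_f(P, 0, 0)          # recurrence probability of state 0
print("f00 ~", f00)                # < 1, so state 0 is transient
print("E[N0] ~", 1 / (1 - f00))    # expected number of visits, from the theorem above
```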
If a Markov chain has a finite state space, then at least one of the states is recurrent.
If state i is recurrent and state j is reachable from state i, then state j is also recurrent.
If S is a finite closed irreducible set of states, then every state in S is recurrent.
Let Mi be the mean recurrence time of state i.
A state is said to be positive recurrent if Mi < ∞.
If Mi = ∞, then the state is said to be null recurrent.
If state i is positive recurrent and state j is reachable from state i, then state j is also positive recurrent.
If S is a closed irreducible set of states, then either every state in S is positive recurrent, or every state in S is null recurrent, or every state in S is transient.
If S is a finite closed irreducible set of states, then every state in S is positive recurrent.
Positive Recurrent States
Suppose that the structure of the Markov chain is such that state i can be visited only after a number of steps that is an integer multiple of an integer d > 1. Then the state is called periodic with period d.
If no such integer exists (i.e., d = 1), then the state is called aperiodic.
Periodic State d = 2
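The period of a state can be computed as the greatest common divisor of all step counts n for which pii(n) > 0. A minimal sketch using numpy matrix powers and an illustrative deterministic two-state chain of period d = 2 (the matrix and the finite horizon are assumptions for the example):

```python
import numpy as np
from math import gcd
from functools import reduce

# Illustrative chain that alternates between its two states: period d = 2.
P = np.array([
    [0.0, 1.0],
    [1.0, 0.0],
])

def period(P, i, horizon=50):
    """gcd of all n <= horizon with pii(n) > 0 (horizon is a practical cutoff)."""
    ns, Pn = [], np.eye(len(P))
    for n in range(1, horizon + 1):
        Pn = Pn @ P                  # Pn now equals P(n) = P^n
        if Pn[i, i] > 0:
            ns.append(n)
    return reduce(gcd, ns) if ns else 0

print("period of state 0:", period(P, 0))   # prints 2 for this chain
```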
Recall that the state probability, i.e., the probability of finding the MC in state i after the kth step, is given by πi(k) = Pr{Xk = i}; in vector form, π(k) = π(0) P(k) = π(k−1) P.
Example: tax auditing problem:
Assume that whether a taxpayer is audited by the tax department in year n + 1 depends only on whether he was audited in the previous year.
How do we model this problem as a stochastic process?
State Space: Two states: s0 = 0 (no audit), s1 = 1 (audit)
The transition matrix P gives the probabilities of transition in one step.
How do we calculate the probabilities for transitions involving more than one step?
Notice: p01 = 0.4 is the conditional probability of an audit next year given no audit this year.
p01 = Pr(X1 = 1 | X0 = 0)
This idea generalizes to an arbitrary number of steps:
In matrix form, P(2) = P·P,
P(3) = P(2)·P = P²·P = P³,
or more generally,
P(n) = P(m)·P(n−m).
The ij-th entry of this reduces to
pij(n) = Σk pik(m) pkj(n−m),   1 ≤ m ≤ n−1
Chapman-Kolmogorov Equations
“The probability of going from i to k in m steps and then going from k to j in the remaining n−m steps, summed over all possible intermediate states k.”
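For the two-state audit chain this is easy to check numerically. Only p01 = 0.4 is given above; the remaining entries used below (p00 = 0.6, p10 = 0.5, p11 = 0.5) are assumed values for illustration. The sketch computes multi-step transition matrices by matrix multiplication and verifies the Chapman-Kolmogorov identity P(n) = P(m) P(n−m):

```python
import numpy as np

# States: 0 = no audit, 1 = audit.
# p01 = 0.4 is from the example; the other entries are assumed for illustration.
P = np.array([
    [0.6, 0.4],
    [0.5, 0.5],
])

P2 = P @ P                         # two-step probabilities, P(2) = P·P
P4 = np.linalg.matrix_power(P, 4)  # four-step probabilities, P(4) = P^4

# Chapman-Kolmogorov: P(4) = P(2)·P(2) = P(1)·P(3), etc.
assert np.allclose(P4, P2 @ P2)
assert np.allclose(P4, P @ np.linalg.matrix_power(P, 3))

print("P(2) =\n", P2)
print("Pr{audit in 2 years | no audit now} = p01(2) =", P2[0, 1])
```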
What happens as n gets large?
Observation: as n gets large, the rows of the matrix P(n) become identical, i.e., the entries asymptotically approach steady-state values.
What does it mean?
The probability of being in any future state becomes independent of the initial state as time progresses.
πj = limn→∞ Pr{Xn = j | X0 = i} = limn→∞ pij(n), for all i and j
These asymptotic values are called Steady-State Probabilities.
Recall the recursive probability π(k+1) = π(k) P. Taking the limit as k → ∞ gives the steady-state equations π = π P and Σj πj = 1, with πj = 1/Mj, where Mj is the mean recurrence time of state j.
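Numerically, the steady-state probabilities are obtained by solving π = πP together with Σj πj = 1, and the mean recurrence times then follow as Mj = 1/πj. A minimal sketch, reusing the assumed audit matrix from the example above:

```python
import numpy as np

# Assumed audit-example matrix (only p01 = 0.4 comes from the slides).
P = np.array([[0.6, 0.4],
              [0.5, 0.5]])

n = P.shape[0]
# Stack the balance equations (P^T - I) pi = 0 with the normalization sum(pi) = 1
# and solve the overdetermined system by least squares.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state probabilities:", pi)        # ~ [0.5556, 0.4444]
print("mean recurrence times Mj = 1/pi_j:", 1 / pi)
```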
1. Steady-state probabilities might not exist unless the Markov chain is ergodic.
2. Steady-state predictions are never achieved in actuality due to a combination of
(i) errors in estimating P,
(ii) changes in P over time, and
(iii) changes in the nature of dependence relationships among the states.
Nevertheless, the use of steady-state values is an important diagnostic tool for the decision maker.
Just because an ergodic system has steady-state probabilities does not mean that the system “settles down” into any one state.
πj is simply the likelihood of finding the system in state j after a large number of steps.
The limiting probability πj that the process is in state j after a large number of steps also equals the long-run proportion of time that the process will be in state j.
When the Markov chain is finite, irreducible, and periodic, we still have the result that the πj, j ∈ S, uniquely solve the steady-state equations, but now πj must be interpreted as the long-run proportion of time that the chain is in state j.
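The long-run-proportion interpretation can be checked by simulation: the fraction of steps that a long trajectory spends in state j approaches πj. A short sketch, again using the assumed audit matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed audit-example matrix (only p01 = 0.4 comes from the slides).
P = np.array([[0.6, 0.4],
              [0.5, 0.5]])

steps = 100_000
x, visits = 0, np.zeros(2)
for _ in range(steps):
    x = rng.choice(2, p=P[x])
    visits[x] += 1

print("empirical proportions:", visits / steps)   # ~ [0.556, 0.444], close to pi
```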
[Figures: example chains in which all states are transient, all states are positive recurrent, or all states are null recurrent; a chain decomposed into a transient set T and irreducible sets S1 and S2; a chain decomposed into a transient set T and a single irreducible set S.]