
Markov Chains



  1. Markov Chains: Definition, Chapman-Kolmogorov Equations, Classification of States, Limiting Probabilities, Transient Analysis, Time Reversibility (Chapter 4)

  2. Stochastic Processes. A stochastic process is a collection of random variables $\{X(t),\, t \in T\}$. Typically, $T$ is continuous (time) and we have $\{X(t),\, t \ge 0\}$. Or, $T$ is discrete and we are observing $\{X_n,\, n = 0, 1, 2, \dots\}$ at discrete time points $n$ that may or may not be evenly spaced. Refer to $X(t)$ as the state of the process at time $t$. The state space of the stochastic process is the set of all possible values of $X(t)$; this set may be discrete or continuous as well.

  3. Markov Chains. In this chapter, consider a discrete-state, discrete-time process $\{X_n,\, n = 0, 1, 2, \dots\}$. A Markov chain is a stochastic process where each $X_n$ takes values in the same countable set (a subset of $\{0, 1, 2, \dots\}$), and

  $P\{X_{n+1} = j \mid X_n = i,\, X_{n-1} = i_{n-1}, \dots, X_0 = i_0\} = P\{X_{n+1} = j \mid X_n = i\} = P_{ij}$

  for all states $i_0, i_1, \dots, i_{n-1}, i, j$ and all $n \ge 0$. Then $P_{ij} \ge 0$ and $\sum_j P_{ij} = 1$ for each $i$. Let $P = [P_{ij}]$ be the matrix of one-step transition probabilities.
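As a concrete illustration (not part of the slides), here is a minimal sketch in Python/NumPy of a one-step transition matrix; the three-state matrix P is invented for the example, with each row forming a probability distribution:

    import numpy as np

    # Hypothetical 3-state chain for illustration; row i gives the
    # distribution P{X_{n+1} = j | X_n = i}, so each row sums to 1.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.2, 0.2, 0.6]])
    assert np.allclose(P.sum(axis=1), 1.0)  # rows are probability distributions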

  4. n-step Transition Probabilities. Given the chain is in state $i$ at a given time, what is the probability it will be in state $j$ after $n$ transitions?

  $P_{ij}^{n} = P\{X_{n+m} = j \mid X_m = i\}, \quad n \ge 0$

  Find it by conditioning on the initial transition(s): $P_{ij}^{n} = \sum_k P_{ik}\, P_{kj}^{n-1}$.

  5. Chapman-Kolmogorov Equations. In general, one can find the n-step transition probabilities by conditioning on the state at any intermediate stage:

  $P_{ij}^{n+m} = \sum_k P_{ik}^{n}\, P_{kj}^{m} \quad \text{for all } n, m \ge 0$

  Let $P^{(n)}$ be the matrix of n-step transition probabilities: then $P^{(n+m)} = P^{(n)} P^{(m)}$. So, by induction, $P^{(n)} = P^{n}$, the $n$th power of the one-step matrix.
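A quick numerical check of the Chapman-Kolmogorov identity, reusing the illustrative matrix P from above (a sketch, not from the slides):

    # The n-step matrix is the n-th matrix power, and P^(n+m) = P^(n) P^(m);
    # here we verify the case n = m = 2.
    P2 = np.linalg.matrix_power(P, 2)
    P4 = np.linalg.matrix_power(P, 4)
    assert np.allclose(P4, P2 @ P2)
    print(P4[0, 2])  # P{X_4 = 2 | X_0 = 0}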

  6. Classification of States. State $j$ is accessible from state $i$ if $P_{ij}^{n} > 0$ for some $n \ge 0$. If $j$ is accessible from $i$ and $i$ is accessible from $j$, we say that states $i$ and $j$ communicate ($i \leftrightarrow j$). Communication is a class property:
  • State $i$ communicates with itself, for all $i \ge 0$
  • If $i$ communicates with $j$ then $j$ communicates with $i$
  • If $i \leftrightarrow j$ and $j \leftrightarrow k$, then $i \leftrightarrow k$.
  Therefore, communication divides the state space into mutually exclusive classes. If all the states communicate, the Markov chain is irreducible.
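Communicating classes can be computed mechanically: they are the strongly connected components of the directed graph with an edge $i \to j$ whenever $P_{ij} > 0$. A sketch using SciPy's graph routines, again on the illustrative P:

    from scipy.sparse.csgraph import connected_components

    # Strongly connected components of the transition graph = communicating classes.
    n_classes, labels = connected_components(P > 0, directed=True,
                                             connection='strong')
    print(n_classes, labels)  # a single class means the chain is irreducible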

  7. Recurrence vs. Transience. Let $f_i$ be the probability that, starting in state $i$, the process will ever reenter state $i$. If $f_i = 1$, the state is recurrent; otherwise it is transient. If state $i$ is recurrent then, starting from state $i$, the process will reenter state $i$ infinitely often (with probability 1). If state $i$ is transient then, starting in state $i$, the number of periods in which the process is in state $i$ has a geometric distribution with parameter $1 - f_i$. Equivalently, state $i$ is recurrent if $\sum_{n=1}^{\infty} P_{ii}^{n} = \infty$ and transient if $\sum_{n=1}^{\infty} P_{ii}^{n} < \infty$. Recurrence (transience) is a class property: if $i$ is recurrent (transient) and $i \leftrightarrow j$, then $j$ is recurrent (transient). A special case of a recurrent state: if $P_{ii} = 1$ then $i$ is absorbing.
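For intuition, $f_i$ can be estimated by simulation. The sketch below is an assumption-laden approximation (the function name, seed, horizon, and trial count are all made up, and truncating at a finite horizon misses late returns), again using the illustrative P:

    rng = np.random.default_rng(seed=1)

    def estimate_return_prob(P, i, horizon=500, trials=5000):
        # Monte Carlo estimate of f_i: fraction of sample paths that
        # return to state i within a finite horizon (an approximation).
        hits = 0
        for _ in range(trials):
            state = i
            for _ in range(horizon):
                state = rng.choice(len(P), p=P[state])
                if state == i:
                    hits += 1
                    break
        return hits / trials

    print(estimate_return_prob(P, i=0))  # close to 1: this finite irreducible chain is recurrent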

  8. Recurrence, Transience and Other Properties. Not all states in a finite Markov chain can be transient (why? with finitely many states, at least one state must be visited infinitely often). All states of a finite irreducible Markov chain are recurrent. If $P_{ii}^{n} = 0$ whenever $n$ is not divisible by $d$, and $d$ is the largest integer with this property, then state $i$ is periodic with period $d$. If a state has period $d = 1$, then it is aperiodic. If state $i$ is recurrent and if, starting in state $i$, the expected time until the process returns to state $i$ is finite, it is positive recurrent (otherwise it is null recurrent). A positive recurrent, aperiodic state is called ergodic.
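The period of a state is the gcd of the possible return times $n$ with $P_{ii}^{n} > 0$. A small sketch (the function and the truncation bound max_n are assumptions; truncation is adequate for small chains like the illustrative P):

    from math import gcd
    from functools import reduce

    def period(P, i, max_n=50):
        # gcd of all n <= max_n with (P^n)[i, i] > 0; returns 0 if no
        # return is possible within the truncation bound.
        times = [n for n in range(1, max_n + 1)
                 if np.linalg.matrix_power(P, n)[i, i] > 0]
        return reduce(gcd, times) if times else 0

    print(period(P, 0))  # 1 here, so state 0 is aperiodic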

  9. Limiting Probabilities. Theorem: For an irreducible ergodic Markov chain, $\pi_j = \lim_{n \to \infty} P_{ij}^{n}$ exists for all $j$ and is independent of $i$. Furthermore, $\pi_j$ is the unique nonnegative solution of

  $\pi_j = \sum_i \pi_i P_{ij}, \qquad \sum_j \pi_j = 1$

  The probability $\pi_j$ also equals the long-run proportion of time that the process is in state $j$. If the chain is irreducible and positive recurrent but periodic, the same system of equations can be solved for these long-run proportions.
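A minimal sketch of solving this system numerically: stack the balance equations $\pi P = \pi$ with the normalization $\sum_j \pi_j = 1$ and solve by least squares (the function name is an assumption; P is the illustrative matrix from above):

    def stationary_distribution(P):
        # Solve pi P = pi together with sum(pi) = 1 as one linear system;
        # the appended row of ones enforces normalization.
        m = len(P)
        A = np.vstack([P.T - np.eye(m), np.ones((1, m))])
        b = np.append(np.zeros(m), 1.0)
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi

    pi = stationary_distribution(P)
    print(pi, np.allclose(pi @ P, pi))  # pi solves the balance equations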

  10. Limiting Probabilities 2. The long-run proportions $\pi_j$ are also called stationary probabilities because if $P\{X_0 = j\} = \pi_j$ for all $j$, then $P\{X_n = j\} = \pi_j$ for all $n$ and $j$. Let $m_{jj}$ be the expected number of transitions until the Markov chain, starting in state $j$, returns to state $j$ (finite if state $j$ is positive recurrent). Then $\pi_j = 1 / m_{jj}$. If $\{X_n,\, n \ge 1\}$ is an irreducible Markov chain with stationary probabilities $\pi_j$, and $r$ is a bounded function on the state space, then with probability 1,

  $\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} r(X_n) = \sum_j r(j)\, \pi_j$

  (the long-run average reward).
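Continuing the sketch with the pi computed above (the reward vector r is made up for illustration):

    # Hypothetical per-state reward r(j); the long-run average reward is
    # sum_j r(j) * pi_j, and mean recurrence times are m_jj = 1 / pi_j.
    r = np.array([2.0, -1.0, 5.0])
    print(r @ pi)     # long-run average reward per period
    print(1.0 / pi)   # expected return times m_jj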

  11. Transient Analysis. Suppose a finite Markov chain with $m$ states has some transient states. Assume the states are numbered so that $T = \{1, 2, \dots, t\}$ is the set of transient states, and let $P_T$ be the $t \times t$ matrix of transition probabilities among these states. Let $R$ be the $t \times (m - t)$ matrix of one-step transition probabilities from transient states to the recurrent states, and let $P_R$ be the $(m - t) \times (m - t)$ matrix of transition probabilities among the recurrent states. The overall one-step transition probability matrix can then be written in block form as

  $P = \begin{pmatrix} P_T & R \\ 0 & P_R \end{pmatrix}$

  If the recurrent states are all absorbing, then $P_R = I$.

  12. Transient Analysis 2
  • If the process starts in a transient state, how long does it spend among the transient states?
  • What are the probabilities of eventually entering a given recurrent state?
  Define $\delta_{ij} = 1$ if $i = j$ and $0$ otherwise. For $i$ and $j$ in $T$, let $s_{ij}$ be the expected number of periods that the Markov chain is in state $j$, given that it started in state $i$. Condition on the first transition, and note that transitions from recurrent states to transient states are impossible:

  $s_{ij} = \delta_{ij} + \sum_{k \in T} P_{ik}\, s_{kj}$

  13. Transient Analysis 3. Let $S$ be the matrix of $s_{ij}$ values. Then $S = I + P_T S$, or $S = (I - P_T)^{-1}$. For $i$ and $j$ in $T$, let $f_{ij}$ be the probability that the Markov chain ever makes a transition into $j$, starting from $i$; then $f_{ij} = (s_{ij} - \delta_{ij}) / s_{jj}$. For $i$ in $T$ and $j$ in $T^c$, the matrix of these probabilities is $F = S R$.
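A sketch of this computation on a hypothetical 4-state chain (states 0 and 1 transient, states 2 and 3 absorbing; the blocks PT and R are invented so that the full rows sum to 1):

    # Hypothetical blocks of the partitioned transition matrix.
    PT = np.array([[0.4, 0.3],
                   [0.2, 0.5]])       # transitions among transient states
    R = np.array([[0.2, 0.1],
                  [0.1, 0.2]])        # transient -> absorbing transitions

    S = np.linalg.inv(np.eye(2) - PT)  # S[i, j] = expected visits to j starting from i
    F = S @ R                          # F[i, j] = P{absorbed in recurrent state j | start in i}
    print(S)
    print(F, F.sum(axis=1))            # each row of F sums to 1: absorption is certain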

  14. Time Reversibility
  • One approach to estimating the transition probabilities out of each state is to look at transitions into states and track what the previous state was.
  • How do we know this information is reliable?
  • How do we use it to estimate the forward transition probabilities?
  Consider a stationary ergodic Markov chain, and trace the sequence of states going backwards: $X_n, X_{n-1}, \dots, X_0$. This is a Markov chain with transition probabilities

  $Q_{ij} = P\{X_m = j \mid X_{m+1} = i\} = \frac{\pi_j P_{ji}}{\pi_i}$

  If $Q_{ij} = P_{ij}$ for all $i, j$, then the Markov chain is time reversible.

  15. Time Reversibility 2. Another way of writing the reversibility condition $Q_{ij} = P_{ij}$ is

  $\pi_i P_{ij} = \pi_j P_{ji} \quad \text{for all } i, j$

  Proposition: Consider an irreducible Markov chain with transition probabilities $P_{ij}$. If one can find positive numbers $\pi_i$ summing to 1 and a transition probability matrix $Q$ such that $\pi_i P_{ij} = \pi_j Q_{ji}$ holds for all $i, j$, then the $Q_{ij}$ are the transition probabilities for the reversed chain and the $\pi_i$ are the stationary probabilities for both the original and the reversed chain. Use this, thinking backwards, to guess at the transition probabilities of the reversed chain.
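A closing sketch: compute the reversed chain from the formula $Q_{ij} = \pi_j P_{ji} / \pi_i$ and test the detailed-balance condition, reusing the illustrative P and the pi computed earlier (the function name is an assumption):

    def reversed_chain(P, pi):
        # Q[i, j] = pi[j] * P[j, i] / pi[i]
        return (pi[None, :] * P.T) / pi[:, None]

    Q = reversed_chain(P, pi)
    print(np.allclose(Q.sum(axis=1), 1.0))  # Q is a valid transition matrix
    # The chain is time reversible iff detailed balance holds:
    # pi_i P_ij = pi_j P_ji, i.e. the matrix D below is symmetric.
    D = pi[:, None] * P
    print(np.allclose(D, D.T))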
