Markov Chains (covered in Sections 1.1, 1.6, 6.3, and 9.4)




Presentation Transcript


  1. Markov Chains (covered in Sections 1.1, 1.6, 6.3, and 9.4)

  2. Markov Chains Mathematical models for processes that evolve over time in a probabilistic manner are called stochastic processes. A special kind of stochastic process is a Markov chain. It is characterized by the property that the probabilities of how the process will evolve in the future depend only on the present state of the process, and so are independent of events in the past. Markov chains are used to analyze trends and predict the future (weather, stock market, genetics, product success, etc.).

  3. Markov Chains A finite Markov chain consists of
  • a set of objects,
  • a set of consecutive time periods, and
  • a finite set of different states, such that
  • during any given time period, each object is in only one state (although different objects can be in different states);
  • the probability that an object will move from one state to another state (or remain in the same state) over a time period depends only on the beginning and ending states.

  4. Markov Chains
  • A Markov chain can be represented by a matrix P = [pij], where pij is the probability of an object moving from state i to state j in one time period. Such a matrix is called a transition matrix.
  • The transition matrix is square (by the nature of the Markov process).
  • The probabilities in each row must sum to one.
  • A Markov chain can also be represented by a graph.
  • Example in the next slide.

  5. Markov Chains: Coke vs. Pepsi Example
  [State diagram: Coke stays Coke with probability 0.9 and moves to Pepsi with probability 0.1; Pepsi stays Pepsi with probability 0.8 and moves to Coke with probability 0.2.]
  • Given that a person’s last cola purchase was Coke, there is a 90% chance that his next cola purchase will also be Coke.
  • If a person’s last cola purchase was Pepsi, there is an 80% chance that his next cola purchase will also be Pepsi.
  Transition matrix (state order Coke, Pepsi):
           Coke   Pepsi
  Coke     0.9    0.1
  Pepsi    0.2    0.8
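  A minimal sketch of how this transition matrix might be written down numerically (Python with numpy; the variable names are illustrative, not from the slides):

      import numpy as np

      # Coke vs. Pepsi transition matrix, state order [Coke, Pepsi].
      # Row i holds the probabilities of moving from state i to each state
      # in one purchase.
      P = np.array([[0.9, 0.1],   # Coke  -> Coke, Pepsi
                    [0.2, 0.8]])  # Pepsi -> Coke, Pepsi

      # Every row of a transition matrix must sum to one.
      assert np.allclose(P.sum(axis=1), 1.0)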

  6. Powers of transition matrices: Coke vs. Pepsi Example (cont.) Given that a person is currently a Pepsi purchaser, what is the probability that he will purchase Coke two purchases from now?
  Pr[Pepsi → ? → Coke] = Pr[Pepsi → Coke → Coke] + Pr[Pepsi → Pepsi → Coke] = 0.2 * 0.9 + 0.8 * 0.2 = 0.18 + 0.16 = 0.34
  P² is the transition matrix after two time periods: its (i, j) entry is the probability of moving from state i to state j in two steps.
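  The same two-step computation, read off P² in a short numpy sketch (illustrative only; P is the matrix defined above):

      import numpy as np

      P = np.array([[0.9, 0.1],    # state order [Coke, Pepsi]
                    [0.2, 0.8]])

      P2 = np.linalg.matrix_power(P, 2)   # two-step transition probabilities
      # P2 == [[0.83, 0.17],
      #        [0.34, 0.66]]

      # Entry (Pepsi, Coke): probability of buying Coke two purchases after a Pepsi purchase.
      print(P2[1, 0])   # 0.34 = 0.2*0.9 + 0.8*0.2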

  7. Powers of transition matrices: Coke vs. Pepsi Example (cont.) Given that a person is currently a Coke purchaser, what is the probability that he will purchase Pepsi three purchases from now? The answer is the (Coke, Pepsi) entry of P³, which works out to 0.219 (see the sketch below).
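  Following the same pattern, a small sketch that computes P³ and reads off the (Coke, Pepsi) entry (again Python/numpy, illustrative only):

      import numpy as np

      P = np.array([[0.9, 0.1],    # state order [Coke, Pepsi]
                    [0.2, 0.8]])

      P3 = np.linalg.matrix_power(P, 3)   # three-step transition probabilities
      # P3 == [[0.781, 0.219],
      #        [0.438, 0.562]]

      # Probability that a current Coke purchaser buys Pepsi three purchases from now.
      print(P3[0, 1])   # 0.219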

  8. Distribution Row Vector A distribution row vector d for an N-state Markov chain is an N-dimensional row vector with one component per state, giving the probability that an object in the system is in each of the respective states. Example (cont.): Suppose 60% of all people now drink Coke and 40% drink Pepsi. Then the distribution vector is (0.6, 0.4). Let d(k) denote the distribution vector for the Markov chain after k time periods, so d(0) is the initial distribution. Then d(k) = d(0) P^k.

  9. Distribution Row Vector Example (cont.):
  • Suppose 60% of all people now drink Coke, and 40% drink Pepsi.
  • What fraction of people will be drinking Coke three weeks from now?
  • d(0) = (0.6, 0.4)
  • d(3) = d(0) P³
  • Pr[X3 = Coke] = 0.6 * 0.781 + 0.4 * 0.438 = 0.6438
  (Here 0.781 and 0.438 are the Coke → Coke and Pepsi → Coke entries of P³.)
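  The same calculation as a distribution-vector sketch, d(3) = d(0) P³ (numpy, illustrative):

      import numpy as np

      P = np.array([[0.9, 0.1],     # state order [Coke, Pepsi]
                    [0.2, 0.8]])
      d0 = np.array([0.6, 0.4])     # initial distribution: 60% Coke, 40% Pepsi

      d3 = d0 @ np.linalg.matrix_power(P, 3)   # d(3) = d(0) P^3
      print(d3)      # [0.6438, 0.3562]
      print(d3[0])   # Pr[X3 = Coke] = 0.6438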

  10. Regular Markov Chains A Markov chain is regular if some power of the transition matrix contains only positive elements. If the matrix itself contains only positive elements, then that power is one and the matrix is automatically regular. Regular transition matrices always have an eigenvalue of unity. They also have a limiting distribution vector x(∞), whose ith component represents the probability that an object is in state i after a large number of time periods have elapsed.
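  One way to see this limiting behaviour numerically is to raise P to higher and higher powers; for a regular chain every row of P^k approaches the same limiting distribution. A small sketch under that assumption (numpy, illustrative):

      import numpy as np

      P = np.array([[0.9, 0.1],    # state order [Coke, Pepsi]
                    [0.2, 0.8]])   # all entries positive, so the chain is regular

      for k in (1, 5, 20, 100):
          print(k)
          print(np.linalg.matrix_power(P, k))
      # As k grows, both rows approach [2/3, 1/3].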

  11. Limiting distribution: Coke vs. Pepsi Example (cont.)
  [Figure: the state diagram (Coke ↔ Pepsi with probabilities 0.9, 0.1, 0.2, 0.8) and a plot of Pr[Xi = Coke] versus week i, converging to the stationary value 2/3.]

  12. Regular Markov Chains Definition: A nonzero vector x is a left eigenvector for a matrix A if there exists a scalar λ such that xA = λx. (Left and right eigenvectors are usually different; they coincide only for special types of matrices, e.g. symmetric ones.) The limiting distribution x(∞) is a left eigenvector of the transition matrix corresponding to the eigenvalue of unity, with the sum of its components equal to one. Examples on the board.
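  A hedged numpy sketch of finding that left eigenvector: a left eigenvector of P is a right eigenvector of the transpose of P, so np.linalg.eig is applied to P.T and the result is rescaled to sum to one (variable names are illustrative):

      import numpy as np

      P = np.array([[0.9, 0.1],    # state order [Coke, Pepsi]
                    [0.2, 0.8]])

      # x P = x  is equivalent to  P^T x^T = x^T.
      eigenvalues, eigenvectors = np.linalg.eig(P.T)

      # Take the eigenvector for the eigenvalue (closest to) 1 and normalize it
      # so its components sum to one.
      i = np.argmin(np.abs(eigenvalues - 1.0))
      x = np.real(eigenvectors[:, i])
      x = x / x.sum()
      print(x)   # [0.6667, 0.3333], i.e. (2/3, 1/3): the limiting distribution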
