
Markov-opoly


Presentation Transcript


  1. Markov-opoly Markov chains: an Applied Approach By Daniel Huang and Mo Dwyer

  2. What is a Markov Chain, Anyway? • It is a sequence of states in which the next state depends only on the current one: given the (n−1)st state, the nth state is independent of everything that came before.

  3. The Process • Create a “transition matrix.” • Its entries give the probability of moving from each state to every other state. • Raise the transition matrix to the nth power to get the probabilities after n steps.

  4. Confused? Have an example… The states: the three locations of a car rental agency in LA: • Downtown location (A) • East end location (B) • West end location (C).

  5. The agency's statistician has determined the following: • Of the calls to the Downtown location, 30% are delivered in the Downtown area, 30% in the East end, and 40% in the West end. • Of the calls to the East end location, 40% are delivered in the Downtown area, 40% in the East end, and 20% in the West end. • Of the calls to the West end location, 50% are delivered in the Downtown area, 30% in the East end, and 20% in the West end.

  6. The Transition Matrix (order A, B, C; column = location called, row = delivery area):
        T  = [ 0.3   0.4   0.5
               0.3   0.4   0.3
               0.4   0.2   0.2 ]
        T² = [ 0.41  0.38  0.37
               0.33  0.34  0.33
               0.26  0.28  0.30 ]
     By T⁷ the three columns are nearly identical. Notice how it starts to converge!
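  A minimal MATLAB sketch of this computation (the code and variable names are mine, not from the slides; only the numbers come from slide 5):

     % Transition matrix from slide 5 (order A, B, C):
     % column = location called, row = area the car is delivered to.
     T = [0.3 0.4 0.5;
          0.3 0.4 0.3;
          0.4 0.2 0.2];
     disp(T^2)   % entries already drifting toward a common column
     disp(T^7)   % the three columns are nearly identical by now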

  7. Tⁿ: raise T to the nth power and you now have the probability distribution of the drivers at time n!

  8. So where will you be next? • Take the transition matrix and multiply it by the current state. • That is: Xₙ₊₁ = T·Xₙ

  9. So where will you be next? • Take the transition matrix and multiply it by the current state. • That is: Xₙ₊₁ = T·Xₙ • As n approaches infinity, the resulting vector P = Tⁿ·X₀ no longer depends on the initial state X₀.
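  A short MATLAB sketch of this iteration, assuming (as an example starting point not given in the slides) that every car begins at the Downtown location:

     T  = [0.3 0.4 0.5; 0.3 0.4 0.3; 0.4 0.2 0.2];   % transition matrix from slide 6
     x0 = [1; 0; 0];      % assumed start: every car at the Downtown location (A)
     x1 = T * x0          % one step of X_{n+1} = T*X_n
     p  = T^50 * x0       % for large n, the result barely depends on x0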

  10. Our Project: Creating a Markov chain to predict Monopoly moves.

  11. For Monopoly • First, find the probability of rolling a certain number, and thus landing on a certain square. • Add the probability of a Chance or Community Chest card sending you somewhere else.

  12. Thank goodness for MATLAB • You end up with a 40x40 matrix because there are 40 squares to land on.
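  The slides don't show the MATLAB code itself, so here is a deliberately simplified sketch of how such a 40×40 matrix could be built from the two-dice roll alone; it ignores Chance, Community Chest, doubles, and Go To Jail, which the actual project adds on top:

     % Simplified sketch: 40x40 transition matrix from the two-dice roll only.
     N = 40;
     M = zeros(N, N);                        % M(to, from); each column sums to 1
     for s = 2:12                            % possible two-dice sums
         pr = (6 - abs(s - 7)) / 36;         % e.g. P(sum=7) = 6/36, P(sum=2) = 1/36
         for from = 1:N
             to = mod(from - 1 + s, N) + 1;  % wrap around the 40-square board
             M(to, from) = M(to, from) + pr;
         end
     end
     x = zeros(N, 1);  x(1) = 1;             % start on GO
     after10 = M^10 * x;                     % distribution over squares after 10 turns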

  13. Etc. …

  14. Alternatively: • You can also use eigenvectors to solve for steady states!

  15. Alternatively: • You can also use eigenvectors to solve for steady states! • Take λ = 1 • Then Tp = p • p is the probability vector of dimension m×1; its elements sum to 1.
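  A minimal MATLAB sketch of this eigenvector approach, applied to the rental-car matrix from earlier (the code is mine, not from the slides):

     T = [0.3 0.4 0.5; 0.3 0.4 0.3; 0.4 0.2 0.2];   % rental-car transition matrix
     [V, D] = eig(T);
     [~, k] = min(abs(diag(D) - 1));   % pick the eigenvector for lambda = 1
     p = V(:, k) / sum(V(:, k))        % normalize so the entries sum to 1
     % p comes out to roughly [0.389; 0.333; 0.278] here, matching where Tⁿ converges.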

  16. THE END! Any questions?
