
V5 Stochastic Processes


Presentation Transcript


1. V5 Stochastic Processes: Poisson Process
• Suppose that a sequence of random events occurs during some time interval. These events form a homogeneous Poisson process if the following two conditions are met:
• (1) The occurrence of any event in the time interval (a, b) is independent of the occurrence of any event in the time interval (c, d), where (a, b) and (c, d) do not overlap.
• (2) There is a constant λ > 0 such that, for any sufficiently small time interval (t, t+h), h > 0, the probability that one event occurs in (t, t+h) is independent of t and is λh + O(h), and the probability that more than one event occurs in the interval (t, t+h) is O(h).

2. Poisson process
• Condition (2) has two implications:
• (i) time homogeneity: the probability of an event in the time interval (t, t+h) is independent of t;
• (ii) the probability of an event occurring in a small time interval is proportional to the length of the interval.
• Since the probability of one (or more) events in the interval (t, t+h) is λh + O(h), the probability of no events in the interval (t, t+h) is 1 – λh + O(h).
• We will now show that, under conditions (1) and (2), the number N of events that occur up to any arbitrary time t has a Poisson distribution with parameter λt.
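As an illustration of condition (2), the following minimal sketch (not part of the slides; the rate λ, horizon T, and grid width h are made-up values) approximates a homogeneous Poisson process by letting each small interval of width h contain an event with probability λh:

```python
import random

# Minimal sketch (assumed parameters): split (0, T) into intervals of width h
# and let each interval contain one event with probability lambda*h, as in
# condition (2); events in disjoint intervals are independent, as in condition (1).
lam, T, h = 2.0, 10.0, 1e-3
n_steps = int(T / h)

event_times = [k * h for k in range(n_steps) if random.random() < lam * h]
print(f"simulated {len(event_times)} events; expected about {lam * T:.0f} on average")
```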

3. Poisson process
At time 0 the value of N is necessarily 0. At any later time t, the possible values of N are 0, 1, 2, 3, ... We denote the probability that N = j at any given time t by Pj(t).
The event that N = 0 at time t+h occurs only if no events occur in (0, t) and also no events occur in (t, t+h). Thus, for small h,
P0(t+h) = P0(t)(1 – λh + O(h)) = P0(t) – P0(t)(λh) + O(h)   (4.2)

4. Poisson process
The event that N = 1 at time t+h can occur in two ways. The first is that N = 1 at time t and no event occurs in the time interval (t, t+h). The second is that N = 0 at time t and exactly one event occurs in the time interval (t, t+h). This gives
P1(t+h) = P0(t)(λh) + P1(t)(1 – λh) + O(h)   (4.3)
Here, the O(h) term is the sum of two contributions, each of which is O(h).

5. Poisson process
Finally, for j = 2, 3, ... the event that N = j at time t+h can occur in three different ways.
(1) N = j at time t and no event occurs in the time interval (t, t+h).
(2) N = j – 1 at time t and exactly one event occurs in the time interval (t, t+h).
(3) N ≤ j – 2 at time t and two or more events occur in (t, t+h); this possibility has probability O(h).
Thus, for j = 2, 3, ...
Pj(t+h) = Pj-1(t)(λh) + Pj(t)(1 – λh) + O(h)   (4.4)
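The recursions (4.2) and (4.4) can be iterated numerically on a fine time grid. A minimal sketch (not from the slides; the O(h) terms are dropped, and λ, the end time, the grid width h, and the cut-off j_max are assumed values):

```python
# Iterate P0(t+h) ≈ P0(t)*(1 - lam*h) and
# Pj(t+h) ≈ P_{j-1}(t)*lam*h + Pj(t)*(1 - lam*h) for j >= 1,
# starting from P0(0) = 1 and Pj(0) = 0.
lam, t_end, h, j_max = 2.0, 3.0, 1e-4, 20

P = [1.0] + [0.0] * j_max
t = 0.0
while t < t_end:
    new_P = [P[0] * (1 - lam * h)]
    for j in range(1, j_max + 1):
        new_P.append(P[j - 1] * lam * h + P[j] * (1 - lam * h))
    P, t = new_P, t + h

print([round(p, 4) for p in P[:8]])   # approximate Pj(t_end) for j = 0..7
```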

6. Poisson process
Equation (4.3), P1(t+h) = P0(t)(λh) + P1(t)(1 – λh) + O(h), and equation (4.4), Pj(t+h) = Pj-1(t)(λh) + Pj(t)(1 – λh) + O(h), look identical; the difference between them relates only to terms of order O(h). Thus we can take (4.4) to hold for all j ≥ 1.
Subtracting P0(t) from both sides of eq. (4.2) and dividing through by h gives
(P0(t+h) – P0(t)) / h = –λP0(t) + O(h)/h
Similarly, subtracting Pj(t) (j ≥ 1) from both sides of eq. (4.4) and dividing through by h gives
(Pj(t+h) – Pj(t)) / h = λPj-1(t) – λPj(t) + O(h)/h,   j = 1, 2, 3, ...

7. Poisson process
Letting h → 0, we get
dP0(t)/dt = –λP0(t)
dPj(t)/dt = λPj-1(t) – λPj(t),   j = 1, 2, 3, ...
The Pj(t) are subject to the conditions P0(0) = 1, Pj(0) = 0, j = 1, 2, 3, ...
Therefore, the above equation for P0 has the solution
P0(t) = e^(–λt)
By induction we can show that the second set of equations has the solution
Pj(t) = (λt)^j e^(–λt) / j!,   j = 0, 1, 2, ...
That is, at time t the random variable N has a Poisson distribution with parameter λt.
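A hedged numerical check of this result (not from the slides; λ, t, the grid width h, the number of runs, and the random seed are assumptions): on a fine grid the number of events is binomial with n = t/h trials and success probability λh per trial, and its distribution should be close to Pj(t) = (λt)^j e^(–λt) / j!:

```python
import numpy as np
from math import exp, factorial

# Assumed parameters for the check.
lam, t, h, runs = 2.0, 3.0, 1e-3, 5000
n_steps = int(t / h)

# Fine-grid simulation: each of n_steps intervals of width h contains an event
# with probability lam*h, so the total count per run is Binomial(n_steps, lam*h).
rng = np.random.default_rng(0)
counts = rng.binomial(n_steps, lam * h, size=runs)

for j in range(10):
    empirical = float(np.mean(counts == j))
    closed_form = (lam * t) ** j * exp(-lam * t) / factorial(j)
    print(j, round(empirical, 4), round(closed_form, 4))
```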

8. Finite Markov Chains
Consider some finite discrete set S of possible states, labelled {E1, E2, ..., Es}. At each of the unit time points t = 1, 2, 3, ... a Markov chain process occupies one of these states. In each time step t to t + 1, the process either stays in the same state or moves to some other state in S. It does this in a probabilistic, or stochastic, way rather than in a deterministic way. That is, if at time t the process is in state Ej, then at time t + 1 it either stays in this state or moves to some other state Ek according to some well-defined probabilistic rule.
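A minimal sketch of such a process (the states and probabilities below are invented for illustration): at each unit time step the next state is drawn from a distribution that depends only on the current state:

```python
import random

# Hypothetical 3-state chain: for each current state, list the possible next
# states together with their probabilities (each list of probabilities sums to 1).
moves = {
    "E1": [("E1", 0.5), ("E2", 0.5)],
    "E2": [("E1", 0.3), ("E2", 0.4), ("E3", 0.3)],
    "E3": [("E2", 0.2), ("E3", 0.8)],
}

state, path = "E1", ["E1"]
for _ in range(10):                       # unit time points t = 1, ..., 10
    targets, probs = zip(*moves[state])
    state = random.choices(targets, weights=probs)[0]
    path.append(state)
print(path)
```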

9. Finite Markov Chains
The process is called Markovian if it has the following distinguishing Markov characteristics:
(i) The memoryless property. If at some time t the process is in state Ej, the probability that one time unit later it is in state Ek depends only on Ej and not on the past history of the states it was in before time t.
(ii) The time homogeneity property. Given that at time t the process is in state Ej, the probability that one time unit later it is in state Ek is independent of t.

10. Transition Probabilities
Suppose that at time t a Markovian random variable is in state Ej. We denote the probability that at time t + 1 it is in state Ek by pjk, called the transition probability from Ej to Ek. In writing this, we implicitly assume that the Markovian assumptions hold, i.e. the transition probabilities do not depend on previous states and do not depend on the current time t.
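Because pjk depends neither on the past history nor on the current time t, it can be estimated from observed transition frequencies along a single path. A hypothetical sketch (the state sequence below is made up):

```python
from collections import Counter

# Count how often state Ek follows state Ej in an observed sequence and
# normalise by how often Ej was visited (excluding the final time point).
path = ["E1", "E2", "E2", "E1", "E2", "E3", "E3", "E1", "E1", "E2", "E3"]

pair_counts = Counter(zip(path, path[1:]))
from_counts = Counter(path[:-1])

for (ej, ek), n in sorted(pair_counts.items()):
    print(f"p({ej} -> {ek}) is roughly {n / from_counts[ej]:.2f}")
```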

11. Transition Probability Matrix
It is convenient to group the transition probabilities pjk into the transition probability matrix P of the Markov chain. The rows and columns of P are in correspondence with the states E1, ..., Es. Each row of the matrix corresponds to the state from which the transition is made, and each column corresponds to the state to which the transition is made. Thus, the probabilities in any particular row must sum to 1, whereas the entries in any given column need not sum to anything in particular. The initial distribution and the transition matrix P jointly determine the probability of any event of interest in the entire process.
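A short sketch with a made-up 3-state transition matrix, checking that each row sums to 1 and using P to propagate an initial distribution forward in time:

```python
import numpy as np

# Hypothetical transition matrix: rows index the "from" state, columns the "to" state.
P = np.array([
    [0.7, 0.3, 0.0],
    [0.1, 0.6, 0.3],
    [0.0, 0.2, 0.8],
])
assert np.allclose(P.sum(axis=1), 1.0)   # every row sums to 1; columns need not

# The initial distribution and P determine the state distribution at any later time:
# after t steps the distribution is p0 @ P^t.
p0 = np.array([1.0, 0.0, 0.0])           # start in E1 with probability 1
p5 = p0 @ np.linalg.matrix_power(P, 5)
print(p5)
```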

12. Graphical Representation of a Markov Chain
It is often convenient to represent a Markov chain by a directed graph. Here, we identify the states with the nodes of the graph and the transition probabilities with its directed edges.
Consider a Markov chain with states E1, E2, and E3 and a given probability transition matrix (the 3 × 3 matrix shown on the slide). This Markov chain is represented by the corresponding directed graph (also shown on the slide).
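A sketch of the same idea in code (the matrix values below are invented, not those from the slide): each strictly positive entry pjk of the transition matrix becomes a directed edge from Ej to Ek labelled with its probability:

```python
import numpy as np

# Hypothetical transition matrix and its directed-graph edge list.
states = ["E1", "E2", "E3"]
P = np.array([
    [0.7, 0.3, 0.0],
    [0.1, 0.6, 0.3],
    [0.0, 0.2, 0.8],
])

edges = [(states[j], states[k], P[j, k])
         for j in range(len(states))
         for k in range(len(states))
         if P[j, k] > 0]
for src, dst, p in edges:
    print(f"{src} -> {dst}  (probability {p:.1f})")
```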
