
State based methods


Presentation Transcript


  1. State based methods

  2. State based methods • State of the system: a state represents all that must be known to describe the system at any given instant of time. For reliability models, each state represents a distinct combination of failed and working modules. • State transitions govern the changes of state that occur within a system. For reliability models, as time passes the system moves from state to state as modules fail and are repaired. The state transitions are characterized by probabilities, such as the probability of failure and the probability of repair. • Random processes

  3. Random process A random process is a collection of random variables indexed by time. Examples of random processes {Xt}, with time T = {1, 2, 3, …}: 1) Let X1 be the result of tossing a die, let X2 be the result of tossing a die, and so on: {Xt} represents the sequence of results of tossing a die. P[X1 = 4] = 1/6; P[X4 = 4 | X2 = 2] = P[X4 = 4] = 1/6. 2) Let X1 be the result of tossing a die, and let X2 be the result of tossing a die plus X1, and so on (Xi = the result of tossing a die plus Xi-1). P[X2 = 12] = 1/36; P[X3 = 14 | X1 = 2] = 1/36. (A simulation sketch follows below.)
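
A quick Monte Carlo sketch of the second example in plain Python; the function name `estimate` and the trial count are illustrative choices, not from the slides:

```python
import random

def estimate(trials=200_000):
    hits_x2, hits_x3, given = 0, 0, 0
    for _ in range(trials):
        d1, d2, d3 = (random.randint(1, 6) for _ in range(3))
        x1, x2, x3 = d1, d1 + d2, d1 + d2 + d3   # cumulative dice process
        hits_x2 += (x2 == 12)                    # needs d1 = d2 = 6
        if x1 == 2:
            given += 1
            hits_x3 += (x3 == 14)                # needs d2 = d3 = 6
    print("P[X2 = 12]          ~", hits_x2 / trials)
    print("P[X3 = 14 | X1 = 2] ~", hits_x3 / given)
    print("exact value 1/36    =", 1 / 36)

estimate()
```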

  4. Random process Discrete-time random process: the number of time points defined for the random process is finite or countable (e.g., the integers). Continuous-time random process: the number of time points defined for the random process is uncountable (e.g., the real numbers). Example: let Xt be the number of fault arrivals in a system up to time t. If t is a real number, Xt is a continuous-time random process.

  5. Random process state space The state space S of a random process X is the set of all possible values the process can take: S = {y: Xt = y, for some t}. If X is a random process that models a system, then the state space S of X can represent the set of all possible configurations the system could be in.

  6. Random process state space Discrete-state random process X: the state space of X is finite or countable (e.g., S = {1, 2, 3, …}); if the number of states of the process is finite, N denotes this number. Continuous-state random process X: the state space of X is infinite and uncountable (e.g., S = the set of real numbers). Let X be a random process that represents the number of bad packets received over a network: X is a discrete-state random process. Let X be a random process that represents the voltage on a telephone line: X is a continuous-state random process.

  7. Markov process • Let {Xt, t ≥ 0} be a random process. A special type of random process is the Markov process. Basic assumption underlying a Markov process: the probability of a given state transition depends only on the current state, so the future behaviour is independent of past values (memoryless property): P[Xt+1 = j | Xt = i, Xt-1, …, X0] = P[Xt+1 = j | Xt = i].

  8. Markov process: steady-state transition probabilities • Let {Xt, t ≥ 0} be a Markov process. The Markov process X has steady-state transition probabilities if, for any pair of states i, j, the probability of a transition from state i to state j does not depend on the time: P[Xt+1 = j | Xt = i] is the same for every t. This probability is called pij.

  9. Markov chain • A Markov chain is a Markov process with a discrete state space S. • A Markov chain is homogeneous if X has steady-state transition probabilities, non-homogeneous otherwise. • A Markov chain is a finite-state Markov chain if the number of states is finite (N). We consider discrete-time homogeneous Markov chains (DTMC).

  10. Transition probability matrix • If a Markov chain is finite-state, we can define the transition probability matrix P (N×N), where pij is the probability of moving from state i to state j in one step. Row i of matrix P: the probabilities of making a transition starting from state i (each row sums to 1). Column j of matrix P: the probabilities of making a transition from any state to state j.

  11. An example A computer is idle, working or failed. When the computer is idle, jobs arrive with a given probability. When the computer is idle or busy, it may fail with probability Pfi or Pfb, respectively. S = {1, 2, 3}: 1 computer idle, 2 computer working, 3 computer failed. (A sketch of the corresponding matrix follows below.)
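
A minimal numpy sketch of this chain. The slides leave the probabilities symbolic, so the numeric values below (arrival 0.6, completion 0.3, Pfi = 0.05, Pfb = 0.10) are assumptions for illustration:

```python
import numpy as np

# States: 0 = idle, 1 = working, 2 = failed (0-indexed here).
p_arrive, p_done = 0.6, 0.3   # job arrival / completion probabilities (assumed)
p_fi, p_fb = 0.05, 0.10       # failure probabilities when idle / busy (assumed)

P = np.array([
    [1 - p_arrive - p_fi, p_arrive,          p_fi],   # from idle
    [p_done,              1 - p_done - p_fb, p_fb],   # from working
    [0.0,                 0.0,               1.0],    # failed (no repair yet)
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution
```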

  12. Transition probability after n time steps THEOREM (a generalization of the steady-state transition probabilities): for any i, j in S and for any n > 0, P[Xt+n = j | Xt = i] does not depend on t. Definition: the steady-state transition probability after n time steps is pij(n) = P[Xt+n = j | Xt = i]. Definition: the transition matrix after n time steps is P(n), the N×N matrix with entries pij(n).

  13. Transition probability after n time steps Definition: P(1) = P and P(0) = I (the identity matrix). • Properties: P(n) = P · P · … · P = P^n, and pij(n+m) = Σk pik(n) pkj(m) (the Chapman-Kolmogorov equation).

  14. State occupancy probability vector Let p be a row vector and pi the i-th element of the vector. If p is a state occupancy probability vector, then pi(t) is the probability that a DTMC is in state i at time step t: pi(t) = P[Xt = i]. Initial probability vector: pi(0) = P[X0 = i]. A single step forward: pi(1) = Σk=1..N pk(0) pki. This can be rewritten in terms of the transition probability matrix P: p(1) = p(0) P.

  15. Compute state occupancy vectors State occupancy probability vector at time step t: p(t). We have that P(t) = P · P · … · P = P^t, so the state occupancy vector at time t in terms of the transition probability matrix is p(t) = p(0) P^t. (A numeric sketch follows below.)
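
A numeric sketch of p(t) = p(0) P^t, reusing the assumed matrix from the slide-11 sketch:

```python
import numpy as np
from numpy.linalg import matrix_power

P = np.array([[0.35, 0.60, 0.05],    # same assumed values as the slide-11 sketch
              [0.30, 0.60, 0.10],
              [0.00, 0.00, 1.00]])
p0 = np.array([1.0, 0.0, 0.0])       # start in the idle state

for t in (1, 5, 50):
    pt = p0 @ matrix_power(P, t)     # p(t) = p(0) P^t
    print(f"p({t}) =", pt.round(4))  # mass drifts into the absorbing failed state
```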

  16. Classification of states A state j is said to be accessible from state i if there exists n ≥ 0 such that pij(n) > 0; we write i → j. In terms of the graph, j is accessible from state i if there is a path from node i to node j. A DTMC is irreducible if every state is accessible from every other state: for each i, j, it holds that i → j. A DTMC is reducible if it is not irreducible. (A reachability check is sketched below.)
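
A sketch of the graph view of irreducibility: every state reaches every other if and only if (I + A)^(N-1) has no zero entries, where A is the boolean adjacency matrix of the chain (the helper name `is_irreducible` is my own):

```python
import numpy as np

def is_irreducible(P: np.ndarray) -> bool:
    """True iff every state is accessible from every other state."""
    n = len(P)
    m = np.eye(n) + (P > 0)          # I + one-step adjacency
    return bool((np.linalg.matrix_power(m, n - 1) > 0).all())

P_abs = np.array([[0.9, 0.1], [0.0, 1.0]])   # second state absorbing: reducible
P_alt = np.array([[0.0, 1.0], [1.0, 0.0]])   # states alternate: irreducible
print(is_irreducible(P_abs), is_irreducible(P_alt))   # False True
```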

  17. Steady-state behaviour of a DTMC A DTMC can be specified in terms of an occupancy probability vector p and a transition probability matrix P. • Time-limiting behaviour of a DTMC (steady-state behaviour): the limit of p(t) = p(0) P^t as t grows. The steady-state behaviour of the Markov chain is given by the solution of the equation p = p P, with Σi pi = 1. The steady-state behaviour of a DTMC depends on the characteristics of its states. Sometimes the solution is simple.

  18. Aperiodic states A state i is periodic with period d > 1 if pii(t) > 0 only when t is a multiple of d. If d = 1, the state i is said to be aperiodic. For an irreducible Markov chain the solution of the equation p = p P is unique. If, in addition, all states are aperiodic, the steady-state behaviour of the Markov chain is given by the solution of the equation p = p P. (A solver is sketched below.)
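
A sketch of computing the steady-state vector by solving p = p P together with the normalization Σi pi = 1, replacing one equation of the otherwise singular system (the function name `stationary` is my own):

```python
import numpy as np

def stationary(P: np.ndarray) -> np.ndarray:
    """Solve p = p P with sum(p) = 1 for an irreducible DTMC."""
    n = len(P)
    a = (P - np.eye(n)).T    # p (P - I) = 0, transposed to column form
    a[-1, :] = 1.0           # replace the last equation with sum(p) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(a, b)

P = np.array([[0.5, 0.5], [0.2, 0.8]])
p = stationary(P)
print(p, p @ P)   # the two vectors coincide: p is the fixed point
```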

  19. Time-average steady-state solution A time-average steady-state solution exists for any irreducible DTMC, even when the limiting distribution does not exist. Example: S = {1, 2} with P = [[0, 1], [1, 0]]. Starting from p(0) = (1, 0): p(1) = (0, 1), p(2) = (1, 0), … The sequence p(n+1) = p(n) P oscillates forever, so the limit of p(n) does not exist; the time-average distribution, called p*, does exist.

  20. With the matrix P above: P^0 = [[1, 0], [0, 1]], P^1 = [[0, 1], [1, 0]], P^2 = [[1, 0], [0, 1]], P^3 = [[0, 1], [1, 0]], … For state i = 1: p11(0) > 0, p11(2) > 0, …, while p11(1) = 0, p11(3) = 0, … State i = 1 is periodic with period d = 2. (A numeric sketch follows below.)
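
A numeric sketch of this periodic chain: p(n) never converges, but its running time average does.

```python
import numpy as np

P = np.array([[0.0, 1.0], [1.0, 0.0]])   # period-2 chain from the slides
p = np.array([1.0, 0.0])                 # p(0)
avg = np.zeros(2)

for n in range(1, 1001):
    p = p @ P                 # p(n) keeps oscillating: (0,1), (1,0), ...
    avg += (p - avg) / n      # incremental running mean of p(1)..p(n)

print("p(1000) =", p)         # still (1, 0): no limiting distribution
print("p*      =", avg)       # ~ (0.5, 0.5): the time average exists
```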

  21. Let us consider the equation p = p P. In steady state, we have p(n+1) = p(n) P. By definition, p(n+1) = p(n) in steady state, hence p = p P.

  22. Discrete-time Markov chain • Definition • Transient solution • Classification reducible/irreducible • Steady-state solution

  23. Continuous-time Markov chain (CTMC) • Continuous-time Markov chains make it possible to model systems in which events may occur at any point in time. • Let T be an interval of real numbers (e.g., T = [0, 1]). A continuous-time Markov chain has the following property: for all t > 0 and 0 < t1 < t2 < … < tn < t, P[Xt = j | Xtn = in, …, Xt1 = i1] = P[Xt = j | Xtn = in]. A homogeneous continuous-time Markov chain has steady-state transition probabilities: P[Xt+s = j | Xs = i] = pij(t) for every s ≥ 0.

  24. A CTMC can be specified in terms of the occupancy probability vector p and a transition probability matrix: p(t) = p(0) P^t, but the entries pij(t) of P^t are difficult to compute directly. By the memoryless property, pij(t) is independent of how long the CTMC has previously been in state i. There is only one random variable that has this property: the exponential random variable.

  25. Exponential random variables • We say X is an exponential random variable with parameter λ when its probability density function is f(t) = λ e^(-λt) for t ≥ 0 (and 0 otherwise). The cumulative distribution function is F(t) = P[X ≤ t] = 1 - e^(-λt) for t ≥ 0. The exponential random variable is the only continuous random variable that is memoryless.

  26. Let X be an exponential random variable that represents the time at which an event occurs. We prove that X is memoryless: P[X > t + s | X > t] = P[X > t + s] / P[X > t] = e^(-λ(t+s)) / e^(-λt) = e^(-λs) = P[X > s]. (An empirical check is sketched below.)
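
An empirical check of the memoryless property with numpy (the rate λ = 2.0 and the times t, s are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, s = 2.0, 0.5, 0.3
x = rng.exponential(scale=1 / lam, size=1_000_000)

lhs = np.mean(x[x > t] > t + s)      # P[X > t+s | X > t]
rhs = np.mean(x > s)                 # P[X > s]
print(lhs, rhs, np.exp(-lam * s))    # all three agree (about 0.5488)
```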

  27. An exponential random variable has the memoryless property. This means that the rate at which events occur is constant over time. • The event rate of X is defined as the probability that the event occurs within the small interval [t, t+Δt], given that the event has not occurred by time t, per the interval size Δt: rate(t) = lim Δt→0 P[t < X ≤ t+Δt | X > t] / Δt. Look at X at time t, observe that the event has not yet occurred, and measure the probability per unit of time that the event occurs at time t.

  28. If X is an exponential random variable with parameter λ, then rate(t) = λ for every t. We say that X is exponential with rate λ.

  29. Properties • Minimum of two independent exponentials. Let A and B be independent exponential random variables with rates a and b, respectively, and define X = min{A, B}. X is exponential with rate a + b. • Competition of two independent exponentials. Let A and B be independent exponential random variables with rates a and b, respectively, and let A and B compete. One of them will win, after an exponentially distributed time with rate a + b. The probability that A wins is a / (a + b). (A simulation sketch follows below.)
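
A simulation sketch of both properties (the rates a = 1.0 and b = 3.0 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, n = 1.0, 3.0, 1_000_000
A = rng.exponential(1 / a, n)        # numpy parameterizes by scale = 1/rate
B = rng.exponential(1 / b, n)

print("E[min{A,B}] =", np.minimum(A, B).mean(), " expected:", 1 / (a + b))
print("P[A wins]   =", np.mean(A < B),          " expected:", a / (a + b))
```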

  30. CTMC Random process X with S = {1, 2, 3} and X0 = 1. X goes to state 2 after an exponentially distributed time A with rate a, and to state 3 after an exponentially distributed time B with rate b. Consider min{A, B}, which is exponential with rate a + b: P[min{A, B} ≤ t] = 1 - e^(-(a+b)t), so P[min{A, B} > t] = e^(-(a+b)t). X remains in state 1 for an exponentially distributed time with rate a + b. Now consider the competition of the two independent exponentials: the probability that X goes to state 2 is a / (a + b), and the probability that X goes to state 3 is b / (a + b).

  31. State-transition-rate matrix • The state-transition-rate matrix Q is defined as follows: for i ≠ j, qij is the rate of going from state i to state j, and qii = -Σj≠i qij, so each row sums to 0. A CTMC can be specified in terms of the occupancy probability vector p and the state-transition-rate matrix Q: p(t) = p(0) e^(Qt). (See the sketch below.)
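
A sketch of evaluating p(t) = p(0) e^(Qt) with scipy's matrix exponential, for an assumed two-state failure/repair chain:

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 0.5, 2.0             # assumed failure / repair rates
Q = np.array([[-lam,  lam],    # rows of a generator sum to 0
              [  mu,  -mu]])
p0 = np.array([1.0, 0.0])      # start in the working state

for t in (0.5, 2.0, 100.0):
    print(f"p({t}) =", p0 @ expm(Q * t))
# As t grows, p(t) approaches the steady state (mu, lam)/(lam + mu) = (0.8, 0.2).
```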

  32. The Markov model fits with the standard assumption that failure rates are constant, leading to exponentially distributed inter-arrival times of failures. • Similarly, we assume a constant repair rate.

  33. CTMC X = 1 system idle, X = 2 system busy, X = 3 system failed. Jobs arrive with rate λ and are completed with rate b. The idle computer fails with rate λi; the working computer fails with rate λw.

  34. The computer is repaired with rate μ. (A sketch of the resulting rate matrix follows below.)
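
A sketch of the generator matrix for this three-state CTMC. The numeric rates are assumptions, as is the choice that repair returns the computer to the idle state (the slides do not say):

```python
import numpy as np
from scipy.linalg import expm

lam, b = 2.0, 3.0            # job arrival / completion rates (assumed values)
lam_i, lam_w = 0.01, 0.05    # failure rates when idle / working (assumed)
mu = 1.0                     # repair rate (assumed); repair leads back to idle

# States: 0 = idle, 1 = working, 2 = failed
Q = np.array([
    [-(lam + lam_i), lam,          lam_i],
    [b,              -(b + lam_w), lam_w],
    [mu,             0.0,          -mu  ],
])
assert np.allclose(Q.sum(axis=1), 0.0)

p0 = np.array([1.0, 0.0, 0.0])
print("availability at t=10:", 1 - (p0 @ expm(Q * 10.0))[2])
```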

  35. Example A multiprocessor system with 2 processors and 3 shared memories. The system is operational if at least one processor and one memory are operational. λm failure rate for a memory; λp failure rate for a processor. A state (i, j): i is the number of operational memories, j is the number of operational processors. S = {(3,2), (3,1), (3,0), (2,2), (2,1), (2,0), (1,2), (1,1), (1,0), (0,2), (0,1)}

  36. Markov chain λm failure rate for a memory, λp failure rate for a processor. For example, (3, 2) → (2, 2) after a memory fault. (3,0), (2,0), (1,0), (0,2), (0,1) are absorbing states: the system has failed. (A reliability sketch follows below.)
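
A sketch of this reliability model without repair: from state (i, j) the memories fail with total rate i·λm and the processors with total rate j·λp, and states where the system is down are absorbing. The rate values are assumptions:

```python
import numpy as np
from scipy.linalg import expm

lam_m, lam_p = 0.01, 0.02    # assumed per-memory / per-processor failure rates

states = [(i, j) for i in range(3, -1, -1)
                 for j in range(2, -1, -1) if (i, j) != (0, 0)]
idx = {s: k for k, s in enumerate(states)}

def up(i, j):                # is the system operational in state (i, j)?
    return i >= 1 and j >= 1

Q = np.zeros((len(states), len(states)))
for (i, j), k in idx.items():
    if not up(i, j):
        continue                          # failed states are absorbing
    Q[k, idx[(i - 1, j)]] += i * lam_m    # a memory fails
    Q[k, idx[(i, j - 1)]] += j * lam_p    # a processor fails
    Q[k, k] = -Q[k].sum()

p0 = np.zeros(len(states)); p0[idx[(3, 2)]] = 1.0
p = p0 @ expm(Q * 10.0)
print("R(10) =", sum(p[idx[s]] for s in states if up(*s)))
```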

  37. Availability modeling • Assume that faulty components are replaced, and evaluate the probability that the system is operational at time t. • Constant repair rate μ (the expected number of repairs in a unit of time). • Repair strategy: only one processor or one memory at a time can be substituted. • The behaviour of components (with respect to being operational or failed) is not independent: it depends on whether or not other components are in a failure state. If the failure and repair times are exponential, we can use a Markov chain model.

  38. Example A Markov chain modelling the 2-processor, 3-memory system with repair. λm failure rate for a memory; λp failure rate for a processor; μm repair rate for a memory; μp repair rate for a processor.

  39. Example (cont.) • Repair strategy: only one component can be substituted at a time • the processor has the higher priority • hence we exclude the transitions with rate μm (memory repair) from the states where a processor has failed.

  40. Safety and performability modeling Similar approaches work for safety and performability. Safety measures • the probability that a fault is handled correctly, given that the fault has occurred • safety = the effectiveness of a fault-coverage strategy • Modeling coverage in a Markov chain means that every unfailed state has two failure transitions to two different states, one of which is covered, the other uncovered.

  41. Safety and performability modeling Performability measures • extend the Markov model to take performance into account • attach a reward rate to each state, making the model a Markov reward model • reward rate: some measure of the goodness of the state, possibly a performance measure. (A small sketch follows below.)
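
A minimal sketch of a Markov reward model: attach a reward rate ri to each state and compute the expected reward rate Σi ri · pi(t). The generator and the reward values are assumed:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.1,  0.1,  0.0],    # assumed 3-state CTMC generator
              [ 0.9, -1.0,  0.1],
              [ 0.0,  0.0,  0.0]])   # third state (failed) is absorbing
r = np.array([1.0, 0.5, 0.0])        # assumed reward rate per state

p0 = np.array([1.0, 0.0, 0.0])
for t in (1.0, 10.0, 100.0):
    pt = p0 @ expm(Q * t)
    print(f"expected reward rate at t={t}:", pt @ r)   # sum_i r_i * p_i(t)
```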
