
Markov Chain Based Evaluation of Conditional Long-run Average Performance Metrics


Presentation Transcript


  1. Markov Chain Based Evaluation of Conditional Long-run Average Performance Metrics B.D. Theelen www.ics.ele.tue.nl/~btheelen

  2. Overview
  • Introduction
  • Conditional Long-run Sample Averages
  • Markov Chain Reduction
    • Computation
    • Simulation
  • Algebra of Confidence Intervals
    • Accuracy Analysis of Complex Performance Metrics
  • Recurrency Condition Approximation

  3. Reflexive Performance Analysis
  [Diagram: understandable POOSL model + extensions for performance evaluation → (formal semantics) Markov decision process + reward structure → (external scheduler) discrete-time Markov chain + reward structure]
  • Example
    • Expected packet size
    • POOSL model with extra behaviour to evaluate the long-run average packet size
    • Variable PacketSize holding the size of the last packet received
    • Assign a new value to PacketSize each time a new packet is received

  4. Classic Analysis Techniques
  • Discrete-time Markov chain
    • Stochastic process {Xi | i ≥ 1} defined by the triple (S, I, P): the Xi assume values in the countable state space S, I is the initial distribution and P the transition matrix
  • Reward function
    • Function r that assigns a reward value to each state in S
  • Elementary performance metric: long-run sample average lim n→∞ (1/n) Σi=1..n r(Xi)
    • Analytic computation (sketched below)
    • Simulation-based estimation
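The following minimal Python sketch (not part of the original slides; the chain, rewards and numbers are illustrative) shows the analytic route: solve πP = π with Σ π(s) = 1 and evaluate the long-run sample average as Σ π(s) r(s).

```python
# Minimal sketch (illustrative chain and rewards, not from the slides):
# analytic computation of a long-run sample average for an ergodic
# discrete-time Markov chain (S, I, P) with reward function r.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],    # transition matrix over S = {0, 1, 2}
              [0.2, 0.3, 0.5],
              [0.4, 0.0, 0.6]])
r = np.array([10.0, 0.0, 4.0])    # reward value assigned to each state

# The equilibrium distribution pi solves pi P = pi with sum(pi) = 1;
# stack the normalisation constraint onto the balance equations.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Ergodic theorem: lim (1/n) sum r(X_i) = sum_s pi(s) r(s)
print(pi, pi @ r)
```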

  5. Conditional Long-run Sample Averages
  • Classic analysis techniques not suitable for the PacketSize example
    • Condition of receiving a new packet cannot be taken into account
  • Conditional reward function
    • Function c that assigns 0 or 1 to each state in S
  • Elementary metric: conditional long-run sample average lim n→∞ (Σi=1..n c(Xi) r(Xi)) / (Σi=1..n c(Xi))
    • Separately apply the ergodic theorem to numerator and denominator (see the sketch below)
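Continuing the sketch above (reusing np, pi and r), a conditional reward function c turns the metric into a ratio of two ordinary long-run averages; c here is a hypothetical marker for "new packet received".

```python
# Continuation of the sketch above (reuses np, pi and r). The
# conditional reward function c is a hypothetical marker for the
# states in which a new packet is received.
c = np.array([1.0, 0.0, 1.0])

# Ergodic theorem applied separately to numerator and denominator:
# lim (sum c(X_i) r(X_i)) / (sum c(X_i)) = (pi @ (c * r)) / (pi @ c)
print((pi @ (c * r)) / (pi @ c))
```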

  6. Markov Chain Reduction
  [Figure: original Markov chain versus reduced Markov chain]
  • Can conditional long-run sample averages be evaluated without considering irrelevant states?
    • Consider only the reduced state space Sc, where s ∈ Sc if and only if c(s) = 1
    • Define Xic as the ith relevant state in a trace of the original Markov chain
  • Theoretical results
    • {Xic | i ≥ 1} is a Markov chain (Sc, Ic, Pc) in case the original Markov chain is ergodic and has a relevant recurrent state
    • The reduced Markov chain is ergodic
    • Performance results are preserved
  • Application is called the reduction technique (sketched below)
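A simulation-side illustration of the reduction, again reusing P and c from the sketches above: keep only the relevant states of a simulated trace and estimate the transition matrix Pc of the implicitly defined reduced chain from that sub-trace. The chain, seed and trace length are illustrative.

```python
# Continuation (reuses np, P and c): simulate the original chain,
# keep only the visits to relevant states, and estimate the
# transition matrix Pc of the implicitly defined reduced chain
# {Xic | i >= 1} from that sub-trace.
rng = np.random.default_rng(0)

def simulate(P, n, start=0):
    states = [start]
    for _ in range(n - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

trace = simulate(P, 200_000)
relevant = [s for s in trace if c[s] == 1]    # sub-trace over Sc = {0, 2}

idx = {0: 0, 2: 1}                            # re-index Sc
counts = np.zeros((2, 2))
for u, v in zip(relevant, relevant[1:]):
    counts[idx[u], idx[v]] += 1
print(counts / counts.sum(axis=1, keepdims=True))   # empirical Pc
```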

  7. Reduction and Computation
  • Need the equilibrium distribution πc of the reduced Markov chain
  • Theoretical result
    • πc can be computed from the equilibrium distribution π of the original chain based on πc(s) = π(s) / Σt∈Sc π(t) for s ∈ Sc (see the sketch below)
  • Example with Sc = {B, D, E, G}
  • Approach involves expensive computation of π
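A two-line check of the stated result, reusing pi from the first sketch: restrict π to Sc and renormalise.

```python
# Continuation (reuses pi): the equilibrium distribution of the
# reduced chain is pi restricted to Sc and renormalised,
#   pic(s) = pi(s) / sum_{t in Sc} pi(t)   for s in Sc.
Sc = [0, 2]
print(pi[Sc] / pi[Sc].sum())    # matches the equilibrium of the empirical Pc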

  8. Construction of Reduced Markov Chain
  • Derive transition matrix Pc and initial distribution Ic from P and I
  • Define, for s ∈ S and t ∈ Sc, M(s, t) as the probability of visiting t when starting from s without intermediate visits to relevant states
  • Note that for s ∈ S and t ∈ Sc, M(s, t) = P(s, t) + Σu∉Sc P(s, u) M(u, t)
  • Theoretical results
    • This set of linear equations has a unique bounded solution
    • Pc(s, t) = M(s, t) for s, t ∈ Sc and Ic(t) = I(t) + Σu∉Sc I(u) M(u, t)
  • Approach involves expensive computation of M (see the sketch below)
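In block form with R = Sc and D = S \ Sc, the equations for M reduce to M_D = (I − P_DD)⁻¹ P_DR. A minimal sketch, reusing np and P from the sketches above (the split of S into relevant and irrelevant states is illustrative):

```python
# Continuation (reuses np and P). With R = Sc and D = S \ Sc,
#   M_D = P_DR + P_DD M_D   has the unique bounded solution
#   M_D = (I - P_DD)^{-1} P_DR,   and then
#   Pc  = P_RR + P_RD M_D.
R, D = [0, 2], [1]                            # illustrative split of S
P_RR, P_RD = P[np.ix_(R, R)], P[np.ix_(R, D)]
P_DR, P_DD = P[np.ix_(D, R)], P[np.ix_(D, D)]

M_D = np.linalg.solve(np.eye(len(D)) - P_DD, P_DR)
Pc = P_RR + P_RD @ M_D
print(Pc)    # rows sum to 1; agrees with the simulated estimate above
```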

  9. Reduction and Simulation
  • Reduction does not help for analytic computation
    • Equal computational complexity as direct computation
  • Example of PacketSize
    • Reward function PacketSize
    • Conditional reward function PacketReceived implicitly defined by the assignment to the PacketSize variable
  • Reduction helps for simulation
    • Reduced Markov chain is based on traces of the original Markov chain
    • Apply the central limit theorem to the implicitly defined reduced Markov chain
    • Only need to update the estimation when PacketSize changes instead of in all states (see the sketch below)
    • But the central limit theorem involves slightly more stringent conditions
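A minimal sketch of the simulation-side benefit, reusing simulate, P, c and r from the sketches above: the running estimate is only touched in states where the conditional reward is 1, mirroring how PacketSize is only assigned when a packet is received.

```python
# Continuation (reuses simulate, P, c and r): the running estimate is
# only updated in states where the conditional reward is 1, i.e. when
# a new packet arrives, instead of in every visited state.
total, count = 0.0, 0
for s in simulate(P, 200_000):
    if c[s] == 1:            # conditional reward: "new packet received"
        total += r[s]        # r(s) plays the role of PacketSize
        count += 1
print(total / count)         # estimate of the conditional long-run average
```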

  10. Central Limit Theorem: Regenerative Cycles
  [Figure: a trace split at visits to recurrent state Sr into cycles with cumulative rewards Y1, Y2, Y3, Y4 and cycle lengths L1, L2, L3, L4]
  • Accuracy of estimation?
    • Subsequently obtained rewards are not iid
  • Infinite traces of an ergodic chain visit a recurrent state Sr infinitely many times
  • Define Yi as the sum of rewards obtained during the ith cycle through Sr and Li as the length of the ith cycle through Sr
    • The differences Yi − vLi, with v the long-run average, are iid with mean 0 and variance τ²
  • The central limit theorem enables generation of a confidence interval for analysing the accuracy of conditional long-run sample averages (see the sketch below)
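A minimal sketch of the regenerative method for the plain long-run average (for the conditional case the same construction is applied to the reduced chain); it reuses np, simulate, P and r from the sketches above, and Sr and the 95% quantile 1.96 are illustrative choices.

```python
# Continuation (reuses np, simulate, P and r): regenerative cycles.
# Split the trace at each visit to the recurrent state Sr, collect
# cycle reward sums Yi and cycle lengths Li, and apply the central
# limit theorem to Zi = Yi - v_hat * Li to obtain a confidence
# interval for the long-run average.
Sr = 0                                    # illustrative recurrent state
trace = simulate(P, 500_000, start=Sr)

Y, L, y, steps = [], [], 0.0, 0
for s in trace[1:]:
    y += r[s]
    steps += 1
    if s == Sr:                           # cycle through Sr completed
        Y.append(y)
        L.append(steps)
        y, steps = 0.0, 0
Y, L = np.array(Y), np.array(L)

n = len(Y)
v_hat = Y.sum() / L.sum()                 # point estimate
Z = Y - v_hat * L                         # approximately iid, mean 0
tau = Z.std(ddof=1)                       # estimate of tau
half = 1.96 * tau / (L.mean() * np.sqrt(n))   # 95% normal quantile
print(v_hat, (v_hat - half, v_hat + half))
```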

  11. Complex Performance Metrics
  [Figure: rewards over time, annotated with the conditional long-run sample variance (jitter), the conditional long-run time average (buffer occupancy) and the conditional long-run time variance (burstiness)]
  • Other types of long-run average performance metrics
    • Combinations of conditional long-run sample averages
  • No direct way for deriving confidence intervals for these metrics
  • Define algebraic operations on confidence intervals
    • Allows deriving confidence intervals for these metrics

  12. Algebra of Confidence Intervals
  • Consider a γ confidence interval for a metric x
  • Theoretical results
    • Applying negation, square or reciprocal to a γ confidence interval results in a γ confidence interval again
    • Addition, subtraction, multiplication or division of a γ confidence interval with a δ confidence interval yields a γ + δ − 1 confidence interval
  • Examples
    • Negation of the γ confidence interval for x yields the γ confidence interval for −x
    • Addition of the γ confidence interval for x and the δ confidence interval for y yields the γ + δ − 1 confidence interval for x + y
  • The theoretical results on the reduction technique and complex performance metrics form the basis of library classes for accuracy analysis (see the sketch below)
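A minimal sketch of such an algebra (names and numbers are illustrative, and only negation, addition and subtraction are shown): a confidence interval carries its level, unary operations preserve the level, and binary operations combine a γ- and a δ-level interval into a γ + δ − 1 one.

```python
# Minimal sketch of a confidence-interval algebra (illustrative names
# and numbers; only negation, addition and subtraction are shown).
from dataclasses import dataclass

@dataclass
class CI:
    lo: float
    hi: float
    level: float                      # confidence level gamma

    def __neg__(self):                # negation preserves the level
        return CI(-self.hi, -self.lo, self.level)

    def __add__(self, other):         # gamma + delta - 1 level
        return CI(self.lo + other.lo, self.hi + other.hi,
                  self.level + other.level - 1.0)

    def __sub__(self, other):
        return self + (-other)

# Jitter-style metric as a combination of two estimated averages:
mean_sq = CI(26.1, 27.9, 0.99)        # interval for E[X^2] (illustrative)
sq_mean = CI(24.5, 25.5, 0.99)        # interval for (E[X])^2 (illustrative)
print(mean_sq - sq_mean)              # 0.98-level interval for the variance
```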

  13. Recurrency Condition Approximation
  • Use of the central limit theorem requires detection of a recurrent state
    • Infeasible: too many states to compare and comparison is expensive
  • Current approach: user must define a recurrency condition
    • Use of a ‘local’ recurrent state
    • Use of ‘sufficiently large’ fixed-size batches of rewards (see the sketch below)
  • Theory of Markov chain lumpability might help
    • Cluster states of the Markov chain with (ε-near) equal reward values
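A minimal sketch of the fixed-size-batches approximation, reusing np, trace, r and c from the sketches above: cut the relevant rewards into equally sized batches and treat the batch means as approximately iid when forming the interval. The batch size is a user-chosen, illustrative value.

```python
# Continuation (reuses np, trace, r and c): fixed-size batches as an
# approximation of the recurrency condition. The batch size is a
# user-chosen, illustrative value.
rewards = np.array([r[s] for s in trace if c[s] == 1])
batch = 1_000
m = len(rewards) // batch
means = rewards[:m * batch].reshape(m, batch).mean(axis=1)

est = means.mean()                    # treat batch means as ~iid
half = 1.96 * means.std(ddof=1) / np.sqrt(m)
print(est, (est - half, est + half))
```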

  14. Would Lumpability Help?
  • Take a certain reward value as the recurrency condition
    • Exactly one cluster state for each reward value
  • Advantages
    • Lumping states may reduce the state space considerably
    • Only necessary to check a single variable
    • Can easily be checked automatically during simulation
    • Approach might work for all long-run average performance metrics
  • Challenges
    • When is lumping allowed?
    • Will performance results be preserved?
    • Orthogonality of using the reduction technique and lumpability?
    • How to apply the theoretical results in simulation practice?
