
Output Analysis: Variance Estimation



Presentation Transcript


  1. Output Analysis: Variance Estimation Jong-hyun Ryu

  2. Contents • Motivation (Quality Control Chart) • Collecting output data • Type of simulations • Steady-State Simulation • Stochastic Stationary Process • Variance Estimation Methods • Replication Method • Batch mean methods • Non-overlapping vs. Overlapping • Standardized Time Series • Additional Methods

  3. Motivation • Quality control chart • Simple control chart (Shewhart control chart) • Design of a Shewhart control chart • Control limits: UCL & LCL • Consequences of incorrect control limits: • If the process variance is overestimated → detection delay • If it is underestimated → high false-alarm rate • Estimating the process variance (standard deviation) is therefore critical.
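
A minimal sketch of the 3-sigma control-limit computation this slide alludes to; the function name and the numbers in the example are illustrative, not from the presentation.

```python
import numpy as np

def xbar_control_limits(process_mean, process_sd, subgroup_size, z=3.0):
    """3-sigma control limits for an X-bar chart.

    If process_sd is over-estimated the limits are too wide (detection delay);
    if it is under-estimated the limits are too tight (frequent false alarms).
    """
    half_width = z * process_sd / np.sqrt(subgroup_size)
    return process_mean - half_width, process_mean + half_width  # (LCL, UCL)

# Example: in-control mean 10, standard deviation 2, subgroups of 5 observations.
lcl, ucl = xbar_control_limits(10.0, 2.0, 5)
print(f"LCL = {lcl:.3f}, UCL = {ucl:.3f}")
```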

  4. Types of simulations • Terminating vs. non-terminating simulations • Terminating simulations: • Run for some duration of time TE, where E is a specified event that stops the simulation. • Start at time 0 under well-specified initial conditions. • End at the stopping time TE. • Bank example: opens at 8:30 am (time 0) with no customers present and 8 of the 11 tellers working (initial conditions), and closes at 4:30 pm (TE). • The simulation analyst chooses to consider it a terminating system because the object of interest is one day's operation.

  5. Types of simulations • Non-terminating simulations: • Run continuously, or at least over a very long period of time. • Examples: assembly lines that shut down infrequently, telephone systems, hospital emergency rooms. • Initial conditions are defined by the analyst. • Run for some analyst-specified period of time TE. • The goal is to study the steady-state (long-run) properties of the system, i.e., properties that are not influenced by the initial conditions of the model. • Whether a simulation is treated as terminating or non-terminating depends on: • the objectives of the simulation study, and • the nature of the system.

  6. Output Analysis for Steady-State Simulation • Consider a single run of a simulation model used to estimate a steady-state or long-run characteristic of the system. • The single run produces observations Y_1, Y_2, … (generally samples of an autocorrelated time series). • Performance measures: • a point estimator and a confidence interval, • which should be independent of the initial conditions.

  7. Output Analysis for Steady-State Simulation • The sample size is a design choice, with several considerations in mind: • Any bias in the point estimator that is due to artificial or arbitrary initial conditions (bias can be severe if run length is too short). • Desired precision of the point estimator. • Budget constraints on computer resources.

  8. Initialization Bias • Methods to reduce the point-estimator bias caused by artificial and unrealistic initial conditions: • Intelligent initialization. • Dividing the simulation into an initialization phase and a data-collection phase. • Intelligent initialization: • Initialize the simulation in a state that is more representative of long-run conditions. • If the system exists, collect data on it and use these data to specify more nearly typical initial conditions. • If the system can be simplified enough to make it mathematically solvable (e.g., queueing models), solve the simplified model for the long-run expected or most likely conditions and use those to initialize the simulation.

  9. Initialization Bias • There is no widely accepted, objective, and proven technique for deciding how much data to delete to reduce initialization bias to a negligible level. • Plots can be misleading, but they are still recommended: • Ensemble averages reveal a smoother and more precise trend as the number of replications increases. • Ensemble averages can be smoothed further by plotting a moving average. • The cumulative average becomes less variable as more data are averaged. • The more autocorrelation present, the longer it takes for the cumulative average to approach steady state.
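
The bullets above describe judging a deletion point from plots of ensemble and moving averages. Below is a small illustrative sketch (in the spirit of Welch's procedure) of averaging across replications and smoothing with a moving average; the synthetic data and window length are assumptions for demonstration only.

```python
import numpy as np

def replication_averages(output, window=5):
    """Average across replications, then smooth with a moving average.

    output : array of shape (R, n) -- R replications, n observations each.
    Returns (column_averages, smoothed); a deletion point d is typically chosen
    where the smoothed curve appears to level off.
    """
    col_avg = output.mean(axis=0)                 # ensemble average per time index
    kernel = np.ones(window) / window
    smoothed = np.convolve(col_avg, kernel, mode="valid")
    return col_avg, smoothed

# Illustrative data: a transient that starts low (empty system) and drifts upward.
rng = np.random.default_rng(0)
R, n = 10, 500
raw = 5.0 * (1 - np.exp(-np.arange(n) / 100.0)) + rng.normal(0, 1, size=(R, n))
avg, smooth = replication_averages(raw, window=20)
print("last smoothed values:", np.round(smooth[-3:], 2))
```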

  10. Introduction to variance estimation • Simulation output analysis produces a point estimator and a confidence interval. • Variance estimation → confidence interval. • The classical analysis assumes independent and identically distributed (IID) observations. • Suppose X_1, …, X_m are IID with mean μ and variance σ²; then Var(X̄_m) = σ²/m, which is estimated by S²/m, and an approximate 100(1 − α)% confidence interval for μ is X̄_m ± t_{m−1, 1−α/2} S/√m.
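
A short sketch of the classical IID analysis just described: sample mean, S²/m as the estimate of Var(X̄_m), and the t confidence interval. The exponential data below are only a stand-in for real IID output.

```python
import numpy as np
from scipy import stats

def iid_confidence_interval(x, alpha=0.05):
    """Point estimate and t confidence interval for the mean of IID data.

    Var(X_bar) is estimated by S^2 / m, which is appropriate only when the
    observations really are independent.
    """
    x = np.asarray(x, dtype=float)
    m = len(x)
    xbar = x.mean()
    half = stats.t.ppf(1 - alpha / 2, df=m - 1) * x.std(ddof=1) / np.sqrt(m)
    return xbar, (xbar - half, xbar + half)

rng = np.random.default_rng(1)
est, ci = iid_confidence_interval(rng.exponential(2.0, size=200))
print(est, ci)
```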

  11. Stochastic Stationary Process • The stochastic process X is stationary if, for all t_1, …, t_k and t ∈ T, the joint distribution of (X_{t_1+t}, …, X_{t_k+t}) is the same as that of (X_{t_1}, …, X_{t_k}). • [Figures: a stationary time series with positive autocorrelation; a stationary time series with negative autocorrelation; a nonstationary time series with an upward trend.]

  12. Stochastic Stationary Process (2) • Consider a discrete-time stationary process X = {X_i : i ≥ 1} with mean μ and variance σ_X² = Cov(X_i, X_i). • Variance of the sample mean: Var(X̄_n) = (σ_X²/n) [1 + 2 Σ_{j=1}^{n−1} (1 − j/n) ρ_j], where ρ_j = Corr(X_i, X_{i+j}). • It is easy to see that, when the autocovariances are summable, n·Var(X̄_n) → σ² ≡ Σ_{j=−∞}^{∞} Cov(X_i, X_{i+j}) (the variance parameter). • For IID observations, ρ_j = 0 for j ≥ 1, so Var(X̄_n) = σ_X²/n and σ² = σ_X².

  13. Stochastic Stationary Process (3) • The expected value of the usual variance estimator S²/n, with S² = (1/(n−1)) Σ_{i=1}^{n} (X_i − X̄_n)², is E[S²/n] = (σ_X² − Var(X̄_n)) / (n − 1). • If the X_i are independent, then S²/n is an unbiased estimator of Var(X̄_n). • If the autocorrelation is positive, then S²/n is biased low as an estimator of Var(X̄_n). • If the autocorrelation is negative, then S²/n is biased high as an estimator of Var(X̄_n).
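
To make the bias statement concrete, here is a small Monte Carlo sketch using an AR(1) process with positive autocorrelation (an assumed example, not from the slides): the average of S²/n falls well below the empirical Var(X̄_n).

```python
import numpy as np

def ar1_path(n, phi, rng):
    """Stationary AR(1): X_i = phi * X_{i-1} + e_i, e_i ~ N(0, 1)."""
    x = np.empty(n)
    x[0] = rng.normal(0, 1 / np.sqrt(1 - phi**2))   # start in stationarity
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

rng = np.random.default_rng(2)
n, phi, reps = 200, 0.8, 2000
means = np.empty(reps)
naive = np.empty(reps)
for r in range(reps):
    x = ar1_path(n, phi, rng)
    means[r] = x.mean()
    naive[r] = x.var(ddof=1) / n          # S^2/n, the "IID" estimator
print("empirical Var(X_bar):", means.var(ddof=1))
print("average of S^2/n    :", naive.mean())   # noticeably smaller: biased low
```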

  14. Replication Method • Used to estimate point-estimator variability and to construct a confidence interval. • Approach: make R replications, initializing and deleting from each one in the same way. • It is important to do a thorough job of investigating the initial-condition bias: • Bias is not affected by the number of replications; it is reduced only by deleting more data or extending the length of each run (i.e., increasing TE). • Each basic raw output datum in {X_rj, r = 1, ..., R; j = 1, …, n} may be: • an individual observation from within replication r, • a batch mean from within replication r of some number of discrete-time observations, or • a batch mean of a continuous-time process over time interval j.

  15. Replication Method • Length of each replication (n) beyond the deletion point (d): choose (n − d) > 10d. • Number of replications (R): as many as time permits, up to about 25. • For a fixed total sample size (n), as fewer data are deleted (d decreases): • the confidence interval shifts (greater bias), • but its variance decreases. • There is thus a trade-off between bias and variance.
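
A compact sketch of the replication/deletion computation described above: delete d observations from each of R replications, form one truncated mean per replication, and build a t interval from those means. The synthetic output array is a placeholder for real model output.

```python
import numpy as np
from scipy import stats

def replication_ci(output, d, alpha=0.05):
    """Replication/deletion estimate of a steady-state mean.

    output : array (R, n) of raw observations, one row per replication.
    d      : number of warm-up observations deleted from every replication.
    """
    rep_means = output[:, d:].mean(axis=1)        # one truncated mean per replication
    R = len(rep_means)
    grand = rep_means.mean()
    half = stats.t.ppf(1 - alpha / 2, df=R - 1) * rep_means.std(ddof=1) / np.sqrt(R)
    return grand, (grand - half, grand + half)

rng = np.random.default_rng(3)
sim_output = 5 + rng.normal(0, 1, size=(25, 1100))   # placeholder for real output
print(replication_ci(sim_output, d=100))
```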

  16. Batch means method • When using a single, long replication: • Problem: the data are dependent, so the usual variance estimator S²/n is biased, • and the replication method (roughly 25 replications, each with its own warm-up period to delete) is expensive. • One solution: the batch means method.

  17. Batch means method (non-overlapping batch means) • Non-overlapping batch means (NBM): divide the n = k·m (warmed-up) observations into k contiguous, non-overlapping batches of m observations each. • The first m observations form batch 1 with batch mean Y_{1,m}; ...; the last m observations form batch k with batch mean Y_{k,m}, where Y_{j,m} = (1/m) Σ_{i=(j−1)m+1}^{jm} X_i.

  18. Non-overlapping batch means (NBM) • Suppose the batch means become (approximately) uncorrelated as m → ∞. • NBM estimator for σ²: V̂_NBM = (m/(k−1)) Σ_{j=1}^{k} (Y_{j,m} − X̄_n)². • Confidence interval: X̄_n ± t_{k−1, 1−α/2} √(V̂_NBM / n).
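
The NBM recipe above, as a short sketch: batch the warmed-up series, compute V̂_NBM from the batch means, and form the t interval. The IID normal series is only a placeholder, so the estimate should land near σ² = 1.

```python
import numpy as np
from scipy import stats

def nbm_ci(x, k, alpha=0.05):
    """Non-overlapping batch means: estimate of sigma^2 = n*Var(X_bar) and a CI.

    x : one long (warmed-up) output series; k : number of batches.
    """
    x = np.asarray(x, dtype=float)
    n = (len(x) // k) * k                  # drop any leftover observations
    m = n // k                             # batch size
    y = x[:n].reshape(k, m).mean(axis=1)   # batch means Y_{1,m}, ..., Y_{k,m}
    xbar = x[:n].mean()
    v_nbm = m * np.sum((y - xbar) ** 2) / (k - 1)      # estimates sigma^2
    half = stats.t.ppf(1 - alpha / 2, df=k - 1) * np.sqrt(v_nbm / n)
    return xbar, v_nbm, (xbar - half, xbar + half)

rng = np.random.default_rng(4)
series = 5 + rng.normal(0, 1, size=20000)   # placeholder for real simulation output
print(nbm_ci(series, k=20))
```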

  19. Overlapping Batch Means (OBM) • Form all n − m + 1 overlapping batches of size m, with batch means Y_{j,m} = (1/m) Σ_{i=j}^{j+m−1} X_i, j = 1, …, n − m + 1 (so Y_{1,m} and Y_{2,m} share m − 1 observations). • OBM estimator for σ²: V̂_OBM = nm / [(n − m + 1)(n − m)] · Σ_{j=1}^{n−m+1} (Y_{j,m} − X̄_n)².
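
A sketch of the OBM computation, using the Meketon-Schmeiser form of the estimator written above; cumulative sums give all overlapping batch means cheaply. Again the test series is an IID placeholder.

```python
import numpy as np

def obm_estimator(x, m):
    """Overlapping batch means estimator of sigma^2 = n*Var(X_bar)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    csum = np.concatenate(([0.0], np.cumsum(x)))
    y = (csum[m:] - csum[:-m]) / m          # all overlapping batch means Y_{j,m}
    b = n - m + 1                           # number of overlapping batches
    return n * m * np.sum((y - xbar) ** 2) / (b * (n - m))

rng = np.random.default_rng(5)
series = 5 + rng.normal(0, 1, size=20000)
print(obm_estimator(series, m=1000))        # near sigma^2 = 1 for this IID series
```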

  20. NBM vs. OBM • Under mild conditions, E[V̂_NBM] ≈ E[V̂_OBM] ≈ σ² + γ/m for large m, with the same leading constant γ (determined by the autocovariances). • Thus, both have similar bias. • Variance of the estimators (for large k = n/m): Var(V̂_NBM) ≈ 2σ⁴ m/n, while Var(V̂_OBM) ≈ (4/3) σ⁴ m/n ≈ (2/3) Var(V̂_NBM). • Thus, the OBM method gives better (smaller) MSE.
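
A rough empirical check of the comparison above on an assumed AR(1) example: both estimators center near the true variance parameter, while the OBM estimator shows roughly two-thirds the variance of the NBM estimator (the ratio is asymptotic, so the match is only approximate).

```python
import numpy as np

rng = np.random.default_rng(6)

def ar1(n, phi):
    """Stationary AR(1) path with unit innovation variance."""
    x = np.empty(n)
    x[0] = rng.normal(0, 1 / np.sqrt(1 - phi**2))
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

def nbm(x, m):
    k = len(x) // m
    y = x[: k * m].reshape(k, m).mean(axis=1)
    return m * np.sum((y - x[: k * m].mean()) ** 2) / (k - 1)

def obm(x, m):
    n = len(x)
    c = np.concatenate(([0.0], np.cumsum(x)))
    y = (c[m:] - c[:-m]) / m
    return n * m * np.sum((y - x.mean()) ** 2) / ((n - m + 1) * (n - m))

n, m, phi, reps = 4000, 100, 0.7, 300
sigma2 = (1 + phi) / ((1 - phi) * (1 - phi**2))   # true variance parameter for AR(1)
v_n = np.array([nbm(ar1(n, phi), m) for _ in range(reps)])
v_o = np.array([obm(ar1(n, phi), m) for _ in range(reps)])
print("true sigma^2       :", round(sigma2, 3))
print("NBM mean, variance :", round(v_n.mean(), 3), round(v_n.var(), 3))
print("OBM mean, variance :", round(v_o.mean(), 3), round(v_o.var(), 3))  # ~2/3 of NBM variance
```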

  21. Standardized Time Series • Define the 'centered' partial sums of the X_i as S_k = Σ_{i=1}^{k} (X_i − μ), k = 1, …, n. • Central Limit Theorem: S_n/(σ√n) = √n (X̄_n − μ)/σ ⇒ N(0, 1) as n → ∞, where σ² is the variance parameter. • Define the continuous-time process Z_n(t) = S_⌊nt⌋ / (σ√n), t ∈ [0, 1]. • Question: how does Z_n(t) behave as n increases?

  22.–25. [Figures: simulated sample paths of Z_n(t) for n = 100, 1,000, 10,000, and 1,000,000; as n grows, the paths look more and more like Brownian motion.]
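
Paths like those in the figures can be generated as in this small sketch: build the centered, scaled partial-sum process Z_n(t) from IID data with known μ and σ (an exponential with mean 2 here, an assumed example).

```python
import numpy as np

def z_process(x, mu, sigma):
    """Scaled centered partial-sum process Z_n(k/n), k = 1..n.

    Z_n(t) = (S_floor(nt)) / (sigma*sqrt(n)) with S_k = sum_{i<=k}(X_i - mu);
    by the FCLT the path looks more and more like Brownian motion as n grows.
    """
    n = len(x)
    partial = np.cumsum(x - mu)
    return np.arange(1, n + 1) / n, partial / (sigma * np.sqrt(n))

rng = np.random.default_rng(7)
for n in (100, 1_000, 10_000):
    t, z = z_process(rng.exponential(2.0, size=n), mu=2.0, sigma=2.0)
    print(f"n={n:>6}: Z_n(1) = {z[-1]: .3f}")   # roughly N(0, 1) for every n
```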

  26. Functional Central Limit Theorem (FCLT) • FCLT: Z_n(·) ⇒ B(·) as n → ∞, where B(t) is the standard Brownian motion (drift coefficient 0, diffusion coefficient 1). • A Brownian motion with drift coefficient μ and diffusion coefficient σ² is a real-valued stochastic process with stationary and independent increments and continuous sample paths, where B(t) − B(s) ~ N(μ(t − s), σ²(t − s)) for 0 ≤ s < t. • Thus, for large n, the sample paths of Z_n behave like those of a standard Brownian motion.

  27. Functional Central Limit Theorem (FCLT) • While the CLT only says that, for any fixed t, Z_n(t) converges in distribution to N(0, t), the FCLT also shows that the whole process Z_n(·) converges to a stochastic process with continuous sample paths and independent increments.

  28. The Weighted Area Estimator • Suppose we can define a weight function f that is • continuous on [0, 1], and • normalized so that Var(∫_0^1 f(t) B_0(t) dt) = ∫_0^1 ∫_0^1 f(s) f(t) (min(s, t) − st) ds dt = 1, where B_0(t) = B(t) − t·B(1) is the Brownian bridge associated with the limiting Brownian motion. • Then ∫_0^1 f(t) σ B_0(t) dt ~ N(0, σ²). • Define the weighted area estimator A(f; n) = [ (1/n) Σ_{k=1}^{n} f(k/n) · k(X̄_n − X̄_k)/√n ]², where k(X̄_n − X̄_k)/√n is the sample analogue of σ B_0(k/n).

  29. The Weighted Area Estimator • Since ∫_0^1 f(t) σ B_0(t) dt ~ N(0, σ²) and A(f; n) converges in distribution to the square of that integral, E[A(f; n)] → σ². • Hence A(f; n) is a variance estimator for the variance parameter σ².
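
A sketch of the weighted area estimator under the assumptions spelled out above: the constant weight f(t) = √12 (a classic choice satisfying the normalization) and the computable quantity k(X̄_n − X̄_k)/√n standing in for σB_0(k/n). Each draw of A(f; n) is roughly σ²·χ²_1, so the example averages many draws to show that its expectation is near σ².

```python
import numpy as np

def area_estimator(x, f=lambda t: np.sqrt(12.0)):
    """Weighted-area STS estimator of sigma^2 = n*Var(X_bar).

    Uses sigma*T_n(k/n) = k*(X_bar_n - X_bar_k)/sqrt(n), which is computable
    from data (the unknown sigma cancels). The constant weight f = sqrt(12)
    satisfies the normalization Var(int_0^1 f(t) B_0(t) dt) = 1.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.arange(1, n + 1)
    running_means = np.cumsum(x) / k                      # X_bar_k
    sts = k * (x.mean() - running_means) / np.sqrt(n)     # sigma * T_n(k/n)
    return (np.mean(f(k / n) * sts)) ** 2

rng = np.random.default_rng(8)
vals = [area_estimator(5 + rng.normal(0, 2, size=20000)) for _ in range(200)]
print(np.mean(vals))   # close to sigma^2 = 4 (each draw is ~ sigma^2 * chi-square_1)
```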

  30. Additional Methods • Regenerative method (Fishman 1973; Shedler 1993) • Spectral theory (Heidelberger and Welch 1981, 1983; Damerdji 1991) • Quantile estimation (Heidelberger and Lewis 1984; Seila 1982) • Estimation of functions of means (Munoz and Glynn 1997) • Estimation of multivariate means (Anderson 1984; Charnes 1989, 1995; Seila 1984)

  31. References • Alexopoulos, C., D. Goldsman, and R. F. Serfozo. 2005. Stationary processes: statistical estimation. In The Encyclopedia of Statistical Sciences, ed. N. L. Johnson and C. B. Read, 2nd edition. New York: John Wiley & Sons. • Kim, S.-H., C. Alexopoulos, K.-L. Tsui, and J. R. Wilson. 2006. A distribution-free tabular CUSUM chart for autocorrelated data. IIE Transactions, forthcoming. • Alexopoulos, C., N. T. Argon, D. Goldsman, and G. Tokol. 2004. Overlapping variance estimators for simulations. Technical Report, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia. • Law, A. M., and W. D. Kelton. 2000. Simulation Modeling and Analysis. 3rd ed. New York: McGraw-Hill.
