
Simulation Input and Output Data Analysis



  1. Simulation Input and Output Data Analysis • Chapter 9 • Business Process Modeling, Simulation and Design • Augmented with material from other sources

  2. Overview • Analysis of input data • Identification of field data distributions • Goodness-of-fit tests • Random number generation • Analysis of Output Data • Non-terminating vs. terminating processes • Confidence intervals • Hypothesis testing for comparing designs

  3. Why Input and Output Data Analysis? • [Figure: random input data feeds the simulation model, which produces random output data] • Analysis of input data • Necessary for building a valid model • Three aspects • Identification of (time) distributions • Random number generation • Generation of random variates (integrated into Extend) • Analysis of output data • Necessary for drawing correct conclusions • The reported performance measures are typically random variables!

  4. Capturing Randomness in Input Data • Collect raw field data and use as input for the simulation • No question about relevance • Expensive/impossible to retrieve a large enough data set • Not available for new processes • Not available for multiple scenarios ⇒ no sensitivity analysis • Very valuable for model validation • Generate artificial data to use as input data • Must capture the characteristics of the real data • Collect a sufficient sample of field data • Characterize the data statistically – distribution type and parameters • Generate random artificial data mimicking the real data • High flexibility – easy to handle new scenarios • Cheap • Requires proper statistical analysis to ensure model validity

  5. Procedure for Modeling Input Data 1. Gather data from the real system 2. Identify an appropriate distribution family • Plot histograms of the data • Compare the histogram graphically (“eye-balling”) with the shapes of well-known distribution functions • How about the tails of the distribution, limited or unlimited? • How to handle negative outcomes? 3. Estimate distribution parameters and pick an “exact” distribution 4. Perform a goodness-of-fit test (Can the hypothesis that the picked distribution is correct be rejected?) • Informal test – “eye-balling” • Formal tests, for example • χ²-test • Kolmogorov-Smirnov test • If the distribution hypothesis is rejected ⇒ return to step 2 and try another family • If no known distribution can be accepted ⇒ use an empirical distribution

  6. Example – Modeling Interarrival Times (I) • Data gathering from the real system

  7. Example – Modeling Interarrival Times (II) • Identify an appropriate distribution type/family • Plot a histogram • Divide the data material into appropriate intervals • Usually of equal size • Determine the event frequency for each interval (or bin) • Plot the frequency (y-axis) for each interval (x-axis) The Exponential distribution seems to be a good first guess!

  8. Example – Modeling Interarrival Times (III) • Estimate the parameters defining the chosen distribution • In the current example Exp(λ) has been chosen ⇒ need to estimate the parameter λ • ti = the ith interarrival time in the collected sample of n observations • Maximum-likelihood estimate: λ̂ = n / Σ ti = 1/t̄ (the reciprocal of the sample mean)
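A minimal sketch in Python (assuming NumPy and a made-up sample of interarrival times, not the slide's data) of the maximum-likelihood estimate λ̂ = 1/t̄:

```python
import numpy as np

# Hypothetical interarrival-time sample (minutes); replace with the field data.
t = np.array([8.2, 15.1, 3.7, 22.4, 11.9, 6.3, 18.0, 9.5, 27.8, 4.1])

# Maximum-likelihood estimate of the exponential rate: lambda_hat = n / sum(t_i) = 1 / mean(t_i)
lam_hat = 1.0 / t.mean()
print(f"Estimated rate lambda = {lam_hat:.3f} per minute (mean interarrival time {t.mean():.1f} min)")
```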

  9. Example – Modeling Interarrival Times (IV) • Perform goodness-of-fit test • The purpose is to test the hypothesis that the data material is adequately described by the “exact” distribution chosen in steps 1–3 • Two of the most well-known standardized tests are • The χ²-test • Should not be applied if the sample size n < 20 • The Kolmogorov-Smirnov test • A relatively simple but imprecise test • Often used for small sample sizes • The χ²-test will be applied for the current example

  10. Performing a χ²-Test (I) • In principle • A statistical test comparing the relative frequencies for the intervals/bins in a histogram with the theoretical probabilities of the chosen distribution • Assumptions • Data: x1, x2, …, xn (n observations from the real system) • Model: X1, X2, …, Xn (random variables, independent and identically distributed with CDF F(x)) • The distribution involves k parameters estimated from the sample • The sample contains n observations (sample size = n) • F0(x) denotes the chosen/hypothesized CDF • Null hypothesis H0: F(x) = F0(x) • Alternative hypothesis HA: F(x) ≠ F0(x)

  11. Performing a 2-Test (II) f0(x) The area = p2 = F0(a2) - F0(a1) Data values … Min=a0 a1 a2 a3 ar-2 ar-1 ar=Max Bin: 2 3 r-1 r 1 • Take the entire data range and divide it into r non overlapping intervals or bins • pi = The probability that an observation X belongs to bin i • The Null Hypothesis  pi = F0(ai) - F0(ai-1) • To improve the accuracy of the test • choose the bins (intervals) so that the probabilities pi (i=1,2, …r) are equal for all bins

  12. Performing a 2-Test (III) 2. Define r random variables Oi, i=1, 2, …r • Oi=number of observations in bin i (= the interval (ai-1, ai]) • If H0 is true  the expected value of Oi = n*pi • Oi is Binomially distributed with parameters n and pi 3. Define the test variable T • If H0 is true  T follows a 2(r-k-1) distribution • T = The critical value of T corresponding to a significance level  • obtained from a 2(r-k-1) distribution table • Tobs = The value of T computed from the data material • If Tobs > T  H0 can be rejected on the significance level 

  13. Validity of the 2-Test • Depends on the sample size n and on the bin selection (the size of the intervals) • Rules of thumb • The 2-test is acceptable for ordinary significance levels (=1%, 5%) if the expected number of observations in each interval is greater than 5 (n*pi>5 for all i) • In the case of continuous data and a bin selection such that pi is equal for all bins  • n20  Do not use the 2-test • 20<n 50  5-10 bins recommendable • 50<n 100  10-20 bins recommendable • n >100  n0.5 – 0.2n bins recommendable

  14. Example – Modeling Interarrival Times (V) • Hypothesis – the interarrival time Y is Exp(0.084) distributed • H0: Y ∼ Exp(0.084), HA: Y is not Exp(0.084) distributed • Bin sizes are chosen so that the probability pi is equal for all r bins and n·pi > 5 for all i • Equal pi ⇒ pi = 1/r • n·pi > 5 ⇒ n/r > 5 ⇒ r < n/5 • n = 50 ⇒ r < 50/5 = 10 ⇒ choose for example r = 8 ⇒ pi = 1/8 • Determining the interval limits ai, i = 0, 1, …, 8: • i = 1 ⇒ a1 = ln(1 − 1/8)/(−0.084) = 1.590 • i = 2 ⇒ a2 = ln(1 − 2/8)/(−0.084) = 3.425 • … • i = 8 ⇒ a8 = ln(1 − 8/8)/(−0.084) = ∞
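The interval limits follow from inverting the exponential CDF, ai = −ln(1 − i/r)/λ; a small sketch reproducing the values above (assuming NumPy):

```python
import numpy as np

lam, r = 0.084, 8
i = np.arange(1, r + 1)

# Equal-probability bin limits from the inverse exponential CDF: a_i = -ln(1 - i/r) / lambda
# (the last limit a_r is infinite, so the log(0) warning is suppressed)
with np.errstate(divide="ignore"):
    a = -np.log(1 - i / r) / lam

print(np.round(a, 2))   # approximately [1.59, 3.42, 5.60, 8.25, 11.68, 16.50, 24.76, inf]
```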

  15. Example – Modeling Interarrival Times (VI) • Computing the test statistic Tobs = Σi (oi − n·pi)² / (n·pi) • Note: oi = the actual number of observations in bin i • Determining the critical value Tα • If H0 is true ⇒ T ∼ χ²(8 − 1 − 1) = χ²(6) • If α = 0.05 ⇒ P(T ≤ T0.05) = 1 − α = 0.95 ⇒ (from a χ² table) T0.05 = 12.60 • Rejecting the hypothesis • Tobs = 39.6 > 12.6 = T0.05 ⇒ H0 is rejected at the 5% level

  16. The Kolmogorov-Smirnov test (I) • Advantages over the χ²-test • Does not require decisions about bin ranges • Often applied for smaller sample sizes • Disadvantages • Ideally, all distribution parameters should be known with certainty for the test to be valid • A modified version based on estimated parameter values exists for the Normal, Exponential and Weibull distributions • In practice often used for other distributions as well • For samples with n ≥ 30 the χ²-test is more reliable!

  17. The Kolmogorov-Smirnov test (II) • Compares an empirical “relative-frequency” CDF with the theoretical CDF (F(x)) of a chosen (hypothesized) distribution • The empirical CDF = Fn(x) = (number of xi ≤ x)/n • n = number of observations in the sample • xi = the value of the ith smallest observation in the sample ⇒ Fn(xi) = i/n • Procedure • Order the sample data from the smallest to the largest value • Compute D+ = maxi{i/n − F(xi)}, D− = maxi{F(xi) − (i − 1)/n} and D = max{D+, D−} • Find the tabulated critical KS value corresponding to the sample size n and the chosen significance level, α • If D exceeds the critical KS value ⇒ reject the hypothesis that F(x) describes the data material’s distribution
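SciPy's kstest implements this procedure; a sketch with synthetic data and the hypothesized Exp(0.084) distribution (keeping in mind the caveat above about parameters estimated from the same sample):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.exponential(scale=1 / 0.084, size=30)   # hypothetical interarrival times

# Compare the empirical CDF with the hypothesized Exp(0.084) CDF.
result = stats.kstest(sample, cdf=stats.expon(scale=1 / 0.084).cdf)
print(f"D = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
# Reject H0 at level alpha if the p-value < alpha (equivalently, if D exceeds the critical value).
```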

  18. Distribution Choice in Absence of Sample Data • Common situation, especially when designing new processes • Try to draw on expert knowledge from people involved in similar tasks • When estimates of interval lengths are available • Ex. The service time ranges between 5 and 20 minutes ⇒ plausible to use a Uniform distribution with min = 5 and max = 20 • When estimates of the interval and the most likely value exist • Ex. min = 5, max = 20, most likely = 12 ⇒ plausible to use a Triangular distribution with those parameter values • When estimates of min = a, most likely = c, max = b and the average value = x̄ are available ⇒ use a Beta distribution with parameters α and β
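Drawing artificial service times from the first two cases is straightforward; a sketch assuming NumPy, using the slide's illustrative parameters (fitting the Beta parameters α and β from min, mode, max and mean is not shown here):

```python
import numpy as np

rng = np.random.default_rng(7)

# Only the range 5-20 min is known -> Uniform(5, 20)
uniform_times = rng.uniform(5, 20, size=5)

# Range plus most likely value (min=5, mode=12, max=20) -> Triangular(5, 12, 20)
triangular_times = rng.triangular(5, 12, 20, size=5)

print(uniform_times.round(1), triangular_times.round(1))
```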

  19. Random Number Generators • Needed to create artificial input data to the simulation model • Generating truly random numbers is difficult • Computers use pseudo-random number generators based on mathematical algorithms – not truly random but good enough • A popular algorithm is the “linear congruential method” 1. Define a random seed x0 from which the sequence is started 2. The next “random” number in the sequence is obtained from the previous through the relation xn+1 = (a·xn + c) mod m, where a, c, and m are integers > 0

  20. Example – The Linear Congruential Method • Assume that m = 8, a = 5, c = 7 and x0 = 4 • Larger m ⇒ longer sequence before it starts repeating itself
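A sketch of the linear congruential method with the slide's parameters (m = 8, a = 5, c = 7, x0 = 4); the sequence repeats with period at most m:

```python
def lcg(x0, a=5, c=7, m=8, count=16):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
    xs, x = [], x0
    for _ in range(count):
        x = (a * x + c) % m
        xs.append(x)
    return xs

seq = lcg(4)                        # seed x0 = 4, as in the example
print(seq)                          # the same 8 values repeat after one full period
print([x / 8 for x in seq[:8]])     # scaled to pseudo-random numbers in [0, 1)
```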

  21. The Runs Test (I) • Test for detecting dependencies in a sequence of generated random numbers • A run is defined as a sequence of increasing or decreasing numbers • “+” indicates an increasing step • “–” indicates a decreasing step • Ex. Numbers: 1, 7, 8, 6, 5, 3, 4, 10, 12, 15 • Signs of successive differences: + + – – – + + + + (3 runs) • The test is based on comparing the number of runs expected in a truly random sequence with the number of runs in the observed sequence

  22. The Runs Test (II) • Hypothesis: H0: the sequence of numbers is independent, HA: the sequence of numbers is not independent • R = number of runs in a truly random sequence of n numbers (random variable) • It has been shown that μR = (2n − 1)/3, σR² = (16n − 29)/90 and R ≈ N(μR, σR) • Test statistic: Z = (R − μR)/σR ∼ N(0, 1) • Assuming a significance level α and a two-sided test • P(−Zα/2 ≤ Z ≤ Zα/2) = 1 − α • H0 is rejected if |Zobserved| > Zα/2
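A sketch of the runs test as described above, assuming NumPy and SciPy and using the example sequence from the previous slide:

```python
import numpy as np
from scipy import stats

def runs_test(x, alpha=0.05):
    """Runs up-and-down test for independence of a number sequence."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    signs = np.sign(np.diff(x))                  # "+" for increasing, "-" for decreasing steps
    runs = 1 + np.count_nonzero(signs[1:] != signs[:-1])
    mu_R = (2 * n - 1) / 3
    var_R = (16 * n - 29) / 90
    z = (runs - mu_R) / np.sqrt(var_R)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return runs, z, abs(z) > z_crit              # reject H0 if |Z| > z_{alpha/2}

print(runs_test([1, 7, 8, 6, 5, 3, 4, 10, 12, 15]))
```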

  23. Generating Random Variates • Assume random numbers, r, from a Uniform(0, 1) distribution are available • Random numbers from any distribution can be obtained by applying the “inverse transformation technique” • The Inverse Transformation Technique • Generate a U[0, 1] distributed random number r • T is a random variable with a CDF FT(t) from which we would like to obtain a sequence of random numbers • Note: 0 ≤ FT(t) ≤ 1 for all values of t • Set FT(t) = r and solve for t, i.e., t = FT⁻¹(r) ⇒ t is a random number from the distribution of T, i.e., a realization of T
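For the exponential distribution the inverse CDF has a closed form, t = −ln(1 − r)/λ; a sketch of the technique assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 0.084

# Inverse transformation technique for the exponential distribution:
# F_T(t) = 1 - exp(-lam * t)  =>  t = F_T^{-1}(r) = -ln(1 - r) / lam
r = rng.uniform(0.0, 1.0, size=5)      # U(0,1) random numbers
t = -np.log(1 - r) / lam               # exponential random variates
print(t.round(2))
```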

  24. Analysis of Simulation Output Data • The output data collected from a simulation model are realizations of stochastic variables • Results from random input data and random processing times ⇒ statistical analysis is required to • Estimate performance characteristics • Mean, variance, confidence intervals etc. for output variables • Compare performance characteristics for different designs • The validity of the statistical analysis and the design conclusions is contingent on a careful sampling approach • Sample sizes – run length and number of runs • Inclusion or exclusion of “warm-up” periods? • One long simulation run or several shorter ones?

  25. Terminating vs. Non-Terminating Processes

  26. Non-Terminating Processes • Do not end naturally within a particular time horizon • Ex. Inventory systems • Usually reach steady state after an initial transient period • Assumes that the input data is stationary • To study the steady-state behavior it is vital to determine the duration of the transient period • Examine line plots of the output variables • To reduce the duration of the transient (“warm-up”) period • Initialize the process with appropriate average values

  27. Illustration – Transient and Steady State • [Figure: line plot of individual cycle times and the running average cycle time, showing the transient state followed by the steady state]

  28. Terminating Processes • End after a predetermined time span • Typically the system starts from an empty state and ends in an empty state • Ex. A grocery store, a construction project, … • Terminating processes may or may not reach steady state • Usually the transient period is of great interest for these processes • Output data are usually obtained from multiple independent simulation runs • The length of a run is determined by the natural termination of the process • Each run needs a different stream of random numbers • The initial state of each run is typically the same
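One way to give each replication its own random stream is NumPy's SeedSequence.spawn; the simulate_one_day function below is only a hypothetical stand-in for a real terminating simulation model:

```python
import numpy as np

# Spawn independent random streams, one per replication, so the runs are
# statistically independent but still reproducible from one master seed.
n_runs = 10
streams = [np.random.default_rng(s) for s in np.random.SeedSequence(2024).spawn(n_runs)]

def simulate_one_day(rng):
    """Placeholder for one terminating run (e.g., one day in a grocery store)."""
    interarrivals = rng.exponential(scale=1 / 0.084, size=200)
    return interarrivals.mean()       # some output measure from the run

results = np.array([simulate_one_day(rng) for rng in streams])
print(results.round(2))
```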

  29. Confidence Intervals and Point Estimates • Statistical estimation of measures from a data material is typically done in two ways • Point estimates (single values) • Confidence intervals (intervals) • The significance level α • Indicates the probability of not finding the true value within the interval (Type I error) • Chosen by the analyst/manager • Determinants of confidence interval width • The chosen significance level α • Lower α ⇒ wider confidence interval • The sample size and the standard deviation (σ) • Larger sample ⇒ smaller standard deviation of the point estimate (σ/√n) ⇒ narrower interval

  30. Important Point Estimates • In simulation the most commonly used statistics are the mean and the standard deviation (σ) • From a sample of n observations • Point estimate of the mean: x̄ = (1/n)·Σ xi • Point estimate of σ: s = √[ Σ (xi − x̄)² / (n − 1) ]

  31. Confidence Interval for Population Means (I) • Characteristics of the point estimate for the population mean • Xi = random variable representing the value of the ith observation in a sample of size n (i = 1, 2, …, n) • Assume that all observations Xi are independent random variables • The population mean = E[Xi] = μ • The population standard deviation = (Var[Xi])^0.5 = σ • Point estimate of the population mean = X̄ = (1/n)·Σ Xi • Mean and Std. Dev. of the point estimate for the population mean: E[X̄] = μ and σX̄ = σ/√n

  32. Confidence Interval for Population Means (II) • Distribution of the point estimate for population means • For any distribution of Xi (i = 1, 2, …, n), when n is large (n ≥ 30), X̄ ≈ N(μ, σ/√n) due to the Central Limit Theorem • If all Xi (i = 1, 2, …, n) are normally distributed, X̄ ∼ N(μ, σ/√n) for any n • A standard transformation: Z = (X̄ − μ)/(σ/√n) ∼ N(0, 1) • Defining a symmetric two-sided confidence interval • P(−Zα/2 ≤ Z ≤ Zα/2) = 1 − α • α is known ⇒ Zα/2 can be found from a N(0, 1) probability table • ⇒ Confidence interval for the population mean μ: x̄ ± Zα/2·σ/√n

  33. Confidence Interval for Population Means (III) • In case the population standard deviation, σ, is known ⇒ x̄ ± Zα/2·σ/√n • In case σ is unknown we need to estimate it • Use the point estimate s • The test variable is then no longer normally distributed; it follows a Student's t-distribution with n − 1 degrees of freedom ⇒ interval x̄ ± tα/2, n−1·s/√n • In practice, when n is large (≥ 30), the t-distribution is often approximated with the Normal distribution!
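A sketch of the t-based confidence interval, assuming SciPy and a hypothetical sample of output observations:

```python
import numpy as np
from scipy import stats

x = np.array([12.1, 9.8, 14.3, 11.0, 13.7, 10.5, 12.9, 11.8])   # hypothetical output data
n = len(x)
x_bar, s = x.mean(), x.std(ddof=1)      # point estimates of mu and sigma

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
half_width = t_crit * s / np.sqrt(n)
print(f"{100 * (1 - alpha):.0f}% CI for the mean: {x_bar:.2f} +/- {half_width:.2f}")
```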

  34. Determining an Appropriate Sample Size • A common problem in simulation • How many runs and how long should they be? • Depends on the variability of the sought output variables • If a symmetric confidence interval of width 2d is desired for a mean performance measure ⇒ • If x̄ is normally distributed and σ is known ⇒ n ≥ (Zα/2·σ/d)² • If σ is unknown and estimated with s ⇒ n ≥ (tα/2, n−1·s/d)² (solved iteratively, since n appears on both sides)
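A sketch of the normal-approximation version of this formula, with a hypothetical pilot estimate s and desired half-width d (assuming SciPy):

```python
import numpy as np
from scipy import stats

s = 4.2        # standard deviation estimated from a pilot sample (hypothetical)
d = 1.0        # desired half-width of the confidence interval
alpha = 0.05

# Normal approximation: n >= (z_{alpha/2} * s / d)^2
# (with the t-quantile instead of z, the formula would be solved iteratively)
z = stats.norm.ppf(1 - alpha / 2)
n_required = int(np.ceil((z * s / d) ** 2))
print(f"Approximately {n_required} observations (runs) needed")
```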

  35. Hypothesis Testing (I) 1. Testing if a population mean (μ) is equal to, larger than or smaller than a given value μ0 • Suppose that in a sample of n observations the point estimate of μ = x̄ • H0: μ = μ0, HA: μ ≠ μ0 (or a one-sided alternative) • Test statistic: Z = (x̄ − μ0)/(s/√n), which follows a Student's t-distribution with n − 1 degrees of freedom • Reject H0 at the significance level α if |Zobs| > tα/2, n−1 (two-sided test)

  36. Hypothesis Testing (II) 2. Testing if two sample means are significantly different • Useful when comparing process designs • A two-tailed test when σ1 = σ2 = σ (estimated by the pooled standard deviation s) • H0: μ1 − μ2 = a (typically a = 0), HA: μ1 − μ2 ≠ a • The test statistic Z = (x̄1 − x̄2 − a)/(s·√(1/n1 + 1/n2)) follows a Student's t-distribution with n1 + n2 − 2 degrees of freedom • Reject H0 at the significance level α if it is not true that −tα/2, n1+n2−2 ≤ Z ≤ tα/2, n1+n2−2

  37. Hypothesis Testing (III) • If the sample sizes are large (n1 + n2 − 2 > 30) • Z is approximately N(0, 1) distributed • Reject H0 if it is not true that −Zα/2 ≤ Z ≤ Zα/2 • In practice, when comparing designs, non-overlapping 3σ intervals are often used as a criterion • H0: μ1 − μ2 > 0 • HA: μ1 − μ2 ≤ 0 • Reject H0 if the 3σ interval for design 1 lies entirely below the 3σ interval for design 2, i.e., if x̄1 + 3·s1/√n1 < x̄2 − 3·s2/√n2
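A sketch comparing two designs, assuming SciPy; the cycle-time samples are synthetic stand-ins for simulation output, and the informal 3σ non-overlap check from the slide is included alongside the pooled two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
cycle_time_A = rng.normal(22.0, 4.0, size=40)   # hypothetical output from design A
cycle_time_B = rng.normal(20.5, 4.0, size=40)   # hypothetical output from design B

# Pooled-variance two-sample t-test of H0: mu_A - mu_B = 0
t_stat, p_value = stats.ttest_ind(cycle_time_A, cycle_time_B, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")    # reject H0 at level alpha if p < alpha

# Informal criterion: non-overlapping 3-sigma intervals for the two sample means
def interval(x):
    half = 3 * x.std(ddof=1) / np.sqrt(len(x))
    return x.mean() - half, x.mean() + half

(lo_A, hi_A), (lo_B, hi_B) = interval(cycle_time_A), interval(cycle_time_B)
print("Designs differ" if hi_B < lo_A or hi_A < lo_B else "No clear difference")
```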
