This article explores the implications of autocorrelated data in statistical calculations, particularly focusing on variance and confidence intervals. We discuss how the covariance of consecutive observations affects the variance of sample averages, leading to common misconceptions about accuracy in confidence intervals. Key concepts include the impact of non-zero lag-1 autocorrelation on hypothesis testing and the classical "wrong-way" hypothesis test. Theoretical underpinnings are presented alongside practical results, emphasizing the need to account for autocorrelation in statistical analysis.
CALCULATIONS ON VARIANCES: SOME BASICS • Let X and Y be random variables • COV(X, Y) = 0 if X and Y are independent
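Written out explicitly (standard identities, added here for reference rather than taken from the slides), the rules these calculations rely on are:

```latex
% Variance of a sum, and of a sample average of n observations:
\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)
\qquad
\mathrm{Var}(\bar{X}) = \frac{1}{n^{2}}\left[\sum_{i=1}^{n}\mathrm{Var}(X_i)
      + 2\sum_{i<j}\mathrm{Cov}(X_i, X_j)\right]
```

Only when every COV(Xi, Xj) is zero does the VAR of the AVG reduce to the familiar σ²/n used in the usual confidence-interval formula.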
WHAT IF COV(Xi, Xi+1) > 0? • We calculate an AVG by adding X’s • The VAR of the AVG is bigger, by terms involving COV(Xi, Xi+1) • The usual formula for the VAR of the AVG assumes COV(Xi, Xi+1) = 0 • So the formula underestimates the VAR of the AVG • The formula for the width of the CI gives too small a width • The CI does not cover the true mean μ with the advertised probability 1 − α • Our conclusion has oversold its accuracy (see the simulation sketch below)
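A minimal simulation sketch of this coverage loss, not taken from the slides: it assumes an AR(1) process with coefficient phi = 0.5 and applies the usual t-based confidence interval as if the observations were independent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, phi, mu, n_reps = 50, 0.5, 0.0, 5000   # illustrative settings (assumed)
covered = 0

for _ in range(n_reps):
    # Generate an AR(1) series: X_t = mu + phi*(X_{t-1} - mu) + e_t
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = mu + e[0] / np.sqrt(1 - phi**2)   # start in the stationary distribution
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + e[t]

    # The usual 95% CI half-width, which assumes COV(Xi, Xi+1) = 0
    half_width = stats.t.ppf(0.975, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
    xbar = x.mean()
    covered += (xbar - half_width <= mu <= xbar + half_width)

print(f"empirical coverage: {covered / n_reps:.3f}  (nominal 0.95)")
```

With positively autocorrelated data the printed coverage comes out well below 0.95, which is exactly the oversold accuracy described above.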
AUTOCORRELATED DATA • Consider the Auto-Regressive (Lag 1) process Xt = μ + φ·(Xt−1 − μ) + et, where the et are independent errors with mean 0 • For this process the lag 1 autocorrelation is ρ(1) = φ
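A short sketch generating such a series and estimating its lag 1 autocorrelation (the values φ = 0.6, μ = 10, and unit-variance normal errors are only illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi, mu = 2000, 0.6, 10.0   # illustrative values (assumed)

# X_t = mu + phi*(X_{t-1} - mu) + e_t, with e_t ~ N(0, 1)
x = np.empty(n)
x[0] = mu + rng.normal() / np.sqrt(1 - phi**2)   # stationary start
for t in range(1, n):
    x[t] = mu + phi * (x[t - 1] - mu) + rng.normal()

# Sample lag 1 autocorrelation r(1)
d = x - x.mean()
r1 = np.sum(d[:-1] * d[1:]) / np.sum(d**2)
print(f"r(1) = {r1:.3f}   (theoretical rho(1) for this AR(1): {phi})")
```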
The Test for Lag 1 Autocorrelation • Ho: ρ(1) = 0 • Ha: ρ(1) ≠ 0
STATISTICALLY SIGNIFICANT AUTOCORRELATION • The lag 1 autocorrelation ρ(1) is estimated by r(1) • For large n, r(1) is approximately Normal with Mean ρ(1) and Variance (1 − ρ(1)²)/n
So the quantity z = (r(1) − ρ(1)) / √[(1 − ρ(1)²)/n] is approximately N(0, 1), can be compared to critical values, and p-values can be computed. It simplifies to z = r(1)·√n when we are testing ρ(1) = 0. Remember that this is a classical “wrong-way” hypothesis test: we usually hope not to reject Ho, and failing to reject is only weak evidence that the data are uncorrelated.
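A small sketch of the test as just described, assuming the large-sample approximation above (so that z = r(1)·√n under Ho); the function name is made up for illustration:

```python
import numpy as np
from scipy import stats

def lag1_autocorr_test(x):
    """Test Ho: rho(1) = 0 against Ha: rho(1) != 0 using z = r(1) * sqrt(n)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    r1 = np.sum(d[:-1] * d[1:]) / np.sum(d**2)   # sample lag 1 autocorrelation
    z = r1 * np.sqrt(n)                          # simplified form when rho(1) = 0
    p_value = 2 * stats.norm.sf(abs(z))          # two-sided p-value
    return r1, z, p_value

# Example on independent data, where we expect a non-significant result
rng = np.random.default_rng(2)
print(lag1_autocorr_test(rng.normal(size=100)))
```

Because this is a wrong-way test, a large p-value here does not prove the data are independent; it only fails to flag the autocorrelation.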