Presentation Transcript

### Economics 105: Statistics

### Nature of Serial Correlation

### Causes of Serial Correlation

### Detection of Serial Correlation

### Serial Correlation

### Serial Correlation Example

### Do’s and Don’ts

The RAP is due via email at 5:15 on the last day of exams. Please save it as a PDF file first, and email the Excel file separately.

I know! We can save the model, but not until Eco205.

Holy endogeneity, Batman!

Violations of GM Assumptions

| Assumption | Violation |
| --- | --- |
| Model is linear in parameters, the betas | Wrong functional form |
| Errors have zero conditional mean | Omit relevant variable (include irrelevant variable); errors in variables; sample selection bias; simultaneity bias |
| Homoskedastic errors | Heteroskedastic errors |
| No serial correlation of errors | There exists serial correlation in errors |
| i.i.d. sample of data | |

- The error in period t is a function of the error in the prior period alone: e_t = ρ e_{t−1} + u_t. This is first-order autocorrelation, denoted AR(1) for “autoregressive” process
- The usual assumptions apply to the new error term u_t
- ρ > 0 is positive serial correlation
- ρ < 0 is negative serial correlation

The error in period t can be a function of the errors in more than one prior period

- Second-order serial correlation: e_t = ρ₁ e_{t−1} + ρ₂ e_{t−2} + u_t
- Higher orders are generated analogously
- Seasonally based serial correlation (e.g., the error is related to the error in the same quarter of the prior year)
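A minimal simulation of the AR(1) process above makes the pattern concrete (an illustration assuming NumPy; the seed and ρ = 0.8 are arbitrary choices, not from the slides):

```python
import numpy as np

# Simulate an AR(1) error process: e_t = rho * e_{t-1} + u_t,
# where u_t is i.i.d. N(0, 1).  rho = 0.8 gives positive serial correlation.
rng = np.random.default_rng(0)
rho, n = 0.8, 500
u = rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + u[t]

# Sample first-order autocorrelation of the simulated errors:
# it should be close to rho.
r1 = np.corrcoef(e[:-1], e[1:])[0, 1]
print(round(r1, 2))
```

With ρ > 0, successive errors move together, which is exactly the pattern the residual plots later in the deck display.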

The error term in the regression captures

- Measurement error
- Omitted variables that are (hopefully) uncorrelated with the included explanatory variables

1. Omitted factors: frequently, factors omitted from the model are correlated over time
2. Persistence of shocks
   - Effects of random shocks (e.g., earthquake, war, labor strike) often carry over through more than one time period
3. Inertia
   - Time series for GNP, (un)employment, output, prices, interest rates, etc. follow cycles, so successive observations are related
   - Past actions have a strong effect on current ones
   - Consumption last period predicts consumption this period
4. Misspecified model, incorrect functional form
5. Spatial serial correlation
   - In cross-sectional data on regions, a random shock in one region can cause the outcome of interest to change in adjacent regions
   - “Keeping up with the Joneses”

Consequences for OLS Estimates

- Using an OLS estimator when the errors are autocorrelated still yields unbiased coefficient estimators
- However, the standard errors are estimated incorrectly
- Whether the standard errors are overstated or understated depends on the nature of the autocorrelation
- For positive AR(1), standard errors are too small!
- Any hypothesis tests conducted could therefore yield erroneous results
- For positive AR(1), we may conclude estimated coefficients ARE significantly different from 0 when we shouldn’t!
- OLS is no longer BLUE
- A pattern exists in the errors, suggesting an estimator that exploited this pattern would be more efficient
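A small Monte Carlo sketch illustrates the point about understated standard errors (my own illustration, not from the slides; NumPy assumed, with arbitrary choices of n, ρ, and seed):

```python
import numpy as np

# Regress y on x when the true slope is 0 but the errors follow a
# positive AR(1) process.  The conventional OLS standard error
# understates the slope's true sampling variability, so t-tests
# reject H0: beta = 0 far more often than the nominal 5%.
rng = np.random.default_rng(42)
n, rho, reps = 100, 0.9, 500
x = np.cumsum(rng.standard_normal(n))  # a slowly-evolving regressor, as in time series

rejections = 0
for _ in range(reps):
    # AR(1) errors
    u = rng.standard_normal(n)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + u[t]
    y = e  # true model: y = 0 * x + e

    # OLS slope and its conventional (i.i.d.-errors) standard error
    xd = x - x.mean()
    b = (xd * y).sum() / (xd ** 2).sum()
    resid = y - y.mean() - b * xd
    se = np.sqrt((resid ** 2).sum() / (n - 2) / (xd ** 2).sum())
    if abs(b / se) > 1.96:
        rejections += 1

print(rejections / reps)  # well above the nominal 0.05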

[Residual plot: no obvious pattern; the errors seem random.]

Sometimes, however, the errors follow a pattern: they are correlated across observations, creating a situation in which the observations are not independent of one another.

Detection of Serial Correlation

[Residual plot: here the residuals do not seem random, but rather seem to follow a pattern.]

Detection: The Durbin-Watson Test

- Provides a way to test H0: ρ = 0
- It is a test for the presence of first-order serial correlation
- The alternative hypothesis can be
- ρ ≠ 0
- ρ > 0: positive serial correlation
- Most likely alternative in economics
- ρ < 0: negative serial correlation
- The DW test statistic is d

Detection: The Durbin-Watson Test

- To test for positive serial correlation with the Durbin-Watson statistic, under the null we expect d to be near 2
- The smaller d, the more likely the alternative hypothesis
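The slides name the statistic d without showing its formula; the standard Durbin-Watson statistic, computed from the OLS residuals e_t, is

```latex
d = \frac{\sum_{t=2}^{n} (e_t - e_{t-1})^2}{\sum_{t=1}^{n} e_t^2} \approx 2(1 - \hat{\rho})
```

so d is near 2 when the estimated ρ is near 0, near 0 under strong positive serial correlation, and near 4 under strong negative serial correlation.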

The sampling distribution of d depends on the values of the explanatory variables. Since every problem has a different set of explanatory variables, Durbin and Watson derived upper and lower limits for the critical value of the test.

Detection: The Durbin-Watson Test

- Durbin and Watson derived upper and lower limits such that d_L ≤ d* ≤ d_U
- They developed the following decision rule for H1: ρ > 0: reject H0 if d < d_L; fail to reject H0 if d > d_U; the test is inconclusive if d_L ≤ d ≤ d_U

Detection: The Durbin-Watson Test

- To test for negative serial correlation (H1: ρ < 0), apply the same bounds to 4 − d: reject H0 if d > 4 − d_L; fail to reject H0 if d < 4 − d_U; the test is inconclusive otherwise

- Can use a two-tailed test if there is no strong prior belief about whether there is positive or negative serial correlation; the decision rule is then: reject H0 if d < d_L or d > 4 − d_L; fail to reject H0 if d_U < d < 4 − d_U; inconclusive otherwise

Table of critical values for Durbin-Watson statistic (table E11, page 833 in BLK textbook)

http://hadm.sph.sc.edu/courses/J716/Dw.html

What is the effect of the price of oil on the number of wells drilled in the U.S.?


Analyze residual plots … but be careful …

Remember what serial correlation is …

- This plot only “works” if the observation number is in the same order as the unit of time

The same graph, plotted versus “year”

- Graphical evidence of serial correlation

Calculate the DW test statistic

Compare it to the critical value at the chosen significance level

d_L and d_U for 1 X-variable & n = 62 are not in the table

d_L for 1 X-variable & n = 60 is 1.55; d_U = 1.62

- Since d = .192 < 1.55, reject H0: ρ = 0 in favor of H1: ρ > 0 at α = 5%
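The calculation above can be sketched in Python (an illustration with simulated residuals, not the slides’ oil-wells data; NumPy assumed):

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: d = sum((e_t - e_{t-1})^2) / sum(e_t^2)."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Positively autocorrelated residuals (AR(1) with rho = 0.9) give d near 0;
# independent residuals give d near 2.
rng = np.random.default_rng(1)
n = 200
u = rng.standard_normal(n)
e_pos = np.zeros(n)
for t in range(1, n):
    e_pos[t] = 0.9 * e_pos[t - 1] + u[t]

d_pos = durbin_watson(e_pos)
d_iid = durbin_watson(rng.standard_normal(n))

# Decision for H1: rho > 0, using the bounds the example looked up
# (1 X-variable, n = 60): reject H0 when d < d_L.
d_L, d_U = 1.55, 1.62
print(d_pos < d_L)  # evidence of positive serial correlation
```

As in the example, a d far below d_L leads to rejecting H0: ρ = 0 in favor of positive serial correlation, while the i.i.d. residuals produce a d near 2.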

Do interpret coefficients carefully by keeping in mind the units of X and of Y

Do discuss separately, and not conflate, statistical significance and economic magnitude, i.e., the size of the estimated effect (of X on Y)

Do not say one variable is “more significant” or “more important” than another because it has a smaller p-value

p-values are measures of evidence (against H0)

p-values do not give us info about the magnitude of the effect (i.e., the “effect size”)

Do not say one variable is “more significant” or “more important” than another because its estimated coefficient is twice as big as the other’s

remember the ceteris paribus interpretation

don’t compare the magnitudes of coefficients unless they are measured in the same units

Do not assume that two estimated coefficients are different from one another if one is statistically significant and the other isn’t

Gelman & Stern (2006), “The Difference Between ‘Significant’ and ‘Not Significant’ is not Itself Statistically Significant,” American Statistician, vol. 60, no. 4
