
Stat 112: Lecture 15 Notes

- Finish Chapter 6:
- Review on Checking Assumptions (Section 6.4-6.6)
- Outliers and Influential Points (Section 6.7)
- Homework 4 is due this Thursday.
- Please let me know of any ideas you want to discuss for the final project.

Review of Checking and Remedying Assumptions

- Linearity:

Check the residual-by-predicted plot and the residual plot for each variable for patterns in the mean of the residuals.

Remedies: Transformations and Polynomials. To see if a remedy works, check the new residual plots for patterns in the mean of the residuals.

- Constant Variance: The standard deviation of Y for the subpopulation of units with explanatory variable values X_1 = x_1, ..., X_K = x_K is the same for all subpopulations.

Check residual by predicted plot for pattern in the spread of the residuals.

Remedies: Transformation of Y. To see if remedy works, check residual by predicted plot for the transformed Y regression.

- Normality: The distribution of Y for the subpopulation of units with explanatory variable values X_1 = x_1, ..., X_K = x_K is normally distributed for all subpopulations.

Check the histogram of the residuals for a bell-shaped distribution, and the normal quantile plot of the residuals for an approximately straight line.

Remedies: Transformation of Y. To see if the remedy works, check the histogram and normal quantile plot of the residuals for the transformed Y regression.

Checking whether a transformation of Y works for remedying Non-constant variance

- Create a new column with the transformation of the Y variable by right-clicking in the new column, clicking Formula, and entering the appropriate formula for the transformation (note: in the JMP formula editor, Log is found among the Transcendental functions)
- Fit the regression of the transformation of Y on the X variables
- Check the residual by predicted plot to see if the spread of the residuals appears constant over the range of predicted values.

Outliers in Residuals

- Standardized Residual: the residual divided by its estimated standard deviation, residual_i / (RMSE * sqrt(1 - h_i)), where h_i is the observation's leverage (hat value).
- Under normality assumption, 95% of standardized residuals should be between -2 and 2, and 99% should be between -3 and 3.
- An observation with a standardized residual above 3 or below -3 is considered to be an outlier in its residual, i.e., its Y value is unusual given its explanatory variables. It is worth looking further at the observation to see if any reasons for the large magnitude residual can be identified.

Influential Points and Leverage Points

- Influential observation: a point that, if removed, would markedly change the statistical analysis. For simple linear regression, points that are outliers in the X direction are often influential.
- Leverage point: a point that is an outlier in the X direction and so has the potential to be influential. It will be influential if its residual is of moderately large magnitude.

Which Observations Are Influential?

Center City Philadelphia is influential; Gladwyne is not. In general, points that have high leverage are more likely to be influential.

Excluding Observations from Analysis in JMP

- To exclude an observation from the regression analysis in JMP, go to the row of the observation, click Rows and then click Exclude/Unexclude. A red circle with a diagonal line through it should appear next to the observation.
- To put the observation back into the analysis, go to the row of the observation, click Rows and then click Exclude/Unexclude. The red circle should no longer appear next to the observation.

Formal measures of leverage and influence

- Leverage: “Hat values” (JMP calls them hats)
- Influence: Cook’s Distance (JMP calls them Cook’s D Influence).
- To obtain them in JMP, click Analyze, Fit Model, put Y variable in Y and X variable in Model Effects box. Click Run Model box. After model is fit, click red triangle next to Response. Click Save Columns and then Click Hats for Leverages and Click Cook’s D Influences for Cook’s Distances.
- To sort observations in terms of Cook’s Distance or Leverage, click Tables, Sort and then put variable you want to sort by in By box.

Center City Philadelphia has both high influence (Cook's Distance much greater than 1) and high leverage (hat value > 3*2/99 = 0.06). No other observations have high influence or high leverage.

Rules of Thumb for High Leverage and High Influence

- High Leverage: Any observation with a leverage (hat value) > (3 * # of coefficients in regression model)/n has high leverage, where # of coefficients in regression model = 2 for simple linear regression and n = number of observations.

- High Influence: Any observation with a Cook's Distance greater than 1 has high influence.

What to Do About Suspected Influential Observations?

See the flowchart attached to the end of the slides.

Does removing the observation change the substantive conclusions?

- If not, can say something like “Observation x has high influence relative to all other observations but we tried refitting the regression without Observation x and our main conclusions didn’t change.”

If removing the observation does change substantive conclusions, is there any reason to believe the observation belongs to a population other than the one under investigation?

- If yes, omit the observation and proceed.
- If no, does the observation have high leverage (an outlier in the explanatory variables)?
- If yes, omit the observation and proceed. Report that conclusions only apply to a limited range of the explanatory variable.
- If no, not much can be said. More data (or clarification of the influential observation) are needed to resolve the questions.

General Principles for Dealing with Influential Observations

- General principle: Delete observations from the analysis sparingly – only when there is good cause (observation does not belong to population being investigated or is a point with high leverage). If you do delete observations from the analysis, you should state clearly which observations were deleted and why.

Influential Points, High Leverage Points, Outliers in Multiple Regression

- As in simple linear regression, we identify high leverage and high influence points by checking the leverages and Cook’s distances (Use save columns to save Cook’s D Influence and Hats).
- High influence points: Cook’s distance > 1
- High leverage points: hat value greater than (3*(# of explanatory variables + 1))/n. These are points whose explanatory variables are outliers in a multidimensional sense.
- Use same guidelines for dealing with influential observations as in simple linear regression.
- Point that has unusual Y given its explanatory variables: point with a residual that is more than 3 RMSEs away from zero.

Multiple Regression, Modeling and Outliers, Leverage and Influential Points: Pollution Example

- Data set pollution.JMP provides information about the relationship between pollution and mortality for 60 cities over 1959-1961.
- The variables are:
- y (MORT) = total age-adjusted mortality in deaths per 100,000 population
- PRECIP = mean annual precipitation (in inches)
- EDUC = median number of school years completed for persons 25 and older
- NONWHITE = percentage of the 1960 population that is nonwhite
- NOX = relative pollution potential of NOx (related to tons of NOx emitted per day per square kilometer)
- SO2 = relative pollution potential of SO2

Scatterplot Matrix

- Before fitting a multiple linear regression model, it is a good idea to make scatterplots of the response variable versus each explanatory variable. These can suggest transformations of the explanatory variables that need to be done, as well as reveal potential outliers and influential points.
- Scatterplot matrix in JMP: click Analyze, Multivariate Methods, Multivariate, and then put the response variable first in the Y, Columns box, followed by the explanatory variables.
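Outside JMP, pandas provides an equivalent scatterplot matrix. The sketch below uses randomly generated placeholder columns named after the pollution example's variables (the real values are in pollution.JMP):

```python
import matplotlib
matplotlib.use("Agg")            # render off-screen
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix

rng = np.random.default_rng(5)
# Placeholder data with the pollution example's column names
df = pd.DataFrame({
    "MORT": rng.normal(940, 60, 60),
    "PRECIP": rng.uniform(10, 60, 60),
    "EDUC": rng.uniform(9, 12, 60),
    "NONWHITE": rng.uniform(0, 40, 60),
    "NOX": rng.lognormal(2, 1, 60),
    "SO2": rng.lognormal(3, 1, 60),
})

# Response first, then the explanatory variables, as in JMP's Y, Columns box
axes = scatter_matrix(df, figsize=(10, 10))
print(axes.shape)
```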

Crunched Variables

- When an X variable is “crunched” – meaning that most of its values are crunched together and a few are far apart – there will be influential points. To reduce the effects of crunching, it is a good idea to transform the variable to its log.

2. a) From the scatterplot of MORT vs. NOX we see that the NOX values are crunched very tightly, so a log transformation of NOX is needed.

b) The curvature in MORT vs. SO2 indicates that a log transformation of SO2 may be suitable.

After the two transformations we have the following correlations:
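The transformation-and-correlation step can be sketched in Python on stand-in data (the actual correlations come from pollution.JMP; the data below are synthetic):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
nox = rng.lognormal(1, 1.2, 60)            # "crunched": a few huge values
so2 = rng.lognormal(2, 1.0, 60)
# Synthetic mortality that is linear on the log scale
mort = 900 + 20 * np.log(nox) + 15 * np.log(so2) + rng.normal(0, 10, 60)

df = pd.DataFrame({"MORT": mort, "NOX": nox, "SO2": so2})
df["logNOX"] = np.log(df["NOX"])           # the two transformations
df["logSO2"] = np.log(df["SO2"])

corr = df.corr()["MORT"]
print(corr[["NOX", "logNOX", "SO2", "logSO2"]])
# Here the correlations with MORT are typically stronger on the log scale
```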

Labeling Observations

- To have points identified by a certain column, go to the column, click Columns, and click Label (click Unlabel to unlabel).
- To label a row, go to the row, click Rows, and click Label.

Leverage Plots

- A “simple regression view” of a multiple regression coefficient. For xj, plot the residual of y (from the regression without xj) versus the residual of xj (from regressing xj on the rest of the x’s); both axes are recentered.
- Slope: the coefficient for that variable in the multiple regression.
- Distances from the points to the least squares line are the multiple regression residuals. The distance from a point to the horizontal line is its residual when the explanatory variable is not included in the model.
- Useful to identify, for xj, outliers, leverage points, and influential points. (Use them the same way as in a simple regression to identify the effect of points on the regression coefficient of a particular variable.)

The enlarged observation, New Orleans, is an outlier for estimating each coefficient and is highly leveraged for estimating the coefficients of interest on log NOX and log SO2. Since New Orleans is both highly leveraged and an outlier, we expect it to be influential.
