
Presentation Transcript


  1. Welcome: Seventh Lecture for MATH 3330 M, Professor G.E. Denzel

  2. Agenda • Begin discussion of multi-predictor models

  3. Learning Objectives • How to find the intercept and slope coefficients that minimize the error sum of squares (the ‘least squares’ estimators; sketched below) • Properties of these estimators • Linear functions of the data • Caution on what conclusions are possible without further knowledge of the population
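For reference, here is a compact statement of the least-squares problem for a k-predictor model, written in matrix notation that the slide itself does not spell out (a sketch added here, not part of the original slides):

```latex
% Model: Y_i = beta_0 + beta_1 x_{i1} + ... + beta_k x_{ik} + error_i.
% The least-squares estimators minimize the error sum of squares
\[
Q(b_0,\dots,b_k) \;=\; \sum_{i=1}^{N}\bigl(Y_i - b_0 - b_1 x_{i1} - \cdots - b_k x_{ik}\bigr)^2 .
\]
% Writing X for the N x (k+1) design matrix (a column of 1s plus the
% predictor columns), the minimizers solve the normal equations:
\[
X'X\,\hat{\beta} \;=\; X'Y
\qquad\Longrightarrow\qquad
\hat{\beta} \;=\; (X'X)^{-1}X'Y \quad \text{(when } X'X \text{ is invertible)} .
\]
% Each estimated coefficient is therefore a linear function of
% Y_1, ..., Y_N, which is what the "linear functions of the data"
% bullet refers to.
```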

  4. Sums of Squares • SSE and SSTOT (= SSY) play the same role as in one-predictor models • SSTOT = SSE + (SSTOT - SSE) = SSE + SSR • SSE now has N-k-1 degrees of freedom (df), where k = the number of predictors in the model. • SSR now has k df • F* = MSR/MSE = (SSR/k)/MSE will again have an F distribution under H0: all predictor coefficients are 0, with k numerator df and N-k-1 denominator df (formulas collected in the sketch below). • We reject H0 for large values of F*. • The alternative hypothesis is that AT LEAST ONE OF THE COEFFICIENTS IS NON-ZERO.
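Collecting the quantities from this slide in one place (the same definitions, just written out):

```latex
\[
SS_{TOT} \;=\; \sum_{i=1}^{N}\bigl(Y_i-\bar{Y}\bigr)^2, \qquad
SSE \;=\; \sum_{i=1}^{N}\bigl(Y_i-\hat{Y}_i\bigr)^2, \qquad
SSR \;=\; SS_{TOT}-SSE ,
\]
\[
F^{*} \;=\; \frac{MSR}{MSE} \;=\; \frac{SSR/k}{SSE/(N-k-1)}
\;\sim\; F_{k,\;N-k-1}
\quad \text{under } H_0:\ \beta_1=\cdots=\beta_k=0 .
\]
```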

  5. Example using Anscombe data • We will step through the process of fitting the model after the data has been put into a SAS workspace (a code sketch follows). • First, here is part of the data, along with a description of the variables. Note that this data does not really represent a random sample of 51 observations, except perhaps in the sense of one year’s data sampled from many years. However, we can still fit models as long as we think about what hypothetical population we are making inferences about.
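A minimal SAS sketch of this step, assuming the data are already in a data set named ANSCOMBE with the variables used later in the lecture (spend, income, prop18, propurban); the point-and-click tool does the same thing through menus:

```sas
/* Look at part of the data and simple summaries of the variables.
   The data set name ANSCOMBE is an assumption; substitute whatever
   name the import step produced. */
proc print data=anscombe(obs=10);
run;

proc means data=anscombe n mean std min max;
  var spend income prop18 propurban;
run;
```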

  6. Here is the input menu after selecting spend as the ‘Y’ variable and income, prop18, and propurban as predictors:
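The same specification written as code rather than menu choices (a sketch; the data set name ANSCOMBE is assumed):

```sas
/* spend is the response ('Y'); income, prop18 and propurban are the
   predictors. PROC REG prints the analysis-of-variance and
   parameter-estimate tables of the kind shown on the next slides. */
proc reg data=anscombe;
  model spend = income prop18 propurban;
run;
quit;
```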

  7. And here is the Output screen:

  8. The first output tables
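The screenshot itself is not reproduced here, but with N = 51 observations and k = 3 predictors the analysis-of-variance part of the output has this general layout (a sketch using the df rules from slide 4):

```latex
\begin{tabular}{lrlll}
Source     & df & SS          & MS               & F               \\ \hline
Regression & 3  & $SSR$       & $MSR = SSR/3$    & $F^* = MSR/MSE$ \\
Error      & 47 & $SSE$       & $MSE = SSE/47$   &                 \\
Total      & 50 & $SS_{TOT}$  &                  &                 \\
\end{tabular}
```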

  9. The plot of residuals vs predicted values
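A sketch of how the same plot could be produced in code: save the predicted values and residuals to a new data set and plot one against the other (the variable and data set names here are illustrative, not the tool's defaults):

```sas
proc reg data=anscombe;
  model spend = income prop18 propurban;
  output out=anscombe_fit p=pred r=resid;  /* add predicted values and residuals */
run;
quit;

/* Residuals against predicted values, with a reference line at 0 */
proc sgplot data=anscombe_fit;
  scatter x=pred y=resid;
  refline 0 / axis=y;
run;
```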

  10. The next slide shows what the dataset now looks like with the residual and predicted variables added to the data (using default names, which can be changed).
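Continuing the sketch above, the augmented data set can be listed to see the added columns; in code, the P= and R= keywords on the OUTPUT statement choose the names, which is how the defaults can be changed:

```sas
/* Assumes the ANSCOMBE_FIT data set created in the previous sketch */
proc print data=anscombe_fit(obs=5);
  var spend income prop18 propurban pred resid;
run;
```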

  11. The Type I SS table:
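A sketch of how the sequential (Type I) sums of squares can be requested in code, via the SS1 option on the MODEL statement (data set name assumed as before):

```sas
/* Each Type I SS is the reduction in SSE when that predictor is added
   to a model already containing the predictors listed before it. */
proc reg data=anscombe;
  model spend = income prop18 propurban / ss1;
run;
quit;
```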

  12. What happens if we change the order of the predictors?
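A sketch of the experiment the slide asks about: refit with the predictors listed in a different order. The fitted coefficients, SSE, and overall F are unchanged, but the Type I SS change, because each one is computed given only the predictors entered before it.

```sas
proc reg data=anscombe;
  model spend = propurban prop18 income / ss1;  /* same model, different order */
run;
quit;
```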
