
Building a Model


Presentation Transcript


  1. Building a Model Least-Squares Regression Section 3.3

  2. Why Create a Model? • There are two reasons to create a mathematical model for a set of bivariate data. • To predict the response value for a new individual. • To find the “average” response value for any explanatory value.

  3. Which Model is “Best”? • Since we want to use our model to predict response values for given explanatory values, we will define “best” as the model with the smallest error. (We define the “error,” or residual, as the vertical distance from an observed value to the prediction line.) • Residual = Observed – Predicted • When the variables show a linear relationship, we find that the line of “best” fit is the Least-Squares Regression Line.

  4. Least-Squares Regression Line • Why is it called the “Least-Squares Regression Line”? • Consider our hamburger data set. • Notice that our line is an “average” line and that it does not pass through each data point. • This means that our predictions will have some error associated with them.

  5. So Why is it “Best”? • If we find the vertical distance from each actual data point to our prediction line, we can measure the amount of error. But if we simply add these errors together, they sum to zero, since our line is an “average” line. • We can avoid that sum of zero by squaring each of those errors and then finding the sum, as the sketch below illustrates.
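A minimal Python sketch of the idea, using a small hypothetical data set (the values below are illustrative, not the hamburger data): for any least-squares line that includes an intercept, the residuals sum to essentially zero, while the squared residuals give a positive total that the LSRL makes as small as possible.

```python
import numpy as np

# Hypothetical (x, y) data -- illustrative values only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit the least-squares line y-hat = b0 + b1*x
b1, b0 = np.polyfit(x, y, deg=1)

residuals = y - (b0 + b1 * x)       # observed - predicted

print(residuals.sum())              # ~0: positive and negative errors cancel
print((residuals ** 2).sum())       # > 0: the quantity the LSRL minimizes
```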

  6. Smallest “Sum of Squared Error” • We find that the line called the Least-Squares Regression Line has the smallest sum of squared error. • This seems to indicate that this model will be the line that does the best job of predicting.

  7. Equation of the LSRL • The LSRL can be found using the means, standard deviations, and the correlation between our explanatory and response variables: ŷ = b0 + b1x, where ŷ = predicted response value, b0 = y-intercept, b1 = slope, and x = explanatory variable value.

  8. Calculating the LSRL using summary statistics • When all you have is the summary statistics, we can use the following equations to calculate the LSRL, ŷ = b0 + b1x • Where b1 and b0 can be found using: b1 = r(sy/sx) and b0 = ȳ − b1x̄
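A short sketch of these two formulas in Python; the summary statistics below are hypothetical placeholders, so substitute the actual means, standard deviations, and correlation for your data:

```python
# Slope and intercept of the LSRL from summary statistics alone.
x_bar, s_x = 20.0, 8.0     # mean and SD of the explanatory variable (hypothetical)
y_bar, s_y = 430.0, 90.0   # mean and SD of the response variable (hypothetical)
r = 0.96                   # correlation coefficient (hypothetical)

b1 = r * (s_y / s_x)       # slope: b1 = r * (sy / sx)
b0 = y_bar - b1 * x_bar    # y-intercept: b0 = y-bar - b1 * x-bar

print(f"y-hat = {b0:.3f} + {b1:.3f} x")
```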

  9. Finding the LSRL • So with the summary statistics for both fat grams and calories, we can find the line of “best” fit for predicting the number of calories we can expect, on average, for a given number of fat grams.

  10. Describing b0 in context • b0 = the y-intercept: the y-intercept is the value of the response variable when our explanatory variable is zero. Sometimes this has meaning in context and sometimes it has only a mathematical meaning. • Since b0 = 210.9539 (from the regression output), a hamburger with no grams of fat would still have, on average, approximately 211 calories.

  11. Describing b1 in context • b1 = the slope: the slope of the regression line tells us what change in the response variable we expect, on average, for an increase of 1 in the explanatory variable. • Since b1 = 11.0551, we can say that, on average, each additional gram of fat in a hamburger adds approximately 11.0551 calories. A worked prediction using both estimates appears below.
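Putting the two estimates together, a quick worked prediction (the 25-gram fat value is a hypothetical input, not one of the observed hamburgers):

```python
# Predicted calories from fat grams using the fitted LSRL
b0, b1 = 210.9539, 11.0551    # intercept and slope from the regression output

fat = 25.0                    # hypothetical fat content, in grams
calories_hat = b0 + b1 * fat  # y-hat = b0 + b1 * x

print(f"{calories_hat:.1f} calories")   # approximately 487.3 calories
```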

  12. Finding the LSRL with raw data • We can find the LSRL using technology, either our TI calculators or statistical software. • StatCrunch is a web-based statistical program that provides statistical calculations and plots. Its output is very similar to that of most statistical programs.

  13. Least-Squares Regression Output • Simple linear regression results:
      Dependent Variable: Calories
      Independent Variable: Fat
      Calories = 210.95387 + 11.055512 Fat
      Sample size: 7
      R (correlation coefficient) = 0.9606
      R-sq = 0.9228155
      Estimate of error standard deviation: 27.333975
  • Reading the output: the regression equation gives the y-intercept (210.95387) and the slope (11.055512), which also appear in the table of parameter estimates.
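The same fit can be reproduced with statistical software. Here is a sketch using SciPy's linregress; the seven raw (fat, calories) pairs are not shown in the slides, so the arrays below are hypothetical placeholders rather than the actual data:

```python
from scipy.stats import linregress

# Hypothetical placeholder data -- substitute the real seven hamburgers
fat      = [ 9.0, 13.0, 21.0, 30.0, 31.0, 34.0, 39.0]
calories = [260.0, 320.0, 420.0, 530.0, 560.0, 590.0, 640.0]

result = linregress(fat, calories)   # simple linear regression

print(f"Calories = {result.intercept:.5f} + {result.slope:.6f} Fat")
print(f"r = {result.rvalue:.4f}, r-sq = {result.rvalue ** 2:.7f}")
```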

  14. TI-Tips for LSRL • To find the LSRL on a TI-83 or TI-84 calculator, first enter the data into the list editor of the calculator. This can be either named lists or the built-in lists.

  15. From the home screen: • STAT • CALC • 8:LinReg(a+bx) • The arguments for this command are simply to tell the calculator where the explanatory and response values are located. • ENTER • Notice that in addition to the values for the y-intercept and slope, the correlation coefficient, r, is also given.

  16. Is a linear model appropriate? • We now know how to create a linear model, but how do we know that this type of model is the appropriate one? • To answer this question, we look at three things: • Does a scatterplot of the data appear linear? • How strong is the linear relationship, as measured by the correlation coefficient, r? • What does a graph of the residuals (errors in prediction) look like?

  17. Checking for Linearity • As we can see from the scatterplot, the relationship appears fairly linear. • The correlation coefficient for the linear relationship is .9606. • Even though both of these things indicate a linear model, we must check a graph of the residuals to make sure the errors associated with a linear model aren’t systematic in some way.

  18. Residuals • We can look at a graph of the explanatory values (x-values) vs. the errors produced by the LSRL. If there is no pattern present, we can use a linear model to represent the relationship. • However, if a pattern is present, we should investigate other possible models. Patterns to watch for: a parabolic shape indicates the data is not linear; a “trig”-looking (wave) pattern indicates “auto-correlation”; and an increase or decrease in variation is called a megaphone effect. A sketch of a residual plot follows below.
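A sketch of how such a residual plot can be produced, again with hypothetical placeholder data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical placeholder data
x = np.array([ 9.0, 13.0, 21.0, 30.0, 31.0, 34.0, 39.0])
y = np.array([260.0, 320.0, 420.0, 530.0, 560.0, 590.0, 640.0])

b1, b0 = np.polyfit(x, y, deg=1)    # fit the LSRL
residuals = y - (b0 + b1 * x)       # observed - predicted

plt.scatter(x, residuals)
plt.axhline(0, linestyle="--")      # reference line at residual = 0
plt.xlabel("Explanatory variable (x)")
plt.ylabel("Residual (observed - predicted)")
plt.title("Look for patterns: none should appear if a line is appropriate")
plt.show()
```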

  19. Hamburger residuals • Notice that there does not appear to be any pattern to the residuals of the least-squares regression line between the fat grams and calories for fast food hamburgers. This would indicate that a linear model is appropriate.

  20. How Good is our Model? • Although a linear model may be appropriate, we can also evaluate how much of the differences in our response variable can be explained by the differences in the explanatory variable. • The statistic that gives this information is r², the Coefficient of Determination. This statistic helps us to measure the contribution of our explanatory variable in predicting our response variable.

  21. How Good is our Hamburger Model? • Remember, from both our StatCrunch output and our calculator output, we found that r² = .9228 • Approximately 92% of the differences in the number of calories in a hamburger can be explained by the differences in the amount of fat grams. • An alternative way to say this same thing: • Approximately 92% of the differences in the number of calories can be explained by the least-squares regression of calories on fat grams.

  22. So, how good is it? • Well, it may help to know how r² is calculated. Yes, r² is the square of the correlation coefficient r; however, it is useful to see it in a different light. • Remember that our goal is to find a model that helps us to predict the response variable, in this case calories.

  23. Interpreting r² • When r² is close to zero, this indicates that the variable we have chosen to use as a predictor does not contribute much; in other words, it would be just as valuable to use the mean of our response variable as the prediction. • As r² gets closer to 1, this indicates that the explanatory variable is contributing much more to our predictions, and our regression model will be more useful for predictions than just reporting a mean. The sketch below makes this comparison concrete. • Some models include more than one explanatory variable; this type of model is called multiple linear regression, and we’ll leave the study of these models for another course.
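A short sketch of that comparison, with hypothetical placeholder data: r² is exactly the proportion by which the LSRL reduces the squared prediction error relative to simply predicting the mean ȳ for everyone.

```python
import numpy as np

# Hypothetical placeholder data
x = np.array([ 9.0, 13.0, 21.0, 30.0, 31.0, 34.0, 39.0])
y = np.array([260.0, 320.0, 420.0, 530.0, 560.0, 590.0, 640.0])

b1, b0 = np.polyfit(x, y, deg=1)

sse_line = np.sum((y - (b0 + b1 * x)) ** 2)  # squared error using the LSRL
sse_mean = np.sum((y - y.mean()) ** 2)       # squared error using y-bar alone

r_squared = 1 - sse_line / sse_mean          # coefficient of determination
print(f"r-sq = {r_squared:.4f}")             # close to 1: the model helps a lot
```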

  24. Additional Resources • Against All Odds • http://www.learner.org/resources/series65.html • Video #7 Models for Growth • The Practice of Statistics-YMM Pg 137-151 • The Practice of Statistics-YMS Pg 149-165 • The Basic Practice of Statistics-Moore Pg 104-123

  25. What you learned: • Why we create a model • Which model is “best” and why • Finding the LSRL using summary stats • Using technology to find the LSRL • Describing the y-intercept and slope in context • Determining if a LSRL is appropriate • How “good” is our model?
