
Building a Model


Presentation Transcript


  1. Building a Model Least-Squares Regression Section 3.3

  2. Why Create a Model? • There are two reasons to create a mathematical model for a set of bivariate data. • To predict the response value for a new individual. • To find the “average” response value for any explanatory value.

  3. Which Model is “Best”? • Since we want to use our model to predict response values for given explanatory values, we will define “best” as the model in which we have the smallest error. (We will define the “error,” or “residual,” as the vertical distance from an observed value to the prediction line.) • Residual = Observed – Predicted • When the variables show a linear relationship, we find that the line of “best” fit is the Least-Squares Regression Line.

  4. Least-Squares Regression Line • Why is it called the “Least-Squares Regression Line”? • Consider our data set from the “Dream Team.” • Notice that our line is an “average” line and that it does not pass through each piece of data. • This means that our predictions will have some error associated with them.

  5. So Why is it “Best”? • If we find the vertical distance from the actual data point to our prediction line, we can find the amount of error. But if we try to add these errors together, we will find they add to zero since our line is an “average” line. • We can avoid that sum of zero by squaring each of those errors and then finding the sum.

  6. Smallest “Sum of Squared Error” • We find that the line called the Least-Squares Regression Line has the smallest sum of squared error. • This seems to indicate that this model will be the line that does the best job of predicting.
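A minimal sketch (not from the slides) illustrating these two facts with toy data: the LSRL's residuals sum to essentially zero, while the sum of their squares is the quantity the line minimizes. The data values below are made up for illustration only.

```python
import numpy as np

# Toy data, for illustration only (not the Dream Team data)
x = np.array([10.0, 20.0, 25.0, 30.0, 40.0])
y = np.array([5.0, 11.0, 13.0, 20.0, 24.0])

b1, b0 = np.polyfit(x, y, deg=1)         # least-squares slope and intercept
residuals = y - (b0 + b1 * x)            # residual = observed - predicted

print(f"sum of residuals:         {residuals.sum():+.10f}")      # ~0 by construction
print(f"sum of squared residuals: {(residuals ** 2).sum():.4f}")  # what the LSRL minimizes
```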

  7. Equation of the LSRL • The LSRL can be found using the means, standard deviations, and the correlation between our explanatory and response variables. The equation is: yhat = b0 + b1x • Where: • yhat = predicted response value • b0 = y-intercept • b1 = slope • x = explanatory variable value

  8. Calculating the LSRL using summary statistics • When all you have is the summary statistics, we can use the following equations to calculate the coefficients, where b1 and b0 can be found using: • b1 = r(sy/sx) • b0 = ybar – b1 · xbar

  9. Finding the LSRL • So with the summary statistics for both minutes and points, we can find the line of “best” fit for predicting the number of points we can expect, on average, for a given number of minutes played.
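As a quick illustration of these formulas, here is a minimal sketch. The formulas b1 = r(sy/sx) and b0 = ybar – b1·xbar are the ones above; the numeric inputs below are hypothetical stand-ins, since the actual summary statistics for minutes and points are not reproduced in this transcript.

```python
def lsrl_from_summary(xbar, ybar, sx, sy, r):
    """Return (b0, b1) for the LSRL yhat = b0 + b1*x from summary statistics."""
    b1 = r * (sy / sx)       # slope
    b0 = ybar - b1 * xbar    # intercept: the LSRL passes through (xbar, ybar)
    return b0, b1

# Hypothetical summary values, for illustration only
b0, b1 = lsrl_from_summary(xbar=25.0, ybar=14.5, sx=8.0, sy=6.0, r=0.824)
print(f"yhat = {b0:.4f} + {b1:.4f} x")
```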

  10. Describing b0 in context • b0 = the y-intercept: the y-intercept is the value of the response variable when our explanatory variable is zero. Sometimes this has meaning in context and sometimes it has only a mathematical meaning. • Since b0 = –0.8107, this would mean that if a player spent no minutes on the court, he would score –0.8107 points on average. Since it is impossible to score negative points, we can conclude that the y-intercept has no meaning in this situation.

  11. Describing b1 in context • b1 = the slope: the slope of the regression line tells us what change in the response variable we expect, on average, for an increase of 1 in the explanatory variable. • Since b1 = 0.6145, we can conclude that for each additional minute spent on the court, a player would, on average, score approximately 0.6145 more points. • Or equivalently, for each additional 10 minutes, he would score, on average, an additional 6.145 points.
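Putting the intercept and slope together, a short sketch of using the fitted line for prediction. The coefficients come from the regression output on the slides; the 30-minute input is just an example value.

```python
b0, b1 = -0.8107, 0.6145     # intercept and slope from the regression output

def predict_points(minutes):
    """Predicted (average) points for a given number of minutes played."""
    return b0 + b1 * minutes

print(round(predict_points(30), 2))   # about 17.62 points, on average, for 30 minutes
```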

  12. Finding the LSRL with raw data • We can find the LSRL using technology, either our TI calculators or statistical software. • The program called “StatCrunch” is a web-based statistical program that provides statistical calculations and plots. Its output is very similar to most statistical programs.

  13. Least-Squares Regression Output • Simple linear regression results: • Dependent Variable: Points • Independent Variable: Minutes • Regression equation (y-intercept and slope): Points = -0.81066996 + 0.6145467 Minutes • Sample size: 12 • R (correlation coefficient) = 0.824 • R-sq = 0.6790264 • Estimate of error standard deviation: 7.127934 • The output also includes a table of parameter estimates and an analysis of variance table for the regression model.
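For those without StatCrunch, a minimal sketch of producing comparable output with Python's scipy. The minutes/points arrays below are hypothetical, since the transcript does not include the raw Dream Team data; with the real data this would reproduce the output above.

```python
from scipy import stats

# Hypothetical raw data, for illustration only
minutes = [12, 15, 18, 20, 22, 25, 27, 28, 30, 32, 35, 38]
points  = [ 4,  9,  8, 12, 11, 16, 14, 18, 17, 20, 19, 23]

result = stats.linregress(minutes, points)
print(f"Points = {result.intercept:.4f} + {result.slope:.4f} Minutes")
print(f"R (correlation coefficient) = {result.rvalue:.3f}")
print(f"R-sq = {result.rvalue ** 2:.4f}")
```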

  14. TI-Tips for LSRL • To find the LSRL on a TI-83/84 calculator, first enter the data into the list editor of the calculator. This can be either named lists or the built-in lists.

  15. From the home screen: • STAT • CALC • 8:LinReg(a+bx) • The arguments for this command are simply to tell the calculator where the explanatory and response values are located. • ENTER • Notice that in addition to the values for the y-intercept and slope, the correlation coefficient, r, is also given.

  16. Is a linear model appropriate? • We now know how to create a linear model, but how do we know that this type of model is the appropriate one? • To answer this question, we look at 3 things: • Does a scatterplot of the data appear linear? • How strong is the linear relationship, as measured by the correlation coefficient, “r”? • What does a graph of the residuals (errors in prediction) look like?

  17. Checking for Linearity • As we can see from the scatterplot, the relationship appears fairly linear. • The correlation coefficient for the linear relationship is 0.824. • Even though both of these things indicate a linear model, we must check a graph of the residuals to make sure the errors associated with a linear model aren’t systematic in some way.

  18. Residuals • We can look at a graph of the number of minutes (x-values) vs. the errors produced by the LSRL. If there is no pattern present, we can use a linear model to represent the relationship. • However, if a pattern is present, we should investigate other possible models. Patterns to watch for: • A parabolic shape indicates the data is not linear. • A “trig”-looking pattern indicates “auto-correlation.” • An increase or decrease in variation is called a megaphone effect.

  19. Dream Team residuals • Notice that there does not appear to be any pattern to the residuals of the least-squares regression line between the number of minutes spent on the court and the number of points scored. This would indicate that a linear model is appropriate.
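A minimal sketch of drawing such a residual plot with matplotlib, reusing the hypothetical minutes/points arrays from the earlier sketch (not the actual Dream Team data):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data, for illustration only
minutes = np.array([12, 15, 18, 20, 22, 25, 27, 28, 30, 32, 35, 38])
points  = np.array([ 4,  9,  8, 12, 11, 16, 14, 18, 17, 20, 19, 23])

b1, b0 = np.polyfit(minutes, points, deg=1)
residuals = points - (b0 + b1 * minutes)   # observed - predicted

plt.scatter(minutes, residuals)
plt.axhline(0, linestyle="--")             # reference line at residual = 0
plt.xlabel("Minutes")
plt.ylabel("Residual")
plt.title("Residuals vs. minutes played")
plt.show()
```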

  20. How Good is our Model? • Although a linear model may be appropriate, we can also evaluate how much of the differences in our response variable can be explained by the differences in the explanatory variable. • The statistic that gives this information is r2. This statistic helps us to measure the contribution of our explanatory variable in predicting our response variable.

  21. How Good is our Dream Team Model? • Remember from both our StatCrunch output and our calculator output, we found that r2 = 0.68 • Approximately 68% of the differences in the number of points scored by the players can be explained by the differences in the number of minutes the players spent on the court. • An alternative way to say this same thing: • Approximately 68% of the differences in the number of points scored by the players can be explained by the least-squares regression of points on minutes.

  22. So, how good is it? • Well, it may help to know how r2 is calculated. Yes, r2 is the square of the correlation coefficient r; however, it is useful to see it in a different light. • Remember that our goal is to find a model that helps us to predict the response variable, in this case points scored.

  23. Understanding r2 • One way to describe the number of points scored for players is to simply give the average number of points scored. • Notice, that this line, as with our regression line, has some error associated with it.

  24. The error about the mean is found by finding the vertical distance from each data point to the horizontal line at the average response value. • To avoid a sum of zero, we again square each of these distances, then find the sum, called the sum of squares total: SST • For our example: we find that SST = 1582.9167 • Remember that our LSRL also measured the vertical distances from each data point to the prediction line and found the line that minimizes this sum; this is the sum of squared error: SSE • For our example: we find that SSE = 508.0744

  25. Now if the explanatory variable we have chosen really does NOT help us in predicting our response variable, then the sum of squares total (SST) will be very close to the sum of squares error (SSE). • The difference between these two is the amount of the variation in the response variable that can be explained by the regression line of y on x. (Sometimes this is referred to as the sum of squares regression (SSR) or the sum of squares model (SSM).)

  26. r2 represents the ratio of the Sum of Squares Regression (Model) to the Sum of Squares Total: r2 = SSR/SST = (SST – SSE)/SST • It is the proportion of variability that is explained by the least-squares regression of y on x. • Notice that our regression output gives us the ability to calculate r2 directly from the error measurements, as in the quick check below.
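Using the sums of squares reported earlier (SST = 1582.9167, SSE = 508.0744), a quick check of that ratio:

```python
SST = 1582.9167            # sum of squares total, from the slides
SSE = 508.0744             # sum of squared error, from the slides
SSR = SST - SSE            # sum of squares regression (model)

print(f"r2 = {SSR / SST:.4f}")   # ~0.6790, matching the R-sq in the regression output
```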

  27. Interpreting r2 • When r2 is close to zero, this indicates that the variable we have chosen to use as a predictor does not contribute much; in other words, it would be just as valuable to use the mean of our response variable. • As r2 gets closer to 1, this indicates that the explanatory variable is contributing much more to our predictions, and our regression model will be more useful for predictions than just reporting a mean. • Some models include more than one explanatory variable; this type of model is called multiple linear regression, and we’ll leave the study of these models for another course.

  28. Additional Resources • Against All Odds • http://www.learner.org/resources/series65.html • Video #7 Models for Growth • The Practice of Statistics-YMM Pg 137-151 • The Practice of Statistics-YMS Pg 149-165

  29. What you learned: • Why we create a model • Which model is “best” and why • Finding the LSRL using summary stats • Using technology to find the LSRL • Describing the y-intercept and slope in context • Determining if a LSRL is appropriate • How “good” is our model?
