Least Squares Regression and Multiple Regression. Regression: A Simplified Example.
Let’s find the best-fitting equation for predicting new, as-yet-unknown scores on Y from scores on X. The regression equation takes the form Y = a + bX + e, where Y is the dependent or criterion variable we’re trying to predict; a is the intercept, the point where the regression line crosses the Y axis; X is the independent or predictor variable; b is the weight by which we multiply the value of X (it is the slope of the regression line: how many units Y increases or decreases for every one-unit change in X); and e is an error term (basically an estimate of how much our prediction is “off”). a and b are often called “regression coefficients.” When Y is an estimated value it is usually symbolized as Y’
This table gives us the regression coefficients. Look in the column labeled Unstandardized Coefficients. Two values of B are provided. The first one, labeled the constant, is the intercept a, or the point at which the regression line crosses the Y axis. The second one, X, is the unstandardized regression weight, or the b from our regression equation. So this output tells us that the best-fitting equation for predicting Y from X is Y = 2 + (4)X. Let’s check that out with a known value of X and Y. According to the equation, if X is 3, Y should be 2 + 4(3), or 14. How about when X = 5? Then Y should be 2 + 4(5), or 22.
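The arithmetic above can be sketched in a few lines of code. The miniature dataset below is hypothetical, constructed so the points lie exactly on Y = 2 + 4X; the standard least-squares formulas then recover a = 2 and b = 4, and the prediction for X = 5 comes out to 22:

```python
# Least-squares fit of Y = a + bX by hand, on hypothetical data
# that lie exactly on Y = 2 + 4X (so e = 0 for every case).
from statistics import mean

x = [1, 2, 3, 4]
y = [6, 10, 14, 18]          # exactly 2 + 4*x

xbar, ybar = mean(x), mean(y)
# slope: b = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar          # the line passes through (xbar, ybar)

print(a, b)                  # 2.0 4.0
print(a + b * 5)             # predicted Y when X = 5 -> 22.0
```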
The constant representing the intercept is the value that the dependent variable would take when all the predictors are at a value of zero. In some treatments this is called B0 instead of a
In the bivariate case, where there is only one X and one Y, the standardized beta weight will equal the correlation coefficient. Let’s confirm this by seeing what would happen if we convert our raw scores to Z scores.
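A minimal sketch of that claim, using hypothetical data (any small dataset works the same way):

```python
# Check that the standardized slope equals Pearson's r in the
# bivariate case. Z scores use the population SD (pstdev).
from statistics import mean, pstdev

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]

def zscores(v):
    m, s = mean(v), pstdev(v)
    return [(vi - m) / s for vi in v]

zx, zy = zscores(x), zscores(y)

# Slope of ZY on ZX: both means are 0, so b = sum(zx*zy) / sum(zx^2)
b_std = sum(u * v for u, v in zip(zx, zy)) / sum(u * u for u in zx)

# Pearson r computed directly from the raw scores
xbar, ybar = mean(x), mean(y)
r = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    (len(x) * pstdev(x) * pstdev(y))

print(b_std, r)   # the two values are equal (here, 0.8)
```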
From the scatterplot it would appear that there is a strong positive correlation between X and Y (as daily caloric intake increases, life expectancy increases), and X can be expected to be a good predictor of as-yet-unknown cases of Y. Note, however, that there is a lot of scatter about the line, and we may need additional predictors to “soak up” some of the variance left over after this particular X has done its work. Also consider loess regression: “In the loess method, weighted least squares is used to fit linear or quadratic functions of the predictors at the centers of neighborhoods. The radius of each neighborhood is chosen so that the neighborhood contains a specified percentage of the data points.”
The significance test of the constant is of little use. It just says that the constant differs significantly from zero (i.e., when X is zero, Y is not zero).
This is a standardized partial regression coefficient or beta weight
If the data were expressed in standard scores, the equation would be ZY = .775ZX + e, and .775 is also the correlation between X and Y. This is a standard score regression equation
These weights are called unstandardized partial regression coefficients or weights
Residual SS is the sum of the squared deviations between the known values of Y and the values of Y predicted by the equation
Regression SS is the sum of the squared deviations of the predicted values of Y about the mean of Y
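The two sums of squares can be computed directly, and together they account for the total SS of Y. A sketch on hypothetical data:

```python
# Residual SS, regression SS, and their sum, on hypothetical data.
from statistics import mean

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]

xbar, ybar = mean(x), mean(y)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
yhat = [a + b * xi for xi in x]          # predicted values of Y

ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))  # residual SS
ss_reg = sum((yh - ybar) ** 2 for yh in yhat)            # regression SS
ss_tot = sum((yi - ybar) ** 2 for yi in y)               # total SS of Y

print(ss_reg, ss_res, ss_tot)   # ss_reg + ss_res equals ss_tot
```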
Looking at the standard error of the standardized coefficient, we can see that the estimate R (which is also the standardized version of b) is .775. Thus we could say with 95% confidence that if ZX is the Z score corresponding to a particular calorie level, predicted life expectancy is .775(ZX) plus or minus 7.255 years
SEE = the SD of Y multiplied by the square root of the coefficient of nondetermination (1 – r²). It says what an error standard score of 1 is equal to in terms of Y units
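A quick check of the formula on hypothetical data. Note this is the descriptive (divide-by-n) version of the SEE; SPSS divides the residual SS by n – 2, so its printed value will differ slightly in small samples:

```python
# SEE = SD(Y) * sqrt(1 - r^2), checked against the residuals directly.
from math import sqrt
from statistics import mean, pstdev

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
n = len(x)

xbar, ybar = mean(x), mean(y)
r = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    (n * pstdev(x) * pstdev(y))

see_formula = pstdev(y) * sqrt(1 - r ** 2)

# Direct route: root-mean-square of the residuals from the fitted line
b = r * pstdev(y) / pstdev(x)
a = ybar - b * xbar
see_direct = sqrt(sum((yi - (a + b * xi)) ** 2
                      for xi, yi in zip(x, y)) / n)

print(see_formula, see_direct)   # the two routes agree
```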
* Beta X1Y.X2 = (r X1Y – (r X2Y)(r X1X2)) / (1 – r²X1X2)
Substituting the correlations we already have into the formula, we find that the beta weight for the predictive effect of variable X1 on Y is equal to (.776 – (.869)(.682)) / (1 – (.682)²) = .342. To compute the second weight, Beta X2Y.X1, we just switch the first and second terms in the numerator.
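The hand calculation can be verified with the correlations quoted above (all three values come from the text; small rounding differences are expected because the correlations themselves are rounded):

```python
# Beta weights from the three correlations quoted in the text:
# r(X1,Y) = .776, r(X2,Y) = .869, r(X1,X2) = .682.
r_x1y, r_x2y, r_x1x2 = 0.776, 0.869, 0.682

denom = 1 - r_x1x2 ** 2
beta1 = (r_x1y - r_x2y * r_x1x2) / denom   # X1 on Y, X2 partialled out
beta2 = (r_x2y - r_x1y * r_x1x2) / denom   # X2 on Y, X1 partialled out

# beta1 comes out near .342 and beta2 near .636, matching the text
# up to rounding of the input correlations
print(round(beta1, 2), round(beta2, 2))
```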
Now let’s see that in the context of an SPSS-calculated multiple regression
*Read this as the Beta weight for the regression of Y on X1 when the effects of X2 have been removed
Above are the raw (unstandardized) and standardized regression weights for the regression of female life expectancy on daily calorie intake and percentage of people who read. Consistent with our hand calculation, the standardized regression coefficient (beta weight) for daily caloric intake is .342. The beta weight for percentage of people who read is much larger, .636. What this weight means is that for every increase of one standard deviation on the people-who-read variable, Y (female life expectancy) will increase by .636 standard deviations. Note that both beta coefficients are significant at p < .001
Above is the model summary, which has some important statistics. It gives us R and R square for the regression of Y (female life expectancy) on the two predictors. R is .905, which is a very high correlation. R square tells us what proportion of the variation in female life expectancy is explained by the two predictors, a very high .818. It gives us the standard error of estimate, which we can use to put confidence intervals around the unstandardized regression coefficients
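One standard identity connects these pieces: R² equals the sum of each beta weight times its correlation with Y, i.e. R² = Beta1·r(X1,Y) + Beta2·r(X2,Y). Plugging in the correlations quoted earlier reproduces the .818 in the model summary:

```python
# Recover R^2 from the beta weights and the validities, using the
# correlations quoted in the text.
r_x1y, r_x2y, r_x1x2 = 0.776, 0.869, 0.682

denom = 1 - r_x1x2 ** 2
beta1 = (r_x1y - r_x2y * r_x1x2) / denom
beta2 = (r_x2y - r_x1y * r_x1x2) / denom

r_sq = beta1 * r_x1y + beta2 * r_x2y
print(round(r_sq, 3))   # about .818, matching the model summary
```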
Next we look at the F test of the significance of the regression equation, Y = .342 X1 + .636 X2 (in standardized form). Is this so much better a predictor of female life expectancy (Y) than simply using the mean of Y that the difference is statistically significant? The F test is a ratio of the mean square for the regression equation to the mean square for the “residual” (the departures of the actual scores on Y from what the regression equation predicted). In this case we have a very large value of F, which is significant at p < .001. Thus it is reasonable to conclude that our regression equation is a significantly better predictor than the mean of Y.
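The F ratio itself is easy to compute by hand. A sketch on a hypothetical one-predictor dataset (with two predictors only the degrees of freedom change: k = 2 and residual df = n – 3):

```python
# F = MS_regression / MS_residual, with df = k and n - k - 1.
from statistics import mean

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
n, k = len(x), 1                 # one predictor

xbar, ybar = mean(x), mean(y)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
yhat = [a + b * xi for xi in x]

ss_reg = sum((yh - ybar) ** 2 for yh in yhat)
ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))

ms_reg = ss_reg / k              # regression mean square
ms_res = ss_res / (n - k - 1)    # residual mean square
f_ratio = ms_reg / ms_res
print(f_ratio)                   # compare to an F(k, n-k-1) table for p
```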
Finally, your output provides confidence intervals around the unstandardized regression coefficients. Thus we can say with 95% confidence that the unstandardized weight to apply to daily calorie intake to predict female life expectancy ranges between .004 and .010, and that the unstandardized weight to apply to percentage of people who read ranges between .247 and .383.
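The interval is just b ± t × SE(b). The b and SE(b) below are hypothetical stand-ins (the output table is not reproduced here), and t ≈ 1.96 assumes a large residual df; with those inputs the interval comes out near the .004-to-.010 range quoted for calorie intake:

```python
# 95% CI around an unstandardized weight: b +/- t_crit * SE(b).
# b and se_b are hypothetical values chosen for illustration;
# t_crit = 1.96 is the large-sample critical value.
b, se_b = 0.007, 0.0015
t_crit = 1.96

lo, hi = b - t_crit * se_b, b + t_crit * se_b
print(round(lo, 3), round(hi, 3))   # an interval of roughly .004 to .010
```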
In the case of our two predictors, there is some indication of multicollinearity but not enough to throw out one of the variables
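With two predictors, a common index of multicollinearity is the variance inflation factor, VIF = 1 / (1 – r²), where r is the correlation between the predictors. Using r(X1,X2) = .682 from the hand calculation:

```python
# Variance inflation factor for two correlated predictors.
r_x1x2 = 0.682
vif = 1 / (1 - r_x1x2 ** 2)
print(round(vif, 2))   # about 1.87, well under common cutoffs like 10
```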