CHAPTER 3: INTRODUCTORY LINEAR REGRESSION
CHAPTER OUTLINE:
3.1 Simple Linear Regression
- Curve fitting
- Inferences about estimated parameters
- Adequacy of the model
- Linear correlation
3.2 Multiple Linear Regression
Introduction:
• Regression is a statistical procedure for establishing the relationship between two or more variables.
• This is done by fitting a linear equation to the observed data.
• The researcher then uses the regression line to see the trend and to predict values for the data.
• There are two types of relationship:
- Simple (two variables)
- Multiple (more than two variables)
3.1 The Simple Linear Regression Model
• The model is an equation that describes a dependent variable (Y) in terms of an independent variable (X) plus random error:
Y = β₀ + β₁X + ε
where β₀ = intercept of the line with the Y-axis, β₁ = slope of the line, and ε = random error.
• The random error ε is the difference between an observed data point and the deterministic value on the line.
• The regression line is estimated from the collected data by fitting a straight line to the data set and obtaining the equation of that line:
ŷ = b₀ + b₁x
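To make the role of each term concrete, here is a minimal Python sketch that simulates data from the model. The intercept, slope, and noise level are arbitrary illustrative choices, not values from this chapter.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

beta0, beta1 = 5.0, 2.0           # hypothetical intercept and slope
x = np.linspace(0, 10, 50)        # values of the independent variable X
e = rng.normal(0, 1.5, size=50)   # random error: mean 0, constant spread

# Simple linear regression model: deterministic line plus random error
y = beta0 + beta1 * x + e
```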
Example 3.1:
1) A nutritionist studying weight loss programs might want to find out whether reducing carbohydrate intake can help a person lose weight.
a) X is the carbohydrate intake (independent variable).
b) Y is the weight (dependent variable).
2) An entrepreneur might want to know whether increasing the cost of packaging his new product will have an effect on the sales volume.
a) X is the cost of packaging.
b) Y is the sales volume.
3.1.1 CURVE FITTING (SCATTER PLOTS)
• A scatter plot is a graph of ordered pairs (x, y).
• The purpose of a scatter plot is to describe the nature of the relationship between the independent variable X and the dependent variable Y in a visual way.
• The independent variable x is plotted on the horizontal axis and the dependent variable y on the vertical axis.
SCATTER DIAGRAMS
[Three scatter diagrams of E(y) against x, each showing a regression line with intercept b₀:]
• Positive linear relationship: slope b₁ is positive.
• Negative linear relationship: slope b₁ is negative.
• No relationship: slope b₁ is 0 (the line is horizontal at the intercept b₀).
LINEAR REGRESSION MODEL
• A linear regression line can be developed from a freehand plot of the data, as sketched below.
Example 3.2: The given table contains values for two variables, X and Y. Plot the given data and make a freehand estimated regression line.
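Since the table for Example 3.2 is not reproduced here, the sketch below uses made-up (x, y) pairs purely to show the plotting step; the dashed line is an eyeballed (freehand) estimate, not a least squares fit.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data standing in for the table in Example 3.2
x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.1])

plt.scatter(x, y, label="observed data")                   # x horizontal, y vertical
plt.plot(x, 0.2 + 1.95 * x, "r--", label="freehand line")  # eyeballed slope and intercept
plt.xlabel("x (independent variable)")
plt.ylabel("y (dependent variable)")
plt.legend()
plt.show()
```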
3.1.2 INFERENCES ABOUT ESTIMATED PARAMETERS
Least Squares Method
• The least squares method is commonly used to determine the values of b₀ and b₁ that give the best fit of the estimated regression line to the sample data points.
• The straight line fitted to the data set is the line ŷ = b₀ + b₁x.
LEAST SQUARES METHOD
Theorem: Given the sample data (x₁, y₁), (x₂, y₂), ..., (xₙ, yₙ), the coefficients of the least squares line are:
i) y-intercept for the estimated regression equation:
b₀ = ȳ − b₁x̄
where x̄ and ȳ are the means of x and y respectively.
LEAST SQUARES METHOD
ii) Slope for the estimated regression equation:
b₁ = S_xy / S_xx
where
S_xy = Σxy − (Σx)(Σy)/n and S_xx = Σx² − (Σx)²/n.
LEAST SQUARES METHOD
• Given any value x of the independent variable, the predicted value of the dependent variable, ŷ, can be found by substituting x into the equation ŷ = b₀ + b₁x.
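The formulas above translate directly into code. The following sketch implements them with NumPy; the sample numbers at the bottom are arbitrary, chosen only to demonstrate a prediction.

```python
import numpy as np

def least_squares_line(x, y):
    """Return (b0, b1) for the fitted line y-hat = b0 + b1 * x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    s_xx = np.sum(x**2) - np.sum(x)**2 / n            # S_xx
    s_xy = np.sum(x * y) - np.sum(x) * np.sum(y) / n  # S_xy
    b1 = s_xy / s_xx                                  # slope
    b0 = y.mean() - b1 * x.mean()                     # intercept
    return b0, b1

# Arbitrary demonstration data; predict y at x = 6
b0, b1 = least_squares_line([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
print(b0 + b1 * 6)   # substitute x into y-hat = b0 + b1 * x
```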
Example 3.3: Students' scores in history
The data below represent scores obtained by ten primary school students before and after they were taken on a tour to the museum (which is supposed to increase their interest in history).
a) Fit a linear regression model with "before" as the explanatory variable and "after" as the dependent variable.
b) Predict the score a student would obtain "after" if he scored 60 marks "before".
3.1.3 ADEQUACY OF THE MODEL: COEFFICIENT OF DETERMINATION (r²)
• The coefficient of determination is a measure of the variation of the dependent variable (Y) that is explained by the regression line and the independent variable (X).
• The symbol for the coefficient of determination is r² (or R²).
• If r = 0.90, then r² = 0.81. It means that 81% of the variation in the dependent variable (Y) is accounted for by the variation in the independent variable (X).
• The rest of the variation, 0.19 or 19%, is unexplained and is called the coefficient of nondetermination.
• The formula for the coefficient of nondetermination is 1 − r².
COEFFICIENT OF DETERMINATION (r²)
• Relationship among SST, SSR, and SSE:
SST = SSR + SSE
where:
SST = total sum of squares = Σ(y − ȳ)²
SSR = sum of squares due to regression = Σ(ŷ − ȳ)²
SSE = sum of squares due to error = Σ(y − ŷ)²
• The coefficient of determination is:
r² = SSR / SST
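The three sums of squares and r² can be computed from any fitted line, as in this sketch. The b₀ and b₁ in the demonstration call are the values the earlier least squares sketch produces for the same arbitrary data.

```python
import numpy as np

def r_squared(x, y, b0, b1):
    """Return (SST, SSR, SSE, r^2) for the fitted line y-hat = b0 + b1 * x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    y_hat = b0 + b1 * x
    sst = np.sum((y - y.mean())**2)      # total sum of squares
    ssr = np.sum((y_hat - y.mean())**2)  # sum of squares due to regression
    sse = np.sum((y - y_hat)**2)         # sum of squares due to error
    return sst, ssr, sse, ssr / sst      # note SST = SSR + SSE

# Arbitrary demonstration values
print(r_squared([1, 2, 3, 4, 5], [2, 4, 5, 4, 5], b0=2.2, b1=0.6))
```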
Example 3.4
• If r = 0.919, find the value of r² and explain the value.
Solution: r² = 0.919² ≈ 0.84. It means that 84% of the variation in the dependent variable (Y) is explained by the variation in the independent variable (X).
3.1.4 Linear Correlation (r)
• Correlation measures the strength of the linear relationship between two variables.
• Also known as Pearson's product moment coefficient of correlation.
• The symbol for the sample coefficient of correlation is r; for the population it is ρ.
• Formula:
r = S_xy / √(S_xx · S_yy)
or, equivalently,
r = ±√(r²), taking the sign of the slope b₁.
Properties of r:
• Values of r close to 1 imply a strong positive linear relationship between x and y.
• Values of r close to −1 imply a strong negative linear relationship between x and y.
• Values of r close to 0 imply little or no linear relationship between x and y.
Refer to Example 3.3: Students' scores in history
c) Calculate the value of r and interpret its meaning.
Solution:
r = +0.9416
(The sign of b₁ in the fitted equation is "+", so after calculating r² you can also use the second formula: r = +√(r²).)
Thus, there is a strong positive linear relationship between the scores obtained before (x) and after (y).
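A sketch of the first formula follows; since the actual scores for Example 3.3 are not reproduced in these notes, no data are hard-coded.

```python
import numpy as np

def pearson_r(x, y):
    """Sample correlation r = S_xy / sqrt(S_xx * S_yy)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    s_xx = np.sum(x**2) - np.sum(x)**2 / n
    s_yy = np.sum(y**2) - np.sum(y)**2 / n
    s_xy = np.sum(x * y) - np.sum(x) * np.sum(y) / n
    return s_xy / np.sqrt(s_xx * s_yy)

# Second route: r = (sign of b1) * sqrt(r^2); for Example 3.3,
# +sqrt(0.8866) = +0.9416.
```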
Assumptions About the Error Term ε
1. The error ε is a random variable with mean zero.
2. The variance of ε, denoted by σ², is the same for all values of the independent variable.
3. The values of ε are independent.
4. The error ε is a normally distributed random variable.
3.1.5 TEST OF SIGNIFICANCE
• To determine whether X provides information in predicting Y, we proceed with testing a hypothesis.
• Two tests are commonly used:
i) t test
ii) F test
1) t-Test
1. Determine the hypotheses.
H₀: β₁ = 0 (no linear relationship)
H₁: β₁ ≠ 0 (a linear relationship exists)
2. Determine the critical value t(α/2, n−2) at the level of significance α (or use the p-value).
3. Compute the test statistic: t = b₁ / s(b₁), where s(b₁) is the estimated standard error of the slope.
4. Determine the rejection rule.
Reject H₀ if t < −t(α/2, n−2) or t > t(α/2, n−2), or if p-value < α.
5. Conclusion.
If H₀ is rejected, there is a significant relationship between variables X and Y.
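scipy.stats.linregress carries out exactly this test, returning the slope's standard error and the two-sided p-value for H₀: β₁ = 0. The scores below are invented stand-ins for Example 3.3's data, which are not reproduced here.

```python
from scipy import stats

# Hypothetical "before" (x) and "after" (y) scores for ten students
x = [45, 52, 48, 60, 55, 63, 58, 70, 66, 72]
y = [50, 58, 55, 66, 60, 70, 64, 78, 72, 80]

res = stats.linregress(x, y)
t = res.slope / res.stderr   # test statistic t = b1 / s(b1)
print(t, res.pvalue)         # reject H0: beta1 = 0 when p-value < alpha
```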
2) F-Test
1. Determine the hypotheses.
H₀: β₁ = 0 (no linear relationship)
H₁: β₁ ≠ 0 (a linear relationship exists)
2. Specify the level of significance and the critical value: F(α) with 1 degree of freedom (df) in the numerator and n − 2 degrees of freedom in the denominator.
3. Compute the test statistic: F = MSR/MSE.
4. Determine the rejection rule.
Reject H₀ if F > F(α), or if p-value < α.
5. Conclusion.
If H₀ is rejected, there is a significant relationship between variables X and Y.
Refer to Example 3.3: Students' scores in history
d) Test whether the scores before and after the trip are related. Use α = 0.05.
Solution:
1. H₀: β₁ = 0 (no linear relationship); H₁: β₁ ≠ 0 (a linear relationship exists)
2. α = 0.05; critical value t(0.025, 8) = 2.306, since df = n − 2 = 8 for the ten students.
3. Test statistic: t = b₁ / s(b₁); equivalently, t = r√(n−2) / √(1−r²) ≈ 7.9, using r = 0.9416 from part (c).
4. Rejection rule: reject H₀ if t < −2.306 or t > 2.306; here 7.9 > 2.306.
5. Conclusion: we reject H₀. The score before (x) is linearly related to the score after (y) the trip.
ANALYSIS OF VARIANCE (ANOVA)
• The value of the test statistic F for an ANOVA test is calculated as:
F = MSR / MSE
where MSR = SSR/1 is the mean square due to regression and MSE = SSE/(n − 2) is the mean square error.
• To calculate MSR and MSE, first compute the regression sum of squares (SSR) and the error sum of squares (SSE).
ANALYSIS OF VARIANCE (ANOVA)
• General form of the ANOVA table for simple linear regression:
Source of Variation | df | Sum of Squares | Mean Square | F
Regression | 1 | SSR | MSR = SSR/1 | F = MSR/MSE
Error | n − 2 | SSE | MSE = SSE/(n − 2) |
Total | n − 1 | SST | |
• ANOVA Test
1) Hypotheses: H₀: β₁ = 0; H₁: β₁ ≠ 0
2) Select the distribution to use: the F distribution
3) Calculate the value of the test statistic: F
4) Determine the rejection and nonrejection regions
5) Make a decision: reject H₀ or fail to reject H₀
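A minimal sketch that assembles these ANOVA quantities for a fitted simple regression line; it assumes b₀ and b₁ have already been estimated by least squares.

```python
import numpy as np

def anova_f(x, y, b0, b1):
    """Return (SSR, SSE, MSR, MSE, F) for a simple linear regression fit."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    y_hat = b0 + b1 * x
    ssr = np.sum((y_hat - y.mean())**2)    # regression sum of squares
    sse = np.sum((y - y_hat)**2)           # error sum of squares
    msr = ssr / 1                          # regression df = 1
    mse = sse / (n - 2)                    # error df = n - 2
    return ssr, sse, msr, mse, msr / mse   # compare F with F(alpha; 1, n-2)
```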
Example 3.5
The manufacturer of Cardio Glide exercise equipment wants to study the relationship between the number of months since a Cardio Glide was purchased and the length of time the equipment was used last week.
a) Determine the regression equation.
b) At α = 0.01, test whether there is a linear relationship between the variables.
Solution (a): Regression equation:
Solution (b):
1) Hypotheses: H₀: β₁ = 0; H₁: β₁ ≠ 0
2) F-distribution table: critical value F(0.01; 1, n − 2) = 11.2586
3) Test statistic: F = MSR/MSE = 17.303; or, using the p-value approach, p-value = 0.003
4) Rejection region: since the F statistic exceeds the table value (17.303 > 11.2586), we reject H₀; equivalently, since p-value = 0.003 < 0.01, we reject H₀.
5) Thus, there is a linear relationship between the variables (months X and hours Y).
3.2 MULTIPLE LINEAR REGRESSION
• In multiple regression there are several independent variables (X) and one dependent variable (Y).
• The multiple regression model:
y = β₀ + β₁x₁ + β₂x₂ + ... + β_p x_p + ε
• This equation describes how the dependent variable y is related to the independent variables x₁, x₂, ..., x_p,
where β₀, β₁, ..., β_p are the parameters, ε is a random variable called the error term, and x₁, x₂, ..., x_p are the independent variables.
MULTIPLE REGRESSION MODEL
• Multiple regression analysis is used when a statistician thinks there are several independent variables contributing to the variation of the dependent variable.
• This analysis can then be used to increase the accuracy of predictions for the dependent variable compared with using one independent variable alone.
Estimated Multiple Regression Equation
• The estimated multiple regression equation is:
ŷ = b₀ + b₁x₁ + b₂x₂ + ... + b_p x_p
• In multiple regression analysis, we interpret each regression coefficient as follows: bᵢ represents an estimate of the change in y corresponding to a 1-unit increase in xᵢ when all other independent variables are held constant.
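A least squares sketch for the multiple case using NumPy; the data are invented purely to illustrate, and the printed b₁ is read as the change in y per 1-unit increase in x₁ with x₂ held constant.

```python
import numpy as np

# Invented data: dependent variable y with two predictors x1, x2
x1 = np.array([1, 2, 3, 4, 5, 6])
x2 = np.array([2, 1, 4, 3, 6, 5])
y = np.array([6, 8, 13, 14, 20, 21])

# Design matrix: a column of ones estimates the intercept b0
X = np.column_stack([np.ones_like(x1, dtype=float), x1, x2])

b, *_ = np.linalg.lstsq(X, y, rcond=None)   # least squares estimates
print(b)   # [b0, b1, b2]; b1 = change in y per unit x1, x2 held constant
```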
MULTIPLE COEFFICIENT OF DETERMINATION (R²)
• As with simple regression, R² is the coefficient of multiple determination: the proportion of variation explained by the regression model.
• Formula: R² = SSR / SST
MULTIPLE CORRELATION COEFFICIENT (R)
• In multiple regression, as in simple regression, the strength of the relationship between the independent variables and the dependent variable is measured by the correlation coefficient R = √(R²).
MODEL ASSUMPTIONS
• The errors are normally distributed with mean 0 and variance Var(ε) = σ².
• The errors are statistically independent. Thus the error for any value of Y is unaffected by the error for any other Y-value.
• The X-variables are linear additive (i.e., can be summed).
ANALYSIS OF VARIANCE (ANOVA)
• General form of the ANOVA table for multiple regression:
Source of Variation | df | Sum of Squares | Mean Square | F
Regression | p | SSR | MSR = SSR/p | F = MSR/MSE
Error | n − p − 1 | SSE | MSE = SSE/(n − p − 1) |
Total | n − 1 | SST | |
[Excel's ANOVA output reports these same quantities, including SSR and SST in the SS column.]
TEST OF SIGNIFICANCE
• In simple linear regression, the F and t tests provide the same conclusion.
• In multiple regression, the F and t tests have different purposes.
• The F test is used to determine whether a significant relationship exists between the dependent variable and the set of all the independent variables. The F test is referred to as the test for overall significance.
• The t test is used to determine whether each of the individual independent variables is significant. A separate t test is conducted for each of the independent variables in the model. We refer to each of these t tests as a test for individual significance.
Testing for Significance: F Test (Overall Significance)
Hypotheses:
H₀: β₁ = β₂ = ... = β_p = 0
H₁: one or more of the parameters is not equal to zero.
Test statistic: F = MSR/MSE
Rejection rule: reject H₀ if p-value < α or if F > F(α), where F(α) is based on an F distribution with p df in the numerator and n − p − 1 df in the denominator.
Testing for Significance: t Test (Individual Parameters)
Hypotheses:
H₀: βᵢ = 0
H₁: βᵢ ≠ 0
Test statistic: t = bᵢ / s(bᵢ)
Rejection rule: reject H₀ if p-value < α, or if t < −t(α/2) or t > t(α/2), where t(α/2) is based on a t distribution with n − p − 1 degrees of freedom.
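Both tests are reported by statsmodels' OLS output. The sketch below uses invented numbers patterned after the Butler Trucking example that follows (miles, deliveries, travel time); the actual data table is not reproduced in these notes.

```python
import numpy as np
import statsmodels.api as sm

# Invented predictors (x1 = miles, x2 = deliveries) and response (y = hours)
X = np.column_stack([[100, 50, 100, 100, 50, 80, 75, 65, 90, 90],
                     [4, 3, 4, 2, 2, 2, 3, 4, 3, 2]])
y = [9.3, 4.8, 8.9, 6.5, 4.2, 6.2, 7.4, 6.0, 7.6, 6.1]

res = sm.OLS(y, sm.add_constant(X)).fit()
print(res.fvalue, res.f_pvalue)   # overall F test: H0: beta1 = beta2 = 0
print(res.tvalues, res.pvalues)   # individual t tests: H0: beta_i = 0
```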
Example: The Butler Trucking Company, an independent trucking company, makes deliveries throughout southern California. The managers want to estimate the total daily travel time for their drivers. They believe the total daily travel time is closely related to the number of miles traveled in making the deliveries and to the number of deliveries made.
a) Determine whether there is a relationship among the variables, using α = 0.05.
b) Use the t test to determine the significance of each independent variable. What is your conclusion at the 0.05 level of significance?
Solution:
a) Hypothesis statement: H₀: β₁ = β₂ = 0; H₁: at least one βᵢ ≠ 0
Test statistic: F = MSR/MSE = 32.88
Rejection region: reject H₀ if F > F(0.05) = 4.74.
Since 32.88 > 4.74, we reject H₀ and conclude that there is a significant relationship between travel time (Y) and the two independent variables, miles traveled and number of deliveries.
Solution:
b) Hypothesis statement: H₀: β₁ = 0; H₁: β₁ ≠ 0
Test statistic: t = b₁ / s(b₁) = 6.18
Rejection region: reject H₀ if t < −2.365 or t > 2.365, where 2.365 is t(0.025) with n − p − 1 degrees of freedom.
Since 6.18 > 2.365, we reject H₀ and conclude that there is a significant relationship between travel time (Y) and miles traveled (X₁). A similar t test would then be carried out for the number of deliveries (X₂).