Principles of Biostatistics Simple Linear Regression

  1. Principles of Biostatistics: Simple Linear Regression. Slides based on Dr. Chuanhua Yu and Wikipedia.

  2. Terminology Moments, skewness, kurtosis; analysis of variance (ANOVA); response (dependent) variable; explanatory (independent) variable; linear regression model; method of least squares; normal equations; sum of squares, error (SSE); sum of squares, regression (SSR); sum of squares, total (SST); coefficient of determination (R²); F-value; P-value; t-test; F-test; homoscedasticity; heteroscedasticity.

  3. Contents 18.0 Normal Distribution and Terms; 18.1 An Example; 18.2 The Simple Linear Regression Model; 18.3 Estimation: The Method of Least Squares; 18.4 Error Variance and the Standard Errors of Regression Estimators; 18.5 Confidence Intervals for the Regression Parameters; 18.6 Hypothesis Tests about the Regression Relationship; 18.7 How Good is the Regression?; 18.8 Analysis of Variance Table and an F Test of the Regression Model; 18.9 Residual Analysis; 18.10 Prediction Interval and Confidence Interval.

  4. Normal Distribution The continuous probability density function of the normal distribution is the Gaussian function $f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-(x-\mu)^2/(2\sigma^2)}$, where σ > 0 is the standard deviation and the real parameter μ is the expected value. The density function of the "standard" normal distribution, i.e., the normal distribution with μ = 0 and σ = 1, is $\varphi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$.
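As a quick sanity check, this density translates directly into code. A minimal sketch in Python (NumPy assumed available; the evaluation grid and the shifted parameter values are illustrative, not from the slides):

```python
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Gaussian density f(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-4, 4, 9)
print(normal_pdf(x))                 # standard normal: mu = 0, sigma = 1
print(normal_pdf(x, mu=1, sigma=2))  # shifted and widened, purely illustrative
```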

  5. Normal Distribution

  6. Moment About The Mean The kth moment about the mean (or kth central moment) of a real-valued random variable X is the quantity μk = E[(X − E[X])^k], where E is the expectation operator. For a continuous univariate probability distribution with probability density function f(x), the kth moment about the mean μ is $\mu_k = \int_{-\infty}^{\infty} (x-\mu)^k f(x)\,dx$. The first moment about zero, if it exists, is the expectation of X, i.e. the mean of the probability distribution of X, designated μ. In higher orders, the central moments are more interesting than the moments about zero: μ1 is 0; μ2 is the variance, the positive square root of which is the standard deviation σ; μ3/σ³ is the skewness, often written γ1; and μ4/σ⁴ − 3 is the (excess) kurtosis.

  7. Skewness • Consider the distribution in the figure. The bars on the right side of the distribution taper differently than the bars on the left side. These tapering sides are called tails, and they provide a visual means for determining which of the two kinds of skewness a distribution has: • Negative skew: the left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be left-skewed. • Positive skew: the right tail is longer; the mass of the distribution is concentrated on the left of the figure. The distribution is said to be right-skewed.

  8. Skewness Skewness, the third standardized moment, is written as γ1 and defined as $\gamma_1 = \mu_3 / \sigma^3$, where μ3 is the third moment about the mean and σ is the standard deviation. For a sample of n values the sample skewness is $g_1 = \frac{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^3}{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\right)^{3/2}}$.

  9. Kurtosis Kurtosis is the degree of peakedness of a distribution. A normal distribution is a mesokurtic distribution. A pure leptokurtic distribution has a higher peak than the normal distribution and has heavier tails. A pure platykurtic distribution has a lower peak than a normal distribution and lighter tails.

  10. Kurtosis The fourth standardized moment is defined as $\beta_2 = \mu_4 / \sigma^4$, where μ4 is the fourth moment about the mean and σ is the standard deviation. For a sample of n values the sample excess kurtosis is $g_2 = \frac{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^4}{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\right)^{2}} - 3$.
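The sample-moment formulas on slides 8 and 10 are easy to verify in code. A minimal sketch (NumPy assumed; the simulated normal data are illustrative, so both statistics should come out near zero):

```python
import numpy as np

def central_moment(x, k):
    """k-th sample central moment m_k = mean((x - xbar)^k)."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** k)

def sample_skewness(x):
    return central_moment(x, 3) / central_moment(x, 2) ** 1.5    # g1 = m3 / m2^(3/2)

def sample_excess_kurtosis(x):
    return central_moment(x, 4) / central_moment(x, 2) ** 2 - 3  # g2 = m4 / m2^2 - 3

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
print(sample_skewness(x), sample_excess_kurtosis(x))  # both close to 0
```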

  11. 18.1 An Example

  12. Scatterplot This scatterplot locates pairs of observations of serum IL-6 on the x-axis and brain IL-6 on the y-axis. We notice that: Larger (smaller) values of brain IL-6 tend to be associated with larger (smaller) values of serum IL-6. The scatter of points tends to be distributed around a positively sloped straight line. The pairs of values of serum IL-6 and brain IL-6 are not located exactly on a straight line. The scatter plot reveals a more or less strong tendency rather than a precise linear relationship. The line represents the nature of the relationship on average.

  13. Examples of Other Scatterplots [Figure: six scatterplots illustrating other possible relationships between X and Y.]

  14. Model Building The inexact nature of the relationship between serum and brain IL-6 suggests that a statistical model might be useful in analyzing the relationship. A statistical model separates the systematic component of a relationship from the random component. In ANOVA, the systematic component is the variation of means between samples or treatments (SSTR) and the random component is the unexplained variation (SSE). In regression, the systematic component is the overall linear relationship, and the random component is the variation around the line. Statistical model = systematic component + random errors.

  15. 18.2 The Simple Linear Regression Model The population simple linear regression model: y = α + βx + ε, or equivalently μ_y|x = α + βx, where α + βx is the nonrandom (systematic) component and ε is the random component. Here y is the dependent (response) variable, the variable we wish to explain or predict; x is the independent (explanatory) variable, also called the predictor variable; and ε is the error term, the only random component in the model, and thus the only source of randomness in y. μ_y|x is the mean of y when x is specified, also called the conditional mean of Y. α is the intercept of the systematic component of the regression relationship, and β is the slope of the systematic component.

  16. Picturing the Simple Linear Regression Model The simple linear regression model posits an exact linear relationship between the expected or average value of Y, the dependent variable, and X, the independent or predictor variable: μ_y|x = α + βx. Actual observed values of Y differ from the expected value μ_y|x by an unexplained or random error ε: y = μ_y|x + ε = α + βx + ε. [Figure: regression plot showing the line μ_y|x = α + βx, with intercept α, slope β, and an error ε separating an observed y from the line.]

  17. Assumptions of the Simple Linear Regression Model The LINE assumptions: (1) Linearity: the relationship between X and Y is a straight-line (linear) relationship. (2) The values of the independent variable X are assumed fixed (not random); the only randomness in the values of Y comes from the error term ε. (3) Independence: the errors ε are uncorrelated (i.e., independent) in successive observations. (4) Normality and equal variance: the errors ε are normally distributed with mean 0 and constant variance σ², that is, ε ~ N(0, σ²). [Figure: identical normal distributions of errors, all centered on the regression line, so that Y given x is N(μ_y|x, σ_y|x²).]
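To make these assumptions concrete, here is a small simulation sketch; the parameter values (α = 2, β = 0.5, σ = 1) and the x grid are arbitrary illustrations, not from the example data:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, beta, sigma = 2.0, 0.5, 1.0            # illustrative true parameters

x = np.linspace(0, 10, 50)                    # fixed, non-random X values
eps = rng.normal(0.0, sigma, size=x.size)     # independent N(0, sigma^2) errors
y = alpha + beta * x + eps                    # linear systematic part + random error
```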

  18. 18.3 Estimation: The Method of Least Squares Estimation of a simple linear regression relationship involves finding estimated or predicted values of the intercept and slope of the linear regression line. The estimated regression equation: y = a + bx + e, where a estimates the intercept of the population regression line, α; b estimates the slope of the population regression line, β; and e stands for the observed errors, the residuals from fitting the estimated regression line a + bx to a set of n points. The estimated regression line: ŷ = a + bx, where ŷ (y-hat) is the value of Y lying on the fitted regression line for a given value of X.

  19. Fitting a Regression Line [Figure: four panels: the data; three errors from a fitted line; three errors from the least squares regression line; errors from the least squares regression line are minimized.]

  20. Errors in Regression [Figure: the error for one observation, shown as the vertical distance between the observed point (xi, yi) and the fitted regression line.]

  21. Least Squares Regression The sum of squared errors in regression is $SSE = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$ (SSE: sum of squared errors). The least squares regression line is the one that minimizes the SSE with respect to the estimates a and b. [Figure: SSE as a parabola-shaped function of each of a and b, minimized at the least squares values of a and b.]

  22. Normal Equation • S is minimized when its gradient with respect to each parameter is equal to zero. The elements of the gradient vector are the partial derivatives of S with respect to the parameters: $\frac{\partial S}{\partial \beta_j} = 2\sum_i r_i \frac{\partial r_i}{\partial \beta_j}$. • Since $r_i = y_i - \sum_j X_{ij}\beta_j$, the derivatives are $\frac{\partial r_i}{\partial \beta_j} = -X_{ij}$. • Substitution of the expressions for the residuals and the derivatives into the gradient equations gives $-2\sum_i X_{ij}\left(y_i - \sum_k X_{ik}\beta_k\right) = 0$ for each j. • Upon rearrangement, the normal equations $\sum_i \sum_k X_{ij} X_{ik} \hat{\beta}_k = \sum_i X_{ij} y_i$ are obtained. The normal equations are written in matrix notation as $(X^\top X)\hat{\beta} = X^\top y$. • The solution of the normal equations yields the vector of the optimal parameter values.

  23. Normal Equation
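A minimal sketch of solving the normal equations numerically for the simple-regression case, where the design matrix has a column of ones and a column of x values; the toy data below are invented for illustration:

```python
import numpy as np

# Toy paired observations; any (x, y) sample would do.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

X = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]
coef = np.linalg.solve(X.T @ X, X.T @ y)    # solve (X^T X) beta = X^T y
print(coef)                                 # [intercept a, slope b]
```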

  24. Sums of Squares, Cross Products, and Least Squares Estimators Sums of squares and cross products: $l_{xx} = \sum (x - \bar{x})^2 = \sum x^2 - \frac{(\sum x)^2}{n}$, $l_{yy} = \sum (y - \bar{y})^2 = \sum y^2 - \frac{(\sum y)^2}{n}$, $l_{xy} = \sum (x - \bar{x})(y - \bar{y}) = \sum xy - \frac{(\sum x)(\sum y)}{n}$. Least squares regression estimators: $b = \frac{l_{xy}}{l_{xx}}$ and $a = \bar{y} - b\bar{x}$.
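These scalar formulas can be checked directly against the matrix solution above; same toy data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

lxx = np.sum((x - x.mean()) ** 2)               # l_xx
lxy = np.sum((x - x.mean()) * (y - y.mean()))   # l_xy
b = lxy / lxx                                   # slope estimate
a = y.mean() - b * x.mean()                     # intercept estimate
print(a, b)   # matches the normal-equation solution
```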

  25. Example 18-1

  26. New Normal Distributions • Since each coefficient estimator is a linear combination of Y (normal random variables), each bi (i = 0, 1, ..., k) is normally distributed. • Notation: in the 2D (simple regression) special case, $b \sim N\left(\beta, \frac{\sigma^2}{l_{xx}}\right)$, and when j = 0, $a \sim N\left(\alpha, \sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2}{l_{xx}}\right)\right)$.

  27. Total Variance and Error Variance [Figure: two panels: what you see when looking at the total variation of Y, and what you see when looking along the regression line at the error variance of Y.]

  28. 18.4 Error Variance and the Standard Errors of Regression Estimators [Figure: square and sum all regression errors to find SSE.]

  29. Standard Errors of Estimates in Regression The error variance σ² is estimated by the mean square error $s^2 = \frac{SSE}{n-2}$, giving the standard errors $s_b = \frac{s}{\sqrt{l_{xx}}}$ for the slope and $s_a = s\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{l_{xx}}}$ for the intercept.

  30. T Distribution Student's t-distribution arises when the population standard deviation is unknown and has to be estimated from the data.

  31. 18.5 Confidence Intervals for the Regression Parameters
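The interval formulas on this slide are images in the transcript; assuming the standard (1 − α) intervals b ± t·s_b and a ± t·s_a with n − 2 degrees of freedom, a sketch on the toy data:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = x.size

lxx = np.sum((x - x.mean()) ** 2)
b = np.sum((x - x.mean()) * (y - y.mean())) / lxx
a = y.mean() - b * x.mean()

s = np.sqrt(np.sum((y - (a + b * x)) ** 2) / (n - 2))   # sqrt(MSE)
s_b = s / np.sqrt(lxx)                                  # SE of slope
s_a = s * np.sqrt(1 / n + x.mean() ** 2 / lxx)          # SE of intercept

t_crit = stats.t.ppf(0.975, df=n - 2)                   # 95% two-sided
print("slope CI:    ", (b - t_crit * s_b, b + t_crit * s_b))
print("intercept CI:", (a - t_crit * s_a, a + t_crit * s_a))
```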

  32. 18.6 Hypothesis Tests about the Regression Relationship [Figure: three scatterplots in which the slope is zero: constant Y, unsystematic variation, and a nonlinear relationship; in each case H0: β = 0 holds.] A hypothesis test for the existence of a linear relationship between X and Y: H0: β = 0 versus H1: β ≠ 0. Test statistic for the existence of a linear relationship between X and Y: $t = \frac{b}{s_b}$, where b is the least-squares estimate of the regression slope and s_b is the standard error of b. When the null hypothesis is true, the statistic has a t distribution with n − 2 degrees of freedom.
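A sketch of this slope test on the toy data (SciPy is used only for the p-value; everything else follows the slide's formula t = b / s_b):

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = x.size

lxx = np.sum((x - x.mean()) ** 2)
b = np.sum((x - x.mean()) * (y - y.mean())) / lxx
a = y.mean() - b * x.mean()

resid = y - (a + b * x)
s_b = np.sqrt(np.sum(resid ** 2) / (n - 2) / lxx)   # standard error of b
t = b / s_b
p = 2 * stats.t.sf(abs(t), df=n - 2)                # two-sided p-value
print(t, p)
```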

  33. T-test A test of the null hypothesis that the means of two normally distributed populations are equal. Given two data sets, each characterized by its mean, standard deviation, and number of data points, we can use some form of the t-test to determine whether the means are distinct, provided that the underlying distributions can be assumed to be normal. All such tests are usually called Student's t-tests.

  34. T-test

  35. T test Table

  36. 18.7 How Good is the Regression? The coefficient of determination, R², is a descriptive measure of the strength of the regression relationship, a measure of how well the regression line fits the data. R² is the percentage of total variation explained by the regression: $R^2 = \frac{SSR}{SST}$. [Figure: the total deviation of a point split into explained deviation (from the mean to the regression line) and unexplained deviation (from the line to the point).]
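A short sketch of the computation on the toy fit, using R² = SSR/SST = 1 − SSE/SST:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()

sse = np.sum((y - (a + b * x)) ** 2)   # unexplained (error) sum of squares
sst = np.sum((y - y.mean()) ** 2)      # total sum of squares
print("R^2 =", 1 - sse / sst)          # equals SSR / SST
```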

  37. The Coefficient of Determination [Figure: three scatterplots decomposing SST into SSR and SSE, with R² = 0, R² = 0.50, and R² = 0.90.]

  38. Another Test • Earlier in this section you saw how to perform a t-test to compare a sample mean to an accepted value, or to compare two sample means. In this section, you will see how to use the F-test to compare two variances or standard deviations. • When using the F-test, you again require a hypothesis, but this time, it is to compare standard deviations. That is, you will test the null hypothesis H0: σ1² = σ2² against an appropriate alternate hypothesis.

  39. F-test The t-test is applied to one parameter at a time; when there are many dimensions, the parameters are tested independently. To verify a combination of all the parameters jointly, we can use the F-test. The formula for an F-test in multiple-comparison ANOVA problems is: F = (between-group variability) / (within-group variability).
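For the ANOVA form of the statistic, scipy.stats.f_oneway computes exactly this between/within ratio; the three groups below are made-up illustrations:

```python
from scipy import stats

g1 = [4.1, 5.0, 4.7, 5.3]
g2 = [5.9, 6.4, 6.1, 5.8]
g3 = [4.9, 5.2, 5.5, 5.0]

f_stat, p_val = stats.f_oneway(g1, g2, g3)  # F = between / within variability
print(f_stat, p_val)
```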

  40. F test table

  41. 18.8 Analysis of Variance Table and an F Test of the Regression Model
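The ANOVA table on this slide is an image in the transcript; assuming the standard regression decomposition (SSR on 1 degree of freedom, SSE on n − 2, F = MSR/MSE), a sketch on the toy data:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = x.size

b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()

sst = np.sum((y - y.mean()) ** 2)
sse = np.sum((y - (a + b * x)) ** 2)
ssr = sst - sse

msr, mse = ssr / 1, sse / (n - 2)          # mean squares
f_stat = msr / mse
p_val = stats.f.sf(f_stat, 1, n - 2)
print(f"SSR={ssr:.3f}  SSE={sse:.3f}  SST={sst:.3f}  F={f_stat:.2f}  p={p_val:.4f}")
```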

  42. F-test, T-test, and R • 1. In the 2D (simple regression) case, the F-test and the t-test are the same; it can be proved that F = t². So in the 2D case, either the F or the t test is enough. This is not true for more variables. • 2. The F-test and R² have the same purpose: to measure the whole regression. They are related as $F = \frac{R^2/k}{(1 - R^2)/(n - k - 1)}$, with k explanatory variables. • 3. The F-test is better than R² because it is a proper test statistic with a known distribution for hypothesis testing. • Approach: first run the F-test; if it passes, continue. Then run a t-test for every parameter; if some parameter cannot pass, we can delete it and re-evaluate the regression. Note that we delete only one parameter (the one with the least effect on the regression) at a time, until all remaining parameters have a strong effect.

  43. 18.9 Residual Analysis [Figure: four residual plots. Homoscedasticity: residuals appear completely random, with no indication of model inadequacy. Heteroscedasticity: the variance of the residuals changes as x changes. A curved pattern in the residuals results from an underlying nonlinear relationship. Residuals exhibiting a linear trend with time indicate time dependence.]
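A minimal residual-plot sketch (Matplotlib assumed available); with real data, the patterns described above would show up in this plot:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()
fitted = a + b * x
resid = y - fitted

plt.scatter(fitted, resid)        # residuals vs fitted values
plt.axhline(0, linestyle="--")    # reference line at zero
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()
```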

  44. Example 18-1: Using Computer (Excel) for Residual Analysis. The plot shows a curved relationship between the residuals and the X-values (serum IL-6).

  45. Prediction Interval • Suppose we have a sample from a normally distributed population. • The mean and standard deviation of the population are unknown except insofar as they can be estimated based on the sample. It is desired to predict the next observation. • Let n be the sample size; let μ and σ be respectively the unobservable mean and standard deviation of the population. Let X1, ..., Xn be the sample; let Xn+1 be the future observation to be predicted. Let $\bar{X}_n = \frac{X_1 + \cdots + X_n}{n}$ and $S_n^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X}_n)^2$.

  46. Prediction Interval • Then it is fairly routine to show that $\frac{X_{n+1} - \bar{X}_n}{S_n\sqrt{1 + 1/n}}$ has a Student's t-distribution with n − 1 degrees of freedom. Consequently we have $P\left(\bar{X}_n - T_a S_n\sqrt{1 + 1/n} \le X_{n+1} \le \bar{X}_n + T_a S_n\sqrt{1 + 1/n}\right) = p$, where T_a is the 100((1 + p)/2)th percentile of Student's t-distribution with n − 1 degrees of freedom. Therefore the numbers $\bar{X}_n \pm T_a S_n \sqrt{1 + 1/n}$ are the endpoints of a 100p% prediction interval for X_{n+1}.
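A sketch of this interval on made-up sample values, with p = 0.95:

```python
import numpy as np
from scipy import stats

x = np.array([9.8, 10.4, 10.1, 9.6, 10.3, 10.0])   # illustrative sample
n = x.size
xbar, s = x.mean(), x.std(ddof=1)                  # sample mean and S_n

t_crit = stats.t.ppf(0.975, df=n - 1)              # T_a for p = 0.95
half = t_crit * s * np.sqrt(1 + 1 / n)
print(xbar - half, xbar + half)                    # 95% prediction interval
```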

  47. 18.10 Prediction Interval and Confidence Interval Point prediction: a single-valued estimate of Y for a given value of X, obtained by inserting the value of X in the estimated regression equation. A prediction interval for a value of Y given a value of X accounts for the variation in the regression line estimate plus the variation of points around the regression line. A confidence interval for the average value of Y given a value of X accounts only for the variation in the regression line estimate.

  48. Confidence Interval of an Average Value of Y Given a Value of X

  49. Confidence Interval for the Average Value of Y

  50. Prediction Interval for a Value of Y Given a Value of X
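The interval formulas on slides 48 to 50 appear as images in the transcript; assuming the standard textbook versions, ŷ ± t·s·√(1/n + (x0 − x̄)²/l_xx) for the confidence interval and ŷ ± t·s·√(1 + 1/n + (x0 − x̄)²/l_xx) for the prediction interval, a sketch computing both at a new point x0 on the toy data:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = x.size

lxx = np.sum((x - x.mean()) ** 2)
b = np.sum((x - x.mean()) * (y - y.mean())) / lxx
a = y.mean() - b * x.mean()
s = np.sqrt(np.sum((y - (a + b * x)) ** 2) / (n - 2))

x0 = 3.5                                   # illustrative new X value
y0 = a + b * x0                            # point prediction
t_crit = stats.t.ppf(0.975, df=n - 2)

ci = t_crit * s * np.sqrt(1 / n + (x0 - x.mean()) ** 2 / lxx)      # mean of Y
pi = t_crit * s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / lxx)  # new Y
print("confidence interval:", (y0 - ci, y0 + ci))
print("prediction interval:", (y0 - pi, y0 + pi))
```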
