Prediction variance in Linear Regression
  • Assumptions on noise in linear regression allow us to estimate the prediction variance due to the noise at any point.
  • Prediction variance is usually large when you are far from a data point.
  • We distinguish between interpolation, when we are in the convex hull of the data points, and extrapolation where we are outside.
  • Extrapolation is associated with larger errors, and in high dimensions it usually cannot be avoided.
Linear Regression
  • The surrogate is a linear combination of given shape functions: ŷ(x) = Σ_i b_i ξ_i(x).
  • For a linear approximation the shape functions are ξ_1 = 1 and ξ_2 = x, so ŷ = b_1 + b_2 x.
  • Difference (error) between data and surrogate at the data points: e = y − Xb, where X_ji = ξ_i(x_j).
  • Minimize the square error: e^T e = (y − Xb)^T (y − Xb).
  • Differentiate to obtain the normal equations: X^T X b = X^T y, so b = (X^T X)^{-1} X^T y.
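The least-squares steps above can be sketched in Python; the data here are illustrative (noisy samples of a hypothetical line y = 1 + 2x), not taken from the slides:

```python
import numpy as np

# Illustrative data: noisy samples of y = 1 + 2x (values are assumptions).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)

# Design matrix X with shape functions xi_1 = 1, xi_2 = x.
X = np.column_stack([np.ones_like(x), x])

# Solve the normal equations b = (X^T X)^{-1} X^T y
# (lstsq is the numerically stable way to do this).
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)  # roughly [1, 2]
```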
Model based error for linear regression
  • The common assumptions for linear regression
    • The true function is described by the functional form of the surrogate.
    • The data is contaminated with normally distributed error with the same standard deviation at every point.
    • The errors at different points are not correlated.
  • Under these assumptions, the noise standard deviation (called the standard error) is estimated as σ̂² = e^T e / (n − n_β), where n is the number of data points and n_β the number of coefficients.
  • σ̂ is used as an estimate of the prediction error.
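A minimal numerical check of the estimate σ̂² = e^T e / (n − n_β), using illustrative data with a known noise level:

```python
import numpy as np

# Illustrative data: line y = 1 + 2x contaminated with noise of known sigma.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
sigma_true = 0.2
y = 1.0 + 2.0 * x + rng.normal(scale=sigma_true, size=x.size)

X = np.column_stack([np.ones_like(x), x])   # n = 30 points, n_beta = 2
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b                               # residuals

# Standard error: sigma_hat^2 = e^T e / (n - n_beta)
sigma_hat = np.sqrt(e @ e / (len(y) - X.shape[1]))
print(sigma_hat)  # close to sigma_true
```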
Prediction variance
  • Linear regression model: ŷ(x) = Σ_i b_i ξ_i(x).
  • Define the vector x_m by (x_m)_i = ξ_i(x); then ŷ(x) = x_m^T b.
  • With some algebra, Var[ŷ(x)] = σ² x_m^T (X^T X)^{-1} x_m.
  • Standard error of prediction: s_ŷ(x) = σ̂ √(x_m^T (X^T X)^{-1} x_m).
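The standard-error formula can be evaluated directly for a 1D fit (illustrative data, not from the slides); it shows the error growing as the query point leaves the data:

```python
import numpy as np

# s_yhat(x) = sigma_hat * sqrt(x_m^T (X^T X)^{-1} x_m) for a 1D linear fit.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 10)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)

X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b
sigma_hat = np.sqrt(e @ e / (len(y) - 2))
XtX_inv = np.linalg.inv(X.T @ X)

def std_err(xq):
    xm = np.array([1.0, xq])                 # shape functions at the query point
    return sigma_hat * np.sqrt(xm @ XtX_inv @ xm)

print(std_err(0.5))   # small near the center of the data
print(std_err(3.0))   # grows when extrapolating
```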
Interpolation, extrapolation and regression
  • Interpolation is often contrasted with regression or least-squares fitting.
  • Just as important is the contrast between interpolation and extrapolation.
  • Extrapolation occurs when we are outside the convex hull of the data points.
  • In high dimensional spaces we must extrapolate, because the convex hull of any affordable number of data points covers only a small fraction of the domain!
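A quick Monte Carlo sketch of why extrapolation becomes unavoidable: even the axis-aligned bounding box of 20 random data points (which contains their convex hull) captures far fewer random query points as the dimension grows. The point counts and dimensions are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def box_fraction(dim, n_data=20, n_query=4000):
    """Fraction of uniform query points inside the bounding box of the data."""
    data = rng.random((n_data, dim))
    lo, hi = data.min(axis=0), data.max(axis=0)
    q = rng.random((n_query, dim))
    return np.mean(np.all((q >= lo) & (q <= hi), axis=1))

print(box_fraction(2))    # most query points fall inside
print(box_fraction(10))   # far fewer: extrapolation dominates
```

The convex hull is strictly smaller than the bounding box, so the true in-hull fraction shrinks even faster than these numbers suggest.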
2D example of convex hull
  • By generating 20 points at random in the unit square we end up with a substantial region near the origin where we will need to use extrapolation.
  • Exercise: using the data in the notes, give a couple of alternative sets of weights α_i (non-negative, summing to one) that approximately express the point (0.4, 0.4) as a convex combination of the data points.
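The convex-combination test can be illustrated with hypothetical data points (the actual points from the notes are not reproduced here). With more points than dimension + 1, the weights α are not unique, which is why the exercise asks for alternative sets:

```python
import numpy as np

# Hypothetical data: the four corners of the unit square. A point lies in the
# convex hull iff it equals sum(alpha_i * p_i) with alpha_i >= 0, sum(alpha) = 1.
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
target = np.array([0.4, 0.4])

# Two different valid weight vectors that both reproduce the target point.
for alphas in ([0.2, 0.4, 0.4, 0.0], [0.6, 0.0, 0.0, 0.4]):
    a = np.array(alphas)
    assert np.all(a >= 0) and np.isclose(a.sum(), 1.0)
    print(a @ pts)   # both give [0.4, 0.4]
```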

Example of prediction variance
  • For a linear polynomial response surface y = b_1 + b_2 x_1 + b_3 x_2, find the prediction variance in the region −1 ≤ x_1 ≤ 1, −1 ≤ x_2 ≤ 1.
  • (a) For data at three vertices (omitting (1,1)):
Interpolation vs. Extrapolation
  • At the origin s_ŷ/σ = 1/√2 ≈ 0.71. At the 3 data vertices s_ŷ/σ = 1. At the extrapolated point (1,1), s_ŷ/σ = √3 ≈ 1.73.
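Assuming the three data points sit at the vertices (−1,−1), (−1,1), (1,−1) of the square [−1,1]² (the domain implied by the ±0.8 homework), the ratios on this slide can be checked numerically:

```python
import numpy as np

# Design matrix for y = b1 + b2*x1 + b3*x2 with data at three vertices of [-1,1]^2.
X = np.array([[1, -1, -1],
              [1, -1,  1],
              [1,  1, -1]], dtype=float)
XtX_inv = np.linalg.inv(X.T @ X)

def variance_ratio(x1, x2):
    xm = np.array([1.0, x1, x2])     # shape functions at the query point
    return xm @ XtX_inv @ xm         # Var[yhat] / sigma^2

print(variance_ratio(0, 0))    # 0.5  -> s/sigma = 0.71 at the origin
print(variance_ratio(-1, -1))  # 1.0  -> s/sigma = 1 at a data vertex
print(variance_ratio(1, 1))    # 3.0  -> s/sigma = 1.73 when extrapolating
```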
Standard error contours
  • The minimum error is obtained by setting to zero the derivative of the prediction variance with respect to x_1 and x_2.
  • What is special about this point?
  • Contours of the prediction variance provide more detail.
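For the three-vertex design above, the minimum of the prediction variance can be located numerically with a simple grid search over the region (a sketch of the contour computation, assuming the vertices at (−1,−1), (−1,1), (1,−1)):

```python
import numpy as np

X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1]], dtype=float)
M = np.linalg.inv(X.T @ X)

# Evaluate Var/sigma^2 = x_m^T M x_m on a grid, with x_m = (1, x1, x2).
g = np.linspace(-1, 1, 201)
x1, x2 = np.meshgrid(g, g)
v = (M[0, 0] + 2 * M[0, 1] * x1 + 2 * M[0, 2] * x2
     + M[1, 1] * x1**2 + M[2, 2] * x2**2 + 2 * M[1, 2] * x1 * x2)

i = np.unravel_index(np.argmin(v), v.shape)
print(x1[i], x2[i])   # about (-1/3, -1/3): the centroid of the three data points
```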
Data at four vertices
  • Now X^T X = 4I, so (X^T X)^{-1} = I/4.
  • And Var[ŷ]/σ² = (1 + x_1² + x_2²)/4.
  • Error at the vertices: s_ŷ/σ = √3/2 ≈ 0.87.
  • At the origin the minimum is s_ŷ/σ = 1/2.
  • How can we reduce the error without adding points?
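The four-vertex numbers follow directly from the formula; assuming the vertices at (±1, ±1), X^T X is diagonal and the variance ratio reduces to (1 + x_1² + x_2²)/4:

```python
import numpy as np

# Data at the four vertices (+-1, +-1): X^T X = 4I.
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], dtype=float)
M = np.linalg.inv(X.T @ X)

def ratio(x1, x2):
    xm = np.array([1.0, x1, x2])
    return xm @ M @ xm               # Var[yhat] / sigma^2

print(ratio(0, 0))   # 0.25 -> standard error sigma/2 at the origin
print(ratio(1, 1))   # 0.75 -> standard error ~0.87*sigma at every vertex
```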
Homework
  • Redo the four-point example when the data points are not at the corners but inside the domain, at ±0.8. What does the difference in the results tell you?
  • For a 3×3 grid of data points, compare the standard errors for linear and quadratic fits.