Linearity and Ordinary Least Squares Fitting

ABE425 Engineering Measurement Systems

First, we need to talk about linearity
  • In Mathematics, there are linear and non-linear operations:
  • If an operation is linear, the superposition principle can be applied:
Examples
  • Multiplication by a constant c (linear)
Examples
  • Differentiation (linear)
Examples
  • Integration (linear)
Examples
  • Squaring (non-linear)
Examples
  • Square root (non-linear)
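The examples above can be spot-checked numerically. Below is a small sketch in Python (the course itself uses MATLAB; the helper name `is_linear` and the chosen test inputs are ours) that checks the superposition principle f(a·x1 + b·x2) = a·f(x1) + b·f(x2) at one pair of inputs:

```python
# Numerical spot-check of superposition: necessary, not sufficient, for linearity.
# Illustrative sketch only; function names and test values are hypothetical.

def is_linear(f, x1=2.0, x2=5.0, a=3.0, b=-1.5, tol=1e-9):
    """Check f(a*x1 + b*x2) == a*f(x1) + b*f(x2) at one pair of inputs."""
    return abs(f(a * x1 + b * x2) - (a * f(x1) + b * f(x2))) < tol

scale = lambda x: 4.0 * x    # multiplication by a constant: linear
square = lambda x: x * x     # squaring: non-linear

print(is_linear(scale))      # True
print(is_linear(square))     # False
```

A single counterexample is enough to show squaring is non-linear, while a single passing check for the scaling map is merely consistent with linearity (which holds here by inspection).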
You collected a set of data pairs (for example, temperature versus time):
  • >> x=[0:1:10]'
  • >> y = [0.5 0.75 1.25 1.3 2.1 2.0 3.1 3.05 4.0 4.5 5]'
The model is some function of the independent variable and the parameter vector
  • The error is the difference between a data point and the corresponding model
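As a sketch of this error definition, using the data set given earlier (Python for illustration; the straight-line model and the parameter guess are hypothetical, not the course's):

```python
# Residuals e_i = y_i - f(x_i; theta) for a candidate straight-line model.
# The model y = theta0 + theta1*x and the guessed theta below are our own choices.

x = list(range(11))                      # 0..10, as in the MATLAB data
y = [0.5, 0.75, 1.25, 1.3, 2.1, 2.0, 3.1, 3.05, 4.0, 4.5, 5.0]

theta = (0.3, 0.45)                      # hypothetical guess: (intercept, slope)
model = [theta[0] + theta[1] * xi for xi in x]
errors = [yi - mi for yi, mi in zip(y, model)]

sum_sq = sum(e * e for e in errors)      # the quantity OLS will minimize
print(round(sum_sq, 4))
```

Different parameter guesses give different residuals; the next slides show how to pick the parameter vector that makes the sum of squared residuals as small as possible.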
The idea of using the sum of least squared residuals came from Legendre:
  • How can we minimize this error with respect to the parameter vector? In other words, how do we find the parameter vector that minimizes the sum of squared residuals and hence gives the best fit?

“Sur la Méthode des moindres quarrés,” in Legendre’s Nouvelles méthodes pour la détermination des orbites des comètes, Paris, 1805.
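The slide’s equation is not preserved in this transcript; in standard notation, Legendre’s least-squares criterion is the sum of squared residuals:

```latex
S(\boldsymbol{\theta})
  = \sum_{i=1}^{n} e_i^{\,2}
  = \sum_{i=1}^{n} \bigl(y_i - f(x_i;\boldsymbol{\theta})\bigr)^2
```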

The minimum value of the sum of squares is obtained by setting this partial derivative to zero
  • The derivative is partial because the sum of squared residuals S is a function of the error, and the error itself is a function of the parameter vector (remember the chain rule):
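Written out in standard notation (a reconstruction, since the slide’s equation is not in the transcript), the chain-rule step reads:

```latex
\frac{\partial S}{\partial \boldsymbol{\theta}}
  = \sum_{i=1}^{n} \frac{\partial S}{\partial e_i}\,
    \frac{\partial e_i}{\partial \boldsymbol{\theta}}
  = \sum_{i=1}^{n} 2\, e_i\,
    \frac{\partial e_i}{\partial \boldsymbol{\theta}}
  = 0
```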
The minimum value of the sum of squares is obtained by setting this partial derivative to zero
  • Substitution of the results from the previous slide gives:
  • Now we need to find out what the partial derivative of the error with respect to the parameter vector is.
The proposed model is linear in the parameters. Here is a polynomial example:
  • For the i-th measurement, this can be written using a row of regressors and the parameter vector as follows:
  • This can also be written as:
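The regressor construction can be sketched as follows (Python for illustration; the lab itself is in MATLAB, and `regressor_row` is a helper name of our choosing). For a polynomial model each measurement contributes one row of powers of x, so that the model value is the dot product of that row with the parameter vector:

```python
# Build the regressor row [1, x_i, x_i^2, ...] for each measurement, so that
# model_i = row_i . theta. Illustrative sketch only.

def regressor_row(x, order):
    """Powers of x from 0 up to `order` (hypothetical helper)."""
    return [x ** k for k in range(order + 1)]

x = [0, 1, 2, 3]
A = [regressor_row(xi, 2) for xi in x]   # one row per measurement
print(A)   # [[1, 0, 0], [1, 1, 1], [1, 2, 4], [1, 3, 9]]
```

Note that the model is non-linear in x but linear in the parameters, which is all that OLS requires.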
For all measurement points we obtain:
  • This can also be written in vector form as:
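In standard matrix notation (a reconstruction; the slide image is not in the transcript), stacking all measurements gives:

```latex
\mathbf{y} = A\,\boldsymbol{\theta} + \mathbf{e},
\qquad
S = \mathbf{e}^{\mathsf{T}}\mathbf{e}
  = (\mathbf{y} - A\boldsymbol{\theta})^{\mathsf{T}}
    (\mathbf{y} - A\boldsymbol{\theta})
```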
From the model definition we can obtain the partial derivative with respect to the parameter vector
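Setting the derivative to zero yields the normal equations AᵀAθ = Aᵀy, i.e. θ = (AᵀA)⁻¹Aᵀy. For a straight-line model this collapses to a familiar closed form, sketched below in Python (the course uses MATLAB; `fit_line` is our own helper, and the test data are chosen to lie exactly on y = 1 + 2x so the answer is easy to verify by hand):

```python
# Closed-form OLS for a straight line y = theta0 + theta1*x: the 2x2 normal
# equations A'A theta = A'y solved explicitly. Illustrative sketch only.

def fit_line(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, slope

# Points lying exactly on y = 1 + 2x:
theta0, theta1 = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(theta0, theta1)   # 1.0 2.0
```

For higher-order polynomials the same normal equations are solved with matrix algebra, which is exactly what the lab function at the end of these slides asks you to implement.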
Determine whether the data needs an intercept. Often physical constraints demand that the fit curve passes through the origin!

No intercept!
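When the fit is forced through the origin, the model is y = θ₁x only, and minimizing the sum of squared residuals gives θ₁ = Σxᵢyᵢ / Σxᵢ². A small sketch (Python for illustration; `fit_through_origin` is our own helper name):

```python
# One-parameter OLS fit through the origin: y = theta1 * x.
# theta1 = sum(x_i*y_i) / sum(x_i^2); illustrative sketch only.

def fit_through_origin(x, y):
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Points lying exactly on y = 2x:
print(fit_through_origin([1, 2, 3], [2.0, 4.0, 6.0]))   # 2.0
```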

OLS lab

function [theta, msq] = fitols(x,y,Ovec)
% Fit polynomial function on data in OLS sense
% Author   :
% Date     :
% Revision :
%
% Syntax : [theta,msq] = fitols(x,y,Ovec)
%
% theta : Parameter vector
% msq   : Mean square error
% x     : Independent variable
% y     : Dependent variable
%
% Ovec indicates terms [1 x x^2 ..]*Ovec'
% Example: Ovec = [1 0 1] gives [1 x^2] and not x

% If vectors x,y are horizontal, transpose them to make them vertical

% Make sure the x and y vectors have the same length. If not, alert the
% user with an error dialog box (type help errordlg)

% Build the matrix of regressors. Check each entry of Ovec, and if it is a
% 1, add another column to the regression matrix A.
A = [];

% Compute the parameter vector theta using the OLS formula

% Compute the error vector

% Compute the mean square error, which indicates how good the fit is

% Plot y (Temperature in C) versus x (Current in A). Add labels and title.
% Your output should look as shown in the handout.