Method of Least Squares

Least Squares
  • Method of Least Squares:
    • Deterministic approach
  • The inputs u(1), u(2), ..., u(N) are applied to the system
  • The outputs y(1), y(2), ..., y(N) are observed
    • Find a model which fits the input-output relation to a (linear?) curve, f(n, u(n))
    • The 'best' fit is obtained by minimising the sum of the squares of the differences f - y
Least Squares
  • The curve fitting problem can be formulated as fitting the model f(i, u(i)), with parameter (variable) vector w, to the observations y(i)
  • Error: e(i) = y(i) - f(i, u(i))
  • Sum-of-error-squares: E(w) = Σ_i e²(i)
  • Minimum (least-squares of error) is achieved when the gradient is zero, ∂E/∂w = 0
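As a concrete illustration of this formulation, here is a minimal numpy sketch that fits a straight-line model by minimising the sum of squared errors; the data values and the intercept-plus-slope model are made up purely for illustration.

```python
import numpy as np

# Made-up input/output data for illustration only.
u = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 4.2, 5.8, 8.1])

# Linear model f(n, u(n)) = w0 + w1*u(n): build the data matrix.
A = np.column_stack([np.ones_like(u), u])

# np.linalg.lstsq minimises the sum of error squares ||A w - y||^2.
w, *_ = np.linalg.lstsq(A, y, rcond=None)
e = y - A @ w
print("fitted parameters:", w)
print("sum of error squares:", np.sum(e**2))
```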

Problem Statement
  • For the inputs to the system, u(i)
  • The observed desired response is d(i)
  • The relation is assumed to be linear
  • Unobservable measurement error
    • Zero mean
    • White
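To make these assumptions concrete, the following sketch simulates the assumed setup: a desired response generated as a linear combination of delayed inputs plus zero-mean white measurement error. The filter length, tap weights, and noise level are arbitrary choices, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 3                        # number of samples and (assumed) number of taps
w_true = np.array([0.5, -0.2, 0.1])  # arbitrary "true" tap weights

u = rng.standard_normal(N)            # inputs u(i)
e_o = 0.05 * rng.standard_normal(N)   # unobservable measurement error: zero mean, white

# Desired response: linear in the current and past inputs, plus measurement error.
d = np.zeros(N)
for i in range(M - 1, N):
    d[i] = w_true @ u[i - M + 1:i + 1][::-1] + e_o[i]

print(d[:5])
```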
Problem Statement
  • Design a transversal filter which finds the least-squares solution
  • Then, the sum of error squares is E = Σ_i |e(i)|², where e(i) = d(i) - Σ_k w_k* u(i-k), k = 0, 1, ..., M-1
Data Windowing
  • We will express the input in matrix form
  • Depending on the limits i1 and i2 this matrix changes (a code sketch follows the list):
    • Covariance method: i1 = M, i2 = N
    • Prewindowing method: i1 = 1, i2 = N
    • Postwindowing method: i1 = M, i2 = N+M-1
    • Autocorrelation method: i1 = 1, i2 = N+M-1
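A small sketch of how the data matrix can be assembled for the covariance method (i1 = M, i2 = N). The helper name covariance_data_matrix is illustrative; the slide's 1-based indexing is mapped onto 0-based numpy arrays.

```python
import numpy as np

def covariance_data_matrix(u, M):
    """Data matrix for the covariance method: one row per time i = M, ..., N,
    each row holding [u(i), u(i-1), ..., u(i-M+1)] (1-based slide indexing,
    so u(i) is u[i-1] in the 0-based array)."""
    N = len(u)
    return np.array([[u[i - 1 - k] for k in range(M)] for i in range(M, N + 1)])

u = np.arange(1.0, 7.0)            # stands for u(1), ..., u(6)
print(covariance_data_matrix(u, M=2))   # shape (N-M+1, M) = (5, 2)
```

For the pre-, post- and autocorrelation windowing methods the same construction applies with the limits listed above, with u(i) taken as zero outside 1 ≤ i ≤ N.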

Principle of Orthogonality
  • Error signal: e(i) = d(i) - Σ_k w_k* u(i-k)
  • Least squares (minimum of sum of squares) is achieved when the gradient of E with respect to the tap weights is zero
  • i.e., when Σ_i u(i-k) emin*(i) = 0, k = 0, 1, ..., M-1
  • The minimum-error time series emin(i) is orthogonal to the time series of the input u(i-k) applied to tap k of a transversal filter of length M for k=0,1,...,M-1 when the filter is operating in its least-squares condition.
  • Note: this is a time average; the corresponding condition for Wiener filtering was an ensemble average.
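A quick numerical check of this property on random, made-up data: the least-squares residual is orthogonal, in the time-average sense, to each column of the data matrix (each delayed input series).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 4))     # columns play the role of u(i-k), k = 0..M-1
d = rng.standard_normal(50)

w, *_ = np.linalg.lstsq(A, d, rcond=None)
e_min = d - A @ w

# Time-average orthogonality: sum_i u(i-k) emin(i) ≈ 0 for every tap k.
print(A.T @ e_min)    # all entries are numerically ~0
```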

Corollary of the Principle of Orthogonality
  • The LS estimate of the desired response is d̂(i) = Σ_k ŵ_k* u(i-k)
  • Multiply the principle of orthogonality by ŵ_k* and take the summation over k
  • Then Σ_i d̂(i) emin*(i) = 0
  • When a transversal filter operates in its least-squares condition, the least-squares estimate of the desired response (produced at the output of the filter) and the minimum estimation error time series are orthogonal to each other over time i.
Energy of Minimum Error
  • Write d(i) = d̂(i) + emin(i) and expand Ed = Σ_i |d(i)|²; due to the principle of orthogonality, the second and third (cross) terms vanish, hence Emin = Ed - Eest
  • where Ed = Σ_i |d(i)|², Eest = Σ_i |d̂(i)|², and 0 ≤ Emin ≤ Ed
  • Emin = 0 when eo(i) = 0 for all i, which is impossible,
  • or when the problem is underdetermined: fewer data points than parameters, infinitely many solutions (no unique soln.)!
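Continuing with the same kind of made-up data, a short check that the energies add up as claimed, i.e. Ed = Eest + Emin once the cross terms vanish by orthogonality.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 4))
d = rng.standard_normal(50)

w, *_ = np.linalg.lstsq(A, d, rcond=None)
d_hat = A @ w
e_min = d - d_hat

E_d, E_est, E_min = np.sum(d**2), np.sum(d_hat**2), np.sum(e_min**2)
print(E_d, E_est + E_min)   # the two printed values agree
```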
Normal Equations
  • Principle of orthogonality: Σ_i u(i-k) emin*(i) = 0, k = 0, 1, ..., M-1
  • Minimum error: emin(i) = d(i) - Σ_t ŵ_t* u(i-t)
  • Hence, substituting the minimum error into the principle of orthogonality gives the expanded system of the normal equations for linear least-squares filters:
    Σ_t ŵ_t φ(t,k) = z(-k), k = 0, 1, ..., M-1
  • z(-k), 0 ≤ k ≤ M-1: the time-average cross-correlation between the desired response and the input
  • φ(t,k), 0 ≤ t,k ≤ M-1: the time-average autocorrelation function of the input
Normal Equations (Matrix Formulation)
  • Matrix form of the normal equations for linear least-squares filters: Φŵ = z
  • Linear least-squares counterpart of the Wiener-Hopf eqn.s.
  • Here Φ and z are time averages, whereas in the Wiener-Hopf eqn.s they were ensemble averages.
  • Solution: ŵ = Φ^-1 z (if Φ^-1 exists!)
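As a sanity check on the matrix form, the sketch below (random illustrative data) builds Φ = A^H A and z = A^H d, solves Φŵ = z, and confirms the result matches the direct least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 4))     # data matrix
d = rng.standard_normal(50)          # desired response

Phi = A.conj().T @ A                 # time-average correlation matrix
z = A.conj().T @ d                   # time-average cross-correlation vector

w_normal = np.linalg.solve(Phi, z)            # solve the normal equations
w_lstsq, *_ = np.linalg.lstsq(A, d, rcond=None)
print(np.allclose(w_normal, w_lstsq))         # True
```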

Minimum Sum of Error Squares
  • The energy contained in the desired-response time series is Ed = Σ_i |d(i)|²
  • Or, in vector form, Ed = d^H d
  • Then the minimum sum of error squares is Emin = Ed - z^H ŵ
Properties of the Time-Average Correlation Matrix Φ
  • Property I: The correlation matrix Φ is Hermitian symmetric, Φ^H = Φ
  • Property II: The correlation matrix Φ is nonnegative definite, x^H Φ x ≥ 0 for any M-by-1 vector x
  • Property III: The correlation matrix Φ is nonsingular iff det(Φ) is nonzero
  • Property IV: The eigenvalues of the correlation matrix Φ are all real and non-negative.
Properties of the Time-Average Correlation Matrix Φ
  • Property V: The correlation matrix Φ is the product of two rectangular Toeplitz matrices that are the Hermitian transpose of each other, Φ = A^H A.
Normal Equations (Reformulation)
  • But we know that Φ = A^H A and z = A^H d,
    which yields A^H A ŵ = A^H d, i.e. ŵ = (A^H A)^-1 A^H d
  • Substituting into the minimum sum of error squares expression gives Emin = d^H d - z^H ŵ,
    then Emin = d^H d - d^H A (A^H A)^-1 A^H d
  • A^+ = (A^H A)^-1 A^H is the pseudo-inverse of A, so ŵ = A^+ d
Projection
  • The LS estimate of d is given by d̂ = Aŵ = A (A^H A)^-1 A^H d
  • The matrix P = A (A^H A)^-1 A^H is a projection operator
    • onto the linear space spanned by the columns of the data matrix A
    • i.e. the space Ui.
  • The orthogonal complement projector is I - A (A^H A)^-1 A^H (a numerical sketch follows)
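A small numerical sketch (random illustrative data) of the projection interpretation: P = A(A^H A)^-1 A^H is idempotent, Pd reproduces the LS estimate Aŵ, and (I - P)d lies in the orthogonal complement of the column space of A.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 2))
d = rng.standard_normal(6)

P = A @ np.linalg.inv(A.conj().T @ A) @ A.conj().T   # projector onto span of A's columns

print(np.allclose(P @ P, P))                          # idempotent: True
w_hat, *_ = np.linalg.lstsq(A, d, rcond=None)
print(np.allclose(P @ d, A @ w_hat))                  # P d equals the LS estimate: True
print(A.conj().T @ ((np.eye(6) - P) @ d))             # ~0: the error is orthogonal to the columns of A
```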
Projection - Example
  • M=2 tap filter, N=4 → N-M+1=3, so A is 3x2
  • Let the data matrix A and the desired vector d be given
  • Then the projector P = A (A^H A)^-1 A^H and the estimate d̂ = Pd can be computed
  • And the error (I - P)d is orthogonal to d̂
Uniqueness of the LS Solution
  • LS always has a solution; is that solution unique?
  • The least-squares estimate is unique if and only if the nullity (the dimension of the null space) of the data matrix A equals zero.
  • A is KxM, (K=N-M+1)
  • The solution is unique when A is of full column rank, K≥M
    • All columns of A are linearly independent
    • Overdetermined system (more eqns. than variables (taps))
    • (A^H A)^-1 exists → the solution exists and is unique
  • There are infinitely many solutions when A has linearly dependent columns, K<M
    • (A^H A) is singular, so its inverse does not exist
Properties of the LS Estimates
  • Property I: The least-squares estimate is unbiased, provided that the measurement error process eo(i) has zero mean.
  • Property II: When the measurement error process eo(i) is white with zero mean and variance σ², the covariance matrix of the least-squares estimate equals σ²Φ^-1.
  • Property III: When the measurement error process eo(i) is white with zero mean, the least-squares estimate is the best linear unbiased estimate.
  • Property IV: When the measurement error process eo(i) is white and Gaussian with zero mean, the least-squares estimate achieves the Cramer-Rao lower bound for unbiased estimates.
Computation of the LS Estimates
  • The rank W of a KxN (K≥N or K<N) matrix A gives
    • The number of linearly independent columns/rows
    • The number of non-zero eigenvalues/singular values
  • The matrix is said to be full rank (full column or row rank) if W = min(K, N)
    • Otherwise, it is said to be rank-deficient
  • Rank is an important parameter for matrix inversion
    • If K=N (square matrix) and the matrix is full rank (W=K=N) (non-singular), the inverse of the matrix can be calculated, A^-1 = adj(A)/det(A)
    • If the matrix is not square (K≠N), and/or it is rank-deficient (singular), A^-1 does not exist; instead we can use the pseudo-inverse (a generalisation of the inverse), A^+
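The sketch below, with arbitrary small matrices, shows the rank test and the fall-back to the pseudo-inverse when the ordinary inverse does not exist.

```python
import numpy as np

A_full = np.array([[1.0, 2.0], [3.0, 4.0]])              # square, full rank
A_def = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])   # rank-deficient (2nd column = 2x 1st)

print(np.linalg.matrix_rank(A_full))   # 2 -> A_full is invertible
print(np.linalg.matrix_rank(A_def))    # 1 -> only the pseudo-inverse exists
print(np.linalg.inv(A_full))           # ordinary inverse
print(np.linalg.pinv(A_def))           # Moore-Penrose pseudo-inverse
```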
SVD
  • We can calculate the pseudo-inverse using SVD.
  • Any KxN matrix A (K≥N or K<N) can be decomposed using the Singular Value Decomposition (SVD) as A = U Σ V^H, where U (KxK) and V (NxN) are unitary and the KxN matrix Σ carries the singular values on its diagonal.
SVD
  • The system of eqn.s Aw = d
    • is overdetermined if K>N, more eqn.s than unknowns,
      • Unique solution (if A is full-rank)
      • Non-unique, infinitely many solutions (if A is rank-deficient)
    • is underdetermined if K<N, more unknowns than eqn.s,
      • Non-unique, infinitely many solutions
    • In either case the solution(s) is (are) ŵ = A^+ d, where A^+ = V Σ^+ U^H and Σ^+ is obtained by inverting the non-zero singular values of Σ.
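A short sketch (random full-rank matrix for illustration) of building the pseudo-inverse from the SVD and checking it against numpy's built-in pinv.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))        # K=5 > N=3: overdetermined

U, s, Vh = np.linalg.svd(A)            # A = U @ Sigma @ Vh
Sigma_plus = np.zeros((3, 5))          # Sigma^+ : transpose shape, invert singular values
Sigma_plus[:len(s), :len(s)] = np.diag(1.0 / s)   # assumes all singular values are non-zero (true here)

A_plus = Vh.conj().T @ Sigma_plus @ U.conj().T
print(np.allclose(A_plus, np.linalg.pinv(A)))     # True
```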

Computation of the LS Estimates
  • Find the solution of A ŵ = d (A: KxM)
  • If K>M and rank(A)=M (full column rank), the unique solution is ŵ = (A^H A)^-1 A^H d = A^+ d
  • Otherwise, there are infinitely many solutions, but the pseudo-inverse ŵ = A^+ d gives the minimum-norm solution to the least squares problem.
    • Shortest length possible in the Euclidean norm sense.
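Finally, a sketch of the rank-deficient (here underdetermined) case with made-up data: the pseudo-inverse picks, among the infinitely many solutions of Aw = d, the one with the smallest Euclidean norm.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((2, 4))        # underdetermined: fewer equations than taps
d = rng.standard_normal(2)

w_min = np.linalg.pinv(A) @ d          # minimum-norm least-squares solution

# Any vector in the null space of A can be added without changing A @ w.
null_component = (np.eye(4) - np.linalg.pinv(A) @ A) @ np.ones(4)
w_other = w_min + null_component

print(np.allclose(A @ w_min, d), np.allclose(A @ w_other, d))   # both solve A w = d
print(np.linalg.norm(w_min) <= np.linalg.norm(w_other))          # pinv solution has the smaller norm
```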