
Method of Least Squares

• Method of Least Squares:

• Deterministic approach

• The inputs u(1), u(2), ..., u(N) are applied to the system

• The outputs y(1), y(2), ..., y(N) are observed

• Find a model which fits the input-output relation to a (linear?) curve, f(n,u(n))

• ‘best’ fit by minimising the sum of the squares of the difference f - y

• The curve fitting problem can be formulated as

• Error:

• Sum-of-error-squares:

• Minimum (least-squares of error) is achieved when the gradient is zero
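The formulas referenced by these bullets did not survive the transcript; a minimal reconstruction of the standard formulation (the exact notation on the original slides may differ) is:

```latex
% Curve-fitting error at sample n
e(n) = f(n, u(n)) - y(n), \qquad n = 1, 2, \dots, N
% Sum of error squares over the N observations
\varepsilon = \sum_{n=1}^{N} \lvert e(n) \rvert^{2}
% Least-squares condition: the gradient with respect to the model parameters vanishes
\nabla \varepsilon = 0
```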

(The slide labels the terms of the curve-fitting relation as: observations, model, variable.)

• For the inputs to the system, u(i)

• The observed desired response is, d(i)

• Relation is assumed to be linear

• Unobservable measurement error

• Zero mean

• White

• Design a transversal filter which finds the least-squares solution

• Then, sum of error squares is
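The model and cost function themselves are missing from the transcript; a sketch of the standard transversal-filter setup they describe (the tap-weight conjugation follows the usual complex-data convention and may differ from the original slides):

```latex
% Assumed linear measurement model with unknown parameters w_{ok}
% and unobservable white, zero-mean measurement error e_o(i)
d(i) = \sum_{k=0}^{M-1} w_{ok}\, u(i-k) + e_o(i)
% Estimation error of an M-tap transversal filter with taps w_0, \dots, w_{M-1}
e(i) = d(i) - \sum_{k=0}^{M-1} w_k^{*}\, u(i-k)
% Sum of error squares over the chosen observation window [i_1, i_2]
\varepsilon(w_0, \dots, w_{M-1}) = \sum_{i=i_1}^{i_2} \lvert e(i) \rvert^{2}
```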

• We will express the input in matrix form

• Depending on the limits i1 and i2 this matrix changes

Covariance Method: i1 = M, i2 = N

Prewindowing Method: i1 = 1, i2 = N

Postwindowing Method: i1 = M, i2 = N+M-1

Autocorrelation Method: i1 = 1, i2 = N+M-1
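As an illustration (not from the original slides), a small numpy sketch that builds the data matrix for each choice of limits; the 1-based indexing and the zero samples assumed outside 1..N reflect the usual windowing conventions:

```python
import numpy as np

def data_matrix(u, M, method="covariance"):
    """Build the least-squares data matrix for an M-tap transversal filter.

    Rows are [u(i), u(i-1), ..., u(i-M+1)] for i = i1, ..., i2, with samples
    outside 1..N taken as zero (pre-/post-windowing assumption).
    Illustrative helper; names and conventions are not from the slides.
    """
    N = len(u)
    limits = {
        "covariance":      (M, N),
        "prewindowing":    (1, N),
        "postwindowing":   (M, N + M - 1),
        "autocorrelation": (1, N + M - 1),
    }
    i1, i2 = limits[method]
    sample = {i: u[i - 1] for i in range(1, N + 1)}  # 1-based access
    rows = [[sample.get(i - k, 0.0) for k in range(M)] for i in range(i1, i2 + 1)]
    return np.array(rows)

u = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]                          # N = 6 samples
print(data_matrix(u, M=3, method="covariance").shape)       # (4, 3)
print(data_matrix(u, M=3, method="autocorrelation").shape)  # (8, 3)
```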

• Error signal

• Least squares (minimum of sum of squares) is achieved when

• i.e., when

• The minimum-error time series emin(i) is orthogonal to the time series of the input u(i-k) applied to tap k of a transversal filter of length M for k=0,1,...,M-1 when the filter is operating in its least-squares condition.
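The missing expressions on this slide, reconstructed in the standard form (a sketch; the conjugation convention is assumed):

```latex
% Gradient of the error energy with respect to each tap weight, set to zero
\frac{\partial \varepsilon}{\partial w_k^{*}}
  = -\sum_{i=i_1}^{i_2} u(i-k)\, e^{*}(i) = 0, \qquad k = 0, 1, \dots, M-1
% Principle of orthogonality: the minimum error is orthogonal, in the
% time-average sense, to the input applied to each tap
\sum_{i=i_1}^{i_2} u(i-k)\, e_{\min}^{*}(i) = 0, \qquad k = 0, 1, \dots, M-1
```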

!Time averaging! Note that this orthogonality is expressed as a time average; for Wiener filtering, the corresponding principle of orthogonality was an ensemble average.

• LS estimate of the desired response is

• Multiply principle of orthogonality by wk* and take summation over k

• Then

• When a transversal filter operates in its least-squares condition, the least-squares estimate of the desired response -produced at the output of the filter- and the minimum estimation error time series are orthogonal to each other over time i.
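A reconstruction of this corollary in equation form (a sketch following the usual notation, with Ui denoting the space spanned by the inputs):

```latex
% Least-squares estimate of the desired response: the filter output at the optimum taps
\hat d(i \mid \mathcal{U}_i) = \sum_{k=0}^{M-1} \hat w_k^{*}\, u(i-k)
% Corollary: the estimate and the minimum estimation error are orthogonal over time
\sum_{i=i_1}^{i_2} \hat d(i \mid \mathcal{U}_i)\, e_{\min}^{*}(i) = 0
```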

• Due to the principle of orthogonality, second and third terms are orthogonal, hence

where

• εmin = 0 when eo(i) = 0 for all i, which is impossible

• εmin = 0 when the problem is underdetermined: fewer data points than parameters, so there are infinitely many solutions (no unique soln.)!
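A sketch of the energy decomposition behind these bullets (a reconstruction; the limits and notation are assumed):

```latex
% Decompose the desired response into the LS estimate plus the minimum error
d(i) = \hat d(i \mid \mathcal{U}_i) + e_{\min}(i)
% Expanding \varepsilon_d = \sum_i |d(i)|^2, the two cross terms vanish by the
% principle of orthogonality, leaving
\varepsilon_d = \sum_{i=i_1}^{i_2} \lvert \hat d(i \mid \mathcal{U}_i) \rvert^{2}
              + \varepsilon_{\min}
\quad\Longrightarrow\quad
\varepsilon_{\min} = \varepsilon_d - \sum_{i=i_1}^{i_2} \lvert \hat d(i \mid \mathcal{U}_i) \rvert^{2}
```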

Principle of Orthogonality

Minimum error:

• Hence,

Expanded system of the normal equations for linear least-squares filters.

z(-k), 0 ≤ k ≤ M-1: the time-average cross-correlation between the desired response and the input

φ(t,k), 0 ≤ t,k ≤ M-1: the time-average autocorrelation function of the input
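In equation form (a reconstruction; the placement of the complex conjugates follows one common convention and may differ from the original slides):

```latex
% Time-average cross-correlation and autocorrelation over the window [i_1, i_2]
z(-k) = \sum_{i=i_1}^{i_2} u(i-k)\, d^{*}(i), \qquad 0 \le k \le M-1
\phi(t,k) = \sum_{i=i_1}^{i_2} u(i-k)\, u^{*}(i-t), \qquad 0 \le t, k \le M-1
% Expanded system of normal equations
\sum_{t=0}^{M-1} \hat w_t\, \phi(t,k) = z(-k), \qquad k = 0, 1, \dots, M-1
```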

• Matrix form of the normal equations for linear least-squares filters:

• Linear least-squares counterpart of the Wiener-Hopf eqn.s.

• Here Φ and z are time averages, whereas in Wiener-Hopf eqn.s they were ensemble averages.

(if Φ⁻¹ exists!)
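The matrix form itself is missing from the transcript; in standard notation (Φ the M×M time-average correlation matrix, z the M×1 cross-correlation vector):

```latex
\Phi\, \hat{\mathbf w} = \mathbf z
\quad\Longrightarrow\quad
\hat{\mathbf w} = \Phi^{-1} \mathbf z \qquad (\text{if } \Phi^{-1} \text{ exists})
```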

• Energy contained in the time series is

• Or,

• Then the minimum sum of error squares is
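A reconstruction of the missing energy and minimum-error expressions (standard form, notation assumed):

```latex
% Energy of the desired-response time series
\varepsilon_d = \sum_{i=i_1}^{i_2} \lvert d(i) \rvert^{2} = \mathbf d^{H} \mathbf d
% Minimum sum of error squares at the least-squares solution
\varepsilon_{\min} = \varepsilon_d - \mathbf z^{H} \hat{\mathbf w}
                   = \varepsilon_d - \mathbf z^{H} \Phi^{-1} \mathbf z
```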

• Property I: The correlation matrix Φ is Hermitian symmetric,

• Property II: The correlation matrix Φ is nonnegative definite,

• Property III: The correlation matrix Φ is nonsingular iff det(Φ) is nonzero

• Property IV: The eigenvalues of the correlation matrix Φ are real and non-negative.

• Property V: The correlation matrix Φ is the product of two rectangular Toeplitz matrices that are the Hermitian transpose of each other.

• But we know that

which yields

• Substituting into the minimum sum of error squares expression gives

then

! Pseudo-inverse !
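The chain of substitutions on this slide is missing; a reconstruction under the standard data-matrix formulation (A the windowed data matrix, d the desired-response vector):

```latex
% Correlations expressed through the data matrix
\Phi = A^{H} A, \qquad \mathbf z = A^{H} \mathbf d
% which yields the least-squares solution in pseudo-inverse form
\hat{\mathbf w} = (A^{H}A)^{-1} A^{H} \mathbf d = A^{+} \mathbf d
% and, substituting into the minimum sum of error squares,
\varepsilon_{\min} = \mathbf d^{H}\mathbf d - \mathbf d^{H} A\,(A^{H}A)^{-1} A^{H} \mathbf d
```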

• The LS estimate of d is given by

• The matrix

is a projection operator

• onto the linear space spanned by the columns of data matrix A

• i.e. the space Ui.

• The orthogonal complement projector is
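The projector expressions referred to above, reconstructed in standard form (a sketch; the vector notation is assumed):

```latex
% LS estimate of the desired-response vector
\hat{\mathbf d} = A\,\hat{\mathbf w} = A\,(A^{H}A)^{-1}A^{H}\,\mathbf d = P\,\mathbf d
% Projection operator onto the column space of the data matrix A (the space U_i)
P = A\,(A^{H}A)^{-1}A^{H}
% Orthogonal complement projector, giving the minimum-error vector
I - P, \qquad \mathbf e_{\min} = (I - P)\,\mathbf d
```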

Projection - Example

• M=2 tap filter, N=4 → N-M+1=3

• Let

• Then

• And

orthogonal
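The numerical values of this example are not preserved in the transcript; the following numpy sketch uses a hypothetical 3×2 data matrix of the stated size to show the same point, namely that the projected estimate and the residual are orthogonal:

```python
import numpy as np

# Hypothetical data matrix: M = 2 taps, N = 4 samples -> K = N - M + 1 = 3 rows
# (the actual numbers from the slide are not available).
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, 2.0]])
d = np.array([1.0, 2.0, 2.0])

w_hat = np.linalg.pinv(A) @ d            # least-squares tap weights, w = A^+ d
P = A @ np.linalg.inv(A.T @ A) @ A.T     # projector onto the column space of A
d_hat = P @ d                            # LS estimate of d (equals A @ w_hat)
e_min = (np.eye(3) - P) @ d              # minimum-error component

print(np.allclose(d_hat, A @ w_hat))     # True
print(np.isclose(d_hat @ e_min, 0.0))    # True: the two components are orthogonal
```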

• LS always has a solution, is that solution unique?

• The least-squares estimate is unique if and only if the nullity (the dimension of the null space) of the data matrix A equals zero.

• A is K×M, (K=N-M+1)

• Solution is unique when A is of full column rank, K≥M

• All columns of A are linearly independent

• Overdetermined system (more eqns. than variables (taps))

• (A^H A) is nonsingular → (A^H A)^-1 exists and the solution is unique

• Infinitely many solutions when A has linearly dependent columns, K<M

• (A^H A) is singular, so its inverse does not exist

• Property I: The least-squares estimate is unbiased, provided that the measurement error process eo(i) has zero mean.

• Property II: When the measurement error process eo(i) is white with zero mean and variance σ², the covariance matrix of the least-squares estimate equals σ²Φ⁻¹.

• Property III: When the measurement error process eo(i) is white with zero mean, the least-squares estimate is the best linear unbiased estimate.

• Property IV: When the measurement error process eo(i) is white and Gaussian with zero mean, the least-squares estimate achieves the Cramér-Rao lower bound for unbiased estimates.
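A brief sketch behind Properties I and II, assuming the measurement model d = A wo + eo with white, zero-mean eo (this derivation is not on the slides):

```latex
\hat{\mathbf w} = (A^{H}A)^{-1}A^{H}\mathbf d
               = \mathbf w_o + (A^{H}A)^{-1}A^{H}\mathbf e_o
\;\Rightarrow\;
\mathbb{E}[\hat{\mathbf w}] = \mathbf w_o \quad \text{(unbiased)}
\operatorname{cov}[\hat{\mathbf w}]
 = (A^{H}A)^{-1}A^{H}\,\mathbb{E}[\mathbf e_o \mathbf e_o^{H}]\,A\,(A^{H}A)^{-1}
 = \sigma^{2}(A^{H}A)^{-1} = \sigma^{2}\,\Phi^{-1}
```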

• The rank (W) of a K×N (K≥N or K<N) matrix A gives

• The number of linearly independent columns/rows

• The number of non-zero eigenvalues/singular values

• The matrix is said to be full rank (full column or row rank) if W = min(K, N)

• Otherwise, it is said to be rank-deficient

• Rank is an important parameter for matrix inversion

• If K=N (square matrix) and the matrix is full rank (W=K=N), i.e. non-singular, the inverse of the matrix can be calculated as A^-1 = adj(A)/det(A)

• If the matrix is not square (K≠N), and/or it is rank-deficient (singular), A^-1 does not exist; instead we can use the pseudo-inverse (a generalisation of the inverse), A^+

• We can calculate the pseudo-inverse using SVD.

• Any KxN matrix (K≥N or K<N) can be decomposed using the Singular Value Decomposition (SVD) as follows:
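The decomposition itself is missing from the transcript; in the usual notation (U and V unitary, Σ a K×N matrix carrying the non-negative singular values on its diagonal):

```latex
A = U\,\Sigma\,V^{H},
\qquad U \in \mathbb{C}^{K \times K},\;
\Sigma \in \mathbb{R}^{K \times N},\;
V \in \mathbb{C}^{N \times N}
```

And a small numpy sketch (not from the slides) that forms the pseudo-inverse from the SVD and checks it against the library routine; the numbers and the cutoff used to decide which singular values count as zero are assumptions:

```python
import numpy as np

# Exactly rank-deficient example: the last column repeats the third one,
# so this 5x4 matrix has rank 3 (values are hypothetical, for illustration).
A = np.array([[1.0, 2.0, 3.0, 3.0],
              [4.0, 5.0, 6.0, 6.0],
              [7.0, 8.0, 9.0, 9.0],
              [1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 0.0, 0.0]])

# A = U S V^H  ->  A^+ = V S^+ U^H, where S^+ inverts the non-zero singular
# values and sets the (numerically) zero ones to zero.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
tol = 1e-15 * s.max()                      # same default cutoff np.linalg.pinv uses
s_inv = np.where(s > tol, 1.0 / s, 0.0)
A_pinv = Vh.conj().T @ np.diag(s_inv) @ U.conj().T

print(np.linalg.matrix_rank(A))                # 3
print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True
```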

• The system of eqn.s,

• is overdetermined if K>N, more eqn.s than unknowns,

• Unique solution (if A is full-rank)

• Non-unique, infinitely many solutions (if A is rank-deficient)

• is underdetermined if K<N, more unknowns than eqn.s,

• Non-unique, infinitely many solutions

• In either case the solution(s) is(are)

where
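The solution expression referred to here is missing; writing the system as Aw = d and letting W = rank(A), a reconstruction in the usual SVD notation is:

```latex
% Pseudo-inverse (SVD) solution, valid in both the over- and underdetermined cases
\hat{\mathbf w} = A^{+}\mathbf d = V\,\Sigma^{+}\,U^{H}\mathbf d,
\qquad
\Sigma^{+} = \operatorname{diag}\!\bigl(\sigma_1^{-1}, \dots, \sigma_W^{-1}, 0, \dots, 0\bigr)
```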

• Find the solution of (A: KxM)

• If K>M and rank(A)=M, ( ) the unique solution is

• Otherwise, there are infinitely many solutions, but the pseudo-inverse gives the minimum-norm solution to the least squares problem.

• Shortest length possible in the Euclidean norm sense.
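A small numpy sketch (hypothetical numbers) illustrating the minimum-norm property for an underdetermined system: the pseudo-inverse solution satisfies the equations and is shorter, in the Euclidean sense, than any other solution obtained by adding a null-space component:

```python
import numpy as np

# Underdetermined system: K = 2 equations, M = 4 unknowns -> infinitely many solutions.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
d = np.array([1.0, 2.0])

w_min = np.linalg.pinv(A) @ d                   # minimum-norm solution, w = A^+ d
print(np.allclose(A @ w_min, d))                # True: it satisfies the equations

# Any other solution differs by a null-space vector of A and is longer.
_, _, Vh = np.linalg.svd(A)
n = Vh[-1]                                      # a unit vector with A @ n ~ 0
w_other = w_min + 0.5 * n
print(np.allclose(A @ w_other, d))              # True: still a solution
print(np.linalg.norm(w_min) < np.linalg.norm(w_other))  # True: pinv solution is shortest
```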