### Tutorial 7: SVD and Total Least Squares

By sumi

We already know that the eigenvectors of a square matrix A form a convenient basis for working with A.

However, for a rectangular matrix A (m × n, m ≠ n), dim(Ax) ≠ dim(x) and the concept of eigenvectors does not apply.

Yet AᵀA (n × n) is a real symmetric matrix (for real A), and therefore there is an orthonormal basis of its eigenvectors {v_k}: AᵀA v_k = λ_k v_k.

Consider the vectors Av_k. They are mutually orthogonal, since:

(Av_i)ᵀ(Av_j) = v_iᵀ AᵀA v_j = λ_j v_iᵀ v_j = λ_j δ_ij.

Since AᵀA is positive semidefinite (xᵀAᵀAx = ‖Ax‖² ≥ 0), its eigenvalues satisfy λ_k ≥ 0.

Define the singular values of A as σ_k = √λ_k, and order them in non-increasing order:

σ_1 ≥ σ_2 ≥ … ≥ σ_n ≥ 0.

Motivation: one can see that if A itself is square, symmetric and positive semidefinite, then {v_k, σ_k} are the set of its own eigenvectors and eigenvalues.

For a general matrix A, assume σ_1 ≥ σ_2 ≥ … ≥ σ_r > 0 = σ_{r+1} = σ_{r+2} = … = σ_n (r = rank(A)).

Now we can write the (reduced) SVD of A:

A = UΣVᵀ = σ_1 u_1 v_1ᵀ + σ_2 u_2 v_2ᵀ + … + σ_r u_r v_rᵀ,

where u_k = Av_k / σ_k for k = 1, …, r, U = [u_1 … u_r], V = [v_1 … v_r], and Σ = diag(σ_1, …, σ_r).
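This construction is easy to verify numerically. A minimal NumPy sketch (the tutorial itself uses Matlab; the random rectangular matrix here is only an illustration), building U, Σ, V from the eigen-decomposition of AᵀA:

```python
import numpy as np

# Illustrative random rectangular matrix (assumption, not from the slides)
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

lam, V = np.linalg.eigh(A.T @ A)        # eigenvalues of A^T A, ascending
order = np.argsort(lam)[::-1]           # reorder to non-increasing
lam, V = lam[order], V[:, order]
sigma = np.sqrt(np.clip(lam, 0.0, None))  # singular values sigma_k = sqrt(lambda_k)

U = (A @ V) / sigma                     # u_k = A v_k / sigma_k (all sigma_k > 0 here)
A_rebuilt = U @ np.diag(sigma) @ V.T    # A = U Sigma V^T

print(np.allclose(A, A_rebuilt))                                # True
print(np.allclose(sigma, np.linalg.svd(A, compute_uv=False)))   # True
```

The reconstruction matches A, and the σ_k computed from AᵀA agree with NumPy's own SVD.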

Let us find the SVD for the matrix A.

In order to find V, we calculate the eigenvectors of AᵀA:

det(AᵀA − λI) = (5 − λ)² − 9 = 0, so λ_1 = 8, λ_2 = 2, and hence σ_1 = 2√2, σ_2 = √2.

The corresponding eigenvectors are found by solving (AᵀA − λ_k I)v_k = 0; for AᵀA = [5 3; 3 5] this gives v_1 = (1, 1)ᵀ/√2 and v_2 = (1, −1)ᵀ/√2.
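The slide's matrix itself is shown only as an image; as an assumption consistent with the characteristic equation above, take A = [[2, 2], [−1, 1]], for which AᵀA = [[5, 3], [3, 5]]. A quick NumPy check:

```python
import numpy as np

# Assumed example matrix (hypothetical; chosen so that A^T A = [[5, 3], [3, 5]])
A = np.array([[2.0, 2.0],
              [-1.0, 1.0]])

print(A.T @ A)                              # A^T A = [[5, 3], [3, 5]]
print(np.linalg.svd(A, compute_uv=False))   # [2*sqrt(2), sqrt(2)]
```

The singular values 2√2 ≈ 2.828 and √2 ≈ 1.414 match σ_1 = √8 and σ_2 = √2 from the hand calculation.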

Consider again (see Tutorial 4) the set of data points {(t_i, b_i)}, i = 1, …, n, and the problem of approximating this set by a line b = x_1 + x_2 t.

In the Least Squares (LS) approach, we defined the set of equations x_1 + x_2 t_i = b_i, i.e. Ax = b, where the i-th row of A is (1, t_i).

If the system is overdetermined and has no exact solution, then the LS solution minimizes the sum of squared errors:

Σ_i (x_1 + x_2 t_i − b_i)².

This approach assumes that in the set of points the values of b_i are measured with errors while the values of t_i are exact, as demonstrated in the figure.

Assume that we rewrite the line equation in the form t = y_1 + y_2 b. Then the corresponding LS equations become Cy = t, with rows (1, b_i) in C, corresponding to minimization of

Σ_i (y_1 + y_2 b_i − t_i)².

This assumes noise in t_i instead of b_i, and it generally leads to a different solution.

Consider the following Matlab code:

```matlab
% Create the data
x = (0:0.01:2)'; y = 0.5*x + 4;
xn = x + randn(201,1)*0.3;
yn = y + randn(201,1)*0.3;
figure(1); clf; plot(x, y, 'r'); hold on;
grid on; plot(xn, yn, '+');

% LS - version 1 - horizontal coordinate assumed exact
A = [ones(201,1), xn]; b = yn;
param = inv(A'*A)*A'*b;     % normal equations (A\b is preferred numerically)
plot(xn, A*param, 'g');

% LS - version 2 - vertical coordinate assumed exact
C = [ones(201,1), yn]; t = xn;
param = inv(C'*C)*C'*t;
plot(C*param, yn, 'b');
```
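A NumPy re-implementation of the same experiment (plotting omitted; `np.linalg.lstsq` replaces the explicit normal equations, and the random seed is an arbitrary assumption), showing that the two regressions give different slope estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(0, 2.01, 0.01)
y = 0.5 * x + 4
xn = x + rng.standard_normal(x.size) * 0.3   # noisy horizontal coordinate
yn = y + rng.standard_normal(x.size) * 0.3   # noisy vertical coordinate

# Version 1: noise assumed only in yn -> regress yn on xn
A = np.column_stack([np.ones_like(xn), xn])
c1, m1 = np.linalg.lstsq(A, yn, rcond=None)[0]

# Version 2: noise assumed only in xn -> regress xn on yn, then invert
C = np.column_stack([np.ones_like(yn), yn])
d1, d2 = np.linalg.lstsq(C, xn, rcond=None)[0]
m2 = 1.0 / d2          # slope of that line back in (x, y) coordinates

print(m1, m2)          # two different slope estimates for the same data
```

With noise in both coordinates, version 1 underestimates the slope and version 2 overestimates it (m1 < m2 always holds here, by the Cauchy-Schwarz inequality).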

To solve the problem with noise along both t_i and b_i, we rewrite the line equation as

a_1(t_i − t̄) + a_2(b_i − b̄) = 0,

where t̄ and b̄ are the means of {t_i} and {b_i}.

Now we can write the homogeneous system Aa = 0, where the i-th row of A is (t_i − t̄, b_i − b̄).

An exact solution of this system is possible only if the points (t_i, b_i) lie on the same line; in this case rank(A) = 1. This formulation is symmetric with respect to t and b.

The rank of A is 2, since the points are noisy and do not lie on the same line. SVD factorization and zeroing of the second singular value allow us to construct the matrix A_1 closest to A (in the spectral and Frobenius norms) with rank(A_1) = 1, by the Eckart-Young theorem.
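A small NumPy sketch of this rank-reduction step (on illustrative random data, an assumption for the demo), checking that zeroing σ_2 gives a rank-1 matrix whose spectral distance from A equals the dropped singular value:

```python
import numpy as np

# Illustrative noisy two-column data matrix (assumption, not the slide's data)
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 2))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A1 = U @ np.diag([s[0], 0.0]) @ Vt          # zero the second singular value

print(np.linalg.matrix_rank(A1))                        # 1
print(np.isclose(np.linalg.norm(A - A1, 2), s[1]))      # True
```

The error A − A1 is exactly σ_2 u_2 v_2ᵀ, so its spectral norm is σ_2; no rank-1 matrix can do better.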

The geometric interpretation of the TLS method is finding the line coefficients a and a set of points {(t̂_i, b̂_i)} on that line, such that these points lie closest, in the L2 sense, to the data set {(t_i, b_i)}:

```matlab
% TLS: center the data, then take the closest rank-1 approximation
xnM = mean(xn);
ynM = mean(yn);
A = [xn - xnM, yn - ynM];
[U, D, V] = svd(A);
D(2,2) = 0;                      % zero the second singular value
Anew = U*D*V';                   % projected points, in centered coordinates
plot(Anew(:,1) + xnM, Anew(:,2) + ynM, 'r');
```
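A NumPy version of this TLS fit (plotting omitted; the data is regenerated with an assumed seed and the same noise level as the earlier Matlab snippet). The line direction is the first right singular vector of the centered data, and with equal noise in both coordinates the fitted slope comes out near the true value 0.5:

```python
import numpy as np

# Regenerate noisy data around the line y = 0.5*x + 4 (seed is an assumption)
rng = np.random.default_rng(3)
x = np.arange(0, 2.01, 0.01)
xn = x + rng.standard_normal(x.size) * 0.3
yn = 0.5 * x + 4 + rng.standard_normal(x.size) * 0.3

# Center the data and take the best rank-1 approximation
A = np.column_stack([xn - xn.mean(), yn - yn.mean()])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A1 = U @ np.diag([s[0], 0.0]) @ Vt   # TLS-projected points (centered coordinates)

# Slope of the TLS line = direction of the first right singular vector
slope = Vt[0, 1] / Vt[0, 0]
print(slope)    # close to the true slope 0.5
```

Unlike the two ordinary LS fits, this estimate treats t and b symmetrically, which is exactly the point of TLS.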
