
SYSTEMS Identification






Presentation Transcript


  1. SYSTEMS Identification. Ali Karimpour, Assistant Professor, Ferdowsi University of Mashhad. Reference: “System Identification: Theory for the User”, Lennart Ljung.

  2. Lecture 7: Parameter Estimation Method. Topics to be covered include: • Guiding Principles Behind Parameter Estimation Method. • Minimizing Prediction Error. • Linear Regressions and the Least-Squares Method. • A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. • Correlating Prediction Errors with Past Data. • Instrumental Variable Methods.

  3. Parameter Estimation Method. Topics to be covered include: • Guiding Principles Behind Parameter Estimation Method. • Minimizing Prediction Error. • Linear Regressions and the Least-Squares Method. • A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. • Correlating Prediction Errors with Past Data. • Instrumental Variable Methods.

  4. Guiding Principles Behind Parameter Estimation Method. Suppose that we have selected a certain model structure M, with models M(θ) parameterized by a vector θ ranging over a set D_M. For each θ, the model represents a way of predicting future outputs: the predictor is a linear filter acting on past inputs and outputs, as sketched below.
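A sketch of the slide's missing formulas, in Ljung's standard notation (the symbols W_u, W_y, G, H and D_M follow the book and are not reproduced from the slide images):

\mathcal{M}^* = \{\,\mathcal{M}(\theta) \mid \theta \in D_{\mathcal{M}}\,\}, \qquad D_{\mathcal{M}} \subset \mathbb{R}^d

\hat{y}(t \mid \theta) = W_u(q,\theta)\,u(t) + W_y(q,\theta)\,y(t)

where, for a model y(t) = G(q,\theta)u(t) + H(q,\theta)e(t),

W_u(q,\theta) = H^{-1}(q,\theta)\,G(q,\theta), \qquad W_y(q,\theta) = 1 - H^{-1}(q,\theta)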

  5. Guiding Principles Behind Parameter Estimation Method. Suppose that we collect a set of data Z^N from the system. Formally, we are going to find a map from the data Z^N to the set D_M; such a mapping is a parameter estimation method. The data set and the map are written out below.
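The data set and the estimator map, in the standard notation (a sketch of the slide's equation images):

Z^N = \{\,u(1), y(1), \ldots, u(N), y(N)\,\}

Z^N \;\longmapsto\; \hat{\theta}_N \in D_{\mathcal{M}}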

  6. Guiding Principles Behind Parameter Estimation Method. Evaluating the candidate model: let us define the prediction error as ε(t,θ) = y(t) − ŷ(t|θ). Based on Z^t we can compute the prediction error ε(t,θ); when the data set Z^N is known, these errors can be computed for t = 1, 2, …, N. A guiding principle for parameter estimation is: select θ̂_N so that the prediction errors ε(t,θ̂_N), t = 1, 2, …, N, become as small as possible. We describe two approaches: • Form a scalar-valued criterion function that measures the size of ε (equations 7.2 to 7.4). • Make ε uncorrelated with a given data sequence (equations 7.5 and 7.6). Both are written out below.
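The two approaches as equations (a reconstruction; ζ(t) denotes the chosen data sequence):

\varepsilon(t,\theta) = y(t) - \hat{y}(t \mid \theta)

\hat{\theta}_N = \arg\min_{\theta \in D_{\mathcal{M}}} V_N(\theta, Z^N) \qquad \text{(criterion minimization)}

\frac{1}{N}\sum_{t=1}^{N} \zeta(t)\,\varepsilon(t,\hat{\theta}_N) = 0 \qquad \text{(correlation condition)}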

  7. Models of linear time invariant system Topics to be covered include: • Guiding Principles Behind Parameter Estimation Method. • Minimizing Prediction Error. • Linear Regressions and the Least-Squares Method. • A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. • Correlation Prediction Errors with Past Data. • Instrumental Variable Methods.

  8. Minimizing Prediction Error. Let us filter the prediction error through a stable linear filter L(q), and then measure its size with the following norm, where l(·) is a scalar-valued positive function. Clearly, the size of the prediction error measured this way depends on θ as well as on the data Z^N. The estimate is then defined by minimizing this criterion, as written out below.
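The filtered error, the criterion, and the estimate in Ljung's standard form (a reconstruction of the slide's equation images):

\varepsilon_F(t,\theta) = L(q)\,\varepsilon(t,\theta)

V_N(\theta, Z^N) = \frac{1}{N}\sum_{t=1}^{N} l\big(\varepsilon_F(t,\theta)\big)

\hat{\theta}_N = \hat{\theta}_N(Z^N) = \arg\min_{\theta \in D_{\mathcal{M}}} V_N(\theta, Z^N)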

  9. Minimizing Prediction Error. Generally, the term prediction-error identification methods (PEM) is used for this family of approaches. Particular methods, with specific names, are distinguished by: • Choice of l(·) • Choice of L(·) • Choice of model structure • Method by which the minimization is realized

  10. Minimizing Prediction Error. Choice of L: the effect of L is best understood in a frequency-domain interpretation; L acts like a frequency weighting (see also Section 14.4 on prefiltering). Exercise: Consider the system y(t) = G(q,θ)u(t) + H(q,θ)e(t). Show that the effect of prefiltering by L is identical to changing the noise model from H(q,θ) to H(q,θ)/L(q).
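For the quadratic norm, Parseval's relation makes the frequency weighting explicit, and substituting the noise model H/L shows the prefiltering equivalence (a sketch under the standard model y = Gu + He; Φ_ε denotes the spectrum of the prediction error):

\bar{E}\,\varepsilon_F^2(t,\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \big|L(e^{i\omega})\big|^2\,\Phi_\varepsilon(\omega;\theta)\,d\omega

L(q)\,\varepsilon(t,\theta) = L(q)H^{-1}(q,\theta)\big[y(t) - G(q,\theta)u(t)\big] = \tilde{H}^{-1}(q,\theta)\big[y(t) - G(q,\theta)u(t)\big], \qquad \tilde{H} = H/L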

  11. Minimizing Prediction Error. Choice of l: a standard choice, which is convenient both for computation and analysis, is the quadratic norm l(ε) = ε²/2. See also Section 15.2 on the choice of norms and robustness against bad data. One can also parameterize the norm independently of the model parameterization.

  12. Parameter Estimation Method. Topics to be covered include: • Guiding Principles Behind Parameter Estimation Method. • Minimizing Prediction Error. • Linear Regressions and the Least-Squares Method. • A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. • Correlating Prediction Errors with Past Data. • Instrumental Variable Methods.

  13. Linear Regressions and the Least-Squares Method. We introduced linear regressions before: the predictor is ŷ(t|θ) = φᵀ(t)θ + μ(t), where φ is the regression vector (for the ARX structure it contains lagged outputs and inputs) and μ(t) is a known, data-dependent vector. For simplicity, set μ(t) = 0 in the remainder of this section. Least-squares criterion: now let L(q) = 1 and l(ε) = ε²/2; the criterion below is then the least-squares criterion for the linear regression.
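The regression quantities and the least-squares criterion (reconstructed; the ARX regressor follows the book's sign convention):

\hat{y}(t \mid \theta) = \varphi^{T}(t)\,\theta

\varphi(t) = \big[-y(t-1)\ \cdots\ -y(t-n_a)\ \ u(t-1)\ \cdots\ u(t-n_b)\big]^{T}

V_N(\theta, Z^N) = \frac{1}{N}\sum_{t=1}^{N} \tfrac{1}{2}\big(y(t) - \varphi^{T}(t)\theta\big)^2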

  14. Linear Regressions and the Least-Squares Method. Least-squares criterion: the least-squares estimate (LSE) minimizes this criterion and has the closed form given below.
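The closed-form estimate, in the standard notation:

\hat{\theta}_N^{\mathrm{LS}} = \left[\frac{1}{N}\sum_{t=1}^{N}\varphi(t)\varphi^{T}(t)\right]^{-1} \frac{1}{N}\sum_{t=1}^{N}\varphi(t)\,y(t)

A minimal numerical sketch of this estimate for a first-order ARX-type model; all names and values are illustrative, not from the slides:

import numpy as np

# simulate y(t) = a*y(t-1) + b*u(t-1) + e(t) (signs simplified vs. the ARX convention above)
rng = np.random.default_rng(0)
N, a_true, b_true = 500, 0.7, 1.5
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.1 * rng.standard_normal()

# regression vector phi(t) = [y(t-1), u(t-1)]^T, regressand y(t)
Phi = np.column_stack([y[:-1], u[:-1]])
Y = y[1:]

# least-squares estimate; lstsq solves the normal equations in a numerically stable way
theta_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print(theta_hat)  # expect values close to [0.7, 1.5]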

  15. Linear Regressions and the Least-Squares Method. Properties of LSE: the least-squares method is a special case of PEM (the prediction-error method), obtained with L(q) = 1 and l(ε) = ε²/2, so the general PEM results apply to it.

  16. Linear Regressions and the Least-Squares Method. Properties of LSE (continued).

  17. Linear Regressions and the Least-Squares Method. Weighted Least Squares: different measurements could be assigned different weights, reflecting their varying reliability. The resulting estimate has the same closed form as before, with the weights inserted (see below).
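A hedged sketch of the weighted criterion and estimate (the weights are written α_t here; the book uses a similar time-dependent weighting):

V_N(\theta) = \frac{1}{N}\sum_{t=1}^{N} \alpha_t\,\tfrac{1}{2}\big(y(t) - \varphi^{T}(t)\theta\big)^2

\hat{\theta}_N = \left[\sum_{t=1}^{N} \alpha_t\,\varphi(t)\varphi^{T}(t)\right]^{-1} \sum_{t=1}^{N} \alpha_t\,\varphi(t)\,y(t)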

  18. Linear Regressions and the Least-Squares Method. Colored Equation-error Noise: we show that in a difference equation A(q)y(t) = B(q)u(t) + v(t), if the disturbance v(t) is not white noise, then the LSE will not converge to the true values a_i and b_i. To deal with this problem, we may incorporate further modeling of the equation error v(t), as discussed in Chapter 4; say v(t) = K(q)e(t), where e(t) now is white noise. But the new model takes us out of the LS environment, except in two cases: • Known noise properties • High-order models
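The difference equation and the consistency issue, written out (a reconstruction; K(q) matches the noise-model notation used on the next slides):

y(t) + a_1 y(t-1) + \cdots + a_{n_a} y(t-n_a) = b_1 u(t-1) + \cdots + b_{n_b} u(t-n_b) + v(t), \qquad v(t) = K(q)\,e(t)

Consistency \hat{\theta}_N \to \theta_0 requires \bar{E}\,\varphi(t)\,v(t) = 0, which fails for colored v(t) because \varphi(t) contains lagged outputs.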

  19. Linear Regressions and the Least-Squares Method. Colored Equation-error Noise: • Known noise properties. Suppose the values of a_i and b_i are unknown, but K is a known filter (not too realistic a situation). Filtering the difference equation through K⁻¹(q) gives a new difference equation in the filtered signals whose equation error is e(t). Since e(t) is white, the LS method can be applied to the filtered data without problems. Notice that this is equivalent to applying the prefilter L(q) = K⁻¹(q).

  20. Linear Regressions and the Least-Squares Method. Colored Equation-error Noise: • High-order models. Suppose that the noise v can be well described by K(q) = 1/D(q), where D(q) is a polynomial of order r. Multiplying the equation through by D(q) then gives a higher-order ARX model with white equation error, and we can apply the LS method. Note that n_A = n_a + r and n_B = n_b + r (see below).
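The resulting higher-order ARX model, sketched:

D(q)A(q)\,y(t) = D(q)B(q)\,u(t) + e(t), \qquad n_A = n_a + r, \quad n_B = n_b + r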

  21. Linear Regressions and the Least-Squares Method. Consider a state-space model. To derive the system matrices we can either: 1. Parameterize A, B, C, D as in Section 4.3; or 2. assume no insight into the particular structure and look for any suitable matrices A, B, C, D. Note: since there are an infinite number of such matrices that describe the same system (related by similarity transformations), we will have to fix the coordinate basis of the state-space realization.

  22. Linear Regressions and the Least-Squares Method. Consider the state-space model again, with the coordinate basis of the realization still to be fixed. Suppose for a moment that not only y and u are measured but the states x as well (this would, by the way, fix the coordinate basis of the state-space realization). With y, u, and x known, the model becomes a linear regression, shown below, and the LSE follows. But there is a problem: the states are not available for measurement!
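With measured states, the model stacks into one linear regression (standard state-space notation; w and v denote the process and measurement noises):

x(t+1) = A\,x(t) + B\,u(t) + w(t), \qquad y(t) = C\,x(t) + D\,u(t) + v(t)

\begin{bmatrix} x(t+1) \\ y(t) \end{bmatrix} = \Theta \begin{bmatrix} x(t) \\ u(t) \end{bmatrix} + \begin{bmatrix} w(t) \\ v(t) \end{bmatrix}, \qquad \Theta = \begin{bmatrix} A & B \\ C & D \end{bmatrix}

so \Theta can be estimated by least squares from the stacked data.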

  23. Linear Regressions and the Least-Squares Method. Estimating State-Space Models Using Least-Squares Techniques (Subspace Methods): in subspace algorithms, the state sequence x(t+1) is first derived from the observations; see Chapter 10.

  24. Parameter Estimation Method. Topics to be covered include: • Guiding Principles Behind Parameter Estimation Method. • Minimizing Prediction Error. • Linear Regressions and the Least-Squares Method. • A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. • Correlating Prediction Errors with Past Data. • Instrumental Variable Methods.

  25. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Estimation and the Principle of Maximum Likelihood: the area of statistical inference deals with the problem of extracting information from observations that themselves could be unreliable. Suppose that the observation y^N = (y(1), y(2), …, y(N)) has the probability density function (PDF) f_y(θ; x^N), where θ is a d-dimensional parameter vector. The purpose of the observation is in fact to estimate the vector θ using y^N, via an estimator θ̂ = F(y^N). If the observed value of y^N is y*^N, then the estimate is θ̂* = F(y*^N).

  26. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Estimation and the Principle of Maximum Likelihood: many such estimator functions are possible. A particular one is the maximum likelihood estimator (MLE). The probability that the realization (= observation) should indeed take the value y*^N is proportional to f_y(θ; y*^N). This is a deterministic function of θ once the numerical value y*^N is inserted, and it is called the likelihood function. A reasonable estimator of θ is then the maximizer of the likelihood function, where the maximization is performed for fixed y*^N. This estimator, written below, is known as the maximum likelihood estimator (MLE).
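In symbols (the standard definition):

\hat{\theta}_{\mathrm{ML}}(y_*^N) = \arg\max_{\theta} f_y(\theta;\, y_*^N)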

  27. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Example: let y(i), i = 1, …, N, be independent random variables, normally distributed with unknown mean θ₀ and known variances λ_i. A common estimator is the sample mean θ̂ = (1/N)Σ y(i). To calculate the MLE, we start by determining the joint PDF of the observations. The PDF of y(i) is the Gaussian density with mean θ and variance λ_i; since the y(i) are independent, the joint PDF of the observations is the product of the individual densities.

  28. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Example (continued): the likelihood function is this joint PDF, viewed as a function of θ for the observed data. Maximizing the likelihood function is the same as maximizing its logarithm; carrying this out gives the MLE below.
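The likelihood and its maximizer for this example (a standard computation):

f_y(\theta;\, y^N) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\lambda_i}} \exp\!\left(-\frac{(y(i)-\theta)^2}{2\lambda_i}\right)

\log f_y(\theta;\, y^N) = \mathrm{const} - \sum_{i=1}^{N} \frac{(y(i)-\theta)^2}{2\lambda_i}

\hat{\theta}_{\mathrm{ML}} = \frac{\sum_{i=1}^{N} y(i)/\lambda_i}{\sum_{i=1}^{N} 1/\lambda_i}

a variance-weighted mean, which reduces to the sample mean when all λ_i are equal.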

  29. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Example (continued): suppose N = 15 and the y(i) are drawn from normal distributions with mean 10 and variances 10, 2, 3, 4, 61, 11, 0.1, 121, 10, 1, 6, 9, 11, 13, 15. The estimated means for 10 different experiments are shown in the figure. Exercise: Repeat the procedure for other experiments and draw the corresponding figure. Exercise: Do the same with all variances set to 10.
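A small simulation sketch of this experiment (our own code, not from the slides): it compares the sample mean with the variance-weighted MLE over 10 repeated experiments, using the variances listed above.

import numpy as np

rng = np.random.default_rng(1)
lam = np.array([10, 2, 3, 4, 61, 11, 0.1, 121, 10, 1, 6, 9, 11, 13, 15.0])
theta0 = 10.0  # true mean

for k in range(10):
    # draw one observation per variance, all with mean theta0
    y = theta0 + np.sqrt(lam) * rng.standard_normal(lam.size)
    sample_mean = y.mean()
    mle = (y / lam).sum() / (1.0 / lam).sum()  # variance-weighted mean = Gaussian MLE
    print(f"experiment {k}: sample mean {sample_mean:6.2f}, MLE {mle:6.2f}")

The MLE typically scatters less around 10, since it down-weights the high-variance observations.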

  30. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Relationship to the Maximum A Posteriori (MAP) Estimate: the Bayesian approach leads to another parameter estimation problem. In the Bayesian approach, the parameter itself is thought of as a random variable with a prior PDF for θ. After some manipulation (Bayes's rule), the maximum a posteriori (MAP) estimate maximizes the posterior density, as sketched below.
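The MAP estimate, sketched with g(θ) standing for the prior PDF (the symbol g is ours, not the slide's):

\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta}\ f_y(y_*^N \mid \theta)\,g(\theta) = \arg\max_{\theta}\ \big[\log f_y(y_*^N \mid \theta) + \log g(\theta)\big]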

  31. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Cramér-Rao Inequality: the quality of an estimator can be assessed by its mean-square error matrix P, measured about the true value θ₀ of θ. We may be interested in selecting estimators that make P small. The Cramér-Rao inequality gives a lower bound for P in terms of the Fisher information matrix M (see below).
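The inequality and the information matrix, written out (the standard form, which applies to unbiased estimators):

P = E\big(\hat{\theta}(y^N) - \theta_0\big)\big(\hat{\theta}(y^N) - \theta_0\big)^{T} \;\geq\; M^{-1}

M = E\left[\frac{\partial}{\partial\theta}\log f_y(\theta;\, y^N)\right]\left[\frac{\partial}{\partial\theta}\log f_y(\theta;\, y^N)\right]^{T}\Bigg|_{\theta=\theta_0}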

  32. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Asymptotic Properties of the MLE: calculating the exact distribution of the MLE is not an easy task, so limiting properties as the sample size tends to infinity are calculated instead. Suppose that the random variables {y(i)} are independent and identically distributed, and that the distribution of y^N is given by f_y(θ₀; x^N) for some value θ₀. For the MLE in the case of independent observations, Wald and Cramér obtained: θ̂_ML tends to θ₀ with probability 1 as N tends to infinity, and √N(θ̂_ML − θ₀) converges in distribution to the normal distribution with zero mean and covariance matrix given by the Cramér-Rao lower bound M⁻¹.

  33. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Probabilistic Models of Dynamical Systems: suppose the model supplies, for each θ, both a predictor ŷ(t|θ) and an assumed PDF f_e(x; t, θ) for the associated prediction errors; recall that we call this kind of model a complete probabilistic model. We note that the output then is y(t) = ŷ(t|θ) + ε(t,θ). Now we must determine the likelihood function for such a model.

  34. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Lemma: suppose u^t is given as a deterministic sequence, and assume that the generation of y^t is described by the model y(t) = ŷ(t|θ) + ε(t,θ), where the errors ε(t,θ) have the conditional PDF f_e(x; t, θ). Then the joint probability density function for y^t, given u^t, is the product of these conditional densities (I). Proof: the conditional PDF of y(t), given Z^(t−1), is f_e(y(t) − ŷ(t|θ); t, θ). Using Bayes's rule, the joint conditional PDF of y(t) and y(t−1), given Z^(t−2), can be expressed as the product of the two conditional densities; proceeding similarly for t−2, t−3, …, we obtain (I).
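The lemma's conclusion (I) in formula form (a reconstruction; f_e denotes the assumed PDF of the prediction errors):

f(y^t \mid u^t;\, \theta) = \prod_{s=1}^{t} f_e\big(y(s) - \hat{y}(s \mid \theta);\, s, \theta\big)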

  35. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. Probabilistic Models of Dynamical Systems (continued): by the previous lemma, the likelihood function for the dynamical model is the product of the error densities f_e(ε(t,θ); t, θ). Maximizing this function is the same as maximizing its logarithm, normalized by N.

  36. A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. If we define l(ε, θ, t) = −log f_e(ε; t, θ), we may write the MLE as the minimizer of a prediction-error criterion (see below). The ML method can thus be seen as a special case of the PEM. Exercise: Derive a lower bound for the covariance of the estimate (cf. the Cramér-Rao inequality). Exercise: Find the Fisher information matrix for this system.
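The resulting criterion, making the ML-as-PEM connection explicit (standard reconstruction):

\hat{\theta}_{\mathrm{ML}} = \arg\max_{\theta} \frac{1}{N}\sum_{t=1}^{N} \log f_e\big(\varepsilon(t,\theta);\, t, \theta\big) = \arg\min_{\theta} \frac{1}{N}\sum_{t=1}^{N} l\big(\varepsilon(t,\theta), \theta, t\big)

l(\varepsilon, \theta, t) = -\log f_e(\varepsilon;\, t, \theta)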

  37. Parameter Estimation Method. Topics to be covered include: • Guiding Principles Behind Parameter Estimation Method. • Minimizing Prediction Error. • Linear Regressions and the Least-Squares Method. • A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. • Correlating Prediction Errors with Past Data. • Instrumental Variable Methods.

  38. Correlating Prediction Errors with Past Data. Ideally, the prediction error ε(t,θ) for a good model should be independent of the past data Z^(t−1): if ε(t,θ) is correlated with Z^(t−1), then there was more information available in Z^(t−1) about y(t) than was picked up by the predictor ŷ(t|θ). To test whether ε(t,θ) is independent of the data set Z^(t−1), we would have to check that all transformations of ε(t,θ) are uncorrelated with all possible functions of Z^(t−1). This is of course not feasible in practice. Instead, we may select a certain finite-dimensional vector sequence {ζ(t)} derived from Z^(t−1), and a certain transformation of {ε(t,θ)}, and require these to be uncorrelated. The θ̂ so derived would then be the best estimate based on the observed data.

  39. Correlating Prediction Errors with Past Data. The procedure: choose a linear filter L(q) and let ε_F(t,θ) = L(q)ε(t,θ); choose a sequence of correlation vectors ζ(t); choose a function α(ε) and define f_N(θ, Z^N); then calculate θ̂_N as a solution of f_N(θ, Z^N) = 0. The equations are written out below. The instrumental-variable method (next section) is the best-known representative of this family.
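The family in equations (a hedged reconstruction of the slide's formulas):

\varepsilon_F(t,\theta) = L(q)\,\varepsilon(t,\theta)

f_N(\theta, Z^N) = \frac{1}{N}\sum_{t=1}^{N} \zeta(t,\theta)\,\alpha\big(\varepsilon_F(t,\theta)\big)

\hat{\theta}_N = \mathrm{sol}_{\theta \in D_{\mathcal{M}}}\big\{\, f_N(\theta, Z^N) = 0 \,\big\}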

  40. Correlating Prediction Errors with Past Data. Normally, the dimension of ζ would be chosen so that f_N is a d-dimensional vector; then there are as many equations as unknowns. Sometimes one uses a ζ of higher dimension than d, so that there is an overdetermined set of equations, typically without an exact solution. Exercise: Show that the prediction-error estimate obtained by minimizing V_N(θ, Z^N) can also be seen as a correlation estimate for a particular choice of L, ζ, and α.

  41. Correlating Prediction Errors with Past Data. Pseudolinear Regressions: we saw in Chapter 4 that a number of common prediction models can be written as ŷ(t|θ) = φᵀ(t,θ)θ. Since the pseudo-regression vector φ(t,θ) contains relevant past data, it is reasonable to require the resulting prediction errors to be uncorrelated with φ(t,θ). This gives what is termed the PLR estimate (below).
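The PLR estimate as a correlation equation (sketch):

\hat{\theta}_N^{\mathrm{PLR}} = \mathrm{sol}_{\theta}\left\{ \frac{1}{N}\sum_{t=1}^{N} \varphi(t,\theta)\,\big(y(t) - \varphi^{T}(t,\theta)\,\theta\big) = 0 \right\}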

  42. Parameter Estimation Method. Topics to be covered include: • Guiding Principles Behind Parameter Estimation Method. • Minimizing Prediction Error. • Linear Regressions and the Least-Squares Method. • A Statistical Framework for Parameter Estimation and the Maximum Likelihood Method. • Correlating Prediction Errors with Past Data. • Instrumental Variable Methods.

  43. Instrumental Variable Methods. Consider the linear regression ŷ(t|θ) = φᵀ(t)θ. The least-squares estimate of θ satisfies the correlation equations with L(q) = 1, α(ε) = ε, and ζ(t,θ) = φ(t), so it can be seen as a correlation estimate of this family. Now suppose that the data are actually described by y(t) = φᵀ(t)θ₀ + v(t). We found in Section 7.3 that the LSE will not tend to θ₀ in typical cases where v(t) is colored.

  44. Instrumental Variable Methods. A correlation estimate of this kind applied to a linear regression is called an instrumental-variable method. The elements of ζ are then called instruments or instrumental variables. The estimate of θ is given below.
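The IV estimate, written out (the standard form, assuming the indicated inverse exists):

\hat{\theta}_N^{\mathrm{IV}} = \mathrm{sol}_{\theta}\left\{ \frac{1}{N}\sum_{t=1}^{N} \zeta(t)\,\big(y(t) - \varphi^{T}(t)\theta\big) = 0 \right\} = \left[\frac{1}{N}\sum_{t=1}^{N} \zeta(t)\varphi^{T}(t)\right]^{-1} \frac{1}{N}\sum_{t=1}^{N} \zeta(t)\,y(t)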

  45. Instrumental Variable Methods. Exercise: Show that θ̂_N^IV exists and tends to θ₀ if the two conditions below hold.
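The two conditions referred to (reconstructed in the standard form):

\bar{E}\,\zeta(t)\,\varphi^{T}(t)\ \text{is nonsingular}, \qquad \bar{E}\,\zeta(t)\,v(t) = 0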

  46. Instrumental Variable Methods. Choices of instruments: suppose an ARX model A(q)y(t) = B(q)u(t) + v(t). A natural idea is to generate the instruments similarly to the regression vector of this model, but at the same time not let them be influenced by the noise v(t). This leads to instruments built from a linear filter K and a signal x(t) generated from the input through a linear system, as written out below.
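The instrument construction, sketched in the book's notation (K, N, M are linear filters; x is a noise-free, input-driven signal):

\zeta(t) = K(q)\,\big[-x(t-1)\ \cdots\ -x(t-n_a)\ \ u(t-1)\ \cdots\ u(t-n_b)\big]^{T}

N(q)\,x(t) = M(q)\,u(t)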

  47. Instrumental Variable Methods. Most instruments used in practice are generated in this way. Obviously, ζ(t) is obtained from past inputs by linear filtering and can consequently be written as a d-dimensional vector of linear filters acting on u(t).

  48. Instrumental Variable Methods. If the input is generated in open loop, so that it does not depend on the noise in the system, then clearly the property Ē ζ(t)v(t) = 0 holds. Since both the ζ-vector and the φ-vector are generated from the same input sequence, it might be expected that the nonsingularity of Ē ζ(t)φᵀ(t) should hold in general.

  49. Instrumental Variable Methods. Model-dependent Instruments: it may be desirable to choose the filters N and M to equal those of the true system. They are clearly not known, but we may let the instruments depend on the parameter θ in the obvious way: generate ζ(t,θ) with the current model's polynomials in place of N and M.

  50. Instrumental Variable Methods. In general, we could write the generation of the instruments as ζ(t,θ) = 𝒦(q,θ)u(t), where 𝒦(q,θ) is a d-dimensional column vector of linear filters. The IV method can then be summarized as below.
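A hedged summary formula (the filter vector 𝒦 and the prefilter L follow the earlier notation):

\zeta(t,\theta) = \mathcal{K}(q,\theta)\,u(t)

\hat{\theta}_N^{\mathrm{IV}} = \mathrm{sol}_{\theta}\left\{ \frac{1}{N}\sum_{t=1}^{N} \zeta(t,\theta)\,L(q)\big[y(t) - \varphi^{T}(t)\theta\big] = 0 \right\}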
