
CHAPTER 3: RECURSIVE ESTIMATION FOR LINEAR MODELS

Slides for Introduction to Stochastic Search and Optimization (ISSO) by J. C. Spall.


Presentation Transcript


  1. Slides for Introduction to Stochastic Search and Optimization (ISSO) by J. C. Spall CHAPTER 3: RECURSIVE ESTIMATION FOR LINEAR MODELS • Organization of chapter in ISSO • Linear models • Relationship between least-squares and mean-square • LMS and RLS estimation • Applications in adaptive control • LMS, RLS, and Kalman filter for time-varying solution • Case study: Oboe reed data

  2. Basic Linear Model • Consider estimation of vector $\theta$ in a model that is linear in $\theta$ • Model has classical linear form $z_k = h_k^T \theta + v_k$, where $z_k$ is the kth measurement, $h_k$ is the corresponding “design vector,” and $v_k$ is an unknown noise value • Model used extensively in control, statistics, signal processing, etc. • Many estimation/optimization criteria based on “squared-error”-type loss functions • Leads to criteria that are quadratic in $\theta$ • Unique (global) estimate
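To make the model concrete, here is a minimal Python/NumPy sketch that simulates measurements from the classical linear form above; the parameter vector, design vectors, and noise level are illustrative choices, not values from ISSO:

```python
import numpy as np

# Minimal sketch of the linear measurement model z_k = h_k^T theta + v_k.
# theta_true, the design vectors, and the noise level are illustrative.
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])   # unknown parameter vector (p = 3)
n = 100                                   # number of measurements

H = rng.normal(size=(n, 3))               # rows of H are the design vectors h_k^T
v = 0.1 * rng.normal(size=n)              # unknown noise values v_k
z = H @ theta_true + v                    # measurements z_1, ..., z_n
```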

  3. Least-Squares Estimation • Most common method for estimating  in linear model is by method of least squares • Criterion (loss function) has form where Zn = [z1, z2,…, zn]T and Hnis n  p concatenated matrix of hkT row vectors • Classical batch least-squares estimate is • Popular recursive estimates (LMS, RLS, Kalman filter) may be derived from batch estimate
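A short sketch of the batch estimate on the simulated data setup above; using `np.linalg.lstsq` rather than explicitly forming $(H_n^T H_n)^{-1}$ is a standard numerical choice:

```python
import numpy as np

# Batch least-squares: theta_hat = (H_n^T H_n)^{-1} H_n^T Z_n.
# Synthetic data (theta_true, noise level) are illustrative assumptions.
rng = np.random.default_rng(1)
n, p = 200, 3
theta_true = np.array([1.0, -2.0, 0.5])
H = rng.normal(size=(n, p))                      # n x p matrix of h_k^T rows
z = H @ theta_true + 0.1 * rng.normal(size=n)    # Z_n

# lstsq solves the least-squares problem without forming the explicit inverse
theta_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
print(theta_hat)                                 # close to theta_true
```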

  4. Geometric Interpretation of Least-Squares Estimate when p = 2 and n = 3

  5. Recursive Estimation • Batch form not convenient in many applications • E.g., data arrive over time and want “easy” way to update estimate at time k to estimate at time k+1 • Least-mean-squares (LMS) method is very popular recursive method • Stochastic analogue of steepest descent algorithm • LMS recursion: $\hat{\theta}_{k+1} = \hat{\theta}_k + a\,h_{k+1}\left(z_{k+1} - h_{k+1}^T\hat{\theta}_k\right)$, where $a > 0$ is the step size (gain) • Convergence theory based on stochastic approximation (e.g., Ljung et al., 1992; Gerencsér, 1995) • Less rigorous theory based on connections to steepest descent (ignores noise) (Widrow and Stearns, 1985; Haykin, 1996)
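The recursion is simple to implement. Below is a hedged sketch with a constant step size `a` (one common choice; decaying gains also appear in the stochastic-approximation theory cited above); all numerical values are illustrative:

```python
import numpy as np

# LMS sketch: theta_hat <- theta_hat + a * h_k * (z_k - h_k^T theta_hat).
# Constant step size a and all data-generating values are illustrative.
rng = np.random.default_rng(2)
p, n, a = 3, 2000, 0.01
theta_true = np.array([1.0, -2.0, 0.5])
theta_hat = np.zeros(p)                          # initial estimate

for _ in range(n):
    h = rng.normal(size=p)                       # design vector h_k
    z = h @ theta_true + 0.1 * rng.normal()      # measurement z_k
    theta_hat += a * h * (z - h @ theta_hat)     # noisy steepest-descent step

print(theta_hat)                                 # approaches theta_true
```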

  6. LMS in Closed-Loop Control • Suppose process is modeled according to autoregressive (AR) form: $x_{k+1} = \beta + \sum_{i=1}^{p}\alpha_i\,x_{k+1-i} + u_k + w_k$, where $x_k$ represents the state, $\beta$ and the $\alpha_i$ are unknown parameters, $u_k$ is the control, and $w_k$ is noise • Let target (“desired”) value for $x_k$ be $d_k$ • Optimal control law known (minimizes mean-square tracking error): $u_k = d_{k+1} - \beta - \sum_{i=1}^{p}\alpha_i\,x_{k+1-i}$ • Certainty equivalence principle justifies substitution of parameter estimates for unknown true parameters • LMS used to estimate $\beta$ and the $\alpha_i$ in closed-loop mode
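A sketch of the first-order case ($p = 1$ in the AR form above), with LMS estimating $(\beta, \alpha)$ in closed loop under certainty equivalence; the true parameter values, step size, and noise level are invented for illustration. Note that once the loop tracks well, the closed-loop data are only weakly exciting, so the individual parameters are identified slowly even though tracking is good:

```python
import numpy as np

# Hedged sketch of LMS inside a certainty-equivalence control loop for a
# first-order AR model x_{k+1} = beta + alpha*x_k + u_k + w_k. True values,
# step size, and noise level are illustrative assumptions, not from ISSO.
rng = np.random.default_rng(3)
beta, alpha = 0.5, 0.8            # true (unknown) parameters
est = np.zeros(2)                 # LMS estimates of (beta, alpha)
a, d = 0.01, 1.0                  # LMS step size; constant target d_k = d
x, errs = 0.0, []

for k in range(5000):
    # Certainty equivalence: plug current estimates into the control law
    # u_k = d_{k+1} - beta_hat - alpha_hat * x_k
    u = d - est[0] - est[1] * x
    x_next = beta + alpha * x + u + rng.normal(scale=0.1)
    errs.append((x_next - d) ** 2)

    # LMS step on the AR part: "measurement" x_{k+1} - u_k, design vector (1, x_k)
    h = np.array([1.0, x])
    est += a * h * ((x_next - u) - h @ est)
    x = x_next

print(est, np.mean(errs[-1000:]))  # x tracks d; (beta, alpha) identified slowly
```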

  7. LMS in Closed-Loop Control for First-Order AR Model

  8. Recursive Least Squares (RLS) • Alternative to LMS is RLS • Recall LMS is stochastic analogue of steepest descent (“first-order” method) • RLS is stochastic analogue of Newton-Raphson (“second-order” method) ⇒ faster convergence than LMS in practice • RLS algorithm (2 recursions): $P_{k+1} = P_k - \dfrac{P_k h_{k+1} h_{k+1}^T P_k}{1 + h_{k+1}^T P_k h_{k+1}}$ and $\hat{\theta}_{k+1} = \hat{\theta}_k + P_{k+1} h_{k+1}\left(z_{k+1} - h_{k+1}^T\hat{\theta}_k\right)$ • Need $P_0$ and $\hat{\theta}_0$ to initialize RLS recursions
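A minimal sketch of the two coupled RLS recursions, with the common initialization $P_0 = cI$ for large $c$ (an illustrative choice reflecting little confidence in $\hat{\theta}_0$):

```python
import numpy as np

# RLS sketch (two recursions): update P first, then use gain P_k h_k.
# Initialization P_0 = 1000*I and the synthetic data are illustrative.
rng = np.random.default_rng(4)
p, n = 3, 500
theta_true = np.array([1.0, -2.0, 0.5])
theta_hat = np.zeros(p)                          # theta_0
P = 1e3 * np.eye(p)                              # P_0: large => diffuse prior

for _ in range(n):
    h = rng.normal(size=p)
    z = h @ theta_true + 0.1 * rng.normal()
    Ph = P @ h
    P -= np.outer(Ph, Ph) / (1.0 + h @ Ph)       # recursion for P_k
    theta_hat += P @ h * (z - h @ theta_hat)     # gain P_k h_k times innovation

print(theta_hat)                                 # near-batch accuracy
```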

  9. Recursive Methods for Estimation of Time-Varying Parameters • It is common to have the underlying true $\theta$ evolve in time (e.g., target tracking, adaptive control, sequential experimental design, etc.) • Time-varying parameters implies $\theta$ replaced with $\theta_k$ • Consider modified linear model $z_k = h_k^T\theta_k + v_k$ • Prototype recursive form for estimating $\theta_k$ is $\hat{\theta}_{k+1} = A_k\hat{\theta}_k + \Gamma_k\left(z_{k+1} - h_{k+1}^T A_k\hat{\theta}_k\right)$, where choice of $A_k$ and $\Gamma_k$ depends on specific algorithm
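One concrete instance of tracking a time-varying $\theta_k$: the sketch below uses the constant-gain LMS choice ($A_k = I$, gain $a\,h_{k+1}$) against an assumed random-walk drift for $\theta_k$; both the drift model and all numerical values are illustrative. The constant gain is what gives the recursion its tracking ability:

```python
import numpy as np

# Tracking a drifting theta_k with a constant-gain recursion: here A_k = I and
# the gain is the LMS choice a * h_k. The random-walk drift model for theta_k
# and all numerical values are illustrative assumptions.
rng = np.random.default_rng(5)
p, n, a = 2, 3000, 0.05
theta_k = np.array([1.0, -1.0])                  # true time-varying parameters
theta_hat = np.zeros(p)

for _ in range(n):
    theta_k = theta_k + 0.001 * rng.normal(size=p)   # slow random-walk drift
    h = rng.normal(size=p)
    z = h @ theta_k + 0.1 * rng.normal()
    theta_hat += a * h * (z - h @ theta_hat)         # constant gain => can track

print(theta_k, theta_hat)                            # estimate follows the drift
```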

  10. Three Important Algorithms for Estimation of Time-Varying Parameters • LMS • Goal is to minimize instantaneous squared-error criterion across iterations • General form for evolution of true parameters $\theta_k$ • RLS • Goal is to minimize weighted sum of squared errors • Sum criterion creates “inertia” not present in LMS • General form for evolution of $\theta_k$ • Kalman filter • Minimizes instantaneous squared-error criterion • Requires precise statistical description of evolution of $\theta_k$ via state-space model • Details for above algorithms in terms of prototype algorithm (previous slide) are in Section 3.3 of ISSO
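For the Kalman filter entry above, a hedged sketch: it assumes a state-space model $\theta_{k+1} = \Phi\theta_k + w_k$, $z_k = h_k^T\theta_k + v_k$ with known $\Phi$, $Q = \mathrm{cov}(w_k)$, and $R = \mathrm{var}(v_k)$ (all illustrative values here), which is exactly the “precise statistical description” the slide refers to:

```python
import numpy as np

# Kalman-filter sketch for time-varying parameter estimation, assuming the
# state-space model theta_{k+1} = Phi theta_k + w_k, z_k = h_k^T theta_k + v_k.
# Phi, Q, and R below are illustrative; the filter requires them to be known.
rng = np.random.default_rng(6)
p = 2
Phi = np.eye(p)                    # random-walk evolution of theta_k
Q = 1e-5 * np.eye(p)               # process-noise covariance (drift strength)
R = 0.01                           # measurement-noise variance (scalar z_k)

theta_k = np.array([1.0, -1.0])
theta_hat, P = np.zeros(p), np.eye(p)

for _ in range(2000):
    theta_k = Phi @ theta_k + rng.multivariate_normal(np.zeros(p), Q)
    h = rng.normal(size=p)
    z = h @ theta_k + np.sqrt(R) * rng.normal()

    # Predict with the state model, then update on the scalar measurement
    theta_pred, P_pred = Phi @ theta_hat, Phi @ P @ Phi.T + Q
    K = P_pred @ h / (h @ P_pred @ h + R)            # Kalman gain
    theta_hat = theta_pred + K * (z - h @ theta_pred)
    P = P_pred - np.outer(K, h) @ P_pred

print(theta_k, theta_hat)                            # filter tracks theta_k
```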

  11. Case Study: LMS and RLS with Oboe Reed Data “…an ill wind that nobody blows good.” —Comedian Danny Kaye, speaking of the oboe in “The Secret Life of Walter Mitty” (1947) • Section 3.4 of ISSO reports on linear and curvilinear models for predicting quality of oboe reeds • Linear model has 7 parameters; curvilinear has 4 parameters • This study compares LMS and RLS with batch least-squares estimates • 160 data points for fitting models (reeddata-fit); 80 (independent) data points for testing models (reeddata-test) • reeddata-fit and reeddata-test data sets available from ISSO Web site

  12. Oboe with Attached Reed

  13. Comparison of Fitting Results for reeddata-fit and reeddata-test • To test similarity of fit and test data sets, performed model fitting using test data set • This comparison is for checking consistency of the two data sets, not for checking accuracy of LMS or RLS estimates • Compared model fits for parameters in: • Basic linear model (eqn. (3.25) in ISSO) (p = 7) • Curvilinear model (eqn. (3.26) in ISSO) (p = 4) • Results on next slide for basic linear model

  14. Comparison of Batch Parameter Estimates for Basic Linear Model. Approximate 95% Confidence Intervals Shown in [·, ·]

  15. Comparison of Batch and RLS with Oboe Reed Data • Compared batch and RLS using 160 data points in reeddata-fit and 80 data points for testing models in reeddata-test • Two slides to follow present results • First slide compares parameter estimates in pure linear model • Second slide compares prediction errors for linear and curvilinear models

  16. Batch and RLS Parameter Estimates for Basic Linear Model (Data from reeddata-fit)

  17. Mean and Median Absolute Prediction Errors for the Linear and Curvilinear Models (Model fits from reeddata-fit; Prediction Errors from reeddata-test)
