
Recursive Least Squares (RLS)

In this case we assume that e is white noise, i.e., the e's are independently and identically distributed normal random variables with zero mean and a constant variance σ². Thus, the model equation can be rewritten as [Pg.219]
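The equation itself was rendered as an image in the source and is not reproduced in this extract. A standard linear-in-the-parameters form consistent with the surrounding text, with ψₙ denoting the regressor vector at time tₙ (the symbol ψ is an assumption here, not taken from the book), would be

\[
y_n = \boldsymbol{\psi}_n^{T}\,\boldsymbol{\theta} + e_n, \qquad e_n \sim N(0,\sigma^2).
\]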

Whenever a new measurement, yₙ, becomes available, the parameter vector estimate is updated from θ̂ₙ₋₁ to θ̂ₙ by the formula [Pg.219]
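The update formula was likewise lost as an image. The standard RLS recursion it refers to, written with a gain vector Kₙ and a forgetting factor λ (both symbols are assumptions; λ = 1 recovers ordinary RLS), is

\[
\hat{\boldsymbol{\theta}}_n = \hat{\boldsymbol{\theta}}_{n-1} + \mathbf{K}_n\bigl(y_n - \boldsymbol{\psi}_n^{T}\hat{\boldsymbol{\theta}}_{n-1}\bigr),
\qquad
\mathbf{K}_n = \frac{\mathbf{P}_{n-1}\,\boldsymbol{\psi}_n}{\lambda + \boldsymbol{\psi}_n^{T}\,\mathbf{P}_{n-1}\,\boldsymbol{\psi}_n}.
\]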

The new estimate of the normalized parameter covariance matrix, Pₙ, is obtained from [Pg.220]
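Again the original equation is missing; the standard covariance recursion matching the notation assumed above is

\[
\mathbf{P}_n = \frac{1}{\lambda}\left(\mathbf{P}_{n-1} - \mathbf{K}_n\,\boldsymbol{\psi}_n^{T}\,\mathbf{P}_{n-1}\right),
\]

a rank-one downdate that requires no matrix inversion.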

The updated quantities θ̂ₙ and Pₙ represent our best estimates of the unknown parameters and their covariance matrix with information up to and including time tₙ. Matrix Pₙ represents an estimate of the parameter covariance matrix since [Pg.220]
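The relation that completes this sentence was also an image. The usual normalization it refers to, which explains the term "normalized parameter covariance matrix," is

\[
\mathrm{COV}(\hat{\boldsymbol{\theta}}_n) = \sigma^2\,\mathbf{P}_n,
\]

i.e., Pₙ is the covariance matrix scaled by the (generally unknown) noise variance σ².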

The above equations are developed using the theory of least squares and making use of the matrix inversion lemma [Pg.220]
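For reference, the rank-one form of the matrix inversion lemma (the Sherman–Morrison identity) that turns the batch least-squares normal equations into the recursion above is

\[
\left(\mathbf{A} + \mathbf{b}\,\mathbf{b}^{T}\right)^{-1}
= \mathbf{A}^{-1} - \frac{\mathbf{A}^{-1}\mathbf{b}\,\mathbf{b}^{T}\mathbf{A}^{-1}}{1 + \mathbf{b}^{T}\mathbf{A}^{-1}\mathbf{b}}.
\]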


Equations 13.14 to 13.16 constitute the well-known recursive least squares (RLS) algorithm. It is the simplest and most widely used recursive estimation method. It is also computationally very efficient, as it does not require a matrix inversion at each sampling interval. Several researchers have introduced a variable forgetting factor, which allows a more precise estimation of θ when the process is not "sensed" to change. [Pg.221]
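The following is a minimal NumPy sketch of the recursion, under the assumptions stated above (regressor ψₙ, forgetting factor λ, the standard gain and covariance updates). It is not the book's own code, and the function name and initialization are illustrative choices.

```python
import numpy as np

def rls_update(theta, P, psi, y, lam=1.0):
    """One recursive least squares update (no matrix inversion needed).

    theta : current parameter estimate, shape (p,)
    P     : current normalized covariance matrix, shape (p, p)
    psi   : regressor vector for the new measurement, shape (p,)
    y     : new scalar measurement
    lam   : forgetting factor, 0 < lam <= 1 (lam = 1 gives ordinary RLS)
    """
    Ppsi = P @ psi
    K = Ppsi / (lam + psi @ Ppsi)            # gain vector K_n
    theta = theta + K * (y - psi @ theta)    # parameter update
    P = (P - np.outer(K, Ppsi)) / lam        # rank-one covariance update
    return theta, P

# Usage on synthetic data (theta_true and the noise level are made up):
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])
theta, P = np.zeros(2), 1e3 * np.eye(2)      # large initial P = vague prior
for _ in range(200):
    psi = rng.normal(size=2)
    y = psi @ theta_true + 0.1 * rng.normal()
    theta, P = rls_update(theta, P, psi, y)
print(theta)                                  # approaches theta_true
```

Initializing P with a large diagonal expresses low confidence in the initial guess, so early measurements move the estimate strongly; as P shrinks, the gain Kₙ shrinks with it.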

The above equation cannot be used directly for RLS estimation. Instead of the true error terms, eₙ, we must use the estimated values from Equation 13.35. Therefore, the recursive generalized least squares (RGLS) algorithm can be implemented as a two-step estimation procedure ... [Pg.224]
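Equation 13.35 is not reproduced in this extract, so the following is only a loose sketch of the two-step idea: run one RLS recursion for the process parameters, form the residuals as estimates of the noise terms, and run a second RLS recursion to fit an assumed AR residual model. The AR structure, the order q, and all symbols below are assumptions, and the pre-filtering step of full RGLS is omitted; `rls_update` is the function from the previous sketch.

```python
import numpy as np  # assumes rls_update from the sketch above is in scope

p, q = 2, 2                                   # process / noise model orders (assumed)
rng = np.random.default_rng(1)
theta, P = np.zeros(p), 1e3 * np.eye(p)       # step-1 estimator: process parameters
alpha, Q = np.zeros(q), 1e3 * np.eye(q)       # step-2 estimator: AR noise parameters
e_hist = np.zeros(q)                          # most recent residual estimates
theta_true = np.array([1.5, -0.5])
for _ in range(500):
    psi = rng.normal(size=p)
    y = psi @ theta_true + 0.1 * rng.normal()
    theta, P = rls_update(theta, P, psi, y)   # step 1: process parameters
    e_hat = y - psi @ theta                   # residual = estimated noise term
    alpha, Q = rls_update(alpha, Q, e_hist, e_hat)  # step 2: noise model
    e_hist = np.roll(e_hist, 1)
    e_hist[0] = e_hat
```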



