Big Chemical Encyclopedia


GENERALIZED LEAST SQUARES ALGORITHM

Applying Equation (5.36), the trajectory of the true step response for m = 0, 1, ..., N - 1 lies inside the envelope given by p × S(m) with probability P(p). This envelope provides the confidence bound on the estimated step response model. [Pg.119]

In this section, we present an iterative algorithm in the spirit of the generalized least squares approach (Goodwin and Payne, 1977) for simultaneous estimation of an FSF process model and an autoregressive (AR) noise model. The unique features of our algorithm are the application of the PRESS statistic introduced in Chapter 3 for both process and noise model structure selection, to ensure whiteness of the residuals, and the use of covariance matrix information to derive statistical confidence bounds for the final process step response estimates. An important assumption in this algorithm is that the noise term can be described by an AR time series model given by [Pg.119]

The algorithm will be presented as a step-by-step procedure for identification of a MISO system. The user must first provide estimates of the times to steady state for the individual subsystems, given by N_i, i = 1, 2, ..., p, the maximum values to be considered for the reduced model orders n_i, i = 1, 2, ..., p, and the maximum noise model order m. [Pg.119]

Step 3: Determine least squares estimates for the FSF process model parameters, using the PRESS statistic to select the model order for each subsystem. [Pg.119]

Step 5: From the residuals, estimate the noise model F(z) using the least squares method, along with the PRESS statistic to determine the value for m. [Pg.119]
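The model-estimation steps above can be sketched numerically. The following is a minimal generalized least squares iteration with an AR noise model, assuming ordinary regressors in place of the FSF basis and omitting the PRESS-based order selection; fit_ar, ar_filter, and iterative_gls are illustrative names, not the authors' implementation.

```python
import numpy as np

def fit_ar(e, m):
    # Least-squares fit of an AR(m) model: e[k] = a1*e[k-1] + ... + am*e[k-m] + w[k]
    E = np.column_stack([e[m - 1 - j : len(e) - 1 - j] for j in range(m)])
    a, *_ = np.linalg.lstsq(E, e[m:], rcond=None)
    return a

def ar_filter(x, a):
    # Prewhiten with A(q) = 1 - a1*q^-1 - ... - am*q^-m (drops the first m samples)
    m = len(a)
    xf = np.array(x[m:], dtype=float)
    for j in range(m):
        xf = xf - a[j] * x[m - 1 - j : len(x) - 1 - j]
    return xf

def iterative_gls(X, y, m, n_iter=5):
    # Alternate between process-parameter and AR noise-model estimation
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    a = np.zeros(m)
    for _ in range(n_iter):
        a = fit_ar(y - X @ b, m)                  # noise model from residuals
        Xf, yf = ar_filter(X, a), ar_filter(y, a)  # filter data with noise model
        b, *_ = np.linalg.lstsq(Xf, yf, rcond=None)
    return b, a
```

On data with autocorrelated noise, the filtered refit recovers both the process parameters and the AR coefficients, which is the essence of Steps 3 and 5.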


The key point of the method consists in the calculation of the continuum states. This is obtained by means of a general least-squares algorithm to compute eigenvectors at any prefixed energy in the continuum spectrum. If we assume that a proper basis set is chosen, the overlap and Hamiltonian matrices are evaluated ... [Pg.308]

The field points must then be fitted to predict the activity. There are generally far more field points than known compound activities to be fitted. The least-squares algorithms used in QSAR studies do not function for such an underdetermined system. A partial least squares (PLS) algorithm is used for this type of fitting. This method starts with matrices of field data and activity data. These matrices are then used to derive two new matrices containing a description of the system and the residual noise in the data. Earlier studies used a similar technique, called principal component analysis (PCA). PLS is generally considered to be superior. [Pg.248]
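The PLS fit for such underdetermined field-point matrices can be sketched with a bare-bones NIPALS-style PLS1 routine; this is a simplified single-response sketch, not the algorithm of any particular QSAR package.

```python
import numpy as np

def pls1(X, y, n_comp):
    # NIPALS-style PLS1: extract latent components maximizing covariance with y
    Xk, yk = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)           # weight vector
        t = Xk @ w                       # score vector
        tt = t @ t
        p = Xk.T @ t / tt                # X loading
        qk = yk @ t / tt                 # y loading
        Xk = Xk - np.outer(t, p)         # deflate X
        yk = yk - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    # Regression coefficients mapped back to the original X space
    return W @ np.linalg.solve(P.T @ W, q)
```

Unlike ordinary least squares, this runs without complaint when there are far more columns (field points) than rows (compounds).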

The above equation cannot be used directly for RLS estimation. Instead of the true error terms, we must use the estimated values from Equation 13.35. Therefore, the recursive generalized least squares (RGLS) algorithm can be implemented as a two-step estimation procedure ... [Pg.224]
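The recursive update underlying such a scheme can be illustrated with a standard recursive least squares step; rls_update is a hypothetical helper, and the RGLS procedure of the text would run two such recursions, one for the process parameters on filtered data and one for the noise parameters on the estimated error terms.

```python
import numpy as np

def rls_update(theta, P, phi, y):
    # One recursive least-squares step for the model y = phi @ theta + e
    Pphi = P @ phi
    k = Pphi / (1.0 + phi @ Pphi)          # gain vector
    theta = theta + k * (y - phi @ theta)  # parameter update
    P = P - np.outer(k, Pphi)              # covariance update
    return theta, P
```

Initializing with theta = 0 and a large P (diffuse prior) and feeding samples one at a time converges to the batch least squares solution.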

Obviously, the linear least squares algorithm described in section 5.13.1 is not directly applicable to find the best solution of Eq. 6.8. In some instances, it may be possible to convert each equation in (6.8) into a linear form by appropriate substitutions of variables and thus reduce the problem to a linear case. In general, the least squares solution of Eq. 6.8 is obtained by expanding the left-hand side of every equation using a Taylor series and truncating the expansion after the first partial derivatives of the respective functions. Hence, Eq. 6.8 may be converted into ... [Pg.508]
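The Taylor-linearization strategy just described is the Gauss-Newton iteration: linearize the residuals about the current parameters, solve the resulting linear least squares problem for a correction, and repeat. A compact sketch, using a hypothetical exponential model rather than the functions of Eq. 6.8:

```python
import numpy as np

def gauss_newton(f, jac, p0, x, y, n_iter=20):
    # Linearize residuals via first-order Taylor expansion, solve linear LS for the step
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - f(x, p)                          # current residuals
        J = jac(x, p)                            # Jacobian of the model wrt parameters
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + dp                               # Gauss-Newton correction
    return p

# Illustrative model y = a * exp(b * x) and its Jacobian
f = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])
```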

Both equations are useful for obtaining well-defined D values in each experiment based on a fitting method. Although we understand that the form in Eq. (33.9) is more general, the numerical data from FCS measurement are not sufficient to obtain the full lineshape of D(t) in Eq. (33.9). Seki et al. obtained an analytical solution of autocorrelation curves for D(t) in a step function [39]. They proved that the solution lineshape is different from that of normal diffusion with a nonlinear least squares algorithm if the deviation from Eq. (33.17) is too small. Even in this case of moderate anomalous diffusion, the observed value of D changes sensitively, depending on f or I. [Pg.381]

The conditional first-order method of Lindstrom and Bates uses a first-order Taylor series expansion about conditional estimates of interindividual random effects (62). Estimation involves an iterative generalized least-squares type algorithm. This estimation method is available in S-Plus (Insightful, Seattle, WA) as the function NLME (63). [Pg.277]

However, for this model the derivative is not continuous at x = x0, and nonlinear least squares algorithms that do not require derivatives, such as the Nelder-Mead algorithm (which is discussed later in the chapter), are generally required to estimate the model parameters (Bartholomew, 2000). [Pg.93]
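A derivative-free simplex search of the Nelder-Mead type can be written in a few lines; this is a simplified textbook variant (reflection, expansion, inside contraction, shrink), not a production optimizer, and the test objective is an illustrative non-smooth function.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, n_iter=200):
    # Minimal Nelder-Mead simplex: no gradients needed, only function values
    n = len(x0)
    simplex = [np.array(x0, dtype=float)]
    for i in range(n):
        v = np.array(x0, dtype=float)
        v[i] += step
        simplex.append(v)
    for _ in range(n_iter):
        simplex.sort(key=f)                           # order vertices by objective
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - worst)            # reflection
        if f(xr) < f(best):
            xe = centroid + 2.0 * (centroid - worst)  # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = centroid + 0.5 * (worst - centroid)  # inside contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                     # shrink toward the best vertex
                simplex = [best + 0.5 * (v - best) for v in simplex]
    return min(simplex, key=f)
```

Because only function values are used, a kink such as the one at x = x0 in the model above poses no difficulty.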

Within NONMEM, a generalized least-squares-like (GLS-like) estimation algorithm can be developed by iterating separate, sequential models. In the first step, the model is fit using one of the estimation algorithms (FO-approximation, FOCE, etc.). The individual predicted values are saved in a data set that is formatted the same as the input data set, i.e., the output data set contains the original data set plus one more variable: the individual predicted values. The second step then models the residual error based on the value of the individual predicted values given in the previous step. So, for example, suppose the residual error was modeled as a proportional error model... [Pg.230]
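That two-step scheme can be sketched as iteratively reweighted fitting, where the weights for a proportional error model come from the previous step's individual predictions; the linear model and the gls_like helper here are illustrative stand-ins for the NONMEM workflow, not NONMEM code.

```python
import numpy as np

def gls_like(X, y, n_iter=3):
    # Step 1: unweighted fit; then reweight by predictions (proportional error model)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(n_iter):
        ipred = X @ beta                             # individual predictions from previous step
        w = 1.0 / np.maximum(ipred ** 2, 1e-12)      # proportional-error weights ~ 1/ipred^2
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))
    return beta
```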

The model estimates were obtained using FOCE-I. Of interest was how these values would compare using a different estimation algorithm. Hence, the model was estimated using FO-approximation, FOCE, Laplacian, and generalized least-squares (GLS). The results are shown in Table 9.18. FOCE and Laplacian produced essentially the same results. Similarly, FOCE-I and GLS produced essentially the same results. Conditional methods tend to produce different results from... [Pg.334]

Characterization of the variogram from actual observations permits the estimation of concentrations at unsampled points on the site by application of generalized least-squares type statistical regression algorithms. This type of estimation has come to be referred to as "kriging" (2). Thus, once the similarity of observations with distance has been described in terms of the variogram, contamination across the site can be estimated and... [Pg.247]
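Ordinary kriging reduces to solving one small linear system per prediction point, with a Lagrange multiplier enforcing that the weights sum to one. A one-dimensional sketch with an assumed exponential variogram (the variogram model and its parameters are illustrative, not from the cited study):

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, variogram):
    # Build and solve the ordinary-kriging system for one prediction location x0
    n = len(xs)
    G = np.ones((n + 1, n + 1))          # last row/column enforce sum(weights) = 1
    G[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            G[i, j] = variogram(abs(xs[i] - xs[j]))
    g = np.ones(n + 1)
    g[:n] = [variogram(abs(x - x0)) for x in xs]
    w = np.linalg.solve(G, g)            # kriging weights plus Lagrange multiplier
    return w[:n] @ zs                    # kriged estimate at x0

# Assumed exponential variogram: sill 1.0, range parameter 1.5
variogram = lambda h: 1.0 - np.exp(-h / 1.5)
```

With a zero nugget, kriging honors the data exactly at sampled locations and interpolates smoothly between them.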

FORTRAN source code in which the maximum likelihood is evaluated with one of two different first-order expansions (FO or FOCE), or with a second-order expansion about the conditional estimates of the random effects (Laplacian); an S-PLUS algorithm utilizing a generalized least-squares (GLS) procedure and a Taylor series expansion about the conditional estimates of the interindividual random effects [Pg.329]

Most of the applications of fuzzy cluster analysis in chemistry apply the fuzzy c-means algorithm. It relies on the general least-squares error functional... [Pg.1097]
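The alternating minimization of that error functional can be sketched as follows; this is a plain-vanilla fuzzy c-means with fuzzifier m = 2, random initial memberships, and a fixed iteration count rather than a convergence test.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    # Minimize sum_ik u_ik^m * ||x_k - v_i||^2 by alternating centre/membership updates
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                   # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)     # weighted cluster centres
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)                        # standard membership update
    return U, V
```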

The parameters of the generalized linear models can be estimated by two procedures. In the first procedure, the parameters are estimated by maximizing the likelihood function through an iterative least squares algorithm, as ... [Pg.984]
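For a logistic GLM, that iterative least squares procedure is the familiar iteratively reweighted least squares (IRLS) scheme; a minimal sketch assuming the canonical logit link:

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    # IRLS for a logistic GLM: each pass solves a weighted linear least squares problem
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))              # mean response via logit link
        w = mu * (1.0 - mu)                          # iteration weights
        z = eta + (y - mu) / np.maximum(w, 1e-10)    # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta
```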

In engineering we often encounter conditionally linear systems. These were defined in Chapter 2 and it was indicated that special algorithms can be used which exploit their conditional linearity (see Bates and Watts, 1988). In general, we need to provide initial guesses only for the nonlinear parameters since the conditionally linear parameters can be obtained through linear least squares estimation. [Pg.138]
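The point about conditionally linear parameters can be illustrated with a hypothetical model y = a*exp(-b*x) + c, which is linear in (a, c): only the nonlinear parameter b needs a search (here a simple grid, in practice a proper 1-D optimizer), since the linear pair follows from ordinary least squares at each candidate b.

```python
import numpy as np

def separable_fit(x, y, b_grid):
    # For each candidate nonlinear parameter b, the conditionally linear
    # parameters (a, c) follow from linear least squares; keep the best b.
    best = None
    for b in b_grid:
        A = np.column_stack([np.exp(-b * x), np.ones_like(x)])
        p, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = ((y - A @ p) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, b, p)
    sse, b, (a, c) = best
    return a, b, c
```

Note that only b required an initial range; no guesses were needed for a or c.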

Albarede, F. & Provost, A. (1977). Petrological and geochemical mass balance: an algorithm for least-squares fitting and general error analysis. Comp. Sci., 3, 309-26. [Pg.526]

Chapter 4 is an introduction to linear and non-linear least-squares fitting. The theory is developed and exemplified in several stages, each demonstrated with typical applications. The chapter culminates with the development of a very general Newton-Gauss-Levenberg/Marquardt algorithm. [Pg.336]
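A Newton-Gauss step damped in the Levenberg/Marquardt manner can be sketched as follows; the exponential test model and the damping-update constants (0.3 and 10) are illustrative choices, not those of the chapter.

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, x, y, n_iter=100, lam=1e-3):
    # Gauss-Newton step damped by lam*I; lam shrinks on success, grows on failure
    p = np.asarray(p0, dtype=float)
    sse = ((y - f(x, p)) ** 2).sum()
    for _ in range(n_iter):
        r = y - f(x, p)
        J = jac(x, p)
        A = J.T @ J + lam * np.eye(len(p))       # damped normal equations
        dp = np.linalg.solve(A, J.T @ r)
        p_new = p + dp
        sse_new = ((y - f(x, p_new)) ** 2).sum()
        if sse_new < sse:                        # accept step, reduce damping
            p, sse, lam = p_new, sse_new, lam * 0.3
        else:                                    # reject step, increase damping
            lam *= 10.0
    return p

# Illustrative model y = a * exp(b * x) and its Jacobian
f = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])
```

With large lam the update approaches a small gradient-descent step; with small lam it approaches the pure Newton-Gauss step, which is what makes the method robust far from the solution.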

